4.1. conx package

4.1.1. Submodules

4.1.2. conx.network module

The network module contains the code for the Network class.

class conx.network.FunctionCallback(network, on_method, function)[source]

Bases: keras.callbacks.Callback

Calls function when the event named by on_method fires; on_method is one of ‘on_batch_begin’, ‘on_batch_end’, ‘on_epoch_begin’, ‘on_epoch_end’, ‘on_train_begin’, or ‘on_train_end’.

on_batch_begin(batch, logs=None)[source]
on_batch_end(batch, logs=None)[source]
on_epoch_begin(epoch, logs=None)[source]
on_epoch_end(epoch, logs=None)[source]
on_train_begin(logs=None)[source]
on_train_end(logs=None)[source]
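FunctionCallback instances are normally constructed for you by Network.train via its callbacks keyword (see Network.train below). A minimal sketch of that usage, assuming the handler receives the network, the epoch number, and the Keras logs dict (the exact extra arguments depend on the event name):

def log_epoch(network, epoch, logs=None):
    # Hypothetical handler: print the loss at the end of each epoch.
    print("epoch", epoch, "loss:", (logs or {}).get("loss"))

net.train(5, verbose=0, plot=False,
          callbacks=[("on_epoch_end", log_epoch)])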
class conx.network.Network(name: str, *sizes: int, load_config=True, debug=False, build_propagate_from_models=True, **config: typing.Any)[source]

Bases: object

The main class for the conx neural network package.

Parameters:
  • name – Required. The name of the network. Should not contain special HTML characters.
  • sizes – Optional numbers. Defines the sizes of layers of a sequential network. These will be created, added, and connected automatically.
  • config – Configuration overrides for the network.

Note

To create a complete, operating network, you must do the following:

  1. create a network
  2. add layers
  3. connect the layers
  4. compile the network
  5. set the dataset
  6. train the network

See also Layer, Network.add, Network.connect, and Network.compile.

Examples

>>> net = Network("XOR1", 2, 5, 2)
>>> len(net.layers)
3
>>> net = Network("XOR2")
>>> net.add(Layer("input", 2))
'input'
>>> net.add(Layer("hidden", 5))
'hidden'
>>> net.add(Layer("output", 2))
'output'
>>> net.connect()
>>> len(net.layers)
3
>>> net = Network("XOR3")
>>> net.add(Layer("input", 2))
'input'
>>> net.add(Layer("hidden", 5))
'hidden'
>>> net.add(Layer("output", 2))
'output'
>>> net.connect("input", "hidden")
>>> net.connect("hidden", "output")
>>> len(net.layers)
3
>>> net = Network("MNIST")
>>> net.name
'MNIST'
>>> len(net.layers)
0
>>> net = Network("MNIST", 10, 5, 1)
>>> len(net.layers)
3
>>> net = Network("MNIST", 10, 5, 5, 1, activation="sigmoid")
>>> net.config["activation"]
'sigmoid'
>>> net["output"].activation == "sigmoid"
True
>>> net["hidden1"].activation == "sigmoid"
True
>>> net["hidden2"].activation == "sigmoid"
True
>>> net["input"].activation is None
True
>>> net.layers[0].name == "input"
True
ERROR_FUNCTIONS = ['binary_crossentropy', 'categorical_crossentropy', 'categorical_hinge', 'cosine', 'cosine_proximity', 'hinge', 'kld', 'kullback_leibler_divergence', 'logcosh', 'mae', 'mape', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_error', 'mean_squared_logarithmic_error', 'mse', 'msle', 'poisson', 'sparse_categorical_crossentropy', 'squared_hinge']
OPTIMIZERS = ('sgd', 'rmsprop', 'adagrad', 'adadelta', 'adam', 'adamax', 'nadam', 'tfoptimizer')
acc(targets, outputs)[source]
add(*layers: conx.layers.Layer) → None[source]

Add one or more layers to the network. Order is not important, unless calling Network.connect without any arguments.

Parameters: layers – One or more Layer instances.
Returns: layer_name (str) - name of the last layer added

Examples

>>> net = Network("XOR2")
>>> net.add(Layer("input", 2))
'input'
>>> len(net.layers)
1
>>> net = Network("XOR3")
>>> net.add(Layer("input", 2))
'input'
>>> net.add(Layer("hidden", 5))
'hidden'
>>> net.add(Layer("hidden2", 5),
...         Layer("hidden3", 5),
...         Layer("hidden4", 5),
...         Layer("hidden5", 5))
'hidden5'
>>> net.add(Layer("output", 2))
'output'
>>> len(net.layers)
7

Note

See Network for more information.

build_struct(inputs, class_id, config)[source]
compile(**kwargs)[source]

Check and compile the network.

You must provide error/loss and optimizer keywords.

Possible error/loss functions are:
  • ‘mse’ - mean_squared_error
  • ‘mae’ - mean_absolute_error
  • ‘mape’ - mean_absolute_percentage_error
  • ‘msle’ - mean_squared_logarithmic_error
  • ‘kld’ - kullback_leibler_divergence
  • ‘cosine’ - cosine_proximity
Possible optimizers are:
  • ‘sgd’
  • ‘rmsprop’
  • ‘adagrad’
  • ‘adadelta’
  • ‘adam’
  • ‘adamax’
  • ‘nadam’

See the Keras Model.compile method (https://keras.io/) for more details.
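For example, in the style of the other doctests in this module:

>>> net = Network("Compile Example", 2, 2, 1, activation="sigmoid")
>>> net.compile(error="mse", optimizer="adam")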

compute_correct(outputs, targets, tolerance=None)[source]

outputs and targets are both np.arrays. Returns a list of booleans, one per pattern: [True, …].

connect(from_layer_name: str = None, to_layer_name: str = None)[source]

Connect two layers together if called with arguments. If called with no arguments, then it will make a sequential run through the layers in order added.

Parameters:
  • from_layer_name – Name of the layer where the connection begins.
  • to_layer_name – Name of the layer where the connection ends.

If both from_layer_name and to_layer_name are None, then all of the layers are connected sequentially in the order added.

Examples

>>> net = Network("XOR2")
>>> net.add(Layer("input", 2))
'input'
>>> net.add(Layer("hidden", 5))
'hidden'
>>> net.add(Layer("output", 2))
'output'
>>> net.connect()
>>> [layer.name for layer in net["input"].outgoing_connections]
['hidden']
dashboard(width='95%', height='550px', play_rate=0.5)[source]

Build the dashboard for Jupyter widgets. Requires running in a notebook/jupyterlab.

delete(dir=None)[source]

Delete network save folder.

depth()[source]

Find the depth of the network graph of connections.

describe_connection_to(layer1, layer2)[source]

Returns a textual description of the weights for the SVG tooltip.

display_component(vector, component, class_id=None, **opts)[source]

vector is a list, one each per output layer. component is “errors” or “targets”

evaluate(batch_size=32)[source]

Test the network on the train and test data, returning a dict of results.

Example

>>> net = Network("Evaluate", 2, 2, 1, activation="sigmoid")
>>> net.compile(error='mean_squared_error', optimizer="adam")
>>> ds = [[[0, 0], [0]],
...       [[0, 1], [1]],
...       [[1, 0], [1]],
...       [[1, 1], [0]]]
>>> net.dataset.load(ds)
>>> net.evaluate()           
{'loss': ..., 'acc': ...}
from_array(array: list)[source]

Load the weights from a list.

Parameters:array – a sequence (e.g., list, np.array) of numbers

Example

>>> from conx import Network
>>> net = Network("Deep", 3, 4, 5, 2, 3, 4, 5)
>>> net.compile(optimizer="adam", error="mse")
>>> net.from_array([0] * 103)
>>> array = net.to_array()
>>> len(array)
103
get_metric(metric)[source]

Returns the metric data from the network’s history.

>>> net = Network("Test", 2, 2, 1)
>>> net.get_metric("loss")
[]
get_metrics()[source]

Returns a list of the metrics available in the Network’s history.

get_weights(layer_name=None)[source]

Get the weights from a layer, or the entire model.

Examples

>>> net = Network("Weight Test", 2, 2, 5)
>>> net.compile(error="mse", optimizer="adam")
>>> len(net.get_weights("input"))
0
>>> len(net.get_weights("hidden"))
2
>>> shape(net.get_weights("hidden")[0])  ## weights
(2, 2)
>>> shape(net.get_weights("hidden")[1])  ## biases
(2,)
>>> len(net.get_weights("output"))
2
>>> shape(net.get_weights("output")[0])  ## weights
(2, 5)
>>> shape(net.get_weights("output")[1])  ## biases
(5,)
>>> net = Network("Weight Get Test", 2, 2, 1, activation="sigmoid")
>>> net.compile(error="mse", optimizer="sgd")
>>> len(net.get_weights())
4

See also

  • Network.to_array
  • Network.from_array
  • Network.get_weights_as_image
get_weights_as_image(layer_name, colormap=None)[source]

Get the weights from the model.

>>> net = Network("Weight as Image Test", 2, 2, 5)
>>> net.compile(error="mse", optimizer="adam")
>>> net.get_weights_as_image("hidden") 
<PIL.Image.Image image mode=RGBA size=2x2 at ...>
get_weights_from_history(index, epochs=None)[source]

Get the weights of the network from a particular point in the learning sequence.

wts = net.get_weights_from_history(0)   # get initial weights
wts = net.get_weights_from_history(-1)  # get last weights

See also

  • Network.set_weights_from_history
in_console(mpl_backend: str) → bool[source]

Return True if running in a console; False if connected to a notebook or other non-console system.

Possible values:
  • ‘TkAgg’ - console with Tk
  • ‘Qt5Agg’ - console with Qt
  • ‘MacOSX’ - mac console
  • ‘module://ipykernel.pylab.backend_inline’ - default for notebook and non-console, and when using %matplotlib inline
  • ‘NbAgg’ - notebook, using %matplotlib notebook

Here, None means not plotting, or just use text.

Note

If you are running ipython without a DISPLAY with the QT background, you may wish to:

export QT_QPA_PLATFORM='offscreen'

load(dir=None)[source]

Load the model and the weights/history into an existing conx network.

load_config(datadir=None, config_file=None)[source]
load_history(dir=None, filename=None)[source]

Load the history from a dir/file.

network.load_history()

load_model(dir=None, filename=None)[source]

Load a model from a dir/filename.

load_weights(dir=None, filename=None)[source]

Load the network weights and history from dir/files.

network.load_weights()

movie(function, movie_name=None, start=0, stop=None, step=1, loop=0, optimize=True, duration=100, embed=False, mp4=True)[source]

Make a movie from a playback function over the set of recorded weights.

function has signature: function(network, epoch) and should return a PIL.Image.

Example

>>> net = Network("Movie Test", 2, 2, 1, activation="sigmoid")
>>> net.compile(error='mse', optimizer="adam")
>>> ds = [[[0, 0], [0]],
...       [[0, 1], [1]],
...       [[1, 0], [1]],
...       [[1, 1], [0]]]
>>> net.dataset.load(ds)
>>> epochs, khistory = net.train(10, verbose=0, report_rate=1000, record=True, plot=False)
>>> img = net.movie(lambda net, epoch: net.propagate_to_image("hidden", [1, 1],
...                                                           resize=(500, 100)),
...                 "/tmp/movie.gif", mp4=False)
>>> img
<IPython.core.display.Image object>
pf(vector, **opts)[source]

Pretty-format a vector. Returns string.

Parameters:
  • vector (list) – The first parameter.
  • precision (int) – Number of decimal places to show for each value in vector.
Returns:

Returns the vector formatted as a short string.

Return type:

str

Examples

These examples demonstrate the net.pf formatting function:

>>> import conx
>>> net = Network("Test")
>>> net.pf([1.01])
'[1.01]'
>>> net.pf(range(10), precision=2)
'[0,1,2,3,4,5,6,7,8,9]'
>>> net.pf([0]*10000) 
'[0,0,0,...]'
pf_matrix(matrix, force=False, **opts)[source]

Pretty-format a matrix. If given a list, that implies multi-bank.

picture(inputs=None, dynamic=False, rotate=False, scale=None, show_errors=False, show_targets=False, format='html', class_id=None, **kwargs)[source]

Create an SVG of the network given some inputs (optional).

>>> net = Network("Picture", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.picture([.5, .5])
<IPython.core.display.HTML object>
>>> net.picture([.5, .5], dynamic=True)
<IPython.core.display.HTML object>
playback(function)[source]

Playback a function over the set of recorded weights.

function has signature: function(network, epoch) and returns
a displayable, or list of displayables.

Example

>>> net = Network("Playback Test", 2, 2, 1, activation="sigmoid")
>>> net.compile(error="mse", optimizer="sgd")
>>> net.dataset.load([
...     [[0, 0], [0]],
...     [[0, 1], [1]],
...     [[1, 0], [1]],
...     [[1, 1], [0]]])
>>> results = net.train(10, record=True, verbose=0, plot=False)
>>> def function(network, epoch):
...     return None
>>> sv = net.playback(function)
>>> ## Testing:
>>> class Dummy:
...     def update(self, result):
...         return result
>>> sv.displayers = [Dummy()]
>>> print("Testing"); sv.goto("end")  # doctest: +ELLIPSIS
Testing...

plot(metrics=None, ymin=None, ymax=None, start=0, end=None, legend='best', label=None, symbols=None, default_symbol='-', title=None, return_fig_ax=False, fig_ax=None, format=None)[source]

Plots the current network history for the specified epoch range and metrics. metrics is ‘?’, ‘all’, a metric keyword, or a list of metric keywords. If metrics is None, loss and accuracy are plotted on separate graphs.

>>> net = Network("Plot Test", 1, 3, 1)
>>> net.compile(error="mse", optimizer="rmsprop")
>>> net.dataset.append([0.0], [1.0])
>>> net.dataset.append([1.0], [0.0])
>>> net.train(plot=False)  
Evaluating initial training metrics...
Training...
...
>>> net.plot('?')
Available metrics: acc, loss
plot_activation_map(from_layer='input', from_units=(0, 1), to_layer='output', to_unit=0, colormap=None, default_from_layer_value=0, resolution=None, act_range=(0, 1), show_values=False, title=None, scatter=None, symbols=None, default_symbol='o', format=None, update_pictures=False)[source]

Plot the activations at a bank/unit given two input units.

plot_layer_weights(layer_name, units='all', wrange=None, wmin=None, wmax=None, colormap='gray', vshape=None, cbar=True, ticks=5, format=None, layout=None, spacing=0.2, figsize=None, scale=None, title=None)[source]

The weight range displayed on the colorbar can be specified as wrange=(wmin, wmax), or individually via the wmin/wmax keywords. If wmin or wmax is None, the actual min/max value of the weight matrix is used. wrange overrides provided wmin/wmax values. ticks is the number of colorbar ticks displayed. cbar=False turns off the colorbar. units can be a single unit index number or a list/tuple/range of indices.

plot_results(callback=None, format=None)[source]

Plots loss and accuracy on separate graphs, ignoring any other metrics.

pp(*args, **opts)[source]

Pretty-print a vector.

propagate(input, batch_size=32, class_id=None, update_pictures=False, raw=False)[source]

Propagate an input (in human API) through the network. If visualizing, the network image will be updated.

Inputs should be a vector if one input bank, or a list of vectors if more than one input bank.

Alternatively, inputs can be a dictionary mapping bank to vector.

>>> net = Network("Prop Test", 2, 2, 5)
>>> net.compile(error="mse", optimizer="adam")
>>> len(net.propagate([0.5, 0.5]))
5
>>> len(net.propagate({"input": [1, 1]}))
5
propagate_from(layer_name, input, output_layer_names=None, batch_size=32, update_pictures=False, raw=False)[source]

Propagate activations from the given layer name to the output layers.
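A hedged sketch (the vector is injected at the named layer, so its length matches that layer's size; the return format is assumed to mirror Network.propagate):

net = Network("PropFrom Sketch", 2, 3, 5)
net.compile(error="mse", optimizer="adam")
# Inject a 3-unit activation vector directly at the hidden layer and
# read the resulting output activations:
out = net.propagate_from("hidden", [0.1, 0.2, 0.3])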

propagate_to(layer_name, inputs, batch_size=32, class_id=None, update_pictures=False, update_path=True, raw=False)[source]

Computes activation at a layer. Side-effect: updates live SVG.

Parameters:
  • layer_name (str) –
  • inputs – list of numbers; the vector to propagate.
  • batch_size (int) –
  • update_pictures (bool) –
  • raw (bool) –
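For example, assuming (per the description above) that the return value is the activation vector at the named layer:

>>> net = Network("Prop To Test", 2, 2, 5)
>>> net.compile(error="mse", optimizer="adam")
>>> len(net.propagate_to("hidden", [0.5, 0.5]))
2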
propagate_to_features(layer_name, inputs, cols=5, resize=None, scale=1.0, html=True, size=None, display=True, class_id=None, update_pictures=False, raw=False)[source]

If html is True, then generate HTML; otherwise send images.

propagate_to_image(layer_name, input, batch_size=32, resize=None, scale=1.0, class_id=None, update_pictures=False, raw=False, feature=None)[source]

Gets an image of activations at a layer. Always returns image in proper orientation.

rebuild_config()[source]
report_epoch(epoch_count, results)[source]

Print out stats for the epoch.

reset(clear=False, **overrides)[source]

Reset all of the weights/biases in a network. The magnitude is based on the size of the network.
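A minimal usage sketch:

>>> net = Network("Reset Test", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.reset()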

reset_config()[source]

Reset the config back to factory defaults.

retrain(**overrides)[source]

Call network.train() again with the same options as the last call, except for any given overrides.
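A hedged sketch (passing epochs as an override keyword is an assumption, matching the train() signature):

net.train(5, verbose=0, plot=False)
net.retrain(epochs=10)   # same options as before, but train for 10 epochs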

save(dir=None)[source]

Save the model and the weights/history (if compiled) to a dir.

save_config(datadir=None, config_file=None)[source]
save_history(dir=None, filename=None)[source]

Save the history to a file.

network.save_history()

save_model(dir=None, filename=None)[source]

Save a model (if compiled) to a dir/filename.

save_weights(dir=None, filename=None)[source]

Save the network weights and history to dir/files.

network.save_weights()

saved(dir=None)[source]

Return True if network has been saved.

set_activation(layer_name, activation)[source]

Swap activation function of a layer after compile.
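For example, a sketch based on the signature above:

>>> net = Network("Activation Swap", 2, 2, 1, activation="sigmoid")
>>> net.compile(error="mse", optimizer="adam")
>>> net.set_activation("hidden", "relu")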

set_dataset(dataset)[source]

Set the dataset for the network.

Examples

>>> from conx import Dataset
>>> data = [[[0, 0], [0]],
...         [[0, 1], [1]],
...         [[1, 0], [1]],
...         [[1, 1], [0]]]
>>> ds = Dataset()
>>> ds.load(data)
>>> net = Network("Set Dataset Test", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.set_dataset(ds)
set_weights(weights, layer_name=None)[source]

Set the model’s weights, or a particular layer’s weights.

>>> net = Network("Weight Set Test", 2, 2, 1, activation="sigmoid")
>>> net.compile(error="mse", optimizer="sgd")
>>> net.set_weights(net.get_weights())
>>> hw = net.get_weights("hidden")
>>> net.set_weights(hw, "hidden")
set_weights_from_history(index, epochs=None)[source]

Set the weights of the network from a particular point in the learning sequence.

net.set_weights_from_history(0)   # restore initial weights
net.set_weights_from_history(-1)  # restore last weights

See also

  • Network.get_weights_from_history
show_results(report_rate=None)[source]

Show the history of training results. If report_rate is given, use it; otherwise, use the report_rate from the last training run.

show_unit_weights(layer_name, unit, vshape=None, ascii=False)[source]
summary()[source]

Print out a summary of the network.

test(batch_size=32, show=False, tolerance=None, force=False, show_inputs=True, show_outputs=True, filter='all', interactive=True)[source]

Test a dataset.

test_dataset_ranges()[source]

Test the dataset ranges to see if they are within the range of the activation functions.

to_array() → list[source]

Get the weights of a network as a flat, one-dimensional list.

Example

>>> from conx import Network
>>> net = Network("Deep", 3, 4, 5, 2, 3, 4, 5)
>>> net.compile(optimizer="adam", error="mse")
>>> array = net.to_array()
>>> len(array)
103
Returns: All of the weights and biases of the network in a single, flat list.
to_svg(inputs=None, class_id=None, **kwargs)[source]

opts - temporary override of config

includes:
“font_size”: 12, “border_top”: 25, “border_bottom”: 25, “hspace”: 100, “vspace”: 50, “image_maxdim”: 200, “image_pixels_per_unit”: 50

See .config for all options.

tolerance
train(epochs=1, accuracy=None, error=None, batch_size=32, report_rate=1, verbose=1, kverbose=0, shuffle=True, tolerance=None, class_weight=None, sample_weight=None, use_validation_to_stop=False, plot=True, record=0, callbacks=None, save=False)[source]

Train the network.

To stop before the given number of epochs, provide either error=VALUE or accuracy=VALUE.

Normally, the stopping criteria are checked against training metrics, unless you set use_validation_to_stop=True.

Parameters:
  • epochs (int) – Maximum number of epochs (sweeps) through training data.
  • accuracy (float) – Value of correctness (0.0 - 1.0) to attain in order to stop. Depends on tolerance to determine accuracy.
  • error (float) – Error to attain in order to stop. Depends on error function given in Network.compile.
  • batch_size (int) – Size of batch to train on.
  • report_rate (int) – Rate of feedback on learning, in epochs.
  • verbose (int) – Level of feedback on training. verbose=0 gives no feedback, but returns (epoch_count, result)
  • kverbose (int) – Level of feedback from Keras.
  • shuffle (bool or str) – Should the training data be shuffled? ‘batch’ shuffles in batch-sized chunks.
  • tolerance (float) – The maximum difference between target and output that should be considered correct.
  • class_weight (float) –
  • sample_weight (float) –
  • use_validation_to_stop (bool) – If True, then accuracy and error will use the validation set rather than the training set.
  • plot (bool) – If True, then the feedback will be shown in graphical form.
  • record (int) – If record != 0, the weights will be saved every record epochs.
  • callbacks (list) – A list of (str, function) where str is ‘on_batch_begin’, ‘on_batch_end’, ‘on_epoch_begin’, ‘on_epoch_end’, ‘on_train_begin’, or ‘on_train_end’, and function takes a network, and other parameters, depending on str.
  • save (bool) – If True, then the network is saved at end, whether interrupted or not.
Returns:

(epoch_count, result) if verbose == 0; None if verbose != 0

Return type:

tuple

Examples

>>> net = Network("Train Test", 1, 3, 1)
>>> net.compile(error="mse", optimizer="rmsprop")
>>> net.dataset.append([0.0], [1.0])
>>> net.dataset.append([1.0], [0.0])
>>> net.train(plot=False)  
Evaluating initial training metrics...
Training...
...
train_one(inputs, targets, batch_size=32, update_pictures=False)[source]

Train on one input/target pair.

Inputs should be a vector if one input bank, or a list of vectors if more than one input bank.

Targets should be a vector if one output bank, or a list of vectors if more than one output bank.

Alternatively, inputs and targets can each be a dictionary mapping bank to vector.

Examples

>>> from conx import Network, Layer, SGD, Dataset
>>> net = Network("XOR", 2, 2, 1, activation="sigmoid")
>>> net.compile(error='mean_squared_error',
...             optimizer=SGD(lr=0.3, momentum=0.9))
>>> ds = [[[0, 0], [0]],
...       [[0, 1], [1]],
...       [[1, 0], [1]],
...       [[1, 1], [0]]]
>>> net.dataset.load(ds)
>>> out, err = net.train_one({"input": [0, 0]},
...                          {"output": [0]})
>>> len(out)
1
>>> len(err)
1
>>> from conx import Network, Layer, SGD, Dataset
>>> net = Network("XOR2")
>>> net.add(Layer("input%d", shape=1))
'input1'
>>> net.add(Layer("input%d", shape=1))
'input2'
>>> net.add(Layer("hidden%d", shape=2, activation="sigmoid"))
'hidden1'
>>> net.add(Layer("hidden%d", shape=2, activation="sigmoid"))
'hidden2'
>>> net.add(Layer("shared-hidden", shape=2, activation="sigmoid"))
'shared-hidden'
>>> net.add(Layer("output%d", shape=1, activation="sigmoid"))
'output1'
>>> net.add(Layer("output%d", shape=1, activation="sigmoid"))
'output2'
>>> net.connect("input1", "hidden1")
>>> net.connect("input2", "hidden2")
>>> net.connect("hidden1", "shared-hidden")
>>> net.connect("hidden2", "shared-hidden")
>>> net.connect("shared-hidden", "output1")
>>> net.connect("shared-hidden", "output2")
>>> net.compile(error='mean_squared_error',
...             optimizer=SGD(lr=0.3, momentum=0.9))
>>> ds = [([[0],[0]], [[0],[0]]),
...       ([[0],[1]], [[1],[1]]),
...       ([[1],[0]], [[1],[1]]),
...       ([[1],[1]], [[0],[0]])]
>>> net.dataset.load(ds)
>>> net.compile(error='mean_squared_error',
...             optimizer=SGD(lr=0.3, momentum=0.9))
>>> out, err = net.train_one({"input1": [0], "input2": [0]},
...                          {"output1": [0], "output2": [0]})
>>> len(out)
2
>>> len(err)
2
>>> net.dataset._num_input_banks()
2
>>> net.dataset._num_target_banks()
2
update_config(config)[source]
update_layer_from_config(layer)[source]
update_model()[source]

Useful if you change, say, an activation function after training.

vshape(layer_name)[source]

Find the vshape of layer.

class conx.network.PlotCallback(network, report_rate, mpl_backend)[source]

Bases: keras.callbacks.Callback

on_epoch_end(epoch, logs=None)[source]
class conx.network.ReportCallback(network, verbose, report_rate, mpl_backend, record)[source]

Bases: keras.callbacks.Callback

on_epoch_end(epoch, logs=None)[source]
class conx.network.StoppingCriteria(item, op, value, use_validation_to_stop)[source]

Bases: keras.callbacks.Callback

compare(v1, op, v2)[source]
on_epoch_end(epoch, logs=None)[source]

4.1.3. conx.dataset module

The Dataset class is useful for loading standard datasets, or for manipulating a set of inputs/targets.

class conx.dataset.DataVector(dataset, item)[source]

Bases: object

Class to make internal Keras numpy arrays look like lists in the [bank, bank, …] format.

append_bank(shape=None, dtype=None)[source]

Add a new bank of inputs, targets, or labels with given shape.

For labels, shape is max length of any string in bank. dtype is str for labels.

For inputs and targets, shape is shape of tensor. dtype is ‘float32’ by default.

Note

labels and targets should have the same number of banks.

>>> ds = Dataset()
>>> ds.load(inputs=[[0,0], [1,1]], targets=[[0,0,0], [1,1,1]], labels=["zero", "one"])
>>> ds.inputs.append_bank(4)
>>> ds.inputs.shape
[(2,), (4,)]
>>> ds.targets.append_bank(5)
>>> ds.targets.shape
[(3,), (5,)]
>>> ds.labels.append_bank(1)
>>> ds.labels[0]
['zero', ' ']
>>> ds.labels[1]
['one', ' ']
delete_bank(position)[source]

Delete a bank of inputs, targets, or labels.

>>> ds = Dataset()
>>> ds.load(inputs=[[0, 0]], targets=[[0, 0, 0]], labels=["zero"])
>>> ds.inputs.append_bank(4)
>>> ds.targets.append_bank(5)
>>> ds.labels.append_bank(10)
>>> ds.inputs.delete_bank(0)
>>> ds.targets.delete_bank(0)
>>> ds.labels.delete_bank(0)
get_shape(bank_index=None)[source]

Get the shape of the tensor at bank_index.

>>> from conx import Network, Layer
>>> net = Network("Get Shape")
>>> net.add(Layer("input1", 5))
'input1'
>>> net.add(Layer("input2", 6))
'input2'
>>> net.add(Layer("output", 3))
'output'
>>> net.connect("input1", "output")
>>> net.connect("input2", "output")
>>> net.compile(optimizer="adam", error="mse")
>>> net.dataset.load([
...   (
...     [[1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]],
...     [0.5, 0.5, 0.5]
...   ),
... ])
>>> net.dataset.inputs.get_shape()
[(5,), (6,)]
>>> net.dataset.inputs.get_shape(0)
(5,)
>>> net.dataset.inputs.get_shape(1)
(6,)
>>> net.dataset.targets.get_shape()
[(3,)]
>>> net.dataset.targets.get_shape(0)
(3,)
>>> net.dataset.inputs.shape
[(5,), (6,)]
>>> net.dataset.targets.shape
[(3,)]
reshape(bank_index, new_shape=None)[source]

Reshape the tensor at bank_index.

>>> from conx import Network
>>> net = Network("Test 1", 10, 2, 3, 28 * 28)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.append([0] * 10, [0] * 28 * 28)
>>> net.dataset.inputs.shape
[(10,)]
>>> net.dataset.inputs.reshape(0, (2, 5))
>>> net.dataset.inputs.shape
[(2, 5)]
>>> net.dataset.targets.shape
[(784,)]
>>> net.dataset.targets.shape = (28 * 28,)
>>> net.dataset.targets.shape
[(784,)]
select(function, slice=None, index=False)[source]

select selects items or indices from the dataset.

function takes (i, dataset) and returns True or False. select will return all items that match the filter.

Examples

>>> ds = Dataset()
>>> print("Downloading...");ds.get("mnist") 
Downloading...
>>> ds.inputs.select(lambda i,dataset: True, slice=10, index=True)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> s = ds.inputs.select(lambda i,dataset: ds.inputs[i], slice=(10, 20, 2))
>>> shape(s)
(5, 28, 28, 1)
>>> ds.clear()
Parameters:
  • function – callable that takes (i, dataset) and returns True or False.
  • slice – range of items/indices to return.
  • index – if index is True, then return indices, else return the items.
shape

The shape of each bank, as a list; equivalent to get_shape(). See get_shape above for examples.
class conx.dataset.Dataset(network=None, name=None, description=None, input_shapes=None, target_shapes=None)[source]

Bases: object

Contains the dataset, and metadata about it.

input_shapes = [shape, …]
target_shapes = [shape, …]

append(pairs=None, inputs=None)[source]

Append an input and a target, or a list of [[input, target], …].

>>> ds = Dataset()
>>> ds.append([0, 0], [0])
>>> ds.append([0, 1], [1])
>>> ds.append([1, 0], [1])
>>> ds.append([1, 1], [0])
>>> len(ds)
4
>>> ds.clear()
>>> len(ds)
0
>>> ds.append([[[0, 0], [0]],
...            [[0, 1], [1]],
...            [[1, 0], [1]],
...            [[1, 1], [0]]])
>>> len(ds)
4
>>> ds.append([[[0, 0], [0]],
...            [[0, 1], [1]],
...            [[1, 0], [1]],
...            [[1, 1], [0]]])
>>> len(ds)
8
append_by_function(width, frange, ifunction, tfunction)[source]

width – length of an input vector
frange – (start, stop) or (start, stop, step)
ifunction – “onehot” or “binary” or callable(i, width)
tfunction – a function given (i, input vector), returning the target vector

To add an AND problem:

>>> from conx import Network
>>> net = Network("Test 3", 2, 2, 3, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.append_by_function(2, (0, 4), "binary", lambda i,v: [int(sum(v) == len(v))])
>>> len(net.dataset.inputs)
4

Adds the following inputs/targets:

[0, 0], [0]
[0, 1], [0]
[1, 0], [0]
[1, 1], [1]

>>> net = Network("Test 4", 10, 2, 3, 10)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.append_by_function(10, (0, 10), "onehot", lambda i,v: v)
>>> len(net.dataset.inputs)
10
>>> import numpy as np
>>> net = Network("Test 5", 10, 2, 3, 10)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.append_by_function(10, (0, 10), lambda i, width: np.random.rand(width), lambda i,v: v)
>>> len(net.dataset.inputs)
10
append_random(count, frange=(-1, 1))[source]

Append a number of random values in the range frange to inputs and targets.

Requires that dataset belongs to a network with input layers.

>>> from conx import *
>>> net = Network("Random", 5, 2, 3, 4)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.append_random(100)
>>> len(net.dataset.inputs)
100
>>> shape(net.dataset.inputs)
(100, 5)
>>> len(net.dataset.targets)
100
>>> shape(net.dataset.targets)
(100, 4)
chop(amount)[source]

Chop off the specified amount of input and target patterns from the dataset, starting from the end. Amount can be a fraction in the range 0-1, or an integer number of patterns to drop.

>>> dataset = Dataset()
>>> print("Downloading..."); dataset.get("mnist")  # doctest: +ELLIPSIS
Downloading...
>>> len(dataset)
70000
>>> dataset.chop(10000)
>>> len(dataset)
60000
>>> dataset.split(0.25)
>>> dataset.split()
(45000, 15000)
>>> dataset.chop(0.10)
>>> dataset.split()
(54000, 0)
>>> dataset.clear()
clear()[source]

Remove all of the inputs/targets.

compile(pairs)[source]
copy(dataset)[source]

Copy the inputs/targets from one dataset into this one.
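A minimal sketch:

>>> ds1 = Dataset()
>>> ds1.load([[[0, 0], [0]], [[1, 1], [1]]])
>>> ds2 = Dataset()
>>> ds2.copy(ds1)
>>> len(ds2)
2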

datasets()[source]

Returns the list of available datasets.

Can be called on the Dataset class.

>>> len(Dataset.datasets())
10
>>> ds = Dataset()
>>> len(ds.datasets())
10
get(dataset_name=None, *args, **kwargs)[source]

Get a known dataset by name.

Can be called on the Dataset class. If it is, returns a new Dataset instance.

>>> print("Downloading..."); ds = Dataset.get("mnist") 
Downloading...
>>> len(ds.inputs)
70000
>>> ds = Dataset()
>>> ds.get("mnist")
>>> len(ds.targets)
70000
>>> ds.targets[0]
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
>>> ds.clear()
info()[source]

Print out high-level information about the dataset.

load(pairs=None, inputs=None, targets=None, labels=None)[source]

Dataset.load() will clear and load a new dataset.

You can load a dataset through a number of variations:

  • dataset.load([[input, target], …])
  • dataset.load(inputs=[input, …], targets=[target, …])
  • dataset.load(generator, count)
>>> ds = Dataset()
>>> ds.load([[[0, 0], [0]],
...          [[0, 1], [1]],
...          [[1, 0], [1]],
...          [[1, 1], [0]]])
>>> len(ds)
4
>>> ds.load(inputs=[[0, 0], [0, 1], [1, 0], [1, 1]], # inputs
...         targets=[[0], [1], [1], [0]]) # targets
>>> len(ds)
4
>>> def generator():
...     for data in [[[0, 0], [0]],
...                  [[0, 1], [1]],
...                  [[1, 0], [1]],
...                  [[1, 1], [0]]]:
...         yield data
>>> ds.load(generator(), 4)
>>> len(ds)
4
load_direct(inputs=None, targets=None, labels=None)[source]

Set the inputs/targets in the specific internal format:

[[input-layer-1-vectors, …], [input-layer-2-vectors, …], …]

[[target-layer-1-vectors, …], [target-layer-2-vectors, …], …]
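A hedged sketch of this format, with one numpy array of vectors per bank:

>>> import numpy as np
>>> ds = Dataset()
>>> ds.load_direct(inputs=[np.array([[0, 0], [1, 1]])],
...                targets=[np.array([[0], [1]])])
>>> len(ds)
2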

make_info()[source]
rescale_inputs(bank_index, old_range, new_range, new_dtype)[source]

Rescale the inputs.
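A hedged sketch, e.g. rescaling 8-bit pixel values in bank 0 to floats in [0, 1]:

ds.rescale_inputs(0, (0, 255), (0, 1), "float32")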

set_inputs_from_targets(f=None, input_bank=0, target_bank=0)[source]

Copy the targets to the inputs. Optionally, apply a function f to the copied targets.

>>> from conx import Network
>>> net = Network("Sample", 2, 2, 1)
>>> ds = [[[0, 0], [0]],
...       [[0, 1], [1]],
...       [[1, 0], [1]],
...       [[1, 1], [0]]]
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.load(ds)
>>> net.dataset.set_inputs_from_targets(lambda tv: [tv[0], tv[0]])
>>> net.dataset.inputs[1]
[1.0, 1.0]
set_targets_from_inputs(f=None, input_bank=0, target_bank=0)[source]

Copy the inputs to the targets. Optionally, apply a function f to the copied inputs.

>>> from conx import Network
>>> net = Network("Sample", 2, 2, 1)
>>> ds = [[[0, 0], [0]],
...       [[0, 1], [1]],
...       [[1, 0], [1]],
...       [[1, 1], [0]]]
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.load(ds)
>>> net.dataset.set_targets_from_inputs(lambda iv: [iv[0]])
>>> net.dataset.targets[1]
[0.0]
set_targets_from_labels(num_classes=None, bank_index=0)[source]

Given that the labels are integers, set the targets to onehot() categories.

shuffle()[source]

Shuffle the inputs/targets.

slice(start=None, stop=None)[source]

Cut out some input/targets.

net.slice(100)       # reduce to first 100 inputs/targets
net.slice(100, 200)  # reduce to second 100 inputs/targets

split(split=None)[source]

Splits the inputs/targets into training and validation sets. The split keyword parameter specifies what portion of the dataset to use for validation. It can be a fraction in the range [0,1), or an integer number of patterns from 0 to the dataset size, or ‘all’. For example, a split of 0.25 reserves the last 1/4 of the dataset for validation. A split of 1.0 (specified as ‘all’ or an int equal to the dataset size) is a special case in which the entire dataset is used for both training and validation.
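For example (the counts returned by a bare split() call follow the chop doctest above):

>>> ds = Dataset()
>>> ds.load([[[0, 0], [0]], [[0, 1], [1]], [[1, 0], [1]], [[1, 1], [0]]])
>>> ds.split(0.25)
>>> ds.split()
(3, 1)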

summary()[source]

4.1.4. conx.layers module

The conx.layers module contains the code for all of the layers. In addition, it dynamically loads all of the Keras layers and wraps them as a conx layer.

class conx.layers.ActivationLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ActivationLayer

Applies an activation function to an output.

Arguments

  • activation: name of activation function to use (see: activations), or alternatively, a Theano or TensorFlow operation.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.

CLASS

alias of Activation

class conx.layers.ActivityRegularizationLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ActivityRegularizationLayer

Layer that applies an update to the cost function based on input activity.

Arguments

  • l1: L1 regularization factor (positive float).
  • l2: L2 regularization factor (positive float).

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.

CLASS

alias of ActivityRegularization

class conx.layers.AddLayer(name, **params)[source]

Bases: conx.layers._BaseLayer

A Layer for adding the output vectors of multiple layers together.

CLASS

alias of Add

make_keras_function()[source]
make_keras_functions()[source]

This keras function just returns the Tensor.

on_connect(relation, other_layer)[source]

relation is “to”/“from” indicating which layer self is.

conx.layers.AdditionLayer

alias of AddLayer
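A hedged sketch of merging two banks with an AddLayer, following the multi-bank connect() pattern shown for Network.train_one above (names are illustrative):

net = Network("Add Sketch")
net.add(Layer("input1", 3))
net.add(Layer("input2", 3))
net.add(AddLayer("sum"))                           # elementwise sum of both banks
net.add(Layer("output", 3, activation="sigmoid"))
net.connect("input1", "sum")
net.connect("input2", "sum")
net.connect("sum", "output")
net.compile(error="mse", optimizer="adam")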

class conx.layers.AlphaDropoutLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AlphaDropoutLayer

Applies Alpha Dropout to the input.

Alpha Dropout is a Dropout that keeps mean and variance of inputs
to their original values, in order to ensure the self-normalizing property
even after this dropout.
Alpha Dropout fits well to Scaled Exponential Linear Units
by randomly setting activations to the negative saturation value.

Arguments

  • rate: float, drop probability (as with Dropout). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)).
  • seed: A Python integer to use as random seed.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.

CLASS

alias of AlphaDropout

class conx.layers.AverageLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for averaging the output vectors of layers together.

CLASS

alias of Average

make_keras_function()[source]
class conx.layers.AveragePooling1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AveragePooling1DLayer

Average pooling for temporal data.

Arguments

  • pool_size: Integer, size of the average pooling windows.
  • strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

3D tensor with shape: (batch_size, downsampled_steps, features).

CLASS

alias of AveragePooling1D

class conx.layers.AveragePooling2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AveragePooling2DLayer

Average pooling operation for spatial data.

Arguments

  • pool_size: integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions.
  • strides: Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, pooled_rows, pooled_cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, pooled_rows, pooled_cols)
CLASS

alias of AveragePooling2D

class conx.layers.AveragePooling3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AveragePooling3DLayer

Average pooling operation for 3D data (spatial or spatio-temporal).

Arguments

  • pool_size: tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension.
  • strides: tuple of 3 integers, or None. Strides values.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)
CLASS

alias of AveragePooling3D

class conx.layers.AvgPool1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AvgPool1DLayer

Average pooling for temporal data.

Arguments

  • pool_size: Integer, size of the average pooling windows.
  • strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

3D tensor with shape: (batch_size, downsampled_steps, features).

CLASS

alias of AveragePooling1D

class conx.layers.AvgPool2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AvgPool2DLayer

Average pooling operation for spatial data.

Arguments

  • pool_size: integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions.
  • strides: Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, pooled_rows, pooled_cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, pooled_rows, pooled_cols)
CLASS

alias of AveragePooling2D

class conx.layers.AvgPool3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

AvgPool3DLayer

Average pooling operation for 3D data (spatial or spatio-temporal).

Arguments

  • pool_size: tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension.
  • strides: tuple of 3 integers, or None. Strides values.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)
CLASS

alias of AveragePooling3D

class conx.layers.BatchNormalizationLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

BatchNormalizationLayer

Batch normalization layer (Ioffe and Szegedy, 2014).

Normalize the activations of the previous layer at each batch,
i.e. applies a transformation that maintains the mean activation
close to 0 and the activation standard deviation close to 1.

Arguments

  • axis: Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
  • momentum: Momentum for the moving mean and the moving variance.
  • epsilon: Small float added to variance to avoid dividing by zero.
  • center: If True, add offset of beta to normalized tensor. If False, beta is ignored.
  • scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
  • beta_initializer: Initializer for the beta weight.
  • gamma_initializer: Initializer for the gamma weight.
  • moving_mean_initializer: Initializer for the moving mean.
  • moving_variance_initializer: Initializer for the moving variance.
  • beta_regularizer: Optional regularizer for the beta weight.
  • gamma_regularizer: Optional regularizer for the gamma weight.
  • beta_constraint: Optional constraint for the beta weight.
  • gamma_constraint: Optional constraint for the gamma weight.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.

CLASS

alias of BatchNormalization

class conx.layers.BidirectionalLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

BidirectionalLayer

Bidirectional wrapper for RNNs.

Arguments

  • layer: Recurrent instance.
  • merge_mode: Mode by which outputs of the forward and backward RNNs will be combined. One of {‘sum’, ‘mul’, ‘concat’, ‘ave’, None}. If None, the outputs will not be combined, they will be returned as a list.

Raises

  • ValueError: In case of invalid merge_mode argument.

Examples

model = Sequential()
model.add(Bidirectional(LSTM(10, return_sequences=True),
            input_shape=(5, 10)))
model.add(Bidirectional(LSTM(10)))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
CLASS

alias of Bidirectional

class conx.layers.ConcatenateLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for sticking layers together.

CLASS

alias of Concatenate

make_keras_function()[source]
class conx.layers.Conv1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Conv1DLayer

1D convolution layer (e.g. temporal convolution).

This layer creates a convolution kernel that is convolved
with the layer input over a single spatial (or temporal) dimension
to produce a tensor of outputs.
If use_bias is True, a bias vector is created and added to the outputs.
Finally, if activation is not None,
it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide an input_shape argument
(tuple of integers or None, e.g.
(10, 128) for sequences of 10 vectors of 128-dimensional vectors,
or (None, 128) for variable-length sequences of 128-dimensional vectors).

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: One of "valid", "causal" or "same" (case-insensitive). "valid" means “no padding”. "same" results in padding the input such that the output has the same length as the original input. "causal" results in causal (dilated) convolutions, e.g. output[t] does not depend on input[t+1:]. Useful when modeling temporal data where the model should not violate the temporal order. See WaveNet: A Generative Model for Raw Audio, section 2.1.
  • dilation_rate: an integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

3D tensor with shape: (batch_size, steps, input_dim)

Output shape

3D tensor with shape: (batch_size, new_steps, filters)
steps value might have changed due to padding or strides.
CLASS

alias of Conv1D

class conx.layers.Conv2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Conv2DLayer

2D convolution layer (e.g. spatial convolution over images).

This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If use_bias is True,
a bias vector is created and added to the outputs. Finally, if
activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(samples, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(samples, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.
CLASS

alias of Conv2D
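A hedged sketch of a small convolutional stack in conx. Conv2DLayer passes its positional and keyword arguments through to keras.layers.Conv2D; the tuple shape keyword on Layer and the FlattenLayer wrapper are assumptions following this module's Keras-wrapping convention:

net = Network("Conv Sketch")
net.add(Layer("input", shape=(28, 28, 1)))          # assumed shape keyword
net.add(Conv2DLayer("conv", 8, (3, 3), activation="relu"))
net.add(FlattenLayer("flatten"))                    # assumed Keras wrapper
net.add(Layer("output", 10, activation="softmax"))
net.connect()
net.compile(error="categorical_crossentropy", optimizer="adam")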

class conx.layers.Conv2DTransposeLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Conv2DTransposeLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.

CLASS

alias of Conv2DTranspose

class conx.layers.Conv3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Conv3DLayer

3D convolution layer (e.g. spatial convolution over volumes).

This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If use_bias is True,
a bias vector is created and added to the outputs. Finally, if
activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes
with a single channel,
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

5D tensor with shape:
(samples, channels, conv_dim1, conv_dim2, conv_dim3) if data_format=’channels_first’
or 5D tensor with shape:
(samples, conv_dim1, conv_dim2, conv_dim3, channels) if data_format=’channels_last’.

Output shape

5D tensor with shape:
(samples, filters, new_conv_dim1, new_conv_dim2, new_conv_dim3) if data_format=’channels_first’
or 5D tensor with shape:
(samples, new_conv_dim1, new_conv_dim2, new_conv_dim3, filters) if data_format=’channels_last’.
new_conv_dim1, new_conv_dim2 and new_conv_dim3 values might have changed due to padding.
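
Examples

A minimal sketch of the shape behaviour, using the underlying keras.layers.Conv3D directly (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import Conv3D

model = Sequential()
# Default 'valid' padding shrinks each spatial dimension by
# kernel_size - 1: (None, 16, 16, 16, 1) -> (None, 14, 14, 14, 8)
model.add(Conv3D(8, (3, 3, 3), activation='relu',
                 input_shape=(16, 16, 16, 1)))
print(model.output_shape)  # (None, 14, 14, 14, 8)
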
CLASS

alias of Conv3D

class conx.layers.Conv3DTransposeLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Conv3DTransposeLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels
if data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

5D tensor with shape:
(batch, channels, depth, rows, cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, depth, rows, cols, channels) if data_format=’channels_last’.

Output shape

5D tensor with shape:
(batch, filters, new_depth, new_rows, new_cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, new_depth, new_rows, new_cols, filters) if data_format=’channels_last’.
depth and rows and cols values might have changed due to padding.

CLASS

alias of Conv3DTranspose

class conx.layers.ConvLSTM2DCellLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ConvLSTM2DCellLayer

Cell class for the ConvLSTM2D layer.

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of n integers, specifying the dimensions of the convolution window.
  • strides: An integer or tuple/list of n integers, specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: An integer or tuple/list of n integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Use in combination with bias_initializer="zeros". This is recommended in Jozefowicz et al.
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
CLASS

alias of ConvLSTM2DCell

class conx.layers.ConvLSTM2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ConvLSTM2DLayer

Convolutional LSTM.

It is similar to an LSTM layer, but the input transformations
and recurrent transformations are both convolutional.

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of n integers, specifying the dimensions of the convolution window.
  • strides: An integer or tuple/list of n integers, specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, time, ..., channels) while channels_first corresponds to inputs with shape (batch, time, channels, ...). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: An integer or tuple/list of n integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Use in combination with bias_initializer="zeros". This is recommended in Jozefowicz et al.
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.

Input shape

  • if data_format=’channels_first’ 5D tensor with shape: (samples, time, channels, rows, cols)
  • if data_format=’channels_last’ 5D tensor with shape: (samples, time, rows, cols, channels)

Output shape

  • if return_sequences
    • if data_format=’channels_first’ 5D tensor with shape: (samples, time, filters, output_row, output_col)
    • if data_format=’channels_last’ 5D tensor with shape: (samples, time, output_row, output_col, filters)
  • else
    • if data_format =’channels_first’ 4D tensor with shape: (samples, filters, output_row, output_col)
    • if data_format=’channels_last’ 4D tensor with shape: (samples, output_row, output_col, filters) where output_row and output_col depend on the shape of the filter and the padding

Raises

  • ValueError: in case of invalid constructor arguments.
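
Examples

A minimal sketch of the 5D input/output shapes, using the underlying keras.layers.ConvLSTM2D directly (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import ConvLSTM2D

model = Sequential()
# Input is (batch, time, rows, cols, channels) for data_format='channels_last';
# return_sequences=True keeps the time axis in the output.
model.add(ConvLSTM2D(40, (3, 3), padding='same', return_sequences=True,
                     input_shape=(None, 40, 40, 1)))
print(model.output_shape)  # (None, None, 40, 40, 40)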

CLASS

alias of ConvLSTM2D

class conx.layers.ConvRNN2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ConvRNN2DLayer

Base class for convolutional-recurrent layers.

Arguments

  • cell: A RNN cell instance. A RNN cell is a class that has:
    • a call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants, see section “Note on passing external constants” below.
    • a state_size attribute. This can be a single integer (single state) in which case it is the number of channels of the recurrent state (which should be the same as the number of channels of the cell output). This can also be a list/tuple of integers (one size per state). In this case, the first entry (state_size[0]) should be the same as the size of the cell output.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • input_shape: Use this argument to specify the shape of the input when this layer is the first one in a model.

Input shape

5D tensor with shape:
(samples, timesteps, channels, rows, cols) if data_format=’channels_first’
or 5D tensor with shape:
(samples, timesteps, rows, cols, channels) if data_format=’channels_last’.

Output shape

  • if return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each 5D tensor with shape: (samples, timesteps, filters, new_rows, new_cols) if data_format=’channels_first’ or 5D tensor with shape: (samples, timesteps, new_rows, new_cols, filters) if data_format=’channels_last’. rows and cols values might have changed due to padding.
  • if return_sequences: 5D tensor with shape: (samples, timesteps, filters, new_rows, new_cols) if data_format=’channels_first’ or 5D tensor with shape: (samples, timesteps, new_rows, new_cols, filters) if data_format=’channels_last’.
  • else, 4D tensor with shape: (samples, filters, new_rows, new_cols) if data_format=’channels_first’ or 4D tensor with shape: (samples, new_rows, new_cols, filters) if data_format=’channels_last’.

Masking

This layer supports masking for input data with a variable number
of timesteps. To introduce masks to your data,
use an Embedding layer with the mask_zero parameter
set to True.

Note on using statefulness in RNNs

You can set RNN layers to be ‘stateful’, which means that the states
computed for the samples in one batch will be reused as initial states
for the samples in the next batch. This assumes a one-to-one mapping
between samples in different successive batches.
To enable statefulness:
  • specify stateful=True in the layer constructor.
  • specify a fixed batch size for your model, by passing:
    • if sequential model: batch_input_shape=(...) to the first layer in your model.
    • if functional model with 1 or more Input layers: batch_shape=(...) to all the first layers in your model. This is the expected shape of your inputs, including the batch size. It should be a tuple of integers, e.g. (32, 10, 100, 100, 32). Note that the number of rows and columns should be specified too.
  • specify shuffle=False when calling fit().
To reset the states of your model, call .reset_states() on either
a specific layer, or on your entire model.
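
For example, a minimal stateful configuration for a sequential model (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import ConvLSTM2D

model = Sequential()
# batch_input_shape fixes the batch size (here 8) together with the rest
# of the input shape: (batch, time, rows, cols, channels).
model.add(ConvLSTM2D(16, (3, 3), padding='same', stateful=True,
                     batch_input_shape=(8, 10, 32, 32, 1)))
# Train with shuffle=False so that successive batches stay aligned,
# e.g. model.fit(x, y, batch_size=8, shuffle=False).
model.reset_states()  # clear the carried-over states when needed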

Note on specifying the initial state of RNNs

You can specify the initial state of RNN layers symbolically by
calling them with the keyword argument initial_state. The value of
initial_state should be a tensor or list of tensors representing
the initial state of the RNN layer.
You can specify the initial state of RNN layers numerically by
calling reset_states with the keyword argument states. The value of
states should be a numpy array or list of numpy arrays representing
the initial state of the RNN layer.
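
For example, a minimal sketch of the symbolic route, using the functional API (all shapes below are illustrative; for a channels_last ConvLSTM2D each state tensor has shape (rows, cols, filters) per sample):

from keras.layers import Input, ConvLSTM2D

frames = Input(shape=(10, 32, 32, 1))
h0 = Input(shape=(32, 32, 16))   # initial hidden state
c0 = Input(shape=(32, 32, 16))   # initial cell state
out = ConvLSTM2D(16, (3, 3), padding='same')(frames,
                                             initial_state=[h0, c0])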

Note on passing external constants to RNNs

You can pass “external” constants to the cell using the constants
keyword argument of RNN.__call__ (as well as RNN.call) method. This
requires that the cell.call method accepts the same keyword argument
constants. Such constants can be used to condition the cell
transformation on additional static inputs (not changing over time),
a.k.a. an attention mechanism.
CLASS

alias of ConvRNN2D

class conx.layers.ConvRecurrent2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ConvRecurrent2DLayer

Abstract base class for convolutional recurrent layers.

Do not use in a model – it’s not a functional layer!

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of n integers, specifying the dimensions of the convolution window.
  • strides: An integer or tuple/list of n integers, specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, time, ..., channels) while channels_first corresponds to inputs with shape (batch, time, channels, ...). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: An integer or tuple/list of n integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.

Input shape

5D tensor with shape (num_samples, timesteps, channels, rows, cols).

Output shape

  • if return_sequences: 5D tensor with shape (num_samples, timesteps, channels, rows, cols).
  • else, 4D tensor with shape (num_samples, channels, rows, cols).

Masking

This layer supports masking for input data with a variable number
of timesteps. To introduce masks to your data,
use an Embedding layer with the mask_zero parameter
set to True.
  • Note: for the time being, masking is only supported with Theano.

Note on using statefulness in RNNs

You can set RNN layers to be ‘stateful’, which means that the states
computed for the samples in one batch will be reused as initial states
for the samples in the next batch.
This assumes a one-to-one mapping between
samples in different successive batches.
To enable statefulness:
  • specify stateful=True in the layer constructor.
  • specify a fixed batch size for your model, by passing batch_input_shape=(...) to the first layer in your model. This is the expected shape of your inputs, including the batch size. It should be a tuple of integers, e.g. (32, 10, 100).
To reset the states of your model, call .reset_states() on either
a specific layer, or on your entire model.
CLASS

alias of ConvRecurrent2D

class conx.layers.Convolution1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Convolution1DLayer

1D convolution layer (e.g. temporal convolution).

This layer creates a convolution kernel that is convolved
with the layer input over a single spatial (or temporal) dimension
to produce a tensor of outputs.
If use_bias is True, a bias vector is created and added to the outputs.
Finally, if activation is not None,
it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide an input_shape argument
(tuple of integers or None, e.g.
(10, 128) for sequences of 10 vectors of 128-dimensional vectors,
or (None, 128) for variable-length sequences of 128-dimensional vectors).

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: One of "valid", "causal" or "same" (case-insensitive). "valid" means “no padding”. "same" results in padding the input such that the output has the same length as the original input. "causal" results in causal (dilated) convolutions, e.g. output[t] does not depend on input[t+1:]. Useful when modeling temporal data where the model should not violate the temporal order. See WaveNet: A Generative Model for Raw Audio, section 2.1.
  • dilation_rate: an integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

3D tensor with shape: (batch_size, steps, input_dim)

Output shape

3D tensor with shape: (batch_size, new_steps, filters)
steps value might have changed due to padding or strides.
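
Examples

A minimal sketch of the shape behaviour, using the underlying keras.layers.Conv1D directly (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
# Default 'valid' padding: new_steps = steps - kernel_size + 1 = 8.
model.add(Conv1D(32, 3, input_shape=(10, 16)))
print(model.output_shape)  # (None, 8, 32)
# With padding='causal' the number of steps is preserved and
# output[t] depends only on input[:t+1].
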
CLASS

alias of Conv1D

class conx.layers.Convolution2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Convolution2DLayer

2D convolution layer (e.g. spatial convolution over images).

This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If use_bias is True,
a bias vector is created and added to the outputs. Finally, if
activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(samples, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(samples, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.
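
Examples

A minimal sketch of the shape behaviour, using the underlying keras.layers.Conv2D directly (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# Default 'valid' padding: new_rows = rows - kernel_size + 1 = 26,
# and likewise for cols.
model.add(Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)))
print(model.output_shape)  # (None, 26, 26, 64)
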
CLASS

alias of Conv2D

class conx.layers.Convolution2DTransposeLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Convolution2DTransposeLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.

CLASS

alias of Conv2DTranspose

class conx.layers.Convolution3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Convolution3DLayer

3D convolution layer (e.g. spatial convolution over volumes).

This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If use_bias is True,
a bias vector is created and added to the outputs. Finally, if
activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes
with a single channel,
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

5D tensor with shape:
(samples, channels, conv_dim1, conv_dim2, conv_dim3) if data_format=’channels_first’
or 5D tensor with shape:
(samples, conv_dim1, conv_dim2, conv_dim3, channels) if data_format=’channels_last’.

Output shape

5D tensor with shape:
(samples, filters, new_conv_dim1, new_conv_dim2, new_conv_dim3) if data_format=’channels_first’
or 5D tensor with shape:
(samples, new_conv_dim1, new_conv_dim2, new_conv_dim3, filters) if data_format=’channels_last’.
new_conv_dim1, new_conv_dim2 and new_conv_dim3 values might have changed due to padding.
CLASS

alias of Conv3D

class conx.layers.Cropping1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Cropping1DLayer

Cropping layer for 1D input (e.g. temporal sequence).

It crops along the time dimension (axis 1).

Arguments

  • cropping: int or tuple of int (length 2). How many units should be trimmed off at the beginning and end of the cropping dimension (axis 1). If a single int is provided, the same value will be used for both.

Input shape

3D tensor with shape (batch, axis_to_crop, features)

Output shape

3D tensor with shape (batch, cropped_axis, features)
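
Examples

A minimal sketch, using the underlying keras.layers.Cropping1D directly (all sizes below are illustrative):

from keras.models import Sequential
from keras.layers import Cropping1D

model = Sequential()
# Trim 1 step from the beginning and 2 from the end of axis 1:
# (None, 10, 4) -> (None, 7, 4)
model.add(Cropping1D(cropping=(1, 2), input_shape=(10, 4)))
print(model.output_shape)  # (None, 7, 4)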

CLASS

alias of Cropping1D

class conx.layers.Cropping2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Cropping2DLayer

Cropping layer for 2D input (e.g. picture).

It crops along spatial dimensions, i.e. width and height.

Arguments

  • cropping: int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.
    • If int: the same symmetric cropping is applied to width and height.
    • If tuple of 2 ints: interpreted as two different symmetric cropping values for height and width: (symmetric_height_crop, symmetric_width_crop).
    • If tuple of 2 tuples of 2 ints: interpreted as ((top_crop, bottom_crop), (left_crop, right_crop))
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, rows, cols, channels)
  • If data_format is "channels_first": (batch, channels, rows, cols)

Output shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, cropped_rows, cropped_cols, channels)
  • If data_format is "channels_first": (batch, channels, cropped_rows, cropped_cols)

Examples

# Crop the input 2D images or feature maps
model = Sequential()
model.add(Cropping2D(cropping=((2, 2), (4, 4)),
         input_shape=(28, 28, 3)))
# now model.output_shape == (None, 24, 20, 3)
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Cropping2D(cropping=((2, 2), (2, 2))))
# now model.output_shape == (None, 20, 16, 64)
CLASS

alias of Cropping2D

class conx.layers.Cropping3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Cropping3DLayer

Cropping layer for 3D data (e.g. spatial or spatio-temporal).

Arguments

  • cropping: int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.
    • If int: the same symmetric cropping is applied to depth, height, and width.
    • If tuple of 3 ints: interpreted as three different symmetric cropping values for depth, height, and width: (symmetric_dim1_crop, symmetric_dim2_crop, symmetric_dim3_crop).
    • If tuple of 3 tuples of 2 ints: interpreted as ((left_dim1_crop, right_dim1_crop), (left_dim2_crop, right_dim2_crop), (left_dim3_crop, right_dim3_crop))
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop, depth)
  • If data_format is "channels_first": (batch, depth, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop)

Output shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, first_cropped_axis, second_cropped_axis, third_cropped_axis, depth)
  • If data_format is "channels_first": (batch, depth, first_cropped_axis, second_cropped_axis, third_cropped_axis)
CLASS

alias of Cropping3D

class conx.layers.CuDNNGRULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

CuDNNGRULayer

Fast GRU implementation backed by CuDNN.

Can only be run on GPU, with the TensorFlow backend.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
CLASS

alias of CuDNNGRU

class conx.layers.CuDNNLSTMLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

CuDNNLSTMLayer

Fast LSTM implementation backed by CuDNN.

Can only be run on GPU, with the TensorFlow backend.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. (see initializers).
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
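
Examples

A minimal sketch, using the underlying keras.layers.CuDNNLSTM directly (all sizes below are illustrative; remember this layer only runs on GPU with the TensorFlow backend):

from keras.models import Sequential
from keras.layers import CuDNNLSTM

model = Sequential()
# Drop-in replacement for LSTM with a reduced set of options
# (e.g. no activation or dropout arguments).
model.add(CuDNNLSTM(32, return_sequences=True, input_shape=(20, 8)))
print(model.output_shape)  # (None, 20, 32)
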
CLASS

alias of CuDNNLSTM

class conx.layers.Deconv2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Deconv2DLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.

CLASS

alias of Conv2DTranspose

class conx.layers.Deconv3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Deconv3DLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels
if data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

5D tensor with shape:
(batch, channels, depth, rows, cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, depth, rows, cols, channels) if data_format=’channels_last’.

Output shape

5D tensor with shape:
(batch, filters, new_depth, new_rows, new_cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, new_depth, new_rows, new_cols, filters) if data_format=’channels_last’.
depth and rows and cols values might have changed due to padding.

CLASS

alias of Conv3DTranspose

class conx.layers.Deconvolution2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Deconvolution2DLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.

CLASS

alias of Conv2DTranspose

class conx.layers.Deconvolution3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

Deconvolution3DLayer

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels
if data_format="channels_last".

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

5D tensor with shape:
(batch, channels, depth, rows, cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, depth, rows, cols, channels) if data_format=’channels_last’.

Output shape

5D tensor with shape:
(batch, filters, new_depth, new_rows, new_cols) if data_format=’channels_first’
or 5D tensor with shape:
(batch, new_depth, new_rows, new_cols, filters) if data_format=’channels_last’.
depth, rows and cols values might have changed due to padding.
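
For illustration, a minimal sketch with the aliased Keras Conv3DTranspose class; the volume size, filter count, and strides here are illustrative assumptions, not from the conx docs:

from keras.models import Sequential
from keras.layers import Conv3DTranspose

# upsample a 16x16x16 volume with 4 channels; padding='same' with
# stride 2 doubles each spatial dimension
model = Sequential()
model.add(Conv3DTranspose(8, (3, 3, 3), strides=(2, 2, 2), padding='same',
                          input_shape=(16, 16, 16, 4)))
# now model.output_shape == (None, 32, 32, 32, 8)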

CLASS

alias of Conv3DTranspose

conx.layers.DenseLayer

alias of Layer

class conx.layers.DepthwiseConv2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

DepthwiseConv2DLayer

Depthwise separable 2D convolution.

Depthwise separable convolutions consist of performing
just the first step of a depthwise spatial convolution
(which acts on each input channel separately).
The depth_multiplier argument controls how many
output channels are generated per input channel in the depthwise step.

Arguments

  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of 'valid' or 'same' (case-insensitive).
  • depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier.
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be ‘channels_last’.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. ‘linear’ activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • depthwise_initializer: Initializer for the depthwise kernel matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its ‘activation’). (see regularizer).
  • depthwise_constraint: Constraint function applied to the depthwise kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
[batch, channels, rows, cols] if data_format=’channels_first’
or 4D tensor with shape:
[batch, rows, cols, channels] if data_format=’channels_last’.

Output shape

4D tensor with shape:
[batch, filters, new_rows, new_cols] if data_format=’channels_first’
or 4D tensor with shape:
[batch, new_rows, new_cols, filters] if data_format=’channels_last’.
rows and cols values might have changed due to padding.
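
For illustration, a minimal sketch with the aliased Keras DepthwiseConv2D class; the input shape and depth_multiplier are illustrative assumptions:

from keras.models import Sequential
from keras.layers import DepthwiseConv2D

# one 3x3 filter per input channel; depth_multiplier=2 yields 2 output
# channels per input channel (3 * 2 = 6 total)
model = Sequential()
model.add(DepthwiseConv2D((3, 3), depth_multiplier=2,
                          input_shape=(32, 32, 3)))
# now model.output_shape == (None, 30, 30, 6)
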
CLASS

alias of DepthwiseConv2D

class conx.layers.DotLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for computing the dot product between layers.

CLASS

alias of Dot

make_keras_function()[source]

class conx.layers.DropoutLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

DropoutLayer

Applies Dropout to the input.

Dropout consists of randomly setting
a fraction rate of the input units to 0 at each update during training time,
which helps prevent overfitting.

Arguments

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).
  • seed: A Python integer to use as random seed.
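
As a brief sketch of typical usage with the aliased Keras Dropout class (the surrounding layers and sizes are illustrative assumptions):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(Dropout(0.5))   # randomly zero half of the 64 units, training only
model.add(Dense(10, activation='softmax'))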

CLASS

alias of Dropout

class conx.layers.ELULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ELULayer

Exponential Linear Unit.

It follows:
f(x) =  alpha * (exp(x) - 1.) for x < 0,
f(x) = x for x >= 0.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha: scale for the negative factor.
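
A minimal sketch using the aliased Keras ELU class (the Dense layer and sizes are illustrative assumptions):

from keras.models import Sequential
from keras.layers import Dense, ELU

model = Sequential()
model.add(Dense(64, input_shape=(20,)))
model.add(ELU(alpha=1.0))   # f(x) = alpha * (exp(x) - 1) for x < 0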

CLASS

alias of ELU

class conx.layers.EmbeddingLayer(name, in_size, out_size, **params)[source]

Bases: conx.layers.Layer

A class for embeddings. WIP.

make_keras_function()[source]
on_connect(relation, other_layer)[source]

relation is “to”/“from”, indicating which side of the connection self is on.

class conx.layers.FlattenLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

FlattenLayer

Flattens the input. Does not affect the batch size.

Example

from keras.models import Sequential
from keras.layers import Conv2D, Flatten

model = Sequential()
model.add(Conv2D(64, (3, 3),
                 padding='same',
                 data_format='channels_first',
                 input_shape=(3, 32, 32)))
# now: model.output_shape == (None, 64, 32, 32)

model.add(Flatten())
# now: model.output_shape == (None, 65536)
CLASS

alias of Flatten

class conx.layers.GRUCellLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GRUCellLayer

Cell class for the GRU layer.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
    • Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • implementation: Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
  • reset_after: GRU convention (whether to apply reset gate after or before matrix multiplication). False = “before” (default), True = “after” (CuDNN compatible).
CLASS

alias of GRUCell

class conx.layers.GRULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GRULayer

Gated Recurrent Unit - Cho et al. 2014.

There are two variants. The default one is based on 1406.1078v3 and
has reset gate applied to hidden state before matrix multiplication. The
other one is based on original 1406.1078v1 and has the order reversed.
The second variant is compatible with CuDNNGRU (GPU-only) and allows
inference on CPU. Thus it has separate biases for kernel and
recurrent_kernel. Use reset_after=True and
recurrent_activation='sigmoid'.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
    • Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • implementation: Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
  • reset_after: GRU convention (whether to apply reset gate after or before matrix multiplication). False = “before” (default), True = “after” (CuDNN compatible).
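
A minimal sketch using the aliased Keras GRU class (sequence length and sizes are illustrative assumptions):

from keras.models import Sequential
from keras.layers import GRU

# sequences of 10 timesteps with 8 features -> one 32-dim summary vector
model = Sequential()
model.add(GRU(32, input_shape=(10, 8)))
# now model.output_shape == (None, 32)

# to stack GRUs, return the full sequence from the lower layer
model = Sequential()
model.add(GRU(32, return_sequences=True, input_shape=(10, 8)))
model.add(GRU(16))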

CLASS

alias of GRU

class conx.layers.GaussianDropoutLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GaussianDropoutLayer

Apply multiplicative 1-centered Gaussian noise.

As it is a regularization layer, it is only active at training time.

Arguments

  • rate: float, drop probability (as with Dropout). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)).

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.

CLASS

alias of GaussianDropout

class conx.layers.GaussianNoiseLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GaussianNoiseLayer

Apply additive zero-centered Gaussian noise.

This is useful to mitigate overfitting
(you could see it as a form of random data augmentation).
Gaussian noise (GN) is a natural choice as a corruption process
for real-valued inputs.

As it is a regularization layer, it is only active at training time.

Arguments

  • stddev: float, standard deviation of the noise distribution.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as input.
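
A minimal sketch using the aliased Keras GaussianNoise class (the stddev value and layer sizes are illustrative assumptions):

from keras.models import Sequential
from keras.layers import Dense, GaussianNoise

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(GaussianNoise(0.1))   # zero-mean noise, stddev 0.1, training only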

CLASS

alias of GaussianNoise

class conx.layers.GlobalAveragePooling1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAveragePooling1DLayer

Global average pooling operation for temporal data.

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

2D tensor with shape:
(batch_size, features)
CLASS

alias of GlobalAveragePooling1D

class conx.layers.GlobalAveragePooling2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAveragePooling2DLayer

Global average pooling operation for spatial data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

2D tensor with shape:
(batch_size, channels)
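
A minimal sketch with the aliased Keras GlobalAveragePooling2D class (the preceding convolution is an illustrative assumption):

from keras.models import Sequential
from keras.layers import Conv2D, GlobalAveragePooling2D

model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=(32, 32, 3)))  # -> (None, 30, 30, 64)
model.add(GlobalAveragePooling2D())                     # -> (None, 64)
# one average per feature map, often used in place of Flatten + Dense
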
CLASS

alias of GlobalAveragePooling2D

class conx.layers.GlobalAveragePooling3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAveragePooling3DLayer

Global Average pooling operation for 3D data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalAveragePooling3D

class conx.layers.GlobalAvgPool1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAvgPool1DLayer

Global average pooling operation for temporal data.

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

2D tensor with shape:
(batch_size, features)
CLASS

alias of GlobalAveragePooling1D

class conx.layers.GlobalAvgPool2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAvgPool2DLayer

Global average pooling operation for spatial data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalAveragePooling2D

class conx.layers.GlobalAvgPool3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalAvgPool3DLayer

Global Average pooling operation for 3D data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalAveragePooling3D

class conx.layers.GlobalMaxPool1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPool1DLayer

Global max pooling operation for temporal data.

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

2D tensor with shape:
(batch_size, features)
CLASS

alias of GlobalMaxPooling1D

class conx.layers.GlobalMaxPool2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPool2DLayer

Global max pooling operation for spatial data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalMaxPooling2D

class conx.layers.GlobalMaxPool3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPool3DLayer

Global Max pooling operation for 3D data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalMaxPooling3D

class conx.layers.GlobalMaxPooling1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPooling1DLayer

Global max pooling operation for temporal data.

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

2D tensor with shape:
(batch_size, features)
CLASS

alias of GlobalMaxPooling1D

class conx.layers.GlobalMaxPooling2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPooling2DLayer

Global max pooling operation for spatial data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalMaxPooling2D

class conx.layers.GlobalMaxPooling3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

GlobalMaxPooling3DLayer

Global Max pooling operation for 3D data.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

2D tensor with shape:
(batch_size, channels)
CLASS

alias of GlobalMaxPooling3D

class conx.layers.HighwayLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

HighwayLayer

Densely connected highway network.

Highway layers are a natural extension of LSTMs to feedforward networks.

Arguments

  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don’t pass a weights argument.
  • activation: name of activation function to use (see activations), or alternatively, elementwise Theano function. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • weights: list of Numpy arrays to set as initial weights. The list should have 2 elements, of shape (input_dim, output_dim) and (output_dim,) for weights and biases respectively.
  • W_regularizer: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).
  • input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.

Input shape

2D tensor with shape: (nb_samples, input_dim).

Output shape

2D tensor with shape: (nb_samples, input_dim).

CLASS

alias of Highway

class conx.layers.ImageLayer(name, dimensions, depth, **params)[source]

Bases: conx.layers.Layer

A class for images. WIP.

make_image(vector, colormap=None, config={})[source]

Given an activation name (or function) and an output vector, make and return an image widget. Colormap is ignored.

conx.layers.InputLayer

alias of Layer

class conx.layers.InputLayerLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

InputLayerLayer

Layer to be used as an entry point into a graph.

It can either wrap an existing tensor (pass an input_tensor argument)
or create a placeholder tensor (pass the arguments input_shape
or batch_input_shape as well as dtype).

Arguments

  • input_shape: Shape tuple, not including the batch axis.
  • batch_size: Optional input batch size (integer or None).
  • batch_input_shape: Shape tuple, including the batch axis.
  • dtype: Datatype of the input.
  • input_tensor: Optional tensor to use as layer input instead of creating a placeholder.
  • sparse: Boolean, whether the placeholder created is meant to be sparse.
  • name: Name of the layer (string).
CLASS

alias of InputLayer

class conx.layers.LSTMCellLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LSTMCellLayer

Cell class for the LSTM layer.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
    • Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • implementation: Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
CLASS

alias of LSTMCell

class conx.layers.LSTMLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LSTMLayer

Long Short-Term Memory layer - Hochreiter 1997.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step (see activations).
    • Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • implementation: Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
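
A minimal sketch with the aliased Keras LSTM class (sequence length, feature count, and the output head are illustrative assumptions):

from keras.models import Sequential
from keras.layers import LSTM, Dense

# (batch, 10 timesteps, 8 features) -> (batch, 32)
model = Sequential()
model.add(LSTM(32, input_shape=(10, 8)))
model.add(Dense(1, activation='sigmoid'))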

CLASS

alias of LSTM

class conx.layers.LambdaLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LambdaLayer

Wraps arbitrary expression as a Layer object.

Examples

from keras import backend as K
from keras.layers import Lambda

# add an x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))

# add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part

def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2  # only valid for 2D tensors
    shape[-1] *= 2
    return tuple(shape)

model.add(Lambda(antirectifier,
                 output_shape=antirectifier_output_shape))

Arguments

  • function: The function to be evaluated. Takes input tensor as first argument.
  • output_shape: Expected output shape from function. Only relevant when using Theano. Can be a tuple or function. If a tuple, it only specifies the first dimension onward; sample dimension is assumed either the same as the input: output_shape = (input_shape[0], ) + output_shape or, the input is None and the sample dimension is also None: output_shape = (None, ) + output_shape If a function, it specifies the entire shape as a function of the input shape: output_shape = f(input_shape)
  • arguments: optional dictionary of keyword arguments to be passed to the function.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Specified by output_shape argument
(or auto-inferred when using TensorFlow).
CLASS

alias of Lambda

class conx.layers.Layer(name: str, shape, **params)[source]

Bases: conx.layers._BaseLayer

The default layer type. Will create either an InputLayer, or DenseLayer, depending on its context after Network.connect.

Parameters:name – The name of the layer. Must be unique in this network. Should not contain special HTML characters.

Examples

>>> layer = Layer("input", 10)
>>> layer.name
'input'
>>> from conx import Network
>>> net = Network("XOR2")
>>> net.add(Layer("input", 2))
'input'
>>> net.add(Layer("hidden", 5))
'hidden'
>>> net.add(Layer("output", 2))
'output'
>>> net.connect()
>>> net["input"].kind()
'input'
>>> net["output"].kind()
'output'

Note

See also: Network, Network.add, and Network.connect for more information. See https://keras.io/ for more information on Keras layers.

CLASS

alias of Dense

make_keras_function()[source]

For all Keras-based functions. Returns the Keras class.

make_keras_function_text()[source]

For all Keras-based functions. Returns the Keras class.

print_summary(fp=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]

Print a summary of the dense/input layer.

class conx.layers.LayerLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LayerLayer

Abstract base layer class.

Properties

  • name: String, must be unique within a model.
  • input_spec: List of InputSpec class instances; each entry describes one required input:
    • ndim
    • dtype
    A layer with n input tensors must have an input_spec of length n.
  • trainable: Boolean, whether the layer weights will be updated during training.
  • uses_learning_phase: Whether any operation of the layer uses K.in_training_phase() or K.in_test_phase().
  • input_shape: Shape tuple. Provided for convenience, but note that there may be cases in which this attribute is ill-defined (e.g. a shared layer with multiple input shapes), in which case requesting input_shape will raise an Exception. Prefer using layer.get_input_shape_for(input_shape), or layer.get_input_shape_at(node_index).
  • output_shape: Shape tuple. See above.
  • inbound_nodes: List of nodes.
  • outbound_nodes: List of nodes.
  • input, output: Input/output tensor(s). Note that if the layer is used more than once (shared layer), this is ill-defined and will raise an exception. In such cases, use layer.get_input_at(node_index).
  • input_mask, output_mask: Same as above, for masks.
  • trainable_weights: List of variables.
  • non_trainable_weights: List of variables.
  • weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order).

Methods

call(x, mask=None): Where the layer’s logic lives.
__call__(x, mask=None): Wrapper around the layer logic (call).
If x is a Keras tensor:
  - Connect current layer with last layer from tensor:
    self._add_inbound_node(last_layer)
  - Add layer to tensor history
If layer is not built:
  - Build from x._keras_shape
get_weights()
set_weights(weights)
get_config()
count_params()
compute_output_shape(input_shape)
compute_mask(x, mask)
get_input_at(node_index)
get_output_at(node_index)
get_input_shape_at(node_index)
get_output_shape_at(node_index)
get_input_mask_at(node_index)
get_output_mask_at(node_index)

Class Methods

from_config(config)

Internal methods:

build(input_shape)
_add_inbound_node(layer, index=0)
assert_input_compatibility()
CLASS

alias of Layer

class conx.layers.LeakyReLULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LeakyReLULayer

Leaky version of a Rectified Linear Unit.

It allows a small gradient when the unit is not active:
f(x) = alpha * x for x < 0,
f(x) = x for x >= 0.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha: float >= 0. Negative slope coefficient.

CLASS

alias of LeakyReLU

class conx.layers.LocallyConnected1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LocallyConnected1DLayer

Locally-connected layer for 1D inputs.

The LocallyConnected1D layer works similarly to
the Conv1D layer, except that weights are unshared,
that is, a different set of filters is applied at each different patch
of the input.

Example

from keras.models import Sequential
from keras.layers import LocallyConnected1D

# apply an unshared-weights 1D convolution of length 3 to a sequence with
# 10 timesteps and 32 features, with 64 output filters
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new conv1d on top
model.add(LocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: Currently only supports "valid" (case-insensitive). "same" may be supported in the future.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

3D tensor with shape: (batch_size, steps, input_dim)

Output shape

3D tensor with shape: (batch_size, new_steps, filters)
steps value might have changed due to padding or strides.
CLASS

alias of LocallyConnected1D

class conx.layers.LocallyConnected2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

LocallyConnected2DLayer

Locally-connected layer for 2D inputs.

The LocallyConnected2D layer works similarly
to the Conv2D layer, except that weights are unshared,
that is, a different set of filters is applied at each
different patch of the input.

Examples

from keras.models import Sequential
from keras.layers import LocallyConnected2D

# apply a 3x3 unshared-weights convolution with 64 output filters on a
# 32x32 image with `data_format="channels_last"`:
model = Sequential()
model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3)))
# now model.output_shape == (None, 30, 30, 64)
# notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64 parameters

# add a 3x3 unshared-weights convolution on top, with 32 output filters:
model.add(LocallyConnected2D(32, (3, 3)))
# now model.output_shape == (None, 28, 28, 32)

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions.
  • padding: Currently only supports "valid" (case-insensitive). "same" will be supported in the future.
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(samples, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(samples, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.
CLASS

alias of LocallyConnected2D

class conx.layers.MaskingLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaskingLayer

Masks a sequence by using a mask value to skip timesteps.

For each timestep in the input tensor (dimension #1 in the tensor),
if all values in the input tensor at that timestep
are equal to mask_value, then the timestep will be masked (skipped)
in all downstream layers (as long as they support masking).
If any downstream layer does not support masking yet receives such
an input mask, an exception will be raised.

Example

Consider a Numpy data array x of shape (samples, timesteps, features),
to be fed to an LSTM layer.
You want to mask timesteps #3 and #5 because you lack data for
these timesteps. You can:
  • set x[:, 3, :] = 0. and x[:, 5, :] = 0.
  • insert a Masking layer with mask_value=0. before the LSTM layer:
from keras.models import Sequential
from keras.layers import Masking, LSTM

model = Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(32))
CLASS

alias of Masking

class conx.layers.MaxPool1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPool1DLayer

Max pooling operation for temporal data.

Arguments

  • pool_size: Integer, size of the max pooling windows.
  • strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

3D tensor with shape: (batch_size, downsampled_steps, features).

CLASS

alias of MaxPooling1D

class conx.layers.MaxPool2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPool2DLayer

Max pooling operation for spatial data.

Arguments

  • pool_size: integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions.
  • strides: Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, pooled_rows, pooled_cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, pooled_rows, pooled_cols)
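
A minimal sketch (the input size is an illustrative assumption):

from keras.models import Sequential
from keras.layers import MaxPooling2D

model = Sequential()
model.add(MaxPooling2D(pool_size=(2, 2), input_shape=(32, 32, 3)))
# now model.output_shape == (None, 16, 16, 3)
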
CLASS

alias of MaxPooling2D

class conx.layers.MaxPool3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPool3DLayer

Max pooling operation for 3D data (spatial or spatio-temporal).

Arguments

  • pool_size: tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension.
  • strides: tuple of 3 integers, or None. Strides values.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)
CLASS

alias of MaxPooling3D

class conx.layers.MaxPooling1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPooling1DLayer

Max pooling operation for temporal data.

Arguments

  • pool_size: Integer, size of the max pooling windows.
  • strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).

Input shape

3D tensor with shape: (batch_size, steps, features).

Output shape

3D tensor with shape: (batch_size, downsampled_steps, features).

CLASS

alias of MaxPooling1D

class conx.layers.MaxPooling2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPooling2DLayer

Max pooling operation for spatial data.

Arguments

  • pool_size: integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions.
  • strides: Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, rows, cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, rows, cols)

Output shape

  • If data_format='channels_last': 4D tensor with shape: (batch_size, pooled_rows, pooled_cols, channels)
  • If data_format='channels_first': 4D tensor with shape: (batch_size, channels, pooled_rows, pooled_cols)
CLASS

alias of MaxPooling2D

class conx.layers.MaxPooling3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxPooling3DLayer

Max pooling operation for 3D data (spatial or spatio-temporal).

Arguments

  • pool_size: tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension.
  • strides: tuple of 3 integers, or None. Strides values.
  • padding: One of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)

Output shape

  • If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels)
  • If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3)
CLASS

alias of MaxPooling3D

class conx.layers.MaximumLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for finding the maximum values of layers.

CLASS

alias of Maximum

make_keras_function()[source]

class conx.layers.MaxoutDenseLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MaxoutDenseLayer

A dense maxout layer.

A MaxoutDense layer takes the element-wise maximum of
nb_feature Dense(input_dim, output_dim) linear layers.
This allows the layer to learn a convex,
piecewise linear activation function over the inputs.
Note that this is a linear layer;
if you wish to apply an activation function
(you shouldn’t need to; maxout layers are universal function approximators),
an Activation layer must be added after it.

Arguments

  • output_dim: int > 0.
  • nb_feature: number of Dense layers to use internally.
  • init: name of initialization function for the weights of the layer (see initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don’t pass a weights argument.
  • weights: list of Numpy arrays to set as initial weights. The list should have 2 elements, of shape (input_dim, output_dim) and (output_dim,) for weights and biases respectively.
  • W_regularizer: instance of WeightRegularizer (eg. L1 or L2 regularization), applied to the main weights matrix.
  • b_regularizer: instance of WeightRegularizer, applied to the bias.
  • activity_regularizer: instance of ActivityRegularizer, applied to the network output.
  • W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
  • b_constraint: instance of the constraints module, applied to the bias.
  • bias: whether to include a bias (i.e. make the layer affine rather than linear).
  • input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.

Input shape

2D tensor with shape: (nb_samples, input_dim).

Output shape

2D tensor with shape: (nb_samples, output_dim).

CLASS

alias of MaxoutDense

class conx.layers.MergeLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MergeLayer

A Merge layer can be used to merge a list of tensors
into a single tensor, following some merge mode.

Example

model1 = Sequential()
model1.add(Dense(32, input_dim=32))

model2 = Sequential()
model2.add(Dense(32, input_dim=32))

merged_model = Sequential()
merged_model.add(Merge([model1, model2], mode='concat', concat_axis=1))

Arguments

  • layers: Can be a list of Keras tensors or a list of layer instances. Must be more than one layer/tensor.
  • mode: String or lambda/function. If string, must be one of: ‘sum’, ‘mul’, ‘concat’, ‘ave’, ‘cos’, ‘dot’, ‘max’. If lambda/function, it should take as input a list of tensors and return a single tensor.
  • concat_axis: Integer, axis to use in mode concat.
  • dot_axes: Integer or tuple of integers, axes to use in mode dot or cos.
  • output_shape: Either a shape tuple (tuple of integers), or a lambda/function to compute output_shape (only if merge mode is a lambda/function). If the argument is a tuple, it should be expected output shape, not including the batch size (same convention as the input_shape argument in layers). If the argument is callable, it should take as input a list of shape tuples (1:1 mapping to input tensors) and return a single shape tuple, including the batch size (same convention as the compute_output_shape method of layers).
  • node_indices: Optional list of integers containing the output node index for each input layer (in case some input layers have multiple output nodes). Will default to an array of 0s if not provided.
  • tensor_indices: Optional list of indices of output tensors to consider for merging (in case some input layer node returns multiple tensors).
  • output_mask: Mask or lambda/function to compute the output mask (only if merge mode is a lambda/function). In the latter case, it should take as input a list of masks and return a single mask.
CLASS

alias of Merge

class conx.layers.MinimumLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

MinimumLayer

Layer that computes the minimum (element-wise) of a list of inputs.

It takes as input a list of tensors,
all of the same shape, and returns
a single tensor (also of the same shape).
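
A minimal sketch using the aliased Keras Minimum layer in the functional API (the input shapes are illustrative assumptions):

from keras.layers import Input, Minimum
from keras.models import Model

a = Input(shape=(4,))
b = Input(shape=(4,))
out = Minimum()([a, b])   # element-wise minimum, same shape as each input
model = Model([a, b], out)
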
CLASS

alias of Minimum

conx.layers.MultiplicationLayer

alias of MultiplyLayer

class conx.layers.MultiplyLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for multiplying the output vectors of layers together.

CLASS

alias of Multiply

make_keras_function()[source]

class conx.layers.PReLULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

PReLULayer

Parametric Rectified Linear Unit.

It follows:
f(x) = alpha * x for x < 0,
f(x) = x for x >= 0,
where alpha is a learned array with the same shape as x.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • alpha_initializer: initializer function for the weights.
  • alpha_regularizer: regularizer for the weights.
  • alpha_constraint: constraint for the weights.
  • shared_axes: the axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter only has one set of parameters, set shared_axes=[1, 2].
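
A minimal sketch of shared_axes with the aliased Keras PReLU class (the preceding convolution and sizes are illustrative assumptions):

from keras.models import Sequential
from keras.layers import Conv2D, PReLU

model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(32, 32, 3)))
model.add(PReLU(shared_axes=[1, 2]))   # one learned alpha per filter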

CLASS

alias of PReLU

class conx.layers.PermuteLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

PermuteLayer

Permutes the dimensions of the input according to a given pattern.

Useful for e.g. connecting RNNs and convnets together.

Example

model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension

Arguments

  • dims: Tuple of integers. Permutation pattern, does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimension of the input.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same as the input shape, but with the dimensions re-ordered according
to the specified pattern.
CLASS

alias of Permute

class conx.layers.RNNLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

RNNLayer

Base class for recurrent layers.

Arguments

  • cell: A RNN cell instance. A RNN cell is a class that has:
    • a call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants, see section “Note on passing external constants” below.
    • a state_size attribute. This can be a single integer (single state) in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be a list/tuple of integers (one size per state). In this case, the first entry (state_size[0]) should be the same as the size of the cell output. It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
  • input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.
  • input_length: Length of input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g. via the input_shape argument)

Input shape

3D tensor with shape (batch_size, timesteps, input_dim).

Output shape

  • if return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each with shape (batch_size, units).
  • if return_sequences: 3D tensor with shape (batch_size, timesteps, units).
  • else, 2D tensor with shape (batch_size, units).

Masking

This layer supports masking for input data with a variable number
of timesteps. To introduce masks to your data,
use an Embedding layer with the mask_zero parameter
set to True.

Note on using statefulness in RNNs

You can set RNN layers to be ‘stateful’, which means that the states
computed for the samples in one batch will be reused as initial states
for the samples in the next batch. This assumes a one-to-one mapping
between samples in different successive batches.
To enable statefulness:

  • specify stateful=True in the layer constructor.
  • specify a fixed batch size for your model: for a sequential model, pass batch_input_shape=(...) to the first layer in your model; for a functional model with 1 or more Input layers, pass batch_shape=(...) to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g. (32, 10, 100).
  • specify shuffle=False when calling fit().
To reset the states of your model, call .reset_states() on either
a specific layer, or on your entire model.
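
Example (a minimal Keras sketch of the recipe above; the layer choice and sizes are illustrative):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# fixed batch size of 32; 10 timesteps of 100 features each
model.add(LSTM(20, stateful=True, batch_input_shape=(32, 10, 100)))
model.add(Dense(1))
model.compile(loss="mse", optimizer="sgd")
# model.fit(x, y, batch_size=32, shuffle=False)  # x, y supplied by the user
# model.reset_states()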

Note on specifying the initial state of RNNs

You can specify the initial state of RNN layers symbolically by
calling them with the keyword argument initial_state. The value of
initial_state should be a tensor or list of tensors representing
the initial state of the RNN layer.
You can specify the initial state of RNN layers numerically by
calling reset_states with the keyword argument states. The value of
states should be a numpy array or list of numpy arrays representing
the initial state of the RNN layer.
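
Example (a minimal sketch of the symbolic form, using the Keras functional API; sizes are illustrative):

import keras
from keras.layers import LSTM, Input

inputs = Input((10, 8))
# an LSTM carries two states (h and c), each of size `units`
state_h = Input((32,))
state_c = Input((32,))
outputs = LSTM(32)(inputs, initial_state=[state_h, state_c])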

Note on passing external constants to RNNs

You can pass “external” constants to the cell using the constants
keyword argument of RNN.__call__ (as well as RNN.call) method. This
requires that the cell.call method accepts the same keyword argument
constants. Such constants can be used to condition the cell
transformation on additional static inputs (not changing over time),
a.k.a. an attention mechanism.

Examples

# First, let's define a RNN Cell, as a layer subclass.

import keras
from keras import backend as K

class MinimalRNNCell(keras.layers.Layer):

    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform',
                                      name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform',
            name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states):
        prev_output = states[0]
        h = K.dot(inputs, self.kernel)
        output = h + K.dot(prev_output, self.recurrent_kernel)
        return output, [output]

# Let's use this cell in a RNN layer:

cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = keras.layers.RNN(cell)
y = layer(x)

# Here's how to use the cell to build a stacked RNN:

cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = keras.layers.RNN(cells)
y = layer(x)
CLASS

alias of RNN

class conx.layers.RecurrentLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

RecurrentLayer

Abstract base class for recurrent layers.

Do not use in a model – it’s not a valid layer!
Use its children classes LSTM, GRU and SimpleRNN instead.
All recurrent layers (LSTM, GRU, SimpleRNN) also
follow the specifications of this class and accept
the keyword arguments listed below.

Example

# as the first layer in a Sequential model
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))
# now model.output_shape == (None, 32)
# note: `None` is the batch dimension.

# for subsequent layers, no need to specify the input size;
# note: a recurrent layer that feeds another recurrent layer
# must use return_sequences=True
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(10, 64)))
model.add(LSTM(16))

# to stack recurrent layers, you must use return_sequences=True
# on any recurrent layer that feeds into another recurrent layer.
# note that you only need to specify the input size on the first layer.
model = Sequential()
model.add(LSTM(64, input_dim=64, input_length=10, return_sequences=True))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(10))

Arguments

  • weights: list of Numpy arrays to set as initial weights. The list should have 3 elements, of shapes: [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)].
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
  • implementation: one of {0, 1, or 2}. If set to 0, the RNN will use an implementation that uses fewer, larger matrix products, thus running faster on CPU but consuming more memory. If set to 1, the RNN will use more matrix products, but smaller ones, thus running slower (may actually be faster on GPU) while consuming less memory. If set to 2 (LSTM/GRU only), the RNN will combine the input gate, the forget gate and the output gate into a single matrix, enabling more time-efficient parallelization on the GPU.
    • Note: RNN dropout must be shared for all gates, resulting in a slightly reduced regularization.
  • input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.
  • input_length: Length of input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g. via the input_shape argument)

Input shapes

3D tensor with shape (batch_size, timesteps, input_dim),
(Optional) 2D tensors with shape (batch_size, output_dim).

Output shape

  • if return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each with shape (batch_size, units).
  • if return_sequences: 3D tensor with shape (batch_size, timesteps, units).
  • else, 2D tensor with shape (batch_size, units).

Masking

This layer supports masking for input data with a variable number
of timesteps. To introduce masks to your data,
use an Embedding layer with the mask_zero parameter
set to True.

Note on using statefulness in RNNs

You can set RNN layers to be ‘stateful’, which means that the states
computed for the samples in one batch will be reused as initial states
for the samples in the next batch. This assumes a one-to-one mapping
between samples in different successive batches.
To enable statefulness:

  • specify stateful=True in the layer constructor.
  • specify a fixed batch size for your model: for a sequential model, pass batch_input_shape=(...) to the first layer in your model; for a functional model with 1 or more Input layers, pass batch_shape=(...) to all the first layers in your model. This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g. (32, 10, 100).
  • specify shuffle=False when calling fit().
To reset the states of your model, call .reset_states() on either
a specific layer, or on your entire model.

Note on specifying the initial state of RNNs

You can specify the initial state of RNN layers symbolically by
calling them with the keyword argument initial_state. The value of
initial_state should be a tensor or list of tensors representing
the initial state of the RNN layer.
You can specify the initial state of RNN layers numerically by
calling reset_states with the keyword argument states. The value of
states should be a numpy array or list of numpy arrays representing
the initial state of the RNN layer.
CLASS

alias of Recurrent

class conx.layers.RepeatVectorLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

RepeatVectorLayer

Repeats the input n times.

Example

from keras.models import Sequential
from keras.layers import Dense, RepeatVector

model = Sequential()
model.add(Dense(32, input_dim=32))
# now: model.output_shape == (None, 32)
# note: `None` is the batch dimension

model.add(RepeatVector(3))
# now: model.output_shape == (None, 3, 32)

Arguments

  • n: integer, repetition factor.

Input shape

2D tensor of shape (num_samples, features).

Output shape

3D tensor of shape (num_samples, n, features).

CLASS

alias of RepeatVector

class conx.layers.ReshapeLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ReshapeLayer

Reshapes an output to a certain shape.

Arguments

  • target_shape: target shape. Tuple of integers. Does not include the batch axis.

Input shape

Arbitrary, although all dimensions in the input shape must be fixed.
Use the keyword argument input_shape
(tuple of integers, does not include the batch axis)
when using this layer as the first layer in a model.

Output shape

(batch_size,) + target_shape

Example

# as first layer in a Sequential model
from keras.models import Sequential
from keras.layers import Reshape

model = Sequential()
model.add(Reshape((3, 4), input_shape=(12,)))
# now: model.output_shape == (None, 3, 4)
# note: `None` is the batch dimension

# as intermediate layer in a Sequential model
model.add(Reshape((6, 2)))
# now: model.output_shape == (None, 6, 2)

# also supports shape inference using `-1` as dimension
model.add(Reshape((-1, 2, 2)))
# now: model.output_shape == (None, 3, 2, 2)
CLASS

alias of Reshape

class conx.layers.SeparableConv1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SeparableConv1DLayer

Depthwise separable 1D convolution.

Separable convolutions consist in first performing
a depthwise spatial convolution
(which acts on each input channel separately)
followed by a pointwise convolution which mixes together the resulting
output channels. The depth_multiplier argument controls how many
output channels are generated per input channel in the depthwise step.
Intuitively, separable convolutions can be understood as
a way to factorize a convolution kernel into two smaller kernels,
or as an extreme version of an Inception block.

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • depthwise_initializer: Initializer for the depthwise kernel matrix (see initializers).
  • pointwise_initializer: Initializer for the pointwise kernel matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see regularizer).
  • pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • depthwise_constraint: Constraint function applied to the depthwise kernel matrix (see constraints).
  • pointwise_constraint: Constraint function applied to the pointwise kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

3D tensor with shape:
(batch, channels, steps) if data_format=’channels_first’
or 3D tensor with shape:
(batch, steps, channels) if data_format=’channels_last’.

Output shape

3D tensor with shape:
(batch, filters, new_steps) if data_format=’channels_first’
or 3D tensor with shape:
(batch, new_steps, filters) if data_format=’channels_last’.
new_steps values might have changed due to padding or strides.
CLASS

alias of SeparableConv1D

class conx.layers.SeparableConv2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SeparableConv2DLayer

Depthwise separable 2D convolution.

Separable convolutions consist in first performing
a depthwise spatial convolution
(which acts on each input channel separately)
followed by a pointwise convolution which mixes together the resulting
output channels. The depth_multiplier argument controls how many
output channels are generated per input channel in the depthwise step.
Intuitively, separable convolutions can be understood as
a way to factorize a convolution kernel into two smaller kernels,
or as an extreme version of an Inception block.
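
Example (a minimal sketch of the channel arithmetic; the numbers are illustrative):

from keras.models import Sequential
from keras.layers import SeparableConv2D

model = Sequential()
model.add(SeparableConv2D(filters=32, kernel_size=(3, 3),
                          depth_multiplier=2,
                          input_shape=(28, 28, 3)))
# depthwise step: 3 input channels * depth_multiplier 2 = 6 intermediate channels
# pointwise step: a 1x1 convolution mixes the 6 channels into 32 filters
# model.output_shape == (None, 26, 26, 32)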

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • depthwise_initializer: Initializer for the depthwise kernel matrix (see initializers).
  • pointwise_initializer: Initializer for the pointwise kernel matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see regularizer).
  • pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • depthwise_constraint: Constraint function applied to the depthwise kernel matrix (see constraints).
  • pointwise_constraint: Constraint function applied to the pointwise kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.
CLASS

alias of SeparableConv2D

class conx.layers.SeparableConvolution1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SeparableConvolution1DLayer

Depthwise separable 1D convolution.

Separable convolutions consist in first performing
a depthwise spatial convolution
(which acts on each input channel separately)
followed by a pointwise convolution which mixes together the resulting
output channels. The depth_multiplier argument controls how many
output channels are generated per input channel in the depthwise step.
Intuitively, separable convolutions can be understood as
a way to factorize a convolution kernel into two smaller kernels,
or as an extreme version of an Inception block.

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of single integer, specifying the length of the 1D convolution window.
  • strides: An integer or tuple/list of single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • depthwise_initializer: Initializer for the depthwise kernel matrix (see initializers).
  • pointwise_initializer: Initializer for the pointwise kernel matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see regularizer).
  • pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • depthwise_constraint: Constraint function applied to the depthwise kernel matrix (see constraints).
  • pointwise_constraint: Constraint function applied to the pointwise kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

3D tensor with shape:
(batch, channels, steps) if data_format=’channels_first’
or 3D tensor with shape:
(batch, steps, channels) if data_format=’channels_last’.

Output shape

3D tensor with shape:
(batch, filters, new_steps) if data_format=’channels_first’
or 3D tensor with shape:
(batch, new_steps, filters) if data_format=’channels_last’.
new_steps values might have changed due to padding or strides.
CLASS

alias of SeparableConv1D

class conx.layers.SeparableConvolution2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SeparableConvolution2DLayer

Depthwise separable 2D convolution.

Separable convolutions consist in first performing
a depthwise spatial convolution
(which acts on each input channel separately)
followed by a pointwise convolution which mixes together the resulting
output channels. The depth_multiplier argument controls how many
output channels are generated per input channel in the depthwise step.
Intuitively, separable convolutions can be understood as
a way to factorize a convolution kernel into two smaller kernels,
or as an extreme version of an Inception block.

Arguments

  • filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
  • kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
  • strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
  • padding: one of "valid" or "same" (case-insensitive).
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.
  • depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier.
  • activation: Activation function to use (see activations). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • depthwise_initializer: Initializer for the depthwise kernel matrix (see initializers).
  • pointwise_initializer: Initializer for the pointwise kernel matrix (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see regularizer).
  • pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • depthwise_constraint: Constraint function applied to the depthwise kernel matrix (see constraints).
  • pointwise_constraint: Constraint function applied to the pointwise kernel matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).

Input shape

4D tensor with shape:
(batch, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, rows, cols, channels) if data_format=’channels_last’.

Output shape

4D tensor with shape:
(batch, filters, new_rows, new_cols) if data_format=’channels_first’
or 4D tensor with shape:
(batch, new_rows, new_cols, filters) if data_format=’channels_last’.
rows and cols values might have changed due to padding.
CLASS

alias of SeparableConv2D

class conx.layers.SimpleRNNCellLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SimpleRNNCellLayer

Cell class for SimpleRNN.

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
CLASS

alias of SimpleRNNCell

class conx.layers.SimpleRNNLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SimpleRNNLayer

Fully-connected RNN where the output is to be fed back to input.
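
Example (a minimal usage sketch; sizes are illustrative):

from keras.models import Sequential
from keras.layers import SimpleRNN

model = Sequential()
# 10 timesteps of 8 features each -> a single 32-unit output vector
model.add(SimpleRNN(32, input_shape=(10, 8)))
# model.output_shape == (None, 32)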

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use (see activations).
    • Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs (see initializers).
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state (see initializers).
  • bias_initializer: Initializer for the bias vector (see initializers).
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix (see regularizer).
  • bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
  • activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). (see regularizer).
  • kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix (see constraints).
  • bias_constraint: Constraint function applied to the bias vector (see constraints).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
CLASS

alias of SimpleRNN

class conx.layers.SoftmaxLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SoftmaxLayer

Softmax activation function.

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • axis: Integer, axis along which the softmax normalization is applied.
CLASS

alias of Softmax

class conx.layers.SpatialDropout1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SpatialDropout1DLayer

Spatial 1D version of Dropout.

This version performs the same function as Dropout, however it drops
entire 1D feature maps instead of individual elements. If adjacent frames
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, SpatialDropout1D will help promote independence
between feature maps and should be used instead.
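
Example (a minimal sketch of where the layer typically sits, after an early 1D convolution; sizes are illustrative):

from keras.models import Sequential
from keras.layers import Conv1D, SpatialDropout1D

model = Sequential()
model.add(Conv1D(64, 3, input_shape=(100, 8)))
# drops whole 64-element feature maps, not individual activations
model.add(SpatialDropout1D(0.2))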

Arguments

  • rate: float between 0 and 1. Fraction of the input units to drop.

Input shape

3D tensor with shape:
(samples, timesteps, channels)

Output shape

Same as input

CLASS

alias of SpatialDropout1D

class conx.layers.SpatialDropout2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SpatialDropout2DLayer

Spatial 2D version of Dropout.

This version performs the same function as Dropout, however it drops
entire 2D feature maps instead of individual elements. If adjacent pixels
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, SpatialDropout2D will help promote independence
between feature maps and should be used instead.

Arguments

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • data_format: ‘channels_first’ or ‘channels_last’. In ‘channels_first’ mode, the channels dimension (the depth) is at index 1; in ‘channels_last’ mode it is at index 3. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

4D tensor with shape:
(samples, channels, rows, cols) if data_format=’channels_first’
or 4D tensor with shape:
(samples, rows, cols, channels) if data_format=’channels_last’.

Output shape

Same as input

CLASS

alias of SpatialDropout2D

class conx.layers.SpatialDropout3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

SpatialDropout3DLayer

Spatial 3D version of Dropout.

This version performs the same function as Dropout, however it drops
entire 3D feature maps instead of individual elements. If adjacent voxels
within feature maps are strongly correlated (as is normally the case in
early convolution layers) then regular dropout will not regularize the
activations and will otherwise just result in an effective learning rate
decrease. In this case, SpatialDropout3D will help promote independence
between feature maps and should be used instead.

Arguments

  • rate: float between 0 and 1. Fraction of the input units to drop.
  • data_format: ‘channels_first’ or ‘channels_last’. In ‘channels_first’ mode, the channels dimension (the depth) is at index 1; in ‘channels_last’ mode it is at index 4. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

5D tensor with shape:
(samples, channels, dim1, dim2, dim3) if data_format=’channels_first’
or 5D tensor with shape:
(samples, dim1, dim2, dim3, channels) if data_format=’channels_last’.

Output shape

Same as input

CLASS

alias of SpatialDropout3D

class conx.layers.StackedRNNCellsLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

StackedRNNCellsLayer

Wrapper allowing a stack of RNN cells to behave as a single cell.

Used to implement efficient stacked RNNs.

Arguments

  • cells: List of RNN cell instances.

Examples

import keras

# illustrative sizes (not from the original example)
output_dim = 32
timesteps = 10
input_dim = 8

cells = [
    keras.layers.LSTMCell(output_dim),
    keras.layers.LSTMCell(output_dim),
    keras.layers.LSTMCell(output_dim),
]

inputs = keras.Input((timesteps, input_dim))
x = keras.layers.RNN(cells)(inputs)
CLASS

alias of StackedRNNCells

class conx.layers.SubtractLayer(name, **params)[source]

Bases: conx.layers.AddLayer

A layer for subtracting the output vectors of layers.

CLASS

alias of Subtract

make_keras_function()[source]
conx.layers.SubtractionLayer

alias of SubtractLayer

class conx.layers.ThresholdedReLULayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ThresholdedReLULayer

Thresholded Rectified Linear Unit.

It follows:
f(x) = x for x > theta,
f(x) = 0 otherwise.
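
Example (element-wise reference semantics in plain Python; theta is fixed here for illustration):

def thresholded_relu(x, theta=1.0):
    # f(x) = x for x > theta; f(x) = 0 otherwise
    return x if x > theta else 0.0

# thresholded_relu(0.5) == 0.0; thresholded_relu(2.0) == 2.0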

Input shape

Arbitrary. Use the keyword argument input_shape
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.

Output shape

Same shape as the input.

Arguments

  • theta: float >= 0. Threshold location of activation.

CLASS

alias of ThresholdedReLU

class conx.layers.UpSampling1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

UpSampling1DLayer

Upsampling layer for 1D inputs.

Repeats each temporal step size times along the time axis.

Arguments

  • size: integer. Upsampling factor.

Input shape

3D tensor with shape: (batch, steps, features).

Output shape

3D tensor with shape: (batch, upsampled_steps, features).

CLASS

alias of UpSampling1D

class conx.layers.UpSampling2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

UpSampling2DLayer

Upsampling layer for 2D inputs.

Repeats the rows and columns of the data
by size[0] and size[1] respectively.
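
Example (a minimal shape sketch; sizes are illustrative):

from keras.models import Sequential
from keras.layers import UpSampling2D

model = Sequential()
model.add(UpSampling2D(size=(2, 2), input_shape=(8, 8, 3)))
# rows and cols are each repeated 2 times:
# model.output_shape == (None, 16, 16, 3)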

Arguments

  • size: int, or tuple of 2 integers. The upsampling factors for rows and columns.
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, rows, cols, channels)
  • If data_format is "channels_first": (batch, channels, rows, cols)

Output shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, upsampled_rows, upsampled_cols, channels)
  • If data_format is "channels_first": (batch, channels, upsampled_rows, upsampled_cols)
CLASS

alias of UpSampling2D

class conx.layers.UpSampling3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

UpSampling3DLayer

Upsampling layer for 3D inputs.

Repeats the 1st, 2nd and 3rd dimensions
of the data by size[0], size[1] and size[2] respectively.

Arguments

  • size: int, or tuple of 3 integers. The upsampling factors for dim1, dim2 and dim3.
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, dim1, dim2, dim3, channels)
  • If data_format is "channels_first": (batch, channels, dim1, dim2, dim3)

Output shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels)
  • If data_format is "channels_first": (batch, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)
CLASS

alias of UpSampling3D

class conx.layers.WrapperLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

WrapperLayer

Abstract wrapper base class.

Wrappers take another layer and augment it in various ways.
Do not use this class as a layer; it is only an abstract base class.
Two usable wrappers are the TimeDistributed and Bidirectional wrappers.

Arguments

  • layer: The layer to be wrapped.
CLASS

alias of Wrapper

class conx.layers.ZeroPadding1DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ZeroPadding1DLayer

Zero-padding layer for 1D input (e.g. temporal sequence).

Arguments

  • padding: int, or tuple of int (length 2), or dictionary.
    • If int: How many zeros to add at the beginning and end of the padding dimension (axis 1).
    • If tuple of int (length 2): How many zeros to add at the beginning and at the end of the padding dimension ((left_pad, right_pad)).

Input shape

3D tensor with shape (batch, axis_to_pad, features)

Output shape

3D tensor with shape (batch, padded_axis, features)
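
Example (a minimal shape sketch; sizes are illustrative):

from keras.models import Sequential
from keras.layers import ZeroPadding1D

model = Sequential()
model.add(ZeroPadding1D(padding=2, input_shape=(3, 5)))
# 2 zeros are added at each end of axis 1: 3 + 2 + 2 = 7
# model.output_shape == (None, 7, 5)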

CLASS

alias of ZeroPadding1D

class conx.layers.ZeroPadding2DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ZeroPadding2DLayer

Zero-padding layer for 2D input (e.g. picture).

This layer can add rows and columns of zeros
at the top, bottom, left and right side of an image tensor.

Arguments

  • padding: int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.
    • If int: the same symmetric padding is applied to width and height.
    • If tuple of 2 ints: interpreted as two different symmetric padding values for height and width: (symmetric_height_pad, symmetric_width_pad).
    • If tuple of 2 tuples of 2 ints: interpreted as ((top_pad, bottom_pad), (left_pad, right_pad))
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, rows, cols, channels)
  • If data_format is "channels_first": (batch, channels, rows, cols)

Output shape

4D tensor with shape:

  • If data_format is "channels_last": (batch, padded_rows, padded_cols, channels)
  • If data_format is "channels_first": (batch, channels, padded_rows, padded_cols)
CLASS

alias of ZeroPadding2D

class conx.layers.ZeroPadding3DLayer(name, *args, **params)

Bases: conx.layers._BaseLayer

ZeroPadding3DLayer

Zero-padding layer for 3D data (spatial or spatio-temporal).

Arguments

  • padding: int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.
    • If int: the same symmetric padding is applied to all three spatial dimensions.
    • If tuple of 3 ints: interpreted as three different symmetric padding values: (symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad).
    • If tuple of 3 tuples of 2 ints: interpreted as ((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad))
  • data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be “channels_last”.

Input shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, depth)
  • If data_format is "channels_first": (batch, depth, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad)

Output shape

5D tensor with shape:

  • If data_format is "channels_last": (batch, first_padded_axis, second_padded_axis, third_padded_axis, depth)
  • If data_format is "channels_first": (batch, depth, first_padded_axis, second_padded_axis, third_padded_axis)
CLASS

alias of ZeroPadding3D

conx.layers.make_layer(state)[source]
conx.layers.process_class_docstring(docstring)[source]

4.1.5. conx.utils module

class conx.utils.Experiment(name)[source]

Bases: object

Run a series of experiments.

function() should take any options, and return a (category, network) tuple.

Parameters:name (str) – the name of the experiment.
>>> from conx import Network
>>> def function(optimizer, activation, **options):
...     net = Network("XOR", 2, 2, 1, activation=activation, seed=42)
...     net.compile(error="mse", optimizer=optimizer)
...     net.dataset.append_by_function(2, (0, 4), "binary", lambda i,v: [int(sum(v) == len(v))])
...     net.train(report_rate=100, verbose=0, plot=False, **options)
...     category = "%s-%s" % (optimizer, activation)
...     return category, net
>>> exp = Experiment("XOR")
>>> exp.run(function,
...         epochs=[5],
...         accuracy=[0.8],
...         tolerance=[0.2],
...         optimizer=["adam", "sgd"],
...         activation=["sigmoid", "relu"],
...         dir="/tmp/")
>>> len(exp.results)
4
>>> exp.plot("loss", format="svg")
<IPython.core.display.SVG object>
>>> exp.apply(lambda category, exp_name: (category, exp_name))
[('adam-sigmoid', '/tmp/XOR-00001-00001'), ('sgd-sigmoid', '/tmp/XOR-00001-00002'), ('adam-relu', '/tmp/XOR-00001-00003'), ('sgd-relu', '/tmp/XOR-00001-00004')]
apply(function, *args, **kwargs)[source]

Apply a function to experimental runs.

Parameters:function – takes either (category, network-name, args, kwargs) or (category, network, args, kwargs), depending on cache, and returns some results.
plot(metrics='loss', symbols=None, format='svg')[source]

Plot all of the results of the experiment on a single plot.

run(function, trials=1, dir='./', save=True, cache=False, **options)[source]

Run a set of experiments, varying parameters.

Parameters:
  • function – a callable that takes options and returns a (category, network) tuple
  • trials – the number of trials to run for each combination of options
  • dir – the directory in which to save the networks
  • save – if True, save each resulting network
  • cache – if True, keep each resulting network in memory, keyed by experiment name

The experiment name is composed of Experiment.name + trial number + experiment number. For example, the first experiment in the example below is “Test1-00001-00001”. The last experiment is “Test1-00005-00002”.

Experiment.cache is a dictionary mapping experiment name (directory) to network for each experiment.

Experiment.results is a list of (category, name) for each experiment.

Example

>>> from conx import Network
>>> net = Network("Sample - empty")
>>> exp = Experiment("Test1")
>>> exp.run(lambda var: (var, net),
...         trials=5,
...         save=False,
...         cache=True,
...         var=["OPTION1", "OPTION2"])
>>> len(exp.results) == 10
True
>>> len(exp.cache) == 10
True
>>> "./Test1-00001-00001" in exp.cache.keys()
True
>>> "./Test1-00005-00002" in exp.cache.keys()
True
>>> exp.results[0][0] == "OPTION1"
True
>>> exp.results[0][1] == "./Test1-00001-00001"
True
>>> exp.results[-1][1] == "./Test1-00005-00002"
True
>>> exp.results[-1][0] == "OPTION2"
True
class conx.utils.PCA(states, dim=2, solver='randomized')[source]

Bases: object

Compute the Principal Component Analysis for the points in a multi-dimensional space.

Example

>>> data = [
...         [0.00, 0.00, 0.00],
...         [0.25, 0.25, 0.25],
...         [0.50, 0.50, 0.50],
...         [0.75, 0.75, 0.75],
...         [1.00, 1.00, 1.00],
... ]
>>> pca = PCA(data)
>>> new_data = pca.transform(data)
>>> len(new_data)
5
scale(ovector)[source]

Scale a transformed vector to (0, 1).

transform(vectors, scale=False)[source]
>>> from conx import Network
>>> net = Network("Example", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.load([
...        [[0, 0], [0], "0"],
...        [[0, 1], [1], "1"],
...        [[1, 0], [1], "1"],
...        [[1, 1], [0], "0"],
... ])
>>> states = [net.propagate_to("hidden", input) for input in net.dataset.inputs]
>>> pca = PCA(states)
>>> new_states = pca.transform(states)
>>> len(new_states)
4
transform_network_bank(network, bank, label_index=0, tolerance=None, test=True, scale=False)[source]
>>> from conx import Network
>>> net = Network("Example", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.load([
...        [[0, 0], [0], "0"],
...        [[0, 1], [1], "1"],
...        [[1, 0], [1], "1"],
...        [[1, 1], [0], "0"],
... ])
>>> states = [net.propagate_to("hidden", input) for input in net.dataset.inputs]
>>> pca = PCA(states)
>>> results = pca.transform_network_bank(net, "hidden")
>>> sum([len(vectors) for (label, vectors) in results["data"]])
4
>>> "xmin" in results
True
>>> "xmax" in results
True
>>> "ymin" in results
True
>>> "ymax" in results
True
transform_one(vector, scale=False)[source]

Transform a vector into the PCA of the trained states.

>>> from conx import Network
>>> net = Network("Example", 2, 2, 1)
>>> net.compile(error="mse", optimizer="adam")
>>> net.dataset.load([
...        [[0, 0], [0], "0"],
...        [[0, 1], [1], "1"],
...        [[1, 0], [1], "1"],
...        [[1, 1], [0], "0"],
... ])
>>> states = [net.propagate_to("hidden", input) for input in net.dataset.inputs]
>>> pca = PCA(states)
>>> new_state = pca.transform_one(states[0])
>>> len(new_state)
2
conx.utils.all_same(iterator)[source]

Is there more than one item, and are they all the same?

>>> all_same([int, int, int])
True
>>> all_same([int, float, int])
False
conx.utils.argmax(seq)[source]

Find the index of the maximum value in seq.

Parameters:seq (list) – the sequence of values to search.
Returns:The index of the maximum value in the list.
>>> argmax([0.1, 0.2, 0.3, 0.1])
2
conx.utils.argmin(seq)[source]

Find the index of the minimum value in seq.

Parameters:seq (list) – the sequence of values to search.
Returns:The index of the minimum value in the list.
>>> argmin([0.5, 0.2, 0.3, 0.1])
3
conx.utils.array_to_image(array, scale=1.0, minmax=None, colormap=None, shape=None)[source]

Convert a matrix (with shape, or given shape) to a PIL.Image.

>>> m = [[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]],
...      [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]]
>>> image = array_to_image(m)
>>> np.array(image).tolist()
[[[255, 255, 255], [0, 0, 0]], [[0, 0, 0], [255, 255, 255]]]
>>> image_to_array(image)
[[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]], [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]]
>>> m = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0,
...      0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
>>> array_to_image(m, shape=(2, 2, 3))       
<PIL.Image.Image image mode=RGB size=2x2 at ...>
>>> array_to_image(m, shape=(2, 2, 3), colormap="bone")       
<PIL.Image.Image image mode=RGB size=2x2 at ...>
conx.utils.atype(dtype)[source]

Given a numpy dtype, return the associated Python type. If unable to determine, just return the dtype.kind code.

>>> atype(np.float64(23).dtype)
<class 'numbers.Number'>
conx.utils.autoname(index, sizes)[source]

Given an index and list of sizes, return a name for the layer.

>>> autoname(0, sizes=4)
'input'
>>> autoname(1, sizes=4)
'hidden1'
>>> autoname(2, sizes=4)
'hidden2'
>>> autoname(3, sizes=4)
'output'
conx.utils.binary(i, width)[source]
>>> binary(0, 5)
[0, 0, 0, 0, 0]
>>> binary(15, 4)
[1, 1, 1, 1]
>>> binary(14, 4)
[1, 1, 1, 0]
conx.utils.binary_to_int(vector)[source]

Given a binary vector, return the integer value.

>>> binary_to_int(binary(0, 5))
0
>>> binary_to_int(binary(15, 4))
15
>>> binary_to_int(binary(14, 4))
14
conx.utils.choice(seq=None, p=None)[source]

Get a random choice from sequence, optionally given a probability distribution.

Parameters:
  • seq – a list of choices, or None if the choices are a range
  • p – a list of probabilities, or None if each choice has an even chance
Returns:

One of the choices, picked with the given probability.

Examples

>>> choice(1)
0
>>> choice([42])
42
>>> choice("abcde", p=[0, 1, 0, 0, 0])
'b'
>>> choice(p=[0, 0, 1, 0, 0])
2
>>> choice("aaaaa")
'a'
conx.utils.clear_session()[source]

Clear the backend session; needed if memory is growing.

conx.utils.collapse(item)[source]

For any repeated structure, return [struct, count].

>>> collapse([[int, int, int], [float, float]])
[[<class 'int'>, 3], [<class 'float'>, 2]]
conx.utils.count_params(weights)[source]

Count the total number of scalars composing the weights.

Arguments
weights: An iterable containing the weights on which to compute params
Returns:The total number of scalars composing the weights
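
Example (the arithmetic, sketched with numpy arrays standing in for the weights):

import numpy as np

# a (3, 2) kernel and a (2,) bias contribute 3*2 + 2 = 8 scalars
weights = [np.ones((3, 2)), np.ones((2,))]
total = sum(int(np.prod(w.shape)) for w in weights)
# total == 8
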
conx.utils.crop_image(image, x1, y1, x2, y2)[source]

Given an image and a crop rectangle x1, y1, x2, y2, return the cropped image.

>>> m = [[[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]],
...      [[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]]]
>>> image = array_to_image(m)
>>> crop_image(image, 0, 0, 1, 1) 
<PIL.Image.Image image mode=RGB size=1x1 at ...>
conx.utils.cxtypes(item)[source]

Get the types of (possibly) nested list(s), and collapse if possible.

>>> cxtypes(0)
<class 'numbers.Number'>
>>> cxtypes([0, 1, 2])
[<class 'numbers.Number'>, 3]
conx.utils.download(url, directory='./', force=False, unzip=True, filename=None)[source]

Download a file into a local directory.

>>> download("https://raw.githubusercontent.com/Calysto/conx/master/README.md",
...          "/tmp/testme", force=True) 
Downloading ...
>>> download("https://raw.githubusercontent.com/Calysto/conx/master/README.md",
...          "/tmp/testme") 
Using cached ...
conx.utils.find_all_paths(net, start_layer, end_layer, path=[])[source]

Given a start_layer and an end_layer, return a list containing all pathways (does not include end_layer).

Recursive.

conx.utils.find_dimensions(n)[source]

Find the best (most square) dimensions of n.

conx.utils.find_factors(n)[source]

Find the integer factors of n.

conx.utils.find_path(net, start_layer, end_layer)[source]

Given a conx network, a start layer, and an ending layer, find the path between them.

conx.utils.format_collapse(ttype, dims)[source]

Given a type and a tuple of dimensions, return a struct of [[[ttype, dims[-1]], dims[-2]], …]

>>> format_collapse(int, (1, 2, 3))
[[[<class 'int'>, 3], 2], 1]
conx.utils.frange(start, stop=None, step=1.0, raw=False)[source]

Like range(), but with floats.

May not be exactly correct due to rounding issues.

Returns:

A list of floats.

Examples

>>> len(frange(-1, 1, .1))
20
conx.utils.get_colormap()[source]

Get the global colormap.

Returns:The valid name of the global colormap.

Examples

>>> cm = get_colormap()
>>> set_colormap(AVAILABLE_COLORMAPS[0])
>>> cm == get_colormap()
False
>>> set_colormap(cm)
>>> cm != get_colormap()
False
conx.utils.get_device()[source]

Returns ‘cpu’ or ‘gpu’ indicating which device the system will use.

>>> get_device() in ["gpu", "cpu"]
True
conx.utils.get_error_colormap()[source]

Get the global error colormap.

Returns:The valid name of the global error colormap.

Examples

>>> cm = get_error_colormap()
>>> set_error_colormap(AVAILABLE_COLORMAPS[0])
>>> cm != get_error_colormap()
True
>>> set_error_colormap(cm)
>>> cm == get_error_colormap()
True
conx.utils.get_form(item)[source]

First, get the types of all items, and then collapse repeated structures.

>>> get_form([1, [2, 5, 6], 3])
[<class 'numbers.Number'>, [<class 'numbers.Number'>, 3], <class 'numbers.Number'>]
conx.utils.get_shape(form)[source]

Given a form, format it in [type, dimension] format.

>>> get_shape(get_form([[0.00], [0.00]]))
(<class 'numbers.Number'>, [2, 1])
conx.utils.get_symbol(label: str, symbols: dict = None, default='o') → str[source]

Get a matplotlib symbol from a label.

Possible shape symbols:

  • ‘-‘ solid line style
  • ‘–’ dashed line style
  • ‘-.’ dash-dot line style
  • ‘:’ dotted line style
  • ‘.’ point marker
  • ‘,’ pixel marker
  • ‘o’ circle marker
  • ‘v’ triangle_down marker
  • ‘^’ triangle_up marker
  • ‘<’ triangle_left marker
  • ‘>’ triangle_right marker
  • ‘1’ tri_down marker
  • ‘2’ tri_up marker
  • ‘3’ tri_left marker
  • ‘4’ tri_right marker
  • ‘s’ square marker
  • ‘p’ pentagon marker
  • ‘*’ star marker
  • ‘h’ hexagon1 marker
  • ‘H’ hexagon2 marker
  • ‘+’ plus marker
  • ‘x’ x marker
  • ‘D’ diamond marker
  • ‘d’ thin_diamond marker
  • ‘|’ vline marker
  • ‘_’ hline marker

In addition, the shape symbol can be preceded by the following color abbreviations:

  • ‘b’ blue
  • ‘g’ green
  • ‘r’ red
  • ‘c’ cyan
  • ‘m’ magenta
  • ‘y’ yellow
  • ‘k’ black
  • ‘w’ white

Examples

>>> get_symbol("Apple")
'o'
>>> get_symbol("Apple", {'Apple': 'x'})
'x'
>>> get_symbol("Banana", {'Apple': 'x'})
'o'
conx.utils.gif2mp4(filename)[source]

Convert an animated gif into an mp4, to show with controls.

conx.utils.heatmap(function_or_matrix, in_range=(0, 1), width=8.0, height=4.0, xlabel='', ylabel='', title='', resolution=None, out_min=None, out_max=None, colormap=None, format=None)[source]

Create a heatmap plot given a matrix, or a function.

>>> import math
>>> def function(x, y):
...     return math.sqrt(x ** 2 + y ** 2)
>>> hm = heatmap(function,
...              format="svg")
>>> hm
<IPython.core.display.SVG object>
conx.utils.image_to_array(image, resize=None, raw=False)[source]

Convert an image filename or PIL.Image into a matrix (list of lists).

>>> m = [[[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]],
...      [[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]]]
>>> image = array_to_image(m)
>>> np.array(image).tolist()
[[[0, 255, 255], [255, 0, 0]], [[0, 255, 255], [255, 0, 0]]]
>>> image_to_array(image)
[[[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]], [[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]]]
conx.utils.import_keras_model(model, network_name)[source]

Import a keras model into conx.

conx.utils.is_array_like(item)[source]

Checks to see if something is array-like.

>>> import numpy as np
>>> is_array_like([])
True
>>> is_array_like(tuple())
True
>>> is_array_like(np.ndarray([]))
True
>>> is_array_like("hello")
False
>>> is_array_like(1)
False
>>> is_array_like(2.3)
False
>>> is_array_like(np)
False
conx.utils.is_collapsed(item)[source]

Is this a collapsed item?

>>> is_collapsed([int, 3])
True
>>> is_collapsed([int, int, int])
False
conx.utils.maximum(seq)[source]

Find the maximum value in seq.

Parameters:seq (list) – a list, or nested list (matrix), of numbers
Returns:The maximum value in the list or matrix.
>>> maximum([0.5, 0.2, 0.3, 0.1])
0.5
>>> maximum([[0.5, 0.2], [0.3, 0.1]])
0.5
>>> maximum([[[0.5], [0.2]], [[0.3], [0.1]]])
0.5
conx.utils.minimum(seq)[source]

Find the minimum value in seq.

Parameters:seq (list) – a list, or nested list (matrix), of numbers
Returns:The minimum value in the list or matrix.
>>> minimum([5, 2, 3, 1])
1
>>> minimum([[5, 2], [3, 1]])
1
>>> minimum([[[5], [2]], [[3], [1]]])
1
conx.utils.movie(function, movie_name='movie.gif', play_range=None, loop=0, optimize=True, duration=100, embed=False, mp4=True)[source]

Make a movie from a function.

The function has the signature function(index) and should return a PIL.Image for each frame; see the sketch below.
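
A hypothetical sketch (the frame content is illustrative, and play_range is assumed to be a (start, stop) frame range; mp4=False skips the gif2mp4 conversion step):

from PIL import Image

def frame(index):
    # each frame is a solid gray square that brightens with the index
    gray = (index * 25) % 256
    return Image.new("RGB", (64, 64), (gray, gray, gray))

m = movie(frame, "fade.gif", play_range=(0, 10), mp4=False)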

conx.utils.onehot(i, width)[source]

Return a list of length width with a 1 at position i and 0s elsewhere.
>>> onehot(0, 5)
[1, 0, 0, 0, 0]
>>> onehot(3, 5)
[0, 0, 0, 1, 0]
conx.utils.plot(data=[], width=8.0, height=4.0, xlabel='', ylabel='', title='', label='', symbols=None, default_symbol=None, ymin=None, xmin=None, ymax=None, xmax=None, format='svg', xs=None)[source]

Create a line or scatter plot given the y-coordinates of a set of lines.

You may provide the x-coordinates (via xs) if they do not simply run 0, 1, 2, …

>>> p = plot(["Error", [1, 2, 4, 6, 1, 2, 3]],
...           ylabel="error",
...           xlabel="hello", format="svg")
>>> p
<IPython.core.display.SVG object>
>>> p = plot([["Error", [1, 2, 4, 6, 1, 2, 3]]],
...           ylabel="error",
...           xlabel="hello", format="svg")
>>> p
<IPython.core.display.SVG object>
conx.utils.plot3D(function, x_range=None, y_range=None, width=4.0, height=4.0, xlabel='', ylabel='', zlabel='', title='', label='', symbols=None, default_symbol=None, ymin=None, xmin=None, ymax=None, xmax=None, format=None, colormap=None, linewidth=0, antialiased=False, mode='surface')[source]

function is a function(x,y) or list of [“Label”, [(x,y,z)]].

Parameters:
  • mode (str) – “surface”, “scatter”, or “wireframe”
  • function (list or callable) – [“Label”, [(x,y,z)]], or a function(x,y) that returns z
>>> plot3D([["Test1", [[0, 0, 1], [0, 1, 0]]]], mode="scatter",
...        format="svg")
<IPython.core.display.SVG object>
>>> plot3D((lambda x,y: x ** 2 + y ** 2),
...        (-1,1,.1), (-1,1,.1),
...        mode="surface",
...        format="svg")
<IPython.core.display.SVG object>
>>> plot3D((lambda x,y: x ** 2 + y ** 2),
...        (-1,1,.1), (-1,1,.1),
...        mode="wireframe",
...        format="svg")
<IPython.core.display.SVG object>
conx.utils.plot_f(f, frange=(-1, 1, 0.1), symbol='o-', xlabel='', ylabel='', title='', format=None)[source]

Plot a function.

>>> plot_f(lambda x: x, frange=(-1, 1, .1), format="svg")
<IPython.core.display.SVG object>
conx.utils.rescale_numpy_array(a, old_range, new_range, new_dtype, truncate=False)[source]

Given a numpy array, an old (min, max) range, a new (min, max) range, and a numpy dtype, create a new numpy array with the old values scaled into the new range.

>>> import numpy as np
>>> new_array = rescale_numpy_array(np.array([0.1, 0.2, 0.3]), (0, 1), (0.5, 1.), float)
>>> ", ".join(["%.2f" % v for v in new_array])
'0.55, 0.60, 0.65'
conx.utils.reshape(matrix, new_shape, raw=False)[source]

Given a list of lists of … and a new_shape, reformat the matrix in the new shape.

>>> m = [[[1, 2, 3]], [[4, 5, 6]]]
>>> shape(m)
(2, 1, 3)
>>> m1 = reshape(m, 6)
>>> shape(m1)
(6,)
>>> m2 = reshape(m, (3, 2))
>>> shape(m2)
(3, 2)
>>> m2
[[1, 2], [3, 4], [5, 6]]
conx.utils.scale(a, new_range=(0, 1), new_dtype='float', truncate=True)[source]

Given a vector or matrix, scale it to new_range.

>>> import sys
>>> results = scale([-1, 0, 1])
>>> results[0] - 0.0 < sys.float_info.epsilon
True
>>> results[1] - 0.5 < sys.float_info.epsilon
True
>>> results[2] - 1.0 < sys.float_info.epsilon
True
conx.utils.scale_output_for_image(vector, minmax, truncate=False)[source]

Given an output vector and a (min, max) range (for example, one derived from an activation function), scale the vector so it can be displayed as an image.

conx.utils.scatter(data=[], width=6.0, height=6.0, xlabel='', ylabel='', title='', label='', symbols=None, default_symbol='o', ymin=None, xmin=None, ymax=None, xmax=None, format='svg')[source]

Create a scatter plot with series of (x,y) data.

>>> scatter(["Test 1", [(0,4), (2,3), (1,2)]], format="svg")
<IPython.core.display.SVG object>
conx.utils.scatter_images(images, xy, size=(800, 800), scale=1.0)[source]

Create a scatter plot of images.

Takes a list of images and a list of (x,y) coordinates, each coordinate between 0 and 1. Returns a composite image of the given size. Adjust scale to make the placed images larger or smaller.

>>> scatter_images([array_to_image([[1]])], [(0.5, 0.5)]) 
<PIL.Image.Image image mode=RGBA size=800x800 at ...>
conx.utils.set_colormap(s)[source]

Set the global colormap for displaying all network activations.

Parameters:s (str) – the name of a valid colormap (see AVAILABLE_COLORMAPS)

See also

AVAILABLE_COLORMAPS - complete list of valid colormap names

Examples

>>> cm = get_colormap()
>>> set_colormap(AVAILABLE_COLORMAPS[0])
>>> cm != get_colormap()
True
>>> set_colormap(cm)
>>> cm == get_colormap()
True
conx.utils.set_error_colormap(s)[source]

Set the global error colormap used for displaying error values.

Parameters:s (str) – the name of a valid colormap (see AVAILABLE_COLORMAPS)

See also

AVAILABLE_COLORMAPS - complete list of valid colormap names

Examples

>>> cm = get_error_colormap()
>>> set_error_colormap(AVAILABLE_COLORMAPS[0])
>>> cm == get_error_colormap()
False
>>> set_error_colormap(cm)
>>> cm != get_error_colormap()
False
conx.utils.shape(item)[source]

Given a matrix or vector, return the shape as a tuple of dimensions.

>>> shape([1])
(1,)
>>> shape([1, 2])
(2,)
>>> shape([[1, 2, 3], [4, 5, 6]])
(2, 3)
conx.utils.svg_to_image(svg, background=(255, 255, 255, 255))[source]

Convert an SVG string into a PIL.Image, rendered over the given RGBA background.
conx.utils.topological_sort(net, layers)[source]

Given a conx network and a list of layers, produce a topologically sorted list, from input(s) to output(s).

conx.utils.uri_to_image(image_str, width=320, height=240)[source]

Given a URI, return an image.

conx.utils.valid_shape(x)[source]

Is this a valid shape for Keras layers?

>>> valid_shape(1)
True
>>> valid_shape(None)
True
>>> valid_shape((1,))
True
>>> valid_shape((None, ))
True
conx.utils.valid_vshape(x)[source]

Is this a valid shape (i.e., size) to display vectors using PIL?

conx.utils.view(item, title=None, background=(255, 255, 255, 255), scale=1.0, **kwargs)[source]

Show an item from the console. item can be any one of the following:

  • Network
  • HTML
  • SVG image
  • PIL.Image
  • list of PIL.images
  • Image filename (png or jpg)
  • array (to be converted via array_to_image)

For more information on each option, see:

  • view_network
  • view_svg
  • view_image
  • view_image_list
  • array_to_image
conx.utils.view_image(image, title=None, scale=1.0)[source]
conx.utils.view_image_list(images, labels=None, layout=None, spacing=0.1, scale=1, title=None, pivot=False)[source]

View a list of images.

Parameters:
  • images (list) –
  • labels (str) –
  • layout (tuple or list) – optional (rows, cols), e.g. (1, n); see below
  • spacing (float) – 0.1
  • scale (float) – 1
  • title (str) –
  • pivot (bool) –
layout (rows, cols) can be any of the following (see the sketch after this list):
  • None - find square-ish dimensions automatically
  • (int, int) - set the layout; if more than can fit, don’t show them
  • (None, int) - determine rows automatically
  • (int, None) - determine cols automatically
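
For example, a hedged sketch of the layout options (array_to_image is the conx.utils helper shown earlier; the 1x1 images are placeholders, and labels is assumed to accept a list of strings):

imgs = [array_to_image([[float(i % 2)]]) for i in range(4)]
# two rows, columns determined automatically
view_image_list(imgs, labels=["a", "b", "c", "d"], layout=(2, None))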
conx.utils.view_network(net, title=None, background=(255, 255, 255, 255), data='train', scale=1.0, **kwargs)[source]

View a network and train or test data.

Parameters:data (str) – “train” or “test”
Common settings:
  • show_targets (bool) – True will show the target pattern
  • show_errors (bool) – True will show the error pattern
Additional settings:
  • font_size, font_family
  • border_top, border_bottom, border_width, border_color
  • hspace, vspace
  • image_maxdim, image_pixels_per_unit, pixels_per_unit
  • activation
  • arrow_color, arrow_width
  • precision
  • svg_scale, svg_rotate, svg_preferred_size, svg_max_width
conx.utils.view_svg(svg, title=None, background=(255, 255, 255, 255), scale=1.0)[source]

Takes the actual SVG string.

conx.utils.visit(layer, stack)[source]

Utility function for topological_sort.

4.1.6. conx.widgets module

class conx.widgets.CameraWidget(*args, **kwargs)[source]

Bases: ipywidgets.widgets.domwidget.DOMWidget

Represents a media source.

>>> cam = CameraWidget()
<IPython.core.display.Javascript object>
audio

A boolean (True, False) trait.

get_data()[source]
get_image()[source]
image

A trait for unicode strings.

image_count

An int trait.

video

A boolean (True, False) trait.

class conx.widgets.Dashboard(net, width='95%', height='550px', play_rate=0.5)[source]

Bases: ipywidgets.widgets.widget_box.VBox

Build the dashboard for Jupyter widgets. Requires running in a notebook/jupyterlab.
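
A hypothetical sketch (must be run in a notebook; the network construction follows the Network examples earlier in this document, and the compile arguments are assumptions):

net = Network("XOR", 2, 5, 1)
net.compile(error="mse", optimizer="sgd")
dash = Dashboard(net)      # build the widget around the network
dash                       # displaying it renders the dashboard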

change_select(change=None)[source]
get_current_input()[source]
goto(position)[source]
make_colormap_image(colormap_name)[source]
make_config()[source]
make_controls()[source]
prop_one(button=None)[source]
propagate(inputs)[source]

Propagate inputs through the dashboard view of the network.

regenerate(button=None)[source]
save_config(widget=None)[source]
set_attr(obj, attr, value)[source]
toggle_play(button)[source]
update_control_slider(change=None)[source]
update_layer(change)[source]

Update the layer object, and redisplay.

update_layer_selection(change)[source]

Just update the widgets; don’t redraw anything.

update_position_text(change)[source]
update_slider_control(change)[source]
update_zoom_slider(change)[source]
class conx.widgets.SequenceViewer(title, function, length, play_rate=0.5)[source]

Bases: ipywidgets.widgets.widget_box.VBox

Parameters:
  • title (str) –
  • function (callable) – takes an index and returns a displayable or a list of displayables
  • length (int) –
  • play_rate (float) – Optional. Default is 0.5 seconds.
>>> def function(index):
...     return [None]
>>> sv = SequenceViewer("Title", function, 10)
>>> ## Do this manually for testing:
>>> sv.initialize()
None
>>> ## Testing:
>>> class Dummy:
...     def update(self, result):
...         return result
>>> sv.displayers = [Dummy()]
>>> print("Testing"); sv.goto("begin") 
Testing...
>>> print("Testing"); sv.goto("end") 
Testing...
>>> print("Testing"); sv.goto("prev") 
Testing...
>>> print("Testing"); sv.goto("next") 
Testing...
goto(position)[source]
initialize()[source]
make_controls()[source]
toggle_play(button)[source]
update_slider_control(change)[source]
conx.widgets.get_camera_javascript(width=320, height=240)[source]

Return the Javascript used by CameraWidget to access the browser’s camera.

4.1.7. conx.activations module

conx.activations.elu(x, alpha=1.0)[source]

Exponential Linear Unit activation function.

See: https://arxiv.org/abs/1511.07289v1

import math

def elu(x, alpha=1.0):
    if x >= 0:
        return x
    else:
        return alpha * (math.exp(x) - 1.0)
>>> elu(0.0)
0.0
>>> elu(1.0)
1.0
>>> elu(0.5, alpha=0.3)
0.5
>>> round(elu(-1), 1)
-0.6
conx.activations.hard_sigmoid(x)[source]

Hard Sigmoid activation function.

>>> round(hard_sigmoid(-1), 1)
0.3
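
Keras defines the hard sigmoid as a piecewise-linear approximation of the sigmoid; a sketch of that standard formula (consistent with the doctest above, though not necessarily conx’s exact code):

def hard_sigmoid_sketch(x):
    # clip 0.2 * x + 0.5 into [0, 1]
    return max(0.0, min(1.0, 0.2 * x + 0.5))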
conx.activations.linear(x)[source]

Linear activation function.

>>> linear(1) == 1
True
>>> linear(-1) == -1
True
conx.activations.relu(x, alpha=0.0, max_value=None)[source]

Rectified Linear Unit activation function.

>>> relu(1)
1.0
>>> relu(-1)
0.0
conx.activations.selu(x)[source]

Scaled Exponential Linear Unit activation function.

>>> selu(0)
0.0
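
SELU is a scaled ELU with fixed constants from Klambauer et al. (2017); a sketch of the standard definition (an illustration, not necessarily conx’s exact code):

import math

SCALE = 1.0507009873554805  # standard SELU scale
ALPHA = 1.6732632423543772  # standard SELU alpha

def selu_sketch(x):
    if x >= 0:
        return SCALE * x
    return SCALE * ALPHA * (math.exp(x) - 1.0)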
conx.activations.sigmoid(x)[source]

Sigmoid activation function.

>>> sigmoid(0)
0.5
conx.activations.softmax(tensor, axis=-1)[source]

Softmax activation function.

>>> len(softmax([0.1, 0.1, 0.7, 0.0]))
4
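
Softmax exponentiates and normalizes its inputs, so the outputs form a probability distribution; a quick check (assuming the return value is a sequence of floats):

sum(softmax([0.1, 0.1, 0.7, 0.0]))   # ≈ 1.0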
conx.activations.softplus(x)[source]

Softplus activation function: softplus(x) = log(1 + exp(x)).

>>> round(softplus(0), 1)
0.7
conx.activations.softsign(x)[source]

Softsign activation function: softsign(x) = x / (1 + abs(x)).

>>> softsign(1)
0.5
>>> softsign(-1)
-0.5
conx.activations.tanh(x)[source]

Tanh activation function.

>>> tanh(0)
0.0

4.1.8. Module contents