diktya.callbacks

class OnEpochEnd(func, every_nth_epoch=10)[source]

Bases: keras.callbacks.Callback

on_epoch_end(epoch, logs={})[source]
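
A minimal usage sketch: judging from the signature, OnEpochEnd calls func at the end of every every_nth_epoch-th epoch. The func(epoch, logs) calling convention and the model are assumptions, not part of this reference.

    from diktya.callbacks import OnEpochEnd

    # Assumption: func receives the (epoch, logs) arguments of on_epoch_end.
    def print_status(epoch, logs):
        print("epoch {}: loss={}".format(epoch, logs.get('loss')))

    callback = OnEpochEnd(print_status, every_nth_epoch=5)
    # model.fit(X, y, callbacks=[callback])  # model, X, y are assumed
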
class SampleGAN(sample_func, discriminator_func, z, real_data, callbacks, should_sample_func=None)[source]

Bases: keras.callbacks.Callback

Keras callback that provides samples on_epoch_end to other callbacks.

Parameters:
  • sample_func – Called with z; should return fake samples.
  • discriminator_func – Should return the discriminator score.
  • z – Batch of random vectors.
  • real_data – Batch of real data.
  • callbacks – List of callbacks, called with the generated samples.
  • should_sample_func (optional) – Receives the current epoch and returns a bool indicating whether to sample at that epoch.
sample()[source]
on_train_begin(logs=None)[source]
on_epoch_end(epoch, logs=None)[source]
class VisualiseGAN(nb_samples, output_dir=None, show=False, preprocess=None)[source]

Bases: keras.callbacks.Callback

Visualise nb_samples fake images from the generator.

Warning

Cannot be used as a normal keras callback; it can only be used as a callback of the SampleGAN callback.

Parameters:
  • nb_samples – number of samples
  • output_dir (optional) – Save images to this directory. Filenames follow the format {epoch:05d}.
  • show (default: False) – Show images as a matplotlib plot.
  • preprocess (optional) – Apply this preprocessing function to the generated images.
on_train_begin(logs={})[source]
call(samples)[source]
on_epoch_end(epoch, logs={})[source]
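
A minimal sketch wiring VisualiseGAN into SampleGAN. Here generator and discriminator are assumed to be compiled keras models and x_train an array of real samples; none of these names are part of this API.

    import numpy as np
    from diktya.callbacks import SampleGAN, VisualiseGAN

    z = np.random.uniform(-1, 1, (64, 100)).astype('float32')  # batch of random vectors
    real_batch = x_train[:64]                                  # batch of real data

    visualise = VisualiseGAN(nb_samples=36, output_dir='samples/',
                             preprocess=lambda imgs: np.clip(imgs, 0, 1))

    sampler = SampleGAN(
        sample_func=generator.predict,             # called with z, returns fake samples
        discriminator_func=discriminator.predict,  # returns the discriminator score
        z=z,
        real_data=real_batch,
        callbacks=[visualise],                     # receive the generated samples
        should_sample_func=lambda epoch: epoch % 10 == 0,
    )
    # gan.fit(..., callbacks=[sampler])  # the GAN training model is assumed
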
class SaveModels(models, output_dir=None, every_epoch=50, overwrite=True, hdf5_attrs=None)[source]

Bases: keras.callbacks.Callback

on_epoch_end(epoch, log={})[source]
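
A minimal sketch. The exact structure of models is not documented here; a dict mapping filename templates to keras models is an assumption, as is the generator model.

    from diktya.callbacks import SaveModels

    # Assumption: models maps filename templates to keras models.
    save = SaveModels({'generator_{epoch:05d}.hdf5': generator},
                      output_dir='checkpoints/', every_epoch=50)
    # model.fit(X, y, callbacks=[save])  # model, X, y are assumed
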
class DotProgressBar[source]

Bases: diktya.callbacks.OnEpochEnd

class LearningRateScheduler(optimizer, schedule)[source]

Bases: keras.callbacks.Callback

Learning rate scheduler.

Parameters:
  • optimizer (keras Optimizer) – Schedule the learning rate of this optimizer.
  • schedule (dict) – Dictionary mapping epoch -> learning rate value.
on_epoch_end(epoch, logs={})[source]
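
A minimal sketch: the schedule dict maps epoch numbers to new learning rate values. The optimizer must be the one the model is compiled with; model, X, and y are assumed.

    from keras.optimizers import Adam
    from diktya.callbacks import LearningRateScheduler

    optimizer = Adam(lr=0.001)
    # Lower the learning rate at epochs 50 and 100.
    scheduler = LearningRateScheduler(optimizer, schedule={50: 0.0005, 100: 0.0001})
    # model.compile(optimizer=optimizer, loss='mse')
    # model.fit(X, y, callbacks=[scheduler])
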
class AutomaticLearningRateScheduler(optimizer, metric='loss', min_improvement=0.001, epoch_patience=3, factor=0.25)[source]

Bases: keras.callbacks.Callback

This callback automatically reduces the learning rate of the optimizer: if the metric did not improve by at least min_improvement within the last epoch_patience epochs, the learning rate of the optimizer is reduced by multiplying it with factor.

Parameters:
  • optimizer (keras Optimizer) – Decrease learning rate of this optimizer
  • metric (str) – Name of the metric
  • min_improvement (float) – Minimum improvement that counts as progress.
  • epoch_patience (int) – Number of epochs to wait for an improvement before reducing the learning rate.
  • factor (float) – Multiply the learning rate by this factor.
on_train_begin(logs={})[source]
on_epoch_begin(epoch, logs={})[source]
on_batch_end(batch, logs={})[source]
on_epoch_end(epoch, logs={})[source]
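
A minimal sketch with the documented defaults: if 'loss' does not improve by at least 0.001 within 3 epochs, the learning rate is multiplied by 0.25. model, X, and y are assumed.

    from keras.optimizers import Adam
    from diktya.callbacks import AutomaticLearningRateScheduler

    optimizer = Adam(lr=0.001)
    auto_lr = AutomaticLearningRateScheduler(
        optimizer, metric='loss', min_improvement=0.001,
        epoch_patience=3, factor=0.25)
    # model.compile(optimizer=optimizer, loss='mse')
    # model.fit(X, y, callbacks=[auto_lr])
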
class HistoryPerBatch(output_dir=None, extra_metrics=None)[source]

Bases: keras.callbacks.Callback

Saves the metrics of every batch.

Parameters:
  • output_dir (optional str) – Save history and plot to this directory.
  • extra_metrics (optional list) – Also monitor these metrics.
batch_history

History of every batch. Use batch_history[metric_name][epoch_idx][batch_idx] to index.

epoch_history

History of every epoch. Use epoch_history[metric_name][epoch_idx] to index.
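
A short sketch of the indexing scheme above; model, X, and y are assumed, and the accesses presuppose a completed training run.

    from diktya.callbacks import HistoryPerBatch

    hist = HistoryPerBatch()
    # model.fit(X, y, callbacks=[hist])

    # After training:
    third_batch_loss = hist.batch_history['loss'][0][2]  # 1st epoch, 3rd batch
    fifth_epoch_loss = hist.epoch_history['loss'][4]     # 5th epoch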

static from_config(batch_history, epoch_history)[source]
history
metrics

List of metrics to monitor.

on_epoch_begin(epoch, logs=None)[source]
on_batch_end(batch, logs={})[source]
on_epoch_end(epoch, logs={})[source]
plot_callback(fname=None, every_nth_epoch=1, **kwargs)[source]

Returns a keras callback that plots this figure on_epoch_end.

Parameters:
  • fname (optional str) – Filename where to save the plot. Default is {self.output_dir}/history.png.
  • every_nth_epoch – Plot every n-th epoch.
  • **kwargs – Passed to self.plot(**kwargs).
save(fname=None)[source]
on_train_end(logs={})[source]
plot(metrics=None, fig=None, ax=None, skip_first_epoch=False, use_every_nth_batch=1, save_as=None, batch_window_size=128, percentile=(1, 99), end=None, kwargs=None)[source]

Plots the losses and variance for every epoch.

Parameters:
  • metrics (list) – Names of the metrics to plot.
  • skip_first_epoch (bool) – Skip the first epoch. Useful if the first batch has a high loss and breaks the scaling of the loss axis.
  • fig – matplotlib figure
  • ax – matplotlib axes
  • save_as (str) – Save figure under this path. If save_as is a relative path and self.output_dir is set, it is appended to self.output_dir.
Returns:

A tuple of fig, axes
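
A sketch combining the pieces above: record per-batch metrics, re-plot every 5th epoch during training, and save a final plot. model, X, and y are assumed.

    from diktya.callbacks import HistoryPerBatch

    hist = HistoryPerBatch(output_dir='logs/')
    plotter = hist.plot_callback(every_nth_epoch=5)
    # model.fit(X, y, callbacks=[hist, plotter])
    fig, axes = hist.plot(skip_first_epoch=True, save_as='history.png')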

class SaveModelAndWeightsCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, mode='auto', hdf5_attrs=None)[source]

Bases: keras.callbacks.Callback

Similar to the keras ModelCheckpoint, but uses save_model() to save the model and weights in one file.

filepath can contain named formatting options, which will be filled with the value of epoch and the keys in logs (passed in on_epoch_end).

For example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then multiple files will be saved, named with the epoch number and the validation loss.

Parameters:
  • filepath (string) – Path to save the model file.
  • monitor – Quantity to monitor.
  • verbose – Verbosity mode, 0 or 1.
  • save_best_only – If save_best_only=True, the latest best model according to the monitored quantity will not be overwritten.
  • mode – One of {auto, min, max}. If save_best_only=True, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For val_acc this should be max, for val_loss this should be min, etc. In auto mode, the direction is automatically inferred from the name of the monitored quantity.
  • hdf5_attrs – Dict of attributes for the hdf5 file.

save_model(fname, overwrite=False, attrs={})[source]
on_epoch_end(epoch, logs={})[source]
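
A sketch mirroring the filepath template above; with save_best_only=True and mode='min', a checkpoint is only written when val_loss improves. model, X, y, and the validation data are assumed.

    from diktya.callbacks import SaveModelAndWeightsCheckpoint

    checkpoint = SaveModelAndWeightsCheckpoint(
        'weights.{epoch:02d}-{val_loss:.2f}.hdf5',
        monitor='val_loss', save_best_only=True, mode='min')
    # model.fit(X, y, validation_data=(X_val, y_val), callbacks=[checkpoint])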