Discrete Quantum Circuit Born Machine (QCBM)

class qugen.main.generator.discrete_qcbm_model_handler.DiscreteQCBMModelHandler


build(model_name, data_set, n_qubits=8, n_registers=2, circuit_depth=1, random_seed=2, initial_sigma=2, circuit_type='copula', transformation='pit', hot_start_path='', save_artifacts=True, slower_progress_update=False) → BaseModelHandler

Build the discrete QCBM model. This defines the architecture of the model, including the circuit ansatz, data transformation and whether the artifacts are saved.

Args:

model_name (str): The name which will be used to save the data to disk.
data_set (str): The name of the data set, which is set as part of the model name.
n_qubits (int, optional): Number of qubits. Defaults to 8.
n_registers (int, optional): Number of dimensions of the data. Defaults to 2.
circuit_depth (int, optional): Number of repetitions of qml.StronglyEntanglingLayers. Defaults to 1.
random_seed (int, optional): Seed for the random number generator. Defaults to 2.
initial_sigma (float, optional): Initial value of sigma used in the CMA optimizer. Defaults to 2.0.
circuit_type (str, optional): Name of the circuit ansatz to be used for the QCBM, either "copula" or "standard". Defaults to "copula".
transformation (str, optional): Type of normalization, either "minmax" or "pit". Defaults to "pit".
hot_start_path (str, optional): Path to previously trained model parameters in numpy array format. Defaults to '', which means the model is trained starting from random weights.
save_artifacts (bool, optional): Whether to save the artifacts to disk. Defaults to True.
slower_progress_update (bool, optional): Controls how often the progress bar is updated. If True, update at most every 10 seconds; otherwise use the tqdm defaults. Defaults to False.

Returns:

BaseModelHandler: The built model handler. It is not strictly necessary to overwrite the existing variable with the return value, since all changes are made in place.
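
A minimal usage sketch (not part of the library reference): the handler is assumed to be constructed without arguments, and the model name, data set name, and hyperparameter values below are illustrative placeholders.

    from qugen.main.generator.discrete_qcbm_model_handler import DiscreteQCBMModelHandler

    # Build a 2-register discrete QCBM with the copula ansatz and PIT transformation.
    # "example_qcbm" and "toy_2d" are placeholder names, not library defaults.
    model = DiscreteQCBMModelHandler()
    model.build(
        "example_qcbm",
        "toy_2d",
        n_qubits=8,
        n_registers=2,
        circuit_depth=2,
        circuit_type="copula",
        transformation="pit",
        save_artifacts=False,
    )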

evaluator(solutions)

Computes the loss function for all candidate solutions sampled by the CMA optimizer.

Args:

solutions (list): List of the potential weights which the CMA algorithm has sampled.

Returns:

loss (list): List of all training losses corresponding to each entry in solutions.

plot_training_data(train_dataset: array)

Plot the training data and compute an estimate of the true probability distribution.
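
As a brief sketch, assuming train_dataset is a NumPy array of training samples:

    # Visualize the training data and an estimated probability distribution (debugging aid).
    model.plot_training_data(train_dataset)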

predict(n_samples: int) → array

Generate samples from the trained model and apply the inverse of the data transformation that was used on the training data, so that the KL divergence can be computed in the original space.

Args:

n_samples (int): Number of samples to generate.

Returns:

np.array: Array of samples of shape (n_samples, sample_dimension).
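
A sampling sketch, assuming a handler that has already been built and trained as outlined elsewhere in this section:

    # Draw samples in the original data space; the returned array has shape
    # (n_samples, sample_dimension), e.g. (1000, 2) for a 2-register model.
    samples = model.predict(n_samples=1000)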

predict_transform(n_samples: int) → array

Generate samples from the trained model in the transformed space (the n-dimensional unit cube).

Args:

n_samples (int): Number of samples to generate.

Returns:

np.array: Array of samples of shape (n_samples, sample_dimension).
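
A companion sketch to the predict example; the only difference is that no inverse transformation is applied, so the samples lie in the n-dimensional unit cube:

    # Samples in the transformed space used during training.
    samples_transformed = model.predict_transform(n_samples=1000)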

reload(model_name: str, epoch: int, random_seed: int | None = None) → BaseModelHandler

Reload the model parameters and the latest sigma value so that training of the generator can be continued.

Args:

model_name (str): The name of the model whose saved parameters should be reloaded.
epoch (int): The training epoch from which to reload the generator weights and sigma value.
random_seed (int, optional): Random seed to use for the reloaded model. Defaults to None.

Returns:

BaseModelHandler: The model, but changes have been made in place as well.
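
A sketch of resuming training from saved artifacts, assuming the model was built with save_artifacts=True; the saved model name and epoch below are placeholders (the actual name is composed from model_name and data_set at build time):

    # Reload the generator weights and sigma stored at a given epoch, then continue training.
    model = DiscreteQCBMModelHandler()
    model.reload("example_qcbm_toy_2d", epoch=400)
    model.train(train_dataset, n_epochs=100)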

save(file_path: Path, overwrite: bool = True) → BaseModelHandler

Save the generator weights to disk.

Args:

file_path (Path): The path to the pickled generator weights.
overwrite (bool, optional): Whether to overwrite the file if it already exists. Defaults to True.

Returns:

BaseModelHandler: The model, unchanged.
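
An illustrative save call, with a placeholder file name:

    from pathlib import Path

    # Persist the current generator weights; an existing file is overwritten by default.
    model.save(Path("artifacts/example_qcbm_weights.pickle"), overwrite=True)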

train(train_dataset: array, n_epochs=500, batch_size=200, hist_samples=10000, plot_training_data=False) → BaseModelHandler

Train the discrete QCBM.

Args:

train_dataset (np.array): The training dataset.
n_epochs (int, optional): The number of epochs. Defaults to 500.
batch_size (int, optional): The population size used for the CMA optimizer. Defaults to 200.
hist_samples (int, optional): Number of samples generated from the generator at every epoch to compute the loss function. Defaults to 10000.
plot_training_data (bool, optional): If True, a plot of the training data is displayed for debugging purposes. Defaults to False.

Returns:

BaseModelHandler: The trained model.
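
An end-to-end training sketch continuing the build example above; the uniform toy data is purely illustrative:

    import numpy as np

    # Toy 2-dimensional training data on the unit square (illustrative only).
    train_dataset = np.random.rand(10000, 2)

    model.train(
        train_dataset,
        n_epochs=500,
        batch_size=200,
        hist_samples=10000,
    )

    # After training, samples can be drawn in the original data space.
    samples = model.predict(n_samples=1000)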