Description
Environment
- Qiskit Machine Learning version: 0.8.4
- Qiskit version: 1.4.5
- Python version: 3.10.12
- Operating system: WSL
What is happening?
Dear Qiskit team,
When calculating the effective dimension for an EstimatorQNN, the get_fisher_information function produces nan results (with a RuntimeWarning) if one of the outputs of the EstimatorQNN is negative. However, with the Pauli-Z observable, negative outputs are perfectly possible.
The problem is line 211 of effective_dimension.py:
gradvectors = np.sqrt(model_outputs) * gradients / model_outputs
Taking the square root of a negative value results in NaN. Here is the warning trace and the resulting output:
venv_ed/lib/python3.10/site-packages/qiskit_machine_learning/neural_networks/effective_dimension.py:211: RuntimeWarning: invalid value encountered in sqrt
gradvectors = np.sqrt(model_outputs) * gradients / model_outputs
venv_ed/lib/python3.10/site-packages/numpy/linalg/_linalg.py:2325: RuntimeWarning: invalid value encountered in slogdet
sign, logdet = _umath_linalg.slogdet(a, signature=signature)
Data size: 1000, global effective dimension: nan
Number of weights: 12, normalized effective dimension: nan
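To illustrate the effect in isolation, here is a minimal numpy sketch of the expression from line 211 (not the library code itself), assuming one negative expectation value among the model outputs:

import numpy as np

model_outputs = np.array([[0.7], [-0.3]])   # Pauli-Z expectation values lie in [-1, 1]
gradients = np.ones_like(model_outputs)

gradvectors = np.sqrt(model_outputs) * gradients / model_outputs
print(gradvectors)   # second entry is nan, which then propagates into slogdet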
As a workaround, I thought about using probabilities instead of expectation values, but I could not figure out how to add an interpret map to an EstimatorQNN; that seems to be possible only for SamplerQNN.
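For context, the probability-based setup I had in mind is what SamplerQNN provides. A rough sketch (the parity interpret map is only an illustration, and qc, input_parameters and weight_parameters are the ones from the reproduction code below):

from qiskit.primitives import StatevectorSampler
from qiskit_machine_learning.neural_networks import SamplerQNN

def parity(bitstring: int) -> int:
    # example interpret map: reduce each measured bitstring to its parity (0 or 1)
    return bin(bitstring).count("1") % 2

qc_measured = qc.copy()
qc_measured.measure_all()        # V2 samplers need explicit measurements

sampler_qnn = SamplerQNN(
    circuit=qc_measured,
    input_params=input_parameters,
    weight_params=weight_parameters,
    interpret=parity,
    output_shape=2,
    sampler=StatevectorSampler(),
)

Something equivalent for EstimatorQNN is what I could not find.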
I appreciate any help.
Best,
Kilian
How can we reproduce the issue?
import qiskit as qk
import numpy as np
import math
from qiskit.primitives import StatevectorEstimator
from qiskit.quantum_info import SparsePauliOp
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.neural_networks import EffectiveDimension


def unitary(qc, layers, input_params, weight_params):
    """
    applies the reupload encoding unitary
    :param qc: qiskit QuantumCircuit
    :param layers: number of reupload encoding layers
    :param input_params: qiskit ParameterVector for the inputs of len (n_gates)
    :param weight_params: qiskit ParameterVector for the weights and biases of len (2*n_gates)
    """
    n_qubits = qc.num_qubits
    i = 0
    j = 0
    for layer in range(layers):
        for qubit in range(n_qubits):
            qc.rz(input_params[i] * weight_params[j] + weight_params[j + 1], qubit)
            i += 1
            j += 2
            qc.ry(input_params[i] * weight_params[j] + weight_params[j + 1], qubit)
            i += 1
            j += 2
            qc.rz(input_params[i] * weight_params[j] + weight_params[j + 1], qubit)
            i += 1
            j += 2
        for qubit in range(n_qubits - 1):
            qc.cx(qubit, qubit + 1)
        qc.cx(n_qubits - 1, 0)


def main():
    n_qubits = 2
    n_layers = 1
    # ED parameters
    num_input_samples = 10
    num_weight_samples = 10
    n_data = 1000

    qc = qk.QuantumCircuit(n_qubits)
    input_parameters = qk.circuit.ParameterVector("x", n_qubits * n_layers * 3)
    weight_parameters = qk.circuit.ParameterVector("θ", n_qubits * n_layers * 3 * 2)
    unitary(qc, n_layers, input_parameters, weight_parameters)
    # print(qc)

    observable1 = SparsePauliOp.from_list([("Z" + "I" * (n_qubits - 1), 1)])
    observable2 = SparsePauliOp.from_list([("I" + "Z" + "I" * (n_qubits - 2), 1)])
    observable = [observable1, observable2]

    estimator = StatevectorEstimator()
    qnn = EstimatorQNN(circuit=qc, input_params=input_parameters,
                       weight_params=weight_parameters, estimator=estimator,
                       observables=observable)

    global_ed = EffectiveDimension(
        qnn=qnn, weight_samples=num_weight_samples, input_samples=num_input_samples
    )
    global_eff_dim_0 = global_ed.get_effective_dimension(dataset_size=n_data)

    d = qnn.num_weights
    print("Data size: {}, global effective dimension: {:.4f}".format(n_data, global_eff_dim_0))
    print(
        "Number of weights: {}, normalized effective dimension: {:.4f}".format(d, global_eff_dim_0 / d)
    )


if __name__ == "__main__":
    main()
What should happen?
The code should calculate the effective dimension (and the normalized effective dimension) for a reupload encoding circuit that measures the Pauli-Z expectation value on each of the two qubits.
Any suggestions?
Fixing the calculation of gradvectors would be preferred.
Alternatively, a workaround that allows me to convert the output of EstimatorQNN from expectation values into probabilities should suffice.
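In case it is useful, one workaround I could imagine (I have not checked whether the effective-dimension formula is still meaningful in this case) is to shift the observables so the outputs are already probabilities, i.e. measure (I + Z)/2 instead of Z, which maps the expectation value from [-1, 1] into [0, 1]:

from qiskit.quantum_info import SparsePauliOp

# (I + Z)/2 on the same qubits as in the reproduction code: the expectation value
# equals the probability of measuring |0> on that qubit, so it is never negative
observable1 = SparsePauliOp.from_list([("ZI", 0.5), ("II", 0.5)])
observable2 = SparsePauliOp.from_list([("IZ", 0.5), ("II", 0.5)])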