Review — TensorFlow Quantum (TFQ) whitepaper — Part 1

Amit Singh Bhatti
11 min read · Apr 3, 2020
Bloch Sphere

Let’s cut to the chase and get started with the quantum machine learning capabilities that TensorFlow Quantum provides. In the previous blog we discussed NISQ technology and Cirq, the library developed to build quantum circuits for NISQ-based hardware. TFQ has essentially been built as an amalgamation of TensorFlow with Cirq to construct hybrid classical-quantum models; we will get to that part shortly. This piece explains how TFQ constructs work and how to code your own TFQ-based quantum machine learning model.

Why use quantum-based computing for machine learning models?

One key observation that has led to the application of quantum computers to machine learning is their ability to perform fast linear algebra on a state space that grows exponentially with the number of qubits. These quantum accelerated linear-algebra based techniques for machine learning can be considered the first generation of quantum machine learning (QML) algorithms tackling a wide range of applications in both supervised and unsupervised learning, including principal component analysis, support vector machines, K-means clustering, and recommendation systems.
These algorithms often admit exponentially faster solutions compared to their classical counterparts on certain types of quantum data.

Hybrid Quantum-Classical Models

  • Near-term quantum processors are still fairly small and noisy; thus quantum models cannot disentangle and generalize quantum data using quantum processors alone. NISQ processors will need to work in concert with classical co-processors to become effective, hence the need for hybrid models.
  • The need to run quantum models in concert with classical models for better quality and performance can be attributed to criteria such as representation capacity, training capacity, inference tractability, and generation power.
  • Combining classical and quantum models
    One well-known method is to use classical computers as outer-loop optimizers for QNNs. When training a QNN with a classical optimizer in a quantum-classical loop, the overall algorithm is sometimes referred to as a Variational Quantum-Classical Algorithm (a minimal sketch of such a loop is given at the end of this section).
    Several QNN-based architectures have been proposed:
    * Variational Quantum Eigensolvers (VQEs)
    * Quantum Approximate Optimization Algorithms (QAOAs)
    * Quantum Convolutional Neural Networks (QCNN)
    * Quantum Generative Model
    Generally, the goal is to optimize over a parameterized class of computations to either generate a certain low-energy wavefunction (VQE/QAOA), learn to extract non-local information (QNN classifiers) or learn how to generate a quantum distribution from data (generative models).

A hybrid quantum-classical model is a learning heuristic in which both the classical and quantum processors contribute to the indicators of learning performance defined above.
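To make the variational quantum-classical loop concrete, here is a minimal sketch (my own toy example, not code from the whitepaper) in which a classical SciPy optimizer tunes the single parameter of a one-qubit Cirq circuit to minimize the expectation of Z, i.e. a toy VQE for the Hamiltonian H = Z. The qubit placement, symbol name, and choice of COBYLA are illustrative assumptions.

import cirq
import numpy as np
import sympy
from scipy.optimize import minimize

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
# The "QNN": a single parameterized Y rotation.
circuit = cirq.Circuit(cirq.ry(theta).on(qubit))
simulator = cirq.Simulator()

def energy(params):
    # Quantum step: simulate the circuit at the current parameter value.
    resolved = cirq.resolve_parameters(circuit, {'theta': float(params[0])})
    state = simulator.simulate(resolved).final_state_vector  # .final_state on older Cirq
    # Classical step: compute <psi|Z|psi> from the simulated state.
    z = cirq.unitary(cirq.Z)
    return float(np.real(np.conj(state) @ z @ state))

# Classical outer-loop optimizer driving the quantum model.
result = minimize(energy, x0=[0.1], method='COBYLA')
print(result.x, result.fun)  # theta close to pi, energy close to -1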

Quantum Data

  • Abstractly, any data emerging from an underlying quantum mechanical process can be considered quantum data. This can be the classical data resulting from quantum mechanical experiments, or data which is directly generated by a quantum device and then fed into an algorithm as input.
  • Some sources of quantum data, or of simulated quantum data:
    * Quantum simulations: These can include output states of quantum chemistry simulations used to extract information about chemical structures and chemical reactions.
    * Quantum communication networks: Machine learning in this class of systems will be related to distilling small-scale quantum data; e.g., to discriminate among non-orthogonal quantum states, with application to design and construction of quantum error-correcting codes for quantum repeaters, quantum receivers, and purification units.
    * Quantum metrology: Quantum-enhanced high precision measurements such as quantum sensing and quantum imaging are inherently done on probes that are small-scale quantum devices and could be designed or improved by variational quantum models.
    * Quantum control: Variationally learning hybrid quantum-classical algorithms can lead to new optimal open or closed-loop control, calibration, and error mitigation, correction, and verification strategies for near-term quantum devices and quantum processors.

TensorFlow Quantum Overview

TFQ is an integration of Cirq with TensorFlow that allows researchers and students to simulate QPUs while designing, training, and testing hybrid quantum-classical models, and eventually run the quantum portions of these models on actual quantum processors as they come online. A core contribution of TFQ is seamless backpropagation through combinations of classical and quantum layers in hybrid quantum-classical models. This allows QML researchers to directly harness the rich set of tools already available in TF and Keras.

Software Architecture and Building Blocks

Cirq

  • Cirq is an open-source library for writing quantum circuits aimed at near-term devices. It contains the basic structures, such as gates, circuits, and measurement operators, that are required for specifying quantum computations. These quantum computations can be executed in simulation or on real hardware. Cirq also contains substantial machinery that helps users design efficient algorithms for NISQ machines, such as compilers and schedulers.
  • Here is a small snippet that prepares a Bell state and evaluates an expectation value on it:
import cirq

# Two qubits on a 1x2 grid, prepared in a Bell state by a Hadamard followed by a CNOT.
(q1, q2) = cirq.GridQubit.rect(1, 2)
c = cirq.Circuit(cirq.H(q1), cirq.CNOT(q1, q2))
ZZ = cirq.Z(q1) * cirq.Z(q2)
# Simulate and evaluate <ZZ> (newer Cirq renames these to final_state_vector / expectation_from_state_vector).
bell = cirq.Simulator().simulate(c).final_state
expectation = ZZ.expectation_from_wavefunction(bell, dict(zip([q1, q2], [0, 1])))

First, we instantiate the qubits on a grid; then we define the circuit, which applies a Hadamard gate followed by a CNOT gate. To understand the basics of quantum gates and arithmetic, see the link to the quantum computing series in the previous blog.
We then define the ZZ measurement operator, instantiate the simulator, and compute the expectation value of that operator on the resulting wavefunction.

Technical Hurdles in Combining Cirq with TensorFlow

Blue nodes are input tensors, arrows are tensors flowing through the graph, and orange nodes are TF Ops transforming the simulated quantum state
  • This architecture may at first glance seem like an attractive option as it is a direct formulation of quantum computation as a dataflow graph. However, this approach is suboptimal for several reasons. First, in this architecture, the structure of the circuit being run is static in the computational graph, thus running a different circuit would require the graph to be rebuilt. This is far from ideal for variational quantum algorithms which learn over many iterations with a slightly modified quantum circuit at each iteration.
  • A second problem is the lack of a clear way to embed such a quantum dataflow graph on a real quantum processor: the states would have to remain held in quantum memory on the quantum device itself, and the high latency between classical and quantum processors makes sending transformations one-by-one prohibitive.

TFQ Architecture

To overcome the above-mentioned problems with the integration of TF and Cirq, the following four design principles were established:

  • Differentiability:
    Software support for quantum circuit differentiability is needed so that hybrid quantum-classical models can participate in backpropagation.
  • Circuit Batching:
    There should be semantics defining how batches of data points are run through quantum circuits, similar to batching in traditional machine learning.
  • Execution Backend Agnostic:
    QML software must allow users to easily switch between running models in simulation and running models on real hardware, such that simulated results and experimental results can be directly compared.
  • Minimalism:
    Minimal effort should be required for users to move from Cirq and TensorFlow to TFQ, i.e. most of TFQ's APIs should remain as intuitive and similar as possible to those of the two frameworks.
The software stack of TFQ
  • In TFQ, circuits and other quantum computing constructs are tensors, and converting these tensors into classical information via simulation or execution on a quantum device is done by ops.
  • The tfq.convert_to_tensor function is used to convert Cirq objects to TensorFlow string tensors. It takes a cirq.Circuit or cirq.PauliSum object and creates a tensor string representation. The cirq.Circuit objects may be parameterized by SymPy symbols.
  • These tensors are converted into classical information using state simulation, expectation value calculation, or sampling.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
# A parameterized X rotation, copied three times into a batch of circuits.
c = cirq.Circuit(cirq.X(qubit) ** theta)
c_tensor = tfq.convert_to_tensor([c] * 3)
theta_values = tf.constant([[0.0], [1.0], [2.0]])
# The measurement operator, also batched.
m = cirq.Z(qubit)
paulis = tfq.convert_to_tensor([m] * 3)
# Evaluate <Z> for each (circuit, parameter value) pair.
expectation_op = tfq.get_expectation_op()
output = expectation_op(c_tensor, ['theta'], theta_values, paulis)
abs_output = tf.math.abs(output)

The tensor representation of circuits and Paulis along with the execution ops are all that are required to solve any problem in QML.

TFQ Pipeline for a hybrid discriminator model

  • Below is a high-level abstract overview of the computational steps involved in the end-to-end pipeline for inference and training of a hybrid quantum-classical discriminative model for quantum data in TFQ; a minimal code sketch of this pipeline follows after the list.
  • Prepare Quantum Dataset
    Since current quantum computers cannot import quantum data from external sources, the user has to specify quantum circuits that generate the data. Quantum datasets are prepared using unparameterized cirq.Circuit objects and are injected into the computational graph using tfq.convert_to_tensor.
  • Evaluate Quantum Model
    Quantum models are constructed using cirq.Circuit objects containing SymPy symbols, and can be attached to quantum data sources using the tfq.AddCircuit layer. In the case of discriminative learning, the information of interest is the hidden label parameters. To extract this information, the quantum model disentangles the input data, leaving the hidden information encoded in classical correlations, thus making it accessible to local measurements and classical post-processing.
  • Sample or Average
    TFQ provides methods for averaging over several runs involving steps (1) and (2). Sampling or averaging is performed by feeding quantum data and quantum models to the tfq.layers.Sample or tfq.layers.Expectation layers.
  • Evaluate Classical Model
    As the extracted information may still be encoded in classical correlations between measured expectations, classical deep neural networks can be applied to distill such correlations.
    Since TFQ is fully compatible with core TensorFlow, quantum models can be attached directly to classical tf.keras.layers.Layer objects such as tf.keras.layers.Dense.
  • Evaluate Cost Function
    Given the results of classical post-processing, a cost function is calculated. This may be based on the accuracy of classification if the quantum data was labeled, or other criteria if the task is unsupervised.
  • Evaluate Gradients & Update Parameters
    To support gradient descent, TFQ exposes derivatives of quantum operations to the TensorFlow backpropagation machinery via the tfq.differentiators.Differentiator interface. This allows both the quantum and classical models’ parameters to be optimized against quantum data via hybrid quantum-classical backpropagation.
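Putting the six steps together, below is a minimal sketch of such a pipeline (my own illustration, not code from the whitepaper), using a toy single-qubit dataset and the tfq.layers.PQC layer described later in this post; the data circuits, labels, and hyperparameters are placeholder assumptions.

import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)

# (1) Quantum dataset: unparameterized circuits standing in for quantum data.
data_circuits = [cirq.Circuit(), cirq.Circuit(cirq.X(qubit))]
quantum_data = tfq.convert_to_tensor(data_circuits)
labels = tf.constant([[0.0], [1.0]])

# (2)-(3) Quantum model: a parameterized circuit whose Z expectation is read out.
theta = sympy.Symbol('theta')
model_circuit = cirq.Circuit(cirq.ry(theta).on(qubit))
inputs = tf.keras.Input(shape=(), dtype=tf.string)
expectations = tfq.layers.PQC(model_circuit, cirq.Z(qubit))(inputs)

# (4) Classical post-processing of the measured expectations.
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(expectations)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# (5)-(6) Cost function and hybrid backpropagation with a standard optimizer.
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(quantum_data, labels, epochs=10, verbose=0)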

TFQ Building Blocks

Quantum Computations as Tensors

  • As discussed above, we use the tfq.convert_to_tensor function to add Cirq-based quantum circuits and quantum data to the computational graph.
q0 = cirq.GridQubit(0, 0)
# Quantum data: a Hadamard on a single qubit.
q_data_raw = cirq.Circuit(cirq.H(q0))
q_data = tfq.convert_to_tensor([q_data_raw])
# Quantum model: a parameterized Y rotation.
theta = sympy.Symbol('theta')
q_model_raw = cirq.Circuit(cirq.Ry(theta).on(q0))
q_model = tfq.convert_to_tensor([q_model_raw])
# Measurement operator.
q_measure_raw = 0.5 * cirq.Z(q0)
q_measure = tfq.convert_to_tensor([q_measure_raw])

Composing Quantum Models

  • After injecting quantum data and quantum models into the computational graph, a custom TensorFlow operation is required to combine them. TFQ implements the tfq.layers.AddCircuit layer for combining tensors of circuits.
add_op = tfq.layers.AddCircuit()
data_and_model = add_op(q_data, append=q_model)

Sampling and Expectation Values

  • TFQ implements tfq.layers.Sample, a Keras layer which enables sampling from batches of circuits, in support of the circuit batching design objective.
  • Given these inputs, the Sample layer produces a tf.RaggedTensor of shape [batch_size, num_samples, n_qubits], where the n_qubits dimension is ragged to account for the possibly varying circuit size over the input batch of quantum data.
sample_layer = tfq.layers.Sample()
samples = sample_layer(data_and_model, symbol_names=['theta'], symbol_values=[[0.5]], repetitions=4)
  • In the simplest case, expectation values are simply averages over samples. In quantum computing, expectation values are typically taken with respect to a measurement operator M. This involves sampling bitstrings from the quantum circuit as described above, applying M to the list of bitstring samples to produce a list of numbers, then taking the average of the result (a small sketch of this averaging is given at the end of this subsection).
  • TFQ implements tfq.layers.Expectation , a Keras layer which enables the extraction of measurement expectation values from quantum models. The user supplies a tensor of parameterized circuits, a list of symbols contained in the circuits, a tensor of values to substitute for the symbols in the circuit, and a tensor of operators to measure with respect to them. Given these inputs, the layer outputs a tensor of expectation values.
expectation_layer = tfq.layers.Expectation()
expectations = expectation_layer(circuit=data_and_model, symbol_names=['theta'], symbol_values=[[0.5]], operators=q_measure)
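To make the "average over samples" description above concrete, here is a small sketch (my own illustration, not from the whitepaper) that estimates the Z expectation on the first qubit directly from the samples tensor produced by the Sample layer; with only 4 repetitions the estimate is crude, and since q_measure is 0.5·Z the Expectation layer would report half of this value.

import tensorflow as tf

# Map each sampled bit to its Z eigenvalue (+1 for 0, -1 for 1) and average over
# the repetitions axis to get a per-circuit estimate of <Z>.
bits = tf.cast(samples.to_tensor(), tf.float32)  # shape [batch, repetitions, n_qubits]
eigenvalues = 1.0 - 2.0 * bits
sampled_z = tf.reduce_mean(eigenvalues[:, :, 0], axis=1)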

Differentiating Quantum Circuits

  • One of the core contributions of TFQ is integration with TensorFlow’s backpropagation mechanism. TFQ implements this functionality with the differentiator module.
  • The first class of quantum circuit differentiators is the finite difference method. This class samples the primary quantum circuit for at least two different parameter settings, then combines them to estimate the derivative.
    The ForwardDifference differentiator provides the most basic version of this. For each parameter in the circuit, the circuit is sampled at the current setting of the parameter; then each parameter is perturbed separately and the circuit is resampled.
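As a sketch of how this plugs into TensorFlow's gradient machinery (reusing data_and_model and q_measure from the snippets above; the choice of ForwardDifference and the GradientTape usage here are my own illustrative assumptions), a differentiator can be passed to an expectation layer and gradients taken with respect to the symbol values:

import tensorflow as tf
import tensorflow_quantum as tfq

# Attach an explicit finite-difference differentiator to the expectation layer.
diff = tfq.differentiators.ForwardDifference()
expectation_layer = tfq.layers.Expectation(differentiator=diff)

theta_values = tf.Variable([[0.5]])
with tf.GradientTape() as tape:
    expectations = expectation_layer(
        data_and_model,  # circuit tensor from the AddCircuit step above
        symbol_names=['theta'],
        symbol_values=theta_values,
        operators=q_measure)
# Gradient of the expectation value with respect to theta.
grads = tape.gradient(expectations, theta_values)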

Simplified Layers

  • These layers allow parameterized circuits to be updated by hybrid backpropagation without the user needing to provide the list of symbols associated with the circuit. The PQC layer provides automated Keras management of the variables in a parameterized circuit.
q = cirq.GridQubit(0, 0)
a, b = sympy.symbols("a b")
# A two-parameter circuit managed by the PQC layer, measured in the Z basis.
circuit = cirq.Circuit(cirq.Rz(a)(q), cirq.Rx(b)(q))
outputs = tfq.layers.PQC(circuit, cirq.Z(q))
quantum_data = tfq.convert_to_tensor([cirq.Circuit(), cirq.Circuit(cirq.X(q))])
res = outputs(quantum_data)
  • When the variables in a parameterized circuit are to be controlled completely by other user-specified machinery, for example by a classical neural network, the user can use the ControlledPQC layer:
outputs = tfq.layers.ControlledPQC(circuit, cirq.Z(q))
model_params = tf.convert_to_tensor([[0.5, 0.5], [0.25, 0.75]])
res = outputs([quantum_data, model_params])
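As an example of the "classical network controls the quantum circuit" pattern, here is a minimal sketch (my own illustration, not from the whitepaper) where a Dense layer predicts the two rotation angles that ControlledPQC substitutes into the circuit defined above:

import cirq
import tensorflow as tf
import tensorflow_quantum as tfq

# Quantum data circuits enter as serialized strings; classical features enter separately.
circuit_input = tf.keras.Input(shape=(), dtype=tf.string)
classical_input = tf.keras.Input(shape=(2,))
# A classical Dense layer produces the values substituted for the symbols (a, b).
angles = tf.keras.layers.Dense(2)(classical_input)
expectation = tfq.layers.ControlledPQC(circuit, cirq.Z(q))([circuit_input, angles])
model = tf.keras.Model(inputs=[circuit_input, classical_input], outputs=expectation)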

Quantum Circuit Simulation with qsim

  • qsim is a software package for simulating quantum circuits on classical computers. A C++ implementation of the package is used inside TFQ's TensorFlow ops.
  • Benchmarking shows that qsim's performance is better than Cirq's built-in simulator on both random and structured circuits.
  • Fast gate fusion algorithm
    Suppose we wish to apply two gates G1 and G2 to our initial state |ψ⟩, and suppose these gates act on the same two qubits. Since gate application is associative, we have (G2 G1)|ψ⟩ = G2(G1|ψ⟩). However, as the number of qubits n supporting |ψ⟩ grows, the left side of the equality becomes much faster to compute. This is because applying a gate to a state requires broadcasting the parameters of the gate to all 2^n elements of the state vector, so that each gate application incurs a cost scaling as 2^n. In contrast, multiplying two-qubit matrices incurs a small constant cost. This means a simulation of a circuit will be fastest if we pre-multiply as many gates as possible, while keeping the matrix size small, before applying them to a state. This pre-multiplication is called gate fusion and is accomplished with the qsim fusion algorithm. A toy numerical illustration of this idea is sketched at the end of this section.
  • Benchmarking results of simulation with TFQ and Cirq
Performance comparison

TFQ uses a single core, while the Cirq simulation is allowed to parallelize over all available cores using the Python multiprocessing module and the map function.

At the largest tested size of 20 qubits (plot on the left), we see approximately 7x savings in simulation time versus Cirq. The simulation of structured circuits shows that their smaller scale of entanglement makes them more amenable to compaction via the qsim fusion algorithm. At the largest tested size of 20 qubits, we see TFQ is up to 100 times faster than parallelized Cirq.

In summary, for these circuits, we find a roughly 100 times improvement in simulation time in TFQ versus Cirq.

Therefore, TFQ has a clear performance edge over Cirq when running in simulation mode, and this advantage holds even though there is serialization overhead between TensorFlow and the wrapped Cirq operations.
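As a footnote to the gate fusion discussion above, here is a toy NumPy sketch (my own illustration; qsim itself never materializes these full matrices) showing that fusing two gates acting on the same qubit pair before expanding them to the full state space gives the same result as applying them one by one:

import numpy as np

n = 10                         # number of qubits; the state has 2**n amplitudes
rest = np.eye(2 ** (n - 2))    # identity on the remaining n - 2 qubits

psi = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
psi /= np.linalg.norm(psi)

# Random 4x4 matrices stand in for two gates acting on the first two qubits.
g1 = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
g2 = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)

# Unfused: expand each gate to the full space and sweep the state vector twice.
unfused = np.kron(g2, rest) @ (np.kron(g1, rest) @ psi)

# Fused: one cheap 4x4 multiplication, then a single sweep over the amplitudes.
fused = np.kron(g2 @ g1, rest) @ psi

assert np.allclose(unfused, fused)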

This is the end of the first part; in the next part we will get into the math of hybrid quantum-classical machine learning.

Till next time stay home and stay safe.

Source: https://arxiv.org/pdf/2003.02989.pdf
