Ambainis, A, Balodis, K, Belovs, A, Lee, T, Santha, M & Smotrovs, J 2015, 'Separations in Query Complexity Based on Pointer Functions', Journal of the ACM, vol. 64, no. 5.
In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total boolean function is given by the function $f$ on $n=2^k$ bits defined by a complete binary tree of NAND gates of depth $k$, which achieves $R_0(f) = O(D(f)^{0.7537\ldots})$. We show this is false by giving an example of a total boolean function $f$ on $n$ bits whose deterministic query complexity is $\Omega(n/\log(n))$ while its zero-error randomized query complexity is $\tilde O(\sqrt{n})$. We further show that the quantum query complexity of the same function is $\tilde O(n^{1/4})$, giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total boolean function $g$ on $n$ variables that has zero-error randomized query complexity $\Omega(n/\log(n))$ and bounded-error randomized query complexity $R(g) = \tilde O(\sqrt{n})$. This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is $Q_E(g) = \tilde O(\sqrt{n})$. These two functions show that the relations $D(f) = O(R_1(f)^2)$ and $R_0(f) = \tilde O(R(f)^2)$ are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between $Q$ and $R_0$, a $3/2$-power separation between $Q_E$ and $R$, and a 4th power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity.
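A quick numeric sketch of the exponents quoted above. The closed form (1+sqrt(33))/4 for the Saks-Wigderson NAND-tree bound is taken from the wider literature rather than from this entry, so treat it as an assumption; the n-dependent quantities below only illustrate the claimed scalings, with all constants and polylog factors dropped.

```python
import math

# Zero-error randomized cost of a depth-k NAND tree scales as ((1 + sqrt(33))/4)^k,
# while its deterministic cost is 2^k, which gives the 0.7537... exponent.
base = (1 + math.sqrt(33)) / 4
print(f"Saks-Wigderson exponent: {math.log2(base):.4f}")   # ~0.7537

# The pointer-function separations replace this with power gaps
# (quadratic D vs R0, quartic D vs Q, up to polylog factors).
n = 10**6
print(f"D  ~ n/log n : {n / math.log(n):.3g}")
print(f"R0 ~ sqrt(n) : {math.sqrt(n):.3g}")
print(f"Q  ~ n^(1/4) : {n ** 0.25:.3g}")
```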
Ban, Y & Chen, X 2015, 'Counter-diabatic driving for fast spin control in a two-electron double quantum dot', Scientific Reports, vol. 4, no. 1.
Bandyopadhyay, S, Cosentino, A, Johnston, N, Russo, V, Watrous, J & Yu, N 2015, 'Limitations on Separable Measurements by Convex Optimization', IEEE Transactions on Information Theory, vol. 61, no. 6, pp. 3593-3604.
© 1963-2012 IEEE. We prove limitations on LOCC and separable measurements in bipartite state discrimination problems using techniques from convex optimization. Specific results that we prove include: an exact formula for the optimal probability of correctly discriminating any set of either three or four Bell states via LOCC or separable measurements when the parties are given an ancillary partially entangled pair of qubits; an easily checkable characterization of when an unextendable product set is perfectly discriminated by separable measurements, along with the first known example of an unextendable product set that cannot be perfectly discriminated by separable measurements; and an optimal bound on the success probability for any LOCC or separable measurement for the recently proposed state discrimination problem of Yu, Duan, and Ying.
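As a baseline for the convex-optimization machinery used in this paper, the globally optimal (unrestricted-POVM) minimum-error discrimination of an ensemble is itself a small semidefinite program. The sketch below, which assumes numpy and cvxpy are available, solves it for the qubit "trine" ensemble, where the known optimum is 2/3; the paper's separable and PPT measurement restrictions would appear as additional constraints that are not modelled here.

```python
import numpy as np
import cvxpy as cp

def trine(k):
    """k-th trine state |psi_k><psi_k| with Hilbert-space angle 2*pi*k/3."""
    t = 2 * np.pi * k / 3
    v = np.array([np.cos(t), np.sin(t)])
    return np.outer(v, v)

states = [trine(k) for k in range(3)]
priors = [1 / 3] * 3
d = 2

# POVM elements M_k >= 0 that sum to the identity.
M = [cp.Variable((d, d), hermitian=True) for _ in range(3)]
constraints = [m >> 0 for m in M] + [sum(M) == np.eye(d)]
p_success = sum(priors[k] * cp.real(cp.trace(M[k] @ states[k])) for k in range(3))

prob = cp.Problem(cp.Maximize(p_success), constraints)
prob.solve()
print("optimal success probability:", round(prob.value, 4))   # expected ~ 2/3
```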
Berta, M & Tomamichel, M 2015, 'The Fidelity of Recovery is Multiplicative', IEEE Transactions on Information Theory, vol. 62, no. 4, pp. 1758-1763.
Fawzi and Renner [Commun. Math. Phys. 340(2):575, 2015] recently established a lower bound on the conditional quantum mutual information (CQMI) of tripartite quantum states $ABC$ in terms of the fidelity of recovery (FoR), i.e. the maximal fidelity of the state $ABC$ with a state reconstructed from its marginal $BC$ by acting only on the $C$ system. The FoR measures quantum correlations by the local recoverability of global states and has many properties similar to the CQMI. Here we generalize the FoR and show that the resulting measure is multiplicative by utilizing semi-definite programming duality. This allows us to simplify an operational proof by Brandao et al. [Phys. Rev. Lett. 115(5):050501, 2015] of the above-mentioned lower bound that is based on quantum state redistribution. In particular, in contrast to the previous approaches, our proof does not rely on de Finetti reductions.
Berta, M, Fawzi, O & Tomamichel, M 2015, 'On Variational Expressions for Quantum Relative Entropies', Letters in Mathematical Physics, vol. 107, no. 12, pp. 2239-2265.
Distance measures between quantum states like the trace distance and the fidelity can naturally be defined by optimizing a classical distance measure over all measurement statistics that can be obtained from the respective quantum states. In contrast, Petz showed that the measured relative entropy, defined as a maximization of the Kullback-Leibler divergence over projective measurement statistics, is strictly smaller than Umegaki's quantum relative entropy whenever the states do not commute. We extend this result in two ways. First, we show that Petz' conclusion remains true if we allow general positive operator valued measures. Second, we extend the result to Renyi relative entropies and show that for non-commuting states the sandwiched Renyi relative entropy is strictly larger than the measured Renyi relative entropy for $\alpha \in (\frac12, \infty)$, and strictly smaller for $\alpha \in [0,\frac12)$. The latter statement provides counterexamples for the data-processing inequality of the sandwiched Renyi relative entropy for $\alpha < \frac12$. Our main tool is a new variational expression for the measured Renyi relative entropy, which we further exploit to show that certain lower bounds on quantum conditional mutual information are superadditive.
Bruno, A, de Lange, G, Asaad, S, van der Enden, KL, Langford, NK & DiCarlo, L 2015, 'Reducing intrinsic loss in superconducting resonators by surface treatment and deep etching of silicon substrates', Applied Physics Letters, vol. 106, no. 18, p. 182601.
We present microwave-frequency NbTiN resonators on silicon, systematically achieving internal quality factors above 1 M in the quantum regime. We use two techniques to reduce losses associated with two-level systems: an additional substrate surface treatment prior to NbTiN deposition to optimize the metal-substrate interface and deep reactive-ion etching of the substrate to displace the substrate-vacuum interfaces away from high electric fields. The temperature and power dependence of resonator behavior indicate that two-level systems still contribute significantly to energy dissipation, suggesting that more interface optimization could further improve performance.
Chen, J, Ji, Z, Yu, N & Zeng, B 2015, 'Detecting Consistency of Overlapping Quantum Marginals by Separability', Phys. Rev. A, vol. 93, no. 3, p. 032105.
The quantum marginal problem asks whether a set of given density matrices are consistent, i.e., whether they can be the reduced density matrices of a global quantum state. Not many non-trivial analytic necessary (or sufficient) conditions are known for the problem in general. We propose a method to detect consistency of overlapping quantum marginals by considering the separability of some derived states. Our method works well for the $k$-symmetric extension problem in general, and for the general overlapping marginal problems in some cases. Our work is, in some sense, the converse to the well-known $k$-symmetric extension criterion for separability.
Chen, J-Y, Ji, Z, Liu, Z-X, Shen, Y & Zeng, B 2015, 'Geometry of reduced density matrices for symmetry-protected topological phases', Phys. Rev. A, vol. 93, no. 1, p. 012309.
In this paper, we study the geometry of reduced density matrices for states with symmetry-protected topological (SPT) order. We observe ruled surface structures on the boundary of the convex set of low dimension projections of the reduced density matrices. In order to signal the SPT order using ruled surfaces, it is important that we add a symmetry-breaking term to the boundary of the system---no ruled surface emerges in systems without boundary or when we add a symmetry-breaking term representing a thermodynamic quantity. Although the ruled surfaces only appear in the thermodynamic limit where the ground-state degeneracy is exact, we analyze the precision of our numerical algorithm and show that a finite system calculation suffices to reveal the ruled surface structures.
Chen, T, Yu, N & Han, T 2015, 'Continuous-time orbit problems are decidable in polynomial-time', Information Processing Letters, vol. 115, no. 1, pp. 11-14.
Cheng, H-C, Hsieh, M-H & Tomamichel, M 2015, 'Exponential Decay of Matrix $\Phi$-Entropies on Markov Semigroups with Applications to Dynamical Evolutions of Quantum Ensembles', Journal of Mathematical Physics, vol. 58, no. 9, p. 092202.
In the study of Markovian processes, one of the principal achievements is the equivalence between the $\Phi$-Sobolev inequalities and an exponential decrease of the $\Phi$-entropies. In this work, we develop a framework of Markov semigroups on matrix-valued functions and generalize the above equivalence to the exponential decay of matrix $\Phi$-entropies. This result also specializes to spectral gap inequalities and modified logarithmic Sobolev inequalities in the random matrix setting. To establish the main result, we define a non-commutative generalization of the carré du champ operator, and prove a de Bruijn's identity for matrix-valued functions. The proposed Markov semigroups acting on matrix-valued functions have immediate applications in the characterization of the dynamical evolution of quantum ensembles. We consider two special cases of quantum unital channels, namely, the depolarizing channel and the phase-damping channel. In the former, since there exists a unique equilibrium state, we show that the matrix $\Phi$-entropy of the resulting quantum ensemble decays exponentially as time goes on. Consequently, we obtain a stronger notion of monotonicity of the Holevo quantity - the Holevo quantity of the quantum ensemble decays exponentially in time and the convergence rate is determined by the modified log-Sobolev inequalities. However, in the latter, the matrix $\Phi$-entropy of the quantum ensemble that undergoes the phase-damping Markovian evolution generally will not decay exponentially. This is because there are multiple equilibrium states for such a channel. Finally, we also consider examples of statistical mixing of Markov semigroups on matrix-valued functions. We can explicitly calculate the convergence rate of a Markovian jump process defined on Boolean hypercubes, and provide upper bounds of the mixing time on these types of examples.
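The monotone decay of the Holevo quantity under depolarizing evolution can be checked numerically in a few lines. This sketch applies a discrete-time qubit depolarizing map repeatedly to a two-state ensemble and prints the Holevo quantity; it only illustrates the monotone decay, not the continuous-time semigroup or the decay rate derived in the paper, and the ensemble itself is an arbitrary choice.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def depolarize(rho, p):
    """Qubit depolarizing map: keep rho with prob 1-p, replace by I/2 with prob p."""
    return (1 - p) * rho + p * np.eye(2) / 2

def holevo(probs, states):
    avg = sum(q * s for q, s in zip(probs, states))
    return vn_entropy(avg) - sum(q * vn_entropy(s) for q, s in zip(probs, states))

# Equal-weight ensemble of |0><0| and |+><+|.
ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.full((2, 2), 0.5)
probs, states = [0.5, 0.5], [ket0, plus]

p = 0.1   # per-step depolarizing strength
for step in range(6):
    print(f"step {step}: Holevo quantity = {holevo(probs, states):.4f} bits")
    states = [depolarize(s, p) for s in states]
```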
Coles, PJ, Berta, M, Tomamichel, M & Wehner, S 2015, 'Entropic Uncertainty Relations and their Applications', Rev. Mod. Phys., vol. 89, no. 1, pp. 015002-58.
Heisenberg's uncertainty principle forms a fundamental element of quantum mechanics. Uncertainty relations in terms of entropies were initially proposed to deal with conceptual shortcomings in the original formulation of the uncertainty principle and, hence, play an important role in quantum foundations. More recently, entropic uncertainty relations have emerged as the central ingredient in the security analysis of almost all quantum cryptographic protocols, such as quantum key distribution and two-party quantum cryptography. This review surveys entropic uncertainty relations that capture Heisenberg's idea that the results of incompatible measurements are impossible to predict, covering both finite- and infinite-dimensional measurements. These ideas are then extended to incorporate quantum correlations between the observed object and its environment, allowing for a variety of recent, more general formulations of the uncertainty principle. Finally, various applications are discussed, ranging from entanglement witnessing to wave-particle duality to quantum cryptography.
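One of the simplest relations covered by such a survey is the Maassen-Uffink bound H(X) + H(Z) >= -log2 max |<x|z>|^2, which equals 1 bit for the qubit X and Z bases. The sketch below checks it for a randomly drawn pure qubit state; the specific state and random seed are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(7)

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Random pure qubit state.
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)

z_basis = np.eye(2, dtype=complex)                                 # {|0>, |1>}
x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # {|+>, |->}

p_z = np.abs(z_basis.conj().T @ v) ** 2
p_x = np.abs(x_basis.conj().T @ v) ** 2

# Maassen-Uffink bound: H(X) + H(Z) >= -log2 max_{x,z} |<x|z>|^2 = 1 bit here.
c = np.max(np.abs(x_basis.conj().T @ z_basis) ** 2)
print("H(Z) + H(X) =", round(shannon(p_z) + shannon(p_x), 4))
print("MU bound    =", round(-np.log2(c), 4))
```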
Combes, J & Ferrie, C 2015, 'Cost of postselection in decision theory', Physical Review A, vol. 92, no. 2, pp. 1-9.
© 2015 American Physical Society. Postselection is the process of discarding outcomes from statistical trials that are not the event one desires. Postselection can be useful in many applications where the cost of getting the wrong event is implicitly high. However, unless this cost is specified exactly, one might conclude that discarding all data is optimal. Here we analyze the optimal decision rules and quantum measurements in a decision theoretic setting where a prespecified cost is assigned to discarding data. Our scheme interpolates between unambiguous state discrimination (when the cost of postselection is zero) and a minimum error measurement (when the cost of postselection is maximal). We also relate our formulation to previous approaches which focus on minimizing the probability of indecision.
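A minimal worked example of the trade-off the paper formalizes, using only textbook formulas for two equiprobable pure states with overlap s: the Helstrom minimum-error strategy never abstains, while standard unambiguous discrimination never errs but abstains with probability s. Comparing expected costs for a per-abstention cost c shows where each extreme wins; the paper's general decision-theoretic measurements, which interpolate between the two, are not implemented here.

```python
import numpy as np

s = 0.6   # overlap |<psi_1|psi_2>| of the two equiprobable pure states

# Minimum-error (Helstrom) strategy: never abstain.
p_err_helstrom = 0.5 * (1 - np.sqrt(1 - s**2))

# Unambiguous discrimination: never wrong, abstains with probability s.
p_abstain_ud = s

# Expected cost when an error costs 1 and an abstention costs c in [0, 1].
for c in [0.0, 0.2, 0.5, 1.0]:
    cost_helstrom = p_err_helstrom        # no abstentions
    cost_ud = c * p_abstain_ud            # no errors
    best = "unambiguous" if cost_ud < cost_helstrom else "minimum-error"
    print(f"c = {c:.1f}: Helstrom cost = {cost_helstrom:.3f}, "
          f"UD cost = {cost_ud:.3f} -> prefer {best}")
```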
Cui, SX, Yu, N & Zeng, B 2015, 'Generalized graph states based on Hadamard matrices', Journal of Mathematical Physics, vol. 56, no. 7, p. 072201.
Graph states are widely used in quantum information theory, including entanglement theory, quantum error correction, and one-way quantum computing. Graph states have a nice structure related to a certain graph, which can be specified by either a stabilizer group or an encoding circuit, both of which can be obtained directly from the graph. To generalize graph states, whose stabilizer groups are abelian subgroups of the Pauli group, one approach taken is to study non-abelian stabilizers. In this work, we propose to generalize graph states based on the encoding circuit, which is completely determined by the graph and a Hadamard matrix. We study the entanglement structures of these generalized graph states and show that they are all maximally mixed locally. We also explore the relationship between the equivalence of Hadamard matrices and local equivalence of the corresponding generalized graph states. This leads to a natural generalization of the Pauli (X, Z) pairs, which characterizes the local symmetries of these generalized graph states. Our approach is also naturally generalized to construct graph quantum codes which are beyond stabilizer codes.
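For the standard qubit case (the 2x2 Hadamard), a graph state is obtained by preparing |+> on every vertex and applying controlled-Z along every edge, and its single-qubit marginals are maximally mixed, which is the property the paper generalizes. The numpy sketch below builds the triangle-graph state and checks this; the generalized construction with larger Hadamard matrices is not reproduced.

```python
import numpy as np

def cz_on(n, i, j):
    """Dense controlled-Z between qubits i and j (qubit 0 = most significant bit)."""
    dim = 2 ** n
    U = np.eye(dim)
    for b in range(dim):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            U[b, b] = -1
    return U

def graph_state(n, edges):
    plus = np.ones(2) / np.sqrt(2)
    psi = plus
    for _ in range(n - 1):
        psi = np.kron(psi, plus)
    for (i, j) in edges:
        psi = cz_on(n, i, j) @ psi
    return psi

def single_qubit_marginal(psi, n, k):
    """Reduced density matrix of qubit k."""
    t = psi.reshape([2] * n)
    keep = [a for a in range(n) if a != k]
    return np.tensordot(t, t.conj(), axes=(keep, keep))

n, edges = 3, [(0, 1), (1, 2), (0, 2)]          # triangle graph
psi = graph_state(n, edges)
for k in range(n):
    print(f"qubit {k} marginal:\n{np.round(single_qubit_marginal(psi, n, k), 3)}")
```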
Dangniam, N & Ferrie, C 2015, 'Quantum Bochner’s theorem for phase spaces built on projective representations', Journal of Physics A: Mathematical and Theoretical, vol. 48, no. 11, p. 115305.
Ferrie, C & Blume-Kohout, R 2015, 'Minimax quantum tomography: the ultimate bounds on accuracy', Phys. Rev. Lett., vol. 116, no. 9, p. 090407.
A minimax estimator has the minimum possible error ('risk') in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of non-adaptive tomography scales as $O(1/\sqrt{N})$, in contrast to that of classical probability estimation which is $O(1/N)$. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
Frati, F, Gaspers, S, Gudmundsson, J & Mathieson, L 2015, 'Augmenting Graphs to Minimize the Diameter', Algorithmica, vol. 72, no. 4, pp. 995-1010.
© 2014, Springer Science+Business Media New York. We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem.
Friesen, M, Hamed, A, Lee, T & Oliver Theis, D 2015, 'Fooling-sets and rank', European Journal of Combinatorics, vol. 48, pp. 143-153.
Granade, C, Ferrie, C & Cory, DG 2015, 'Accelerated randomized benchmarking', New Journal of Physics, vol. 17, no. 1, p. 013042.
© 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
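For context, the conventional (non-accelerated) analysis fits the survival probability F(m) = A p^m + B over sequence length m and converts p into an average gate fidelity; the paper's point is that Bayesian estimation with prior information reaches a given accuracy with far less data than such curve fitting. The sketch below only reproduces the conventional baseline on synthetic data, assuming numpy and scipy; the accelerated protocol itself is not implemented.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def rb_model(m, A, B, p):
    """Zeroth-order randomized-benchmarking decay: F(m) = A * p**m + B."""
    return A * p**m + B

# Synthetic experiment with a finite number of shots per sequence length.
true_A, true_B, true_p = 0.5, 0.5, 0.995
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
shots = 100
survival = np.array([
    rng.binomial(shots, rb_model(m, true_A, true_B, true_p)) / shots
    for m in lengths
])

(A_hat, B_hat, p_hat), _ = curve_fit(rb_model, lengths, survival, p0=[0.5, 0.5, 0.99])
# For a qubit, average gate fidelity F_avg = 1 - (1 - p)/2.
print(f"estimated p = {p_hat:.5f}, implied average gate fidelity = {1 - (1 - p_hat)/2:.5f}")
```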
Gross, JA, Dangniam, N, Ferrie, C & Caves, CM 2015, 'Novelty, efficacy, and significance of weak measurements for quantum tomography', Physical Review A, vol. 92, no. 6.
Guo, C, Allen, FI, Lee, Y, Le, TP, Song, C, Ciston, J, Minor, AM & Gomez, ED 2015, 'Probing Local Electronic Transitions in Organic Semiconductors through Energy‐Loss Spectrum Imaging in the Transmission Electron Microscope', Advanced Functional Materials, vol. 25, no. 38, pp. 6071-6076.
Improving the performance of organic electronic devices depends on exploiting the complex nanostructures formed in the active layer. Current imaging methods based on transmission electron microscopy provide limited chemical sensitivity, and thus the application to materials with compositionally similar phases or complicated multicomponent systems is challenging. Here, it is demonstrated that monochromated transmission electron microscopes can generate contrast in organic thin films based on differences in the valence electronic structure at energy losses below 10 eV. In this energy range, electronic fingerprints corresponding to interband excitations in organic semiconductors can be utilized to generate significant spectral contrast between phases. Based on differences in chemical bonding of organic materials, high‐contrast images are thus obtained revealing the phase separation in polymer/fullerene mixtures. By applying principal component analysis to the spectroscopic image series, further details about phase compositions and local electronic transitions in the active layer of organic semiconductor mixtures can be explored.
Haah, J, Harrow, AW, Ji, Z, Wu, X & Yu, N 2015, 'Sample-optimal tomography of quantum states', IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5628-5641.
It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. Previously, it was known only that estimating states to error $\epsilon$ in trace distance required $O(dr^2/\epsilon^2)$ copies for a $d$-dimensional density matrix of rank $r$. Here, we give a theoretical measurement scheme (POVM) that requires $O(dr/\delta) \ln(d/\delta)$ copies of $\rho$ to error $\delta$ in infidelity, and a matching lower bound up to logarithmic factors. This implies $O((dr/\epsilon^2) \ln(d/\epsilon))$ copies suffice to achieve error $\epsilon$ in trace distance. We also prove that for independent (product) measurements, $\Omega(dr^2/\delta^2)/\ln(1/\delta)$ copies are necessary in order to achieve error $\delta$ in infidelity. For fixed $d$, our measurement can be implemented on a quantum computer in time polynomial in $n$.
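A quick arithmetic comparison of the two copy-count scalings quoted above (collective-measurement upper bound versus product-measurement lower bound), with all unspecified constants set to one, purely to illustrate the gap; the bounds themselves are taken directly from the entry.

```python
import math

def copies_collective(d, r, delta):
    """O((d*r/delta) * ln(d/delta)) -- collective-measurement upper bound."""
    return (d * r / delta) * math.log(d / delta)

def copies_product(d, r, delta):
    """Omega(d*r^2/delta^2) / ln(1/delta) -- product-measurement lower bound."""
    return (d * r**2 / delta**2) / math.log(1 / delta)

for d, r, delta in [(2**4, 2**4, 0.01), (2**8, 2**8, 0.01)]:
    print(f"d={d:4d}, r={r:4d}, delta={delta}: "
          f"collective ~ {copies_collective(d, r, delta):.2e}, "
          f"product ~ {copies_product(d, r, delta):.2e}")
```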
Ivanyos, G, Karpinski, M, Qiao, Y & Santha, M 2015, 'Generalized Wong sequences and their applications to Edmonds' problems', Journal of Computer and System Sciences, vol. 81, no. 7, pp. 1373-1386.
© 2015 Elsevier Inc. Given a linear subspace B of the n×n matrices over some field F, we consider the following problems: symbolic matrix rank (SMR) asks to determine the maximum rank among matrices in B, while symbolic determinant identity testing (SDIT) asks to decide whether there exists a nonsingular matrix in B. The constructive versions of these problems ask to find a matrix of maximum rank, respectively a nonsingular matrix, if there exists one. Our first algorithm solves the constructive SMR when B is spanned by unknown rank one matrices, answering an open question of Gurvits. Our second algorithm solves the constructive SDIT when B is spanned by triangularizable matrices. (The triangularization is not given explicitly.) Both algorithms work over fields of size ≥n+1. Our framework is based on generalizing Wong sequences, a classical method to deal with pairs of matrices, to pairs of matrix spaces.
Kueng, R & Ferrie, C 2015, 'Near-optimal quantum tomography: estimators and bounds', New J. Phys., vol. 17, p. 123013.
We give bounds on the average fidelity achievable by any quantum state estimator, which is arguably the most prominently used figure of merit in quantum state tomography. Moreover, these bounds can be computed online---that is, while the experiment is running. We show numerically that these bounds are quite tight for relevant distributions of density matrices. We also show that the Bayesian mean estimator is ideal in the sense of performing close to the bound without requiring optimization. Our results hold for all finite dimensional quantum systems.
Kulkarni, R, Qiao, Y & Sun, X 2015, 'Any monotone property of 3-uniform hypergraphs is weakly evasive', Theoretical Computer Science, vol. 588, pp. 16-23.
© 2014 Elsevier B.V. For a Boolean function $f$, let $D(f)$ denote its deterministic decision tree complexity, i.e., minimum number of (adaptive) queries required in worst case in order to determine $f$. In a classic paper, Rivest and Vuillemin [11] show that any non-constant monotone property $P:\{0,1\}^{\binom{n}{2}}\to\{0,1\}$ of $n$-vertex graphs has $D(P)=\Omega(n^2)$. We extend their result to 3-uniform hypergraphs. In particular, we show that any non-constant monotone property $P:\{0,1\}^{\binom{n}{3}}\to\{0,1\}$ of $n$-vertex 3-uniform hypergraphs has $D(P)=\Omega(n^3)$. Our proof combines the combinatorial approach of Rivest and Vuillemin with the topological approach of Kahn, Saks, and Sturtevant [6]. Interestingly, our proof makes use of Vinogradov's Theorem (weak Goldbach Conjecture), inspired by its recent use by Babai et al. [1] in the context of the topological approach. Our work leaves the generalization to $k$-uniform hypergraphs as an intriguing open question.
Le, TP, Shang, Z, Wang, L, Li, N, Vajjala Kesava, S, O’Connor, JW, Chang, Y, Bae, C, Zhu, C, Hexemer, A, Gomez, EW, Salleo, A, Hickner, MA & Gomez, ED 2015, 'Miscibility and Acid Strength Govern Contact Doping of Organic Photovoltaics with Strong Polyelectrolytes', Macromolecules, vol. 48, no. 15, pp. 5162-5171.
Leung, D & Yu, N 2015, 'Maximum privacy without coherence, zero-error', Journal of Mathematical Physics, vol. 57, no. 9, p. 092202.
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by arXiv:1312.4989, we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Li, N, Ferrie, C, Gross, JA, Kalev, A & Caves, CM 2015, 'Fisher-symmetric informationally complete measurements for pure states', Phys. Rev. Lett., vol. 116, no. 18, p. 180402.
We introduce a new kind of quantum measurement that is defined to be symmetric in the sense of uniform Fisher information across a set of parameters that injectively represent pure quantum states in the neighborhood of a fiducial pure state. The measurement is locally informationally complete---i.e., it uniquely determines these parameters, as opposed to distinguishing two arbitrary quantum states---and it is maximal in the sense of a multi-parameter quantum Cramer-Rao bound. For a $d$-dimensional quantum system, requiring only local informational completeness allows us to reduce the number of outcomes of the measurement from a minimum close to but below $4d-3$, for the usual notion of global pure-state informational completeness, to $2d-1$.
Li, S & Liu, W 2015, 'Cardinal directions: a comparison of direction relation matrix and objects interaction matrix', International Journal of Geographical Information Science, vol. 29, no. 2, pp. 194-216.
© 2014 Taylor & Francis. How to express and reason with cardinal directions between extended objects such as lines and regions is an important problem in qualitative spatial reasoning (QSR), a common subfield of geographical information science and Artificial Intelligence (AI). The direction relation matrix (DRM) model, proposed by Goyal and Egenhofer in 1997, is one very expressive relation model for this purpose. Unlike many other relation models in QSR, the set-theoretic converse of a DRM relation is not necessarily representable in DRM. Schneider et al. regard this as a serious shortcoming and propose, in their work published in ACM TODS (2012), the objects interaction matrix (OIM) model for modelling cardinal directions between complex regions. OIM is also a tiling-based model that consists of two phases: the tiling phase and the interpretation phase. Although it was claimed that OIM is a novel concept, we show that it is not so different from DRM if we represent the cardinal direction of two regions a and b by both the DRM of a to b and that of b to a. Under this natural assumption, we give methods for computing DRMs from OIMs and vice versa, and show that OIM is almost the same as DRM in the tiling phase, and becomes less precise after interpretation. Furthermore, exploiting the similarity between the two models, we prove that the consistency of a complete basic OIM network can be decided in cubic time. This answers an open problem raised by Schneider et al. regarding efficient algorithms for reasoning with OIM.
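The tiling phase shared by DRM and OIM is easy to see for the special case of axis-aligned rectangles: the reference object's bounding box splits the plane into nine tiles, and the matrix records which tiles the target intersects. The sketch below is restricted to rectangles for brevity; the actual models handle arbitrary regions and lines.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle [xmin, xmax] x [ymin, ymax]."""
    xmin: float
    xmax: float
    ymin: float
    ymax: float

def overlaps(a0, a1, b0, b1):
    return a0 < b1 and b0 < a1

def direction_relation_matrix(target: Rect, ref: Rect):
    """3x3 matrix (rows: N, same band, S; cols: W, same band, E):
    which tiles induced by ref's bounding box does the target intersect?"""
    inf = float("inf")
    cols = [(-inf, ref.xmin), (ref.xmin, ref.xmax), (ref.xmax, inf)]
    rows = [(ref.ymax, inf), (ref.ymin, ref.ymax), (-inf, ref.ymin)]
    return [[int(overlaps(target.xmin, target.xmax, x0, x1) and
                 overlaps(target.ymin, target.ymax, y0, y1))
             for (x0, x1) in cols] for (y0, y1) in rows]

ref_region = Rect(0, 2, 0, 2)
target_region = Rect(3, 5, 1, 4)      # lies east / north-east of the reference
for row in direction_relation_matrix(target_region, ref_region):
    print(row)
```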
Lu, D, Xin, T, Yu, N, Ji, Z, Chen, J, Long, G, Baugh, J, Peng, X, Zeng, B & Laflamme, R 2015, 'Tomography is necessary for universal entanglement detection with single-copy observables', Phys. Rev. Lett., vol. 116, no. 23, p. 230501.
Entanglement, one of the central mysteries of quantum mechanics, plays an essential role in numerous applications of quantum information theory. A natural question of both theoretical and experimental importance is whether universal entanglement detection is possible without full state tomography. In this work, we prove a no-go theorem that rules out this possibility for any non-adaptive schemes that employ single-copy measurements only. We also examine in detail a previously implemented experiment, which claimed to detect entanglement of two-qubit states via adaptive single-copy measurements without full state tomography. By performing the experiment and analyzing the data, we demonstrate that the information gathered is indeed sufficient to reconstruct the state. These results reveal a fundamental limit for single-copy measurements in entanglement detection, and provide a general framework to study the detection of other interesting properties of quantum states, such as the positivity of partial transpose and the $k$-symmetric extendibility.
Mathieson, L 2015, 'Graph Editing Problems with Extended Regularity Constraints', Theoretical Computer Science, vol. 677, pp. 56-68.
Graph editing problems offer an interesting perspective on sub- and supergraph identification problems for a large variety of target properties. They have also attracted significant attention in recent years, particularly in the area of parameterized complexity as the problems have rich parameter ecologies. In this paper we examine generalisations of the notion of editing a graph to obtain a regular subgraph. In particular we extend the notion of regularity to include two variants of edge-regularity along with the unifying constraint of strong regularity. We present a number of results, with the central observation that these problems retain the general complexity profile of their regularity-based inspiration: when the number of edits $k$ and the maximum degree $r$ are taken together as a combined parameter, the problems are tractable (i.e. in FPT), but are otherwise intractable. We also examine variants of the basic editing to obtain a regular subgraph problem from the perspective of parameterizing by the treewidth of the input graph. In this case the treewidth of the input graph essentially becomes a limiting parameter on the natural $k+r$ parameterization.
Paler, A, Polian, I, Nemoto, K & Devitt, SJ 2015, 'Fault-Tolerant High Level Quantum Circuits: Form, Compilation and Description', Quantum Science and Technology, vol. 2, no. 2, p. 025003.
Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error corrected description from a higher level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement form (ICM); it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations and a series of time ordered $X$- or $Z$-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
Pfister, C, Kaniewski, J, Tomamichel, M, Mantri, A, Schmucker, R, McMahon, N, Milburn, G & Wehner, S 2015, 'Understanding nature from experimental observations: a theory independent test for gravitational decoherence', Nature Communications, vol. 7, p. 13022.
Quantum mechanics and the theory of gravity are presently not compatible. A particular question is whether gravity causes decoherence - an unavoidable source of noise. Several models for gravitational decoherence have been proposed, not all of which can be described quantum mechanically. In parallel, several experiments have been proposed to test some of these models, where the data obtained by such experiments is analyzed assuming quantum mechanics. Since we may need to modify quantum mechanics to account for gravity, however, one may question the validity of using quantum mechanics as a calculational tool to draw conclusions from experiments concerning gravity. Here we propose an experiment to estimate gravitational decoherence whose conclusions hold even if quantum mechanics would need to be modified. We first establish a general information-theoretic notion of decoherence which reduces to the standard measure within quantum mechanics. Second, drawing on ideas from quantum information, we propose a very general experiment that allows us to obtain a quantitative estimate of decoherence of any physical process for any physical theory satisfying only very mild conditions. Finally, we propose a concrete experiment using optomechanics to estimate gravitational decoherence in any such theory, including quantum mechanics as a special case. Our work raises the interesting question whether other properties of nature could similarly be established from experimental observations alone - that is, without already having a rather well formed theory of nature like quantum mechanics to make sense of experimental data.
Schockaert, S & Li, S 2015, 'Realizing RCC8 networks using convex regions', Artificial Intelligence, vol. 218, pp. 74-105.
© 2014 Elsevier B.V. All rights reserved. RCC8 is a popular fragment of the region connection calculus, in which qualitative spatial relations between regions, such as adjacency, overlap and parthood, can be expressed. While RCC8 is essentially dimensionless, most current applications are confined to reasoning about two-dimensional or three-dimensional physical space. In this paper, however, we are mainly interested in conceptual spaces, which typically are high-dimensional Euclidean spaces in which the meaning of natural language concepts can be represented using convex regions. The aim of this paper is to analyze how the restriction to convex regions constrains the realizability of networks of RCC8 relations. First, we identify all ways in which the set of RCC8 base relations can be restricted to guarantee that consistent networks can be convexly realized in respectively 1D, 2D, 3D, and 4D. Most surprisingly, we find that if the relation 'partially overlaps' is disallowed, all consistent atomic RCC8 networks can be convexly realized in 4D. If instead refinements of the relation 'part of' are disallowed, all consistent atomic RCC8 relations can be convexly realized in 3D. We furthermore show, among others, that any consistent RCC8 network with 2n + 1 variables can be realized using convex regions in the n-dimensional Euclidean space.
Sutter, D, Tomamichel, M & Harrow, AW 2015, 'Strengthened Monotonicity of Relative Entropy via Pinched Petz Recovery Map', IEEE Transactions on Information Theory, vol. 62, no. 5, pp. 2907-2913.
The quantum relative entropy between two states satisfies a monotonicity property meaning that applying the same quantum channel to both states can never increase their relative entropy. It is known that this inequality is only tight when there is a 'recovery map' that exactly reverses the effects of the quantum channel on both states. In this paper we strengthen this inequality by showing that the difference of relative entropies is bounded below by the measured relative entropy between the first state and a recovered state from its processed version. The recovery map is a convex combination of rotated Petz recovery maps and perfectly reverses the quantum channel on the second state. As a special case we reproduce recent lower bounds on the conditional mutual information such as the one proved in [Fawzi and Renner, Commun. Math. Phys., 2015]. Our proof only relies on elementary properties of pinching maps and the operator logarithm.
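The building block of the recovery map above is the Petz map R_{sigma,N}(X) = sigma^{1/2} N^dag(N(sigma)^{-1/2} X N(sigma)^{-1/2}) sigma^{1/2}, which exactly reverses the channel on the reference state sigma. The numpy sketch below verifies this for a partial-trace channel; the rotated and pinched refinements used in the paper are not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def mat_pow(A, p):
    """Hermitian matrix power via eigendecomposition (near-zero eigenvalues dropped)."""
    w, V = np.linalg.eigh(A)
    wp = np.array([x**p if x > 1e-12 else 0.0 for x in w])
    return (V * wp) @ V.conj().T

dA, dB = 2, 2

def channel(rho_ab):                       # N = partial trace over system B
    return np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

def adjoint(x_a):                          # N^dagger(X) = X tensor I_B
    return np.kron(x_a, np.eye(dB))

sigma = random_density_matrix(dA * dB)     # reference state on AB
N_sigma = channel(sigma)

def petz_recovery(x_a):
    inner = mat_pow(N_sigma, -0.5) @ x_a @ mat_pow(N_sigma, -0.5)
    return mat_pow(sigma, 0.5) @ adjoint(inner) @ mat_pow(sigma, 0.5)

# The Petz map perfectly reverses the channel on sigma itself.
print("recovery error on sigma:", np.linalg.norm(petz_recovery(N_sigma) - sigma))
```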
Tomamichel, M & Hayashi, M 2015, 'Operational Interpretation of Renyi Information Measures via Composite Hypothesis Testing Against Product and Markov Distributions', IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 1064-1082.
We revisit the problem of asymmetric binary hypothesis testing against a composite alternative hypothesis. We introduce a general framework to treat such problems when the alternative hypothesis adheres to certain axioms. In this case we find the threshold rate, the optimal error and strong converse exponents (at large deviations from the threshold) and the second order asymptotics (at small deviations from the threshold). We apply our results to find operational interpretations of various Renyi information measures. In case the alternative hypothesis is comprised of bipartite product distributions, we find that the optimal error and strong converse exponents are determined by variations of Renyi mutual information. In case the alternative hypothesis consists of tripartite distributions satisfying the Markov property, we find that the optimal exponents are determined by variations of Renyi conditional mutual information. In either case the relevant notion of Renyi mutual information depends on the precise choice of the alternative hypothesis. As such, our work also strengthens the view that different definitions of Renyi mutual information, conditional entropy and conditional mutual information are adequate depending on the context in which the measures are used.
Tomamichel, M & Leverrier, A 2015, 'A largely self-contained and complete security proof for quantum key distribution', Quantum, vol. 1, pp. 14-52.
In this work we present a security analysis for quantum key distribution, establishing a rigorous tradeoff between various protocol and security parameters for a class of entanglement-based and prepare-and-measure protocols. The goal of this paper is twofold: 1) to review and clarify the state-of-the-art security analysis based on entropic uncertainty relations, and 2) to provide an accessible resource for researchers interested in a security analysis of quantum cryptographic protocols that takes into account finite resource effects. For this purpose we collect and clarify several arguments spread in the literature on the subject with the goal of making this treatment largely self-contained. More precisely, we focus on a class of prepare-and-measure protocols based on the Bennett-Brassard (BB84) protocol as well as a class of entanglement-based protocols similar to the Bennett-Brassard-Mermin (BBM92) protocol. We carefully formalize the different steps in these protocols, including randomization, measurement, parameter estimation, error correction and privacy amplification, allowing us to be mathematically precise throughout the security analysis. We start from an operational definition of what it means for a quantum key distribution protocol to be secure and derive simple conditions that serve as sufficient condition for secrecy and correctness. We then derive and eventually discuss tradeoff relations between the block length of the classical computation, the noise tolerance, the secret key length and the security parameters for our protocols. Our results significantly improve upon previously reported tradeoffs.
Tomamichel, M, Berta, M & Renes, JM 2015, 'Quantum Coding with Finite Resources', Nature Communications, vol. 7, p. 11419.
The quantum capacity of a memoryless channel is often used as a single figure of merit to characterize its ability to transmit quantum information coherently. The capacity determines the maximal rate at which we can code reliably over asymptotically many uses of the channel. We argue that this asymptotic treatment is insufficient to the point of being irrelevant in the quantum setting where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. For all practical purposes we should instead focus on the trade-off between three parameters: the rate of the code, the number of coherent uses of the channel, and the fidelity of the transmission. The aim is then to specify the region determined by allowed combinations of these parameters. Towards this goal, we find approximate and exact characterizations of the region of allowed triplets for the qubit dephasing channel and for the erasure channel with classical post-processing assistance. In each case the region is parametrized by a second channel parameter, the quantum channel dispersion. In the process we also develop several general inner (achievable) and outer (converse) bounds on the coding region that are valid for all finite-dimensional quantum channels and can be computed efficiently. Applied to the depolarizing channel, this allows us to determine a lower bound on the number of coherent uses of the channel necessary to witness super-additivity of the coherent information.
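The asymptotic benchmark that the finite-blocklength region converges to is, for the qubit dephasing channel with flip probability p, the capacity Q = 1 - h(p) with h the binary entropy. A short computation of that benchmark follows; the finite-n bounds and the channel dispersion from the paper are not reproduced.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Quantum capacity of the qubit dephasing channel with dephasing probability p.
for p in [0.0, 0.01, 0.05, 0.1, 0.25]:
    print(f"p = {p:.2f}: Q = 1 - h(p) = {1 - h2(p):.4f} qubits per channel use")
```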
Yu, N 2015, 'Separability of Bosonic Systems', Phys. Rev. A, vol. 94, no. 6, p. 060101.
In this paper, we study the separability of quantum states in bosonic systems. Our main tool here is the 'separability witness', and a connection between separability witnesses and a new kind of positivity of matrices, 'Power Positive Matrices', is drawn. Such a connection is employed to demonstrate that multi-qubit quantum states with Dicke states as their eigenvectors are separable if and only if two related Hankel matrices are positive semidefinite. By employing this criterion, we are able to show that such a state is separable if and only if its partial transpose is non-negative, which confirms the conjecture in [Wolfe, Yelin, Phys. Rev. Lett. (2014)]. Then, we present a class of bosonic states in $d\otimes d$ systems such that, for general $d$, determining their separability is NP-hard, although verifiable conditions for separability are easily derived in the cases $d=3,4$.
Yu, N & Ying, M 2015, 'Optimal simulation of Deutsch gates and the Fredkin gate', Physical Review A, vol. 91, no. 3.
Zhao, Y-Y, Yu, N-K, Kurzynski, P, Xiang, G-Y, Li, C-F & Guo, G-C 2015, 'Experimental realisation of generalised qubit measurements based on quantum walks', Physical Review A - Atomic, Molecular, and Optical Physics, vol. 91, no. 4.
We report an experimental implementation of a single-qubit generalised measurement scenario (POVM) based on a quantum walk model. The qubit is encoded in a single-photon polarisation. The photon performs a quantum walk on an array of optical elements, where the polarisation-dependent translation is performed via birefringent beam displacers and a change of the polarisation is implemented with the help of wave-plates. We implement: (i) Trine-POVM, i.e., the POVM elements uniformly distributed on an equatorial plane of the Bloch sphere; (ii) Symmetric-Informationally-Complete (SIC) POVM; and (iii) Unambiguous Discrimination of two non-orthogonal qubit states.
Zhou, B, Xu, G & Li, S 2015, 'The Quintuple Implication Principle of fuzzy reasoning', Information Sciences, vol. 297, pp. 202-215.
© 2014 Elsevier Inc. Fuzzy Modus Ponens (FMP) and Fuzzy Modus Tollens (FMT) are two fundamental patterns of approximate reasoning. Suppose A and B are fuzzy predicates and 'IF A THEN B' is a fuzzy rule. Approximate reasoning often requires to derive an approximation B∗ of B from a given approximation A∗ of A, or vice versa. To solve these problems, Zadeh introduces the well-known Compositional Rule of Inference (CRI), which models the fuzzy rule by implication and computes B∗ (A∗, resp.) by composing A∗ (B∗, resp.) with A→B. Wang argues that the use of the compositional operation is logically not sufficiently justified and proposes the Triple Implication Principle (TIP) instead. Both CRI and TIP do not explicitly use the closeness of A and A∗ (or that of B and B∗) in the process of calculating the consequence, which makes the thus computed approximation sometimes useless or misleading. In this paper, we propose the Quintuple Implication Principle (QIP) for fuzzy reasoning, which characterizes the approximation B∗ of B (A∗ of A, resp.) as the formula which is best supported by A→B, A∗→A and A∗ (A→B, B→B∗ and B∗, resp.). Based upon Monoidal t-norm Logic (MTL), this paper applies QIP to solve FMP and FMT for four important implications. Most importantly, we show that QIP, when using Gödel implication, computes exactly the same approximation as Mamdani-type fuzzy inference does. This is surprising as Mamdani interprets fuzzy rules in terms of the minimum operation, while CRI, TIP and QIP all interpret fuzzy rules in terms of implication.
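For comparison with QIP, Zadeh's CRI for fuzzy modus ponens computes B*(y) = sup_x T(A*(x), I(A(x), B(y))). The sketch below evaluates it on a small discrete universe with the min t-norm and the Gödel implication; the fuzzy sets are made-up illustrations and the paper's QIP itself is not implemented.

```python
def godel_implication(a, b):
    return 1.0 if a <= b else b

def cri_fmp(A, B, A_star):
    """Zadeh's CRI for FMP: B*(y) = sup_x min( A*(x), I(A(x), B(y)) )."""
    return {y: max(min(A_star[x], godel_implication(A[x], B[y])) for x in A)
            for y in B}

# Rule: IF temperature is HIGH THEN fan speed is FAST (membership values are illustrative).
A      = {"15C": 0.0, "25C": 0.4, "35C": 0.9, "45C": 1.0}   # HIGH(temperature)
B      = {"low": 0.1, "medium": 0.5, "high": 1.0}           # FAST(fan speed)
A_star = {"15C": 0.0, "25C": 0.2, "35C": 1.0, "45C": 0.7}   # observed premise

print(cri_fmp(A, B, A_star))
```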
Bremner, MJ, Montanaro, A & Shepherd, D 2015, 'Average-case complexity versus approximate simulation of commuting quantum computations', 15th Asian Quantum Information Science Conference, Seoul, Korea.
Datta, N, Tomamichel, M & Wilde, MM 2015, 'Second-order coding rates for entanglement-assisted communication', 2015 IEEE International Symposium on Information Theory (ISIT), IEEE, Hong Kong, China, pp. 2772-2776.
© 2015 IEEE. The entanglement-assisted capacity of a quantum channel is known to provide the formal quantum generalization of Shannon's classical channel capacity theorem, in the sense that it admits a single-letter characterization in terms of the quantum mutual information and does not increase in the presence of a noiseless quantum feedback channel from receiver to sender. In this work, we investigate second-order asymptotics of the entanglement-assisted communication task. That is, we consider how quickly the rates of entanglement-assisted codes converge to the entanglement-assisted capacity of a channel as a function of the number of channel uses and the error tolerance. We define a quantum generalization of the mutual information variance of a channel in the entanglement-assisted setting. For covariant channels, we show that this quantity is equal to the channel dispersion, and characterizes the convergence towards the entanglement-assisted capacity when the number of channel uses increases. More generally, we prove that the Gaussian approximation for a second-order coding rate is achievable for all quantum channels.
Feng, Y & Ying, M 2015, 'Toward automatic verification of quantum cryptographic protocols', Leibniz International Proceedings in Informatics (LIPIcs), International Conference on Concurrency Theory (CONCUR), Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Spain, pp. 441-455.
Several quantum process algebras have been proposed and successfully applied in verification of quantum cryptographic protocols. All of the bisimulations proposed so far for quantum processes in these process algebras are state-based, implying that they only compare individual quantum states, but not a combination of them. This paper remedies this problem by introducing a novel notion of distribution-based bisimulation for quantum processes. We further propose an approximate version of this bisimulation that enables us to prove more sophisticated security properties of quantum protocols which cannot be verified using the previous bisimulations. In particular, we prove that the quantum key distribution protocol BB84 is sound and (asymptotically) secure against the intercept-resend attacks by showing that the BB84 protocol, when executed with such an attacker concurrently, is approximately bisimilar to an ideal protocol, whose soundness and security are obviously guaranteed, with at most an exponentially decreasing gap.
Grochow, JA & Qiao, Y 2015, 'Polynomial-time isomorphism test of groups that are tame extensions (Extended abstract)', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 578-589.
We give new polynomial-time algorithms for testing isomorphism of a class of groups given by multiplication tables (GpI). Two results (Cannon & Holt, J. Symb. Comput. 2003; Babai, Codenotti & Qiao, ICALP 2012) imply that GpI reduces to the following: given groups G, H with characteristic subgroups of the same type and isomorphic to $\mathbb{Z}_p^d$, and given the coset of isomorphisms $\mathrm{Iso}(G/\mathbb{Z}_p^d, H/\mathbb{Z}_p^d)$, compute $\mathrm{Iso}(G,H)$ in time poly(|G|). Babai & Qiao (STACS 2012) solved this problem when a Sylow p-subgroup of $G/\mathbb{Z}_p^d$ is trivial. In this paper, we solve the preceding problem in the so-called “tame” case, i.e., when a Sylow p-subgroup of $G/\mathbb{Z}_p^d$ is cyclic, dihedral, semi-dihedral, or generalized quaternion. These cases correspond exactly to the group algebra $\mathbb{F}_p[G/\mathbb{Z}_p^d]$ being of tame type, as in the celebrated tame-wild dichotomy in representation theory. We then solve new cases of GpI in polynomial time. Our result relies crucially on the divide-and-conquer strategy proposed earlier by the authors (CCC 2014), which splits GpI into two problems, one on group actions (representations), and one on group cohomology. Based on this strategy, we combine permutation group and representation algorithms with new mathematical results, including bounds on the number of indecomposable representations of groups in the tame case, and on the size of their cohomology groups. Finally, we note that when a group extension is not tame, the preceding bounds do not hold. This suggests a precise sense in which the tame-wild dichotomy from representation theory may also be a key barrier to cross to put GpI into P.
Kaniewski, J, Lee, T & de Wolf, R 2015, 'Query Complexity in Expectation', Springer Berlin Heidelberg, pp. 761-772.
Kong, S, Li, S, Li, Y & Long, Z 2015, 'On Tree-Preserving Constraints', 21st International Conference on Principles and Practice of Constraint Programming (CP 2015), Springer International Publishing, Cork, Ireland, pp. 244-261.
Tree convex constraints are extensions of the well-known row convex constraints. Just like the latter, every path-consistent tree convex constraint network is globally consistent. This paper studies and compares three subclasses of tree convex constraints which are called chain-, path- and tree-preserving constraints respectively. While the tractability of the subclass of chain-preserving constraints has been established before, this paper shows that every chain- or path-preserving constraint network is in essence the disjoint union of several independent connected row convex constraint networks, and hence (re-)establish the tractability of these two subclasses of tree convex constraints. We further prove that, when enforcing arc- and path-consistency on a tree-preserving constraint network, in each step, the network remains tree-preserving. This ensures the global consistency of the tree-preserving network if no inconsistency is detected. Moreover, it also guarantees the applicability of the partial path-consistency algorithm to tree-preserving constraint networks, which is usually more efficient than the path-consistency algorithm for large sparse networks. As an application, we show that the class of tree-preserving constraints is useful in solving the scene labelling problem.
Kulkarni, R, Qiao, Y & Sun, X 2015, 'On the Power of Parity Queries in Boolean Decision Trees', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Conference on Theory and Applications of Models of Computation, Springer International Publishing, Singapore, pp. 99-109.
© Springer International Publishing Switzerland 2015. In an influential paper, Kushilevitz and Mansour (1993) introduced a natural extension of Boolean decision trees called parity decision tree (PDT) where one may query the sum modulo 2, i.e., the parity, of an arbitrary subset of variables. Although originally introduced in the context of learning, parity decision trees have recently regained interest in the context of communication complexity (cf. Shi and Zhang 2010) and property testing (cf. Bhrushundi, Chakraborty, and Kulkarni 2013). In this paper, we investigate the power of parity queries. In particular, we show that the parity queries can be replaced by ordinary ones at the cost of the total influence aka average sensitivity per query. Our simulation is tight as demonstrated by the parity function. At the heart of our result lies a qualitative extension of the result of O’Donnell, Saks, Schramm, and Servedio (2005) titled: Every decision tree has an influential variable. Recently Jain and Zhang (2011) obtained an alternate proof of the same. Our main contribution in this paper is a simple but surprising observation that the query elimination method of Jain and Zhang can indeed be adapted to eliminate, seemingly much more powerful, parity queries. Moreover, we extend our result to linear queries for Boolean valued functions over arbitrary finite fields.
Paler, A & Devitt, SJ 2015, 'An introduction to Fault-tolerant Quantum Computing', DAC '15: Proceedings of the 52nd Annual Design Automation Conference, Article No. 60.
In this paper we provide a basic introduction of the core ideas and theories surrounding fault-tolerant quantum computation. These concepts underlie the theoretical framework of large-scale quantum computation and communications and are the driving force for many recent experimental efforts to construct small to medium sized arrays of controllable quantum bits. We examine the basic principles of redundant quantum encoding, required to protect quantum bits from errors generated from both imprecise control and environmental interactions, and then examine the principles of fault-tolerance from largely a classical framework. As quantum fault-tolerance essentially is avoiding the uncontrollable cascade of errors caused by the interaction of quantum bits, these concepts can be directly mapped to quantum information.
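The simplest instance of the redundant encoding discussed above is the classical three-bit repetition code with majority-vote decoding, which suppresses a physical error rate p to roughly 3p^2; quantum codes extend the same redundancy idea to phase errors as well. A self-contained simulation of that classical toy case:

```python
import random

def encode(bit):
    return [bit, bit, bit]                 # redundant encoding

def apply_noise(codeword, p):
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    return int(sum(codeword) >= 2)         # majority vote

random.seed(3)
p, trials = 0.1, 100_000
raw_errors = sum(apply_noise([0], p)[0] for _ in range(trials))
coded_errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
print(f"unencoded error rate: {raw_errors / trials:.4f}  (expected ~ {p})")
print(f"encoded error rate  : {coded_errors / trials:.4f}  (expected ~ {3*p**2 - 2*p**3:.4f})")
```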
Paler, A, Polian, I, Nemoto, K & Devitt, SJ 2015, 'A Regular Representation of Quantum Circuits', Reversible Computation (RC 2015), Lecture Notes in Computer Science, Springer, pp. 139-154.
View/Download from: Publisher's site
View description>>
We present a quantum circuit representation consisting entirely of qubit initialisations (I), a network of controlled-NOT gates (C) and measurements with respect to different bases (M). The ICM representation is useful for optimisation of quantum circuits that include teleportation, which is required for fault-tolerant, error corrected quantum computation. The non-deterministic nature of teleportation necessitates the conditional introduction of corrective quantum gates and additional ancillae during circuit execution. Therefore, the standard optimisation objectives, gate count and number of wires, are not well-defined for general teleportation-based circuits. The transformation of a circuit into the ICM representation provides a canonical form for an exact fault-tolerant, error corrected circuit needed for optimisation prior to the final implementation in a realistic hardware model.
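A minimal sketch of how an ICM-form circuit could be held in code (the data structure and field names are my own assumptions, not the paper's API): once a circuit is reduced to initialisations, a CNOT network and measurement bases, quantities such as width and CNOT count are well-defined again.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ICMCircuit:
    initialisations: List[str] = field(default_factory=list)    # e.g. '0', '+', 'Y', 'A'
    cnots: List[Tuple[int, int]] = field(default_factory=list)  # (control, target) pairs
    measurements: List[str] = field(default_factory=list)       # e.g. 'Z', 'X'

    @property
    def width(self) -> int:
        return len(self.initialisations)

circuit = ICMCircuit(
    initialisations=['+', '0', 'A'],
    cnots=[(0, 1), (2, 1)],
    measurements=['X', 'Z', 'X'],
)
print(circuit.width, len(circuit.cnots))  # 3 qubits, 2 CNOT gates
```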
Sioutis, M, Li, S & Condotta, JF 2015, 'Efficiently characterizing non-redundant constraints in large real world qualitative spatial networks', IJCAI International Joint Conference on Artificial Intelligence, International Conference on Artificial Intelligence, AAAI Press, Buenos Aires, Argentina, pp. 3229-3235.
View description>>
RCC8 is a constraint language that serves for qualitative spatial representation and reasoning by encoding the topological relations between spatial entities. We focus on efficiently characterizing non-redundant constraints in large real world RCC8 networks and obtaining their prime networks. For an RCC8 network N, a constraint is redundant if removing that constraint from N does not change the solution set of N. A prime network of N is a network which contains no redundant constraints but has the same solution set as N. We make use of a particular partial consistency, namely G⋄-consistency, and obtain new complexity results for various cases of RCC8 networks, while we also show that, given a maximal distributive subclass for RCC8 and a network N defined on that subclass, the pruning capacity of G⋄-consistency and ⋄-consistency is identical on the common edges of G and the complete graph of N, when G is a triangulation of the constraint graph of N. Finally, we devise an algorithm based on G⋄-consistency to compute the unique prime network of an RCC8 network, and show that it significantly progresses the state of the art for practical reasoning with real RCC8 networks scaling up to millions of nodes.
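To make the redundancy notion concrete, here is a brute-force sketch on a toy finite-domain network rather than RCC8 (my own example; the paper's point is to avoid exactly this kind of enumeration by using G⋄-consistency): a constraint is redundant if deleting it leaves the solution set unchanged.

```python
from itertools import product

def solutions(domains, constraints):
    """All assignments satisfying every binary constraint (i, j, allowed_pairs)."""
    return {
        assignment
        for assignment in product(*domains)
        if all((assignment[i], assignment[j]) in allowed for i, j, allowed in constraints)
    }

domains = [(0, 1), (0, 1)]
c1 = (0, 1, {(0, 0), (1, 1)})          # x0 == x1
c2 = (0, 1, {(0, 0), (1, 1), (0, 1)})  # weaker constraint implied by c1
print(solutions(domains, [c1, c2]) == solutions(domains, [c1]))  # True: c2 is redundant
```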
Sioutis, M, Li, S & Condotta, JF 2015, 'On redundancy in linked geospatial data', CEUR Workshop Proceedings, Extended Semantic Web Conference (ESWC), Aachen University, Portoroz, Slovenia, pp. 1-8.
View description>>
RCC8 is a constraint language that serves for qualitative spatial representation and reasoning by encoding the topological relations between spatial entities. As such, RCC8 has recently been adopted by GeoSPARQL in an effort to enrich the Semantic Web with qualitative spatial relations. We focus on the redundancy that these data might harbor, which can throttle graph-related applications such as storing, representing, querying, and reasoning. For an RCC8 network N, a constraint is redundant if removing that constraint from N does not change the solution set of N. A prime network of N is a network which contains no redundant constraints but has the same solution set as N. In this paper, we present a practical approach for obtaining the prime networks of RCC8 networks that originate from the Semantic Web, by exploiting the sparse and loosely connected structure of their constraint graphs, and consequently contribute towards offering Linked Geospatial Data of high quality. Experimental evaluation exhibits a vast decrease in the total number of non-redundant constraints that we can obtain from an initial network, while it also suggests that our approach significantly boosts the state-of-the-art approach.
Song, L, Feng, Y & Zhang, L 2015, 'Decentralized bisimulation for multiagent systems', Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, Istanbul, pp. 209-217.
View description>>
The notion of bisimulation has been introduced in the formal verification community as a powerful way to abstract from details of systems. When applied to multiagent systems, classical bisimulations allow one agent to make decisions based on the full histories of others. Thus, as a general concept, classical bisimulations are unrealistically powerful for such systems. In this paper, we define a coarser notion of bisimulation under which an agent can only make realistic decisions based on the information available to it. Our bisimulation still implies trace distribution equivalence of the systems, and moreover, it allows a compositional abstraction framework for reasoning about the systems.
Song, L, Feng, Y & Zhang, L 2015, 'Planning for stochastic games with Co-safe objectives', IJCAI International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence, AAAI, Buenos Aires, pp. 1682-1688.
View description>>
We consider planning problems for stochastic games with objectives specified by a branching-time logic called probabilistic computation tree logic (PCTL). This problem has been shown to be undecidable if strategies with perfect recall, i.e., history-dependent strategies, are considered. In this paper, we show that, if restricted to co-safe properties, a subset of PCTL properties capable of specifying a wide range of properties in practice, including reachability, the problem turns out to be decidable, even when the class of general strategies is considered. We also give an algorithm for solving robust stochastic planning, where a winning strategy is tolerant to some perturbations of probabilities in the model. Our result indicates that satisfiability of co-safe PCTL is decidable as well.
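For orientation, an illustrative co-safe objective of the kind the abstract refers to (notation is standard PCTL, not necessarily the paper's): a reachability property asking that a goal state be reached with probability at least 0.9.

```latex
% Illustrative co-safe reachability objective in PCTL (notation mine):
\[
  \mathbb{P}_{\ge 0.9}\bigl[\,\lozenge\, \mathit{goal}\,\bigr]
\]
```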
Sun, Y, Zhu, K, Sun, Y, Reid, B, Ferreira, FDS, Li, Y, Gao, X, Ying, M, Draper, BW, Zhao, M & Mogilner, A 2015, 'Individual and collective cell polarization and migration in electric field.', Molecular Biology of the Cell, American Society for Cell Biology.
Tomamichel, M, Wilde, MM & Winter, A 2015, 'Strong converse rates for quantum communication', 2015 IEEE International Symposium on Information Theory (ISIT), 2015 IEEE International Symposium on Information Theory (ISIT), IEEE, Hong Kong, China, pp. 2386-2390.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. We revisit a fundamental open problem in quantum information theory, namely whether it is possible to transmit quantum information at a rate exceeding the channel capacity if we allow for a non-vanishing probability of decoding error. Here we establish that the Rains information of any quantum channel is a strong converse rate for quantum communication: For any code with a rate exceeding the Rains information of the channel, we show that the fidelity vanishes exponentially fast as the number of channel uses increases. This remains true even if we consider codes that perform classical post-processing on the transmitted quantum data. Our result has several applications. Most importantly, for generalized dephasing channels we show that the Rains information is also achievable, and thereby establish the strong converse property for quantum communication over such channels. This conclusively settles, for the first time, the strong converse question for a class of quantum channels that have a non-trivial quantum capacity.
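A hedged restatement of the strong converse claim above, in my own notation (not necessarily the paper's): if the code rate exceeds the Rains information of the channel, the fidelity decays exponentially in the number of channel uses.

```latex
% R(\mathcal{N}): Rains information of the channel; R: code rate;
% F_n: fidelity after n channel uses; \delta > 0 depends on the gap R - R(\mathcal{N}).
\[
  R > R(\mathcal{N}) \;\Longrightarrow\; F_n \le 2^{-n\delta}
  \quad \text{for some } \delta > 0 \text{ and all sufficiently large } n.
\]
```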