Cha, D, Zhang, H & Blumenstein, M 2011, 'Prediction of maximum wave-induced liquefaction in porous seabed using multi-artificial neural network model', Ocean Engineering, vol. 38, no. 7, pp. 878-887.
Bremner, MJ, Jozsa, R & Shepherd, DJ 2011, 'Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy', Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 467, no. 2126, pp. 459-472.
We consider quantum computations comprising only commuting gates, known as IQP computations, and provide compelling evidence that the task of sampling their output probability distributions is unlikely to be achievable by any efficient classical means. More specifically, we introduce the class post-IQP of languages decided with bounded error by uniform families of IQP circuits with post-selection, and prove first that post-IQP equals the classical class PP. Using this result we show that if the output distributions of uniform IQP circuit families could be classically efficiently sampled, either exactly in total variation distance or even approximately up to 41 per cent multiplicative error in the probabilities, then the infinite tower of classical complexity classes known as the polynomial hierarchy would collapse to its third level. We mention some further results on the classical simulation properties of IQP circuit families, in particular showing that if the output distribution results from measurements on only O(log n) lines then it may, in fact, be classically efficiently sampled.
Chen, J, Chen, X, Duan, R, Ji, Z & Zeng, B 2011, 'No-go theorem for one-way quantum computing on naturally occurring two-level systems', Physical Review A, vol. 83, no. 5.
The ground states of some many-body quantum systems can serve as resource states for the one-way quantum computing model, achieving the full power of quantum computation. Such resource states are found, for example, in spin-5/2 and spin-3/2 systems. It is, of course, desirable to have a natural resource state in a spin-1/2, that is, qubit system. Here, we give a negative answer to this question for frustration-free systems with two-body interactions. In fact, it is shown to be impossible for any genuinely entangled qubit state to be a nondegenerate ground state of any two-body frustration-free Hamiltonian. What is more, we also prove that every spin-1/2 frustration-free Hamiltonian with two-body interaction always has a ground state that is a product of single- or two-qubit states. In other words, there cannot be any interesting entanglement features in the ground state of such a qubit Hamiltonian.
Chen, J, Ji, Z, Klyachko, A, Kribs, DW & Zeng, B 2011, 'Rank Reduction for the Local Consistency Problem', Journal of Mathematical Physics, vol. 53, no. 2.
We address the problem of how simple a solution can be for a given quantum local consistency instance. More specifically, we investigate how small the rank of the global density operator can be if the local constraints are known to be compatible. We prove that any compatible local density operators can be satisfied by a low rank global density operator. Then we study both fermionic and bosonic versions of the N-representability problem as applications. After applying the channel-state duality, we prove that any compatible local channels can be obtained through a global quantum channel with small Kraus rank.
Chen, J, Ji, Z, Kribs, D, Wei, Z & Zeng, B 2011, 'Ground-State Spaces of Frustration-Free Hamiltonians', Journal of Mathematical Physics, vol. 53, no. 10.
We study the ground-state space properties for frustration-free Hamiltonians. We introduce a concept of `reduced spaces' to characterize local structures of ground-state spaces. For a many-body system, we characterize mathematical structures for the set $\Theta_k$ of all the $k$-particle reduced spaces, which with a binary operation called join forms a semilattice that can be interpreted as an abstract convex structure. The smallest nonzero elements in $\Theta_k$, called atoms, are analogs of extreme points. We study the properties of atoms in $\Theta_k$ and discuss their relationship with ground states of $k$-local frustration-free Hamiltonians. For spin-1/2 systems, we show that all the atoms in $\Theta_2$ are unique ground states of some 2-local frustration-free Hamiltonians. Moreover, we show that the elements in $\Theta_k$ may not be the join of atoms, indicating a richer structure for $\Theta_k$ beyond the convex structure. Our study of $\Theta_k$ deepens the understanding of ground-state space properties for frustration-free Hamiltonians, from a new angle of reduced spaces.
Chen, J, Ji, Z, Wei, Z & Zeng, B 2011, 'Correlations in excited states of local Hamiltonians', Physical Review A, vol. 85, no. 4.
Physical properties of the ground and excited states of a $k$-local Hamiltonian are largely determined by the $k$-particle reduced density matrices ($k$-RDMs), or simply the $k$-matrix for fermionic systems---they are at least enough for the calculation of the ground state and excited state energies. Moreover, for a non-degenerate ground state of a $k$-local Hamiltonian, even the state itself is completely determined by its $k$-RDMs, and therefore contains no genuine ${>}k$-particle correlations, as they can be inferred from $k$-particle correlation functions. It is natural to ask whether a similar result holds for non-degenerate excited states. In fact, for fermionic systems, it has been conjectured that any non-degenerate excited state of a 2-local Hamiltonian is simultaneously a unique ground state of another 2-local Hamiltonian, hence is uniquely determined by its 2-matrix. A weaker version of this conjecture states that any non-degenerate excited state of a 2-local Hamiltonian is uniquely determined by its 2-matrix among all the pure $n$-particle states. We construct explicit counterexamples to show that both conjectures are false. This means that correlations in excited states of local Hamiltonians can be dramatically different from those in ground states. We further show that any non-degenerate excited state of a $k$-local Hamiltonian is a unique ground state of another $2k$-local Hamiltonian, hence is uniquely determined by its $2k$-RDMs (or $2k$-matrix).
Chen, J, Ji, Z, Zeng, B & Zhou, DL 2011, 'From Ground States to Local Hamiltonians', Physical Review A, vol. 86, no. 2.
Traditional quantum physics solves for the ground states of a given Hamiltonian, while quantum information science asks for the existence and construction of certain Hamiltonians with given ground states. In practical situations, one would be mainly interested in local Hamiltonians with certain interaction patterns, such as nearest-neighbour interactions on some type of lattice. A necessary condition for a space $V$ to be the ground-state space of some local Hamiltonian with a given interaction pattern is that the maximally mixed state supported on $V$ is uniquely determined by its reduced density matrices associated with the given pattern, based on the principle of maximum entropy. However, it is unclear whether this condition is in general also sufficient. We examine the situations for the existence of such a local Hamiltonian with $V$ satisfying the necessary condition mentioned above as its ground-state space, by linking to faces of the convex body of the local reduced states. We further discuss some methods for constructing the corresponding local Hamiltonians with given interaction patterns, mainly from physical points of view, including constructions related to perturbation methods, local frustration-free Hamiltonians, as well as thermodynamical ensembles.
Datta, A, Zhang, L, Nunn, J, Langford, NK, Feito, A, Plenio, MB & Walmsley, IA 2011, 'A compact entanglement distillery', Phys. Rev. Lett., vol. 108, no. 6, p. 060502.
Large-scale quantum-correlated networks could transform technologies ranging from communications and cryptography to computation, metrology, and simulation of novel materials. Critical to achieving such quantum enhancements is distributing high-quality entanglement between distant nodes. This is made possible in the unavoidable presence of decoherence by entanglement distillation. However, current versions of this protocol are prohibitively costly in terms of resources. We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources, both of which are exponential improvements over existing protocols. Our scheme uses a fixed module - an entanglement distillery - comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtained by using non-ideal quantum memories outside their conventional role of storage. By creating, storing and processing information in the same physical space, the scheme establishes a potentially valuable technique for designing stable, scalable protocols across different quantum technologies.
Datta, N & Hsieh, M-H 2011, 'One-shot entanglement-assisted quantum and classical communication', IEEE Trans. Inform. Theory, vol. 59, no. 3, pp. 1929-1939.
We study entanglement-assisted quantum and classical communication over a single use of a quantum channel, which itself can correspond to a finite number of uses of a channel with arbitrarily correlated noise. We obtain characterizations of the corresponding one-shot capacities by establishing upper and lower bounds on them in terms of the difference of two smoothed entropic quantities. In the case of a memoryless channel, the upper and lower bounds converge to the known single-letter formulas for the corresponding capacities, in the limit of asymptotically many uses of it. Our results imply that the difference of two smoothed entropic quantities characterizing the one-shot entanglement-assisted capacities serves as a one-shot analogue of the mutual information, since it reduces to the mutual information, between the output of the channel and a system purifying its input, in the asymptotic, memoryless scenario.
Datta, N & Hsieh, M-H 2011, 'The apex of the family tree of protocols: Optimal rates and resource inequalities', New J. Phys., vol. 13, p. 093042.
We establish bounds on the maximum entanglement gain and minimum quantum communication cost of the Fully Quantum Slepian-Wolf protocol in the one-shot regime, which is considered to be at the apex of the existing family tree in Quantum Information Theory. These quantities, which are expressed in terms of smooth min- and max-entropies, reduce to the known rates of quantum communication cost and entanglement gain in the asymptotic i.i.d. scenario. We also provide an explicit proof of the optimality of these asymptotic rates. We introduce a resource inequality for the one-shot FQSW protocol, which, in conjunction with our results, yields achievable one-shot rates of its children protocols. In particular, it yields bounds on the one-shot quantum capacity of a noisy channel in terms of a single entropic quantity, unlike previously known bounds. We also obtain an explicit expression for the achievable rate for one-shot state redistribution.
Datta, N, Hsieh, M-H & Wilde, MM 2011, 'Quantum rate distortion, reverse Shannon theorems, and source-channel separation', IEEE Transactions on Information Theory, vol. 59, no. 1, pp. 615-630.
We derive quantum counterparts of two key theorems of classical information theory, namely, the rate distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present paper, we prove that the quantum rate distortion function is given in terms of the regularized entanglement of purification. We also determine a single-letter expression for the entanglement-assisted quantum rate distortion function, and we prove that it serves as a lower bound on the unassisted quantum rate distortion function. This implies that the unassisted quantum rate distortion function is non-negative and generally not equal to the coherent information between the source and distorted output (in spite of Barnum's conjecture that the coherent information would be relevant here). Moreover, we prove several quantum source-channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion.
Datta, N, Mosonyi, M, Hsieh, M-H & Brandao, FGSL 2011, 'A smooth entropy approach to quantum hypothesis testing and the classical capacity of quantum channels', IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 8014-8026.
We use the smooth entropy approach to treat the problems of binary quantum hypothesis testing and the transmission of classical information through a quantum channel. We provide lower and upper bounds on the optimal type II error of quantum hypothesis testing in terms of the smooth max-relative entropy of the two states representing the two hypotheses. Using then a relative entropy version of the Quantum Asymptotic Equipartition Property (QAEP), we can recover the strong converse rate of the i.i.d. hypothesis testing problem in the asymptotics. On the other hand, combining Stein's lemma with our bounds, we obtain a stronger ($\epsilon$-independent) version of the relative entropy-QAEP. Similarly, we provide bounds on the one-shot $\epsilon$-error classical capacity of a quantum channel in terms of a smooth max-relative entropy variant of its Holevo capacity. Using these bounds and the $\epsilon$-independent version of the relative entropy-QAEP, we can recover both the Holevo-Schumacher-Westmoreland theorem about the optimal direct rate of a memoryless quantum channel with product state encoding, as well as its strong converse counterpart.
Devitt, SJ, Stephens, AM, Munro, WJ & Nemoto, K 2011, 'Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation', New J. Phys., vol. 13, p. 095001.
In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing for an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits and consequently the elimination of high-frequency, deterministic photon sources.
Ferrie, C, Granade, CE & Cory, DG 2011, 'How to best sample a periodic probability distribution, or on the accuracy of Hamiltonian finding strategies', Quantum Information Processing, vol. 12, no. 1, pp. 611-623.
Projective measurements of a single two-level quantum mechanical system (a qubit) evolving under a time-independent Hamiltonian produce a probability distribution that is periodic in the evolution time. The period of this distribution is an important parameter in the Hamiltonian. Here, we explore how to design experiments so as to minimize error in the estimation of this parameter. While it has been shown that useful results may be obtained by minimizing the risk incurred by each experiment, such an approach is computationally intractable in general. Here, we motivate and derive heuristic strategies for experiment design that enjoy the same exponential scaling as fully optimized strategies. We then discuss generalizations to the case of finite relaxation times, T_2 < \infty.
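The estimation problem this abstract describes can be illustrated with a minimal sketch (not the paper's algorithm): Bayesian updating on a discretised frequency grid, assuming the standard Born-rule likelihood P(1|ω,t) = sin²(ωt/2), a naive linearly growing schedule of evolution times, and a hypothetical "true" frequency to recover.

```python
import math
import random

TRUE_OMEGA = 1.7  # hypothetical unknown frequency the sketch recovers

def p_one(omega, t):
    # Born-rule likelihood of measuring outcome 1 after evolving for
    # time t under a Hamiltonian with precession frequency omega.
    return math.sin(omega * t / 2.0) ** 2

def bayes_update(prior, omegas, t, outcome):
    # Pointwise Bayes rule on a discretised prior over omega.
    likes = [p_one(w, t) if outcome == 1 else 1.0 - p_one(w, t) for w in omegas]
    post = [p * l for p, l in zip(prior, likes)]
    z = sum(post)
    return [p / z for p in post]

rng = random.Random(0)
omegas = [0.01 * i for i in range(1, 315)]   # grid over (0, pi)
prior = [1.0 / len(omegas)] * len(omegas)
for k in range(200):
    t = 1.0 + 0.05 * k                       # naive growing evolution times
    outcome = 1 if rng.random() < p_one(TRUE_OMEGA, t) else 0
    prior = bayes_update(prior, omegas, t, outcome)

# Posterior mean as the frequency estimate.
estimate = sum(w * p for w, p in zip(omegas, prior))
```

This grid update is exact but scales poorly with dimension; the heuristic schedules the paper studies address precisely the question of which times t to measure at.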
Furrer, F, Franz, T, Berta, M, Leverrier, A, Scholz, VB, Tomamichel, M & Werner, RF 2011, 'Continuous Variable Quantum Key Distribution: Finite-Key Analysis of Composable Security against Coherent Attacks', Phys. Rev. Lett., vol. 109, no. 10, p. 100502.
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.
Gerrits, T, Thomas-Peter, N, Gates, JC, Lita, AE, Metcalf, BJ, Calkins, B, Tomlin, NA, Fox, AE, Linares, AL, Spring, JB, Langford, NK, Mirin, RP, Smith, PGR, Walmsley, IA & Nam, SW 2011, 'On-chip, photon-number-resolving, telecommunication-band detectors for scalable photonic information processing', Physical Review A, vol. 84, no. 6.
Hsieh, MH, Yen, WT & Hsu, LY 2011, 'Undetermined states: How to find them and their applications', European Physical Journal D, vol. 61, no. 1, pp. 261-265.
We investigate the undetermined sets consisting of two-level, multi-partite pure quantum states, whose reduced density matrices give absolutely no information about their original states. Two approaches to finding these quantum states are proposed. One is to establish the relation between codewords of the stabilizer quantum error correction codes (SQECCs) and the undetermined states. The other is to study the local complementation rules of the graph states. As an application, the undetermined states can be exploited in a quantum secret sharing scheme, whose security is guaranteed by their undetermined nature.
Jain, R, Ji, Z, Upadhyay, S & Watrous, J 2011, 'QIP = PSPACE', Journal of the ACM, vol. 58, no. 6, pp. 1-27.
This work considers the quantum interactive proof system model of computation, which is the (classical) interactive proof system model’s natural quantum computational analogue. An exact characterization of the expressive power of quantum interactive proof systems is obtained: the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable by deterministic Turing machines that use at most a polynomial amount of space (or, more succinctly, QIP = PSPACE). This characterization is proved through the use of a parallelized form of the matrix multiplicative weights update method, applied to a class of semidefinite programs that captures the computational power of quantum interactive proof systems. One striking implication of this characterization is that quantum computing provides no increase in computational power whatsoever over classical computing in the context of interactive proof systems, for it is well known that the collection of computational problems having classical interactive proof systems coincides with those problems solvable by polynomial-space computations.
Ji, Z, Wei, Z & Zeng, B 2011, 'Complete characterization of the ground-space structure of two-body frustration-free Hamiltonians for qubits', Physical Review A, vol. 84, no. 4.
Kieferova, M & Nagaj, D 2011, 'Quantum Walks on Necklaces and Mixing', International Journal of Quantum Information, vol. 10, no. 2, p. 1250025.
We analyze continuous-time quantum walks on necklace graphs - cyclical graphs consisting of many copies of a smaller graph (pearl). Using a Bloch-type ansatz for the eigenfunctions, we block-diagonalize the Hamiltonian, reducing the effective size of the problem to the size of a single pearl. We then present a general approach for showing that the mixing time scales (with growing size of the necklace) similarly to that of a simple walk on a cycle. Finally, we present results for mixing on several necklace graphs.
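For the simplest necklace (a single-node pearl, i.e. the cycle itself), the Fourier/Bloch diagonalization the abstract describes can be sketched as follows; the function name and parameters are illustrative, not taken from the paper.

```python
import cmath
import math

def cycle_walk_amplitudes(n, t):
    # Continuous-time quantum walk on an n-cycle, started at node 0.
    # Fourier (Bloch) modes diagonalise the circulant adjacency matrix;
    # the eigenvalue of momentum k is 2*cos(2*pi*k/n), so each mode
    # simply acquires a phase exp(-i * eigenvalue * t).
    amps = []
    for j in range(n):
        a = 0j
        for k in range(n):
            energy = 2.0 * math.cos(2.0 * math.pi * k / n)
            a += cmath.exp(-1j * energy * t) * cmath.exp(2j * math.pi * k * j / n)
        amps.append(a / n)
    return amps

# Probability distribution over nodes at time t = 3 on a 7-cycle.
probs = [abs(a) ** 2 for a in cycle_walk_amplitudes(7, 3.0)]
```

For a general necklace the same ansatz reduces the n-dimensional problem to one block per momentum, each the size of a single pearl.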
Langford, NK, Ramelow, S, Prevedel, R, Munro, WJ, Milburn, GJ & Zeilinger, A 2011, 'Efficient quantum computing using coherent photon conversion', Nature, vol. 478, no. 7369, pp. 360-363.
Lapkiewicz, R, Li, P, Schaeff, C, Langford, NK, Ramelow, S, Wieśniak, M & Zeilinger, A 2011, 'Experimental non-classicality of an indivisible quantum system', Nature, vol. 474, no. 7352, pp. 490-493.
Lee, JH, Guan, H, Loo, YC, Blumenstein, M & Wang, XP 2011, 'Modelling Long-Term Bridge Deterioration at Structural Member Level Using Artificial Intelligence Techniques', Applied Mechanics and Materials, vol. 99-100, pp. 444-453.
Efficient use of public funds for the structural integrity of bridge networks requires an effective bridge asset management technology. To achieve this, a reliable deterioration model is essential in any Bridge Management System (BMS). The deterioration rate is calculated based on historical condition ratings obtained from structural element-level bridge inspections. Although most bridge authorities have previously conducted inspection and maintenance tasks, these past inspection records are incompatible with what is required by a typical BMS as input. Such incompatibility is a major cause of the deficiency of current BMS outcomes. An Artificial Intelligence (AI)-based bridge deterioration model has recently been developed to minimise uncertainties in predicting the deterioration of structural bridge members (e.g. beams, piers, etc.). This model contains two components: (1) using a Neural Network-based Backward Prediction Model (BPM) to generate unavailable historical condition ratings; and (2) using a Time Delay Neural Network (TDNN) to perform long-term performance prediction of bridge structural members. However, new problems have emerged in the process of TDNN prediction, because the BPM-generated condition ratings are used together with the actual condition ratings, and the incompatibility between the two sets of data produces unreliable prediction outcomes during the TDNN process. This research thus develops a new process based on the existing method, thereby overcoming the abovementioned problems. To achieve this, the actual overall condition ratings are replaced by the BPM forward-predicted condition ratings. Consequently, the outcome of this study can improve the accuracy of long-term bridge deterioration prediction.
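The time-delay input format a TDNN consumes can be sketched in a few lines; the ratings series and lag length below are hypothetical, and this shows only the data-preparation step, not the neural network itself.

```python
def time_delay_windows(series, n_lags):
    # Build (lagged-input, target) pairs of the kind a time-delay
    # predictor consumes: each target value is paired with its
    # n_lags preceding condition ratings.
    return [(series[i - n_lags:i], series[i])
            for i in range(n_lags, len(series))]

ratings = [9, 9, 8, 8, 7, 7, 6]   # hypothetical condition-rating history
windows = time_delay_windows(ratings, 3)
```

Each pair then serves as one training example; gaps in the historical series are what the BPM component is used to fill before windowing.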
Lee, KC, Sprague, MR, Sussman, BJ, Nunn, J, Langford, NK, Jin, X-M, Champion, T, Michelberger, P, Reim, KF, England, D, Jaksch, D & Walmsley, IA 2011, 'Entangling Macroscopic Diamonds at Room Temperature', Science, vol. 334, no. 6060, pp. 1253-1256.
Optical pulses are used to quantum mechanically entangle two diamonds several centimeters apart.
Mans, B & Mathieson, L 2011, 'On the Treewidth of Dynamic Graphs', Theoretical Computer Science, vol. 554, no. C, pp. 217-228.
Dynamic graph theory is a novel, growing area that deals with graphs that change over time and is of great utility in modelling modern wireless, mobile and dynamic environments. As a graph evolves, possibly arbitrarily, it is challenging to identify the graph properties that can be preserved over time and understand their respective computability. In this paper we are concerned with the treewidth of dynamic graphs. We focus on metatheorems, which allow the generation of a series of results based on general properties of classes of structures. In graph theory two major metatheorems on treewidth provide complexity classifications by employing structural graph measures and finite model theory. Courcelle's Theorem gives a general tractability result for problems expressible in monadic second order logic on graphs of bounded treewidth, and Frick & Grohe demonstrate a similar result for first order logic and graphs of bounded local treewidth. We extend these theorems by showing that dynamic graphs of bounded (local) treewidth, where the length of time over which the graph evolves and is observed is finite and bounded, can be modelled in such a way that the (local) treewidth of the underlying graph is maintained. We show the application of these results to problems in dynamic graph theory and dynamic extensions to static problems. In addition we demonstrate that certain widely used dynamic graph classes naturally have bounded local treewidth.
Ocko, SA, Chen, X, Zeng, B, Yoshida, B, Ji, Z, Ruskai, MB & Chuang, IL 2011, 'Quantum Codes Give Counterexamples to the Unique Preimage Conjecture of the N-Representability Problem', Physical Review Letters, vol. 106, no. 11.
Reim, KF, Michelberger, P, Lee, KC, Nunn, J, Langford, NK & Walmsley, IA 2011, 'Single-Photon-Level Quantum Memory at Room Temperature', Physical Review Letters, vol. 107, no. 5.
Szehr, O, Dupuis, F, Tomamichel, M & Renner, R 2011, 'Decoupling with unitary approximate two-designs', New J. Phys., vol. 15, p. 053022.
Consider a bipartite system, of which one subsystem, A, undergoes a physical evolution separated from the other subsystem, R. One may ask under which conditions this evolution destroys all initial correlations between the subsystems A and R, i.e. decouples the subsystems. A quantitative answer to this question is provided by decoupling theorems, which have been developed recently in the area of quantum information theory. This paper builds on preceding work, which shows that decoupling is achieved if the evolution on A consists of a typical unitary, chosen with respect to the Haar measure, followed by a process that adds sufficient decoherence. Here, we prove a generalized decoupling theorem for the case where the unitary is chosen from an approximate two-design. A main implication of this result is that decoupling is physical, in the sense that it occurs already for short sequences of random two-body interactions, which can be modeled as efficient circuits. Our decoupling result is independent of the dimension of the R system, which shows that approximate 2-designs are appropriate for decoupling even if the dimension of this system is large.
Thomas-Peter, N, Langford, NK, Datta, A, Zhang, L, Smith, BJ, Spring, JB, Metcalf, BJ, Coldenstrodt-Ronge, HB, Hu, M, Nunn, J & Walmsley, IA 2011, 'Integrated Photonic Sensing', New J. Phys., vol. 13, p. 055024.
Loss is a critical roadblock to achieving photonic quantum-enhanced technologies. We explore a modular platform for implementing integrated photonics experiments and consider the effects of loss at different stages of these experiments, including state preparation, manipulation and measurement. We frame our discussion mainly in the context of quantum sensing and focus particularly on the use of loss-tolerant Holland-Burnett states for optical phase estimation. In particular, we discuss spontaneous four-wave mixing in standard birefringent fibre as a source of pure, heralded single photons and present methods of optimising such sources. We also outline a route to programmable circuits which allow the control of photonic interactions even in the presence of fabrication imperfections and describe a ratiometric characterisation method for beam splitters which allows the characterisation of complex circuits without the need for full process tomography. Finally, we present a framework for performing state tomography on heralded states using lossy measurement devices. This is motivated by a calculation of the effects of fabrication imperfections on precision measurement using Holland-Burnett states.
Tomamichel, M & Hänggi, E 2011, 'The Link between Entropic Uncertainty and Nonlocality', J. Phys. A: Math. Theor., vol. 46, no. 5, p. 055301.
Two of the most intriguing features of quantum physics are the uncertainty principle and the occurrence of nonlocal correlations. The uncertainty principle states that there exist pairs of incompatible measurements on quantum systems such that their outcomes cannot both be predicted. On the other hand, nonlocal correlations of measurement outcomes at different locations cannot be explained by classical physics, but appear in the presence of entanglement. Here, we show that these two fundamental quantum effects are quantitatively related. Namely, we provide an entropic uncertainty relation for the outcomes of two binary measurements, where the lower bound on the uncertainty is quantified in terms of the maximum Clauser-Horne-Shimony-Holt value that can be achieved with these measurements. We discuss applications of this uncertainty relation in quantum cryptography, in particular, to certify quantum sources using untrusted devices.
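The Clauser-Horne-Shimony-Holt value that the uncertainty bound refers to can be evaluated in a short sketch, assuming the textbook correlation E(a, b) = cos(a - b) for a maximally entangled pair measured at angles a and b in the x-z plane; the angles below are the standard Tsirelson-optimal choice, not taken from the paper.

```python
import math

def corr(a, b):
    # Quantum correlation E(a, b) = cos(a - b) for a maximally
    # entangled pair measured at angles a and b in the x-z plane.
    return math.cos(a - b)

def chsh(a0, a1, b0, b1):
    # CHSH combination of the four correlators.
    return corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)

# Tsirelson-optimal angles attain the quantum maximum 2*sqrt(2),
# beyond the classical (local hidden variable) bound of 2.
S = chsh(0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
```

Any S above 2 certifies nonlocal correlations, and in the paper's relation a larger S forces a smaller lower bound on the measurement uncertainty.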
Tomamichel, M, Lim, CCW, Gisin, N & Renner, R 2011, 'Tight Finite-Key Analysis for Quantum Cryptography', Nat. Commun., vol. 3, p. 634.
Despite enormous progress both in theoretical and experimental quantum cryptography, the security of most current implementations of quantum key distribution is still not established rigorously. One of the main problems is that the security of the final key is highly dependent on the number, M, of signals exchanged between the legitimate parties. While, in any practical implementation, M is limited by the available resources, existing security proofs are often only valid asymptotically for unrealistically large values of M. Here, we demonstrate that this gap between theory and practice can be overcome using a recently developed proof technique based on the uncertainty relation for smooth entropies. Specifically, we consider a family of Bennett-Brassard 1984 quantum key distribution protocols and show that security against general attacks can be guaranteed already for moderate values of M.
Winkler, S, Tomamichel, M, Hengl, S & Renner, R 2011, 'Impossibility of Growing Quantum Bit Commitments', Phys. Rev. Lett., vol. 107, no. 9, p. 090502.
Quantum key distribution (QKD) is often, more correctly, called key growing. Given a short key as a seed, QKD enables two parties, connected by an insecure quantum channel, to generate a secret key of arbitrary length. Conversely, no key agreement is possible without access to an initial key. Here, we consider another fundamental cryptographic task, commitments. While, similar to key agreement, commitments cannot be realized from scratch, we ask whether they may be grown. That is, given the ability to commit to a fixed number of bits, is there a way to augment this to commitments to strings of arbitrary length? Using recently developed information-theoretic techniques, we answer this question in the negative.
Ying, M, Yu, N, Feng, Y & Duan, R 2011, 'Verification of Quantum Programs', CoRR, vol. abs/1106.4063, no. 4, pp. 1085-1093.
View/Download from: Publisher's site
View description>>
© Copyright 2018, Institute of Software, the Chinese Academy of Sciences. All rights reserved. With the rapid development of quantum hardware, people tend to believe that special-purpose quantum computers with more than 100 qubits will be available in 5 to 10 years. It is conceivable that, once this becomes a reality, the development of quantum software will be crucial in harnessing the power of quantum computers. However, due to the distinctive features of quantum mechanics, such as the no-cloning of quantum information and the nonlocal effect of entanglement, developing correct and efficient quantum programs and communication protocols is a challenging issue. Formal verification methods, particularly model checking techniques, have proven effective in classical software design and system modelling. Therefore, formal verification of quantum software has received increasing attention recently. This article reviews recent research findings in the verification of both sequential quantum programs and quantum communication protocols, with the focus placed on the work of the two authors' research groups. Future directions and challenges in this area are also discussed.
Yu, N, Duan, R & Ying, M 2011, 'Any 2 ⊗ n subspace is locally distinguishable', PHYSICAL REVIEW A, vol. 84, no. 1, pp. 1-3.
View/Download from: Publisher's site
View description>>
A subspace of a multipartite Hilbert space is said to be locally indistinguishable if any orthonormal basis of this subspace cannot be perfectly distinguished by local operations and classical communication. Previously it was shown that any m ⊗ n bipartite system with m > 2 and n > 2 has a locally indistinguishable subspace. However, it has been an open problem since 2005 whether there is a locally indistinguishable bipartite subspace with a qubit subsystem. We settle this problem in the negative by showing that any 2 ⊗ n bipartite subspace contains a basis that is locally distinguishable. As an interesting application, we show that any quantum channel with two Kraus operators has optimal environment-assisted classical capacity.
Arefin, AS, Inostroza-Ponta, M, Mathieson, L, Berretta, R & Moscato, P 2011, 'Clustering Nodes in Large-Scale Biological Networks Using External Memory Algorithms', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Algorithms and Architectures for Parallel Processing, Springer Berlin Heidelberg, Melbourne, Australia, pp. 375-386.
View/Download from: Publisher's site
View description>>
Novel analytical techniques have dramatically enhanced our understanding of many application domains, including biological networks inferred from gene expression studies. However, there are clear computational challenges associated with the large datasets generated from these studies. The algorithmic solution of some NP-hard combinatorial optimization problems that naturally arise in the analysis of large networks is difficult without specialized computer facilities (i.e. supercomputers). In this work, we address the data clustering problem of large-scale biological networks with a polynomial-time algorithm that uses reasonable computing resources and is limited only by the available memory. We have adapted and improved the MSTkNN graph partitioning algorithm and redesigned it to take advantage of external memory (EM) algorithms. We evaluate the scalability and performance of our proposed algorithm on a well-known breast cancer microarray study and its associated dataset. © 2011 Springer-Verlag.
Babai, L, Codenotti, P, Grochow, JA & Qiao, Y 2011, 'Code Equivalence and Group Isomorphism', Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, San Francisco, USA, pp. 1395-1408.
View/Download from: Publisher's site
View description>>
The isomorphism problem for groups given by their multiplication tables has long been known to be solvable in time n^(log n + O(1)). The decades-old quest for a polynomial-time algorithm has focused on the very difficult case of class-2 nilpotent groups (groups whose quotient by their center is abelian), with little success. In this paper we consider the opposite end of the spectrum and initiate a more hopeful program to find a polynomial-time algorithm for semisimple groups, defined as groups without abelian normal subgroups. First we prove that the isomorphism problem for this class can be solved in time n^(O(log log n)). We then identify certain bottlenecks to polynomial-time solvability and give a polynomial-time solution to a rich subclass, namely the semisimple groups where each minimal normal subgroup has a bounded number of simple factors. We relate the results to the filtration of groups introduced by Babai and Beals (1999). One of our tools is an algorithm for equivalence of (not necessarily linear) codes in simply-exponential time in the length of the code, obtained by modifying Luks's algorithm for hypergraph isomorphism in simply-exponential time in the number of vertices (FOCS 1999). We comment on the complexity of the closely related problem of permutational isomorphism of permutation groups.
Bremner, MJ 2011, 'Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy', 14th Workshop on Quantum Information Processing, Singapore.
View description>>
Quantum Information Processing (QIP) is a rapidly developing field of research spanning both physics and computer science. As the name implies, the field extends information processing (including computing and cryptography) to physical regimes where quantum effects become significant. QIP 2011 was the fourteenth workshop on theoretical aspects of quantum computing, quantum cryptography, and quantum information in a series that started in Aarhus in 1998 and was held 2010 at ETH Zurich, Switzerland. QIP 2011 featured a tutorial programme, invited talks, contributed talks, and a poster session. In addition, there was a rump session with short informal talks.
Devitt, SJ, Stephens, AM, Munro, WJ & Nemoto, K 2011, 'The optical quantum computer: how big and how fast?', SPIE Proceedings, SPIE Optical Engineering + Applications, SPIE, San Diego, CA.
View/Download from: Publisher's site
Feng, Y, Duan, R & Ying, M 2011, 'Bisimulation for Quantum Processes', ACM SIGPLAN NOTICES, ACM-SIGACT Symposium on Principles of Programming Languages, ACM, Austin, Texas, USA, pp. 523-534.
View/Download from: Publisher's site
View description>>
Quantum cryptographic systems have been commercially available, with a striking advantage over classical systems that their security and ability to detect the presence of eavesdropping are provable based on the principles of quantum mechanics. On the other hand, quantum protocol designers may commit many more faults than classical protocol designers since human intuition is much better adapted to the classical world than the quantum world. To offer formal techniques for modeling and verification of quantum protocols, several quantum extensions of process algebra have been proposed. One of the most serious issues in quantum process algebra is to discover a quantum generalization of the notion of bisimulation, which occupies a central position in process algebra, that is preserved by parallel composition in the presence of quantum entanglement, a phenomenon with no counterpart in classical computation. Quite a few versions of bisimulation have been defined for quantum processes in the literature, but in the best case they are only proved to be preserved by parallel composition of purely quantum processes where no classical communications are involved. Many quantum cryptographic protocols, however, employ the LOCC (Local Operations and Classical Communications) scheme, where classical communications must be explicitly specified. So, a notion of bisimulation preserved by parallel composition in the circumstance of both classical and quantum communications is crucial for the process algebra approach to verification of quantum cryptographic protocols. In this paper we introduce a novel notion of bisimulation for quantum processes and prove that it is congruent with respect to various process algebra combinators including parallel composition, even when both classical and quantum communications are present. We also establish some basic algebraic laws for this bisimulation.
Fujiwara, Y & Hsieh, M-H 2011, 'Adaptively correcting quantum errors with entanglement', IEEE International Symposium on Information Theory - Proceedings, IEEE International Symposium on Information Theory, IEEE, St Petersburg, pp. 279-283.
View/Download from: Publisher's site
View description>>
Contrary to the assumption that most quantum error-correcting codes (QECC) make, it is expected that phase errors are much more likely than bit errors in physical devices. By employing the entanglement-assisted stabilizer formalism, we develop a new kind of error-correcting protocol which can flexibly trade error correction abilities between the two types of errors, such that high error correction performance is achieved both in symmetric and in asymmetric situations. The characteristics of the QECCs can be optimized in an adaptive manner during information transmission. The proposed entanglement-assisted QECCs require only one ebit regardless of the degree of asymmetry at a given moment and can be decoded in polynomial time.
Lee, T & Roland, J 2012, 'A strong direct product theorem for quantum query complexity', 2012 IEEE Conference on Computational Complexity (CCC), IEEE, Porto, Portugal, pp. 236-246.
View/Download from: Publisher's site
View description>>
We show that quantum query complexity satisfies a strong direct product theorem. This means that computing k copies of a function with less than k times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in k. For a Boolean function f we also show an XOR lemma: computing the parity of k copies of f with less than k times the queries needed for one copy implies that the advantage over random guessing will be exponentially small. We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity.
Liwicki, M, Malik, MI, Heuvel, CEVD, Chen, X, Berger, C, Stoel, R, Blumenstein, M & Found, B 2011, 'Signature Verification Competition for Online and Offline Skilled Forgeries (SigComp2011)', 2011 International Conference on Document Analysis and Recognition, 2011 International Conference on Document Analysis and Recognition (ICDAR), IEEE, Beijing, China, pp. 1480-1484.
View/Download from: Publisher's site
View description>>
The Netherlands Forensic Institute and the Institute for Forensic Science in Shanghai are in search of a signature verification system that can be implemented in forensic casework and research to objectify results. We want to bridge the gap between recent technological developments and forensic casework. In collaboration with the German Research Center for Artificial Intelligence we have organized a signature verification competition on datasets with two scripts (Dutch and Chinese) in which we asked to compare questioned signatures against a set of reference signatures. We have received 12 systems from 5 institutes and performed experiments on online and offline Dutch and Chinese signatures. For evaluation, we applied methods used by Forensic Handwriting Examiners (FHEs) to assess the value of the evidence, i.e., we took the likelihood ratios more into account than in previous competitions. The data set was quite challenging and the results are very interesting. © 2011 IEEE.
Munro, WJ, Devitt, SJ & Nemoto, K 2011, 'Designing quantum repeaters and networks', SPIE Proceedings, SPIE Optical Engineering + Applications, SPIE, San Diego, CA.
View/Download from: Publisher's site
Munro, WJ, Devitt, SJ, Stephens, AM, Nemoto, K, Ralph, T & Lam, PK 2011, 'High Bandwidth Quantum Communication', AIP Conference Proceedings, QUANTUM COMMUNICATION, MEASUREMENT AND COMPUTING (QCMC): The Tenth International Conference, AIP, Brisbane, AUSTRALIA.
View/Download from: Publisher's site
Munro, WJ, Stephens, AM, Devitt, SJ & Nemoto, K 2011, 'Quantum communication without memories or shared entanglement', Optics InfoBase Conference Papers, pp. 413-414.
View description>>
Long-range quantum communication and its associated repeater network are a necessity for any future quantum Internet. The performance of such networks is currently limited by the time it takes to establish entangled links between the appropriate parties on the network. Typically this scales as the round trip time for a signal to be transmitted between those parties. We present in this talk the design of a communications network for the direct transfer of quantum information where the rate at which quantum data can be transmitted is not limited by the distance the information needs to be sent but instead by the time to perform efficient local gate operations. Our scheme requires neither the establishment of entanglement between communication nodes nor the use of long-lived quantum memories. Our scheme thus provides a much higher communications rate than in standard teleportation based schemes. © 2011 AOS.
Nguyen, V & Blumenstein, M 2011, 'An Application of the 2D Gaussian Filter for Enhancing Feature Extraction in Off-line Signature Verification', 11TH INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION (ICDAR 2011), International Conference on Document Analysis and Recognition (ICDAR), IEEE, Beijing, China, pp. 339-343.
View/Download from: Publisher's site
View description>>
Similar to many other pattern recognition problems, feature extraction contributes significantly to the overall performance of an off-line signature verification system. To be successful, a feature extraction technique must be tolerant to different types of variation whilst preserving essential information of input patterns. In this paper, we describe a grid-based feature extraction technique that utilises directional information extracted from the signature contour, i.e. the chain code histogram. Our experimental results for signature verification indicated that, by applying a suitable 2D Gaussian filter to the matrices containing the chain code histograms, an average error rate (AER) of 13.90% can be obtained whilst maintaining the false acceptance rate (FAR) for random forgeries as low as 0.02%. These figures are comparable to or better than those reported for other state-of-the-art feature extraction techniques such as the Modified Direction Feature (MDF) and the Gradient feature. © 2011 IEEE.
Pal, S, Alireza, A, Pal, U & Blumenstein, M 2011, 'Off-line Signature Identification Using Background and Foreground Information', 2011 International Conference on Digital Image Computing: Techniques and Applications, 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Noosa, QLD, Australia, pp. 672-677.
View/Download from: Publisher's site
View description>>
Biometric systems play an important role in the field of information security as they are essential for user authentication. Automatic signature recognition and verification is one of several biometric techniques currently receiving renewed interest for verifying the identities of individuals. Signatures provide a secure means for confirmation and authorization in legal documents, so signature identification and verification have become essential components in automating the rapid processing of documents containing embedded signatures. In this paper, a technique for a bi-script off-line signature identification system is proposed. In the proposed signature identification system, the signatures of English and Bengali (Bangla) are considered for the identification process. Different features such as under-sampled bitmaps, modified chain-code direction features and gradient features computed from both background and foreground components are employed for this purpose. Support Vector Machines (SVMs) and Nearest Neighbour (NN) techniques are considered as classifiers for signature identification in the proposed system. A database of 1554 English signatures and 1092 Bengali signatures is used to generate the experimental results. Various results based on different features are calculated and analysed. The highest accuracies of 99.41%, 98.45% and 97.75% are obtained based on the modified chain-code direction, under-sampled bitmap and gradient features respectively, using 1800 (1100 English + 700 Bengali) samples for training and 846 (454 English + 392 Bengali) samples for testing. © 2011 IEEE.
Pal, S, Blumenstein, M & Pal, U 2011, 'Off-line signature verification systems', Proceedings of the International Conference & Workshop on Emerging Trends in Technology - ICWET '11, the International Conference & Workshop, ACM Press, Mumbai, Maharashtra, India, pp. 652-657.
View/Download from: Publisher's site
View description>>
Signatures are extensively used as a means of personal verification. Manual signature-based authentication of a large number of documents is a very difficult and time-consuming task. Consequently, for many years in the field of protected communication and financial applications, we have observed an explosive growth in biometric personal authentication systems that are closely connected with measurable physical unique characteristics (hand geometry, iris scan, fingerprints or DNA) or behavioural features. Human signatures provide a secure means for confirmation and authorization in legal documents, so automatic signature verification has become an essential component. In order to convey the state of the art in the field to researchers, in this paper we present a survey of off-line signature verification systems. Copyright © 2011 ACM.
Qiao, Y, Jayalal, SMN & Tang, B 2011, 'On isomorphism testing of groups with normal Hall subgroups', Leibniz International Proceedings in Informatics, LIPIcs, 28th International Symposium on Theoretical Aspects of Computer Science (STACS), Schloss Dagstuhl - Leibniz Center for Informatics, Dortmund, Germany, pp. 567-578.
View/Download from: Publisher's site
View description>>
A normal Hall subgroup N of a group G is a normal subgroup whose order is coprime with its index. The Schur-Zassenhaus theorem states that every normal Hall subgroup has a complement subgroup, that is, a set of coset representatives H which also forms a subgroup of G. In this paper, we present a framework to test isomorphism of groups with at least one normal Hall subgroup, when groups are given as multiplication tables. To establish the framework, we first observe that a proof of the Schur-Zassenhaus theorem is constructive, and formulate a necessary and sufficient condition for testing isomorphism in terms of the associated actions of the semidirect products, and isomorphisms of the normal parts and complement parts. We then focus on the case when the normal subgroup is abelian. Utilizing basic facts of representation theory of finite groups and a technique by Le Gall in [9], we first get an efficient isomorphism testing algorithm when the complement has a bounded number of generators. For the case when the complement subgroup is elementary abelian, which does not necessarily have a bounded number of generators, we obtain a polynomial-time isomorphism testing algorithm by reducing to the generalized code isomorphism problem. A solution to the latter can be obtained by a mild extension of the singly exponential (in the number of coordinates) time algorithm for the code isomorphism problem developed recently by Babai in [3]. En route to obtaining the above reduction, we study the following computational problem in representation theory of finite groups: given two representations ρ and τ of a group H over Z_p^d, p a prime, determine if there exists an automorphism φ: H → H such that the induced representation ρ^φ = ρ ∘ φ and τ are equivalent, in time poly(|H|, p^d). © Youming Qiao, Jayalal Sarma M.N., and Bangsheng Tang.
Wilde, MM & Hsieh, M-H 2011, 'Entanglement boosts quantum turbo codes', 2011 IEEE International Symposium on Information Theory Proceedings, 2011 IEEE International Symposium on Information Theory - ISIT, IEEE, St. Petersburg, Russia, pp. 445-449.
View/Download from: Publisher's site
View description>>
One of the unexpected breakdowns in the existing theory of quantum serial turbo coding is that a quantum convolutional encoder cannot simultaneously be recursive and non-catastrophic. These properties are essential for a quantum turbo code to have an unbounded minimum distance and for its iterative decoding algorithm to converge, respectively. Here, we show that the entanglement-assisted paradigm gives a theoretical and simulated turbo boost to these codes, in the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can possess both of the aforementioned desirable properties, and simulation results indicate that entanglement-assisted turbo codes can operate reliably in a noise regime 5.5 dB beyond that of standard quantum turbo codes. Entanglement is the resource that enables a convolutional encoder to satisfy both properties because an encoder acting on only information qubits, classical bits, gauge qubits, and ancilla qubits cannot simultaneously satisfy them. Simulation results demonstrate that interleaved serial concatenation of EAQ convolutional encoders leads to a powerful code construction with excellent performance on a memoryless depolarizing channel. © 2011 IEEE.