Chen, L, Chitambar, E, Duan, R, Ji, Z & Winter, A 2010, 'Tensor Rank and Stochastic Entanglement Catalysis for Multipartite Pure States', Physical Review Letters, vol. 105, no. 20, p. 200501.
View/Download from: Publisher's site
View description>>
The tensor rank (also known as generalized Schmidt rank) of multipartite pure states plays an important role in the study of entanglement classifications and transformations. We employ powerful tools from the theory of homogeneous polynomials to investigate the tensor rank of symmetric states such as the tripartite state $\ket{W_3}=\tfrac{1}{\sqrt{3}}(\ket{100}+\ket{010}+\ket{001})$ and its $N$-partite generalization $\ket{W_N}$. Previous tensor rank estimates are dramatically improved and we show that (i) three copies of $\ket{W_3}$ have rank either 15 or 16, (ii) two copies of $\ket{W_N}$ have rank $3N-2$, and (iii) $n$ copies of $\ket{W_N}$ have rank $O(N)$. A remarkable consequence of these results is that certain multipartite transformations, impossible even probabilistically, can become possible when performed in multiple-copy bunches or when assisted by some catalyzing state. This effect is impossible for bipartite pure states.
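For reference, the $N$-partite generalization mentioned above is the standard $W$ state (the abstract itself only spells out the $N=3$ case):

```latex
\ket{W_N} \;=\; \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \ket{0}^{\otimes (k-1)} \ket{1} \ket{0}^{\otimes (N-k)},
```

i.e. an equal superposition of all $N$-qubit computational basis states of Hamming weight one.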
Chen, X, Duan, R, Ji, Z & Zeng, B 2010, 'Quantum state reduction for universal measurement based computation', Physical Review Letters, vol. 105, no. 2, p. 020502.
View/Download from: Publisher's site
View description>>
Measurement based quantum computation (MBQC), which requires only single-particle measurements on a universal resource state to achieve the full power of quantum computing, has been recognized as one of the most promising models for the physical realization of quantum computers. Despite considerable progress in the last decade, it remains a great challenge to search for new universal resource states with naturally occurring Hamiltonians, and to better understand the entanglement structure of these kinds of states. Here we show that most of the resource states currently known can be reduced to the cluster state, the first known universal resource state, via adaptive local measurements at a constant cost. This new quantum state reduction scheme provides simpler proofs of universality of resource states and opens up plenty of space for the search for new resource states, including an example based on the one-parameter deformation of the AKLT state studied in [Commun. Math. Phys. 144, 443 (1992)] by M. Fannes et al. about twenty years ago.
Chitambar, E, Duan, R & Shi, Y 2010, 'Multipartite-to-bipartite entanglement transformations and polynomial identity testing', Physical Review A, vol. 81, no. 5, pp. 1-4.
View/Download from: Publisher's site
View description>>
We consider the problem of deciding if some multiparty entangled pure state can be converted, with a nonzero success probability, into a given bipartite pure state shared between two specified parties through local quantum operations and classical communication.
Datta, N & Hsieh, M-H 2010, 'Universal coding for transmission of private information', Journal of Mathematical Physics, vol. 51, no. 12, p. 122202.
View/Download from: Publisher's site
View description>>
We consider the scenario in which Alice transmits private classical messages to Bob via a classical-quantum channel, part of whose output is intercepted by an eavesdropper, Eve. We prove the existence of a universal coding scheme under which Alice's messages can be inferred correctly by Bob, and yet Eve learns nothing about them. The code is universal in the sense that it does not depend on specific knowledge of the channel. Prior knowledge of the probability distribution on the input alphabet of the channel, and bounds on the corresponding Holevo quantities of the output ensembles at Bob's and Eve's end, suffice.
Devitt, S 2010, 'Scalable quantum information processing and the optical topological quantum computer', Optics and Spectroscopy, vol. 108, no. 2, pp. 267-281.
View/Download from: Publisher's site
Duan, R, Severini, S & Winter, A 2010, 'Zero-error communication via quantum channels, non-commutative graphs and a quantum Lovász theta function', IEEE Transactions on Information Theory, vol. 59, no. 2, pp. 1164-1174.
View/Download from: Publisher's site
View description>>
We study the quantum channel version of Shannon's zero-error capacity problem. Motivated by recent progress on this question, we propose to consider a certain operator space as the quantum generalisation of the adjacency matrix, in terms of which the plain, quantum and entanglement-assisted capacity can be formulated, and for which we show some new basic properties. Most importantly, we define a quantum version of Lovász' famous theta function, as the norm-completion (or stabilisation) of a 'naive' generalisation of theta. We go on to show that this function upper bounds the number of entanglement-assisted zero-error messages, that it is given by a semidefinite programme, whose dual we write down explicitly, and that it is multiplicative with respect to the natural (strong) graph product. We explore various other properties of the new quantity, which reduces to Lovász' original theta in the classical case, give several applications, and propose to study the operator spaces associated to channels as 'non-commutative graphs', using the language of Hilbert modules.
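For context, the classical quantity being generalised here admits a well-known semidefinite-programming formulation (stated in one standard convention, e.g. Knuth's "sandwich theorem" form; the paper's quantum version is its operator-space extension):

```latex
\vartheta(G) \;=\; \max \;\operatorname{Tr}(JX)
\quad \text{s.t.} \quad X \succeq 0,\;\; \operatorname{Tr}(X) = 1,\;\;
X_{ij} = 0 \;\;\forall \{i,j\} \in E(G),
```

where $J$ is the all-ones matrix; this value always lies between the independence number and the chromatic number of the complement graph.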
Duan, R, Xin, Y & Ying, M 2010, 'Locally indistinguishable subspaces spanned by three-qubit unextendible product bases', Physical Review A, vol. 81, no. 3, pp. 1-10.
View/Download from: Publisher's site
View description>>
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that
Ferrie, C 2010, 'Quasi-probability representations of quantum theory with applications to quantum information science', Reports on Progress in Physics, vol. 74, no. 11, p. 116001.
View/Download from: Publisher's site
View description>>
This article comprises a review of both the quasi-probability representations of infinite-dimensional quantum theory (including the Wigner function) and the more recently defined quasi-probability representations of finite-dimensional quantum theory. We focus on both the characteristics and applications of these representations with an emphasis toward quantum information theory. We discuss the recently proposed unification of the set of possible quasi-probability representations via frame theory and then discuss the practical relevance of negativity in such representations as a criterion for quantumness.
Grassl, M, Ji, Z, Wei, Z & Zeng, B 2010, 'Quantum-capacity-approaching codes for the detected-jump channel', Physical Review A, vol. 82, no. 6.
View/Download from: Publisher's site
Hsieh, M-H & Le Gall, F 2010, 'NP-hardness of decoding quantum error-correction codes', Physical Review A, vol. 83, no. 5, p. 052331.
View/Download from: Publisher's site
View description>>
Though the theory of quantum error correction is intimately related to classical coding theory, in particular, one can construct quantum error correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or non-degenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems, and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Jain, R, Ji, Z, Upadhyay, S & Watrous, J 2010, 'QIP = PSPACE', Communications of the ACM, vol. 53, no. 12, pp. 102-109.
View/Download from: Publisher's site
View description>>
The interactive proof system model of computation has been studied extensively in computational complexity theory and theoretical cryptography for more than 25 years, and has driven the development of interesting new techniques and insights in those fields. This work considers the quantum interactive proof system model, which is the classical model's natural quantum computational analog. An exact characterization of the expressive power of quantum interactive proof systems is obtained: the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable with an ordinary classical computer using at most a polynomial amount of memory (or QIP = PSPACE in complexity-theoretic terminology). One striking implication of this characterization is that quantum computing provides no increase in computational power whatsoever over classical computing in the context of interactive proof systems.
Ji, Z, Chen, J, Wei, Z & Ying, M 2010, 'The LU-LC conjecture is false', Quantum Information & Computation, vol. 10, no. 1-2, pp. 97-108.
View description>>
The LU-LC conjecture is an important open problem concerning the structure of entanglement of states described in the stabilizer formalism. It states that two local unitary equivalent stabilizer states are also local Clifford equivalent. If this conjecture were true, the local equivalence of stabilizer states would be extremely easy to characterize. Unfortunately, however, based on the recent progress made by Gross and Van den Nest, we find that the conjecture is false.
Lee, T, Mittal, R, Reichardt, BW, Spalek, R & Szegedy, M 2010, 'Quantum query complexity of state conversion', Proceedings of the 52nd IEEE Symposium on Foundations of Computer Science (FOCS 2011), pp. 344-353.
View/Download from: Publisher's site
View description>>
State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.
Li, Y, Duan, R & Ying, M 2010, 'Local unambiguous discrimination with remaining entanglement', Physical Review A, vol. 82, no. 3, pp. 1-6.
View/Download from: Publisher's site
View description>>
A bipartite state, which is secretly chosen from a finite set of known entangled pure states, cannot immediately be useful in standard quantum information processing tasks. To effectively make use of the entanglement contained in this unknown state, we i
Liu, W, Zhang, X, Li, S & Ying, M 2010, 'Reasoning about cardinal directions between extended objects', Artificial Intelligence, vol. 174, no. 12-13, pp. 951-983.
View/Download from: Publisher's site
View description>>
Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a relation model, known as the cardinal direction calculus (CDC), for representing direction relations between connected plane regions. The CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking the consistency of complete networks of basic CDC constraints, and proves that reasoning with the CDC is in general an NP-complete problem. For a consistent complete network of basic CDC constraints, our algorithm returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to check the consistency of complete networks of basic cardinal constraints between possibly disconnected regions.
Mathieson, L 2010, 'The parameterized complexity of editing graphs for bounded degeneracy', Theoretical Computer Science, vol. 411, no. 34-36, pp. 3181-3187.
View/Download from: Publisher's site
Mellor, D, Prieto, E, Mathieson, L & Moscato, P 2010, 'A Kernelisation Approach for Multiple d-Hitting Set and Its Application in Optimal Multi-Drug Therapeutic Combinations', PLoS ONE, vol. 5, no. 10, pp. e13055-e13055.
View/Download from: Publisher's site
View description>>
Therapies consisting of a combination of agents are an attractive proposition, especially in the context of diseases such as cancer, which can manifest with a variety of tumor types in a single case. However, uncovering usable drug combinations is expensive both financially and temporally. By employing computational methods to identify candidate combinations with a greater likelihood of success we can avoid these problems, even when the amount of data is prohibitively large. HITTING SET is a combinatorial problem that has useful applications across many fields; however, as it is NP-complete, it is traditionally considered hard to solve exactly. We introduce a more general version of the problem, (α,β,d)-HITTING SET, which allows more precise control over how and what the hitting set targets. Employing the framework of Parameterized Complexity we show that despite being NP-complete, the (α,β,d)-HITTING SET problem is fixed-parameter tractable with a kernel of size O(αdk^d) when we parameterize by the size k of the hitting set and the maximum number α of the minimum number of hits, and taking the maximum degree d of the target sets as a constant. We demonstrate the application of this problem to multiple drug selection for cancer therapy, showing the flexibility of the problem in tailoring such drug sets. The fixed-parameter tractability result indicates that for low values of the parameters the problem can be solved quickly using exact methods. We also demonstrate that the problem is indeed practical, with computation times on the order of 5 seconds, as compared to previous HITTING SET applications using the same dataset which exhibited times on the order of 1 day, even with relatively relaxed notions for what constitutes a low value for the parameters. Furthermore, the existence of a kernelization for (α,β,d)-HITTING SET indicates that the problem is readily scalable to large datasets. © 2010 Mellor et al.
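To illustrate the kind of fixed-parameter tractability invoked here, the sketch below implements the classic bounded-search-tree algorithm for plain d-HITTING SET (not the paper's (α,β,d) generalization; the function name and example instance are my own):

```python
from typing import FrozenSet, List, Optional, Set

def hitting_set(sets: List[FrozenSet[int]], k: int) -> Optional[Set[int]]:
    """Return a set of at most k elements intersecting every input set,
    or None if no such set exists. Branching on the (at most d) elements
    of an unhit set yields a search tree with O(d^k) nodes, so the running
    time is fast whenever the parameters d and k are small."""
    if not sets:
        return set()          # every set already hit
    if k == 0:
        return None           # budget exhausted, but sets remain unhit
    target = min(sets, key=len)   # branch on a smallest unhit set
    for elem in target:
        # committing to elem removes every set it hits
        rest = [s for s in sets if elem not in s]
        sub = hitting_set(rest, k - 1)
        if sub is not None:
            sub.add(elem)
            return sub
    return None

if __name__ == "__main__":
    instance = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4, 5})]
    print(hitting_set(instance, 2))   # prints a hitting set of size <= 2
    print(hitting_set(instance, 1))   # prints None: one element cannot hit all sets
```

The exponential cost is confined to the parameter k (and the set size d), which is exactly why, as the abstract notes, low parameter values make exact solution practical.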
Munro, WJ, Harrison, KA, Stephens, AM, Devitt, SJ & Nemoto, K 2010, 'From quantum multiplexing to high-performance quantum networking', Nature Photonics, vol. 4, no. 11, pp. 792-796.
View/Download from: Publisher's site
Nunn, J, Dorner, U, Michelberger, P, Reim, KF, Lee, KC, Langford, NK, Walmsley, IA & Jaksch, D 2010, 'Quantum memory in an optical lattice', Physical Review A, vol. 82, no. 2.
View/Download from: Publisher's site
Reim, KF, Nunn, J, Lorenz, VO, Sussman, BJ, Lee, KC, Langford, NK, Jaksch, D & Walmsley, IA 2010, 'Towards high-speed optical quantum memories', Nature Photonics, vol. 4, no. 4, pp. 218-221.
View/Download from: Publisher's site
Rizzi, R, Mahata, P, Mathieson, L & Moscato, P 2010, 'Hierarchical Clustering Using the Arithmetic-Harmonic Cut: Complexity and Experiments', PLoS ONE, vol. 5, no. 12, pp. e14067-e14067.
View/Download from: Publisher's site
View description>>
Clustering, particularly hierarchical clustering, is an important method for understanding and analysing data across a wide variety of knowledge domains with notable utility in systems where the data can be classified in an evolutionary context. This paper introduces a new hierarchical clustering problem defined by a novel objective function we call the arithmetic-harmonic cut. We show that the problem of finding such a cut is NP-hard and APX-hard but is fixed-parameter tractable, which indicates that although the problem is unlikely to have a polynomial time algorithm (even for approximation), exact parameterized and local search based techniques may produce workable algorithms. To this end, we implement a memetic algorithm for the problem and demonstrate the effectiveness of the arithmetic-harmonic cut on a number of datasets including a cancer type dataset and a coronavirus dataset. We show favorable performance compared to currently used hierarchical clustering techniques such as k-MEANS, Graclus and NORMALIZED-CUT. The arithmetic-harmonic cut metric overcomes difficulties other hierarchical methods have in representing both intercluster differences and intracluster similarities. © 2010 Rizzi et al.
Sheridan, L, Le, TP & Scarani, V 2010, 'Finite-key security against coherent attacks in quantum key distribution', New Journal of Physics, vol. 12, p. 123019.
View description>>
The work by Christandl, König and Renner [Phys. Rev. Lett. 102, 020504 (2009)] provides in particular the possibility of studying unconditional security in the finite-key regime for all discrete-variable protocols. We spell out this bound from their general formalism. Then we apply it to the study of a recently proposed protocol [Laing et al., Phys. Rev. A 82, 012304 (2010)]. This protocol is meaningful when the alignment of Alice's and Bob's reference frames is not monitored and may vary with time. In this scenario, the notion of asymptotic key rate has hardly any operational meaning, because if one waits too long, the average correlations are smeared out and no security can be inferred. Therefore, finite-key analysis is necessary to find the maximal achievable secret key rate and the corresponding optimal number of signals.
Shih, Y-C, Hsieh, M-H & Wei, H-Y 2010, 'Multicasting Homogeneous and Heterogeneous Quantum States in Quantum Networks', Nano Communication Networks, vol. 1, no. 4, pp. 273-282.
View/Download from: Publisher's site
View description>>
In this paper, we target the practical implementation issues of quantum multicast networks. First, we design a recursive lossless compression that allows us to control the trade-off between the circuit complexity and the dimension of the compressed quantum state. We give a formula that describes the trade-off, and further analyze how the formula is affected by the controlling parameter of the recursive procedure. Our recursive lossless compression can be applied in a quantum multicast network where the source outputs homogeneous quantum states (many copies of a quantum state) to a set of destinations through a bottleneck. Such a recursive lossless compression is extremely useful in the current situation where the technology of producing large-scale quantum circuits is limited. Second, we develop two lossless compression schemes that work for heterogeneous quantum states (many copies of a set of quantum states) when the set of quantum states satisfies a certain structure. The heterogeneous compression schemes provide extra compressing power over the homogeneous compression scheme. Finally, we realize our heterogeneous compression schemes in several quantum multicast networks, including the single-source multi-terminal model, the multi-source multi-terminal model, and the ring networks. We then analyze the bandwidth requirements for these network models.
Tomamichel, M & Renner, R 2010, 'The Uncertainty Relation for Smooth Entropies', Physical Review Letters, vol. 106, no. 11, p. 110506.
View/Download from: Publisher's site
View description>>
Uncertainty relations give upper bounds on the accuracy by which the outcomes of two incompatible measurements can be predicted. While established uncertainty relations apply to cases where the predictions are based on purely classical data (e.g., a description of the system's state before measurement), an extended relation which remains valid in the presence of quantum information has been proposed recently [Berta et al., Nat. Phys. 6, 659 (2010)]. Here, we generalize this uncertainty relation to one formulated in terms of smooth entropies. Since these entropies measure operational quantities such as extractable secret key length, our uncertainty relation is of immediate practical use. To illustrate this, we show that it directly implies security of a family of quantum key distribution protocols including BB84. Our proof remains valid even if the measurement devices used in the experiment deviate arbitrarily from the theoretical model.
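For orientation, the tripartite von Neumann entropic relation of Berta et al. that is being smoothed here reads, in one standard form,

```latex
H(X \mid B) \;+\; H(Z \mid C) \;\ge\; \log_2 \frac{1}{c},
\qquad
c \;=\; \max_{x,z} \bigl| \langle \psi_x | \phi_z \rangle \bigr|^2,
```

where $X$ and $Z$ are the outcomes of measurements $\{\ket{\psi_x}\}$ and $\{\ket{\phi_z}\}$ on a system $A$, $B$ and $C$ are quantum memories held by the two predicting parties, and $c$ is the maximal overlap of the two measurements.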
Tomamichel, M, Schaffner, C, Smith, A & Renner, R 2010, 'Leftover Hashing Against Quantum Side Information', IEEE Transactions on Information Theory, vol. 57, no. 8, pp. 5524-5535.
View/Download from: Publisher's site
View description>>
The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, we prove a (strictly) more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system. Furthermore, our result applies to arbitrary δ-almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
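As a concrete classical illustration of the two-universal property the lemma relies on, the textbook Carter-Wegman family h_{a,b}(x) = ((ax + b) mod p) mod m bounds the collision probability of any fixed pair of distinct inputs by roughly 1/m. The sketch below (my own illustration; p = 101 and m = 10 are arbitrary small choices) verifies this exactly by enumerating every key:

```python
from fractions import Fraction

def collision_probability(p: int, m: int, x: int, y: int) -> Fraction:
    """Exact collision probability of h_{a,b}(z) = ((a*z + b) % p) % m
    over a uniformly random key (a, b) with 1 <= a < p and 0 <= b < p."""
    assert x != y and 0 <= x < p and 0 <= y < p
    collisions = 0
    for a in range(1, p):
        for b in range(p):
            if ((a * x + b) % p) % m == ((a * y + b) % p) % m:
                collisions += 1
    return Fraction(collisions, (p - 1) * p)

if __name__ == "__main__":
    p, m = 101, 10   # p prime, m = size of the output range
    prob = collision_probability(p, m, x=3, y=7)
    # Two-universality: the collision probability stays at or below 1/m.
    print(prob, float(prob) <= 1 / m)
```

The quantum generalization in the paper keeps exactly this hashing machinery and strengthens only the security statement, allowing the adversary's side information to be quantum.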
Wilde, MM & Hsieh, M-H 2010, 'Public and private resource trade-offs for a quantum channel', Quantum Information Processing, vol. 11, no. 6, pp. 1465-1501.
View/Download from: Publisher's site
View description>>
Collins and Popescu realized a powerful analogy between several resources in classical and quantum information theory. The Collins-Popescu analogy states that public classical communication, private classical communication, and secret key interact with one another somewhat similarly to the way that classical communication, quantum communication, and entanglement interact. This paper discusses the information-theoretic treatment of this analogy for the case of noisy quantum channels. We determine a capacity region for a quantum channel interacting with the noiseless resources of public classical communication, private classical communication, and secret key. We then compare this region with the classical-quantum-entanglement region from our prior efforts and explicitly observe the information-theoretic consequences of the strong correlations in entanglement and the lack of a super-dense coding protocol in the public-private-secret-key setting. The region simplifies for several realistic, physically motivated channels such as entanglement-breaking channels, Hadamard channels, and quantum erasure channels, and we are able to compute and plot the region for several examples of these channels.
Wilde, MM & Hsieh, M-H 2010, 'The quantum dynamic capacity formula of a quantum channel', Quantum Information Processing, vol. 11, no. 6, pp. 1431-1463.
View/Download from: Publisher's site
View description>>
The dynamic capacity theorem characterizes the reliable communication rates of a quantum channel when combined with the noiseless resources of classical communication, quantum communication, and entanglement. In prior work, we proved the converse part of this theorem by making contact with many previous results in the quantum Shannon theory literature. In this work, we prove the theorem with an 'ab initio' approach, using only the most basic tools in the quantum information theorist's toolkit: the Alicki-Fannes inequality, the chain rule for quantum mutual information, elementary properties of quantum entropy, and the quantum data processing inequality. The result is a simplified proof of the theorem that should be more accessible to those unfamiliar with the quantum Shannon theory literature. We also demonstrate that the 'quantum dynamic capacity formula' characterizes the Pareto optimal trade-off surface for the full dynamic capacity region. Additivity of this formula simplifies the computation of the trade-off surface, and we prove that its additivity holds for the quantum Hadamard channels and the quantum erasure channel. We then determine exact expressions for and plot the dynamic capacity region of the quantum dephasing channel, an example from the Hadamard class, and the quantum erasure channel.
Wilde, MM, Hsieh, M-H & Babar, Z 2010, 'Entanglement-assisted quantum turbo codes', IEEE Transactions on Information Theory, vol. 60, no. 2, pp. 1203-1222.
View/Download from: Publisher's site
View description>>
An unexpected breakdown in the existing theory of quantum serial turbo coding is that a quantum convolutional encoder cannot simultaneously be recursive and non-catastrophic. These properties are essential for quantum turbo code families to have a minimum distance growing with blocklength and for their iterative decoding algorithm to converge, respectively. Here, we show that the entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can possess both of the aforementioned desirable properties. We give several examples of EAQ convolutional encoders that are both recursive and non-catastrophic and detail their relevant parameters. We then modify the quantum turbo decoding algorithm of Poulin et al., in order to have the constituent decoders pass along only 'extrinsic information' to each other rather than a posteriori probabilities as in the decoder of Poulin et al., and this leads to a significant improvement in the performance of unassisted quantum turbo codes. Other simulation results indicate that entanglement-assisted turbo codes can operate reliably in a noise regime 4.73 dB beyond that of standard quantum turbo codes, when used on a memoryless depolarizing channel. Furthermore, several of our quantum turbo codes are within 1 dB or less of their hashing limits, so that the performance of quantum turbo codes is now on par with that of classical turbo codes. Finally, we prove that entanglement is the resource that enables a convolutional encoder to be both non-catastrophic and recursive, because an encoder acting on only information qubits, classical bits, gauge qubits, and ancilla qubits cannot simultaneously satisfy them.
Ying, M 2010, 'Quantum computation, quantum theory and AI', Artificial Intelligence, vol. 174, no. 2, pp. 162-176.
View/Download from: Publisher's site
View description>>
The main purpose of this paper is to examine some (potential) applications of quantum computation in AI and to review the interplay between quantum theory and AI. For readers who are not familiar with quantum computation, a brief introduction to it is given.
Yu, N, Chitambar, E, Guo, C & Duan, R 2010, 'Tensor rank of the tripartite state $\ket{W}^{\otimes n}$', Physical Review A, vol. 81, no. 1, pp. 1-3.
View/Download from: Publisher's site
View description>>
Tensor rank refers to the number of product states needed to express a given multipartite quantum state. Its nonadditivity as an entanglement measure has recently been observed. In this Brief Report, we estimate the tensor rank of multiple copies of the tripartite state $\ket{W}=\tfrac{1}{\sqrt{3}}(\ket{100}+\ket{010}+\ket{001})$. Both an upper bound and a lower bound of this rank are derived. In particular, it is proven that the rank of $\ket{W}^{\otimes 2}$ is 7, thus resolving a previously open problem. Some implications of this result are discussed in terms of transformation rates between $\ket{W}^{\otimes n}$ and multiple copies of the state $\ket{\mathrm{GHZ}}=\tfrac{1}{\sqrt{2}}(\ket{000}+\ket{111})$. © 2010 The American Physical Society.
Yu, N, Duan, R & Ying, M 2010, 'Optimal simulation of a perfect entangler', Physical Review A, vol. 81, no. 3, pp. 1-4.
View/Download from: Publisher's site
View description>>
A $2\otimes 2$ unitary operation is called a perfect entangler if it can generate a maximally entangled state from some unentangled input. We study the following question: How many runs of a given two-qubit entangling unitary operation are required to
Abed, HE, Margner, V & Blumenstein, M 2010, 'International Conference on Frontiers in Handwriting Recognition (ICFHR 2010) - Competitions Overview', Proceedings of the 12th International Conference on Frontiers in Handwriting Recognition (ICFHR 2010), IEEE, pp. 703-708.
View/Download from: Publisher's site
View description>>
The great success and high number of participants in pattern-recognition-related competitions in recent years show an important improvement in recognition and classification approaches. This success would be inconceivable without the availability of huge datasets of real-world data. We invited proposals for competitions to be held in the framework of the 12th International Conference on Frontiers in Handwriting Recognition (ICFHR 2010). These competitions aimed at evaluating the performance of algorithms and methods for a particular task of handwriting recognition. Eight different teams, each composed of more than one group, submitted proposals. The subjects of these proposals cover the field of handwriting recognition research from pre-processing through handwritten document analysis to handwriting text/word recognition. These competitions represent an overview of current research topics and frontiers in handwriting document analysis and recognition. Only 5 competitions received enough participants (we defined the threshold as 3 systems) to present their evaluation at ICFHR 2010. This paper presents the 8 competition proposals with lists of competition organizers and lists of participating systems and approaches. © 2010 IEEE.
Blumenstein, M, Ferrer, MA & Vargas, JF 2010, 'The 4NSigComp2010 Off-line Signature Verification Competition: Scenario 2', 2010 12th International Conference on Frontiers in Handwriting Recognition, 2010 International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, pp. 721-726.
View/Download from: Publisher's site
View description>>
The objective of this competition (4NSigComp2010) is to ascertain the performance of automatic off-line signature verifiers and to evaluate recent technology developments in the areas of document analysis and machine learning. The current paper focuses on the second scenario, which aims at performance evaluation of off-line signature verification systems on a newly-created large dataset that comprises genuine signatures, simulated signatures produced by unskilled imitators, and random signatures (genuine signatures from other writers). Ten systems were evaluated, and some interesting results are presented in terms of accuracy and execution time. The top-ranking system attained an overall error of 8.94%. This result interestingly correlates with the top-ranking accuracy achieved in a previous signature verification competition at ICDAR 2009. © 2010 IEEE.
Bremner, MJ 2010, 'Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy', 10th Asian Conference on Quantum Information Science, Tokyo, Japan.
View description>>
The 10th Asian Conference on Quantum Information Science (AQIS'10) is a meeting focused on quantum information science and technology. Since this is a new interdisciplinary field, its broad scope includes advances in various fields such as quantum physics, computer science, mathematics and information technologies. This event is the memorable tenth conference, building upon a successful series of EQIS'01-05 and AQIS'06-09 conferences. It will comprise tutorials and presentations by invited speakers, selected papers and posters. Areas of coverage include but are not limited to: quantum computation, algorithms and complexity; quantum information theory; quantum error-correction and fault-tolerance, thresholds; quantum cryptography; quantum communications experiments and theory; quantum optics, NMR and solid-state technologies; quantum processor and computer design; quantum programming languages and semantics. AQIS'10 will be held at the University of Tokyo, Tokyo, Japan.
Duan, R, Grassl, M, Ji, Z & Zeng, B 2010, 'Multi-Error-Correcting Amplitude Damping Codes', Proceedings of the 2010 IEEE International Symposium on Information Theory (ISIT 2010), IEEE, Austin, USA, June 2010, pp. 2672-2676.
View/Download from: Publisher's site
View description>>
We construct new families of multi-error-correcting quantum codes for the amplitude damping channel. Our key observation is that, with proper encoding, two uses of the amplitude damping channel simulate a quantum erasure channel. This allows us to use concatenated codes with quantum erasure-correcting codes as outer codes for correcting multiple amplitude damping errors. Our new codes are degenerate stabilizer codes and have parameters which are better than the amplitude damping codes obtained by any previously known construction.
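The observation above — that two channel uses simulate an erasure channel, so erasure-correcting outer codes can handle multiple errors — has a simple classical analogue. The sketch below is an illustration only, not the paper's quantum stabilizer construction: a single-erasure-correcting parity code of the kind usable as an outer code in concatenation.

```python
def encode_parity(data):
    """Append a parity bit so the codeword sums to 0 mod 2; any single
    erasure is then correctable (classical analogue of an erasure code)."""
    return data + [sum(data) % 2]

def correct_erasure(word):
    """Recover one erased position (marked None) from the parity constraint."""
    if None not in word:
        return list(word)
    i = word.index(None)
    word = list(word)
    # The erased bit equals the parity of all known bits (mod 2, -x == x).
    word[i] = sum(s for s in word if s is not None) % 2
    return word
```

A codeword with one symbol marked `None` is restored exactly; correcting more erasures would require a stronger outer code, mirroring the paper's use of higher-distance erasure-correcting codes.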
Duckham, M, Jeong, MH, Li, S & Renz, J 2010, 'Decentralized querying of topological relations between regions without using localization', Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, GIS '10: 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, San Jose, USA, pp. 414-417.
View/Download from: Publisher's site
View description>>
This paper proposes an efficient, decentralized algorithm for determining the topological relationship between two regions monitored by a geosensor network. Many centralized algorithms already exist for this purpose (used for example in spatial databases). However, these algorithms are not suited to decentralized spatial computing environments, like geosensor networks, which must operate without global knowledge of the system state and without centralized control. Unlike many existing decentralized spatial algorithms, the proposed algorithm is also able to operate in the absence of information about a node's coordinate location. This makes the algorithm suitable for applications of geosensor networks where GPS or other positioning systems are unavailable or unreliable. The algorithm approach is founded on the well-known 4-intersection model, using in-network data aggregation and spatial filtering (involving nodes only at some region boundaries). This ensures only a relatively small proportion of the network is involved in computation, thus increasing efficiency. Our analysis shows that while the overall communication complexity of the algorithm is O(n), the load balancing is optimal leading to a constant O(1) communication complexity for individual nodes. This expectation is confirmed with empirical investigation using simulation, which demonstrates the practical efficiency of the algorithm. © 2010 ACM.
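The 4-intersection model the algorithm is founded on classifies the relation between two regions by which of the four interior/boundary intersections are non-empty. A minimal centralized sketch of that classification (illustrative only; the paper's contribution is the decentralized, localization-free evaluation of this model over a geosensor network) could look like:

```python
def four_intersection(interior_a, boundary_a, interior_b, boundary_b):
    """4-intersection matrix as booleans (True = non-empty),
    in the order II, IB, BI, BB. Regions are given as sets of cell
    (or sensor-node) identifiers."""
    return (
        bool(interior_a & interior_b),
        bool(interior_a & boundary_b),
        bool(boundary_a & interior_b),
        bool(boundary_a & boundary_b),
    )

# Some of the eight region-region relations the model distinguishes.
RELATIONS = {
    (False, False, False, False): "disjoint",
    (False, False, False, True): "meet",
    (True, True, True, True): "overlap",
    (True, False, True, False): "inside",
    (True, True, False, False): "contains",
    (True, False, False, True): "equal",
}

def relation(interior_a, boundary_a, interior_b, boundary_b):
    key = four_intersection(interior_a, boundary_a, interior_b, boundary_b)
    return RELATIONS.get(key, "other")
```

The paper's spatial filtering corresponds to involving only the nodes whose cells lie on region boundaries when computing these intersections.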
Jain, R, Ji, Z, Upadhyay, S & Watrous, J 2010, 'QIP = PSPACE', Proceedings of the forty-second ACM symposium on Theory of computing, STOC'10: Symposium on Theory of Computing, ACM, pp. 573-581.
View/Download from: Publisher's site
View description>>
We prove that the complexity class QIP, which consists of all problems having quantum interactive proof systems, is contained in PSPACE. This containment is proved by applying a parallelized form of the matrix multiplicative weights update method to a class of semidefinite programs that captures the computational power of quantum interactive proofs. As the containment of PSPACE in QIP follows immediately from the well-known equality IP = PSPACE, the equality QIP = PSPACE follows. © 2010 ACM.
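The matrix multiplicative weights update method replaces the weight vector of the classical multiplicative-weights algorithm with a density matrix and the per-expert exponential with a matrix exponential. The scalar sketch below shows only the classical update rule that the matrix method generalizes (an illustration; not the parallelized SDP solver of the paper):

```python
import math

def multiplicative_weights(loss_rounds, eta=0.1):
    """Classical multiplicative-weights update over n 'experts'.
    Each round, weights are exponentially damped by the observed losses
    and renormalized; the matrix version does the same with a density
    matrix and a matrix exponential."""
    n = len(loss_rounds[0])
    weights = [1.0] * n
    for losses in loss_rounds:
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]  # keep a probability distribution
    return weights
```

After repeated rounds the distribution concentrates on low-loss experts, the property the parallelized matrix variant exploits to approximate the semidefinite programs in small space.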
Jansen, M, Qiao, Y & Sarma, J 2010, 'Deterministic Black-Box Identity Testing $π$-Ordered Algebraic Branching Programs', Leibniz International Proceedings in Informatics, LIPIcs, International Conference on Foundations of Software Technology and Theoretical Computer Science, Dagstuhl Publishing, Chennai, India, pp. 296-307.
View/Download from: Publisher's site
View description>>
In this paper we study algebraic branching programs (ABPs) with restrictions on the order and the number of reads of variables in the program. Given a permutation $\pi$ of $n$ variables, for a $\pi$-ordered ABP ($\pi$-OABP), for any directed path $p$ from source to sink, a variable can appear at most once on $p$, and the order in which variables appear on $p$ must respect $\pi$. An ABP $A$ is said to be of read $r$ if any variable appears at most $r$ times in $A$. Our main result pertains to the identity testing problem. Over any field $F$ and in the black-box model, i.e. given only query access to the polynomial, we have the following result: read-$r$ $\pi$-OABP computable polynomials can be tested in $\mathrm{DTIME}[2^{O(r\log r \cdot \log^2 n \log\log n)}]$. Our next set of results investigates the computational limitations of OABPs. It is shown that any OABP computing the determinant or permanent requires size $\Omega(2^n/n)$ and read $\Omega(2^n/n^2)$. We give a multilinear polynomial $p$ in $2n+1$ variables over some specifically selected field $G$, such that any OABP computing $p$ must read some variable at least $2^n$ times. We show that the elementary symmetric polynomial of degree $r$ in $n$ variables can be computed by a size-$O(rn)$ read-$r$ OABP, but not by a read-$(r-1)$ OABP, for any $0 < 2r-1 \leq n$. Finally, we give an example of a polynomial $p$ and two variable orders $\pi \neq \pi'$, such that $p$ can be computed by a read-once $\pi$-OABP, but where any $\pi'$-OABP computing $p$ must read some variable at least $2^n$ times.
Lane, C, Gal, Y, Browne, M, Short, A, Strauss, D, Tomlinson, R, Jackson, K, Tan, C & Blumenstein, M 2010, 'A new system for breakzone location and the measurement of breaking wave heights and periods.', 2010 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2010 - 2010 IEEE International Geoscience and Remote Sensing Symposium, IEEE, pp. 2234-2236.
View/Download from: Publisher's site
View description>>
This paper presents a new system for measuring breakzone locations, breaking wave heights and wave periods across the surfzone from a digital video sequence. The system (Wave Pack) aims to provide real-time measurement of breaking and re-breaking wave heights and wave periods using low-mounted video camera installations. Following on-site data collection and analysis, it was found that the Wave Pack system provides a low-cost, robust, reliable and accurate means of measuring continuous wave height and period from a low-elevation video camera aimed at the target beach under a wide range of wave conditions. These tests have verified the accuracy of Wave Pack in comparison to existing systems. © 2010 IEEE.
Lee, J, Blumenstein, M, Guan, H & Loo, Y-C 2010, 'Long-term Prediction of Bridge Element Performance Using Time Delay Neural Networks (TDNNs)', IABSE Symposium, Venice 2010: Large Structures and Infrastructures for Environmentally Constrained and Urbanised Areas, International Association for Bridge and Structural Engineering (IABSE), pp. 374-375.
View/Download from: Publisher's site
View description>>
A bridge is principally designed to have a long service life. However, due to a number of factors, it could fail prematurely and cause loss of human life. In order to ensure optimum bridge serviceability, systematic asset management is essential for effective decision-making on maintenance, repair and rehabilitation (MR&R). Systematic asset management can be achieved by a computer-based bridge management system (BMS). Successful BMS development requires a reliable bridge deterioration model, which is the most crucial component of a BMS. Historical condition ratings obtained from biennial bridge inspections are a major resource for predicting future bridge deterioration via BMSs. However, the historical condition ratings available from most bridge agencies are very limited, posing a major barrier to reliable prediction of future bridge performance.

This paper presents progressive research on the development of a reliable bridge deterioration model using advanced Artificial Intelligence (AI) techniques. The development is organised in three major steps: (1) generating unavailable past bridge element condition ratings using the Backward Prediction Model (BPM), which helps to provide sufficient historical deterioration patterns for each element; (2) predicting long-term condition ratings based on the outcome of Step 1 using Time Delay Neural Networks (TDNNs); and (3) improving the long-term prediction accuracy of Step 2 by employing Case-Based Reasoning (CBR). This paper mainly focuses on the first two steps of the research. Promising results are reported for the reliable long-term prediction of bridge element performance.
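The core idea of a TDNN-style predictor is to feed a sliding window of delayed past condition ratings into the model. Below is a minimal sketch of the windowing step, with a naive trend extrapolation standing in for the trained network (hypothetical helper names; not the paper's BPM/TDNN/CBR pipeline):

```python
def sliding_windows(series, delay):
    """Turn a condition-rating history into (window, next_value) pairs,
    the form in which a time-delay model consumes the data."""
    return [(series[i:i + delay], series[i + delay])
            for i in range(len(series) - delay)]

def predict_next(series, delay):
    """Naive stand-in for the trained network: extrapolate the average
    step observed inside the most recent window."""
    window = series[-delay:]
    steps = [b - a for a, b in zip(window, window[1:])]
    avg_step = sum(steps) / len(steps) if steps else 0.0
    return window[-1] + avg_step
```

In the actual research, the `sliding_windows` output would train a TDNN, with BPM-generated ratings filling in the missing historical records.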
Li, S & Liu, W 2010, 'Topological relations between convex regions', Proceedings of the National Conference on Artificial Intelligence, National Conference of the American Association for Artificial Intelligence, AAAI Press, Atlanta, Georgia, USA, pp. 321-326.
View description>>
Topological relations between spatial objects are the most important kind of qualitative spatial information. Dozens of relation models have been proposed in the past two decades. These models usually make a small number of distinctions and therefore can only cope with spatial information at a fixed granularity of spatial knowledge. In this paper, we propose a topological relation model in which the topological relation between two convex plane regions can be uniquely represented as a circular string over the alphabet {u, v, x, y}. A linear algorithm is given to compute the topological relation between two convex polygons. The infinite relation calculus could be used in hierarchical spatial reasoning as well as in qualitative shape description. Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
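The circular-string calculus above distinguishes relations at a much finer granularity; at the coarsest level, the basic question is whether two convex polygons intersect at all. A standard separating-axis check (an illustration only, not the paper's linear algorithm or its string representation) is:

```python
def _axes(poly):
    """Yield a normal for each edge of the polygon (candidate separating axes)."""
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        yield (y1 - y2, x2 - x1)

def _project(poly, axis):
    """Project all vertices onto the axis; return the interval covered."""
    ax, ay = axis
    dots = [x * ax + y * ay for x, y in poly]
    return min(dots), max(dots)

def convex_polygons_intersect(p, q):
    """Separating-axis test: two convex polygons are disjoint iff some
    edge normal of either polygon separates their projections."""
    for axis in list(_axes(p)) + list(_axes(q)):
        lo1, hi1 = _project(p, axis)
        lo2, hi2 = _project(q, axis)
        if hi1 < lo2 or hi2 < lo1:
            return False
    return True
```

Polygons that merely touch are reported as intersecting here; the paper's model would further distinguish such boundary contacts via the string over {u, v, x, y}.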
Nguyen, V & Blumenstein, M 2010, 'Techniques for static handwriting trajectory recovery', Proceedings of the 9th IAPR International Workshop on Document Analysis Systems (DAS '10), ACM, pp. 463-470.
View/Download from: Publisher's site
View description>>
On-line handwriting recognition systems are usually better than their off-line counterparts thanks to the availability of dynamic information such as stroke order, velocity, acceleration, and pressure. Whilst the exact values of velocity, acceleration and pressure are unlikely to be recoverable, the temporal order of the strokes, i.e. the pen trajectory, has proven more promising to recover. Published experimental results suggest that the recovered pen trajectory information actually improves off-line recognition accuracy. This paper presents an overview and discussion of the pen trajectory recovery methods developed to date. Copyright 2010 ACM.
Nguyen, V, Kawazoe, Y, Wakabayashi, T, Pal, U & Blumenstein, M 2010, 'Performance Analysis of the Gradient Feature and the Modified Direction Feature for Off-line Signature Verification', 2010 12th International Conference on Frontiers in Handwriting Recognition, 2010 International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, pp. 303-307.
View/Download from: Publisher's site
View description>>
Feature extraction is an important process in off-line signature verification. In this work, the performance of two feature extraction techniques, the Modified Direction Feature (MDF) and the gradient feature, is compared on the basis of similar experimental settings. In addition, the performance of Support Vector Machines (SVMs) and the squared Mahalanobis distance classifier, both employing the gradient feature, is also compared and reported. Without using forgeries for training, experimental results indicated that an average error rate as low as 15.03% could be obtained using the gradient feature and SVMs. © 2010 IEEE.
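A squared Mahalanobis distance classifier assigns a query feature vector to the class whose mean is closest under that class's covariance. A minimal sketch under a diagonal-covariance assumption (an illustration only, not the paper's exact classifier or features):

```python
def mahalanobis_sq(x, mean, var):
    """Squared Mahalanobis distance for a diagonal covariance: each
    squared deviation is scaled by the per-dimension variance.
    (Full-covariance versions invert the covariance matrix instead.)"""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))

def classify(x, class_stats):
    """Pick the class (name, mean, var) with the smallest distance."""
    return min(class_stats, key=lambda s: mahalanobis_sq(x, s[1], s[2]))[0]
```

In a verification setting, the distance to the genuine-signature class would be thresholded to accept or reject a query signature.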
Tomamichel, M, Renner, R, Schaffner, C & Smith, A 2010, 'Leftover Hashing against quantum side information', 2010 IEEE International Symposium on Information Theory, 2010 IEEE International Symposium on Information Theory - ISIT, IEEE, Austin, USA, pp. 2703-2707.
View/Download from: Publisher's site
View description>>
The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, we prove a (strictly) more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system. Furthermore, our result applies to arbitrary δ-almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing. © 2010 IEEE.
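A δ-almost two-universal family guarantees that any two distinct inputs collide with probability at most δ over the choice of hash function; the classic Carter-Wegman construction achieves roughly δ ≈ 1/m. A sketch of sampling from that family (a classical illustration; the quantum side information central to the paper is not modelled here):

```python
import random

def carter_wegman(p=2**61 - 1, m=2**16, rng=random):
    """Sample h(x) = ((a*x + b) mod p) mod m from the Carter-Wegman
    two-universal family (p prime, 1 <= a < p, 0 <= b < p)."""
    a = rng.randrange(1, p)
    b = rng.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m
```

Applied to a high-entropy input, the m-valued output is close to uniform, which is exactly what the (generalized) Leftover Hash Lemma certifies even against a quantum adversary.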
Tuxworth, G, Meedeniya, A & Blumenstein, M 2010, 'Segmentation of Inter-neurons in Three Dimensional Brain Imagery', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Berlin Heidelberg, pp. 145-152.
View/Download from: Publisher's site
View description>>
Segmentation of neural cells in three dimensional fluorescence microscopy images is a challenging image processing problem. In addition to being important to neurobiologists, accurate segmentation is a vital component of an automated image processing system. Due to the complexity of the data, particularly the extreme irregularity in neural cell shape, generic segmentation techniques do not perform well. This paper presents a novel segmentation technique for segmenting neural cells in three dimensional images. Accuracy rates of over 90% are reported on a data set of 100 images containing over 130 neural cells and subsequently validated using a novel data set of 64 neurons. © 2010 Springer-Verlag.
Wilde, MM & Hsieh, M 2010, 'Trading Resources in Quantum Communication', The 10th Asian Conference on Quantum Information Science (AQIS10), Tokyo, Japan.