Arefin, AS, Mathieson, L, Johnstone, D, Berretta, R & Moscato, P 2012, 'Unveiling Clusters of RNA Transcript Pairs Associated with Markers of Alzheimer’s Disease Progression', PLoS ONE, vol. 7, no. 9, pp. e45535-e45535.
View/Download from: Publisher's site
View description>>
Background: One primary goal of transcriptomic studies is identifying gene expression patterns correlating with disease progression. This is usually achieved by considering transcripts that independently pass an arbitrary threshold (e.g. p<0.05). In diseases involving severe perturbations of multiple molecular systems, such as Alzheimer's disease (AD), this univariate approach often results in a large list of seemingly unrelated transcripts. We utilised a powerful multivariate clustering approach to identify clusters of RNA biomarkers strongly associated with markers of AD progression. We discuss the value of considering pairs of transcripts which, in contrast to individual transcripts, help avoid natural human transcriptome variation that can overshadow disease-related changes. Methodology/Principal Findings: We re-analysed a dataset of hippocampal transcript levels in nine controls and 22 patients with varying degrees of AD. A large-scale clustering approach determined groups of transcript probe sets that correlate strongly with measures of AD progression, including both clinical and neuropathological measures and quantifiers of the characteristic transcriptome shift from control to severe AD. This enabled identification of restricted groups of highly correlated probe sets from an initial list of 1,372 previously published by our group. We repeated this analysis on an expanded dataset that included all pair-wise combinations of the 1,372 probe sets. As clustering of this massive dataset is unfeasible using standard computational tools, we adapted and re-implemented a clustering algorithm that uses an external-memory algorithmic approach. This identified various pairs that strongly correlated with markers of AD progression and highlighted important biological pathways potentially involved in AD pathogenesis. Conclusions/Significance: Our analyses demonstrate that, although there exists a relatively large molecular signature of AD progression, only a smal...
Bogdanov, A & Qiao, Y 2012, 'On the security of Goldreich’s one-way function', computational complexity, vol. 21, no. 1, pp. 83-127.
View/Download from: Publisher's site
View description>>
Goldreich (ECCC 2000) suggested a simple construction of a candidate one-way function f: {0,1}^n → {0,1}^m where each bit of output is a fixed predicate P of a constant number d of (random) input bits. We investigate the security of this construction in the regime m = Dn, where D(d) is a sufficiently large constant. We prove that for any predicate P that correlates with either one or two of its inputs, f can be inverted with high probability. We also prove an amplification claim regarding Goldreich's construction. Suppose we are given an assignment x′ ∈ {0,1}^n that has correlation ε > 0 with the hidden assignment x ∈ {0,1}^n. Then, given access to x′, it is possible to invert f on x with high probability, provided D = D(d,ε) is sufficiently large. © 2012 Springer Basel AG.
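The construction described above is simple enough to sketch directly. The snippet below is an illustrative Python sketch, not the paper's code: the predicate `p3` and all parameter values are hypothetical choices. Each output bit applies a fixed predicate to d randomly chosen (but then fixed) input positions:

```python
import random

def goldreich_owf(n, m, d, predicate, seed=0):
    """Goldreich-style candidate one-way function f: {0,1}^n -> {0,1}^m.

    Each of the m output bits applies the fixed predicate to a d-tuple of
    input positions chosen once at construction time (the 'graph').
    """
    rng = random.Random(seed)
    taps = [tuple(rng.randrange(n) for _ in range(d)) for _ in range(m)]

    def f(x):
        assert len(x) == n
        return [predicate(*(x[i] for i in t)) for t in taps]

    return f

# A hypothetical nonlinear predicate on d = 3 bits (for illustration only).
def p3(a, b, c):
    return (a ^ b) ^ (a & c)

f = goldreich_owf(n=16, m=48, d=3, predicate=p3)
x = [random.Random(1).randrange(2) for _ in range(16)]
y = f(x)  # 48 output bits, deterministic for a fixed seed
```

Inverting such an f is the hard direction; the paper's attacks exploit predicates that correlate with one or two of their inputs.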
Briët, J, Buhrman, H, Lee, T & Vidick, T 2012, 'All Schatten spaces endowed with the Schur product are Q-algebras', Journal of Functional Analysis, vol. 262, no. 1, pp. 1-9.
View/Download from: Publisher's site
Bu, GP, Lee, JH, Guan, H, Loo, YC & Blumenstein, M 2012, 'Long-Term Performance of Bridge Elements Using Integrated Deterioration Method Incorporating Elman Neural Network', Applied Mechanics and Materials, vol. 204-208, pp. 1980-1987.
View/Download from: Publisher's site
View description>>
Currently, probabilistic deterioration modeling techniques have been employed in most state-of-the-art Bridge Management Systems (BMSs) to predict future bridge condition ratings. As confirmed by many researchers, the reliability of probabilistic deterioration models relies heavily on a sufficient amount of condition data together with well-distributed historical deterioration patterns over time. However, inspection records are usually insufficient in most bridge agencies. As a result, a typical standalone probabilistic model (e.g. state-based or time-based model) is not promising for forecasting reliable long-term bridge performance. To minimise the shortcomings of lacking condition data, an integrated method using a combination of state- and time-based techniques has recently been developed and has demonstrated an improved performance as compared to the standalone techniques. However, certain shortcomings still remain in the integrated method which necessitate further improvement. In this study, the core component of the state-based modeling is replaced by an Elman Neural Network (ENN). The integrated method incorporating ENN is more effective in predicting long-term bridge performance as compared to the typical deterioration modeling techniques. As part of comprehensive case studies, this paper presents the deterioration prediction of 52 bridge elements with material types of Steel (S), Timber (T) and Other (O). These elements are selected from 94 bridges (totaling 4,115 inspection records). The enhanced reliability of the proposed integrated method incorporating ENN is confirmed.
Chen, J, Dawkins, H, Ji, Z, Johnston, N, Kribs, D, Shultz, F & Zeng, B 2012, 'Uniqueness of Quantum States Compatible with Given Measurement Results', Phys. Rev. A, vol. 88, no. 1, p. 012109.
View/Download from: Publisher's site
View description>>
We discuss the uniqueness of quantum states compatible with given results for measuring a set of observables. For a given pure state, we consider two different types of uniqueness: (1) no other pure state is compatible with the same measurement results and (2) no other state, pure or mixed, is compatible with the same measurement results. For case (1), it is known that for a d-dimensional Hilbert space, there exists a set of 4d-5 observables that uniquely determines any pure state. We show that for case (2), 5d-7 observables suffice to uniquely determine any pure state. Thus there is a gap between the results for (1) and (2), and we give some examples to illustrate this. The case of observables corresponding to reduced density matrices (RDMs) of a multipartite system is also discussed, where we improve known bounds on local dimensions for case (2) in which almost all pure states are uniquely determined by their RDMs. We further discuss circumstances where (1) can imply (2). We use convexity of the numerical range of operators to show that when only two observables are measured, (1) always implies (2). More generally, if there is a compact group of symmetries of the state space which has the span of the observables measured as the set of fixed points, then (1) implies (2). We analyze the possible dimensions for the span of such observables. Our results extend naturally to the case of low rank quantum states.
Chen, J, Ji, Z, Kribs, DW, Zeng, B & Zhang, F 2012, 'Minimum Entangling Power is Close to Its Maximum', Journal of Physics A: Mathematical and Theoretical, vol. 52, no. 21.
View/Download from: Publisher's site
View description>>
Given a quantum gate $U$ acting on a bipartite quantum system, its maximum (average, minimum) entangling power is the maximum (average, minimum) entanglement generation with respect to a certain entanglement measure when the inputs are restricted to be product states. In this paper, we mainly focus on the 'weakest' one, i.e., the minimum entangling power, among all these entangling powers. We show that, by choosing the von Neumann entropy of the reduced density operator or the Schmidt rank as entanglement measure, even the 'weakest' entangling power is generically very close to its maximal possible entanglement generation. In other words, maximum, average and minimum entangling powers are generically close. We then study minimum entangling power with respect to other Lipschitz-continuous entanglement measures and generalize our results to multipartite quantum systems. As a straightforward application, a random quantum gate will almost surely be an intrinsically fault-tolerant entangling device that will always transform every low-entangled state to a near-maximally entangled state.
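As a rough numerical companion to the minimum entangling power, the following NumPy sketch (an illustrative Monte Carlo estimate under assumed helper names, not the paper's analytic argument) samples random product inputs to a bipartite gate and records the smallest output entanglement, measured by the von Neumann entropy of the reduced state via the Schmidt coefficients:

```python
import numpy as np

def min_entangling_power_sample(U, dA, dB, trials=200, seed=0):
    """Monte Carlo lower-envelope estimate of a gate's minimum entangling
    power: minimum over random product inputs of the entanglement entropy
    of the output state. U is a (dA*dB) x (dA*dB) unitary matrix."""
    rng = np.random.default_rng(seed)

    def rand_state(d):
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        return v / np.linalg.norm(v)

    best = np.inf
    for _ in range(trials):
        psi = np.kron(rand_state(dA), rand_state(dB))  # product input
        out = (U @ psi).reshape(dA, dB)
        s = np.linalg.svd(out, compute_uv=False)       # Schmidt coefficients
        p = s ** 2
        p = p[p > 1e-12]
        best = min(best, float(-np.sum(p * np.log2(p))))
    return best
```

For example, the SWAP gate maps every product state to a product state, so the sampled minimum (and indeed its minimum entangling power) is zero; a generic random unitary, by the paper's result, would instead yield values close to maximal.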
Chen, J, Ji, Z, Ruskai, MB, Zeng, B & Zhou, D-L 2012, 'Comment on some results of Erdahl and the convex structure of reduced density matrices', JOURNAL OF MATHEMATICAL PHYSICS, vol. 53, no. 7.
View/Download from: Publisher's site
View description>>
In J. Math. Phys. 13, 1608-1621 (1972), Erdahl considered the convex structure of the set of $N$-representable 2-body reduced density matrices in the case of fermions. Some of these results have a straightforward extension to the $m$-body setting and to the more general quantum marginal problem. We describe these extensions, but cannot resolve a problem in the proof of Erdahl's claim that every extreme point is exposed in finite dimensions. Nevertheless, we can show that when $2m \geq N$ every extreme point of the set of $N$-representable $m$-body reduced density matrices has a unique pre-image in both the symmetric and anti-symmetric setting. Moreover, this extends to the quantum marginal setting for a pair of complementary $m$-body and $(N-m)$-body reduced density matrices.
Datta, N, Hsieh, M-H, Wilde, MM & Winter, A 2012, 'Quantum-to-classical rate distortion coding', Journal of Mathematical Physics, vol. 54, no. 4, pp. 042201-17.
View/Download from: Publisher's site
View description>>
We establish a theory of quantum-to-classical rate distortion coding. In this setting, a sender Alice has many copies of a quantum information source. Her goal is to transmit classical information about the source, obtained by performing a measurement on it, to a receiver Bob, up to some specified level of distortion. We derive a single-letter formula for the minimum rate of classical communication needed for this task. We also evaluate this rate in the case in which Bob has some quantum side information about the source. Our results imply that, in general, Alice's best strategy is a non-classical one, in which she performs a collective measurement on successive outputs of the source.
Devitt, SJ, Stephens, AM, Munro, WJ & Nemoto, K 2012, 'Requirements for fault-tolerant factoring on an atom-optics quantum computer', Nature Communications, vol. 4, p. 2524.
View/Download from: Publisher's site
View description>>
Quantum information processing and its associated technologies have reached an interesting and timely stage in their development where many different experiments have been performed establishing the basic building blocks. The challenge moving forward is to scale up to larger sized quantum machines capable of performing tasks not possible today. This raises a number of interesting questions: How big will these machines need to be? How many resources will they consume? These questions need to be urgently addressed. Here we estimate the resources required to execute Shor's factoring algorithm on a distributed atom-optics quantum computer architecture. We determine the runtime and requisite size of the quantum computer as a function of the problem size and physical error rate. Our results suggest that once experimental accuracy reaches levels below the fault-tolerant threshold, further optimisation of computational performance and resources is largely an issue of how the algorithm and circuits are implemented, rather than the physical quantum hardware.
Dupuis, F, Szehr, O & Tomamichel, M 2012, 'A decoupling approach to classical data transmission over quantum channels', IEEE Trans. on Inf. Theory, vol. 60, no. 3, pp. 1562-1572.
View/Download from: Publisher's site
View description>>
Most coding theorems in quantum Shannon theory can be proven using the decoupling technique: to send data through a channel, one guarantees that the environment gets no information about it; Uhlmann's theorem then ensures that the receiver must be able to decode. While a wide range of problems can be solved this way, one of the most basic coding problems remains impervious to a direct application of this method: sending classical information through a quantum channel. We will show that this problem can, in fact, be solved using decoupling ideas, specifically by proving a 'dequantizing' theorem, which ensures that the environment is only classically correlated with the sent data. Our techniques naturally yield a generalization of the Holevo-Schumacher-Westmoreland Theorem to the one-shot scenario, where a quantum channel can be applied only once.
England, DG, Michelberger, PS, Champion, TFM, Reim, KF, Lee, KC, Sprague, MR, Jin, X-M, Langford, NK, Kolthammer, WS, Nunn, J & Walmsley, IA 2012, 'High-fidelity polarization storage in a gigahertz bandwidth quantum memory', Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 45, no. 12, pp. 124008-124008.
View/Download from: Publisher's site
Feng, Y, Duan, R & Ying, M 2012, 'Bisimulation for Quantum Processes', ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS, vol. 34, no. 4, pp. 1-43.
View/Download from: Publisher's site
View description>>
Quantum cryptographic systems have been commercially available, with a striking advantage over classical systems that their security and ability to detect the presence of eavesdropping are provable based on the principles of quantum mechanics. On the oth
Granade, CE, Ferrie, C, Wiebe, N & Cory, DG 2012, 'Robust online Hamiltonian learning', New Journal of Physics, vol. 14, no. 10, pp. 103013-103013.
View/Download from: Publisher's site
Horsman, D, Fowler, AG, Devitt, S & Meter, RV 2012, 'Surface code quantum computing by lattice surgery', New Journal of Physics, vol. 14, no. 12, pp. 123011-123011.
View/Download from: Publisher's site
View description>>
In recent years, surface codes have become a leading method for quantum error correction in theoretical large-scale computational and communications architecture designs. Their comparatively high fault-tolerant thresholds and their natural two-dimensional nearest-neighbour (2DNN) structure make them an obvious choice for large scale designs in experimentally realistic systems. While fundamentally based on the toric code of Kitaev, there are many variants, two of which are the planar- and defect-based codes. Planar codes require fewer qubits to implement (for the same strength of error correction), but are restricted to encoding a single qubit of information. Interactions between encoded qubits are achieved via transversal operations, thus destroying the inherent 2DNN nature of the code. In this paper we introduce a new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2DNN of the encoded computer. Our lattice surgery technique comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design, and hence reduces the overall qubit resources for logic operations. Those resources are further reduced by the use of a rotated lattice for the planar encoding. We show how lattice surgery allows us to distribute encoded GHZ states in a more direct (and overhead friendly) manner, and how a demonstration of an encoded CNOT between two distance-3 logical states is possible with 53 physical qubits, half of that required in any other known construction in 2D.
Kaniewski, J, Tomamichel, M, Hänggi, E & Wehner, S 2012, 'Secure bit commitment from relativistic constraints', IEEE Trans. on Inf. Theory, vol. 59, no. 7, pp. 4687-4699.
View/Download from: Publisher's site
View description>>
We investigate two-party cryptographic protocols that are secure under assumptions motivated by physics, namely relativistic assumptions (no-signalling) and quantum mechanics. In particular, we discuss the security of bit commitment in so-called split models, i.e. models in which at least some of the parties are not allowed to communicate during certain phases of the protocol. We find the minimal splits that are necessary to evade the Mayers-Lo-Chau no-go argument and present protocols that achieve security in these split models. Furthermore, we introduce the notion of local versus global command, a subtle issue that arises when the split committer is required to delegate non-communicating agents to open the commitment. We argue that classical protocols are insecure under global command in the split model we consider. On the other hand, we provide a rigorous security proof in the global command model for Kent's quantum protocol [Kent 2011, Unconditionally Secure Bit Commitment by Transmitting Measurement Outcomes]. The proof employs two fundamental principles of modern physics, the no-signalling property of relativity and the uncertainty principle of quantum mechanics.
Lee, J, Guan, H, Loo, Y-C & Blumenstein, M 2012, 'Refinement of Backward Prediction Method for Reliable Artificial Intelligence-Based Bridge Deterioration Modelling', Advances in Structural Engineering, vol. 15, no. 5, pp. 825-836.
View/Download from: Publisher's site
View description>>
A deterioration model is the most critical component of a Bridge Management System (BMS). An Artificial Intelligence (AI)-based bridge deterioration model has recently been developed to minimise uncertainties in predicting long-term performance of bridge structural elements. This model contains two components: (1) using a Neural Network-based Backward Prediction Model (BPM) to generate unavailable historical condition ratings; and (2) using a Time Delay Neural Network (TDNN) to perform long-term performance prediction of bridge structural elements. However, new problems have emerged in the process of TDNN prediction. In this study, the BPM-generated condition ratings are used together with the actual overall condition ratings. The incompatibility between the two sets of data produces unreliable prediction outcomes during the TDNN process. This research therefore aims to introduce a new data processing procedure for BPM outcomes, by removing meaningless condition ratings that cause poor training outcomes for long-term prediction using TDNN. Consequently, the outcome of this study can improve the accuracy of the current AI-based bridge deterioration model.
Lee, KC, Sussman, BJ, Sprague, MR, Michelberger, P, Reim, KF, Nunn, J, Langford, NK, Bustard, PJ, Jaksch, D & Walmsley, IA 2012, 'Macroscopic non-classical states and terahertz quantum processing in room-temperature diamond', Nature Photonics, vol. 6, no. 1, pp. 41-44.
View/Download from: Publisher's site
Lee, T, Magniez, F & Santha, M 2012, 'Improved Quantum Query Algorithms for Triangle Finding and Associativity Testing', Algorithmica, vol. 77, no. 2, pp. 459-486.
View/Download from: Publisher's site
View description>>
We show that the quantum query complexity of detecting if an $n$-vertex graph contains a triangle is $O(n^{9/7})$. This improves the previous best algorithm of Belovs making $O(n^{35/27})$ queries. For the problem of determining if an operation $\circ : S \times S \rightarrow S$ is associative, we give an algorithm making $O(|S|^{10/7})$ queries, the first improvement to the trivial $O(|S|^{3/2})$ application of Grover search. Our algorithms are designed using the learning graph framework of Belovs. We give a family of algorithms for detecting constant-sized subgraphs, which can possibly be directed and colored. These algorithms are designed in a simple high-level language; our main theorem shows how this high-level language can be compiled as a learning graph and gives the resulting complexity. The key idea to our improvements is to allow more freedom in the parameters of the database kept by the algorithm. As in our previous work, the edge slots maintained in the database are specified by a graph whose edges are the union of regular bipartite graphs, the overall structure of which mimics that of the graph of the certificate. By allowing these bipartite graphs to be unbalanced and of variable degree we obtain better algorithms.
Lim, CCW, Portmann, C, Tomamichel, M, Renner, R & Gisin, N 2012, 'Device-Independent Quantum Key Distribution with Local Bell Test', Phys. Rev. X, vol. 3, no. 3, pp. 031006-11.
View/Download from: Publisher's site
View description>>
Device-independent quantum key distribution (DIQKD) in its current design requires a violation of Bell's inequality between two honest parties, Alice and Bob, who are connected by a quantum channel. However, in reality, quantum channels are lossy, and this can be exploited for attacks based on the detection loophole. Here, we propose a novel approach to DIQKD that overcomes this limitation. In particular, based on a combination of an entropic uncertainty relation and the Clauser-Horne-Shimony-Holt (CHSH) test, we design a DIQKD protocol where the CHSH test is carried out entirely in Alice's laboratory. Thus the loophole caused by channel losses is avoided.
Lu, C, Chen, J & Duan, R 2012, 'Some bounds on the minimum number of queries required for quantum channel perfect discrimination', Quantum Information and Computation, vol. 12, no. 1-2, pp. 138-148.
View description>>
We prove a lower bound on the q-maximal fidelities between two quantum channels ε0 and ε1 and an upper bound on the q-maximal fidelities between a quantum channel ε and an identity I. We then apply these two bounds to provide a simple necessary and sufficient condition for sequential perfect distinguishability between ε and I, and provide both a lower bound and an upper bound on the minimum number of queries required to sequentially perfectly discriminate ε and I. Interestingly, in the 2-dimensional case, both bounds coincide. Based on the optimal perfect discrimination protocol presented in [20], we further generalize the lower bound and upper bound to the minimum number of queries needed to perfectly discriminate ε and I over all possible discrimination schemes. Finally, the two lower bounds are shown to remain valid for perfectly discriminating two general quantum channels ε0 and ε1 in the sequential scheme and over all possible discrimination schemes, respectively. © Rinton Press.
Mathieson, L & Szeider, S 2012, 'Editing graphs to satisfy degree constraints: A parameterized approach', Journal of Computer and System Sciences, vol. 78, no. 1, pp. 179-191.
View/Download from: Publisher's site
View description>>
We study a wide class of graph editing problems that ask whether a given graph can be modified to satisfy certain degree constraints, using a limited number of vertex deletions, edge deletions, or edge additions. The problems generalize several well-studied problems such as the General Factor Problem and the Regular Subgraph Problem. We classify the parameterized complexity of the considered problems taking upper bounds on the number of editing steps and the maximum degree of the resulting graph as parameters. © 2011 Elsevier Inc. All rights reserved.
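To make the editing problems concrete, here is a small illustrative brute force (deliberately naive; the paper's contribution is classifying parameterized, i.e. FPT, complexity, not this search) for one special case: deciding whether deleting at most k edges leaves every vertex with degree exactly r:

```python
from itertools import combinations

def can_edit_to_regular(edges, n, r, k):
    """Brute-force check: can at most k edge deletions make the graph on
    vertices 0..n-1 exactly r-regular? Tries every subset of up to k edges."""
    edges = list(edges)
    for t in range(k + 1):
        for removed in combinations(range(len(edges)), t):
            deg = [0] * n
            for i, (u, v) in enumerate(edges):
                if i not in removed:
                    deg[u] += 1
                    deg[v] += 1
            if all(d == r for d in deg):
                return True
    return False
```

For instance, the complete graph K4 becomes 2-regular after deleting a perfect matching (two edges), but no single deletion suffices. The runtime is exponential in k, which is exactly the kind of dependence the parameterized analysis in the paper pins down.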
Ying, M, Feng, Y, Duan, R, Li, Y & Yu, N 2012, 'Quantum programming: From theories to implementations', CHINESE SCIENCE BULLETIN, vol. 57, no. 16, pp. 1903-1909.
View/Download from: Publisher's site
View description>>
This paper surveys the new field of programming methodology and techniques for future quantum computers, including design of sequential and concurrent quantum programming languages, their semantics and implementations. Several verification methods for qu
Moroder, T, Curty, M, Lim, CCW, Thinh, LP, Zbinden, H & Gisin, N 2012, 'Security of distributed-phase-reference quantum key distribution', Phys. Rev. Lett., vol. 109, p. 260501.
View description>>
Distributed-phase-reference quantum key distribution stands out for its easy implementation with present day technology. For many years, a full security proof of these schemes in a realistic setting has been elusive. For the first time, we solve this long-standing problem and present a generic method to prove the security of such protocols against general attacks. To illustrate our result we provide lower bounds on the key generation rate of a variant of the coherent-one-way quantum key distribution protocol. In contrast to standard predictions, it appears to scale quadratically with the system transmittance.
Munro, WJ, Stephens, AM, Devitt, SJ, Harrison, KA & Nemoto, K 2012, 'Quantum communication without the necessity of quantum memories', Nature Photonics, vol. 6, no. 11, pp. 777-781.
View/Download from: Publisher's site
Qiao, Y-M, Sarma, JMN & Tang, B-S 2012, 'On Isomorphism Testing of Groups with Normal Hall Subgroups', Journal of Computer Science and Technology, vol. 27, no. 4, pp. 687-701.
View/Download from: Publisher's site
View description>>
A normal Hall subgroup N of a group G is a normal subgroup whose order is coprime with its index. The Schur-Zassenhaus theorem states that every normal Hall subgroup has a complement subgroup, that is, a set of coset representatives H which also forms a subgroup of G. In this paper, we present a framework to test isomorphism of groups with at least one normal Hall subgroup, when groups are given as multiplication tables. To establish the framework, we first observe that a proof of the Schur-Zassenhaus theorem is constructive, and formulate a necessary and sufficient condition for testing isomorphism in terms of the associated actions of the semidirect products, and isomorphisms of the normal parts and complement parts. We then focus on the case when the normal subgroup is abelian. Utilizing basic facts of representation theory of finite groups and a technique by Le Gall (STACS 2009), we first get an efficient isomorphism testing algorithm when the complement has a bounded number of generators. For the case when the complement subgroup is elementary abelian, which does not necessarily have a bounded number of generators, we obtain a polynomial time isomorphism testing algorithm by reducing to the generalized code isomorphism problem, which asks whether two linear subspaces are the same up to permutation of coordinates. A solution to the latter can be obtained by a mild extension of the singly exponential (in the number of coordinates) time algorithm for the code isomorphism problem developed recently by Babai et al. (SODA 2011). En route to obtaining the above reduction, we study the following computational problem in representation theory of finite groups: given two representations ρ and τ of a group H over Z_p^d, p a prime, determine whether there exists an automorphism φ : H → H such that the induced representation ρ^φ = ρ ∘ φ and τ are equivalent, in time poly(|H|, p^d). © 2012 Springer Science+Business Media, LLC & Science Press, China.
Ramelow, S, Fedrizzi, A, Poppe, A, Langford, NK & Zeilinger, A 2012, 'Polarization-entanglement-conserving frequency conversion of photons', Physical Review A, vol. 85, no. 1.
View/Download from: Publisher's site
Reim, KF, Nunn, J, Jin, X-M, Michelberger, PS, Champion, TFM, England, DG, Lee, KC, Kolthammer, WS, Langford, NK & Walmsley, IA 2012, 'Multipulse Addressing of a Raman Quantum Memory: Configurable Beam Splitting and Efficient Readout', Physical Review Letters, vol. 108, no. 26.
View/Download from: Publisher's site
Salter, PS, Jesacher, A, Spring, JB, Metcalf, BJ, Thomas-Peter, N, Simmonds, RD, Langford, NK, Walmsley, IA & Booth, MJ 2012, 'Adaptive slit beam shaping for direct laser written waveguides', Optics Letters, vol. 37, no. 4, pp. 470-470.
View/Download from: Publisher's site
Somma, RD, Nagaj, D & Kieferová, M 2012, 'Quantum Speedup by Quantum Annealing', Physical Review Letters, vol. 109, no. 5, pp. 050501-050501.
View/Download from: Publisher's site
View description>>
We study the glued-trees problem from A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. Spielman, in Proceedings of the 35th Annual ACM Symposium on Theory of Computing (ACM, San Diego, CA, 2003), p. 59. in the adiabatic model of quantum computing and provide an annealing schedule to solve an oracular problem exponentially faster than classically possible. The Hamiltonians involved in the quantum annealing do not suffer from the so-called sign problem. Unlike the typical scenario, our schedule is efficient even though the minimum energy gap of the Hamiltonians is exponentially small in the problem size. We discuss generalizations based on initial-state randomization to avoid some slowdowns in adiabatic quantum computing due to small gaps.
THINH, LP, SHERIDAN, L & SCARANI, V 2012, 'TOMOGRAPHIC QUANTUM CRYPTOGRAPHY PROTOCOLS ARE REFERENCE FRAME INDEPENDENT', International Journal of Quantum Information, vol. 10, no. 03, pp. 1250035-1250035.
View/Download from: Publisher's site
View description>>
We consider the class of reference frame independent protocols in d dimensions for quantum key distribution, in which Alice and Bob have one natural basis that is aligned and the rest of their measurement bases are unaligned. We relate existing approaches to tomographically complete protocols. We comment on two different approaches to finite key bounds in this setting, one direct and one using the entropic uncertainty relation and suggest that the existing finite key bounds can still be improved.
Tomamichel, M & Hayashi, M 2012, 'A Hierarchy of Information Quantities for Finite Block Length Analysis of Quantum Tasks', IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7693-7710.
View/Download from: Publisher's site
View description>>
We consider two fundamental tasks in quantum information theory, data compression with quantum side information as well as randomness extraction against quantum side information. We characterize these tasks for general sources using so-called one-shot entropies. We show that these characterizations - in contrast to earlier results - enable us to derive tight second order asymptotics for these tasks in the i.i.d. limit. More generally, our derivation establishes a hierarchy of information quantities that can be used to investigate information theoretic tasks in the quantum domain: The one-shot entropies most accurately describe an operational quantity, yet they tend to be difficult to calculate for large systems. We show that they asymptotically agree up to logarithmic terms with entropies related to the quantum and classical information spectrum, which are easier to calculate in the i.i.d. limit. Our techniques also naturally yield bounds on operational quantities for finite block lengths.
Tomamichel, M & Tan, VYF 2012, 'A Tight Upper Bound for the Third-Order Asymptotics for Most Discrete Memoryless Channels', IEEE Transactions on Information Theory, vol. 59, no. 11, pp. 7041-7051.
View/Download from: Publisher's site
View description>>
This paper shows that the logarithm of the epsilon-error capacity (average error probability) for n uses of a discrete memoryless channel is upper bounded by the normal approximation plus a third-order term that does not exceed 1/2 log n + O(1) if the epsilon-dispersion of the channel is positive. This matches a lower bound by Y. Polyanskiy (2010) for discrete memoryless channels with positive reverse dispersion. If the epsilon-dispersion vanishes, the logarithm of the epsilon-error capacity is upper bounded by n times the capacity plus a constant term, except for a small class of DMCs and epsilon >= 1/2.
Tomamichel, M, Fehr, S, Kaniewski, J & Wehner, S 2012, 'A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography', New J. Phys., vol. 15, p. 103002.
View/Download from: Publisher's site
View description>>
We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. First, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device-independent. Second, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.
Veitch, V, Ferrie, C, Gross, D & Emerson, J 2012, 'Negative quasi-probability as a resource for quantum computation', New Journal of Physics, vol. 14, no. 11, pp. 113011-113011.
View/Download from: Publisher's site
View description>>
A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. © IOP Publishing and Deutsche Physikalische Gesellschaft.
Veitch, V, Wiebe, N, Ferrie, C & Emerson, J 2012, 'Efficient simulation scheme for a class of quantum optics experiments with non-negative Wigner representation', New Journal of Physics, vol. 15, p. 013037.
View/Download from: Publisher's site
View description>>
We provide a scheme for efficient simulation of a broad class of quantum optics experiments. Our efficient simulation extends the continuous variable Gottesman-Knill theorem to a large class of non-Gaussian mixed states, thereby identifying that these non-Gaussian states are not an enabling resource for exponential quantum speed-up. Our results also provide an operationally motivated interpretation of negativity as non-classicality. We apply our scheme to the case of noisy single-photon-added thermal states to show that this class admits states with positive Wigner function but negative P-function that are not useful resource states for quantum computation.
Vitanov, A, Dupuis, F, Tomamichel, M & Renner, R 2012, 'Chain Rules for Smooth Min- and Max-Entropies', IEEE Transactions on Information Theory, vol. 59, no. 5, pp. 2603-2612.
View/Download from: Publisher's site
View description>>
The chain rule for the Shannon and von Neumann entropy, which relates the total entropy of a system to the entropies of its parts, is of central importance to information theory. Here we consider the chain rule for the more general smooth min- and max-entropy, used in one-shot information theory. For these entropy measures, the chain rule no longer holds as an equality, but manifests itself as a set of inequalities that reduce to the chain rule for the von Neumann entropy in the i.i.d. case.
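For reference, the von Neumann chain rule that these inequalities recover in the i.i.d. limit is a standard identity; the smooth-entropy line below is only a schematic sketch (exact smoothing parameters and correction terms are in the paper, not reproduced here):

```latex
% Chain rule for the von Neumann entropy of a bipartite state rho_{AB}:
H(AB) = H(A) + H(B|A).
% For the smooth min-entropy, the equality relaxes to inequalities of the
% schematic form (smoothing parameters and log-correction terms omitted):
H_{\min}^{\varepsilon}(AB) \;\gtrsim\; H_{\min}^{\varepsilon'}(A|B) + H_{\min}^{\varepsilon''}(B),
% which collapse back to the equality above for i.i.d. states via the
% asymptotic equipartition property.
```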
Wilde, MM & Hsieh, M-H 2012, 'Erratum to: The quantum dynamic capacity formula of a quantum channel; Public and private resource trade-offs for a quantum channel', Quantum Information Processing, vol. 11, no. 6, pp. 1503-1509.
View/Download from: Publisher's site
Wilde, MM, Datta, N, Hsieh, M-H & Winter, A 2012, 'Quantum rate distortion coding with auxiliary resources', IEEE Transactions on Information Theory, vol. 59, no. 10, pp. 6755-6773.
View/Download from: Publisher's site
View description>>
We extend quantum rate distortion theory by considering auxiliary resources that might be available to a sender and receiver performing lossy quantum data compression. The first setting we consider is that of quantum rate distortion coding with the help of a classical side channel. Our result here is that the regularized entanglement of formation characterizes the quantum rate distortion function, extending earlier work of Devetak and Berger. We also combine this bound with the entanglement-assisted bound from our prior work to obtain the best known bounds on the quantum rate distortion function for an isotropic qubit source. The second setting we consider is that of quantum rate distortion coding with quantum side information (QSI) available to the receiver. In order to prove results in this setting, we first state and prove a quantum reverse Shannon theorem with QSI (for tensor-power states), which extends the known tensor-power quantum reverse Shannon theorem. The achievability part of this theorem relies on the quantum state redistribution protocol, while the converse relies on the fact that the protocol can cause only a negligible disturbance to the joint state of the reference and the receiver's QSI. This quantum reverse Shannon theorem with QSI naturally leads to quantum rate-distortion theorems with QSI, with or without entanglement assistance.
Wilde, MM, Hayden, P, Buscemi, F & Hsieh, M-H 2012, 'The information-theoretic costs of simulating quantum measurements', Journal of Physics A: Mathematical and Theoretical, vol. 45, no. 45, pp. 1-67.
View/Download from: Publisher's site
View description>>
Winter's measurement compression theorem stands as one of the most penetrating insights of quantum information theory (QIT). In addition to making an original and profound statement about measurement in quantum theory, it also underlies several other general protocols in QIT. In this paper, we provide a full review of Winter's measurement compression theorem, detailing the information processing task, giving examples for understanding it, reviewing Winter's achievability proof, and detailing a new approach to its single-letter converse theorem. We prove an extension of the theorem to the case in which the sender is not required to receive the outcomes of the simulated measurement. The total cost of common randomness and classical communication can be lower for such a 'non-feedback' simulation, and we prove a single-letter converse theorem demonstrating optimality. We then review the Devetak-Winter theorem on classical data compression with quantum side information, providing new proofs of its achievability and converse parts. From there, we outline a new protocol that we call 'measurement compression with quantum side information,' announced previously by two of us in our work on triple trade-offs in quantum Shannon theory. This protocol has several applications, including its part in the 'classically-assisted state redistribution' protocol, which is the most general protocol on the static side of the quantum information theory tree, and its role in reducing the classical communication cost in a task known as local purity distillation. We also outline a connection between measurement compression with quantum side information and recent work on entropic uncertainty relations in the presence of quantum memory. Finally, we prove a single-letter theorem characterizing measurement compression with quantum side information when the sender is not required to obtain the measurement outcome.
Wittmann, B, Ramelow, S, Steinlechner, F, Langford, NK, Brunner, N, Wiseman, HM, Ursin, R & Zeilinger, A 2012, 'Loophole-free Einstein–Podolsky–Rosen experiment via quantum steering', New Journal of Physics, vol. 14, no. 5, pp. 053030-053030.
View/Download from: Publisher's site
Yu, N 2012, 'Multi-partite $W$-type state is determined by its single particle reduced density matrices', Phys. Rev. A, vol. 87, no. 5, p. 052310.
View/Download from: Publisher's site
View description>>
In this short note, we show that a multi-partite $W$-type state is, up to local unitaries, uniquely determined by its reduced density matrices.
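For context, the canonical representative of this class is the n-qubit W state, a standard definition not taken from the citation itself:

```latex
% The n-qubit W state: an equal superposition of all single-excitation
% basis states. W-type states are its local generalizations.
|W_n\rangle = \frac{1}{\sqrt{n}}\bigl(|10\cdots 0\rangle + |01\cdots 0\rangle + \cdots + |00\cdots 1\rangle\bigr).
```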
Yu, N, Duan, R & Ying, M 2012, 'Four Locally Indistinguishable Ququad-Ququad Orthogonal Maximally Entangled States', PHYSICAL REVIEW LETTERS, vol. 109, no. 2, pp. 020506-020506.
View/Download from: Publisher's site
View description>>
We explicitly exhibit a set of four ququad-ququad orthogonal maximally entangled states that cannot be perfectly distinguished by means of local operations and classical communication. Before our work, it was unknown whether there is a set of d locally indistinguishable d ⊗ d orthogonal maximally entangled states for some positive integer d. We further show that a 2 ⊗ 2 maximally entangled state can be used to locally distinguish this set of states without being consumed, thus demonstrating a novel phenomenon of entanglement discrimination catalysis. Based on this set of states, we construct a new set K consisting of four locally indistinguishable states such that K⊗m (with 4^m members) is locally distinguishable for some m greater than one. As an immediate application, we construct a noisy quantum channel with one sender and two receivers whose local zero-error classical capacity can achieve the full dimension of the input space, but only with a multi-shot protocol. © 2012 American Physical Society.
Zhao, S-M, Xiao, Y, Zhu, Y, Zhu, X-L & Hsieh, M-H 2012, 'New Class of Quantum Codes Constructed from Cyclic Difference Set', International Journal of Quantum Information, vol. 10, no. 01, pp. 1250015-1250015.
View/Download from: Publisher's site
View description>>
We introduce joint difference sets as a generalization of cyclic difference sets, and we construct a new class of quantum error-correcting codes (QECCs) from these joint difference sets. The main benefits of our method are as follows. First, we can construct quantum codes with both high rate and large block length, while maintaining good performance. Second, the density of the constructed quantum parity-check matrix can approach zero when the code length is very large. This allows us to use a simple iterative decoding algorithm. Interestingly, our method yields the well-known [5,1,3] QECC.
Babai, L & Qiao, Y 1970, 'Polynomial-time isomorphism test for groups with abelian Sylow towers', Leibniz International Proceedings in Informatics, LIPIcs, Symposium on Theoretical Aspects of Computer Science, Dagstuhl Publishing, Paris, France, pp. 453-464.
View/Download from: Publisher's site
View description>>
We consider the problem of testing isomorphism of groups of order n given by Cayley tables. The trivial n^(log n) bound on the time complexity for the general case has not been improved over the past four decades. Recently, Babai et al. (following Babai et al. in SODA 2011) presented a polynomial-time algorithm for groups without abelian normal subgroups, which suggests solvable groups as the hard case for group isomorphism problem. Extending recent work by Le Gall (STACS 2009) and Qiao et al. (STACS 2011), in this paper we design a polynomial-time algorithm to test isomorphism for the largest class of solvable groups yet, namely groups with abelian Sylow towers, defined as follows. A group G is said to possess a Sylow tower, if there exists a normal series where each quotient is isomorphic to a Sylow subgroup of G. A group has an abelian Sylow tower if it has a Sylow tower and all its Sylow subgroups are abelian. In fact, we are able to compute the coset of isomorphisms of groups formed as coprime extensions of an abelian group, by a group whose automorphism group is known. The mathematical tools required include representation theory, Wedderburn's theorem on semisimple algebras, and M. E. Harris's 1980 work on p'-automorphisms of abelian p-groups. We use tools from the theory of permutation group algorithms, and develop an algorithm for a parameterized version of the graph-isomorphism-hard setwise stabilizer problem, which may be of independent interest. © László Babai and Youming Qiao.
Babai, L, Codenotti, P & Qiao, Y 1970, 'Polynomial-Time Isomorphism Test for Groups with No Abelian Normal Subgroups', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Colloquium on Automata Languages and Programming, Springer Berlin Heidelberg, Warwick, UK, pp. 51-62.
View/Download from: Publisher's site
View description>>
We consider the problem of testing isomorphism of groups of order n given by Cayley tables. The trivial n^(log n) bound on the time complexity for the general case has not been improved upon over the past four decades. We demonstrate that the obstacle to efficient algorithms is the presence of abelian normal subgroups; we show this by giving a polynomial-time isomorphism test for groups without nontrivial abelian normal subgroups. This concludes a project started by the authors and J. A. Grochow (SODA 2011). Two key new ingredients are: (a) an algorithm to test permutational isomorphism of permutation groups in time polynomial in the order and simply exponential in the degree; (b) the introduction of the 'twisted code equivalence problem,' a generalization of the classical code equivalence problem by admitting a group action on the alphabet. Both of these problems are of independent interest. © 2012 Springer-Verlag.
Blumenstein, M, Pal, U & Uchida, S 1970, 'Message from General Chair and Program Chairs', 2012 10th IAPR International Workshop on Document Analysis Systems, 2012 10th IAPR International Workshop on Document Analysis Systems (DAS), IEEE.
View/Download from: Publisher's site
Bu, G, Lee, J, Guan, H, Blumenstein, M & Loo, Y-C 1970, 'Performance Prediction of Concrete Elements in Bridge Substructures using Integrated Deterioration Method', IABSE Congress Reports, IABSE Congress, Seoul 2012: Innovative Infrastructures – Towards Human Urbanism, International Association for Bridge and Structural Engineering (IABSE).
View/Download from: Publisher's site
View description>>
The typical probabilistic deterioration model cannot guarantee a reliable long-term prediction for various situations of available condition data. To minimise this limitation, this paper presents an advanced integrated method using a state-/time-based model to build a reliable transition probability for predicting the long-term performance of bridge elements. A selection process is developed in this method to automatically select a suitable prediction approach for a given situation of condition data. Furthermore, a Backward Prediction Model (BPM) is employed to effectively predict the bridge performance when the inspection data are insufficient. In this study, a benchmark example, a concrete element in bridge substructures, is selected to demonstrate that the BPM in conjunction with the time-based model can improve the reliability of long-term prediction.
Chitsaz, M, Wang, K, Blumenstein, M & Qi, G 1970, 'Concept Learning for ℰℒ++ by Refinement and Reinforcement', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific Rim International Conference on Artificial Intelligence, Springer Berlin Heidelberg, Kuching, Malaysia, pp. 15-26.
View/Download from: Publisher's site
View description>>
Ontology construction in OWL is an important and yet time-consuming task even for knowledge engineers, and thus a (semi-)automatic approach will greatly assist in constructing ontologies. In this paper, we propose a novel approach to learning concept definitions in ℰℒ++ from a collection of assertions. Our approach is based on both a refinement operator from inductive logic programming and a reinforcement learning algorithm. The use of reinforcement learning significantly reduces the search space of candidate concepts. In addition, we present an experimental evaluation of constructing a family ontology. The results show that our approach is competitive with an existing learning system for ℰℒ. © 2012 Springer-Verlag.
Datta, N, Hsieh, M & Wilde, MM 1970, 'Quantum rate distortion, reverse Shannon theorems, and source-channel separation', The 14th workshop on Quantum Information Processing (QIP 2012), Montreal, Canada.
View description>>
We derive quantum counterparts of two key theorems of classical information theory, namely, the rate distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present paper, we prove that the quantum rate distortion function is given in terms of the regularized entanglement of purification. We also determine a single-letter expression for the entanglement-assisted quantum rate distortion function, and we prove that it serves as a lower bound on the unassisted quantum rate distortion function. This implies that the unassisted quantum rate distortion function is non-negative and generally not equal to the coherent information between the source and distorted output (in spite of Barnum's conjecture that the coherent information would be relevant here). Moreover, we prove several quantum source-channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion.
Devitt, SJ & Nemoto, K 1970, 'Programming a Topological Quantum Computer', Test Symposium (ATS), 2012 IEEE 21st Asian, pp 55-60, IEEE, pp. 55-60.
View/Download from: Publisher's site
View description>>
Topological quantum computing has recently proven itself to be a powerful computational model when constructing viable architectures for large scale computation. The topological model is constructed from the foundation of an error correction code, required to correct for inevitable hardware faults that will exist for a large scale quantum device. It is also a measurement based model of quantum computation, meaning that the quantum hardware is responsible only for the construction of a large, computationally universal quantum state. This quantum state is then strategically consumed, allowing for the realisation of a fully error corrected quantum algorithm. The number of physical qubits needed by the quantum hardware and the amount of time required to implement an algorithm is dictated by the manner in which this universal quantum state is consumed. In this paper we examine the problem of algorithmic optimisation in the topological lattice and introduce the required elements that will be needed when designing a classical software package to compile and implement a large scale algorithm on a topological quantum computer.
Ferrie, C & Blume-Kohout, R 1970, 'Estimating the bias of a noisy coin', AIP Conference Proceedings, BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 31st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP, Waterloo, CANADA, pp. 14-21.
View/Download from: Publisher's site
Ferrie, C, Granade, CE & Cory, DG 1970, 'Adaptive Hamiltonian estimation using Bayesian experimental design', AIP Conference Proceedings, BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 31st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP, Waterloo, CANADA, pp. 165-173.
View/Download from: Publisher's site
Hsieh, M 1970, 'Introduction to Quantum Information Theory', Quantum Physics of Information, Shanghai Jiao Tong University, Shanghai, China.
View description>>
The building blocks of our communication systems and computational devices are bits. The scope of quantum information science is to replace bits with 'qubits', objects that obey the laws of quantum physics. This is a great challenge that has the potential to revolutionize information technology. The event will consist of a school and a workshop: the school will expose the attendants to the basic concepts of quantum information and computation, and it will be a great occasion for newcomers to the field. The workshop will comprise advanced tutorials and a program of talks based on current research problems and results.
Hsieh, M 1970, 'Strong converse capacities of quantum channels for classical information', Japan-Singapore Workshop on Multi-user Quantum Networks, Singapore.
Hsieh, M 1970, 'The information-theoretic costs of simulating quantum measurements', International Workshop on Quantum Computing and Quantum Information Processing 2012, Chinese Academy of Sciences (CAS), Beijing, China.
View description>>
This workshop will focus on the recent advances in quantum computing and quantum information processing. It aims at providing a forum for international and Chinese researchers in these fields to exchange their latest research results.
Ivanyos, G, Klauck, H, Lee, T, Santha, M & De Wolf, R 1970, 'New bounds on the classical and quantum communication complexity of some graph properties', Leibniz International Proceedings in Informatics, LIPIcs, pp. 148-159.
View/Download from: Publisher's site
View description>>
We study the communication complexity of a number of graph properties where the edges of the graph G are distributed between Alice and Bob (i.e., each receives some of the edges as input). Our main results are: • An Ω(n) lower bound on the quantum communication complexity of deciding whether an n-vertex graph G is connected, nearly matching the trivial classical upper bound of O(n log n) bits of communication. • A deterministic upper bound of O(n^(3/2) log n) bits for deciding if a bipartite graph contains a perfect matching, and a quantum lower bound of Ω(n) for this problem. • A Θ(n^2) bound for the randomized communication complexity of deciding if a graph has an Eulerian tour, and a Θ(n^(3/2)) bound for its quantum communication complexity. The first two quantum lower bounds are obtained by exhibiting a reduction from the n-bit Inner Product problem to these graph problems, which solves an open question of Babai, Frankl and Simon [2]. The third quantum lower bound comes from recent results about the quantum communication complexity of composed functions. We also obtain essentially tight bounds for the quantum communication complexity of a few other problems, such as deciding if G is triangle-free, or if G is bipartite, as well as computing the determinant of a distributed matrix. © G. Ivanyos, H. Klauck, T. Lee, M. Santha, R. de Wolf.
Mahler, D, Rozema, L, Darabi, A, Branczyk, AM, Combes, J, Ferrie, C, Blume-Kohout, R, James, DFV & Steinberg, A 1970, 'Experimental Demonstration of Adaptive Tomography and Self-Calibrating Tomography', Conference on Lasers and Electro-Optics 2012, Quantum Electronics and Laser Science Conference, OSA.
View/Download from: Publisher's site
Mahler, DH, Rozema, LA, Darabi, A, Branczyk, AM, Combes, J, Ferrie, C, Blume-Kohout, R, James, DFV & Steinberg, AM 1970, 'Experimental demonstration of adaptive tomography and self-calibrating tomography', 2012 Conference on Lasers and Electro-Optics, CLEO 2012, Conference on Lasers and Electro-Optics (CLEO), IEEE, San Jose, CA.
View description>>
We experimentally demonstrate two new quantum tomography protocols, one of which provides a quadratic speedup using adaptation and the other of which enables tomography to be done even with uncalibrated measurement devices. © 2012 OSA.
Mahler, DH, Rozema, LA, Darabi, A, Combes, J, Ferrie, C, Blume-Kohout, R & Steinberg, AM 1970, 'Experimental Demonstration of Adaptive Tomography', Frontiers in Optics 2012/Laser Science XXVIII, Frontiers in Optics, OSA.
View/Download from: Publisher's site
View description>>
We experimentally demonstrate, using heralded single photons in a linear optics set-up, a new form of quantum state tomography which provides a quadratic speedup using a simple, one-step adaptation protocol involving only separable measurements. © 2011.
Nemoto, K, Everitt, MS, Devitt, SJ, Stephens, AM, Trupke, M & Schmiedmayer, J 1970, 'Quantum information device based on NV diamond centers for quantum network', Optics InfoBase Conference Papers, Conference on Lasers and Electro-Optics (CLEO), IEEE, San Jose, CA.
View description>>
We propose a model of quantum information devices based on an optical cavity with an NV centre. These devices can be easily modified to accommodate imperfections such as photon loss, maintaining the feasibility and scalability. © OSA 2012.
Nguyen, V & Blumenstein, M 1970, 'A Compact Size Feature Set for the Off-Line Signature Verification Problem', 2012 10th IAPR International Workshop on Document Analysis Systems, 2012 10th IAPR International Workshop on Document Analysis Systems (DAS), IEEE, Gold Coast, Australia, pp. 261-265.
View/Download from: Publisher's site
View description>>
With increasing computational power, researchers in the area of off-line signature verification have been able to investigate feature extraction techniques that produce large-dimensional feature vectors. However, a large feature vector is not necessarily associated with high performance. This paper investigates the performance of a small feature set consisting of 33 feature values. In the experiments using Support Vector Machines (SVMs), an average error rate (AER) of 16.80% was obtained together with a low false acceptance rate (FAR) for random forgeries of 0.19%. A significant reduction of the error rate was obtained when the proposed global features were employed, which demonstrates their high discriminant power. These results suggest that the use of global features for the off-line signature verification problem is worth further investigation. © 2012 IEEE.
Pal, S, Alireza, A, Pal, U & Blumenstein, M 1970, 'Multi-script off-line signature identification', 2012 12th International Conference on Hybrid Intelligent Systems (HIS), 2012 12th International Conference on Hybrid Intelligent Systems (HIS), IEEE, Pune, India, pp. 236-240.
View/Download from: Publisher's site
View description>>
In this paper, we present an empirical contribution towards the understanding of multi-script signature identification. In the proposed signature identification system, the signatures of Bengali (Bangla), Hindi (Devanagari) and English are considered for the identification process. This system will identify whether a claimed signature belongs to the group of Bengali, Hindi or English signatures. Zernike Moments and the histogram of gradients are employed as two different feature extraction techniques. In the proposed system, Support Vector Machines (SVMs) are considered as classifiers for signature identification. A database of 2100 Bangla signatures, 2100 Hindi signatures and 2100 English signatures is used for experimentation. Two different results based on two different feature sets are calculated and analysed. The highest accuracy of 92.14% is obtained based on the gradient features using 4200 (1400 Bangla + 1400 Hindi + 1400 English) samples for training and 2100 (700 Bangla + 700 Hindi + 700 English) samples for testing. © 2012 IEEE.
Pal, S, Blumenstein, M & Pal, U 1970, 'Hindi Off-Line Signature Verification', 2012 International Conference on Frontiers in Handwriting Recognition, 2012 International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Bari, Italy, pp. 373-378.
View/Download from: Publisher's site
View description>>
Handwritten signatures are one of the widely used biometrics for document authentication as well as human authorization. The purpose of this paper is to present an off-line signature verification system involving Hindi signatures. Signature verification is a process by which the questioned signature is examined in detail in order to determine whether it belongs to the claimed person or not. Despite substantial research in the field of signature verification involving Western signatures, very little attention has been dedicated to non-Western signatures such as Chinese, Japanese, Arabic, Persian etc. In this paper, the performance of an off-line signature verification system involving Hindi signatures, whose style is distinct from Western scripts, has been investigated. The gradient and Zernike moment features were employed and Support Vector Machines (SVMs) were considered for verification. To the best of the authors' knowledge, Hindi signatures have never been used for the task of signature verification and this is the first report of using Hindi signatures in this area. The Hindi signature database employed for experimentation consisted of 840 (35×24) genuine signatures and 1050 (35×30) forgeries. Encouraging error rates of 7.42% FRR and 4.28% FAR were obtained following experimentation when the gradient features were employed. © 2012 IEEE.
Pal, S, Chanda, S, Pal, U, Franke, K & Blumenstein, M 1970, 'Off-line signature verification using G-SURF', 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA), 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA), IEEE, Kochi, India, pp. 586-591.
View/Download from: Publisher's site
View description>>
In the field of biometric authentication, automatic signature identification and verification has been a strong research area because of the social and legal acceptance and extensive use of the written signature as an easy method for authentication. Signature verification is a process in which the questioned signature is examined in detail in order to determine whether it belongs to the claimed person or not. Signatures provide a secure means for confirmation and authorization in legal documents. Nowadays, signature identification and verification has therefore become an essential component in automating the rapid processing of documents containing embedded signatures. Sometimes, part-based signature verification can be useful when a questioned signature has lost its original shape due to inferior scanning quality. In order to address the above-mentioned adverse scenario, we propose a new feature encoding technique. This feature encoding is based on the amalgamation of Gabor filter-based features with SURF features (G-SURF). Features generated from a signature are applied to a Support Vector Machine (SVM) classifier. For experimentation, 1500 (50x30) forgeries and 1200 (50x24) genuine signatures from the GPDS signature database were used. A verification accuracy of 97.05% was obtained from the experiments. © 2012 IEEE.
Pal, S, Nguyen, V, Blumenstein, M & Pal, U 1970, 'Off-line Bangla signature verification', Proceedings - 10th IAPR International Workshop on Document Analysis Systems, DAS 2012, IEEE International Joint Conference on Neural Networks, IEEE, USA, pp. 282-286.
View/Download from: Publisher's site
View description>>
In the field of information security, biometric systems play an important role. Within biometrics, automatic signature identification and verification has been a strong research area because of the social and legal acceptance and extensive use of the written signature as a means of individual authentication. Signature verification is a process in which the questioned signature is examined in detail in order to determine whether it belongs to the claimed person or not. Despite substantial research in the field of signature verification involving Western signatures, very few works have been dedicated to non-Western signatures such as Chinese, Japanese, Arabic, or Persian. In this paper, the performance of an off-line signature verification system involving Bangla signatures, whose style is distinct from Western scripts, was investigated. The Gaussian Grid feature extraction technique was employed for feature extraction and Support Vector Machines (SVMs) were considered for classification. The Bangla signature database employed in the experiments consisted of 3000 forgeries and 2400 genuine signatures. An encouraging accuracy of 90.4% was obtained from the experiments. © 2012 IEEE.
Pal, S, Pal, U & Blumenstein, M 1970, 'Off-line English and Chinese signature identification using foreground and background features', The 2012 International Joint Conference on Neural Networks (IJCNN), 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane), IEEE.
View/Download from: Publisher's site
View description>>
In the field of information security, the usage of biometrics is growing for user authentication. Automatic signature recognition and verification is one of several biometric techniques used to verify the identity of individuals. In this paper, a foreground and background based technique is proposed for identification of scripts from bi-lingual (English/Roman and Chinese) off-line signatures. This system identifies whether a claimed signature belongs to the group of English signatures or Chinese signatures. The identification of signatures based on their script is a major contribution for multi-script signature verification. Two background information extraction techniques are used to produce the background components of the signature images. A gradient-based method was used to extract the features of the foreground as well as background components. The Zernike moment feature was also employed on signature samples. A Support Vector Machine (SVM) is used as the classifier for signature identification in the proposed system. A database of 1120 (640 English + 480 Chinese) signature samples was used for training and 560 (320 English + 240 Chinese) signature samples were used for testing the proposed system. An encouraging identification accuracy of 97.70% was obtained using the gradient feature. © 2012 IEEE.
Sharma, N, Pal, U & Blumenstein, M 2012, 'Recent Advances in Video Based Document Processing: A Review', 2012 10th IAPR International Workshop on Document Analysis Systems, 2012 10th IAPR International Workshop on Document Analysis Systems (DAS), IEEE, Gold Coast, Australia, pp. 63-68.
View/Download from: Publisher's site
View description>>
Extraction and recognition of text present in video has become a very popular research area in the last decade. Generally, text present in video frames is of different size, orientation, style, etc. with complex backgrounds, noise, low resolution and contrast. These factors make the automatic text extraction and recognition in video frames a challenging task. A large number of techniques have been proposed by various researchers in the recent past to address the problem. This paper presents a review of various state-of-the-art techniques proposed towards different stages (e.g. detection, localization, extraction, etc.) of text information processing in video frames. Looking at the growing popularity and the recent developments in the processing of text in video frames, this review imparts details of current trends and potential directions for further research activities to assist researchers. © 2012 IEEE.
Sharma, N, Shivakumara, P, Pal, U, Blumenstein, M & Tan, CL 2012, 'A new method for arbitrarily-oriented text detection in video', Proceedings - 10th IAPR International Workshop on Document Analysis Systems, DAS 2012, International Workshop on Document Analysis Systems, IEEE, Gold Coast, Australia, pp. 74-78.
View/Download from: Publisher's site
View description>>
Text detection in video frames plays a vital role in enhancing the performance of information extraction systems because the text in video frames helps in indexing and retrieving video efficiently and accurately. This paper presents a new method for arbitrarily-oriented text detection in video, based on dominant text pixel selection, text representatives and region growing. The method uses gradient pixel direction and magnitude corresponding to Sobel edge pixels of the input frame to obtain dominant text pixels. Edge components in the Sobel edge map corresponding to dominant text pixels are then extracted and we call them text representatives. We eliminate broken segments of each text representative to get candidate text representatives. Then the perimeter of candidate text representatives grows along the text direction in the Sobel edge map to group the neighboring text components, which we call word patches. The word patches are used for finding the direction of text lines, and then the word patches are expanded in the same direction in the Sobel edge map to group the neighboring word patches and to restore missing text information. This results in extraction of arbitrarily-oriented text from the video frame. To evaluate the method, we considered arbitrarily-oriented data, non-horizontal data, horizontal data, Hua's data and ICDAR-2003 competition data (camera images). The experimental results show that the proposed method outperforms the existing method in terms of recall and f-measure. © 2012 IEEE.
Suwanwiwat, H, Nguyen, V & Blumenstein, M 2012, 'Off-line Restricted-Set Handwritten Word Recognition for Student Identification in a Short Answer Question Automated Assessment System', 2012 12th International Conference on Hybrid Intelligent Systems (HIS), International Conference on Hybrid Intelligent Systems, IEEE, Pune, India, pp. 167-172.
View/Download from: Publisher's site
View description>>
Handwriting recognition is one of the most intensive areas of study in the field of pattern recognition. Many applications are able to benefit from a robust off-line handwriting recognition technique; an automatic off-line assessment system and a writer identification system are two of those applications. Off-line automatic assessment systems can be an aid for teachers in the marking process, as they can reduce the time consumed by the human marker. There has only been limited work undertaken in developing off-line automatic assessment systems using handwriting recognition, and none in developing student identification systems, even though such systems would clearly benefit the education sector. In order to develop a complete off-line automatic assessment system, student identification using full student names is proposed in this paper. The Gaussian grid and Modified Direction feature extraction techniques are investigated in order to develop the proposed system. The recognition rates achieved using both techniques are encouraging (up to 99.08% for the Modified Direction feature extraction technique, and up to 98.28% for the Gaussian grid feature extraction technique). © 2012 IEEE.
Yu, N & Ying, M 2012, 'Reachability and Termination Analysis of Concurrent Quantum Programs', CONCUR 2012 - Concurrency Theory, International Conference on Concurrency Theory, Springer-Verlag, Newcastle upon Tyne, UK, pp. 69-83.
View/Download from: Publisher's site
View description>>
We introduce a Markov chain model of concurrent quantum programs. This model is a quantum generalization of Hart, Sharir and Pnueli's probabilistic concurrent programs. Some characterizations of the reachable space, uniformly repeatedly reachable space and termination of a concurrent quantum program are derived by the analysis of their mathematical structures. Based on these characterizations, algorithms for computing the reachable space and uniformly repeatedly reachable space and for deciding the termination are given. © 2012 Springer-Verlag.
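To make the reachability notion concrete: the reachable space from an initial state is the smallest subspace that contains the initial state's support and is invariant under every Kraus operator of the program's quantum operations. The toy numerical sketch below (my own construction for intuition, not the paper's algorithm) computes an orthonormal basis of that subspace by repeated orthogonalization until no Kraus operator produces a new direction:

```python
import numpy as np

def reachable_subspace(kraus_ops, initial_vec, tol=1e-10):
    """Orthonormal basis of the smallest subspace containing initial_vec
    that is invariant under every operator in kraus_ops.

    Columns of the returned array span the reachable space; its column
    count is the dimension of that space.
    """
    v0 = np.asarray(initial_vec, dtype=complex)
    basis = (v0 / np.linalg.norm(v0))[:, None]
    changed = True
    while changed:
        changed = False
        for K in kraus_ops:
            for i in range(basis.shape[1]):
                w = K @ basis[:, i]
                # Project out the current span; keep only the new component.
                w = w - basis @ (basis.conj().T @ w)
                n = np.linalg.norm(w)
                if n > tol:
                    basis = np.hstack([basis, (w / n)[:, None]])
                    changed = True
    return basis
```

For example, a single cyclic-shift Kraus operator on a 3-dimensional space makes the whole space reachable from a basis state, while the identity operator leaves the reachable space 1-dimensional. The iteration terminates because the basis can grow at most to the ambient dimension.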