Brun, TA, Devetak, I & Hsieh, M-H 2014, 'Catalytic Quantum Error Correction', IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3073-3089.
We develop the theory of entanglement-assisted quantum error-correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to preshared entanglement. Conventional stabilizer codes are equivalent to self-orthogonal symplectic codes. In contrast, EAQEC codes do not require self-orthogonality, which greatly simplifies their construction. We show how any classical binary or quaternary block code can be made into an EAQEC code. We provide a table of best known EAQEC codes with code length up to 10. With the self-orthogonality constraint removed, we see that the distance of an EAQEC code can be better than any standard quantum error-correcting code with the same fixed net yield. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes, which assume a subset of the qubits are noiseless. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent. © 1963-2012 IEEE.
Bu, G, Lee, J, Guan, H, Blumenstein, M & Loo, Y-C 2014, 'Development of an Integrated Method for Probabilistic Bridge-Deterioration Modeling', Journal of Performance of Constructed Facilities, vol. 28, no. 2, pp. 330-340.
Probabilistic deterioration models such as state-based and time-based models are only capable of predicting future bridge-condition ratings when a sufficient amount of condition data and a reasonable data distribution are available. However, such data are usually difficult to acquire from limited bridge-inspection records. As a result, these probabilistic models cannot guarantee reliable long-term prediction for each of the bridge elements concerned. To minimize this shortcoming, this paper proposes an advanced integrated method to construct workable transition probabilities for predicting long-term bridge performance. A selection process within this method automatically chooses a suitable prediction procedure for a given situation in terms of available inspection data. The backward prediction model (BPM) is also incorporated to effectively predict the bridge performance when sufficient inspection data are unavailable. Four different situations in regard to the available inspection data are predefined in this study to demonstrate the capabilities of the proposed integrated method. The outcomes show that the method can help develop an effective prediction model for various situations in terms of the quantity and distribution of available condition-rating data. © 2014 American Society of Civil Engineers.
Bu, GP, Lee, JH, Guan, H, Loo, YC & Blumenstein, M 2014, 'Implementation of Elman neural networks for enhancing reliability of integrated bridge deterioration model', Australian Journal of Structural Engineering, vol. 15, no. 1, pp. 51-63.
Probabilistic modelling is one of the most prominent techniques in bridge deterioration forecasting. It can be classified into two types, namely, state- and time-based models. The reliability of both modelling techniques in forecasting long-term performance relies heavily on a sufficient amount of bridge condition rating data being available, together with a well-distributed deterioration pattern over the age of the bridge. However, it can be problematic when the available condition rating records are insufficient. In order to overcome this problem, an integrated deterioration method incorporating both the state- and time-based models has recently been developed. Despite such development and advancement, certain issues still remain, with some cases of given condition data unable to produce a reliable long-term performance curve. Aiming to achieve enhanced prediction performance, an Elman neural network (ENN) technique is incorporated in the integrated method to replace the third-order polynomial regression function, the latter being the core component for long-term prediction in the state-based model. In the present study, the ENN is able to generate more reliable deterioration patterns than a typical deterministic method. The results demonstrate that the integrated method incorporating the ENN is more effective in handling various situations of condition data quantities and distributions for generating long-term performance curves. © Institution of Engineers Australia, 2014.
Chen, J, Ji, Z, Li, C-K, Poon, Y-T, Shen, Y, Yu, N, Zeng, B & Zhou, D 2014, 'Discontinuity of Maximum Entropy Inference and Quantum Phase Transitions', New Journal of Physics, vol. 17, no. 8, p. 083019.
In this paper, we discuss the connection between two genuinely quantum phenomena: the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit.
Chen, W, Cao, Y, Wang, H & Feng, Y 2014, 'Minimum guesswork discrimination between quantum states', Quantum Information and Computation, vol. 15, no. 9-10, pp. 737-758.
Error probability is a popular and well-studied optimization criterion in discriminating non-orthogonal quantum states. It captures the threat from an adversary who can only query the actual state once. However, when the adversary is able to use a brute-force strategy to query the state, a discrimination measurement with minimum error probability does not necessarily minimize the number of queries to get the actual state. In light of this, we take Massey's guesswork as the underlying optimization criterion and study the problem of minimum guesswork discrimination. We show that this problem can be reduced to a semidefinite programming problem. Necessary and sufficient conditions when a measurement achieves minimum guesswork are presented. We also reveal the relation between minimum guesswork and minimum error probability. We show that the two criteria generally disagree with each other, except for the special case with two states. Both upper and lower information-theoretic bounds on minimum guesswork are given. For geometrically uniform quantum states, we provide sufficient conditions when a measurement achieves minimum guesswork. Moreover, we give the necessary and sufficient condition under which making no measurement at all would be the optimal strategy.
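As a concrete classical reference point for the criterion used above, Massey's guesswork is the expected number of guesses needed to identify a random value when outcomes are guessed in order of decreasing probability. A minimal sketch (the example distributions are invented for illustration):

```python
# Massey's guesswork of a distribution: guess values in order of
# decreasing probability; G = sum_k k * p_(k) with p_(1) >= p_(2) >= ...
def guesswork(probs):
    ordered = sorted(probs, reverse=True)
    return sum(k * p for k, p in enumerate(ordered, start=1))

# Example distributions (invented): a uniform one and a skewed one.
uniform4 = [0.25, 0.25, 0.25, 0.25]
skewed4 = [0.1, 0.7, 0.1, 0.1]

print(guesswork(uniform4))            # 2.5
print(round(guesswork(skewed4), 10))  # 1.6
```

The skewed distribution needs fewer expected guesses, which is the sense in which guesswork quantifies an adversary's brute-force effort.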
Chitambar, E & Hsieh, M-H 2014, 'Asymptotic state discrimination and a strict hierarchy in distinguishability norms', Journal of Mathematical Physics, vol. 55, no. 11, p. 112204.
In this paper, we consider the problem of discriminating quantum states by local operations and classical communication (LOCC) when an arbitrarily small amount of error is permitted. This paradigm is known as asymptotic state discrimination, and we derive necessary conditions for when two multipartite states of any size can be discriminated perfectly by asymptotic LOCC. We use this new criterion to prove a gap in the LOCC and separable distinguishability norms. We then turn to the operational advantage of using two-way classical communication over one-way communication in LOCC processing. With a simple two-qubit product state ensemble, we demonstrate a strict majorization of the two-way LOCC norm over the one-way norm.
Chitambar, E, Hsieh, M-H & Winter, A 2014, 'The Private and Public Correlation Cost of Three Random Variables with Collaboration', IEEE Transactions on Information Theory, vol. 62, no. 4, pp. 2034-2043.
In this paper we consider the problem of generating arbitrary three-party correlations from a combination of public and secret correlations. Two parties -- called Alice and Bob -- share perfectly correlated bits that are secret from a collaborating third party, Charlie. At the same time, all three parties have access to a separate source of correlated bits, and their goal is to convert these two resources into multiple copies of some given tripartite distribution $P_{XYZ}$. We obtain a single-letter characterization of the trade-off between public and private bits that are needed to achieve this task. The rate of private bits is shown to generalize Wyner's classic notion of common information held between a pair of random variables. The problem we consider is also closely related to the task of secrecy formation in which $P_{XYZ}$ is generated using public communication and local randomness but with Charlie functioning as an adversary instead of a collaborator. We describe in detail the differences between the collaborative and adversarial scenarios.
Combes, J, Ferrie, C, Jiang, Z & Caves, CM 2014, 'Quantum limits on postselected, probabilistic quantum metrology', Physical Review A, vol. 89, no. 5.
Probabilistic metrology attempts to improve parameter estimation by occasionally reporting an excellent estimate and the rest of the time either guessing or doing nothing at all. Here we show that probabilistic metrology can never improve quantum limits on estimation of a single parameter, both on average and asymptotically in number of trials, if performance is judged relative to mean-square estimation error. We extend the result by showing that for a finite number of trials, the probability of obtaining better estimates using probabilistic metrology, as measured by mean-square error, decreases exponentially with the number of trials. To be tight, the performance bounds we derive require that likelihood functions be approximately normal, which in turn depends on how rapidly specific distributions converge to a normal distribution with number of trials. © 2014 American Physical Society.
Datta, N, Hsieh, M-H & Oppenheim, J 2014, 'An upper bound on the second order asymptotic expansion for the quantum communication cost of state redistribution', Journal of Mathematical Physics, vol. 57, no. 5, p. 052203.
State redistribution is the protocol in which, given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.
Datta, N, Tomamichel, M & Wilde, MM 2014, 'On the Second-Order Asymptotics for Entanglement-Assisted Communication', Quantum Information Processing, vol. 15, no. 6, pp. 2569-2591.
The entanglement-assisted classical capacity of a quantum channel is known to provide the formal quantum generalization of Shannon's classical channel capacity theorem, in the sense that it admits a single-letter characterization in terms of the quantum mutual information and does not increase in the presence of a noiseless quantum feedback channel from receiver to sender. In this work, we investigate second-order asymptotics of the entanglement-assisted classical communication task. That is, we consider how quickly the rates of entanglement-assisted codes converge to the entanglement-assisted classical capacity of a channel as a function of the number of channel uses and the error tolerance. We define a quantum generalization of the mutual information variance of a channel in the entanglement-assisted setting. For covariant channels, we show that this quantity is equal to the channel dispersion, and thus completely characterize the convergence towards the entanglement-assisted classical capacity when the number of channel uses increases. Our results also apply to entanglement-assisted quantum communication, due to the equivalence between entanglement-assisted classical and quantum communication established by the teleportation and super-dense coding protocols.
Devitt, SJ, Greentree, AD, Stephens, AM & Meter, RV 2014, 'High-speed quantum networking by ship', Scientific Reports, vol. 6, no. 1, p. 36163.
Networked entanglement is an essential component for a plethora of quantum computation and communication protocols. Direct transmission of quantum signals over long distances is prevented by fibre attenuation and the no-cloning theorem, motivating the development of quantum repeaters, designed to purify entanglement, extending its range. Quantum repeaters have been demonstrated over short distances, but error-corrected, global repeater networks with high bandwidth require new technology. Here we show that error-corrected quantum memories installed in cargo containers and carried by ship can provide a flexible connection between local networks, enabling low-latency, high-fidelity quantum communication across global distances at higher bandwidths than previously proposed. With demonstrations of technology with sufficient fidelity to enable topological error-correction, implementation of the quantum memories is within reach, and bandwidth increases with improvements in fabrication. Our approach to quantum networking avoids technological restrictions of repeater deployment, providing an alternate path to a worldwide Quantum Internet.
Duan, R & Winter, A 2014, 'No-Signalling Assisted Zero-Error Capacity of Quantum Channels and an Information Theoretic Interpretation of the Lovasz Number', IEEE Transactions on Information Theory, vol. 62, no. 2, pp. 891-914.
We study the one-shot zero-error classical capacity of a quantum channel assisted by quantum no-signalling correlations, and the reverse problem of exact simulation of a prescribed channel by a noiseless classical one. Quantum no-signalling correlations are viewed as two-input and two-output completely positive and trace preserving maps with linear constraints enforcing that the device cannot signal. Both problems lead to simple semidefinite programmes (SDPs) that depend only on the Kraus operator space of the channel. In particular, we show that the zero-error classical simulation cost is precisely the conditional min-entropy of the Choi-Jamiolkowski matrix of the given channel. The zero-error classical capacity is given by a similar-looking but different SDP; the asymptotic zero-error classical capacity is the regularization of this SDP, and in general we do not know of any simple form. Interestingly, however, for the class of classical-quantum channels, we show that the asymptotic capacity is given by a much simpler SDP, which coincides with a semidefinite generalization of the fractional packing number suggested earlier by Aram Harrow. This finally results in an operational interpretation of the celebrated Lovasz $\vartheta$ function of a graph as the zero-error classical capacity of the graph assisted by quantum no-signalling correlations, the first information theoretic interpretation of the Lovasz number.
Everitt, MS, Devitt, S, Munro, WJ & Nemoto, K 2014, 'High-fidelity gate operations with the coupled nuclear and electron spins of a nitrogen-vacancy center in diamond', Physical Review A, vol. 89, no. 5.
Feng, Y, Deng, Y & Ying, M 2014, 'Symbolic Bisimulation for Quantum Processes', ACM Transactions on Computational Logic, vol. 15, no. 2, pp. 1-35.
With the previous notions of bisimulation presented in the literature, to check if two quantum processes are bisimilar, we have to instantiate their free quantum variables with arbitrary quantum states, and verify the bisimilarity of the resulting configurations. This makes checking bisimilarity infeasible from an algorithmic point of view, because quantum states constitute a continuum. In this article, we introduce a symbolic operational semantics for quantum processes directly at the quantum operation level, which allows us to describe the bisimulation between quantum processes without resorting to quantum states. We show that the symbolic bisimulation defined here is equivalent to the open bisimulation for quantum processes in previous work, when strong bisimulations are considered. An algorithm for checking symbolic ground bisimilarity is presented. We also give a modal characterisation for quantum bisimilarity based on an extension of Hennessy-Milner logic to quantum processes. © 2014 ACM.
Ferrie, C 2014, 'Quantum Model Averaging', New Journal of Physics, vol. 16, p. 093035.
Standard tomographic analyses ignore model uncertainty. It is assumed that a given model generated the data and the task is to estimate the quantum state, or a subset of parameters within that model. Here we apply a model averaging technique to mitigate the risk of overconfident estimates of model parameters in two examples: (1) selecting the rank of the state in tomography and (2) selecting the model for the fidelity decay curve in randomized benchmarking.
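As a rough classical illustration of the idea (not the paper's Bayesian quantum treatment), model averaging combines the candidate models' predictions using posterior-style weights rather than committing to a single model; in this sketch the weights are taken proportional to exp(-BIC/2), and all model scores and predictions are invented:

```python
import math

# Posterior-style weights proportional to exp(-BIC / 2); subtracting
# the minimum BIC first keeps the exponentials well-conditioned.
def averaging_weights(bics):
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(raw)
    return [w / total for w in raw]

# Weighted average of the candidate models' predictions.
def model_average(predictions, bics):
    return sum(w * p for w, p in zip(averaging_weights(bics), predictions))

# Two invented models with equal scores contribute equally:
print(model_average([1.0, 2.0], [10.0, 10.0]))  # 1.5
```

A model with a much larger BIC receives an exponentially smaller weight, so the average is dominated by the better-supported models without discarding the others outright.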
Ferrie, C 2014, 'Self-guided quantum tomography', Physical Review Letters, vol. 113, no. 19, p. 190404.
We introduce a self-learning tomographic technique in which the experiment guides itself to an estimate of its own state. Self-guided quantum tomography (SGQT) uses measurements to directly test hypotheses in an iterative algorithm which converges to the true state. We demonstrate through simulation on many qubits that SGQT is a more efficient and robust alternative to the usual paradigm of taking a large amount of informationally complete data and solving the inverse problem of post-processed state estimation.
Ferrie, C 2014, 'The best Fisher is upstream: data processing inequalities for quantum metrology', Physical Review A, vol. 90, no. 1, p. 014101.
We apply the classical data processing inequality to quantum metrology to show that manipulating the classical information from a quantum measurement cannot aid in the estimation of parameters encoded in quantum states. We further derive a quantum data processing inequality to show that coherent manipulation of quantum data also cannot improve the precision in estimation. In addition, we comment on the assumptions necessary to arrive at these inequalities and how they might be avoided, providing insights into enhancement procedures which are not provably wrong.
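The classical data processing inequality invoked here can be checked numerically: post-processing measurement outcomes cannot increase the Fisher information about the parameter. A small sketch with an invented example (two Bernoulli trials whose outcomes are then coarse-grained):

```python
# Finite-difference estimate of the classical Fisher information
# F(theta) = sum_x (dp_x/dtheta)^2 / p_x for a parametrized pmf.
def fisher_information(pmf, theta, eps=1e-6):
    p = pmf(theta)
    dp = [(a - b) / (2 * eps) for a, b in zip(pmf(theta + eps), pmf(theta - eps))]
    return sum(d * d / pi for d, pi in zip(dp, p))

def full_data(theta):
    # number of successes in two Bernoulli(theta) trials
    return [(1 - theta) ** 2, 2 * theta * (1 - theta), theta ** 2]

def processed(theta):
    # classical post-processing: only report "at least one success"
    p = full_data(theta)
    return [p[0], p[1] + p[2]]

theta = 0.3
# Coarse-graining can only lose information about theta:
assert fisher_information(processed, theta) <= fisher_information(full_data, theta)
```

Analytically, the full-data information is 2/(θ(1-θ)) and the coarse-grained information is 4/(θ(2-θ)), so the gap is the price of merging the two outcomes.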
Ferrie, C & Combes, J 2014, 'How the result of a single coin toss can turn out to be 100 heads', Physical Review Letters, vol. 113, no. 12, p. 120404.
We show that the phenomenon of anomalous weak values is not limited to quantum theory. In particular, we show that the same features occur in a simple model of a coin subject to a form of classical backaction with pre- and post-selection. This provides evidence that weak values are not inherently quantum, but rather a purely statistical feature of pre- and post-selection with disturbance.
Ferrie, C & Moussa, O 2014, 'Robust and efficient in situ quantum control', Physical Review A, vol. 91, no. 5, p. 052306.
Precision control of quantum systems is the driving force for both quantum technology and the probing of physics at the quantum and nano-scale. We propose an implementation-independent method for in situ quantum control that leverages recent advances in the direct estimation of quantum gate fidelity. Our algorithm takes account of the stochasticity of the problem, is suitable for closed-loop control, and requires only a constant number of fidelity-estimating experiments per iteration, independent of the dimension of the control space. It is efficient and robust to both statistical and technical noise.
Furrer, F, Franz, T, Berta, M, Leverrier, A, Scholz, VB, Tomamichel, M & Werner, RF 2014, 'Erratum: Continuous Variable Quantum Key Distribution: Finite-Key Analysis of Composable Security Against Coherent Attacks [Phys. Rev. Lett. 109, 100502 (2012)]', Physical Review Letters, vol. 112, no. 1.
Ghanbarzadeh, R, Ghapanchi, AH & Blumenstein, M 2014, 'Application areas of multi-user virtual environments in the healthcare context.', Stud Health Technol Inform, vol. 204, pp. 38-46.
This study conducts a systematic literature review on the application of three-dimensional virtual worlds (3DVWs) in the healthcare context. During the past decade, 3DVWs have emerged as a cutting-edge technology that has much to offer the healthcare sector. Our systematic review began with an initial set of 1088 studies published from 1990 to 2013 that have used 3DVWs for healthcare-specific purposes. We found a variety of areas of application for 3DVWs in healthcare, and categorised them into the following categories: education, treatment, evaluation, lifestyle and simulation. The big picture of application areas of 3DVWs presented in this study can be valuable and insightful for researchers and the healthcare community.
Ghanbarzadeh, R, Ghapanchi, AH, Blumenstein, M & Talaei-Khoei, A 2014, 'A Decade of Research on the Use of Three-Dimensional Virtual Worlds in Health Care: A Systematic Literature Review', Journal of Medical Internet Research, vol. 16, no. 2.
Background: A three-dimensional virtual world (3DVW) is a computer-simulated electronic 3D virtual environment that users can explore, inhabit, communicate, and interact with via avatars, which are graphical representations of the users. Since the early 2000s, 3DVWs have emerged as a technology that has much to offer the health care sector. Objective: The purpose of this study was to characterize different application areas of various 3DVWs in health and medical context and categorize them into meaningful categories. Methods: This study employs a systematic literature review on the application areas of 3DVWs in health care. Our search resulted in 62 papers from five top-ranking scientific databases published from 1990 to 2013 that describe the use of 3DVWs for health care specific purposes. We noted a growth in the number of academic studies on the topic since 2006. Results: We found a wide range of application areas for 3DVWs in health care and classified them into the following six categories: academic education, professional education, treatment, evaluation, lifestyle, and modeling. The education category, including professional and academic education, contains the largest number of papers (n=34), of which 23 are related to the academic education category and 11 to the professional education category. Nine papers are allocated to treatment category, and 8 papers have contents related to evaluation. In 4 of the papers, the authors used 3DVWs for modeling, and 3 papers targeted lifestyle purposes. The results indicate that most of the research to date has focused on education in health care. We also found that most studies were undertaken in just two countries, the United States and the United Kingdom. Conclusions: 3D virtual worlds present several innovative ways to carry out a wide variety of health-related activities. The big picture of application areas of 3DVWs presented in this review could be of value and offer insights to both the health care communi...
Gupta, A, Kayal, N & Qiao, Y 2014, 'Random arithmetic formulas can be reconstructed efficiently', computational complexity, vol. 23, no. 2, pp. 207-303.
Informally stated, we present here a randomized algorithm that, given black-box access to the polynomial f computed by an unknown/hidden arithmetic formula φ, reconstructs, on the average, an equivalent or smaller formula φ̌ in time polynomial in the size of its output φ̌. Specifically, we consider arithmetic formulas wherein the underlying tree is a complete binary tree, the leaf nodes are labeled by affine forms (i.e., degree one polynomials) over the input variables, and the internal nodes consist of alternating layers of addition and multiplication gates. We call these alternating normal form (ANF) formulas. If a polynomial f can be computed by an arithmetic formula μ of size s, it can also be computed by an ANF formula φ, possibly of slightly larger size s^O(1). Our algorithm gets as input black-box access to the output polynomial f (i.e., for any point x in the domain, it can query the black box and obtain f(x) in one step) of a random ANF formula φ of size s (wherein the coefficients of the affine forms in the leaf nodes of φ are chosen independently and uniformly at random from a large enough subset of the underlying field). With high probability (over the choice of coefficients in the leaf nodes), the algorithm efficiently (i.e., in time s^O(1)) computes an ANF formula φ̌ of size s computing f. This then is the strongest model of arithmetic computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case. © 2014 Springer Basel.
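A toy sketch of the ANF circuit model described above (a complete binary tree with alternating addition and multiplication layers and random affine forms at the leaves) may help fix the definitions; it only illustrates black-box evaluation, not the paper's reconstruction algorithm, and the coefficient range is invented:

```python
import random

# Toy ANF formula: a complete binary tree of the given depth with
# alternating layers of addition and multiplication gates; each leaf is
# an affine form c0 + c1*x1 + ... + cn*xn with random small coefficients.
def random_anf(depth, nvars, rng):
    if depth == 0:
        return [rng.randrange(-3, 4) for _ in range(nvars + 1)]
    return (random_anf(depth - 1, nvars, rng),
            random_anf(depth - 1, nvars, rng))

def evaluate(node, point, depth, multiply):
    if depth == 0:  # leaf: evaluate the affine form at the point
        return node[0] + sum(c * x for c, x in zip(node[1:], point))
    left = evaluate(node[0], point, depth - 1, not multiply)
    right = evaluate(node[1], point, depth - 1, not multiply)
    return left * right if multiply else left + right

# One black-box query f(x) to a random depth-3 formula over (x1, x2):
phi = random_anf(3, 2, random.Random(0))
value = evaluate(phi, (1, -1), 3, True)
```

The reconstruction problem is the reverse direction: recover a formula of comparable size from such query access alone.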
Hayashi, M & Tomamichel, M 2014, 'Correlation Detection and an Operational Interpretation of the Renyi Mutual Information', Journal of Mathematical Physics, vol. 57, no. 10, p. 102201.
A variety of new measures of quantum Renyi mutual information and quantum Renyi conditional entropy have recently been proposed, and some of their mathematical properties explored. Here, we show that the Renyi mutual information attains operational meaning in the context of composite hypothesis testing, when the null hypothesis is a fixed bipartite state and the alternate hypothesis consists of all product states that share one marginal with the null hypothesis. This hypothesis testing problem occurs naturally in channel coding, where it corresponds to testing whether a state is the output of a given quantum channel or of a 'useless' channel whose output is decoupled from the environment. Similarly, we establish an operational interpretation of Renyi conditional entropy by choosing an alternative hypothesis that consists of product states that are maximally mixed on one system. Specialized to classical probability distributions, our results also establish an operational interpretation of Renyi mutual information and Renyi conditional entropy.
Kaniewski, J, Tomamichel, M & Wehner, S 2014, 'Entropic uncertainty from effective anti-commutators', Physical Review A, vol. 90, no. 1, p. 012332.
We investigate entropic uncertainty relations for two or more binary measurements, for example spin-$\frac{1}{2}$ or polarisation measurements. We argue that the effective anti-commutators of these measurements, i.e. the anti-commutators evaluated on the state prior to measuring, are an expedient measure of measurement incompatibility. Based on the knowledge of pairwise effective anti-commutators we derive a class of entropic uncertainty relations in terms of conditional Rényi entropies. Our uncertainty relations are formulated in terms of effective measures of incompatibility, which can be certified device-independently. Consequently, we discuss potential applications of our findings to device-independent quantum cryptography. Moreover, to investigate the tightness of our analysis we consider the simplest (and very well-studied) scenario of two measurements on a qubit. We find that our results outperform the celebrated bound due to Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988)] and provide a new analytical expression for the minimum uncertainty which also outperforms some recent bounds based on majorisation.
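The Maassen-Uffink bound referenced above is easy to evaluate for the standard qubit example: for two projective measurements with bases {|a_i⟩} and {|b_j⟩}, it reads H(A) + H(B) ≥ -2 log2 c with c = max |⟨a_i|b_j⟩|. A small sketch for the Pauli Z and X eigenbases, where the bound comes out to one bit:

```python
import math

# c is the largest overlap between any pair of basis vectors of the two
# measurements; the Maassen-Uffink bound is then -2 * log2(c).
def overlap(u, v):
    return abs(sum(x.conjugate() * y for x, y in zip(u, v)))

s = 1 / math.sqrt(2)
z_basis = [(1 + 0j, 0j), (0j, 1 + 0j)]            # Pauli Z eigenbasis
x_basis = [(s + 0j, s + 0j), (s + 0j, -s + 0j)]   # Pauli X eigenbasis

c = max(overlap(a, b) for a in z_basis for b in x_basis)
mu_bound = -2 * math.log2(c)
print(round(mu_bound, 6))  # 1.0
```

Since every overlap here equals 1/sqrt(2) (the bases are mutually unbiased), this is the scenario where the Maassen-Uffink bound is largest for a qubit, and the one against which the paper's tighter bounds are compared.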
Kieferová, M & Wiebe, N 2014, 'On the power of coherently controlled quantum adiabatic evolutions', New Journal of Physics, vol. 16, no. 12, p. 123034.
Lai, C-Y, Hsieh, M-H & Lu, H-F 2014, 'On the MacWilliams Identity for Classical and Quantum Convolutional Codes', IEEE Transactions on Communications, vol. 64, no. 8, pp. 3148-3159.
The weight generating functions associated with convolutional codes (CCs) are based on state space realizations or the weight adjacency matrices (WAMs). The MacWilliams identity for CCs on the WAMs was first established by Gluesing-Luerssen and Schneider in the case of minimal encoders, and generalized by Forney. We consider this problem from the viewpoint of constraint codes and obtain a simple and direct proof of this MacWilliams identity in the case of minimal encoders. For our purpose, we choose a different representation for the exact weight generating function (EWGF) of a block code, by defining it as a linear combination of orthonormal vectors in Dirac bra-ket notation. This representation provides great flexibility so that general split weight generating functions and their MacWilliams identities can be easily obtained from the MacWilliams identity for EWGFs. As a result, we also obtain the MacWilliams identity for the input-parity weight adjacency matrices of a systematic convolutional code and its dual. Finally, paralleling the development of the classical case, we establish the MacWilliams identity for quantum convolutional codes.
Law, YZ, Thinh, LP, Bancal, J-D & Scarani, V 2014, 'Quantum randomness extraction for various levels of characterization of the devices', Journal of Physics A: Mathematical and Theoretical, vol. 47, no. 42, p. 424028.
Lee, J, Guan, H, Loo, Y-C & Blumenstein, M 2014, 'Development of a Long-Term Bridge Element Performance Model Using Elman Neural Networks', Journal of Infrastructure Systems, vol. 20, no. 3, p. 04014013.
© 2014 American Society of Civil Engineers. A reliable deterioration model is essential in bridge asset management. Most deterioration modeling requires a large amount of well-distributed condition rating data along with all bridge ages to calculate the probability of condition rating deterioration. This means that the model can only function properly when a full set of data is available. To overcome this shortcoming, an improved artificial intelligence (AI)-based model is presented in this study to effectively predict long-term deterioration of bridge elements. The model has four major components: (1) categorizing bridge element condition ratings; (2) using the neural network-based backward prediction model (BPM) to generate unavailable historical condition ratings for applicable bridge elements; (3) training by an Elman neural network (ENN) for identifying historical deterioration patterns; and (4) using the ENN to predict long-term performance. The model has been tested using bridge inspection records that demonstrate satisfactory results. This study primarily focuses on the establishment of a new methodology to address the research problems identified. A series of case studies hence needs to follow to ensure the method is appropriately developed and validated.
Li, Y, Yu, N & Ying, M 2014, 'Termination of nondeterministic quantum programs', Acta Informatica, vol. 51, no. 1, pp. 1-24.
View/Download from: Publisher's site
View description>>
We define a language-independent model of nondeterministic quantum programs in which a quantum program consists of a finite set of quantum processes. These processes are represented by quantum Markov chains over the common state space, which formalize the quantum mechanical behaviors of the machine. An execution of a nondeterministic quantum program is modeled by a sequence of actions of individual processes, and at each step of an execution a process is chosen nondeterministically to perform the next action. This execution model formalizes the users' behavior of calling the processes in the classical world. Applying the model to a quantum walk as an instance of physically realizable systems, we describe an execution step by step. A characterization of the reachable space and a characterization of the diverging states of a nondeterministic quantum program are presented. We establish a zero-one law for the termination probability of the states in the reachable space. A combination of these results leads to a necessary and sufficient condition for termination of nondeterministic quantum programs. Based on this condition, an algorithm is found for checking termination of nondeterministic quantum programs within a fixed finite-dimensional state space.
Lin, MS & Tomamichel, M 2014, 'Investigating Properties of a Family of Quantum Rényi Divergences', Quantum Information Processing, vol. 14, no. 4, pp. 1501-1512.
View/Download from: Publisher's site
View description>>
Audenaert and Datta recently introduced a two-parameter family of relative Rényi entropies, known as the α-z-relative Rényi entropies. The definition of the α-z-relative Rényi entropy unifies all previously proposed definitions of the quantum Rényi divergence of order α under a common framework. Here we prove that the α-z-relative Rényi entropies are a proper generalization of the quantum relative entropy by computing the limit of the α-z divergence as α approaches one and z is an arbitrary function of α. We also show that certain operationally relevant families of Rényi divergences are differentiable at α = 1. Finally, our analysis reveals that the derivative at α = 1 evaluates to half the relative entropy variance, a quantity that has attained operational significance in second-order quantum hypothesis testing.
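As an aside, the α-z divergence described in this abstract is concrete enough to sketch numerically. The Python snippet below is illustrative only (function names, the test states, and the use of natural logarithms are my assumptions, not from the paper): it implements D_{α,z}(ρ‖σ) = (α−1)⁻¹ log Tr[(σ^((1−α)/2z) ρ^(α/z) σ^((1−α)/2z))^z] for full-rank states and checks that it approaches the quantum relative entropy as α → 1.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def alpha_z_divergence(rho, sigma, alpha, z):
    """alpha-z relative Renyi entropy for full-rank density matrices (natural log)."""
    s = fractional_matrix_power(sigma, (1 - alpha) / (2 * z))
    r = fractional_matrix_power(rho, alpha / z)
    inner = s @ r @ s
    return np.log(np.trace(fractional_matrix_power(inner, z)).real) / (alpha - 1)

def relative_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = Tr[rho (log rho - log sigma)]."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

# two full-rank single-qubit states (illustrative choices)
rho = np.array([[0.7, 0.1], [0.1, 0.3]])
sigma = np.array([[0.6, 0.0], [0.0, 0.4]])

# as alpha -> 1 (here with z = 1), the divergence tends to the relative entropy
print(alpha_z_divergence(rho, sigma, alpha=1.001, z=1.0))
print(relative_entropy(rho, sigma))
```

The two printed values agree to roughly three decimal places, consistent with the limit the abstract establishes.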
Lunghi, T, Kaniewski, J, Bussieres, F, Houlmann, R, Tomamichel, M, Wehner, S & Zbinden, H 2014, 'Practical relativistic bit commitment', Phys. Rev. Lett., vol. 115, no. 3, p. 030502.
View/Download from: Publisher's site
View description>>
Bit commitment is a fundamental cryptographic primitive in which Alice wishes to commit a secret bit to Bob. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is however possible when Alice and Bob each split into several agents exchanging classical information at times and locations suitably chosen to satisfy specific relativistic constraints. In this Letter we first revisit a previously proposed scheme that realizes bit commitment using only classical communication. We prove that the protocol is secure against quantum adversaries for a duration limited by the light-speed communication time between the locations of the agents. We then propose a novel multi-round scheme based on finite-field arithmetic that extends the commitment time beyond this limit, and we prove its security against classical attacks. Finally, we present an implementation of these protocols using dedicated hardware and we show how it could be used to realize commitments of duration ranging up to 212 milliseconds by agents occupying antipodal locations on the Earth.
Metcalf, BJ, Spring, JB, Humphreys, PC, Thomas-Peter, N, Barbieri, M, Kolthammer, WS, Jin, X-M, Langford, NK, Kundys, D, Gates, JC, Smith, BJ, Smith, PGR & Walmsley, IA 2014, 'Quantum teleportation on a photonic chip', Nature Photonics, vol. 8, no. 10, pp. 770-774.
View/Download from: Publisher's site
Nemoto, K, Trupke, M, Devitt, SJ, Stephens, AM, Scharfenberger, B, Buczak, K, Nöbauer, T, Everitt, MS, Schmiedmayer, J & Munro, WJ 2014, 'Photonic Architecture for Scalable Quantum Information Processing in Diamond', Physical Review X, vol. 4, no. 3.
View/Download from: Publisher's site
Pal, S, Pal, U & Blumenstein, M 2014, 'Signature-Based Biometric Authentication', Studies in Computational Intelligence, vol. 555, pp. 285-314.
View/Download from: Publisher's site
View description>>
In a modern, civilized and advanced society, reliable authentication and authorization of individuals are becoming essential tasks in several aspects of daily activities, as well as in many important applications such as financial transactions, access control, travel and immigration, and healthcare. In some situations, when individuals are required to confirm their identity to other groups of people in order to make use of services or to gain access to physical places, it is always necessary to declare self-identity and to prove the claim. Traditional authentication methods, which are based on knowledge (password-based authentication) or the utility of a token (photo ID cards, magnetic strip cards and key-based authentication), are less reliable because of loss, forgetfulness and theft. These issues direct substantial attention towards biometrics as an alternative method for person authentication and identification. The word 'biometric' has been derived from the Greek words "Bio-metriks": "Bio", which means life, and "metriks", which means measures. A biometric is therefore the measurement and statistical analysis of unchanging biological characteristics. Biometrics evaluate a person's unique physical or behavioural traits to authenticate their identity. As biometric identifiers are unique to persons, they are more reliable in verifying identity than token-based and knowledge-based methods. In the last few years, substantial efforts have been devoted to the development of biometric-based authentication systems. Biometrics provide an expected and successful solution to the authentication problem, as they allow the construction of systems that can identify individuals by the analysis of their physiological or behavioural characteristics [1]. In fact, the field of biometrics is the science of using digital technologies and the intention of biometric systems is to perform the recognition or authentication of people based on some biol...
Paler, A, Devitt, SJ, Nemoto, K & Polian, I 2014, 'Mapping of Topological Quantum Circuits to Physical Hardware', Sci. Rep., vol. 4, p. 4657.
View/Download from: Publisher's site
View description>>
Topological quantum computation is a promising technique to achieve large-scale, error-corrected computation. Quantum hardware is used to create a large, 3-dimensional lattice of entangled qubits, while performing computation requires strategic measurement in accordance with a topological circuit specification. The specification is a geometric structure that defines encoded information and fault-tolerant operations. The compilation of a topological circuit is one important aspect of programming a quantum computer; another is the mapping of the topological circuit into the operations performed by the hardware. Each qubit has to be controlled, and measurement results are needed to propagate encoded quantum information from input to output. In this work, we introduce an algorithm for mapping a topological circuit to the operations needed by the physical hardware. We determine the control commands for each qubit in the computer and the relevant measurements that are needed to track information as it moves through the circuit.
Qiao, Y, Sun, X & Yu, N 2014, 'Characterization of multipartite entanglement in terms of local transformations', IEEE Journal on Selected Areas in Communications 38 (3), 568-574 2020, pp. 1-6.
View description>>
The degree of the generators of invariant polynomial rings has been a long-standing open problem since the very initial study of invariant theory in the 19th century. Motivated by its significant role in characterizing multipartite entanglement, we study the invariant polynomial rings of the local unitary group (the tensor product of unitary groups) and the local general linear group (the tensor product of general linear groups). For these two groups, we prove polynomial upper bounds on the degree of the generators of invariant polynomial rings. On the other hand, systematic methods are provided to construct all homogeneous polynomials that are invariant under these two groups for any fixed degree. Thus, our results can be regarded as a complete characterization of the invariant polynomial rings. As an interesting application, we show that multipartite entanglement is additive in the sense that two multipartite states are local unitary (LU) equivalent if and only if r copies of them are LU equivalent for some r.
Tomamichel, M, Martinez-Mateo, J, Pacher, C & Elkouss, D 2014, 'Fundamental Finite Key Limits for One-Way Information Reconciliation in Quantum Key Distribution', Quantum Information Processing, vol. 16, no. 11, art. 280.
View/Download from: Publisher's site
View description>>
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes into account finite key effects and numerically test it against codes for two probability distributions, which we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.
Tomamichel, M, Wilde, MM & Winter, A 2014, 'Strong converse rates for quantum communication', IEEE Transactions on Information Theory, vol. 63, no. 1, pp. 715-727.
View/Download from: Publisher's site
View description>>
We revisit a fundamental open problem in quantum information theory, namely whether it is possible to transmit quantum information at a rate exceeding the channel capacity if we allow for a non-vanishing probability of decoding error. Here we establish that the Rains information of any quantum channel is a strong converse rate for quantum communication: for any sequence of codes with rate exceeding the Rains information of the channel, we show that the fidelity vanishes exponentially fast as the number of channel uses increases. This remains true even if we consider codes that perform classical post-processing on the transmitted quantum data. As an application of this result, for generalized dephasing channels we show that the Rains information is also achievable, and thereby establish the strong converse property for quantum communication over such channels. Thus we conclusively settle the strong converse question for a class of quantum channels that have a non-trivial quantum capacity.
Wiebe, N, Granade, C, Ferrie, C & Cory, D 2014, 'Quantum Hamiltonian learning using imperfect quantum resources', Physical Review A, vol. 89, no. 4.
View/Download from: Publisher's site
Wiebe, N, Granade, C, Ferrie, C & Cory, DG 2014, 'Hamiltonian Learning and Certification Using Quantum Resources', Physical Review Letters, vol. 112, no. 19.
View/Download from: Publisher's site
Ying, M, Li, Y, Yu, N & Feng, Y 2014, 'Model-Checking Linear-Time Properties of Quantum Systems', ACM Transactions on Computational Logic, vol. 15, no. 3.
View/Download from: Publisher's site
View description>>
© 2014 ACM. We define a formal framework for reasoning about linear-time properties of quantum systems in which quantum automata are employed in the modeling of systems and certain (closed) subspaces of state Hilbert spaces are used as the atomic propositions about the behavior of systems. We provide an algorithm for verifying invariants of quantum automata. Then, an automata-based model-checking technique is generalized for the verification of safety properties recognizable by reversible automata and ω-properties recognizable by reversible Büchi automata.
Yu, N, Duan, R & Ying, M 2014, 'Distinguishability of Quantum States by Positive Operator-Valued Measures With Positive Partial Transpose', IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2069-2079.
View/Download from: Publisher's site
View description>>
We study the distinguishability of bipartite quantum states by positive operator-valued measures with positive partial transpose (PPT POVMs). The contributions of this paper include: 1) we give a negative answer to an open problem of showing a limitation of a previously known method for detecting nondistinguishability; 2) we show that a maximally entangled state and its orthogonal complement, no matter how many copies are supplied, cannot be distinguished by the PPT POVMs, even unambiguously. This result is much stronger than the previously known ones; and 3) we study the entanglement cost of distinguishing quantum states. It is proved that √(2/3)|00〉 + √(1/3)|11〉 is sufficient and necessary for distinguishing three Bell states by the PPT POVMs. An upper bound of the entanglement cost of distinguishing a d ⊗ d pure state and its orthogonal complement is obtained for separable operations. Based on this bound, we are able to construct two orthogonal quantum states which cannot be distinguished unambiguously by separable POVMs, but finite copies would make them perfectly distinguishable by local operations and classical communication. We further observe that a two-qubit maximally entangled state is always enough for distinguishing a d ⊗ d pure state and its orthogonal complement by the PPT POVMs, no matter the value of d. In sharp contrast, an entangled state with Schmidt number at least d is always needed for distinguishing such two states by separable POVMs. As an application, we show that the entanglement cost of distinguishing a d ⊗ d maximally entangled state and its orthogonal complement must be a maximally entangled state for d = 2, which implies that teleportation is optimal, and in general, it could be chosen as O(log d/d). © 1963-2012 IEEE.
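As a small illustration of the partially entangled resource state this abstract singles out, the following Python sketch (variable names are mine, not from the paper) builds √(2/3)|00〉 + √(1/3)|11〉 and computes its Schmidt coefficients and entanglement entropy, confirming it carries less than the full one ebit of a Bell pair:

```python
import numpy as np

# the resource state sqrt(2/3)|00> + sqrt(1/3)|11> as a length-4 amplitude vector
psi = np.zeros(4)
psi[0b00] = np.sqrt(2 / 3)
psi[0b11] = np.sqrt(1 / 3)

# reshape amplitudes into a 2x2 matrix; its singular values are the
# Schmidt coefficients of the bipartite state
m = psi.reshape(2, 2)
schmidt = np.linalg.svd(m, compute_uv=False)

# entanglement entropy in bits: H(2/3) ~ 0.918, strictly below 1 ebit
p = schmidt ** 2
entropy = -np.sum(p * np.log2(p))
print(schmidt, entropy)
```

The Schmidt spectrum (2/3, 1/3) is what distinguishes this state from the maximally entangled (1/2, 1/2) spectrum of a Bell state.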
Yu, N, Guo, C & Duan, R 2014, 'Obtaining a W State from a Greenberger-Horne-Zeilinger State via Stochastic Local Operations and Classical Communication with a Rate Approaching Unity', Physical Review Letters, vol. 112, no. 16.
View/Download from: Publisher's site
View description>>
We introduce a notion of the entanglement transformation rate to characterize the asymptotic comparability of two multipartite pure entangled states under stochastic local operations and classical communication (SLOCC). For the two well-known SLOCC-inequivalent three-qubit states |GHZ〉 = (1/√2)(|000〉 + |111〉) and |W〉 = (1/√3)(|100〉 + |010〉 + |001〉), we show that the entanglement transformation rate from |GHZ〉 to |W〉 is exactly 1. That means that we can obtain one copy of the W state from one copy of the Greenberger-Horne-Zeilinger (GHZ) state by SLOCC, asymptotically. We then apply similar techniques to obtain a lower bound on the entanglement transformation rates from an N-partite GHZ state to a class of Dicke states, and prove the tightness of this bound for some special cases which naturally generalize the |W〉 state. A new lower bound on the tensor rank of the matrix permanent is also obtained by evaluating the tensor rank of Dicke states. © 2014 American Physical Society.
Alon, N, Lee, T & Shraibman, A 2014, 'The cover number of a matrix and its algorithmic applications', Leibniz International Proceedings in Informatics, LIPIcs, pp. 34-47.
View/Download from: Publisher's site
View description>>
Given a matrix A, we study how many ε-cubes are required to cover the convex hull of the columns of A. We show bounds on this cover number in terms of VC dimension and the γ₂ norm and give algorithms for enumerating elements of a cover. This leads to algorithms for computing approximate Nash equilibria that unify and extend several previous results in the literature. Moreover, our approximation algorithms can be applied quite generally to a family of quadratic optimization problems that also includes finding the densest k-by-k combinatorial rectangle of a matrix. In particular, for this problem we give the first quasi-polynomial time additive approximation algorithm that works for any matrix A ∈ [0, 1]^(m×n).
Binti Adnan, NA, Yamashita, S, Devitt, SJ & Nemoto, K 2014, '2D Qubit Layout Optimization for Topological Quantum Computation', Reversible Computation (RC 2014), 6th International Conference on Reversible Computation (RC), Springer International Publishing, Kyoto, Japan, pp. 176-188.
View/Download from: Publisher's site
Bowie, D, Faichney, J & Blumenstein, M 2014, 'Multi-Directional Weighted Interpolation for Wi-Fi Localisation', Advances in Intelligent Systems and Computing, Robot Intelligence Technology and Applications 2: the 2nd International Conference on Robot Intelligence Technology and Applications, Springer International Publishing, pp. 105-112.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2014. The rise in popularity of unmanned autonomous vehicles (UAV) has created a need for accurate positioning systems. Due to the indoor limitations of the Global Positioning System (GPS), research has focused on other technologies which could be used in this landscape with Wi-Fi localisation emerging as a popular option. When implementing such a system, it is necessary to find an equilibrium between the desired level of final precision, and the time and money spent training the system. We propose Multi-Directional Weighted Interpolation (MDWI), a probabilistic-based weighting mechanism to predict unseen locations. Our results show that MDWI uses half the number of training points whilst increasing accuracy by up to 24%.
Chanda, S, Bu, G, Guan, H, Jo, J, Pal, U, Loo, Y-C & Blumenstein, M 2014, 'Automatic Bridge Crack Detection – A Texture Analysis-Based Approach', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), IAPR TC Workshop on Artificial Neural Networks in Pattern Recognition, Springer Berlin Heidelberg, Montreal, QC, Canada, pp. 193-203.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2014. To date, identifying cracks in bridges and determining bridge conditions primarily involve manual labour. Bridge inspection by human experts has some drawbacks, such as the inability to physically examine all parts of the bridge and the sole dependency on the expert knowledge of the bridge inspector. Moreover, it requires proper training of the human resources and overall it is not cost-effective. This article proposes an automatic bridge inspection approach exploiting wavelet-based image features along with Support Vector Machines for automatic detection of cracks in bridge images. A two-stage approach is followed, where in the first stage a decision is made as to whether an image should undergo a pre-processing step (depending on image characteristics), and later in the second stage, wavelet features are extracted from the image using a sliding window-based technique. We obtained an overall accuracy of 92.11% while conducting experiments even on noisy and complex bridge images.
Das, A, Pal, U, Ferrer Ballester, MA & Blumenstein, M 2014, 'A new efficient and adaptive sclera recognition system', 2014 IEEE Symposium on Computational Intelligence in Biometrics and Identity Management (CIBIM), IEEE, Orlando, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this paper an efficient and adaptive biometric sclera recognition and verification system is proposed. Sclera segmentation was performed by Fuzzy C-means clustering. Since the sclera vessels are not prominent, image enhancement was required to make them clearly visible. Adaptive histogram equalization, followed by a bank of Discrete Meyer wavelets, was used to enhance the sclera vessel patterns. Feature extraction was performed by the Dense Local Directional Pattern (D-LDP). D-LDP patch descriptors of each training image are used to form a bag of features; Spatial Pyramid Matching was then used to produce the final training model. Support Vector Machines (SVMs) are used for classification. The UBIRIS version 1 dataset was used for experimentation with the proposed system. To investigate the adaptivity of sclera patterns with respect to changes in environmental conditions, population, data acquisition technique and time span, two different sessions of this dataset were utilized. The images in the two sessions differ in acquisition technique, representation and number of individuals, and were captured two weeks apart. An encouraging Equal Error Rate (EER) of 3.95% was achieved in this investigation.
Das, A, Pal, U, Ferrer Ballester, MA & Blumenstein, M 2014, 'A new wrist vein biometric system', 2014 IEEE Symposium on Computational Intelligence in Biometrics and Identity Management (CIBIM), IEEE, Orlando, USA, pp. 68-75.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this work a wrist vein pattern recognition and verification system is proposed. The wrist vein images from the PUT database were used, which were acquired in the visible spectrum. The vein images only highlight the vein pattern area, so segmentation was not required. Since the wrist veins are not prominent, image enhancement was performed. Adaptive Histogram Equalization and the Discrete Meyer Wavelet were used to enhance the vessel patterns. For feature extraction, the vein pattern is characterized with the Dense Local Binary Pattern (D-LBP). D-LBP patch descriptors of each training image are used to form a bag of features, which was used to produce the training model. Support Vector Machines (SVMs) were used for classification. An encouraging Equal Error Rate (EER) of 0.79% was achieved in our experiments.
Das, A, Pal, U, Ferrer Ballester, MA & Blumenstein, M 2014, 'Fuzzy logic based sclera recognition', 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, China, pp. 561-568.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this paper a sclera recognition and validation system is proposed. Sclera segmentation was performed by Fuzzy logic-based clustering. Since the sclera vessels are not prominent, image enhancement was required. A Fuzzy logic-based Brightness Preserving Dynamic Fuzzy Histogram Equalization and the discrete Meyer wavelet were used to enhance the vessel patterns. For feature extraction, the Dense Local Binary Pattern (D-LBP) was used. D-LBP patch descriptors of each training image are used to form a bag of features, which is used to produce the training model. Support Vector Machines (SVMs) are used for classification. The UBIRIS version 1 dataset is used for experimentation. An encouraging Equal Error Rate (EER) of 4.31% was achieved in our experiments.
Das, A, Pal, U, Ferrer Ballester, MA & Blumenstein, M 2014, 'Multi-angle based lively sclera biometrics at a distance', 2014 IEEE Symposium on Computational Intelligence in Biometrics and Identity Management (CIBIM), IEEE, USA, pp. 22-29.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. This work proposes a liveliness-based sclera biometric validation and recognition technique at a distance. The images in this work were acquired by a digital camera in the visible spectrum at a distance of about 1 meter from the individual. During registration as well as validation, each individual is asked to look straight ahead and move their eyeball up, left and right while keeping their face straight, in order to incorporate liveliness into the data. At first the image is divided vertically into two halves and the eyes are detected in each half of the captured face image by locating the eyeball with a Circular Hough Transform. The eye image is then cropped out automatically using the radius of the iris. Next, C-means-based segmentation is used for sclera segmentation, followed by vessel enhancement using adaptive histogram equalization and Haar filtering. Feature extraction was performed by the patch-based Dense Local Directional Pattern (D-LDP). Each training image is used to form a bag of features, which is used to produce the training model. The images of the different poses are combined at the feature level and the image level to obtain higher accuracy and to incorporate liveliness; the fusion that produces the best result is considered. Support Vector Machines (SVMs) are used for classification. Images from 82 individuals (both left and right eyes, i.e. 164 different eyes) are used and an appreciable Equal Error Rate of 0.52% is achieved in this work.
Decker, T, Ivanyos, G, Kulkarni, R, Qiao, Y & Santha, M 2014, 'An Efficient Quantum Algorithm for Finding Hidden Parabolic Subgroups in the General Linear Group', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Symposium on Mathematical Foundations of Computer Science, Springer Berlin Heidelberg, Budapest, Hungary, pp. 226-238.
View/Download from: Publisher's site
View description>>
In the theory of algebraic groups, parabolic subgroups form a crucial building block in structural studies. In the case of general linear groups over a finite field, given a sequence of positive integers n_1, ⋯, n_k, where n = n_1 + ⋯ + n_k, a parabolic subgroup of parameter (n_1, ⋯, n_k) in GL is a conjugate of the subgroup consisting of block lower triangular matrices where the i-th block is of size n_i. Our main result is a quantum algorithm of time polynomial in log q and n for solving the hidden subgroup problem in GL, when the hidden subgroup is promised to be a parabolic subgroup. Our algorithm works with no prior knowledge of the parameter of the hidden parabolic subgroup. Prior to this work, such an efficient quantum algorithm was only known for minimal parabolic subgroups (Borel subgroups), for the case when q is not much smaller than n (G. Ivanyos: Quantum Inf. Comput., Vol. 12, pp. 661-669). © 2014 Springer-Verlag Berlin Heidelberg.
Devitt, SJ 2014, 'Classical Control of Large-Scale Quantum Computers', Reversible Computation (RC 2014), Lecture Notes in Computer Science, vol. 8507, 6th International Conference on Reversible Computation (RC), Springer International Publishing, Kyoto, Japan, pp. 26-39.
View description>>
The accelerated development of quantum technology has reached a pivotal point. Early in 2014, several results were published demonstrating that several experimental technologies are now accurate enough to satisfy the requirements of fault-tolerant, error corrected quantum computation. While there are many technological and experimental issues that still need to be solved, the ability of experimental systems to now have error rates low enough to satisfy the fault-tolerant threshold for several error correction models is a tremendous milestone. Consequently, it is now a good time for the computer science and classical engineering community to examine the classical problems associated with compiling quantum algorithms and implementing them on future quantum hardware. In this paper, we will review the basic operational rules of a topological quantum computing architecture and outline one of the most important classical problems that needs to be solved: the decoding of error correction data for a large-scale quantum computer. We will endeavour to present these problems independently from the underlying physics, as much of this work can be effectively solved by non-experts in quantum information or quantum mechanics.
Devitt, SJ 2014, 'The quantum memory stick', Conference on Lasers and Electro-Optics Europe - Technical Digest.
View description>>
We introduce a design for a quantum memory stick that uses active quantum error correction to coherently store a qubit of encoded information for months or years. This device is based on the model of topological error correction and can increase the storage time of a qubit of information by 10-11 orders of magnitude with a physical qubit overhead of several hundred.
Devitt, SJ 2014, 'The Quantum Memory Stick', Optics InfoBase Conference Papers.
View description>>
We introduce a design for a quantum memory stick that uses active quantum error correction to coherently store a qubit of encoded information for months or years. This device is based on the model of topological error correction and can increase the storage time of a qubit of information by 10-11 orders of magnitude with a physical qubit overhead of several hundred. © 2014 OSA.
Fazel, SAA, Blumenstein, M, Mirfenderesk, H & Tomlinson, R 2014, 'Estuarine flood modelling using artificial neural networks', 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 631-637.
View/Download from: Publisher's site
View description>>
Prediction of water levels at estuaries poses a significant challenge for the modelling of floods due to the influence of tidal effects. In this study, a two-stage forecasting system is proposed. In the first stage, the tidal portion of the available records is used to develop a tidal prediction system. The predictions of the first stage are then used for flood modelling in the second stage. Experimental results suggest that the proposed flood modelling approach is advantageous for forecasting flood levels with more than 1 hour lead times.
Grochow, JA & Qiao, Y 2014, 'Algorithms for Group Isomorphism via Group Extensions and Cohomology', 2014 IEEE 29th Conference on Computational Complexity (CCC), IEEE, Canada, pp. 110-119.
View/Download from: Publisher's site
View description>>
The isomorphism problem for groups given by their multiplication tables (GPI) has long been known to be solvable in n^(O(log n)) time, but only recently has there been significant progress towards polynomial time. For example, Babai et al. (ICALP 2012) gave a polynomial-time algorithm for groups with no abelian normal subgroups. Thus, at present it is crucial to understand groups with abelian normal subgroups to develop n^(o(log n))-time algorithms. Towards this goal we advocate a strategy via the extension theory of groups, which describes how a normal subgroup N is related to G/N via G. This strategy 'splits' GPI into two subproblems: one regarding group actions on other groups, and one regarding group cohomology. The solution of these problems is essentially necessary and sufficient to solve GPI. Most previous works naturally align with this strategy, and it thus helps explain in a unified way the recent polynomial-time algorithms for other group classes. In particular, most prior results in the multiplication table model focus on the group action aspect, despite the general necessity of cohomology, for example for p-groups of class 2, believed to be the hardest case of GPI. To make progress on the group cohomology aspect of GPI, we consider central-radical groups, proposed in Babai et al. (SODA 2011): the class of groups such that G mod its center has no abelian normal subgroups. Recall that Babai et al. (ICALP 2012) consider the class of groups G such that G itself has no abelian normal subgroups. Following the above strategy, we solve GPI in n^(O(log log n)) time for central-radical groups, and in polynomial time for several prominent subclasses of central-radical groups. We also solve GPI in n^(O(log log n)) time for groups whose solvable normal subgroups are elementary abelian but not necessarily central. As far as we are aware, this is the first time that a nontrivial algorithm with worst-case guarantees has tackled both aspects of GPI: actions and coho...
Ivanyos, G, Karpinski, M, Qiao, Y & Santha, M 2014, 'Generalized Wong sequences and their applications to Edmonds' problems', Leibniz International Proceedings in Informatics, LIPIcs, pp. 397-408.
View/Download from: Publisher's site
View description>>
We design two deterministic polynomial time algorithms for variants of a problem introduced by Edmonds in 1967: determine the rank of a matrix M whose entries are homogeneous linear polynomials over the integers. Given a linear subspace B of the n×n matrices over some field F, we consider the following problems: symbolic matrix rank (SMR) is the problem of determining the maximum rank among matrices in B, while symbolic determinant identity testing (SDIT) is the question of deciding whether there exists a nonsingular matrix in B. The constructive versions of these problems ask to find a matrix of maximum rank, respectively a nonsingular matrix, if one exists. Our first algorithm solves the constructive SMR when B is spanned by unknown rank one matrices, answering an open question of Gurvits. Our second algorithm solves the constructive SDIT when B is spanned by triangularizable matrices, but the triangularization is not given explicitly. Both algorithms work over finite fields of size at least n + 1 and over the rational numbers, and the first algorithm actually solves (the non-constructive) SMR independent of the field size. Our main tool to obtain these results is to generalize Wong sequences, a classical method to deal with pairs of matrices, to the case of pairs of matrix spaces.
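For context, the standard randomized baseline that these deterministic algorithms improve upon evaluates the symbolic matrix at random points and relies on the Schwartz-Zippel lemma. A minimal sketch in Python (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def randomized_symbolic_rank(basis, trials=10, coeff_range=10**6, rng=None):
    """Estimate the maximum rank over the matrix space spanned by `basis`
    (Edmonds' SMR problem) by evaluating at random points.
    By the Schwartz-Zippel lemma, a random substitution attains the
    maximum rank with high probability when coeff_range is large."""
    rng = np.random.default_rng(rng)
    best = 0
    for _ in range(trials):
        # Random point in the matrix space: a random linear combination.
        coeffs = rng.integers(1, coeff_range, size=len(basis))
        m = sum(int(c) * b for c, b in zip(coeffs, basis))
        best = max(best, int(np.linalg.matrix_rank(m)))
    return best
```

This is a Monte Carlo procedure; the point of the paper is to achieve the same guarantees deterministically for the stated subclasses of matrix spaces.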
Ivanyos, G, Kulkarni, R, Qiao, Y, Santha, M & Sundaram, A 2014, 'On the Complexity of Trial and Error for Constraint Satisfaction Problems', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Colloquium on Automata Languages and Programming, Springer Berlin Heidelberg, Copenhagen, Denmark, pp. 663-675.
View/Download from: Publisher's site
View description>>
In a recent work of Bei, Chen and Zhang (STOC 2013), a trial and error model of computing was introduced, and applied to some constraint satisfaction problems. In this model the input is hidden by an oracle which, for a candidate assignment, reveals some information about a violated constraint if the assignment is not satisfying. In this paper we initiate a systematic study of constraint satisfaction problems in the trial and error model. To achieve this, we first adopt a formal framework for CSPs, and based on this framework we define several types of revealing oracles. Our main contribution is to develop a transfer theorem for each type of the revealing oracle, under a broad class of parameters. To any hidden CSP with a specific type of revealing oracle, the transfer theorem associates another, potentially harder CSP in the normal setting, such that their complexities are polynomial time equivalent. This in principle transfers the study of a large class of hidden CSPs, possibly with a promise on the instances, to the study of CSPs in the normal setting. We then apply the transfer theorems to get polynomial-time algorithms or hardness results for hidden CSPs, including satisfaction problems, monotone graph properties, isomorphism problems, and the exact version of the Unique Games problem. © 2014 Springer-Verlag.
Kunwar, R, Pal, U & Blumenstein, M 2014, 'Semi-supervised Online Bayesian Network Learner for Handwritten Characters Recognition', 2014 22nd International Conference on Pattern Recognition, 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Stockholm, Sweden, pp. 3104-3109.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. This work addresses the problem of creating a Bayesian Network based online semi-supervised handwritten character recogniser that learns continuously over time, making it adaptable. The proposed method makes learning possible from a continuous inflow of a potentially unlimited amount of data without requiring storage. It highlights the use of unlabelled data for boosting accuracy, especially when labelled data is scarce and expensive, unlike unlabelled data. An algorithm is introduced to perform semi-supervised learning based on the combination of a novel online ensemble of randomized Bayesian network classifiers and a novel online variant of the Expectation Maximization (EM) algorithm. A novel varying weighting factor is used to modulate the contribution of unlabelled data. The proposed method was evaluated using online handwritten Tamil characters from the IWFHR 2006 competition dataset. The accuracy obtained was comparable to state-of-the-art batch learning methods such as HMMs and SVMs.
Lai, C-Y & Hsieh, M-H 2014, 'The MacWilliams identity for quantum convolutional codes', 2014 IEEE International Symposium on Information Theory, 2014 IEEE International Symposium on Information Theory (ISIT), IEEE, Honolulu, HI, pp. 911-915.
View/Download from: Publisher's site
View description>>
In this paper, we propose a definition of the dual code of a quantum convolutional code, with or without entanglement assistance. We then derive a MacWilliams identity for quantum convolutional codes. Along the way, we obtain a direct proof of the MacWilliams identity, first found by Gluesing-Luerssen and Schneider, in the setting of classical convolutional codes. © 2014 IEEE.
Lai, C-Y, Hsieh, M-H & Lu, H-F 2014, 'A complete MacWilliams theorem for convolutional codes', 2014 IEEE Information Theory Workshop (ITW 2014), 2014 IEEE Information Theory Workshop (ITW), IEEE, Hobart, TAS, pp. 157-161.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this paper, we prove a MacWilliams identity for the weight adjacency matrices based on the constraint codes of a convolutional code (CC) and its dual. Our result improves upon a recent result by Gluesing-Luerssen and Schneider, where the requirement of a minimal encoder is assumed. We can also establish the MacWilliams identity for the input-parity weight adjacency matrices of a systematic CC and its dual. Most importantly, we show that a type of Hamming weight enumeration functions of all codewords of a CC can be derived from the weight adjacency matrix, which thus provides a connection between these two very different notions of weight enumeration functions in the convolutional code literature. Finally, the relations between various enumeration functions of a CC and its dual are summarized in a diagram. This explains why no MacWilliams identity exists for the free-distance enumerators.
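For reference, the classical MacWilliams identity for a binary linear block code C ⊆ F_2^n, which the convolutional-code identities above generalize, relates the weight enumerator of C to that of its dual C⊥:

```latex
W_C(x, y) \;=\; \sum_{c \in C} x^{\,n - \operatorname{wt}(c)}\, y^{\,\operatorname{wt}(c)},
\qquad
W_{C^\perp}(x, y) \;=\; \frac{1}{|C|}\, W_C(x + y,\; x - y).
```

In the convolutional setting the single enumerator polynomial is replaced by the weight adjacency matrix of the constraint code, which is why the duality statement becomes a matrix identity rather than a polynomial one.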
Nemoto, K, Devitt, SJ, Trupke, M, Stephens, AM, Everitt, MS, Buczak, K, Noebauer, T, Schmiedmayer, J & Munro, WJ 2014, 'Memory-based quantum repeaters with NV centers', Optics InfoBase Conference Papers.
View description>>
We present a simple design of a quantum repeater built from single NV⁻ centers embedded in an optical cavity. We compare different quantum networks, from a simple linear chain to a fully fault-tolerant quantum internet. © 2014 OSA.
Nemoto, K, Devitt, SJ, Trupke, M, Stephens, AM, Everitt, MS, Buczak, K, Noebauer, T, Schmiedmayer, J & Munro, WJ 2014, 'Memory-based quantum repeaters with NV centers', Conference on Lasers and Electro-Optics Europe - Technical Digest.
View description>>
We present a simple design of a quantum repeater built from single NV⁻ centers embedded in an optical cavity. We compare different quantum networks, from a simple linear chain to a fully fault-tolerant quantum internet.
Nemoto, K, Trupke, M, Devitt, SJ, Stephens, AM, Scharfenberger, B, Buczak, K, Nöbauer, T, Schmiedmayer, J & Munro, WJ 2014, 'Quantum repeater architecture and NV-based node technology', SPIE Proceedings, SPIE Optical Engineering + Applications, SPIE, San Diego, CA.
View/Download from: Publisher's site
Neville, A, Devitt, SJ, Shadbolt, PJ, Thackray, L, Peruzzo, A & O'Brien, JL 2014, 'Demonstration of a Characterisation Protocol for Two-qubit Hamiltonians on a Photonic Quantum Simulator', Optics InfoBase Conference Papers.
View description>>
We demonstrate an entanglement mapping based characterisation protocol for coupled-qubit Hamiltonians. This is achieved by generating and measuring time-evolved states relevant to an NV-diamond system, using a reconfigurable integrated optical device. © 2014 Optical Society of America.
Neville, A, Devitt, SJ, Shadbolt, PJ, Thackray, L, Peruzzo, A & O'Brien, JL 2014, 'Demonstration of a characterisation protocol for two-qubit hamiltonians on a photonic quantum simulator', Conference on Lasers and Electro-Optics Europe - Technical Digest.
View description>>
We demonstrate an entanglement mapping based characterisation protocol for coupled-qubit Hamiltonians. This is achieved by generating and measuring time-evolved states relevant to an NV-diamond system, using a reconfigurable integrated optical device.
Paler, A, Devitt, S, Nemoto, K & Polian, I 2014, 'Software-based Pauli Tracking in Fault-tolerant Quantum Circuits', 2014 Design, Automation and Test in Europe Conference and Exhibition (DATE), Design, Automation and Test in Europe Conference and Exhibition (DATE), IEEE, Dresden, Germany.
Paler, A, Devitt, SJ, Nemoto, K & Polian, I 2014, 'Cross-level Validation of Topological Quantum Circuits', Reversible Computation, RC 2014, 6th International Conference on Reversible Computation (RC), Springer International Publishing, Kyoto, Japan, pp. 189-200.
View description>>
Quantum computing promises a new approach to solving difficult computational problems, and the quest of building a quantum computer has started. While the first attempts at construction were successful, scalability has never been achieved, due to the inherently fragile nature of the quantum bits (qubits). Of the multitude of approaches to achieve scalability, topological quantum computing (TQC) is the most promising one, being based on a flexible approach to error correction and making use of the straightforward measurement-based computing technique. TQC circuits are defined within a large, uniform, 3-dimensional lattice of physical qubits produced by the hardware, and the physical volume of this lattice directly relates to the resources required for computation. Circuit optimization may result in non-intuitive mismatches between circuit specification and implementation. In this paper we introduce the first method for cross-level validation of TQC circuits. The specification of the circuit is expressed based on the stabilizer formalism, and the stabilizer table is checked by mapping the topology on the physical qubit level, followed by quantum circuit simulation. Simulation results show that cross-level validation of error-corrected circuits is feasible.
Paler, A, Devitt, SJ, Nemoto, K & Polian, I 2014, 'Software Pauli Tracking for Quantum Computation', Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1-4.
View/Download from: Publisher's site
View description>>
The realisation of large-scale quantum computing is no longer simply a hardware question. The rapid development of quantum technology has resulted in dozens of control and programming problems that should be directed towards the classical computer science and engineering community. One such problem is known as Pauli tracking. Methods for implementing quantum algorithms that are compatible with crucial error correction technology utilise extensive quantum teleportation protocols. These protocols are intrinsically probabilistic and result in correction operators that occur as byproducts of teleportation. These byproduct operators do not need to be corrected in the quantum hardware itself. Instead, byproduct operators are tracked through the circuit and output results reinterpreted. This tracking is routinely ignored in quantum information as it is assumed that tracking algorithms will eventually be developed. In this work we help fill this gap and present an algorithm for tracking byproduct operators through a quantum computation. We formulate this work based on quantum gate sets that are compatible with all major forms of quantum error correction and demonstrate the completeness of the algorithm.
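The core bookkeeping can be illustrated with the standard conjugation rules for Pauli operators through Clifford gates. This is a minimal sketch of the general idea, not the paper's algorithm; class and method names are illustrative:

```python
class PauliTracker:
    """Track teleportation byproduct (Pauli frame) operators in software,
    propagating them through Clifford gates instead of applying
    corrections in hardware."""

    def __init__(self, n):
        self.x = [0] * n  # pending X correction per qubit (1 = pending)
        self.z = [0] * n  # pending Z correction per qubit

    def record(self, qubit, pauli):
        # A teleportation outcome produced a byproduct: 'X', 'Z', or 'XZ'.
        if 'X' in pauli:
            self.x[qubit] ^= 1
        if 'Z' in pauli:
            self.z[qubit] ^= 1

    def h(self, q):
        # Hadamard: H X H = Z and H Z H = X, so the flags swap.
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def s(self, q):
        # Phase gate: S X S† = XZ (up to a global phase).
        self.z[q] ^= self.x[q]

    def cnot(self, c, t):
        # X on the control propagates to the target;
        # Z on the target propagates to the control.
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def flip_outcome(self, q):
        # A Z-basis measurement outcome is flipped iff an X byproduct
        # is pending on the measured qubit.
        return bool(self.x[q])
```

Because every update is a few bit operations, tracking scales easily to large circuits, which is why byproduct corrections never need to touch the quantum hardware.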
Sharma, N, Pal, U & Blumenstein, M 2014, 'A study on word-level multi-script identification from video frames', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 1827-1833.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The presence of multiple scripts in multi-lingual document images makes Optical Character Recognition (OCR) of such documents a challenging task. Due to the unavailability of a single OCR system which can handle multiple scripts, script identification becomes an essential step for choosing the appropriate OCR. Although various techniques are available for script identification from handwritten and printed documents with simple backgrounds, script identification from video frames has seldom been explored. Video frames are coloured and suffer from low resolution, blur, complex backgrounds and noise, to mention a few, which makes the script identification process a challenging task. This paper presents a study of various combinations of features and classifiers to explore whether traditional script identification techniques can be applied to video frames. A texture-based feature, namely the Local Binary Pattern (LBP), and gradient-based features, namely the Histogram of Oriented Gradients (HoG) and Gradient Local Auto-Correlation (GLAC), were used in the study. Combinations of the features with SVMs and ANNs were used for classification. Three popular scripts, namely English, Bengali and Hindi, were considered in the present study. Due to the inherent problems with video, a super-resolution technique was applied as a pre-processing step. Experiments show that the GLAC feature performed better than the other features, and an accuracy of 94.25% was achieved when testing on 1271 words from three different scripts. The study also reveals that gradient features are more suitable for script identification than texture features when using traditional script identification techniques on video frames.
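The texture descriptor used in this study, the Local Binary Pattern, is straightforward to sketch: each pixel is encoded by thresholding its 8 neighbours against the centre value. A minimal NumPy version of the standard radius-1 LBP histogram (illustrative only; the paper's exact variant and parameters may differ):

```python
import numpy as np

def lbp_histogram(gray):
    """Compute a basic 8-neighbour Local Binary Pattern histogram for a
    2-D grayscale image, as used for texture-based script identification."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]  # centre pixels (borders skipped)
    # 8 neighbours of each centre pixel, ordered clockwise from top-left,
    # expressed as top-left corners of shifted windows.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (n >= c).astype(np.uint8) << bit  # one bit per neighbour
    # 256-bin histogram: the word-level texture descriptor fed to SVM/ANN.
    return np.bincount(codes.ravel(), minlength=256)
```

The resulting 256-dimensional histogram is what a classifier such as an SVM would consume per word image.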
Shivakumara, P, Sharma, N, Pal, U, Blumenstein, M & Tan, CL 2014, 'Gradient-Angular-Features for Word-wise Video Script Identification', 2014 22nd International Conference on Pattern Recognition, 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Sweden, pp. 3098-3103.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Script identification at the word level is challenging because of the complex backgrounds and low resolution of video. The presence of graphics and scene text in video makes the problem more challenging. This paper presents new Gradient-Angular-Features (GAF) for video script identification, covering Arabic, Chinese, English, Japanese, Korean and Tamil; this enables an appropriate OCR to be selected when a frame contains words in multiple scripts. We employ gradient directional features for segmenting words from video text lines. For each segmented word, we study the gradient information in effective ways to identify text candidates. The skeleton of the text candidates is analyzed to identify Potential Text Candidates (PTC) by filtering out unwanted text candidates. We propose novel GAF for the PTC to study the structure of the components in the form of cursiveness and softness. The histogram operation on the GAF is performed in different ways to obtain discriminative features. The method is evaluated on 760 words of six scripts having low contrast, complex backgrounds, different font sizes, etc., in terms of the classification rate, and is compared with an existing method to show its effectiveness. We achieve an 88.2% average classification rate.
Suwanwiwat, H, Nguyen, V, Blumenstein, M & Pal, U 2014, 'Off-Line Handwritten Bilingual Name Recognition for Student Identification in an Automated Assessment System', 2014 14th International Conference on Frontiers in Handwriting Recognition, 2014 14th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Greece, pp. 271-276.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The Student name Identification System (SIS) proposed here was investigated for English and Thai languages combined. The proposed system recognises each name by using an approach for whole word recognition. In the proposed system, the Gaussian Grid Feature (GGF), and Modified Direction Feature (MDF), together with a proposed hybrid feature extraction technique called Water Reservoir, Loop and Gaussian Grid Feature (WRLGGF) were investigated on full word contour images of each name sample. Artificial neural networks and support vector machines were used as classifiers. An encouraging recognition accuracy of 99.25% was achieved employing the proposed technique compared to 98.59% for GGF, and 96.63% using MDF.
Suwanwiwat, H, Nguyen, V, Blumenstein, M & Pal, U 2014, 'Off-line handwritten Thai name recognition for student identification in an automated assessment system', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 2347-2353.
View/Download from: Publisher's site
Tan, VYF & Tomamichel, M 2014, 'The third-order term in the normal approximation for the AWGN channel', 2014 IEEE International Symposium on Information Theory, 2014 IEEE International Symposium on Information Theory (ISIT), IEEE, pp. 2077-2081.
View/Download from: Publisher's site
View description>>
This paper shows that, under the average error probability formalism, the third-order term in the normal approximation for the additive white Gaussian noise channel with a maximal or equal power constraint is at least (1/2) log n + O(1). This improves on the lower bound by Polyanskiy-Poor-Verdú (2010) and matches the upper bound proved by the same authors. © 2014 IEEE.
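Spelled out, the normal approximation in question expands the maximal code size M*(n, ε) at block length n and average error probability ε in terms of the capacity C and dispersion V of the AWGN channel:

```latex
\log M^*(n, \varepsilon)
  \;=\; nC \;+\; \sqrt{nV}\,\Phi^{-1}(\varepsilon)
  \;+\; \tfrac{1}{2}\log n \;+\; O(1),
```

where Φ⁻¹ is the inverse of the standard normal cdf; the contribution of this paper is establishing the matching (1/2) log n lower bound on the third-order term.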
Tomamichel, M & Tan, VYF 2014, 'Second order refinements for the classical capacity of quantum channels with separable input states', 2014 IEEE International Symposium on Information Theory, 2014 IEEE International Symposium on Information Theory (ISIT), IEEE, Honolulu, USA, pp. 141-145.
View/Download from: Publisher's site
View description>>
We study the non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of information that can be transmitted when the channel is used a finite number of times and a finite average decoding error is permissible. We show that, if we restrict the encoder to use ensembles of separable states, the non-asymptotic fundamental limit admits a Gaussian approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the number of channel uses tends to infinity. To do so, several important properties of quantum information quantities, such as the capacity-achieving output state, the divergence radius, and the channel dispersion, are generalized from their classical counterparts. Further, we exploit a close relation between classical-quantum channel coding and quantum binary hypothesis testing and rely on recent progress in the non-asymptotic characterization of quantum hypothesis testing and its Gaussian approximation. © 2014 IEEE.
Tomamichel, M, Berta, M & Hayashi, M 2014, 'A duality relation connecting different quantum generalizations of the conditional Rényi entropy', 2014 IEEE International Symposium on Information Theory, 2014 IEEE International Symposium on Information Theory (ISIT), IEEE, Honolulu, USA, pp. 731-735.
View/Download from: Publisher's site
View description>>
Recently a new quantum generalization of the Rényi divergence and the corresponding conditional Rényi entropies was proposed. Here we report on a surprising relation between conditional Rényi entropies based on this new generalization and conditional Rényi entropies based on the quantum relative Rényi entropy that was used in previous literature. This generalizes the well-known duality relation H(A|B) + H(A|C) = 0 for tripartite pure states to Rényi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Rényi entropies. © 2014 IEEE.
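For orientation, writing H̃_α for conditional entropies derived from the new (sandwiched) Rényi divergence and H̄_β for those based on the traditional quantum relative Rényi entropy, the duality reported here can be sketched for a tripartite pure state ρ_ABC as follows (a paraphrase; the precise entropy definitions and the pairing of the parameters α and β are given in the paper):

```latex
\widetilde{H}_{\alpha}(A|B)_{\rho} \;+\; \bar{H}_{\beta}(A|C)_{\rho} \;=\; 0
\qquad \text{when } \alpha\beta = 1,
```

which recovers the well-known relation H(A|B) + H(A|C) = 0 in the von Neumann limit α = β = 1.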
Tomamichel, M, Martinez-Mateo, J, Pacher, C & Elkouss, D 2014, 'Fundamental finite key limits for information reconciliation in quantum key distribution', 2014 IEEE International Symposium on Information Theory, 2014 IEEE International Symposium on Information Theory (ISIT), IEEE, USA, pp. 1469-1473.
View/Download from: Publisher's site
View description>>
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often used approximation for the information leakage during information reconciliation is flawed and we propose an improved estimate.
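As an illustrative sketch (not the paper's exact statement): the approximation in question estimates the reconciliation leakage as a fixed multiple f of the asymptotic limit, leak_EC ≈ f · n·h(Q), whereas a second-order, finite-n estimate of the minimal leakage at block length n and failure probability ε takes the form

```latex
\mathrm{leak}_{\mathrm{EC}}(n, \varepsilon)
  \;\approx\; n\,H(X|Y) \;+\; \sqrt{n\,V}\,\Phi^{-1}(1 - \varepsilon),
```

where H(X|Y) is the conditional entropy between the raw keys, V the corresponding dispersion, and Φ the standard normal cdf; the exact constants and correction terms are those derived in the paper.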