Asaad, S, Dickel, C, Langford, NK, Poletto, S, Bruno, A, Rol, MA, Deurloo, D & DiCarlo, L 2016, 'Independent, extensible control of same-frequency superconducting qubits by selective broadcasting', npj Quantum Information, vol. 2, no. 1, pp. 1-6.
View/Download from: Publisher's site
View description>>
A critical ingredient for realising large-scale quantum information processors will be the ability to make economical use of qubit control hardware. We demonstrate an extensible strategy for reusing control hardware on same-frequency transmon qubits in a circuit QED chip with surface-code-compatible connectivity. A vector switch matrix enables selective broadcasting of input pulses to multiple transmons with individual tailoring of pulse quadratures for each, as required to minimise the effects of leakage on weakly anharmonic qubits. Using randomised benchmarking, we compare multiple broadcasting strategies that each pass the surface-code error threshold for single-qubit gates. In particular, we introduce a selective broadcasting control strategy using five pulse primitives, which allows independent, simultaneous Clifford gates on arbitrary numbers of qubits.
Asaad, S, Dickel, C, Langford, NK, Poletto, S, Bruno, A, Rol, MA, Deurloo, D & DiCarlo, L 2016, 'Independent, extensible control of same-frequency superconducting qubits by selective broadcasting (npj Quantum Information (2017) 3, 17001, 10.1038/npjqi.2017.1)', npj Quantum Information, vol. 2, no. 1.
View/Download from: Publisher's site
View description>>
The original version of this Article contained an error in one of the calculations within the Results section. Although the authors note that this leakage rate is per Clifford, they actually quote the value per nanosecond: 'We extract per Clifford leakage rates κ of 4.1(2) × 10^-6 (QT) and 1.3(4) × 10^-6 (QB) by fitting the following simple model to the data'. Now reads: 'We extract per Clifford leakage rates κ of 1.4(2) × 10^-4 (QT) and 3.9(4) × 10^-5 (QB) by fitting the following simple model to the data'. This error has now been corrected in the PDF and HTML versions of the Article.
Berta, M, Scholz, VB & Tomamichel, M 2016, 'Rényi divergences as weighted non-commutative vector valued $L_p$-spaces', Annales Henri Poincaré, vol. 19, no. 6, pp. 1843-1867.
View/Download from: Publisher's site
View description>>
We show that Araki and Masuda's weighted non-commutative vector valued $L_p$-spaces [Araki \& Masuda, Publ. Res. Inst. Math. Sci., 18:339 (1982)] correspond to an algebraic generalization of the sandwiched R\'enyi divergences with parameter $\alpha = \frac{p}{2}$. Using complex interpolation theory, we prove various fundamental properties of these divergences in the setup of von Neumann algebras, including a data-processing inequality and monotonicity in $\alpha$. We thereby also give new proofs for the corresponding finite-dimensional properties. We discuss the limiting cases $\alpha\to\{\frac{1}{2},1,\infty\}$ leading to minus the logarithm of Uhlmann's fidelity, Umegaki's relative entropy, and the max-relative entropy, respectively. As a contribution that might be of independent interest, we derive a Riesz-Thorin theorem for Araki-Masuda $L_p$-spaces and an Araki-Lieb-Thirring inequality for states on von Neumann algebras.
Boixo, S, Isakov, SV, Smelyanskiy, VN, Babbush, R, Ding, N, Jiang, Z, Bremner, MJ, Martinis, JM & Neven, H 2016, 'Characterizing Quantum Supremacy in Near-Term Devices', Nature Physics, vol. 14, no. 6, pp. 595-600.
View/Download from: Publisher's site
View description>>
A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time in a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits - the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near-term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits which approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
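For reference, the generic cross entropy between the distribution sampled by a device, $p_{\mathrm{exp}}$, and the ideal output distribution of the circuit, $p_{\mathrm{ideal}}$, has the standard textbook form below; the paper's specific benchmark is built around this quantity, though its exact normalisation and estimator may differ (this display is added here for orientation only):

\[
\mathrm{CE}(p_{\mathrm{exp}}, p_{\mathrm{ideal}}) = -\sum_{x \in \{0,1\}^{n}} p_{\mathrm{exp}}(x) \log p_{\mathrm{ideal}}(x),
\]

where the sum runs over all $n$-qubit bit strings $x$ and $p_{\mathrm{ideal}}(x)$ is obtained from classical simulation of the circuit.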
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Achieving quantum supremacy with sparse and noisy commuting quantum computations', Quantum, vol. 1, p. 8.
View/Download from: Publisher's site
View description>>
The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Average-Case Complexity Versus Approximate Simulation of Commuting Quantum Computations', Physical Review Letters, vol. 117, no. 8.
View/Download from: Publisher's site
View description>>
© 2016 American Physical Society. We use the class of commuting quantum computations known as IQP (instantaneous quantum polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalizations of the boson sampling problem that avoid the so-called permanent anticoncentration conjecture.
Chan, B, Guan, H, Hou, L, Jo, J, Blumenstein, M & Wang, J 2016, 'Defining a conceptual framework for the integration of modelling and advanced imaging for improving the reliability and efficiency of bridge assessments', Journal of Civil Structural Health Monitoring, vol. 6, no. 4, pp. 703-714.
View/Download from: Publisher's site
View description>>
© 2016, Springer-Verlag Berlin Heidelberg. Current bridge inspection practices are typically predicated upon manual paper-based data collection methods, which significantly limit the ability to transfer knowledge gained throughout the lifecycle of the asset, to benefit the assessment of the inspector or engineer. This study aims to overcome the limitations of current practices and proposes a conceptual framework to improve the reliability and efficiency of current bridge asset management practices through the integration of Building Information Modeling (BIM) and advanced computing and imaging technologies. As a tool for bridge inspections, BIM offers significant potential when integrated with laser scanning and keypoint-based texture recognition, which allows for the detection of such defects as cracking, corrosion or settlement in bridge components. In recent years, the construction industry has seen an increased use of BIM technology on-site to aid the construction process. However, the applications of it are deficient through the asset management phases of a project. Given the ability of BIM to house all component specific information gathered from the construction, inspection and maintenance phases, BIM is envisioned to allow emphasis to be placed on retrieving the relevant information throughout the project lifecycle, ultimately enabling engineers and bridge inspectors to make more informed decisions about the current condition of the structure. Using BIM as the focal point for information collection throughout the project lifecycle, findings from advanced imaging and data processing are proposed to be stored within the model for recall at future bridge assessments.
Chapman, RJ, Ferrie, C & Peruzzo, A 2016, 'Experimental Demonstration of Self-Guided Quantum Tomography', Physical Review Letters, vol. 117, no. 4.
View/Download from: Publisher's site
Chen, J, Guo, C, Ji, Z, Poon, Y-T, Yu, N, Zeng, B & Zhou, J 2016, 'Joint product numerical range and geometry of reduced density matrices', Sci. China Phys. Mech. Astron., vol. 60, no. 2, p. 020312.
View/Download from: Publisher's site
View description>>
The reduced density matrices of a many-body quantum system form a convex set, whose three-dimensional projection $\Theta$ is convex in $\mathbb{R}^3$. The boundary $\partial\Theta$ of $\Theta$ may exhibit nontrivial geometry, in particular ruled surfaces. Two physical mechanisms are known for the origins of ruled surfaces: symmetry breaking and gaplessness. In this work, we study the emergence of ruled surfaces for systems with local Hamiltonians in infinite spatial dimension, where the reduced density matrices are known to be separable as a consequence of the quantum de Finetti's theorem. This allows us to identify the reduced density matrix geometry with the joint product numerical range $\Pi$ of the Hamiltonian interaction terms. We focus on the case where the interaction terms have certain structures, such that ruled surfaces emerge naturally when taking a convex hull of $\Pi$. We show that a ruled surface on $\partial\Theta$ sitting in $\Pi$ has a gapless origin, otherwise it has a symmetry breaking origin. As an example, we demonstrate that a famous ruled surface, known as the oloid, is a possible shape of $\Theta$, with two boundary pieces of symmetry breaking origin separated by two gapless lines.
Chen, J-Y, Ji, Z, Liu, Z-X, Qi, X, Yu, N, Zeng, B & Zhou, D 2016, 'Physical origins of ruled surfaces on the reduced density matrices geometry', Sci. China Phys. Mech. Astron., vol. 60, no. 2, p. 020311.
View/Download from: Publisher's site
View description>>
The reduced density matrices (RDMs) of many-body quantum states form a convex set. The boundary of low dimensional projections of this convex set may exhibit nontrivial geometry such as ruled surfaces. In this paper, we study the physical origins of these ruled surfaces for bosonic systems. The emergence of ruled surfaces was recently proposed as signatures of symmetry-breaking phase. We show that, apart from being signatures of symmetry-breaking, ruled surfaces can also be the consequence of gapless quantum systems by demonstrating an explicit example in terms of a two-mode Ising model. Our analysis was largely simplified by the quantum de Finetti's theorem---in the limit of large system size, these RDMs are the convex set of all the symmetric separable states. To distinguish ruled surfaces originated from gapless systems from those caused by symmetry-breaking, we propose to use the finite size scaling method for the corresponding geometry. This method is then applied to the two-mode XY model, successfully identifying a ruled surface as the consequence of gapless systems.
Chen, J-Y, Ji, Z, Yu, N & Zeng, B 2016, 'Dichotomy of entanglement depth for symmetric states', Phys. Rev. A, vol. 94, no. 4, p. 042333.
View/Download from: Publisher's site
View description>>
Entanglement depth characterizes the minimal number of particles in a system that are mutually entangled. For symmetric states, we show that there is a dichotomy for entanglement depth: an $N$-particle symmetric state is either fully separable, or fully entangled---the entanglement depth is either $1$ or $N$. This property is even stable under non-symmetric noise. We propose an experimentally accessible method to detect entanglement depth in atomic ensembles based on a bound on the particle number population of Dicke states, and demonstrate that the entanglement depth of some Dicke states, for example the twin Fock state, is very stable even under a large arbitrary noise. Our observation can be applied to atomic Bose-Einstein condensates to infer that these systems can be highly entangled with the entanglement depth that is of the order of the system size (i.e. several thousands of atoms).
Cheng, H-C & Hsieh, M-H 2016, 'Characterisations of Matrix and Operator-Valued $\Phi$-Entropies, and Operator Efron-Stein Inequalities', Proc. R. Soc. A, vol. 472, no. 2187.
View/Download from: Publisher's site
View description>>
We derive new characterisations of the matrix $\mathrm{\Phi}$-entropy functionals introduced in [Electron.~J.~Probab., 19(20): 1--30, 2014]. Notably, all known equivalent characterisations of the classical $\Phi$-entropies have their matrix correspondences. Next, we propose an operator-valued generalisation of the matrix $\Phi$-entropy functionals, and prove their subadditivity under L\'owner partial ordering. Our results demonstrate that the subadditivity of operator-valued $\Phi$-entropies is equivalent to the convexity of various related functions. This result can be used to demonstrate an interesting result in quantum information theory: the matrix $\Phi$-entropy of a quantum ensemble is monotone under unital quantum channels. Finally, we derive the operator Efron-Stein inequality to bound the operator-valued variance of a random matrix.
Cheng, H-C & Hsieh, M-H 2016, 'On the Concavity of Auxiliary Function in Classical-Quantum Channels', IEEE Transactions on Information Theory, vol. 62, no. 10, pp. 5960-5965.
View/Download from: Publisher's site
View description>>
The auxiliary function of a classical channel appears in two fundamental quantities that upper and lower bound the error probability, respectively. A crucial property of the auxiliary function is its concavity, which leads to several important results in finite block length analysis. In this paper, we prove that the auxiliary function of a classical-quantum channel also enjoys the same concave property, extending an earlier partial result to its full generality. The key component in our proof is a beautiful result of geometric means of operators.
Cheng, HC, Hsieh, MH & Yeh, PC 2016, 'The learnability of unknown quantum measurements', Quantum Information and Computation, vol. 16, no. 7-8, pp. 615-656.
View description>>
In this work, we provide an elegant framework to analyze learning matrices in the Schatten class by taking advantage of a recently developed methodology—matrix concentration inequalities. We establish the fat-shattering dimension, Rademacher/Gaussian complexity, and the entropy number of learning bounded operators and trace class operators. By characterising the tasks of learning quantum states and two-outcome quantum measurements into learning matrices in the Schatten-1 and ∞ classes, our proposed approach directly solves the sample complexity problems of learning quantum states and quantum measurements. Our main result in the paper is that, for learning an unknown quantum measurement, the upper bound, given by the fat-shattering dimension, is linearly proportional to the dimension of the underlying Hilbert space. Learning an unknown quantum state becomes a dual problem to ours, and as a byproduct, we can recover Aaronson’s famous result [Proc. R. Soc. A 463, 3089–3144 (2007)] solely using a classical machine learning technique. In addition, other famous complexity measures like covering numbers and Rademacher/Gaussian complexities are derived explicitly under the same framework. We are able to connect measures of sample complexity with various areas in quantum information science, e.g. quantum state/measurement tomography, quantum state discrimination and quantum random access codes, which may be of independent interest. Lastly, with the assistance of general Bloch-sphere representation, we show that learning quantum measurements/states can be mathematically formulated as a neural network. Consequently, classical ML algorithms can be applied to efficiently accomplish the two quantum learning tasks.
Chitambar, E & Hsieh, M-H 2016, 'Round Complexity in the Local Transformations of Quantum and Classical States', Nature Communications, vol. 8, no. 1.
View/Download from: Publisher's site
View description>>
A natural operational paradigm for distributed quantum and classical information processing involves local operations coordinated by multiple rounds of public communication. In this paper we consider the minimum number of communication rounds needed to perform the locality-constrained task of entanglement transformation and the analogous classical task of secrecy manipulation. Specifically we address whether bipartite mixed entanglement can always be converted into pure entanglement or whether unsecure classical correlations can always be transformed into secret shared randomness using local operations and a bounded number of communication exchanges. Our main contribution in this paper is an explicit construction of quantum and classical state transformations which, for any given $r$, can be achieved using $r$ rounds of classical communication exchanges but no fewer. Our results reveal that highly complex communication protocols are indeed necessary to fully harness the information-theoretic resources contained in general quantum and classical states. The major technical contribution of this manuscript lies in proving lower bounds for the required number of communication exchanges using the notion of common information and various lemmas built upon it. We propose a classical analog to the Schmidt rank of a bipartite quantum state which we call the secrecy rank, and we show that it is a monotone under stochastic local classical operations.
Das, A, Ferrer, MA, Pal, U, Pal, S, Diaz, M & Blumenstein, M 2016, 'Multi‐script versus single‐script scenarios in automatic off‐line signature verification', IET Biometrics, vol. 5, no. 4, pp. 305-313.
View/Download from: Publisher's site
Das, A, Pal, U, Ferrer, MA & Blumenstein, M 2016, 'A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics', Pattern Recognition Letters, vol. 82, no. 2, pp. 232-241.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. In this research a new framework for software-based liveness detection for direct attacks in multimodal ocular biometrics across the visible spectrum is proposed. The framework aims to develop a more realistic method for liveness detection compared to previous frameworks proposed in the literature. To fulfil the above highlighted aims in this framework, intra-class level (i.e. user level) liveness detection is introduced. To detect liveness, a new set of image quality-based features is proposed for multimodal ocular biometrics in the visible spectrum. A variety of transformed domain (focus related) aspect and contrast-related quality features are employed to design the framework. Furthermore a new database is developed for liveness detection of multimodal ocular biometrics, which has the prominent presence of multimodal ocular traits (both sclera and iris). Moreover this database is comprised of a larger variety of fake images; those were prepared by employing versatile forging techniques which can be exhibited by imposters. Therefore the proposed schema has dealt with versatile categories of spoofing methods, which were not considered previously in the literature. The database contains a set of 500 fake and 500 genuine eye images acquired from 50 different eyes. An appreciable liveness detection result is achieved in the experiments. Furthermore, the experimental results conclude that this new framework is more efficient and competitive when compared to previous liveness detection schemes.
Devitt, SJ 2016, 'Performing Quantum Computing Experiments in the Cloud', Phys. Rev. A, vol. 94, no. 3, p. 032329.
View/Download from: Publisher's site
View description>>
Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, the first realisation of quantum computing technology would occur over the cloud, with users logging onto dedicated hardware over the classical internet. Recently IBM has released the {\em Quantum Experience} which allows users to access a five qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments that have never been demonstrated before. We utilise the IBM chip to realise protocols in Quantum Error Correction, Quantum Arithmetic, Quantum graph theory and Fault-tolerant quantum computation, by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage development in quantum information technology.
Devitt, SJ 2016, 'Programming quantum computers using 3-D puzzles, coffee cups, and doughnuts', XRDS, vol. 23, no. 1, pp. 45-50.
View/Download from: Publisher's site
View description>>
The task of programming a quantum computer is just as strange as quantum mechanics itself. But it now looks like a simple 3D puzzle may be the future tool of quantum software engineers.
Guan, J, Feng, Y & Ying, M 2016, 'Decomposition of Quantum Markov Chains and Its Applications', Journal of Computer and System Sciences, vol. 95, pp. 55-68.
View/Download from: Publisher's site
View description>>
Markov chains have been widely employed as a fundamental model in the studies of probabilistic and stochastic communicating and concurrent systems. It is well-understood that decomposition techniques play a key role in reachability analysis and model-checking of Markov chains. (Discrete-time) quantum Markov chains have been introduced as a model of quantum communicating systems [1] and also a semantic model of quantum programs [2]. The BSCC (Bottom Strongly Connected Component) and stationary coherence decompositions of quantum Markov chains were introduced in [3, 4, 5]. This paper presents a new decomposition technique, namely periodic decomposition, for quantum Markov chains. We further establish a limit theorem for them. As an application, an algorithm to find a maximum dimensional noiseless subsystem of a quantum communicating system is given using decomposition techniques of quantum Markov chains.
Hiai, F, Koenig, R & Tomamichel, M 2016, 'Generalized Log-Majorization and Multivariate Trace Inequalities', Annales Henri Poincaré, vol. 18, no. 7, pp. 2499-2521.
View/Download from: Publisher's site
View description>>
We show that recent multivariate generalizations of the Araki-Lieb-Thirring inequality and the Golden-Thompson inequality [Sutter, Berta, and Tomamichel, Comm. Math. Phys. (2016)] for Schatten norms hold more generally for all unitarily invariant norms and certain variations thereof. The main technical contribution is a generalization of the concept of log-majorization which allows us to treat majorization with regards to logarithmic integral averages of vectors of singular values.
Jerger, M, Reshitnyk, Y, Oppliger, M, Potočnik, A, Mondal, M, Wallraff, A, Goodenough, K, Wehner, S, Juliusson, K, Langford, NK & Fedorov, A 2016, 'Contextuality without nonlocality in a superconducting quantum system', Nature Communications, vol. 7, no. 1, p. 12930.
View/Download from: Publisher's site
View description>>
Classical realism demands that system properties exist independently of whether they are measured, while noncontextuality demands that the results of measurements do not depend on what other measurements are performed in conjunction with them. The Bell–Kochen–Specker theorem states that noncontextual realism cannot reproduce the measurement statistics of a single three-level quantum system (qutrit). Noncontextual realistic models may thus be tested using a single qutrit without relying on the notion of quantum entanglement, in contrast to Bell inequality tests. It is challenging to refute such models experimentally, since imperfections may introduce loopholes that enable a realist interpretation. Here we use a superconducting qutrit with deterministic, binary-outcome readouts to violate a noncontextuality inequality while addressing the detection, individual-existence and compatibility loopholes. This evidence of state-dependent contextuality also demonstrates the fitness of superconducting quantum circuits for fault-tolerant quantum computation in surface-code architectures, currently the most promising route to scalable quantum computing.
Khare, V, Shivakumara, P, Raveendran, P & Blumenstein, M 2016, 'A blind deconvolution model for scene text detection and recognition in video', Pattern Recognition, vol. 54, pp. 128-148.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. Text detection and recognition in poor quality video is a challenging problem due to unpredictable blur and distortion effects caused by camera and text movements. This affects the overall performance of the text detection and recognition methods. This paper presents a combined quality metric for estimating the degree of blur in the video/image. Then the proposed method introduces a blind deconvolution model that enhances the edge intensity by suppressing blurred pixels. The proposed deblurring model is compared with other state-of-the-art models to demonstrate its superiority. In addition, to validate the usefulness and the effectiveness of the proposed model, we conducted text detection and recognition experiments on blurred images classified by the proposed model from standard video databases, namely, ICDAR 2013, ICDAR 2015, YVT and then standard natural scene image databases, namely, ICDAR 2013, SVT, MSER. Text detection and recognition results on both blurred and deblurred video/images illustrate that the proposed model improves the performance significantly.
Knee, GC, Combes, J, Ferrie, C & Gauger, EM 2016, 'Weak-value amplification: state of play', Quantum Measurements and Quantum Metrology, vol. 3, no. 1, pp. 32-37.
View/Download from: Publisher's site
View description>>
Weak values arise in quantum theory when the result of a weak measurement is conditioned on a subsequent strong measurement. The majority of the trials are discarded, leaving only very few successful events. Intriguingly, those can display a substantial signal amplification. This raises the question of whether weak values carry potential to improve the performance of quantum sensors, and indeed a number of impressive experimental results suggested this may be the case. By contrast, recent theoretical studies have found the opposite: using weak values to obtain an amplification generally worsens metrological performance. This survey summarises the implications of those studies, which call for a reappraisal of weak values’ utility and for further work to reconcile theory and experiment.
Lancia, G, Mathieson, L & Moscato, P 2016, 'Separating Sets of Strings by Finding Matching Patterns is Almost Always Hard', Theoretical Computer Science, vol. 665, pp. 73-86.
View/Download from: Publisher's site
View description>>
We study the complexity of the problem of searching for a set of patterns that separate two given sets of strings. This problem has applications in a wide variety of areas, most notably in data mining, computational biology, and in understanding the complexity of genetic algorithms. We show that the basic problem of finding a small set of patterns that match one set of strings but do not match any string in a second set is difficult (NP-complete, W[2]-hard when parameterized by the size of the pattern set, and APX-hard). We then perform a detailed parameterized analysis of the problem, separating tractable and intractable variants. In particular we show that parameterizing by the size of the pattern set and the number of strings, and by the size of the alphabet and the number of strings, gives FPT results, amongst others.
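As a small illustration of the separation task described above (the position-wise wildcard pattern model and the names below are assumptions chosen for illustration, not the paper's formal definitions), a pattern can be taken as a string over the alphabet plus a wildcard '*', and a separating set must match every string in the first set and no string in the second; finding a smallest such set is where the hardness lies.

    # Python sketch (illustrative): position-wise patterns with a '*' wildcard.
    def matches(pattern, s):
        # True if pattern and s have equal length and agree on every non-'*' position.
        return len(pattern) == len(s) and all(p in ('*', c) for p, c in zip(pattern, s))

    def separates(patterns, good, bad):
        # True if every 'good' string is matched by some pattern and no 'bad' string is.
        return (all(any(matches(p, g) for p in patterns) for g in good)
                and not any(matches(p, b) for p in patterns for b in bad))

    # Example: the single pattern 'a*a' separates {'aba', 'aca'} from {'abb', 'bba'}.
    print(separates(['a*a'], ['aba', 'aca'], ['abb', 'bba']))  # True

Checking a candidate set is straightforward; the hardness results quoted in the abstract concern searching for a small separating set among exponentially many candidates.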
Lee, T, Leonardos, N, Saks, M & Wang, F 2016, 'Hellinger volume and number-on-the-forehead communication complexity', Journal of Computer and System Sciences, vol. 82, no. 6, pp. 1064-1074.
View/Download from: Publisher's site
Li, Y, Qiao, Y, Wang, X & Duan, R 2016, 'Tripartite-to-bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces', Communications in Mathematical Physics, vol. 358, no. 2, pp. 791-814.
View/Download from: Publisher's site
View description>>
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate, which turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states, by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. These formulas give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
Long, Z, Duckham, M, Li, S & Schockaert, S 2016, 'Indexing large geographic datasets with compact qualitative representation', International Journal of Geographical Information Science, vol. 30, no. 6, pp. 1072-1094.
View/Download from: Publisher's site
View description>>
© 2015 Taylor & Francis. This paper develops a new mechanism to efficiently compute and compactly store qualitative spatial relations between spatial objects, focusing on topological and directional relations for large datasets of region objects. The central idea is to use minimum bounding rectangles (MBRs) to approximately represent region objects with arbitrary shape and complexity and only store spatial relations that cannot be unambiguously inferred from the relations of corresponding MBRs. We demonstrate, both in theory and practice, that our approach requires considerably less construction time and storage space, and can answer queries more efficiently than the state-of-the-art methods.
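A minimal sketch of the MBR idea summarised above (the function names and the single inference rule shown are illustrative assumptions, not the paper's implementation): axis-aligned minimum bounding rectangles can settle some topological relations outright, and only relations that remain ambiguous at the MBR level need to be stored explicitly.

    # Python sketch (illustrative): MBRs as (xmin, ymin, xmax, ymax) tuples.
    def mbr(points):
        # Axis-aligned minimum bounding rectangle of a point sequence.
        xs, ys = zip(*points)
        return (min(xs), min(ys), max(xs), max(ys))

    def mbrs_disjoint(a, b):
        # If the MBRs do not overlap, the enclosed regions are certainly disjoint,
        # so that relation can be inferred on demand rather than stored.
        return a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]

    region1 = [(0, 0), (1, 0), (1, 1)]
    region2 = [(3, 3), (4, 3), (4, 4)]
    print(mbrs_disjoint(mbr(region1), mbr(region2)))  # True: 'disjoint' is inferred, not stored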
Luccio, F, Mans, B, Mathieson, L & Pagli, L 2016, 'Complete Balancing via Rotation', The Computer Journal, vol. 59, no. 8, pp. 1252-1263.
View/Download from: Publisher's site
Ma, X, Jackson, T, Zhou, H, Chen, J, Lu, D, Mazurek, MD, Fisher, KAG, Peng, X, Kribs, D, Resch, KJ, Ji, Z, Zeng, B & Laflamme, R 2016, 'Pure State Tomography with Pauli Measurements', Phys. Rev. A, vol. 93, no. 3, p. 032140.
View/Download from: Publisher's site
View description>>
We examine the problem of finding the minimum number of Pauli measurements needed to uniquely determine an arbitrary $n$-qubit pure state among all quantum states. We show that only $11$ Pauli measurements are needed to determine an arbitrary two-qubit pure state compared to the full quantum state tomography with $16$ measurements, and only $31$ Pauli measurements are needed to determine an arbitrary three-qubit pure state compared to the full quantum state tomography with $64$ measurements. We demonstrate that our protocol is robust under depolarizing error with simulated random pure states. We experimentally test the protocol on two- and three-qubit systems with nuclear magnetic resonance techniques. We show that the pure state tomography protocol saves us a number of measurements without considerable loss of fidelity. We compare our protocol with same-size sets of randomly selected Pauli operators and find that our selected set of Pauli measurements significantly outperforms those random sampling sets. As a direct application, our scheme can also be used to reduce the number of settings needed for pure-state tomography in quantum optical systems.
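The full-tomography measurement counts quoted in the abstract (16 for two qubits, 64 for three) are simply the $4^n$ tensor products of the single-qubit Paulis; the quick sketch below only reproduces that counting and is not the paper's selection procedure for the smaller sets of 11 and 31 measurements.

    # Python sketch (illustrative): full state tomography uses every n-fold
    # tensor product of {I, X, Y, Z}.
    from itertools import product

    def all_pauli_settings(n):
        return [''.join(p) for p in product('IXYZ', repeat=n)]

    print(len(all_pauli_settings(2)))  # 16, versus the 11 Pauli measurements in the paper
    print(len(all_pauli_settings(3)))  # 64, versus the 31 Pauli measurements in the paper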
Mathieson, L 2016, 'Synergies in critical reflective practice and science: Science as reflection and reflection as science', Journal of University Teaching and Learning Practice, vol. 13, no. 2, pp. 1-13.
View description>>
The conceptions of reflective practice in education have their roots at least partly in the work of Dewey, who describes reflection as “the active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (Dewey 1933, p.9). This conception of reflection has carried on into more-focused efforts to describe critical reflection as a tool for improving professional practice (where academic and educational practice is the particular interest of this study); “… some puzzling or troubling or interesting phenomenon” allows the practitioner to access “the understandings which have been implicit in his action, understandings which he surfaces, criticizes, restructures, and embodies in further action” (Schön 1983, p. 50). Both of these descriptions embody a central idea of critical reflective practice: that the examination of practice involves the divination (in a rational, critical sense) of order and perhaps meaning from the facts at hand (which, in turn, are brought to light by the events that occur as the results of implementation of theory). As part of a lecture series, Gottlieb defined science as “an intellectual activity carried out by humans to understand the structure and functions of the world in which they live” (Gottlieb 1997). While science and critical reflective practice attempt to build models about different parts of our world – the natural world and the world of professional (educational) practice respectively – both embody certain underlying aims and methodologies. Indeed, it is striking that in these definitions the simple replacement of the terminology of reflective practice with the terminology of science (or vice versa) leads to a perfectly comprehensible definition of either. It is this confluence that this paper studies, building from two separate foundations, critical reflective practice and science. Via their models and exemplars of t...
Meter, RV & Devitt, SJ 2016, 'Local and Distributed Quantum Computation', IEEE Computer, vol. 49, no. 9, pp. 31-42.
View/Download from: Publisher's site
View description>>
Experimental groups are now fabricating quantum processors powerful enough to execute small instances of quantum algorithms and definitively demonstrate quantum error correction that extends the lifetime of quantum data, adding urgency to architectural investigations. Although other options continue to be explored, effort is coalescing around topological coding models as the most practical implementation option for error correction on realizable microarchitectures. Scalability concerns have also motivated architects to propose distributed memory multicomputer architectures, with experimental efforts demonstrating some of the basic building blocks to make such designs possible. We compile the latest results from a variety of different systems aiming at the construction of a scalable quantum computer.
Motes, KR, Mann, RL, Olson, JP, Studer, NM, Bergeron, EA, Gilchrist, A, Dowling, JP, Berry, DW & Rohde, PP 2016, 'Efficient recycling strategies for preparing large Fock states from single-photon sources --- Applications to quantum metrology', Phys. Rev. A, vol. 94, no. 1, p. 012344.
View/Download from: Publisher's site
View description>>
Fock states are a fundamental resource for many quantum technologies such as quantum metrology. While much progress has been made in single-photon source technologies, preparing Fock states with large photon number remains challenging. We present and analyze a bootstrapped approach for non-deterministically preparing large photon-number Fock states by iteratively fusing smaller Fock states on a beamsplitter. We show that by employing state recycling we are able to exponentially improve the preparation rate over conventional schemes, allowing the efficient preparation of large Fock states. The scheme requires single-photon sources, beamsplitters, number-resolved photo-detectors, fast-feedforward, and an optical quantum memory.
Moyle, W, Jones, C, Sung, B, Bramble, M, O’Dwyer, S, Blumenstein, M & Estivill-Castro, V 2016, 'What Effect Does an Animal Robot Called CuDDler Have on the Engagement and Emotional Response of Older People with Dementia? A Pilot Feasibility Study', International Journal of Social Robotics, vol. 8, no. 1, pp. 145-156.
View/Download from: Publisher's site
View description>>
© 2015, Springer Science+Business Media Dordrecht. The development of companion animal robots is of growing interest. These robots have recently been marketed to older adults with dementia as a means of encouraging social engagement and reducing behavioural and psychological symptoms of dementia. This paper outlines the results of a pilot study that sought to assess the feasibility and effect of using a robotic companion animal called CuDDler on engagement and emotional states of five older adults with dementia living in nursing home care. CuDDler is a prototype robot developed in Singapore. Despite their cognitive decline, the study participants raised a number of concerns regarding the feasibility and tolerability of CuDDler. The effectiveness of CuDDler was also limited in these participants, although one participant with visual agnosia benefited greatly from the one-on-one experience. The findings demonstrate the importance of companion robots being developed that are of an appropriate size, weight and shape for older people, including those with dementia, and a realistic animal shape that does not encourage thoughts of it being a toy. Our conclusions indicate the need for further studies on the development and use of companion robots, and investigation of the comparative benefits of social robots both compared to and in association with human interactions.
Nagayama, S, Choi, B-S, Devitt, S, Suzuki, S & Van Meter, R 2016, 'Interoperability in encoded quantum repeater networks', Physical Review A, vol. 93, no. 4.
View/Download from: Publisher's site
View description>>
The future of quantum repeater networking will require interoperability between various error-correcting codes. A few specific code conversions and even a generalized method are known, however, no detailed analysis of these techniques in the context of quantum networking has been performed. In this paper we analyze a generalized procedure to create Bell pairs encoded heterogeneously between two separate codes used often in error-corrected quantum repeater network designs. We begin with a physical Bell pair and then encode each qubit in a different error-correcting code, using entanglement purification to increase the fidelity. We investigate three separate protocols for preparing the purified encoded Bell pair. We calculate the error probability of those schemes between the Steane [[7,1,3]] code, a distance-3 surface code, and single physical qubits by Monte Carlo simulation under a standard Pauli error model and estimate the resource efficiency of the procedures. A local gate error rate of 10^-3 allows us to create high-fidelity logical Bell pairs between any of our chosen codes. We find that a postselected model, where any detected parity flips in code stabilizers result in a restart of the protocol, performs the best.
Nagayama, S, Fowler, AG, Horsman, D, Devitt, SJ & Meter, RV 2016, 'Surface Code Error Correction on a Defective Lattice', New Journal of Physics, vol. 19, no. 2, p. 023050.
View/Download from: Publisher's site
View description>>
The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error correction schemes. In this paper we have simulated statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90% and 95%, showing practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquettes solution against dynamic losses for the surface code to handle static losses such as physically faulty qubits. The single-fault analysis shows that a static loss at the periphery of the lattice has less negative effect than a static loss at the center. The randomly-faulty analysis shows that 95% yield is good enough to build a large scale quantum computer. The local gate error rate threshold is $\sim 0.3\%$, and a code distance of seven suppresses the residual error rate below the original error rate at $p=0.1\%$. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with post-correction residual error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.
Nemoto, K, Trupke, M, Devitt, SJ, Scharfenberger, B, Buczak, K, Schmiedmayer, J & Munro, WJ 2016, 'Photonic Quantum Networks formed from NV− centers', Scientific Reports, vol. 6, no. 1, p. 26284.
View/Download from: Publisher's site
View description>>
In this article we present a simple repeater scheme based on the negatively-charged nitrogen vacancy centre in diamond. Each repeater node is built from modules comprising an optical cavity containing a single NV−, with one nuclear spin from 15N as quantum memory. The module uses only deterministic processes and interactions to achieve high fidelity operations (>99%) and modules are connected by optical fiber. In the repeater node architecture, the processes between modules by photons can be in principle deterministic; however, current limitations on optical components lead the processes to be probabilistic but heralded. Our resource-modest repeater architecture contains two modules at each node and the repeater nodes are then connected by entangled photon pairs. We discuss the performance of such a quantum repeater network with modest resources and then incorporate more resource-intense strategies step by step. Our architecture should allow large-scale quantum information networks with existing or near future technology.
Paler, A, Devitt, SJ & Fowler, AG 2016, 'Synthesis of Arbitrary Quantum Circuits to Topological Assembly', Scientific Reports, vol. 6, no. 1, p. 30600.
View/Download from: Publisher's site
View description>>
Given a quantum algorithm, it is highly nontrivial to devise an efficient sequence of physical gates implementing the algorithm on real hardware and incorporating topological quantum error correction. In this paper, we present a first step towards this goal, focusing on generating correct and simple arrangements of topological structures that correspond to a given quantum circuit and largely neglecting their efficiency. We detail the many challenges that will need to be tackled in the pursuit of efficiency. The software source code can be consulted at https://github.com/alexandrupaler/tqec.
Paler, A, Wille, R & Devitt, SJ 2016, 'Wire Recycling for Quantum Circuit Optimization', Phys. Rev. A, vol. 94, no. 4, p. 042337.
View/Download from: Publisher's site
View description>>
Quantum information processing is expressed using quantum bits (qubits) and quantum gates which are arranged in terms of quantum circuits. Here, each qubit is associated to a quantum circuit wire which is used to conduct the desired operations. Most of the existing quantum circuits allocate a single quantum circuit wire for each qubit and, hence, introduce a significant overhead. In fact, qubits are usually not needed during the entire computation but only between their initialization and measurement. Before and after that, corresponding wires may be used by other qubits. In this work, we propose a solution which exploits this fact in order to optimize the design of quantum circuits with respect to the required wires. To this end, we introduce a representation of the lifetimes of all qubits which is used to analyze the respective need for wires. Based on this analysis, a method is proposed which 'recycles' the available wires and, by this, reduces the size of the resulting circuit. Experimental evaluations based on established reversible and fault-tolerant quantum circuits confirm that the proposed solution reduces the amount of wires by more than 90% compared to unoptimized quantum circuits.
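A minimal sketch of the recycling idea, assuming each qubit's lifetime is given as an (initialization, measurement) interval of circuit time steps; the greedy interval-partitioning below is an illustrative stand-in, not the optimization method proposed in the paper.

    # Python sketch (illustrative): reuse a wire once its previous qubit is measured.
    def assign_wires(lifetimes):
        # lifetimes[q] = (init_step, measure_step); returns {qubit: wire}.
        free_at = []      # free_at[w] = step after which wire w is free again
        assignment = {}
        for q in sorted(range(len(lifetimes)), key=lambda q: lifetimes[q][0]):
            start, end = lifetimes[q]
            for w, free in enumerate(free_at):
                if free < start:          # wire w is already free: recycle it
                    assignment[q], free_at[w] = w, end
                    break
            else:                         # no free wire: allocate a new one
                assignment[q] = len(free_at)
                free_at.append(end)
        return assignment

    # Qubit 0 is measured (step 3) before qubit 2 is initialized (step 4),
    # so three qubits fit on two wires.
    print(assign_wires([(0, 3), (1, 6), (4, 8)]))  # {0: 0, 1: 1, 2: 0}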
Pfister, C, Rol, MA, Mantri, A, Tomamichel, M & Wehner, S 2016, 'Capacity estimation and verification of quantum channels with arbitrarily correlated errors', Nature Communications, vol. 9, no. 1, p. 27.
View/Download from: Publisher's site
View description>>
One of the main figures of merit for quantum memories and quantum communication devices is their quantum capacity. It has been studied for arbitrary kinds of quantum channels, but its practical estimation has so far been limited to devices that implement independent and identically distributed (i.i.d.) quantum channels, where each qubit is affected by the same noise process. Real devices, however, typically exhibit correlated errors. Here, we overcome this limitation by presenting protocols that estimate a channel's one-shot quantum capacity for the case where the device acts on (an arbitrary number of) qubits. The one-shot quantum capacity quantifies a device's ability to store or communicate quantum information, even if there are correlated errors across the different qubits. We present a protocol which is easy to implement and which comes in two versions. The first version estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second version verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data, therefore combining test qubits and data qubits in one protocol. We discuss the performance of our method using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to estimate the one-shot capacity in an experiment using a transmon qubit.
Sanders, YR, Wallman, JJ & Sanders, BC 2016, 'Bounding quantum gate error rate based on reported average fidelity', New Journal of Physics, vol. 18, no. 1, p. 012002.
View/Download from: Publisher's site
View description>>
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
Sutter, D, Berta, M & Tomamichel, M 2016, 'Multivariate Trace Inequalities', Communications in Mathematical Physics, vol. 352, no. 1, pp. 37-58.
View/Download from: Publisher's site
View description>>
We prove several trace inequalities that extend the Golden-Thompson and the Araki-Lieb-Thirring inequality to arbitrarily many matrices. In particular, we strengthen Lieb's triple matrix inequality. As an example application of our four matrix extension of the Golden-Thompson inequality, we prove remainder terms for the monotonicity of the quantum relative entropy and strong sub-additivity of the von Neumann entropy in terms of recoverability. We find the first explicit remainder terms that are tight in the commutative case. Our proofs rely on complex interpolation theory as well as asymptotic spectral pinching, providing a transparent approach to treat generic multivariate trace inequalities.
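For orientation, the two-matrix Golden-Thompson inequality that the paper generalises states that, for Hermitian matrices $A$ and $B$,

\[
\operatorname{Tr} e^{A+B} \le \operatorname{Tr}\left(e^{A} e^{B}\right),
\]

and the multivariate extensions proved in the paper control analogous traces involving more than two matrix exponentials (the precise statements involve integral averages and are given in the paper itself).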
Wang, H, Zheng, W, Yu, N, Li, K, Lu, D, Xin, T, Li, C, Ji, Z, Kribs, D, Zeng, B, Peng, X & Du, J 2016, 'Quantum State and Process Tomography via Adaptive Measurements', Science China: Physics, Mechanics and Astronomy, vol. 59, no. 10.
View/Download from: Publisher's site
View description>>
We investigate quantum state tomography (QST) for pure states and quantum process tomography (QPT) for unitary channels via $adaptive$ measurements. For a quantum system with a $d$-dimensional Hilbert space, we first propose an adaptive protocol where only $2d-1$ measurement outcomes are used to accomplish the QST for $all$ pure states. This idea is then extended to study QPT for unitary channels, where an adaptive unitary process tomography (AUPT) protocol of $d^2+d-1$ measurement outcomes is constructed for any unitary channel. We experimentally implement the AUPT protocol in a 2-qubit nuclear magnetic resonance system. We examine the performance of the AUPT protocol when applied to the Hadamard gate, $T$ gate ($\pi/8$ phase gate), and controlled-NOT gate, respectively, as these gates form the universal gate set for quantum information processing purposes. As a comparison, standard QPT is also implemented for each gate. Our experimental results show that the AUPT protocol, which reconstructs unitary channels via adaptive measurements, significantly reduces the number of experiments required by standard QPT without considerable loss of fidelity.
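A quick check of the stated measurement counts for the two-qubit experiment, where the Hilbert-space dimension is $d = 4$, using the formulas quoted in the abstract:

\[
2d - 1 = 7 \quad \text{outcomes for pure-state QST}, \qquad d^2 + d - 1 = 19 \quad \text{outcomes for the AUPT protocol}.
\]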
Wang, X & Duan, R 2016, 'Irreversibility of Asymptotic Entanglement Manipulation Under Quantum Operations Completely Preserving Positivity of Partial Transpose', Phys. Rev. Lett., vol. 119, no. 18, p. 180506.
View/Download from: Publisher's site
View description>>
We demonstrate the irreversibility of asymptotic entanglement manipulation under quantum operations that completely preserve the positivity of partial transpose (PPT), resolving a major open problem in quantum information theory. Our key tool is a new efficiently computable additive lower bound for the asymptotic relative entropy of entanglement with respect to PPT states, which can be used to evaluate the entanglement cost under local operations and classical communication (LOCC). We find that for any rank-two mixed state supported on the $3\otimes3$ antisymmetric subspace, the amount of distillable entanglement by PPT operations is strictly smaller than one entanglement bit (ebit) while its entanglement cost under PPT operations is exactly one ebit. As a byproduct, we find that for this class of states, both the Rains' bound and its regularization are strictly less than the asymptotic relative entropy of entanglement. So, in general, there is no unique entanglement measure for the manipulation of entanglement by PPT operations. We further show a computable sufficient condition for the irreversibility of entanglement distillation by LOCC (or PPT) operations.
Wang, X & Duan, R 2016, 'Nonadditivity of Rains' bound for distillable entanglement', Phys. Rev. A, vol. 95, no. 6, p. 062322.
View/Download from: Publisher's site
View description>>
Rains' bound is arguably the best known upper bound of the distillable entanglement by operations completely preserving positivity of partial transpose (PPT) and was conjectured to be additive and coincide with the asymptotic relative entropy of entanglement. We disprove both conjectures by explicitly constructing a special class of mixed two-qubit states. We then introduce an additive semidefinite programming lower bound ($E_M$) for the asymptotic Rains' bound, and it immediately becomes a computable lower bound for entanglement cost of bipartite states. Furthermore, $E_M$ is also proved to be the best known upper bound of the PPT-assisted deterministic distillable entanglement and gives the asymptotic rates for all pure states and some class of genuinely mixed states.
Wilde, MM, Tomamichel, M & Berta, M 2016, 'Converse bounds for private communication over quantum channels', IEEE Transactions on Information Theory, vol. 63, no. 3, pp. 1792-1817.
View/Download from: Publisher's site
View description>>
This paper establishes several converse bounds on the private transmission capabilities of a quantum channel. The main conceptual development builds firmly on the notion of a private state, which is a powerful, uniquely quantum method for simplifying the tripartite picture of privacy involving local operations and public classical communication to a bipartite picture of quantum privacy involving local operations and classical communication. This approach has previously led to some of the strongest upper bounds on secret key rates, including the squashed entanglement and the relative entropy of entanglement. Here we use this approach along with a 'privacy test' to establish a general meta-converse bound for private communication, which has a number of applications. The meta-converse allows for proving that any quantum channel's relative entropy of entanglement is a strong converse rate for private communication. For covariant channels, the meta-converse also leads to second-order expansions of relative entropy of entanglement bounds for private communication rates. For such channels, the bounds also apply to the private communication setting in which the sender and receiver are assisted by unlimited public classical communication, and as such, they are relevant for establishing various converse bounds for quantum key distribution protocols conducted over these channels. We find precise characterizations for several channels of interest and apply the methods to establish several converse bounds on the private transmission capabilities of all phase-insensitive bosonic channels.
Wilde, MM, Tomamichel, M, Lloyd, S & Berta, M 2016, 'Gaussian hypothesis testing and quantum illumination', Physical Review Letters, vol. 119, no. 12, p. 12.
View/Download from: Publisher's site
View description>>
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal Type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the Type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
Wu, Y, Shivakumara, P, Lu, T, Tan, CL, Blumenstein, M & Kumar, GH 2016, 'Contour Restoration of Text Components for Recognition in Video/Scene Images', IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5622-5634.
View/Download from: Publisher's site
View description>>
Text recognition in video/natural scene images has gained significant attention in the field of image processing in many computer vision applications, which is much more challenging than recognition in plain background images. In this work, we aim to restore complete character contours in video/scene images from gray values, in contrast to the conventional techniques that consider edge images/binary information as inputs for text detection and recognition. We explore and utilize the strengths of zero crossing points given by the Laplacian to identify stroke candidate pixels (SPC). For each SPC pair, we propose new symmetry features based on gradient magnitude and Fourier phase angles to identify probable stroke candidate pairs (PSCP). The same symmetry properties are proposed at the PSCP level to choose seed stroke candidate pairs (SSCP). Finally, an iterative algorithm is proposed for SSCP to restore complete character contours. Experimental results on benchmark databases, namely, the ICDAR family of video and natural scenes, Street View Data (SVT), and MSRA datasets, show that the proposed technique outperforms the existing techniques in terms of both quality measures and recognition rate. We also show that character contour restoration is effective for text detection in video and natural scene images.
XIA, M & JI, Z 2016, 'The limits of computation', Chinese Science Bulletin, vol. 61, no. 4-5, pp. 404-408.
View/Download from: Publisher's site
View description>>
The powerful idea of computation has accompanied the development of human civilization, has deeply changed the way we live and work, and has accelerated the advancement of many areas of science. In this article, we explore the power and limits of computation from several different perspectives. We will discuss topics ranging from the models of computation and the Church-Turing thesis to the impact of the P versus NP problem and quantum computing on our understanding of the limits of computation. More concretely, we will explore computability and the halting problem, the efficiency of computation, and the P versus NP problem. We then move on to the discussion of quantum computation, the quantum algorithm for factoring and its implications, quantum simulation, and the relation between quantum and classical computation.
Xin, T, Lu, D, Klassen, J, Yu, N, Ji, Z, Chen, J, Ma, X, Long, G, Zeng, B & Laflamme, R 2016, 'Quantum state tomography via reduced density matrices', Phys. Rev. Lett., vol. 118, no. 2, pp. 020401-020401.
View/Download from: Publisher's site
View description>>
Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results advance the project of performing efficient and accurate quantum state tomography in practice.
Ying, M 2016, 'Introduction', ASIAN WOMEN, vol. 28, no. 4, pp. 3-9.
View/Download from: Publisher's site
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Named Entity Recognition from Unstructured Handwritten Document Images', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 375-380.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Named entity recognition is an important topic in the field of natural language processing, whereas in document image processing, such recognition is quite challenging without employing any linguistic knowledge. In this paper we propose an approach to detect named entities (NEs) directly from offline handwritten unstructured document images without explicit character/word recognition, and with very little aid from natural language and script rules. At the preprocessing stage, the document image is binarized, and then the text is segmented into words. The slant/skew/baseline corrections of the words are also performed. After preprocessing, the words are sent for NE recognition. We analyze the structural and positional characteristics of NEs and extract some relevant features from the word image. Then the BLSTM neural network is used for NE recognition. Our system also contains a post-processing stage to reduce the true NE rejection rate. The proposed approach produces encouraging results on both historical and modern document images, including those from an Australian archive, which are reported here for the very first time.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Offline Cursive Bengali Word Recognition Using CNNs with a Recurrent Model', 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Shenzhen, China, pp. 429-434.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with offline handwritten word recognition of a major Indic script: Bengali. Due to the structure of this script, the characters (mostly ortho-syllables) are frequently overlapping and hard to segment, especially when the writing is cursive. Individual character recognition and the combination of outputs can increase the likelihood of errors. Instead, a better approach can be sending the whole word to a suitable recognizer. Here we use the Convolutional Neural Network (CNN) integrated with a recurrent model for this purpose. Long short-term memory blocks are used as hidden units. Also, the CNN-derived features are employed in a recurrent model with a CTC (Connectionist Temporal Classification) layer to get the output. We have tested our method on three datasets: (a) a publicly available dataset, (b) a new dataset generated by our research group and (c) an unconstrained dataset. The dataset (a) contains 17,091 words, while our dataset (b) contains 107,550 words in total. In addition to these, the dataset (c) comprises 5,223 words. We have compared our results with those of some earlier work in the area and have found improved performance, which is due to the novel integration of CNNs with the recurrent model.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Writer identification by training on one script but testing on another', 2016 23rd International Conference on Pattern Recognition (ICPR), 2016 23rd International Conference on Pattern Recognition (ICPR), IEEE, Mexico, pp. 1153-1158.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with identifying a writer from his/her offline handwriting. In a multilingual country where a writer can scribe in multiple scripts, writer identification becomes challenging when we have individual handwriting data in one script while we need to verify/identify a writer from handwriting in another script. In this paper such an issue is addressed with two scripts: English and Bengali. Here we model the task as a classification problem, where training data contains only Bengali handwritten samples and testing is performed on English handwritten texts. This work is based on the understanding that a writer has some inherent stroke characteristics that are independent of the script in which (s)he writes. In this work, some implicit structural and statistical features are extracted, and multiple classifiers are employed for writer identification. Many training sessions are run on a database of 100 writers and the performances are analyzed. We have obtained encouraging results on this database, which show the effectiveness of our method.
Alaei, A, Conte, D, Blumenstein, M & Raveaux, R 2016, 'Document Image Quality Assessment Based on Texture Similarity Index', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 132-137.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, a full reference document image quality assessment (FR DIQA) method using texture features is proposed. Local binary patterns (LBP) as texture features are extracted at the local and global levels for each image. For each extracted LBP feature set, a similarity measure called the LBP similarity index (LBPSI) is computed. A weighting strategy is further proposed to improve the LBPSI obtained based on local LBP features. The LBPSIs computed for both local and global features are then combined to get the final LBPSI, which also provides the best performance for DIQA. To evaluate the proposed method, two different datasets were used. The first dataset is composed of document images, whereas the second one includes natural scene images. The mean human opinion scores (MHOS) were considered as ground truth for performance evaluation. The results obtained from the proposed LBPSI method indicate a significant improvement in automatically/accurately predicting image quality, especially on the document image-based dataset.
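As a rough illustration of the full-reference, texture-based idea described above (and not the authors' LBPSI formula, whose exact form and weighting strategy are not given in the abstract), a minimal sketch might compare LBP histograms of a reference and a degraded document image; the histogram-intersection score and all parameters here are assumptions for demonstration only.

```python
# Hypothetical sketch: compare LBP texture statistics of a reference and a
# degraded document image. This is NOT the authors' LBPSI formula, only an
# illustration of the general full-reference, texture-similarity idea.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, radius=1, n_points=8):
    """Uniform LBP histogram of a grayscale image (values in [0, 255])."""
    codes = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # number of uniform patterns
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def texture_similarity(reference, degraded):
    """Histogram intersection of LBP histograms; 1.0 means identical texture."""
    h_ref = lbp_histogram(reference)
    h_deg = lbp_histogram(degraded)
    return float(np.minimum(h_ref, h_deg).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    noisy = np.clip(ref + rng.normal(0, 25, ref.shape), 0, 255).astype(np.uint8)
    print(f"similarity(ref, ref)   = {texture_similarity(ref, ref):.3f}")
    print(f"similarity(ref, noisy) = {texture_similarity(ref, noisy):.3f}")
```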
Alaei, F, Alaei, A, Blumenstein, M & Pal, U 2016, 'A brief review of document image retrieval methods: Recent advances', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 3500-3507.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to the rapid increase of different digitized documents, the development of a system to automatically retrieve document images from a large collection of structured and unstructured document images is in high demand. Many techniques have been developed to provide an efficient and effective way for retrieving and organizing these document images in the literature. This paper provides an overview of the methods which have been applied for document image retrieval over recent years. It has been found that from a textual perspective, more attention has been paid to the feature extraction methods without using OCR.
Alaei, F, Alaei, A, Blumenstein, M & Pal, U 2016, 'Document image retrieval based on texture features and similarity fusion', 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Palmerston North, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper we investigate the usefulness of two different texture features along with classification fusion for document image retrieval. A local binary texture method, as a statistical approach, and a wavelet analysis technique, as a transform-based approach, are used for feature extraction and two feature vectors are obtained for every document image. The similarity distances between each of the two feature vectors extracted for a given query and the feature vectors extracted from the document images in the training step are computed separately. In order to use the properties of both features, a classifier fusion technique is then employed using a weighted average fusion of distance measures obtained in relation to each feature vector. The document images are finally ranked based on the greatest visual similarity to the query obtained from the fusion similarity measures. The Media Team Document Database, which provides a great variety of page layouts and contents, is considered for evaluating the proposed method. The results obtained from the experiments demonstrate a correct document retrieval of 65.4% and 91.8% in the Top-1 and Top-10 ranked document list, respectively.
Alaei, F, Alaei, A, Pal, U & Blumenstein, M 2016, 'Document Image Retrieval Based on Texture Features: A Recognition-Free Approach', 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Gold Coast, Australia, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The tendency of current technology is towards a paperless world. Due to the rapid increase of digitized documents, providing a fast and easy method for retrieval is in high demand. The aim of this paper is to examine the effectiveness of texture features for document image retrieval. Thus, segmentation-free document image retrieval using a binary texture method is proposed. In the proposed approach, local features are extracted, local grey-level structures are summarised, and their distribution is characterised using global features. The assumption is that texture properties in the text regions and non-text regions of the document images are different. This assumption is used to rank the available document images and retrieve only those which have the greatest visual similarity to a given query. The under-sampled image and sub-images of the original image are further considered to improve the retrieval results, which are up to 76.0% in the first ranking and 96.2% in the Top-10 ranking. The Media Team Oulu Document Database, which is a heterogeneous database that offers a great variety of page layouts and contents, is used for experimentation.
Ambainis, A, Balodis, K, Belovs, A, Lee, T, Santha, M & Smotrovs, J 2016, 'Separations in query complexity based on pointer functions', Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, STOC '16: Symposium on Theory of Computing, ACM.
View/Download from: Publisher's site
Anshu, A, Belovs, A, Ben-David, S, Goos, M, Jain, R, Kothari, R, Lee, T & Santha, M 2016, 'Separations in Communication Complexity Using Cheat Sheets and Information Complexity', 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), IEEE.
View/Download from: Publisher's site
Antonacopoulos, A, Gatos, B, Blumenstein, M, Lladós, J & Lopresti, D 2016, 'Message from the General Chairs and Program Chairs', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, p. xi.
View/Download from: Publisher's site
Berta, M, Fawzi, O & Tomamichel, M 2016, 'Exploiting variational formulas for quantum relative entropy', 2016 IEEE International Symposium on Information Theory (ISIT), 2016 IEEE International Symposium on Information Theory (ISIT), IEEE, Barcelona, Spain, pp. 2844-2848.
View/Download from: Publisher's site
View description>>
The relative entropy is the basic concept underlying various information measures like entropy, conditional entropy and mutual information. Here, we discuss how to make use of variational formulas for measured relative entropy and quantum relative entropy for understanding the additivity properties of various entropic quantities that appear in quantum information theory. In particular, we show that certain lower bounds on quantum conditional mutual information are superadditive.
Bremner, MJ, Montanaro, A & Shepherd, D 2016, 'Average-case complexity versus approximate simulation of commuting quantum computations', 19th Conference on Quantum Information Processing, Banff, Canada.
Broadbent, A, Ji, Z, Song, F & Watrous, J 2016, 'Zero-Knowledge Proof Systems for QMA', 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, New Brunswick, NJ, pp. 31-40.
View/Download from: Publisher's site
Chakraborty, A & Blumenstein, M 2016, 'Marginal Noise Reduction in Historical Handwritten Documents -- A Survey', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 323-328.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents a survey on different approaches for removing the marginal noise from document images, and analysing the research challenges of those methods relating to handwritten historical datasets. In this survey, historical documents collected from Australian Archives and Libraries are introduced and the associated layout complexities of those document images are also described. Benchmarking other historical databases related to this work is also discussed. This survey discusses the difficulties and suitability of the state-of-the-art methods to remove marginal noise as well as preserving the text content from handwritten historical documents. This survey helps researchers to identify appropriate methods according to the associated marginal noise and also illustrates their drawbacks in order to make suggestions for developing approaches, which are more general and robust for any datasets.
Chakraborty, A & Blumenstein, M 2016, 'Preserving Text Content from Historical Handwritten Documents', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 329-334.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We propose a holistic, dynamic method to preserve text content with zero tolerance while removing marginal noise for historical handwritten document images. The key idea is to identify and analyze the region between the sharp peak at the edge and page frame of the text content at each margin. Depending on the proximity of the sharp peak to the text, the text content is then extracted from the document image. This method automatically adapts thresholds for each single document image and is directly applicable to gray-scale images. The proposed method is evaluated on four diverse handwritten historical datasets: Queensland State Archive (QSA), Saint Gall, Parzival and the Prosecution Project. Experimental results show that the proposed method achieves higher accuracy compared with other methods tested on the Saint Gall and Parzival datasets, whilst for the other two Australian datasets, which have been introduced here for the first time, the results are very encouraging.
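For readers unfamiliar with profile-based margin analysis, the following minimal sketch illustrates the general projection-profile idea (locating the contiguous text body from column ink densities so that marginal strips such as dark scanner borders can be cropped). It is not the paper's adaptive, per-image procedure; the threshold, the binarization assumption, and the widest-run heuristic are all illustrative.

```python
# Naive projection-profile sketch: keep the widest contiguous block of
# text-like columns and treat everything outside it as margin. The paper's
# per-image adaptive thresholds and peak analysis are not reproduced here.
import numpy as np

def text_column_bounds(binary_page, ink_fraction=0.02):
    """Return (left, right) columns of the widest contiguous text-like region.

    binary_page: 2D array with 1 for ink pixels and 0 for background.
    ink_fraction: minimum fraction of ink for a column to count as text.
    """
    is_text = binary_page.mean(axis=0) >= ink_fraction
    best = (0, 0)
    start = None
    for i, flag in enumerate(np.append(is_text, False)):  # sentinel closes the last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    page = np.zeros((100, 200), dtype=np.uint8)
    page[:, 60:150] = (rng.random((100, 90)) < 0.2)   # synthetic handwriting body
    page[:, :5] = 1                                    # dark marginal strip
    left, right = text_column_bounds(page)
    print(f"text body spans columns {left} to {right}")  # -> 60 to 150
```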
Cui, SX, Ji, Z, Yu, N & Zeng, B 2016, 'Quantum capacities for entanglement networks', 2016 IEEE International Symposium on Information Theory (ISIT), 2016 IEEE International Symposium on Information Theory (ISIT), IEEE, Barcelona, Spain, pp. 1685-1689.
View/Download from: Publisher's site
View description>>
We discuss quantum capacities for two types of entanglement networks: Q for the quantum repeater network with free classical communication, and R for the tensor network as the rank of the linear operation represented by the tensor network. We find that Q always equals R in the regularized case for the same network graph. However, the relationships between the corresponding one-shot capacities Q1 and R1 are more complicated, and the min-cut upper bound is in general not achievable. We show that the tensor network can be viewed as a stochastic protocol with the quantum repeater network, such that R1 is a natural upper bound of Q1. We analyze the possible gap between R1 and Q1 for certain networks, and compare them with the one-shot classical capacity of the corresponding classical network.
Das, A, Mondal, P, Pal, U, Ferrer, MA & Blumenstein, M 2016, 'Fast and efficient multimodal eye biometrics using projective dictionary pair learning', 2016 IEEE Congress on Evolutionary Computation (CEC), 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE, Vancouver, Canada, pp. 1402-1408.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This work proposes a projective pairwise dictionary learning-based approach for fast and efficient multimodal eye biometrics. In contrast to traditional dictionary learning (DL), which uses synthesis DL, the work uses the faster Projective Pairwise Discriminative Dictionary (PPDD) learning. PPDD uses a synthesis dictionary and an analysis dictionary jointly to achieve the goal of pattern representation and discrimination. As the PPDD form of DL avoids the l0- or l1-norm sparsity constraints on the representation coefficients adopted in most traditional DL, it works faster than other DL methods. Moreover, the blending of a synthesis dictionary and an analysis dictionary also enhances the feature representation of the complex eye patterns. We employed the combination of sclera and iris traits to establish multimodal biometrics. The experimental study and analysis conducted fulfill the hypothesis we considered. In this work we employed a part of the UBIRIS version 1 dataset to conduct the experiments.
Das, A, Pal, U, Ferrer, MA & Blumenstein, M 2016, 'SSRBC 2016: Sclera Segmentation and Recognition Benchmarking Competition', 2016 International Conference on Biometrics (ICB), 2016 International Conference on Biometrics (ICB), IEEE, Halmstad, Sweden.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This article reports and summarizes the results of a competition on sclera segmentation and recognition benchmarking, called the Sclera Segmentation and Recognition Benchmarking Competition 2016 (SSRBC 2016). It was organized in the context of the 9th IAPR International Conference on Biometrics (ICB 2016). The goal of this competition was to record the recent developments in sclera segmentation and recognition, and also to gain the attention of researchers on this subject of biometrics. In this regard, we have used a multi-angle sclera dataset (MASD version 1). It is comprised of 2624 images taken from both the eyes of 82 identities, and therefore consists of images of 164 (82×2) different eyes. We have prepared a manual segmentation mask of these images to create the baseline for both tasks. We have, furthermore, adopted precision and recall based statistical measures to evaluate the effectiveness of the segmentation and the ranks of the competing algorithms. The recognition accuracy measure has been employed to assess the recognition task. To summarize, twelve participants registered for the competition; among them, three submitted their algorithms/systems for the segmentation task and two submitted their recognition algorithms. The results produced by these algorithms reflect developments in the literature of sclera segmentation and recognition, employing cutting-edge segmentation techniques. Along with the algorithms of the three competing teams and their results, the MASD version 1 dataset will also be freely available for research purposes from the organizer's website. The competition also demonstrates the recent interest of researchers from academia as well as industry in this subject of biometrics.
de Vries, NJ, Arefin, AS, Mathieson, L, Lucas, B & Moscato, P 2016, 'Relative Neighborhood Graphs Uncover the Dynamics of Social Media Engagement', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Advanced Data Mining and Applications, Springer International Publishing, Gold Coast, Queensland, Australia, pp. 283-297.
View/Download from: Publisher's site
View description>>
In this paper, we examine if the Relative Neighborhood Graph (RNG) can reveal related dynamics of page-level social media metrics. A statistical analysis is also provided to illustrate the application of the method in two other datasets (the Indo-European Language dataset and the Shakespearean Era Text dataset). Using social media metrics on the world’s ‘top check-in locations’ Facebook pages dataset, the statistical analysis reveals coherent dynamical patterns. In the largest cluster, the categories ‘Gym’, ‘Fitness Center’, and ‘Sports and Recreation’ appear closely linked together in the RNG. Taken together, our study validates our expectation that RNGs can provide a “parameter-free” mathematical formalization of proximity. Our approach gives useful insights on user behaviour in social media page-level metrics as well as other applications.
Duan, R, Guo, C, Li, C-K & Li, Y 2016, 'Parallel distinguishability of quantum operations', IEEE International Symposium on Information Theory - Proceedings, IEEE International Symposium on Information Theory, IEEE, Barcelona, Spain, pp. 2259-2263.
View/Download from: Publisher's site
View description>>
We find that the perfect distinguishability of two quantum operations by a parallel scheme depends only on an operator subspace generated from their Choi-Kraus operators. We further show that any operator subspace can be obtained from two quantum operations in such a way. This connection enables us to study the parallel distinguishability of operator subspaces directly without explicitly referring to the underlying quantum operations. We obtain a necessary and sufficient condition for the parallel distinguishability of an operator subspace that is either one-dimensional or Hermitian. In both cases the condition is equivalent to the non-existence of a positive definite operator in the subspace, and an optimal discrimination protocol is obtained. Finally, we provide more examples to show that the non-existence of a positive definite operator is sufficient for many other cases, but in general it is only a necessary condition.
Grochow, JA, Mulmuley, KD & Qiao, Y 2016, 'Boundaries of VP and VNP', Leibniz International Proceedings in Informatics, LIPIcs, International Colloquium on Automata Languages and Programming, Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, Rome, Italy.
View/Download from: Publisher's site
View description>>
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether $\mathrm{VP} = \overline{\mathrm{VP}}$, where VP is the class of families of polynomials that can be computed by arithmetic circuits of polynomial degree and size, and $\overline{\mathrm{VP}}$ is the class of families of polynomials that can be approximated infinitesimally closely by arithmetic circuits of polynomial degree and size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that $\overline{\mathrm{VP}}$ is not contained in VP. Towards that end, we introduce three degenerations of VP (i.e., sets of points in $\overline{\mathrm{VP}}$), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP ⊆ Newton-VP ⊆ VP* ⊆ VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating $\overline{\mathrm{VP}}$ from VP. Although we do not yet construct explicit candidates for the polynomial families in $\overline{\mathrm{VP}} \setminus \mathrm{VP}$, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be nongeneric by showing that, for many finite quivers (including some wild ones), Newton degeneration of any generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.
Haah, J, Harrow, AW, Ji, Z, Wu, X & Yu, N 2016, 'Sample-optimal tomography of quantum states', Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, STOC '16: Symposium on Theory of Computing, ACM, Cambridge, MA, USA, pp. 913-925.
View/Download from: Publisher's site
View description>>
It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. This is the quantum analogue of the problem of estimating a probability distribution given some number of samples.
Previously, it was known only that estimating states to error $\epsilon$ in trace distance required $O(dr^2/\epsilon^2)$ copies for a $d$-dimensional density matrix of rank $r$. Here, we give a measurement scheme (POVM) that uses $O((dr/\delta)\ln(d/\delta))$ copies to estimate $\rho$ to error $\delta$ in infidelity. This implies $O((dr/\epsilon^2)\cdot\ln(d/\epsilon))$ copies suffice to achieve error $\epsilon$ in trace distance. For fixed $d$, our measurement can be implemented on a quantum computer in time polynomial in $n$.
We also use the Holevo bound from quantum information theory to prove a lower bound of $\Omega((dr/\epsilon^2)/\log(d/r\epsilon))$ copies needed to achieve error $\epsilon$ in trace distance. This implies a lower bound of $\Omega((dr/\delta)/\log(d/r\delta))$ for the estimation error $\delta$ in infidelity. These match our upper bounds up to log factors.
Our techniques can also show an $\Omega(r^2 d/\delta)$ lower bound for measurement strategies in which each copy is measured individually and then the outcomes are classically post-processed to produce an estimate. This matches the known achievability results and proves for the first time that such “product” measurements have asymptotically suboptimal scaling with $d$ and $r$.
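To get a feel for how these copy-complexity scalings behave, the short sketch below simply evaluates the quoted upper- and lower-bound expressions for example parameters; the unspecified constants are set to 1, so the numbers are only order-of-magnitude illustrations, not part of the paper's results.

```python
# Illustrative arithmetic only: evaluate the quoted copy-complexity scalings
# with all unspecified constants set to 1. Not a tomography implementation.
import math

def upper_bound_trace_distance(d, r, eps):
    """Scaling of O((d*r/eps^2) * ln(d/eps)) copies for error eps in trace distance."""
    return (d * r / eps**2) * math.log(d / eps)

def lower_bound_trace_distance(d, r, eps):
    """Scaling of Omega((d*r/eps^2) / log(d/(r*eps))) copies."""
    return (d * r / eps**2) / math.log(d / (r * eps))

if __name__ == "__main__":
    d, r, eps = 16, 4, 0.1   # example: a rank-4 state of four qubits, 10% error
    up = upper_bound_trace_distance(d, r, eps)
    low = lower_bound_trace_distance(d, r, eps)
    print(f"upper-bound scaling ~ {up:,.0f} copies")
    print(f"lower-bound scaling ~ {low:,.0f} copies")
    print(f"gap (log factors only): {up / low:.1f}x")
```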
Ji, Z 2016, 'Classical verification of quantum proofs', Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, STOC '16: Symposium on Theory of Computing, ACM, Cambridge, MA, USA, pp. 885-898.
View/Download from: Publisher's site
View description>>
We present a classical interactive protocol that verifies the validity of a quantum witness state for the local Hamiltonian problem. It follows from this protocol that approximating the non-local value of a multi-player one-round game to inverse polynomial precision is QMA-hard. Our work makes an interesting connection between the theory of QMA-completeness and Hamiltonian complexity on one hand and the study of non-local games and Bell inequalities on the other.
Khare, V, Shivakumara, P, Kumar, A, Chan, CS, Lu, T & Blumenstein, M 2016, 'A quad tree based method for blurred and non-blurred video text frames classification through quality metrics', 2016 23rd International Conference on Pattern Recognition (ICPR), 2016 23rd International Conference on Pattern Recognition (ICPR), IEEE, Cancun, Mexico, pp. 4023-4028.
View/Download from: Publisher's site
View description>>
Blur is a common artifact in video, which adds more complexity to text detection and recognition. To achieve good accuracies for text detection and recognition, this paper suggests a new method for classifying blurred and non-blurred frames in video. We explore quality metrics, namely, BRISQUE, NRIQA, GPC and SI, in a new way for classification. We estimate the values of these metrics with the help of predefined samples called reference values. To widen the difference between metric values for better classification, we introduce scaling factors as a non-linear sigmoidal function, which considers the metric of each current frame and its reference and results in templates. Based on the characteristics of metrics, the proposed method finds a relationship between the metrics to derive rules for classification. To classify the frame containing local blur, we explore quad tree division with classification rules which divide non-blurred blocks to identify local blur. We use standard databases, namely, ICDAR 2013, ICDAR 2015 and YVT videos for experimentation, and evaluate the proposed method in terms of text detection and recognition rates given by text detection and binarization methods before and after classification.
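The quad-tree step can be pictured with a small sketch: recursively subdivide blocks judged non-blurred to search for locally blurred regions. The sharpness score below is variance of the Laplacian, a common stand-in used here only for illustration; the paper's metrics (BRISQUE, NRIQA, GPC, SI), reference values, and sigmoidal scaling are not reproduced, and the threshold is an assumption.

```python
# Sketch of a quad-tree split into "sharp" and "blurred" blocks. Sharpness is
# scored by variance of the Laplacian as a simple stand-in for the paper's
# quality metrics; threshold and minimum block size are illustrative.
import numpy as np
from scipy.ndimage import laplace

def sharpness(block):
    return float(laplace(block.astype(float)).var())

def quad_tree_blur_map(image, threshold=1.0, min_size=64):
    """Recursively subdivide sharp blocks; return (row, col, size, is_sharp) leaves."""
    leaves = []

    def split(r, c, size):
        block = image[r:r + size, c:c + size]
        is_sharp = sharpness(block) >= threshold
        if not is_sharp or size <= min_size:
            leaves.append((r, c, size, is_sharp))
            return
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                split(r + dr, c + dc, half)

    split(0, 0, min(image.shape))
    return leaves

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((256, 256))   # globally "sharp" noise texture
    frame[:128, :128] = 0.5          # flat quadrant mimicking local blur
    for r, c, size, is_sharp in quad_tree_blur_map(frame):
        print(f"block @({r},{c}) size {size}: {'sharp' if is_sharp else 'blurred'}")
```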
Lanese, I & Devitt, S 2016, 'Preface', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
Lee, JH, Li, S, Long, Z & Sioutis, M 2016, 'On redundancy in simple temporal networks', Frontiers in Artificial Intelligence and Applications, European Conference on Artificial Intelligence, AAAI Press, Netherlands, pp. 828-836.
View/Download from: Publisher's site
View description>>
The Simple Temporal Problem (STP) has been widely used in various applications to schedule tasks. For dynamical systems, scheduling needs to be efficient and flexible to handle uncertainty and perturbation. To this end, modern approaches usually encode the temporal information as an STP instance. This representation contains redundant information, which can not only take a significant amount of storage space, but also make scheduling inefficient due to the non-concise representation. In this paper, we investigate the problem of simplifying an STP instance by removing redundant information. We show that such a simplification can result in a unique minimal representation without loss of temporal information, and present an efficient algorithm to achieve this task. Evaluation on a large benchmark dataset of STP exhibits a significant reduction in redundant information for the involved instances.
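To make the notion of redundancy concrete, the sketch below treats each STP constraint x_j - x_i <= w as a weighted edge and flags an edge as redundant when the remaining edges already imply a bound at least as tight. This naive edge-by-edge check (via Floyd-Warshall) is only an illustration of the idea; it ignores the interactions between removals that the paper's minimal-representation algorithm handles, and the example network is made up.

```python
# Naive redundancy check for a Simple Temporal Network. Each constraint
# x_j - x_i <= w is an edge i -> j with weight w; an edge is treated as
# redundant if the shortest path over the remaining edges is already <= w.
import math

def shortest_paths(n, edges):
    """Floyd-Warshall over a list of (i, j, w) edges; returns a distance matrix."""
    dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        dist[i][j] = min(dist[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def redundant_edges(n, edges):
    """Edges whose bound is implied by the other edges (checked in isolation)."""
    redundant = []
    for idx, (i, j, w) in enumerate(edges):
        others = [e for k, e in enumerate(edges) if k != idx]
        if shortest_paths(n, others)[i][j] <= w:
            redundant.append((i, j, w))
    return redundant

if __name__ == "__main__":
    # x1 - x0 <= 5, x2 - x1 <= 3, and the implied (hence redundant) x2 - x0 <= 10
    edges = [(0, 1, 5), (1, 2, 3), (0, 2, 10)]
    print(redundant_edges(3, edges))   # -> [(0, 2, 10)]
```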
Long, Z, Schockaert, S & Li, S 2016, 'Encoding large RCC8 scenarios using rectangular pseudo-solutions', Proceedings of the International Conference on Knowledge Representation and Reasoning, International Conference on Principles of Knowledge Representation and Reasoning, Association for the Advancement of Artificial Intelligence, Cape Town, South Africa, pp. 463-472.
View description>>
Most approaches in the field of qualitative spatial reasoning (QSR) use constraint networks to encode spatial scenarios. The size of these networks is quadratic in the number of variables, which has severely limited the real-world application of QSR. In this paper, we propose another representation of spatial scenarios, in which each variable is associated with one or more rectangles. Instead of requiring these rectangles to define a solution of the corresponding constraint network, we construct sequences of rectangles that define partial solutions to progressively weaker constraint networks. We present experimental results that illustrate the effectiveness of this strategy.
Long, Z, Sioutis, M & Li, S 2016, 'Efficient path consistency algorithm for large qualitative constraint networks', IJCAI International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence, AAAI Press, New York City, USA, pp. 1202-1208.
View description>>
We propose a new algorithm called DPC+ to enforce partial path consistency (PPC) on qualitative constraint networks. PPC restricts path consistency (PC) to a triangulation of the underlying constraint graph of a network. As PPC retains the sparseness of a constraint graph, it can make reasoning tasks such as consistency checking and minimal labelling of large qualitative constraint networks much easier to tackle than PC. For qualitative constraint networks defined over any distributive subalgebra of well-known spatio-temporal calculi, such as the Region Connection Calculus and the Interval Algebra, we show that DPC+ can achieve PPC very fast. Indeed, the algorithm enforces PPC on a qualitative constraint network by processing each triangle in a triangulation of its underlying constraint graph at most three times. Our experiments demonstrate significant improvements of DPC+ over the state-of-the-art PPC enforcing algorithm.
Pal, S, Alaei, A, Pal, U & Blumenstein, M 2016, 'Performance of an Off-Line Signature Verification Method Based on Texture Features on a Large Indic-Script Signature Dataset', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 72-77.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, a signature verification method based on texture features involving off-line signatures written in two different Indian scripts is proposed. Both Local Binary Patterns (LBP) and Uniform Local Binary Patterns (ULBP), as powerful texture feature extraction techniques, are used for characterizing off-line signatures. The Nearest Neighbour (NN) technique is considered as the similarity metric for signature verification in the proposed method. To evaluate the proposed verification approach, a large Bangla and Hindi off-line signature dataset (BHSig260) comprising 6240 (260×24) genuine signatures and 7800 (260×30) skilled forgeries was introduced and further used for experimentation. We further used the GPDS-100 signature dataset for a comparison. The experiments were conducted, and the verification accuracies were separately computed for the LBP and ULBP texture features. There were no remarkable changes in the results obtained applying the LBP and ULBP features for verification when the BHSig260 and GPDS-100 signature datasets were used for experimentation.
Saqib, M, Daud Khan, S & Blumenstein, M 2016, 'Texture-based feature mining for crowd density estimation: A study', 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Palmerston North, New Zealand.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Texture features are important descriptors for many image analysis applications. The objectives of this research are to determine distinctive texture features for crowd density estimation and counting. In this paper, we have comprehensively reviewed different texture features and their possible combinations to evaluate their performance on pedestrian crowds. A two-stage classification and regression based framework has been proposed for the performance evaluation of all the texture features for crowd density estimation and counting. According to the framework, input images are divided into blocks, and blocks into cells of different sizes, having varying crowd density levels. Due to perspective distortion, people appearing close to the camera contribute more to the feature vector than people far away. Therefore, the extracted features are normalized using a perspective normalization map of the scene. At the first stage, image blocks are classified using a multi-class SVM into different density levels. At the second stage, Gaussian Process Regression is used to regress low-level features to counts. Various texture features and their possible combinations are evaluated on a publicly available dataset.
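The two-stage structure (an SVM assigning a coarse density level, then a per-level Gaussian Process regressor mapping features to counts) can be sketched as below. The features, counts, and level thresholds are synthetic placeholders; feature extraction, perspective normalisation, and the real datasets used in the paper are omitted.

```python
# Schematic two-stage pipeline on synthetic placeholder data: stage 1 classifies
# blocks into density levels with an SVM, stage 2 regresses features to counts
# with one Gaussian Process regressor per level. Illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
n_blocks, n_features, n_levels = 300, 16, 3

features = rng.normal(size=(n_blocks, n_features))         # stand-in texture features
counts = rng.integers(0, 60, size=n_blocks).astype(float)   # stand-in ground-truth counts
levels = np.digitize(counts, bins=[10, 30])                  # 0: sparse, 1: medium, 2: dense

# Stage 1: classify blocks into density levels.
classifier = SVC(kernel="rbf").fit(features, levels)

# Stage 2: one regressor per density level, regressing features to counts.
regressors = {
    level: GaussianProcessRegressor().fit(features[levels == level], counts[levels == level])
    for level in range(n_levels)
}

def estimate_count(block_features):
    """Predict the density level, then use that level's regressor for the count."""
    level = int(classifier.predict(block_features.reshape(1, -1))[0])
    return float(regressors[level].predict(block_features.reshape(1, -1))[0])

print("estimated count for one block:", round(estimate_count(features[0]), 1))
```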
Sioutis, M, Long, Z & Li, S 2016, 'Efficiently Reasoning about Qualitative Constraints through Variable Elimination', Proceedings of the 9th Hellenic Conference on Artificial Intelligence, SETN '16: 9th Hellenic Conference on Artificial Intelligence, ACM, Thessaloniki, Greece.
View/Download from: Publisher's site
View description>>
© 2016 ACM. We introduce, study, and evaluate a novel algorithm in the context of qualitative constraint-based spatial and temporal reasoning that is based on the idea of variable elimination, a simple and general exact inference approach in probabilistic graphical models. Given a qualitative constraint network M, our algorithm enforces a particular directional local consistency on M, which we denote by ←-consistency. Our discussion is restricted to distributive subclasses of relations, i.e., sets of relations closed under converse, intersection, and weak composition and for which weak composition distributes over non-empty intersections for all of their relations. We demonstrate that enforcing ←-consistency on a given qualitative constraint network defined over a distributive subclass of relations allows us to decide its satisfiability. The experimentation that we have conducted with random and real-world qualitative constraint networks defined over a distributive subclass of relations of the Region Connection Calculus shows that our approach exhibits unparalleled performance against competing state-of-the-art approaches for checking the satisfiability of such constraint networks.
Sutter, D, Tomamichel, M & Harrow, AW 2016, 'Strengthened monotonicity of relative entropy via pinched Petz recovery map', 2016 IEEE International Symposium on Information Theory (ISIT), 2016 IEEE International Symposium on Information Theory (ISIT), IEEE, Barcelona, pp. 760-764.
View/Download from: Publisher's site
Suwanwiwat, H, Pal, U & Blumenstein, M 2016, 'An Automatic Off-Line Short Answer Assessment System Using Novel Hybrid Features', 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Gold Coast, Queensland, Australia.
View/Download from: Publisher's site
View description>>
To date, paper-based examinations are still in use worldwide at all levels of education (e.g. secondary and tertiary). However, the literature on off-line automatic assessment systems employing off-line handwriting recognition is sparse. This paper proposes an off-line automatic assessment system employing a hybrid feature extraction technique - a newly proposed Modified Direction and Gaussian Grid Feature (MDGGF) - along with its enhanced technique. In this study other original feature extraction techniques, together with their enhanced features, were also used to compare the efficiency of feature extraction techniques. Classifiers, namely artificial neural networks and support vector machines, were employed in the experiments. Two types of datasets were employed in the experiments for both feature extraction accuracy and efficiency comparisons. The best correct recognition rate of 98.33% with 100% accuracy was obtained when the proposed MDGGF was applied to the off-line automatic assessment system.
Suwanwiwat, H, Pal, U & Blumenstein, M 2016, 'An investigation of novel combined features for a handwritten short answer assessment system', Proceedings of International Conference on Frontiers in Handwriting Recognition, ICFHR, International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 102-107.
View/Download from: Publisher's site
View description>>
This paper proposes an off-line automatic assessment system utilising novel combined feature extraction techniques. The proposed feature extraction techniques are: 1) the proposed Water Reservoir, Loop, Modified Direction and Gaussian Grid Feature (WRL-MDGGF), and 2) the proposed Gravity, Water Reservoir, Loop, Modified Direction and Gaussian Grid Feature (G-WRL-MDGGF). These techniques, together with their original features and other combined feature extraction techniques, were employed in an investigation of the efficiency of feature extraction techniques for an automatic off-line short answer assessment system. The proposed system utilised two classifiers, namely artificial neural networks and Support Vector Machines (SVMs), two types of datasets, and two different thresholds in this investigation. Promising recognition rates of 94.85% and 94.88% were obtained when the proposed WRL-MDGGF and G-WRL-MDGGF were employed, respectively, using SVMs.
Tomamichel, M & Hayashi, M 2016, 'Operational interpretation of Rényi conditional mutual information via composite hypothesis testing against Markov distributions', 2016 IEEE International Symposium on Information Theory (ISIT), 2016 IEEE International Symposium on Information Theory (ISIT), IEEE, Barcelona, Spain, pp. 585-589.
View/Download from: Publisher's site
View description>>
We revisit the problem of asymmetric binary hypothesis testing against a composite alternative hypothesis. We introduce a general framework to treat such problems when the alternative hypothesis adheres to certain axioms. In this case we find the threshold rate, the optimal error and strong converse exponents (at large deviations from the threshold), and the second-order asymptotics (at small deviations from the threshold). We apply our results to find operational interpretations of Rényi information measures. In particular, in the case where the alternative hypothesis consists of certain tripartite distributions satisfying the Markov property, we find that the optimal exponents are determined by the Rényi conditional mutual information.
Wang, B, Gao, Y, Sun, C, Blumenstein, M & La Salle, J 2016, 'A Local Scale Selection Scheme for Multiscale Area Integral Invariants', 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Gold Coast, QLD, Australia.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Area integral invariant (AII) is a functional obtained by performing integral operations on the closed planar contour of a shape via convolution with disc kernels. This shape descriptor is insensitive to noise and robust with respect to occlusions. AII intrinsically introduces the notion of scale through the size of the kernel radius; however, how to select an optimal scale remains unresolved. In this paper, we propose a local scale selection scheme for generating multiscale area integral invariants. For the same scale level, the disc kernel size is not fixed and varies with the contour point where the disc is centered. This scheme also provides a scale assignment that emphasises feature extraction at finer scales. The strong discriminative power of the multiscale area integral invariant derived from the proposed scale selection scheme has been validated through experiments on very challenging leaf image retrieval tasks.
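For intuition, the fixed-radius version of the area integral invariant can be sketched as follows: for each contour point of a binary shape, take the fraction of a disc of radius r centred on that point that falls inside the shape (roughly 0.5 on a smooth boundary, larger at concavities, smaller at convexities). The point-dependent local scale selection that is the paper's contribution is not reproduced here; the shape, contour extraction, and radius are illustrative assumptions.

```python
# Minimal sketch of a fixed-scale area integral invariant on a binary mask:
# for each contour point, the fraction of a radius-r disc lying inside the
# shape. The paper's local (per-point) scale selection is not implemented.
import numpy as np

def area_integral_invariant(mask, contour_points, radius):
    """mask: 2D bool array (True inside the shape); contour_points: (N, 2) row/col."""
    rows, cols = np.indices(mask.shape)
    values = []
    for r0, c0 in contour_points:
        disc = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius ** 2
        values.append(mask[disc].mean())   # fraction of the disc inside the shape
    return np.array(values)

if __name__ == "__main__":
    size = 64
    rr, cc = np.indices((size, size))
    mask = (rr - 32) ** 2 + (cc - 32) ** 2 <= 20 ** 2      # a disc-shaped "leaf"
    # crude contour: inside pixels with at least one outside 4-neighbour
    eroded = mask.copy()
    eroded[1:-1, 1:-1] = (mask[1:-1, 1:-1] & mask[:-2, 1:-1] & mask[2:, 1:-1]
                          & mask[1:-1, :-2] & mask[1:-1, 2:])
    contour = np.argwhere(mask & ~eroded)
    aii = area_integral_invariant(mask, contour, radius=6)
    print(f"mean AII on a circular contour: {aii.mean():.2f} (close to 0.5)")
```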