Mathieson, L, Mendes, A, Marsden, J, Pond, J & Moscato, P 2017, 'Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques' in Methods in Molecular Biology, Springer New York, Germany, pp. 299-325.
© Springer Science+Business Media New York 2017. This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also robust for within-class classification. Our method is generic, and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). The method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified; then, a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public-domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
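The core combinatorial object above, the (α, β)-k-feature set, has a definition that is easy to state in code. The sketch below checks the condition directly on binary data; the function name, data layout, and brute-force pair loop are illustrative (the chapter's contribution is the reduction rules and metaheuristic that avoid exactly this kind of exhaustive search):

```python
from itertools import combinations

def is_alpha_beta_k_feature_set(X, y, S, alpha, beta):
    """Check the (alpha, beta)-k-feature-set condition on binary data.

    Every pair of examples from different classes must differ on at
    least alpha features of S; every pair of examples from the same
    class must agree on at least beta features of S.
    """
    for (i, xi), (j, xj) in combinations(enumerate(X), 2):
        differing = sum(xi[f] != xj[f] for f in S)
        if y[i] != y[j] and differing < alpha:
            return False
        if y[i] == y[j] and len(S) - differing < beta:
            return False
    return True

# Tiny example: features 0 and 2 together separate the two classes
# with a margin of 2 and keep within-class pairs in agreement.
X = [[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]]
y = [0, 0, 1, 1]
print(is_alpha_beta_k_feature_set(X, y, S=[0, 2], alpha=2, beta=1))  # → True
```

The optimization problem the chapter addresses is finding the smallest such S, which makes the resulting classification rules both compact and robust.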
Alaei, A, Pal, S, Pal, U & Blumenstein, M 2017, 'An Efficient Signature Verification Method Based on an Interval Symbolic Representation and a Fuzzy Similarity Measure', IEEE Transactions on Information Forensics and Security, vol. 12, no. 10, pp. 2360-2372.
© 2017 IEEE. In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual's handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for the verification of the test sample. To evaluate the proposed verification approach, a benchmark offline English signature dataset (GPDS-300) and a large dataset (BHSig260) composed of Bangla and Hindi offline signatures were used. A comparison of our results with some recent signature verification methods available in the literature is provided in terms of average error rate, and we note that the proposed method consistently outperforms them when the number of training samples is eight or more.
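The interval-symbolic idea above can be sketched in a few lines. In this toy version, each feature of a signature class is summarized by an interval, and verification scores a test signature by average per-feature membership. The mean ± standard deviation interval choice and the decay function are assumptions for illustration; the paper's exact interval construction and fuzzy measure differ in detail:

```python
import statistics

def interval_model(train_features):
    """Build an interval-valued symbolic model for one signature class:
    one [low, high] interval per feature, here mean +/- sample stdev
    over that class's training signatures (an assumed construction)."""
    model = []
    for column in zip(*train_features):
        mu = statistics.mean(column)
        sd = statistics.stdev(column)
        model.append((mu - sd, mu + sd))
    return model

def fuzzy_similarity(test, model):
    """Average per-feature membership: 1 inside the interval, decaying
    with distance outside it (illustrative, not the paper's measure)."""
    scores = []
    for value, (low, high) in zip(test, model):
        if low <= value <= high:
            scores.append(1.0)
        else:
            dist = (low - value) if value < low else (value - high)
            scores.append(1.0 / (1.0 + dist))
    return sum(scores) / len(scores)

genuine = [[0.8, 1.2], [1.0, 1.0], [0.9, 1.1]]  # toy feature vectors
model = interval_model(genuine)
print(fuzzy_similarity([0.95, 1.05], model))  # inside both intervals → 1.0
```

A decision threshold on this score then separates genuine signatures from forgeries, with the threshold tuned per class or globally.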
Braun, G, Jain, R, Lee, T & Pokutta, S 2017, 'Information-theoretic approximations of the nonnegative rank', computational complexity, vol. 26, no. 1, pp. 147-197.
Chapman, RJ, Karim, A, Huang, Z, Flammia, ST, Tomamichel, M & Peruzzo, A 2017, 'Beating the Classical Limits of Information Transmission using a Quantum Decoder', Phys. Rev. A, vol. 97, no. 1, p. 012315.
Encoding schemes and error-correcting codes are widely used in information technology to improve the reliability of data transmission over real-world communication channels. Quantum information protocols can further enhance the performance in data transmission by encoding a message in quantum states; however, most proposals to date have focused on the regime of a large number of uses of the noisy channel, which is unfeasible with current quantum technology. We experimentally demonstrate quantum enhanced communication over an amplitude damping noisy channel with only two uses of the channel per bit and a single entangling gate at the decoder. By simulating the channel using a photonic interferometric setup, we experimentally increase the reliability of transmitting a data bit by greater than 20% for a certain damping range over classically sending the message twice. We show how our methodology can be extended to larger systems by simulating the transmission of a single bit with up to eight uses of the channel and a two-bit message with three uses of the channel, predicting a quantum enhancement in all cases.
Cheng, H-C & Hsieh, M-H 2017, 'Moderate Deviation Analysis for Classical-Quantum Channels and Quantum Hypothesis Testing', IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 1385-1403.
In this work, we study the tradeoffs between the error probabilities of classical-quantum channels and the blocklength $n$ when the transmission rates approach the channel capacity at a rate slower than $1/\sqrt{n}$, a research topic known as moderate deviation analysis. We show that the optimal error probability vanishes under this rate convergence. Our main technical contributions are a tight quantum sphere-packing bound, obtained via Chaganty and Sethuraman's concentration inequality in strong large deviation theory, and asymptotic expansions of error-exponent functions. Moderate deviation analysis for quantum hypothesis testing is also established. The converse directly follows from our channel coding result, while the achievability relies on a martingale inequality.
Cheng, H-C, Hsieh, M-H & Tomamichel, M 2017, 'Quantum Sphere-Packing Bounds with Polynomial Prefactors', IEEE Transactions on Information Theory, vol. 65, no. 5, pp. 2872-2898.
We study lower bounds on the optimal error probability in classical coding over classical-quantum channels at rates below the capacity, commonly termed quantum sphere-packing bounds. Winter and Dalai have derived such bounds for classical-quantum channels; however, the exponents in their bounds only coincide when the channel is classical. In this paper, we show that these two exponents admit a variational representation and are related by the Golden-Thompson inequality, reaffirming that Dalai's expression is stronger in general classical-quantum channels. Second, we establish a sphere-packing bound for classical-quantum channels, which significantly improves Dalai's prefactor from the order of subexponential to polynomial. Furthermore, the gap between the obtained error exponent for constant composition codes and the best known classical random coding exponent vanishes in the order of $o(\log n / n)$, indicating our sphere-packing bound is almost exact in the high rate regime. Finally, for a special class of symmetric classical-quantum channels, we can completely characterize its optimal error probability without the constant composition code assumption. The main technical contributions are two converse Hoeffding bounds for quantum hypothesis testing and the saddle-point properties of error exponent functions.
Chubb, CT, Tan, VYF & Tomamichel, M 2017, 'Moderate deviation analysis for classical communication over quantum channels', Communications in Mathematical Physics, vol. 355, no. 3, pp. 1283-1315.
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altug and Wagner as well as Polyanskiy and Verdu. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
Fang, K, Wang, X, Tomamichel, M & Duan, R 2017, 'Non-asymptotic entanglement distillation', IEEE Transactions on Information Theory, vol. 65, no. 10, pp. 6454-6465.
Entanglement distillation, an essential quantum information processing task, refers to the conversion from multiple copies of noisy entangled states to a smaller number of highly entangled states. In this work, we study the non-asymptotic fundamental limits for entanglement distillation. We investigate the optimal tradeoff between the distillation rate, the number of prepared states, and the error tolerance. First, we derive the one-shot distillable entanglement under completely positive partial transpose preserving operations as a semidefinite program and demonstrate an exact characterization via the quantum hypothesis testing relative entropy. Second, we establish efficiently computable second-order estimations of the distillation rate for general quantum states. In particular, we provide explicit as well as approximate evaluations for various quantum states of practical interest, including pure states, mixtures of Bell states, maximally correlated states and isotropic states.
Granade, C, Ferrie, C & Flammia, ST 2017, 'Practical adaptive quantum tomography', New Journal of Physics, vol. 19, no. 11, pp. 113017-113017.
© 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements.
Grochow, JA & Qiao, Y 2017, 'Algorithms for Group Isomorphism via Group Extensions and Cohomology', SIAM Journal on Computing, vol. 46, no. 4, pp. 1153-1216.
© 2017 SIAM. The isomorphism problem for finite groups of order n (GpI) has long been known to be solvable in n^(log n+O(1)) time, but only recently were polynomial-time algorithms designed for several interesting group classes. Inspired by recent progress, we revisit the strategy for GpI via the extension theory of groups. The extension theory describes how a normal subgroup N is related to G/N via G, and this naturally leads to a divide-and-conquer strategy that 'splits' GpI into two subproblems: one regarding group actions on other groups, and one regarding group cohomology. When the normal subgroup N is abelian, this strategy is well known. Our first contribution is to extend this strategy to handle the case when N is not necessarily abelian. This allows us to provide a unified explanation of all recent polynomial-time algorithms for special group classes. Guided by this strategy, to make further progress on GpI, we consider central-radical groups, proposed in Babai et al. [Code equivalence and group isomorphism, in Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'11), SIAM, Philadelphia, 2011, ACM, New York, pp. 1395-1408]: the class of groups such that G modulo its center has no abelian normal subgroups. This class is a natural extension of the group class considered by Babai et al. [Polynomial-time isomorphism test for groups with no abelian normal subgroups (extended abstract), in International Colloquium on Automata, Languages, and Programming (ICALP), 2012, pp. 51-62], namely those groups with no abelian normal subgroups. Following the above strategy, we solve GpI in n^(O(log log n)) time for central-radical groups, and in polynomial time for several prominent subclasses of central-radical groups. We also solve GpI in n^(O(log log n)) time for groups whose solvable normal subgroups are elementary abelian but not necessarily central. As far as we are aware, this is the first time there have been worst-case guarantees on an n...
Guan, J, Feng, Y & Ying, M 2017, 'Super-activating Quantum Memory with Entanglement', Quantum Information and Computation, vol. 18, no. 13-14, pp. 1115-1124.
Noiseless subsystems were proved to be an efficient and faithful approach to preserve fragile information against decoherence in quantum information processing and quantum computation. They were employed to design a general (hybrid) quantum memory cell model that can store both quantum and classical information. In this paper, we find an interesting new phenomenon: the purely classical memory cell can be super-activated to preserve quantum states, whereas the null memory cell can only be super-activated to encode classical information. Furthermore, necessary and sufficient conditions for this phenomenon are discovered, so that the super-activation can be easily checked by examining certain eigenvalues of the quantum memory cell without computing the noiseless subsystems explicitly. In particular, it is found that entangled and separable stationary states are responsible for the super-activation of storing quantum and classical information, respectively.
Harper, R, Chapman, RJ, Ferrie, C, Granade, C, Kueng, R, Naoumenko, D, Flammia, ST & Peruzzo, A 2017, 'Explaining quantum correlations through evolution of causal models', Physical Review A, vol. 95, no. 4, pp. 1-16.
© 2017 American Physical Society. We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
Herr, D, Nori, F & Devitt, SJ 2017, 'Lattice surgery translation for quantum computation', New Journal of Physics, vol. 19, no. 1, pp. 013034-013034.
© 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. In this paper we outline a method for a compiler to translate any non-fault-tolerant quantum circuit to the geometric representation of the lattice surgery error-correcting code using inherent merge and split operations. Since the efficiency of state distillation procedures has not yet been investigated in the lattice surgery model, their translation is given as an example using the proposed method. The resource requirements seem comparable to, or better than, those of the defect-based state distillation process, but modularity and eventual implementability make the lattice surgery model an interesting alternative to braiding.
Herr, D, Nori, F & Devitt, SJ 2017, 'Optimization of Lattice Surgery is NP-Hard', npj Quantum Information, vol. 3, no. 1, pp. 1-5.
The traditional method for computation in either the surface code or the Raussendorf model is the creation of holes, or 'defects', within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult, and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Ivanyos, G, Qiao, Y & Subrahmanyam, KV 2017, 'Non-commutative Edmonds’ problem and matrix semi-invariants', computational complexity, vol. 26, no. 3, pp. 717-763.
© 2016, Springer International Publishing. In 1967, J. Edmonds introduced the problem of computing the rank over the rational function field of an n × n matrix T with integral homogeneous linear polynomial entries. In this paper, we consider the non-commutative version of Edmonds' problem: compute the rank of T over the free skew field. This problem has been proposed, sometimes in disguise, from several different perspectives in the study of, for example, the free skew field itself (Cohn in J Symbol Log 38(2):309–314, 1973), matrix spaces of low rank (Fortin-Reutenauer in Sémin Lothar Comb 52:B52f, 2004), Edmonds' original problem (Gurvits in J Comput Syst Sci 69(3):448–484, 2004), and more recently, non-commutative arithmetic circuits with divisions (Hrubeš and Wigderson in Theory Comput 11:357–393, 2015. doi:10.4086/toc.2015.v011a014). It is known that this problem relates to the following invariant ring, which we call the F-algebra of matrix semi-invariants, denoted R(n, m). For a field F, it is the ring of invariant polynomials for the action of SL(n, F) × SL(n, F) on tuples of matrices: (A, C) ∈ SL(n, F) × SL(n, F) sends (B1, …, Bm) ∈ M(n, F)^⊕m to (A B1 C^T, …, A Bm C^T). Then those T with non-commutative rank < n correspond to those points in the nullcone of R(n, m). In particular, if the nullcone of R(n, m) is defined by elements of degree ≤ σ, then there follows a poly(n, σ)-time randomized algorithm to decide whether the non-commutative rank of T is full. To our knowledge, previously the best bound for σ was O(n^2 · 4^(n^2)) over algebraically closed fields of characteristic 0 (Derksen in Proc Am Math Soc 129(4):955–964, 2001). We now state the main contributions of this paper. We observe that by using an algorithm of Gurvits, and assuming the above bound σ for R(n, m) over Q, deciding whether or not T has non-commutative rank < n over Q can be done deterministically in time polynomial in the input size and σ. When F is large enough, we devise an algorithm ...
Khare, V, Shivakumara, P, Paramesran, R & Blumenstein, M 2017, 'Arbitrarily-oriented multi-lingual text detection in video', Multimedia Tools and Applications, vol. 76, no. 15, pp. 16625-16655.
© 2016 Springer Science+Business Media New York. Text detection in arbitrarily-oriented multi-lingual video is an emerging area of research because it plays a vital role for developing real-time indexing and retrieval systems. In this paper, we propose to explore moments for identifying text candidates. We introduce a novel idea for determining automatic windows to extract moments for tackling multi-font and multi-sized text in video based on stroke width information. The temporal information is explored to find deviations between moving and non-moving pixels in successive frames iteratively, which results in static clusters containing caption text and dynamic clusters containing scene text, as well as background pixels. The gradient directions of pixels in static and dynamic clusters are analyzed to identify the potential text candidates. Furthermore, boundary growing is proposed that expands the boundary of potential text candidates until it finds neighbor components based on the nearest neighbor criterion. This process outputs text lines appearing in the video. Experimental results on standard video data, namely, ICDAR 2013, ICDAR 2015, YVT videos and on our own English and Multi-lingual videos demonstrate that the proposed method outperforms the state-of-the-art methods.
Kieferová, M & Wiebe, N 2017, 'Tomography and generative training with quantum Boltzmann machines', Physical Review A, vol. 96, no. 6.
Kong, S, Li, S & Sioutis, M 2017, 'Exploring Directional Path-Consistency for Solving Constraint Networks', The Computer Journal, vol. 61, no. 9.
Among the local consistency techniques used for solving constraint networks, path-consistency (PC) has received a great deal of attention. However, enforcing PC is computationally expensive and sometimes even unnecessary. Directional path-consistency (DPC) is a weaker notion of PC that considers a given variable ordering and can thus be enforced more efficiently than PC. This paper shows that DPC (the DPC enforcing algorithm of Dechter and Pearl) decides the constraint satisfaction problem (CSP) of a constraint language if it is complete and has the variable elimination property (VEP). However, we also show that no complete VEP constraint language can have a domain with more than 2 values. We then present a simple variant of the DPC algorithm, called DPC*, and show that the CSP of a constraint language can be decided by DPC* if it is closed under a majority operation. In fact, DPC* is sufficient for guaranteeing backtrack-free search for such constraint networks. Examples of majority-closed constraint classes include the classes of connected row-convex (CRC) constraints and tree-preserving constraints, which have found applications in various domains, such as scene labeling, temporal reasoning, geometric reasoning, and logical filtering. Our experimental evaluations show that DPC* significantly outperforms the state-of-the-art algorithms for solving majority-closed constraints.
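The DPC scheme the paper builds on is a single reverse sweep over a variable ordering: before a variable is eliminated, its constraints are composed and intersected into the constraints among its earlier neighbours. A minimal sketch for binary constraints over finite domains follows; the data representation and function names are our own, and this is the Dechter-Pearl-style baseline, not the DPC* variant:

```python
def compose(r1, r2):
    """Composition of two binary relations given as sets of value pairs."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def dpc(n, domains, constraints):
    """Directional path-consistency sweep (Dechter-Pearl style sketch).

    Variables 0..n-1 are eliminated from last to first. For each
    eliminated variable k, the constraint between every pair of its
    earlier neighbours i < j is intersected with the composition of
    R_ik and the inverse of R_jk. Returns False if some constraint
    becomes empty (inconsistency), else the tightened constraint dict,
    keyed (i, j) with i < j; missing edges are treated as universal.
    """
    def get(i, j):
        if (i, j) in constraints:
            return constraints[(i, j)]
        return {(a, b) for a in domains[i] for b in domains[j]}

    for k in range(n - 1, -1, -1):
        earlier = [i for i in range(k) if (i, k) in constraints]
        for x in range(len(earlier)):
            for y in range(x + 1, len(earlier)):
                i, j = earlier[x], earlier[y]
                inv_jk = {(b, a) for (a, b) in get(j, k)}  # invert R_jk
                tightened = get(i, j) & compose(get(i, k), inv_jk)
                if not tightened:
                    return False
                constraints[(i, j)] = tightened
    return constraints
```

For example, with three 0/1 variables constrained by x0 = x2 and x1 = x2, eliminating x2 installs the implied constraint x0 = x1, so a later forward search over x0, x1 never needs to backtrack over x2.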
Kong, S, Li, S, Li, Y & Long, Z 2017, 'On tree-preserving constraints', Annals of Mathematics and Artificial Intelligence, vol. 81, no. 3-4, pp. 241-271.
© 2017, Springer International Publishing Switzerland. The study of tractable subclasses of constraint satisfaction problems is a central topic in constraint solving. Tree convex constraints are extensions of the well-known row convex constraints. Just like the latter, every path-consistent tree convex constraint network is globally consistent. However, it is NP-complete to decide whether a tree convex constraint network has solutions. This paper studies and compares three subclasses of tree convex constraints, which are called chain-, path-, and tree-preserving constraints respectively. The class of tree-preserving constraints strictly contains the subclasses of path-preserving and arc-consistent chain-preserving constraints. We prove that, when enforcing strong path-consistency on a tree-preserving constraint network, in each step, the network remains tree-preserving. This ensures the global consistency of consistent tree-preserving networks after enforcing strong path-consistency, and also guarantees the applicability of the partial path-consistency algorithms to tree-preserving constraint networks, which is usually much more efficient than the path-consistency algorithms for large sparse constraint networks. As an application, we show that the class of tree-preserving constraints is useful in solving the scene labelling problem.
Langford, NK, Sagastizabal, R, Kounalakis, M, Dickel, C, Bruno, A, Luthi, F, Thoen, DJ, Endo, A & DiCarlo, L 2017, 'Experimentally simulating the dynamics of quantum light and matter at deep-strong coupling', Nature Communications, vol. 8, no. 1, pp. 1715-1715.
The quantum Rabi model describing the fundamental interaction between light and matter is a cornerstone of quantum physics. It predicts exotic phenomena like quantum phase transitions and ground-state entanglement in ultrastrong and deep-strong coupling regimes, where coupling strengths are comparable to or larger than subsystem energies. Demonstrating dynamics remains an outstanding challenge, the few experiments reaching these regimes being limited to spectroscopy. Here, we employ a circuit quantum electrodynamics chip with moderate coupling between a resonator and transmon qubit to realise accurate digital quantum simulation of deep-strong coupling dynamics. We advance the state of the art in solid-state digital quantum simulation by using up to 90 second-order Trotter steps and probing both subsystems in a combined Hilbert space dimension of ∼80, demonstrating characteristic Schrödinger-cat-like entanglement and large photon build-up. Our approach will enable exploration of extreme coupling regimes and quantum phase transitions, and demonstrates a clear first step towards larger complexities such as in the Dicke model.
Lee, T, Wei, Z & de Wolf, R 2017, 'Some upper and lower bounds on PSD-rank', Mathematical Programming, vol. 162, no. 1-2, pp. 495-521.
Lekitsch, B, Weidt, S, Fowler, AG, Mølmer, K, Devitt, SJ, Wunderlich, C & Hensinger, WK 2017, 'Blueprint for a microwave trapped ion quantum computer', Science Advances, vol. 3, no. 2, pp. 1-11.
Design to build a trapped ion quantum computer with modules connected by ion transport and voltage-driven quantum gate technology.
Li, Y, Wang, X & Duan, R 2017, 'Indistinguishability of bipartite states by positive-partial-transpose operations in the many-copy scenario', Phys. Rev. A, vol. 95, no. 5, p. 052346.
A bipartite subspace $S$ is called strongly positive-partial-transpose-unextendible (PPT-unextendible) if for every positive integer $k$, there is no PPT operator supported on the orthogonal complement of $S^{\otimes k}$. We show that a subspace is strongly PPT-unextendible if it contains a PPT-definite operator (a positive semidefinite operator whose partial transpose is positive definite). Based on this, we propose a simple criterion for verifying whether a set of bipartite orthogonal quantum states is indistinguishable by PPT operations in the many-copy scenario. Utilizing this criterion, we further point out that any entangled pure state and its orthogonal complement cannot be distinguished by PPT operations in the many-copy scenario. On the other hand, we show that the minimum dimension of strongly PPT-unextendible subspaces in an $m\otimes n$ system is $m+n-1$, which involves a generalization of the result that non-positive-partial-transpose (NPT) subspaces can be as large as any entangled subspace [N. Johnston, Phys. Rev. A 87: 064302 (2013)].
Lund, AP, Bremner, MJ & Ralph, TC 2017, 'Quantum Sampling Problems, BosonSampling and Quantum Supremacy', npj Quantum Information, vol. 3, no. 1, pp. 1-8.
There is a large body of evidence for the potential of greater computational power using information carriers that are quantum mechanical over those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a large enough quantum computation that definitively demonstrates quantum supremacy. Recently the study of computational problems that produce samples from probability distributions has added to both our understanding of the power of quantum algorithms and lowered the requirements for demonstration of fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also permit physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we will review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We will present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling.
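A concrete way to see why BosonSampling is believed hard classically: each output probability is proportional to |Perm(A)|² for a submatrix A of the interferometer unitary, and computing the permanent is #P-hard. The sketch below evaluates the permanent with Ryser's inclusion-exclusion formula, the standard style of exact algorithm (illustrative code, not tied to any particular experiment):

```python
def permanent(M):
    """Matrix permanent via Ryser's inclusion-exclusion formula.

    Runs in O(2^n * n^2) time, far faster than the naive sum over all
    n! permutations but still exponential: unlike the determinant, the
    permanent has no known polynomial-time algorithm and is #P-hard.
    """
    n = len(M)
    total = 0.0
    for mask in range(1, 1 << n):  # every nonempty column subset
        cols = [j for j in range(n) if mask >> j & 1]
        prod = 1.0
        for row in M:
            prod *= sum(row[j] for j in cols)
        total += (-1) ** (n - len(cols)) * prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 → 10.0
```

The contrast with the determinant (same formula up to permutation signs, but polynomial-time computable) is exactly what the sampling-hardness arguments reviewed in the paper exploit.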
Mans, B & Mathieson, L 2017, 'Incremental Problems in the Parameterized Complexity Setting', Theory of Computing Systems, vol. 60, no. 1, pp. 3-19.
© 2016, Springer Science+Business Media New York. Dynamic systems are becoming steadily more important with the profusion of mobile and distributed computing devices. Coincidentally, incremental computation is a natural approach to dealing with ongoing changes. We explore incremental computation in the parameterized complexity setting and show that incrementalization leads to non-trivial complexity classifications. Interestingly, some incremental versions of hard problems become tractable, while others remain hard. Moreover, tractability or intractability is not a simple function of the problem's static complexity: every level of the W-hierarchy exhibits complete problems with both tractable and intractable incrementalizations. For problems that are already tractable in their static form, we also show that incrementalization can lead to interesting algorithms, improving upon the trivial approach of using the static algorithm at each step.
Mukhopadhyay, P & Qiao, Y 2017, 'Sparse multivariate polynomial interpolation on the basis of Schubert polynomials', computational complexity, vol. 26, no. 4, pp. 881-909.
© 2016, Springer International Publishing. Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in the study of cohomology rings of flag manifolds in the 1980s. These polynomials generalize Schur polynomials and form a linear basis of multivariate polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials, which generalize skew Schur polynomials and expand in the Schubert basis with the generalized Littlewood–Richardson coefficients. In this paper, we initiate the study of these two families of polynomials from the perspective of computational complexity theory. We first observe that skew Schubert polynomials, and therefore Schubert polynomials, are in #P (when evaluated on nonnegative integral inputs) and VNP. Our main result is a deterministic algorithm that computes the expansion of a polynomial f of degree d in Z[x1, …, xn] on the basis of Schubert polynomials, assuming an oracle computing Schubert polynomials. This algorithm runs in time polynomial in n, d, and the bit size of the expansion. This generalizes, and derandomizes, the sparse interpolation algorithm for symmetric polynomials in the Schur basis by Barvinok and Fomin (Adv Appl Math 18(3):271–285, 1997). In fact, our interpolation algorithm is general enough to accommodate any linear basis satisfying certain natural properties. Applications of the above results include a new algorithm that computes the generalized Littlewood–Richardson coefficients.
Nemoto, K, Devitt, S & Munro, WJ 2017, 'Noise management to achieve superiority in quantum information systems', Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 375, no. 2099, pp. 20160236-20160236.
View/Download from: Publisher's site
View description>>
Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue ‘Quantum technology for the 21st century’.
Peng, H, Lan, C, Liu, Y, Liu, T, Blumenstein, M & Li, J 2017, 'Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes', Oncotarget, vol. 8, no. 45, pp. 78901-78916.
View/Download from: Publisher's site
View description>>
© Peng et al. Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors: one is composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease gene enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
Shivakumara, P, Wu, L, Lu, T, Tan, CL, Blumenstein, M & Anami, BS 2017, 'Fractals based multi-oriented text detection system for recognition in mobile video images', Pattern Recognition, vol. 68, pp. 158-174.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd. Text detection in mobile video is challenging due to poor quality, complex backgrounds, arbitrary orientations and text movement. In this work, we introduce fractals for text detection in video captured by mobile cameras. We first use fractal properties such as self-similarity in a novel way in the gradient domain for enhancing low resolution mobile video. We then propose to use k-means clustering for separating text components from non-text ones. To make the method font size independent, fractal expansion is further explored in the wavelet domain in a pyramid structure for text components in the text cluster to identify text candidates. Next, potential text candidates are obtained by studying the optical flow property of text candidates. Direction guided boundary growing is finally proposed to extract multi-oriented texts. The method is tested on different datasets, which include low resolution video captured by mobile cameras, the benchmark ICDAR 2013 video, YouTube Video Text (YVT) data, ICDAR 2013, Microsoft, and MSRA arbitrary orientation natural scene datasets, to evaluate the performance of the proposed method in terms of recall, precision, F-measure and misdetection rate. To show the effectiveness of the proposed method, the results are compared with state-of-the-art methods.
Wang, X, Fang, K & Tomamichel, M 2017, 'On converse bounds for classical communication over quantum channels', IEEE Transactions on Information Theory, vol. 65, no. 7, pp. 4609-4619.
View/Download from: Publisher's site
View description>>
We explore several new converse bounds for classical communication over quantum channels in both the one-shot and asymptotic regimes. First, we show that the Matthews-Wehner meta-converse bound for entanglement-assisted classical communication can be achieved by activated, no-signalling assisted codes, suitably generalizing a result for classical channels. Second, we derive a new efficiently computable meta-converse on the amount of classical information unassisted codes can transmit over a single use of a quantum channel. As applications, we provide a finite resource analysis of classical communication over quantum erasure channels, including the second-order and moderate deviation asymptotics. Third, we explore the asymptotic analogue of our new meta-converse, the $\Upsilon$-information of the channel. We show that its regularization is an upper bound on the classical capacity, which is generally tighter than the entanglement-assisted capacity and other known efficiently computable strong converse bounds. For covariant channels we show that the $\Upsilon$-information is a strong converse bound.
Yu, N, Duan, R & Xu, Q 2017, 'Bounds on the Distance Between a Unital Quantum Channel and the Convex Hull of Unitary Channels', IEEE Transactions on Information Theory, vol. 63, no. 2, pp. 1299-1310.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Motivated by the recent resolution of the asymptotic quantum Birkhoff conjecture (AQBC), we attempt to estimate the distance between a given unital quantum channel and the convex hull of unitary channels. We provide two lower bounds on this distance by employing techniques from quantum information and operator algebras, respectively. We then show how to apply these results to construct some explicit counterexamples to AQBC. We also point out an interesting connection between Grothendieck's inequality and AQBC.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Impact of struck-out text on writer identification', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1465-1471.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The presence of struck-out text in handwritten manuscripts may affect the accuracy of automated writer identification. This paper presents a study on such effects of struck-out text. Here we consider offline English and Bengali handwritten document images. At first, the struck-out texts are detected using a hybrid classifier of a CNN (Convolutional Neural Network) and an SVM (Support Vector Machine). Then the writer identification process is activated on normal and struck-out text separately, to ascertain the impact of struck-out texts. For writer identification, we use two methods: (a) a hand-crafted feature-based SVM classifier, and (b) CNN-extracted auto-derived features with a recurrent neural model. For the experimental analysis, we have generated a database from 100 English and 100 Bengali writers. The performance of our system is very encouraging.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Legibility and Aesthetic Analysis of Handwriting', 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, Kyoto, Japan, pp. 175-182.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper deals with computer-based cognitive analysis towards legibility and aesthetics of a handwritten document. The legible text creates a human perception that the writing can be read effortlessly because of its orthographic clarity. The aesthetic property relates to the beautiful appearance of a handwritten document. In this study, we deal with these properties on offline Bengali handwriting. We formulate both legibility and aesthetic analysis tasks as machine learning problems supervised by the human cognitive system. We employ automatically derived feature-based recurrent neural networks to investigate writing legibility. For aesthetics evaluation, we employ hand-crafted feature-based support vector machines (SVMs). We have collected contemporary Bengali handwritings, on which the subjective legibility and aesthetic scores are provided by human readers. On this corpus containing legibility and aesthetic ground-truth information, we executed our experiments. The experimental results obtained on various handwritings are encouraging.
Anshu, A, Touchette, D, Yao, P & Yu, N 2017, 'Exponential separation of quantum communication and classical information', Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC '17: Symposium on Theory of Computing, ACM, Montreal, Canada, pp. 277-288.
View/Download from: Publisher's site
View description>>
© 2017 ACM. We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550-1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. Motivated by the celebrated results of Ganor, Kol and Raz [FOCS 14, pp. 557-566, STOC 15, pp. 977-986], and by Rao and Sinha [ECCC TR15-057], we use the Symmetric k-ary Pointer Jumping function, whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, we give a simple proof for an optimal trade-off between Alice's and Bob's communication while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^O(b) bits to Bob. This holds even when allowing pre-shared entanglement. We also present a classical protocol achieving this bound.
Bei, X, Qiao, Y & Zhang, S 2017, 'Networked Fairness in Cake Cutting', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, pp. 3632-3638.
View/Download from: Publisher's site
View description>>
We introduce a graphical framework for fair division in cake cutting, where comparisons between agents are limited by an underlying network structure. We generalize the classical fairness notions of envy-freeness and proportionality to this graphical setting. An allocation is called envy-free on a graph if no agent envies any of her neighbors' shares, and is called proportional on a graph if every agent values her own share no less than the average among her neighbors, with respect to her own measure. These generalizations enable new research directions in developing simple and efficient algorithms that can produce fair allocations under specific graph structures. On the algorithmic frontier, we first propose a moving-knife algorithm that outputs an envy-free allocation on trees. The algorithm is significantly simpler than the discrete and bounded envy-free algorithm introduced in [Aziz and Mackenzie, 2016] for complete graphs. Next, we give a discrete and bounded algorithm for computing a proportional allocation on transitive closures of trees, a class of graphs obtained by taking a rooted tree and connecting all its ancestor-descendant pairs.
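The two graphical fairness notions defined in this abstract can be checked directly on a discrete allocation. The sketch below is illustrative only: the value matrix and neighbor map are hypothetical, and the proportionality test averages over each agent's open neighborhood (the paper's exact convention may differ).

```python
def envy_free_on_graph(v, neighbors):
    """Envy-free on a graph: no agent values a neighbor's share above her own.

    v[i][j] is agent i's value for agent j's share (her own measure);
    neighbors maps each agent to the list of her neighbors.
    """
    return all(v[i][i] >= v[i][j] for i in neighbors for j in neighbors[i])

def proportional_on_graph(v, neighbors):
    """Proportional on a graph: each agent's own value is at least the
    average of her values for her neighbors' shares (open neighborhood)."""
    return all(
        v[i][i] >= sum(v[i][j] for j in neighbors[i]) / len(neighbors[i])
        for i in neighbors if neighbors[i]
    )

# Three agents on the path 0-1-2 with a hypothetical value matrix.
v = [[0.4, 0.3, 0.3],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(envy_free_on_graph(v, nbrs), proportional_on_graph(v, nbrs))
```

Note how proportionality on a graph only constrains an agent relative to her neighbors, so on sparse graphs it is a strictly weaker requirement than classical proportionality over all agents.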
Cheng, E-J, Prasad, M, Puthal, D, Sharma, N, Prasad, OK, Chin, P-H, Lin, C-T & Blumenstein, M 2017, 'Deep Learning Based Face Recognition with Sparse Representation Classification', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24th International Conference on Neural Information Processing (ICONIP), Springer International Publishing, Guangzhou, PEOPLES R CHINA, pp. 665-674.
View/Download from: Publisher's site
View description>>
Feature extraction is an essential step in solving real-world pattern recognition and classification problems. The accuracy of face recognition highly depends on the features extracted to represent a face. Traditional algorithms use geometric techniques, comprising feature values such as distances and angles between geometric points (eye corners, mouth extremities, and nostrils). These features are sensitive to elements such as illumination, variation of poses, and various expressions, to mention a few. Recently, deep learning techniques have been very effective for feature extraction, and deep features have considerable tolerance for various conditions and unconstrained environments. This paper proposes a two-layer deep convolutional neural network (CNN) for face feature extraction and applies sparse representation for face identification. The sparsity and selectivity of deep features can strengthen sparseness in the solution of the sparse representation, which generally improves the recognition rate. The proposed method outperforms other feature extraction and classification methods in terms of recognition accuracy.
Cheng, H-C & Hsieh, M-H 2017, 'Moderate deviations for classical-quantum channels', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 296-300.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. We show that reliable communication through a classical-quantum channel is possible when the transmission rate approaches the channel capacity sufficiently slowly. This scenario exists between the non-vanishing error probability regime, where the rate tends to capacity with a fixed error, and the small error probability regime, where the error vanishes given a rate below capacity. The proof employs a sharp concentration bound in strong large deviation theory, and the asymptotic expansions of the error-exponent functions.
Cheng, H-C & Hsieh, M-H 2017, 'Moderate deviations for quantum hypothesis testing and a martingale inequality', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 1978-1982.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. We study the asymptotic behavior of the type-I error in quantum hypothesis testing when the exponent of the type-II error approaches the quantum relative entropy sufficiently slowly. Our result shows that the moderate deviation principle holds for the testing problem if the quantum relative variance is positive. Our proof strategy employs strong large deviation theory and a martingale inequality.
Cheng, H-C, Hsieh, M-H & Tomamichel, M 2017, 'Sphere-packing bound for classical-quantum channels', 2017 IEEE Information Theory Workshop (ITW), 2017 IEEE Information Theory Workshop (ITW), IEEE, Kaohsiung, Taiwan, pp. 479-483.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. We study lower bounds on the optimal error probability in channel coding at rates below capacity, commonly termed sphere-packing bounds. In this work, we establish a sphere-packing bound for classical-quantum channels, which significantly improves the previous prefactor from subexponential to polynomial order. Furthermore, the gap between the obtained error exponent for constant composition codes and the best known classical random coding exponent vanishes in the order of o(log n/n), indicating that our sphere-packing bound is almost exact in the high rate regime. The main technical contributions are two converse Hoeffding bounds for quantum hypothesis testing and the saddle-point properties of the error exponent functions.
Cheng, H-C, Hsieh, M-H & Tomamichel, M 2017, 'Sphere-Packing Bound for Symmetric Classical-Quantum Channels', IEEE International Symposium on Information Theory - Proceedings, IEEE International Symposium on Information Theory, IEEE, Aachen, Germany, pp. 286-290.
View/Download from: Publisher's site
View description>>
We provide a sphere-packing lower bound for the optimal error probability at finite blocklengths when coding over a symmetric classical-quantum channel. Our result shows that the pre-factor can be significantly improved from the order of the subexponential to the polynomial. The established pre-factor is essentially optimal because it matches the best known random coding upper bound in the classical case. Our approaches rely on a sharp concentration inequality in strong large deviation theory and crucial properties of the error-exponent function.
Chubb, CT, Tan, VYF & Tomamichel, M 2017, 'Moderate deviation analysis for classical communication over quantum channels', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 1544-1548.
View/Download from: Publisher's site
Coluccia, A, Ghenescu, M, Piatrik, T, De Cubber, G, Schumann, A, Sommer, L, Klatte, J, Schuchert, T, Beyerer, J, Farhadi, M, Amandi, R, Aker, C, Kalkan, S, Saqib, M, Sharma, N, Daud, S, Makkah, K & Blumenstein, M 2017, 'Drone-vs-Bird detection challenge at IEEE AVSS2017', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the 'drone-vs-bird detection challenge' to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.
Das, A, Mondal, P, Pal, U, Blumenstein, M & Ferrer, MA 2017, 'Sclera Vessel Pattern Synthesis Based on a Non-parametric Texture Synthesis Technique', Advances in Intelligent Systems and Computing, International Conference on Computer Vision and Image Processing, Springer Singapore, pp. 241-250.
View/Download from: Publisher's site
View description>>
© Springer Science+Business Media Singapore 2017. This work proposes a sclera vessel texture pattern synthesis technique. Sclera texture was synthesized by a non-parametric texture regeneration technique. A small number of classes from the UBIRIS version 1 dataset was employed as primitive images. An appreciable result was achieved, which demonstrates the successful synthesis of sclera texture patterns. It is difficult to obtain a large collection of real sclera data, and hence such synthetic data will be useful to researchers.
Das, A, Pal, U, Ferrer, MA & Blumenstein, M 2017, 'A decision-level fusion strategy for multimodal ocular biometric in visible spectrum based on posterior probability', 2017 IEEE International Joint Conference on Biometrics (IJCB), 2017 IEEE International Joint Conference on Biometrics (IJCB), IEEE, Denver, CO, USA, pp. 794-798.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this work, we propose a posterior probability-based decision-level fusion strategy for multimodal ocular biometrics in the visible spectrum, employing the iris, sclera and peri-ocular traits. To the best of our knowledge, this is the first attempt to design multimodal ocular biometrics using all three ocular traits. Employing all these traits in combination can help to increase the reliability and universality of the system. For instance, in some scenarios the sclera and iris can be highly occluded, or for the completely closed-eye scenario, the peri-ocular trait can be relied on for the decision. The proposed system is constituted of three independent traits and their combinations. The classification output of the trait that produces the highest posterior probability is taken as the final decision. Appreciable reliability and universal applicability of the ocular traits are achieved in experiments conducted employing the proposed scheme.
Das, A, Pal, U, Ferrer, MA, Blumenstein, M, Stepec, D, Rot, P, Emersic, Z, Peer, P, Struc, V, Kumar, SVA & Harish, BS 2017, 'SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition', 2017 IEEE International Joint Conference on Biometrics (IJCB), 2017 IEEE International Joint Conference on Biometrics (IJCB), IEEE, Denver, CO, USA, pp. 742-747.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper summarises the results of the Sclera Segmentation and Eye Recognition Benchmarking Competition (SSERBC 2017). It was organised in the context of the International Joint Conference on Biometrics (IJCB 2017). The aim of this competition was to record the recent developments in sclera segmentation and eye recognition in the visible spectrum (using the iris, sclera and peri-ocular regions, and their fusion), and also to gain the attention of researchers on this subject. In this regard, we have used the Multi-Angle Sclera Dataset (MASD version 1). It is comprised of 2,624 images taken from both eyes of 82 identities, and therefore consists of images of 164 (82×2) eyes. A manual segmentation mask of these images was created to baseline both tasks. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the segmentation and to rank the segmentation task. A recognition accuracy measure was employed to assess the recognition task. Manually segmented sclera, iris and peri-ocular regions were used in the recognition task. Sixteen teams registered for the competition; among them, six teams submitted their algorithms or systems for the segmentation task, and two of them submitted their recognition algorithms or systems. The results produced by these algorithms or systems reflect current developments in the literature of sclera segmentation and eye recognition, employing cutting-edge techniques. The MASD version 1 dataset, with some of the ground truth, will be freely available for research purposes. The success of the competition also demonstrates the recent interest of researchers from academia as well as industry in this subject.
Das, A, Sengupta, A, Ferrer, MA, Pal, U & Blumenstein, M 2017, 'Linking face images captured from the optical phenomenon in the wild for forensic science', 2017 IEEE International Joint Conference on Biometrics (IJCB), 2017 IEEE International Joint Conference on Biometrics (IJCB), IEEE, Denver, CO, USA, pp. 781-786.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper discusses the possibility of using some challenging face image scenarios, captured via optical phenomena in the wild, for forensic individual identification. Occluded and covered face images in a surveillance scenario can be collected from their reflections on a surrounding glass or a smooth wall under the coverage of the surveillance camera, and such face images can be linked for forensic purposes. Another similar scenario that can also be used for forensics is the face image of an individual standing behind a transparent glass wall. This study was conducted to investigate the capability of these images for personal identification. This work investigated different types of features employed in the literature to establish individual identification from such degraded face images. Among them, local region-based features worked best. To achieve higher accuracy and better facial features, face images were cropped manually along their close bounding boxes and noise removal (reflections, etc.) was performed. For the experiments, we have developed a database considering the above-mentioned scenarios, which will be publicly available for academic research. The initial investigation substantiates the possibility of using such face images for forensic purposes.
Farhood, H, He, X, Jia, W, Blumenstein, M & Li, H 2017, 'Counting People Based on Linear, Weighted, and Local Random Forests', 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Sydney, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Recently, many works have been published for counting people. However, when being applied to real-world train station videos, they have exposed many limitations due to problems such as low resolution, heavy occlusion, various density levels and perspective distortions. In this paper, following the recent trend of regression-based density estimation, we present a linear regression approach based on local Random Forests for counting either standing or moving people on station platforms. By dividing each frame into sub-windows and extracting features with ground truth densities as well as learned weights, we perform a linear transformation for counting people to overcome the perspective problems of the existing patch-based approaches. We present improvements against several recent baselines on the UCSD dataset and a dataset of CCTV videos taken from a train station. We also show improvements in speed compared with the state-of-the-art models based on detection and Deep Learning.
Haque, MN, Mathieson, L & Moscato, P 2017, 'A memetic algorithm for community detection by maximising the connected cohesion', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, Hawaii, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Community detection is an exciting field of research which has attracted the interest of many researchers during the last decade. While many algorithms and heuristics have been proposed to scale existing approaches, a relatively smaller number of studies have looked at exploring different measures of the quality of the detected community. Recently, a new score called 'cohesion' was introduced in the computing literature. The cohesion score is based on comparing the number of triangles in a given group of vertices to the number of triangles only partly in that group. In this contribution, we propose a memetic algorithm that aims to find a subset of the vertices of an undirected graph that maximizes the cohesion score. The associated combinatorial optimisation problem is known to be NP-hard, and we also prove it to be W[1]-hard when parameterized by the score. We used a local search individual improvement heuristic to expand the putative solution. Then we removed all vertices from the group which are not part of any triangle and expanded the neighbourhood by adding triangles which have at least two nodes already in the group. Finally, we compute the maximum connected component of this group. The highest quality solutions of the memetic algorithm have been obtained for four real-world network scenarios, and we compare our results with ground-truth information about the graphs. We also compare the results to those obtained with eight other community detection algorithms via interrater agreement measures. Our results give a new lower bound on the parameterized complexity of this problem and give novel insights into its potential usefulness as a new natural score for community detection.
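The triangle-counting idea behind the cohesion score can be sketched with a short function. This is an illustrative ratio of triangles fully inside a vertex group to triangles only partly inside it; the exact normalization used in the cited score may differ, and the adjacency-set representation and function name are assumptions.

```python
from itertools import combinations

def cohesion(adj, group):
    """Illustrative cohesion-style score for `group` in an undirected graph.

    adj maps each vertex to the set of its neighbors. We count, over every
    internal edge (u, v) of the group, the third vertices completing a
    triangle: those inside the group give internal triangles, those outside
    give partly-internal ones.
    """
    group = set(group)
    inside = partial = 0
    for u, v in combinations(sorted(group), 2):
        if v in adj[u]:  # (u, v) is an edge inside the group
            common = adj[u] & adj[v]  # third vertices of triangles on (u, v)
            inside += len(common & group)
            partial += len(common - group)
    inside //= 3  # each fully internal triangle was seen once per its 3 edges
    total = inside + partial
    return inside / total if total else 0.0

# K4 on {0,1,2,3} plus an outside vertex 4 attached to 1 and 2.
adj = {0: {1, 2, 3}, 1: {0, 2, 3, 4}, 2: {0, 1, 3, 4}, 3: {0, 1, 2}, 4: {1, 2}}
print(cohesion(adj, {0, 1, 2, 3}))  # 4 internal triangles vs 1 partial one
```

A triangle with exactly two vertices in the group has exactly one internal edge, so it is counted once in `partial`, while a fully internal triangle is seen three times and divided back down.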
Ji, Z 2017, 'Compression of quantum multi-prover interactive proofs', Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC '17: Symposium on Theory of Computing, ACM, Montreal, Canada, pp. 289-302.
View/Download from: Publisher's site
View description>>
© 2017 ACM. We present a protocol that transforms any quantum multi-prover interactive proof into a nonlocal game in which questions consist of a logarithmic number of bits and answers of a constant number of bits. As a corollary, it follows that the promise problem corresponding to the approximation of the nonlocal value to inverse polynomial accuracy is complete for QMIP∗, and therefore NEXP-hard. This establishes that nonlocal games are provably harder than classical games without any complexity theory assumptions. Our result also indicates that gap amplification for nonlocal games may be impossible in general and provides negative evidence for the feasibility of the gap amplification approach to the multi-prover variant of the quantum PCP conjecture.
Kong, S, Lee, JH & Li, S 2017, 'A deterministic distributed algorithm for reasoning with connected row-convex constraints', Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, International Conference on Autonomous Agents and Multiagent System, International Foundation for Autonomous Agents and Multiagent Systems, São Paulo, Brazil, pp. 203-211.
View description>>
The class of CRC constraints generalizes several tractable classes of constraints and is expressive enough to model problems in domains such as temporal reasoning, geometric reasoning, and scene labelling. This paper presents the first distributed deterministic algorithm for connected row-convex (CRC) constraints. Our distributed (partial) path consistency algorithm efficiently transforms a CRC constraint network into an equivalent constraint network, where all constraints are minimal (i.e., they are the tightest constraints) and generating all solutions can be done in a backtrack-free manner. When compared with the state-of-the-art distributed algorithm for CRC constraints, which is a randomized one, our algorithm guarantees to generate a solution for satisfiable CRC constraint networks and it is applicable to solving large networks in real distributed systems. The experimental evaluations show that our algorithm outperforms the state-of-the-art algorithm in both practice and theory.
Li, S, Long, Z, Liu, W, Duckham, M & Both, A 2017, 'On Redundant Topological Constraints (Extended Abstract)', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 5020-5024.
View/Download from: Publisher's site
View description>>
Redundancy checking is an important task in AI subfields such as knowledge representation and constraint solving. This paper considers redundant topological constraints, defined in the region connection calculus RCC8. We say a constraint in a set C of RCC8 constraints is redundant if it is entailed by the rest of C. A prime subnetwork of C is a subset of C which contains no redundant constraints and has the same solution set as C. It is natural to ask how to compute such a prime subnetwork, and when it is unique. While this problem is in general intractable, we show that, if S is a subalgebra of RCC8 in which weak composition distributes over nonempty intersections, then C has a unique prime subnetwork, which can be obtained in cubic time by removing all redundant constraints simultaneously from C. As a by-product, we show that any path-consistent network over such a distributive subalgebra is minimal.
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'A study on detecting drones using deep convolutional neural networks', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Object detection is a challenging problem in computer vision with various potential real-world applications. The objective of this study is to evaluate deep learning-based object detection techniques for detecting drones. In this paper, we have conducted experiments with different Convolutional Neural Network (CNN)-based architectures, namely Zeiler and Fergus (ZF) and Visual Geometry Group (VGG16), among others. Due to the sparse data available for training, the networks are trained from pre-trained models using transfer learning. Snapshots of the trained models are saved at regular intervals during training. The best models, those with the highest mean Average Precision (mAP) for each network architecture, are used for evaluation on the test dataset. The experimental results show that VGG16 with Faster R-CNN performs better than the other architectures on the training dataset. A visual analysis of the test dataset is also presented.
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'Extracting descriptive motion information from crowd scenes', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. An important contribution that automated analysis tools can make to the management of pedestrians and crowd safety is the detection of conflicting large pedestrian flows: this kind of movement pattern may lead to dangerous situations and potential threats to pedestrians' safety. For this reason, detecting dominant motion patterns and summarizing motion information from the scene are essential for crowd management. In this paper, we develop a framework that extracts motion information from the scene by generating point trajectories using a particle advection approach. The trajectories obtained are then clustered using an unsupervised hierarchical clustering algorithm, where similarity is measured by the Longest Common Sub-sequence (LCS) metric. The motion patterns found in the scene are summarized and represented using color-coded arrows, where the speeds of the different flows are encoded with colors, the width of an arrow represents the density (the number of people belonging to a particular motion pattern), and the arrowhead represents the direction. This novel representation of the crowded scene provides a clutter-free visualization that helps crowd managers in understanding the scene. Experimental results show that our method outperforms state-of-the-art methods.
Saqib, M, Khan, SD & Blumenstein, M 2017, 'Detecting dominant motion patterns in crowds of pedestrians', Eighth International Conference on Graphic and Image Processing (ICGIP 2016), Eighth International Conference on Graphic and Image Processing, SPIE, Tokyo, Japan.
View/Download from: Publisher's site
View description>>
© 2017 SPIE. As the population of the world increases, urbanization generates crowding situations which pose challenges to public safety and security. Manual analysis of crowded situations is a tedious job and is usually prone to errors. In this paper, we propose a novel technique for crowd analysis, the aim of which is to detect different dominant motion patterns in real-time videos. A motion field is generated by computing the dense optical flow. The motion field is then divided into blocks. For each block, we adopt an intra-clustering algorithm for detecting the different flows within the block. We then employ inter-clustering to cluster the flow vectors among different blocks. We evaluate the performance of our approach on different real-time videos. The experimental results show that our proposed method is capable of detecting distinct motion patterns in crowded videos. Moreover, our algorithm outperforms state-of-the-art methods.
Sharma, N, Sengupta, A, Sharma, R, Pal, U & Blumenstein, M 2017, 'Pincode detection using deep CNN for postal automation', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Postal automation has been a topic of research for over a decade. The challenges and complexity involved in developing a postal automation system for a multi-lingual and multi-script country like India are many-fold. The characteristics of Indian postal documents, which include multi-lingual text, unconstrained handwritten addresses, and structured/unstructured envelopes and postcards, are among the most challenging aspects. This paper examines state-of-the-art deep CNN architectures for detecting pin-codes in both structured and unstructured postal envelopes and documents. Region-based Convolutional Neural Networks (RCNN) are used for detecting the significant regions of a postal document, namely pin-code blocks/regions, the destination address block, seals, and stamps. Three network architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), and VGG-M, were considered to analyse and identify their potential. A dataset consisting of 2300 multilingual Indian postal documents of three different categories was developed and used for the experiments. The VGG-M architecture with Faster R-CNN performed better than the others, and promising results were obtained.
Sutter, D, Berta, M & Tomamichel, M 2017, 'Quantum Markov chains and logarithmic trace inequalities', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 1988-1992.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. A Markov chain is a tripartite quantum state ρABC for which there exists a recovery map RB→BC such that ρABC = RB→BC(ρAB). More generally, an approximate Markov chain ρABC is a state whose distance to the closest recovered state RB→BC(ρAB) is small. Recently it has been shown that this distance can be bounded from above by the conditional mutual information I(A:C|B)ρ of the state. We improve on this connection by deriving the first bound that is tight in the commutative case and features an explicit recovery map that only depends on the reduced state ρBC. The key tool in our proof is a multivariate extension of the Golden-Thompson inequality, which allows us to extend logarithmic trace inequalities from two to arbitrarily many matrices.
Suwanwiwat, H, Das, A, Ferrer, MA, Pal, U & Blumenstein, M 2017, 'An Automatic Student Verification System Utilising Off-Line Thai Name Components', 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Sydney, Australia, pp. 826-831.
View/Download from: Publisher's site
View description>>
This research proposes an automatic student identification and verification system utilising off-line Thai name components, which consist of first and last names. Dense texture-based feature descriptors have yielded encouraging results when applied to different handwritten text recognition scenarios. The authors therefore employed such features to investigate their performance in a Thai name component verification system. In this research, Dense Local Binary Patterns, Dense Local Directional Patterns, and Local Binary Patterns combined with Local Directional Patterns were employed. A baseline approach, the Hidden Markov Model (HMM), was also utilised in this study. As there is no dataset for Thai name verification in the literature, a dataset for a Thai name verification system is proposed. The name component samples were collected from high school students: the dataset consists of 8,400 name components (first and last names) from 100 students. Each student provided 60 genuine name components, and each of the name components was forged by 12 other students. Encouraging results were obtained employing the above-mentioned features on the proposed dataset.
Wang, B, Gao, Y, Sun, C, Blumenstein, M & La Salle, J 2017, 'Can Walking and Measuring Along Chord Bunches Better Describe Leaf Shapes?', 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Honolulu, HI, USA, pp. 2047-2056.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Effectively describing and recognizing leaf shapes under arbitrary deformations, particularly from a large database, remains an unsolved problem. In this research, we attempted a new strategy of describing shape by walking along a bunch of chords that pass through the shape to measure the regions trespassed. A novel chord bunch walks (CBW) descriptor is developed through the chord walking that effectively integrates the shape image function over the walked chord to reflect the contour features and the inner properties of the shape. For each contour point, the chord bunch groups multiple pairs of chord walks to build a hierarchical framework for a coarse-to-fine description. The proposed CBW descriptor is invariant to rotation, scaling, translation, and mirror transforms. Instead of using expensive optimal-correspondence-based matching, an improved Hausdorff distance that encodes correspondence information is proposed for efficient yet effective shape matching. In experimental studies, the proposed method obtained substantially higher accuracies with low computational cost compared to the benchmarks, which indicates the research potential of this direction.
Wang, X, Xie, W & Duan, R 2017, 'Semidefinite programming converse bounds for classical communication over quantum channels', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 1728-1732.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. We study classical communication over quantum channels when assisted by no-signalling (NS) and PPT-preserving (PPT) codes. We first show that both the optimal success probability of a given transmission rate and the one-shot error capacity can be formalized as semidefinite programs (SDPs) when assisted by NS or NS∩PPT codes. Based on this, we derive SDP finite-blocklength converse bounds for general quantum channels, which also reduce to the converse bound of Polyanskiy, Poor, and Verdú for classical channels. Furthermore, we derive an SDP strong converse bound for the classical capacity of a general quantum channel: for any code with a rate exceeding this bound, the optimal success probability vanishes exponentially fast as the number of channel uses increases. In particular, applying our efficiently computable bound, we derive improved upper bounds on the classical capacity of the amplitude damping channels and also establish the strong converse property for a new class of quantum channels.
Wang, Z, Shivakumara, P, Lu, T, Basavanna, M, Pal, U & Blumenstein, M 2017, 'Fourier-Residual for Printer Identification', 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, Japan, pp. 1114-1119.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Printer identification is challenging due to advanced software technologies in the field of forgery detection. This paper presents a new idea of using the Fourier transform residual for the identification of documents printed by different printers. The proposed approach first convolves a Laplacian mask with the Fourier transform in the frequency domain to smooth the edges. Next, we apply an inverse Fourier transform to reconstruct the image from the smoothed information (RFL). Similarly, the proposed approach reconstructs an image using the gray information of the input image (RFG). The residual is then calculated by subtracting RFG from RFL. A set of statistical, texture, and spatial features is extracted from the residual images for printer identification. Experimental results comparing against the existing method on our dataset and a standard dataset show that the proposed approach outperforms the existing approach on both datasets in terms of classification rate, recall, precision, and F-measure.
Wilde, MM, Tomamichel, M & Berta, M 2017, 'A meta-converse for private communication over quantum channels', 2017 IEEE International Symposium on Information Theory (ISIT), 2017 IEEE International Symposium on Information Theory (ISIT), IEEE, Aachen, Germany, pp. 291-295.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. We establish a converse bound on the private transmission capabilities of a quantum channel. The main conceptual development builds firmly on the notion of a private state, which is a powerful, uniquely quantum method for simplifying the tripartite picture of privacy involving local operations and public classical communication to a bipartite picture of quantum privacy involving local operations and classical communication. This approach has previously led to some of the strongest upper bounds on secret key rates, including the squashed entanglement and the relative entropy of entanglement. Here we use this approach along with a "privacy test" to establish a general meta-converse bound for private communication.
Wu, D, Sharma, N & Blumenstein, M 2017, 'Recent advances in video-based human action recognition using deep learning: A review', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA, pp. 2865-2872.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Video-based human action recognition has become one of the most popular research areas in the field of computer vision and pattern recognition in recent years. It has a wide variety of applications such as surveillance, robotics, health care, video searching and human-computer interaction. There are many challenges involved in human action recognition in videos, such as cluttered backgrounds, occlusions, viewpoint variation, execution rate, and camera motion. A large number of techniques have been proposed to address these challenges over the decades. Three different types of datasets, namely single viewpoint, multiple viewpoint and RGB-depth videos, are used for research. This paper presents a review of various state-of-the-art deep learning-based techniques proposed for human action recognition on the three types of datasets. In light of the growing popularity and the recent developments in video-based human action recognition, this review imparts details of current trends and potential directions for future work to assist researchers.
Ying, M, Ying, S & Wu, X 2017, 'Invariants of quantum programs: characterisations and generation.', POPL, ACM SIGPLAN Symposium on Principles of Programming Languages, ACM, Paris, France, pp. 818-832.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Program invariants are a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) to find an appropriate definition of invariants for quantum programs; and (ii) to develop an effective technique of invariant generation for verification and analysis of quantum programs. Interestingly, the notion of invariant can be defined for quantum programs in two different ways, additive invariants and multiplicative invariants, corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs. The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms: the quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify correctness of these algorithms and are helpful in optimising quantum Metropolis sampling. To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
Zhou, L & Ying, M 2017, 'Differential Privacy in Quantum Computation.', CSF, IEEE Computer Security Foundations Symposium, IEEE Computer Society, Santa Barbara, CA, USA, pp. 249-262.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. More and more quantum algorithms have been designed for solving problems in machine learning, database search and data analytics. An important problem then arises: how can privacy be protected when these algorithms are used on private data? For classical computing, the notion of differential privacy provides a very useful conceptual framework in which a great number of mechanisms that protect privacy by introducing certain noise into algorithms have been successfully developed. This paper defines a notion of differential privacy for quantum information processing. We carefully examine how mechanisms using three important types of quantum noise, namely amplitude damping, phase damping, and depolarizing noise, can protect differential privacy. A composition theorem is proved that enables us to combine multiple privacy-preserving operations in quantum information processing.
Aggarwal, D, Brennen, GK, Lee, T, Santha, M & Tomamichel, M 2017, 'Quantum attacks on Bitcoin, and how to protect against them'.
View description>>
The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk are cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk of Bitcoin, and other cryptocurrencies, to attacks by quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications.
Anshu, A, Hsieh, M-H & Jain, R 2017, 'Quantifying resource in catalytic resource theory'.
View description>>
We consider a general resource theory that allows the use of free resources as a catalyst. We show that the amount of 'resource' contained in a given state, in the asymptotic scenario, is equal to the regularized relative entropy of resource of that state, which then yields a straightforward operational meaning to this quantity. Such an answer has long been sought for in any resource theory, since the usefulness of a state in information-processing tasks is directly related to the amount of resource the state possesses in the beginning. While we need to place a few assumptions in our resource-theoretical framework, it is still general enough and includes the quantum resource theories of entanglement, coherence, asymmetry, non-uniformity, purity, contextuality and stabilizer computation, as well as the classical resource theory of randomness extraction, as special cases. Since our resource-theoretic framework includes entanglement theory, our result also implies that the amount of noise one has to inject locally in order to erase all entanglement contained in an entangled state is equal to the regularized relative entropy of entanglement, resolving an open question posed in [Groisman et al., Phys. Rev. A 72: 032317, 2005]. On the way to proving the main result, we also quantify the amount of resource contained in a state in the one-shot setting (where one only has a single copy of the state) in terms of the smooth max-relative entropy. Our one-shot result employs a recently developed technique, the convex-split lemma.
Berry, DW, Kieferová, M, Scherer, A, Sanders, YR, Low, GH, Wiebe, N, Gidney, C & Babbush, R 2017, 'Improved Techniques for Preparing Eigenstates of Fermionic Hamiltonians'.
Chubb, CT, Tomamichel, M & Korzekwa, K 2017, 'Beyond the thermodynamic limit: finite-size corrections to state interconversion rates'.
View description>>
Thermodynamics is traditionally constrained to the study of macroscopic systems whose energy fluctuations are negligible compared to their average energy. Here, we push beyond this thermodynamic limit by developing a mathematical framework to rigorously address the problem of thermodynamic transformations of finite-size systems. More formally, we analyse state interconversion under thermal operations and between arbitrary energy-incoherent states. We find precise relations between the optimal rate at which interconversion can take place and the desired infidelity of the final state when the system size is sufficiently large. These so-called second-order asymptotics provide a bridge between the extreme cases of single-shot thermodynamics and the asymptotic limit of infinitely large systems. We illustrate the utility of our results with several examples. We first show how thermodynamic cycles are affected by irreversibility due to finite-size effects. We then provide a precise expression for the gap between the distillable work and the work of formation that opens away from the thermodynamic limit. Finally, we explain how the performance of a heat engine is affected when one of the heat baths it operates between is finite. We find that while perfect work cannot generally be extracted at Carnot efficiency, there are conditions under which these finite-size effects vanish. In deriving our results we also clarify relations between different notions of approximate majorisation.
Kong, S, Lee, JH & Li, S 2017, 'Multiagent Simple Temporal Problem: The Arc-Consistency Approach'.
View description>>
The Simple Temporal Problem (STP) is a fundamental temporal reasoning problem and has recently been extended to the Multiagent Simple Temporal Problem (MaSTP). In this paper we present a novel approach that is based on enforcing arc-consistency (AC) on the input (multiagent) simple temporal network. We show that the AC-based approach is sufficient for solving both the STP and MaSTP and provide efficient algorithms for them. As our AC-based approach does not impose new constraints between agents, it does not violate the privacy of the agents and is superior to the state-of-the-art approach to MaSTP. Empirical evaluations on diverse benchmark datasets also show that our AC-based algorithms for STP and MaSTP are significantly more efficient than existing approaches.
Liu, S, Wang, X, Zhou, L, Guan, J, Li, Y, He, Y, Duan, R & Ying, M 2017, '$Q|SI\rangle$: A Quantum Programming Environment', Symposium on Real-Time and Hybrid Systems.
View description>>
This paper describes a quantum programming environment named $Q|SI\rangle$. It is a platform embedded in the .Net language that supports quantum programming using a quantum extension of the $\mathbf{while}$-language. The framework of the platform includes a compiler of the quantum $\mathbf{while}$-language and a suite of tools for simulating quantum computation, optimizing quantum circuits, and analyzing and verifying quantum programs. Throughout the paper, using $Q|SI\rangle$ to simulate quantum behaviors on classical platforms with a combination of components is demonstrated. The scalable framework allows the user to program customized functions on the platform. The compiler works as the core of $Q|SI\rangle$, bridging the gap from quantum hardware to quantum software. The built-in decomposition algorithms enable universal quantum computation on the present quantum hardware.
Madhav, KV, Biswas, T & Ghosh, S 2017, 'Coarse-graining of measurement and quantum-to-classical transition in the bipartite scenario'.
View description>>
The connection between coarse-graining of measurement and the emergence of classicality has been investigated for some time, if not well understood. Recently in (PRL $\textbf{112}$, 010402, (2014)) it was pointed out that coarse-graining measurements can lead to non-violation of Bell-type inequalities by a state which would violate them under sharp measurements. We study here the effects of coarse-grained measurements on bipartite cat states. We show that while it is true that coarse-graining does indeed lead to non-violation of a Bell-type inequality, this is not reflected at the state level. Under such measurements the post-measurement states can be non-classical (in the quantum optical sense), and in certain cases coarse-graining can lead to an increase in this non-classicality with respect to the coarse-graining parameter. While there is no universal way to quantify non-classicality, we do so using well-understood notions in quantum optics such as the negativity of the Wigner function and the singular nature of the Glauber-Sudarshan P distribution.
Mann, RL & Bremner, MJ 2017, 'On the Complexity of Random Quantum Computations and the Jones Polynomial'.
View description>>
There is a natural relationship between Jones polynomials and quantum computation. We use this relationship to show that the complexity of evaluating relative-error approximations of Jones polynomials can be used to bound the classical complexity of approximately simulating random quantum computations. We prove that random quantum computations cannot be classically simulated up to a constant total variation distance, under the assumption that (1) the Polynomial Hierarchy does not collapse and (2) the average-case complexity of relative-error approximations of the Jones polynomial matches the worst-case complexity over a constant fraction of random links. Our results provide a straightforward relationship between the approximation of Jones polynomials and the complexity of random quantum computations.
Ying, S, Ying, M & Feng, Y 2017, 'Quantum Privacy-Preserving Data Analytics.'.
Ying, S, Ying, M & Feng, Y 2017, 'Quantum Privacy-Preserving Perceptron.'.
Zhu, EY, Zhuang, Q, Hsieh, M-H & Shor, PW 2017, 'Superadditivity in trade-off capacities of quantum channels'.
View description>>
In this article, we investigate the additivity phenomenon in the dynamic capacity of a quantum channel for trading classical communication, quantum communication and entanglement. Understanding such an additivity property is important if we want to optimally use a quantum channel for general communication purposes. However, in a lot of cases, the channel one will be using only has an additive single or double resource capacity, and it is largely unknown if this could lead to a superadditive double or triple resource capacity. For example, if a channel has an additive classical and quantum capacity, can the classical-quantum capacity be superadditive? In this work, we answer such questions affirmatively. We give proof-of-principle requirements for these channels to exist. In most cases, we can provide an explicit construction of these quantum channels. The existence of these superadditive phenomena is surprising in contrast to the result that the additivity of both the classical-entanglement and classical-quantum capacity regions implies the additivity of the triple capacity region.