Blumenstein, M 2008, 'Cursive Character Segmentation Using Neural Network Techniques', Studies in Computational Intelligence, vol. 90, pp. 259-275.
View/Download from: Publisher's site
View description>>
The segmentation of cursive and mixed scripts remains a difficult problem in the area of handwriting recognition. This research details advances in segmenting characters in off-line cursive script. Specifically, a heuristic algorithm and a neural network-based technique, which uses a structural feature vector representation, are proposed and combined for identifying incorrect segmentation points. Following the location of appropriate anchorage points, a character extraction technique using segmentation paths is employed to complete the segmentation process. Results are presented for neural-based heuristic segmentation, segmentation point validation, character recognition, segmentation path detection and overall segmentation accuracy. © 2008 Springer-Verlag Berlin Heidelberg.
Blumenstein, M, Green, S, Fogelman, S, Nguyen, A & Muthukkumarasamy, V 2008, 'Performance analysis of GAME: A generic automated marking environment', COMPUTERS & EDUCATION, vol. 50, no. 4, pp. 1203-1216.
View/Download from: Publisher's site
View description>>
This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages based on the "structure" of the source code and the correctness of the program's output. Currently, the system is able to mark programs written in Java, C++ and the C language. To use the system, instructors are required to provide a simple "marking schema" for each given assessment item, which includes pertinent information such as the location of files and the model solution. In this research, GAME has been tested on a number of student programming exercises and assignments and its performance has been compared against that of a human marker. An in-depth statistical analysis of the comparison is presented, providing encouraging results and directions for employing GAME as a tool for teaching and learning. © 2006 Elsevier Ltd. All rights reserved.
Cha, D, Blumenstein, M, Zhang, H & Jeng, D-S 2008, 'A Neural-Genetic Technique for Coastal Engineering: Determining Wave-induced Seabed Liquefaction Depth', Studies in Computational Intelligence, vol. 82, pp. 337-351.
View/Download from: Publisher's site
View description>>
In the past decade, computational intelligence (CI) techniques have been widely adopted in various fields such as business, science and engineering, as well as information technology. Specifically, hybrid techniques using artificial neural networks (ANNs) and genetic algorithms (GAs) are becoming an important alternative for solving problems in the field of engineering in comparison to traditional solutions, which ordinarily use complicated mathematical theories. The wave-induced seabed liquefaction problem is one of the most critical issues for analysing and designing marine structures such as caissons, oil platforms and harbours. In the past, various investigations into wave-induced seabed liquefaction have been carried out including numerical models, analytical solutions and some laboratory experiments. However, most previous numerical studies are based on solving complicated partial differential equations. In this study, the proposed neural-genetic model is applied to wave-induced liquefaction, which provides a better prediction of liquefaction potential. The neural-genetic simulation results illustrate the applicability of the hybrid technique for the accurate prediction of wave-induced liquefaction depth, which can also provide coastal engineers with alternative tools to analyse the stability of marine sediments. © 2008 Springer-Verlag Berlin Heidelberg.
Chen, J, Duan, R, Ji, Z, Ying, M & Yu, J 2008, 'Existence of universal entangler', JOURNAL OF MATHEMATICAL PHYSICS, vol. 49, no. 1, pp. 1-7.
View/Download from: Publisher's site
View description>>
A gate is called an entangler if it transforms some (pure) product states to entangled states. A universal entangler is a gate which transforms all product states to entangled states. In practice, a universal entangler is a very powerful device for generating entanglement, and thus provides important physical resources for accomplishing many tasks in quantum computing and quantum information. This paper demonstrates that a universal entangler always exists except in a degenerate case. Nevertheless, the problem of how to find a universal entangler remains open. © 2008 American Institute of Physics.
Chitambar, E & Duan, R 2008, 'Nonlocal Entanglement Transformations Achievable by Separable Operations', Phys. Rev. Lett., vol. 103, no. 11, p. 110502.
View/Download from: Publisher's site
View description>>
For manipulations of multipartite quantum systems, it was well known that all local operations assisted by classical communication (LOCC) constitute a proper subset of the class of separable operations. Recently, Gheorghiu and Griffiths found that LOCC and general separable operations are equally powerful for transformations between bipartite pure states. In this letter we extend this comparison to mixed states and show that in general separable operations are strictly stronger than LOCC when transforming a mixed state to a pure entangled state. A remarkable consequence of our finding is the existence of an entanglement monotone which may increase under separable operations.
Chitambar, E, Duan, R & Shi, Y 2008, 'Tripartite entanglement transformations and tensor rank', Phys. Rev. Lett., vol. 101, no. 14, p. 140502.
View/Download from: Publisher's site
View description>>
Understanding the nature of multipartite entanglement is a central mission of quantum information theory. To this end, we investigate the question of tripartite entanglement convertibility. We find that there exists no easy criterion to determine whether a general tripartite transformation can be performed with a nonzero success probability and, in fact, the problem is NP-hard. Our results are based on the connections between multipartite entanglement and tensor rank (also called Schmidt rank), a key concept in algebraic complexity theory. Not only does this relationship allow us to characterize the general difficulty in determining possible entanglement transformations, but it also enables us to observe the previously overlooked fact that the Schmidt rank is not an additive entanglement measure. As a result, we improve some best known transformation rates between specific tripartite entangled states. In addition, we find obtaining the most efficient algorithm for matrix multiplication to be precisely equivalent to determining the optimal rate of conversion between the Greenberger-Horne-Zeilinger state and a triangular distribution of three Einstein-Podolsky-Rosen states.
de Burgh, MD, Langford, NK, Doherty, AC & Gilchrist, A 2008, 'Choice of measurement sets in qubit tomography', Physical Review A, vol. 78, no. 5.
View/Download from: Publisher's site
Devitt, SJ, Fowler, AG, Stephens, AM, Greentree, AD, Hollenberg, LCL, Munro, WJ & Nemoto, K 2008, 'Architectural design for a topological cluster state quantum computer', New J. Phys., vol. 11, p. 083032.
View/Download from: Publisher's site
View description>>
The development of a large scale quantum computer is a highly sought after goal of fundamental research and consequently a highly non-trivial problem. Scalability in quantum information processing is not just a problem of qubit manufacturing and control but it crucially depends on the ability to adapt advanced techniques in quantum information theory, such as error correction, to the experimental restrictions of assembling qubit arrays into the millions. In this paper we introduce a feasible architectural design for large scale quantum computation in optical systems. We combine the recent developments in topological cluster state computation with the photonic module, a simple chip based device which can be used as a fundamental building block for a large scale computer. The integration of the topological cluster model with this comparatively simple operational element addresses many significant issues in scalable computing and leads to a promising modular architecture with complete integration of active error correction exhibiting high fault-tolerant thresholds.
Devitt, SJ, Munro, WJ & Nemoto, K 2008, 'High Performance Quantum Computing', Progress in Informatics, no. 8, pp. 49-55.
View/Download from: Publisher's site
View description>>
The architecture scalability afforded by recent proposals of a large scale photonic based quantum computer, utilizing the theoretical developments of topological cluster states and the photonic chip, allows us to move on to a discussion of massively scaled Quantum Information Processing (QIP). In this letter we introduce the model for a secure and unsecured topological cluster mainframe. We consider the quantum analogue of High Performance Computing, where a dedicated server farm is utilized by many users to run algorithms and share quantum data. The scaling structure of photonics based topological cluster computing leads to an attractive future for server based QIP, where dedicated mainframes can be constructed and/or expanded to serve an increasingly hungry user base with the ideal resource for individual quantum information processing.
Duan, R & Shi, Y 2008, 'Entanglement between two uses of a noisy multipartite quantum channel enables perfect transmission of classical information', PHYSICAL REVIEW LETTERS, vol. 101, no. 2, pp. 1-4.
View/Download from: Publisher's site
View description>>
Suppose that m senders want to transmit classical information to n receivers with zero probability of error using a noisy multipartite communication channel. The senders are allowed to exchange classical, but not quantum, messages among themselves, and t
Duan, R, Feng, Y & Ying, M 2008, 'Local distinguishability of multipartite unitary operations', PHYSICAL REVIEW LETTERS, vol. 100, no. 2, pp. 1-4.
View/Download from: Publisher's site
View description>>
We show that any two different unitary operations acting on an arbitrary multipartite quantum system can be perfectly distinguished by local operations and classical communication when a finite number of runs is allowed. Intuitively, this result indicates that the lost identity of a nonlocal unitary operation can be recovered locally. No entanglement between distant parties is required. © 2008 The American Physical Society.
Dür, W, Bremner, MJ & Briegel, HJ 2008, 'Quantum simulation of interacting high-dimensional systems: The influence of noise', PHYSICAL REVIEW A, vol. 78, no. 5, p. 052325.
View/Download from: Publisher's site
View description>>
We consider the simulation of interacting high-dimensional systems using pairwise interacting qubits. The main tool in this context is the generation of effective many-body interactions, and we examine a number of different protocols for obtaining them. These methods include the usage of higher-order processes (commutator method), unitary conjugation or graph state encoding, as well as teleportation-based approaches. We illustrate and compare these methods in detail and analyze the time cost for simulation. In the second part of the paper, we investigate the influence of noise on the simulation process. We concentrate on errors in the interaction Hamiltonians and consider two generic noise models: (i) timing errors in pairwise interactions and (ii) noisy pairwise interactions described by master equations of Lindblad form. We analyze and compare the effect of noise for the different simulation methods and propose a way to significantly reduce the influence of noise by making use of entanglement purification together with a teleportation-based protocol.
Hsieh, M-H & Wilde, MM 2008, 'Entanglement-assisted communication of classical and quantum information', IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4682-4704.
View/Download from: Publisher's site
View description>>
We consider the problem of transmitting classical and quantum information reliably over an entanglement-assisted quantum channel. Our main result is a capacity theorem that gives a three-dimensional achievable rate region. Points in the region are rate triples, consisting of the classical communication rate, the quantum communication rate, and the entanglement consumption rate of a particular coding scheme. The crucial protocol in achieving the boundary points of the capacity region is a protocol that we name the classically-enhanced father protocol. The classically-enhanced father protocol is more general than other protocols in the family tree of quantum Shannon theoretic protocols, in the sense that several previously known quantum protocols are now child protocols of it. The classically-enhanced father protocol also shows an improvement over a time-sharing strategy for the case of a qubit dephasing channel--this result justifies the need for simultaneous coding of classical and quantum information over an entanglement-assisted quantum channel. Our capacity theorem is of a multi-letter nature (requiring a limit over many uses of the channel), but it reduces to a single-letter characterization for at least three channels: the completely depolarizing channel, the quantum erasure channel, and the qubit dephasing channel.
Hsieh, M-H, Brun, TA & Devetak, I 2008, 'Entanglement-Assisted Quantum Quasi-Cyclic Low-Density Parity-Check Codes', Phys. Rev. A, vol. 79, no. 3, p. 032340.
View/Download from: Publisher's site
View description>>
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasi-cyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane (CSS) construction do not need to satisfy the dual-containing property as long as pre-shared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
Ji, Z, Wang, G, Duan, R, Feng, Y & Ying, M 2008, 'Parameter Estimation of Quantum Channels', IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 54, no. 11, pp. 5172-5185.
View/Download from: Publisher's site
View description>>
The efficiency of parameter estimation of quantum channels is studied in this paper. We introduce the concept of programmable parameters to the theory of estimation. It is found that programmable parameters obey the standard quantum limit strictly; hence, no speedup is possible in its estimation. We also construct a class of nonunitary quantum channels whose parameter can be estimated in a way that the standard quantum limit is broken. The study of estimation of general quantum channels also enables an investigation of the effect of noises on quantum estimation. © 2008 IEEE.
Kremsky, I, Hsieh, M-H & Brun, TA 2008, 'Classical Enhancement of Quantum Error-Correcting Codes', Phys. Rev. A, vol. 78, no. 1, p. 012341.
View/Download from: Publisher's site
View description>>
We present a general formalism for quantum error-correcting codes that encode both classical and quantum information (the EACQ formalism). This formalism unifies the entanglement-assisted formalism and classical error correction, and includes encoding, error correction, and decoding steps such that the encoded quantum and classical information can be correctly recovered by the receiver. We formally define this kind of quantum code using both stabilizer and symplectic language, and derive the appropriate error-correcting conditions. We give several examples to demonstrate the construction of such codes.
Lanyon, BP, Weinhold, TJ, Langford, NK, O’Brien, JL, Resch, KJ, Gilchrist, A & White, AG 2008, 'Manipulating Biphotonic Qutrits', Physical Review Letters, vol. 100, no. 6.
View/Download from: Publisher's site
Lee, J, Sanmugarasa, K, Blumenstein, M & Loo, Y-C 2008, 'Improving the reliability of a Bridge Management System (BMS) using an ANN-based Backward Prediction Model (BPM)', Automation in Construction, vol. 17, no. 6, pp. 758-772.
View/Download from: Publisher's site
View description>>
The slow adoption of Bridge Management Systems (BMSs) and their impractical prediction of future bridge condition ratings are attributed to the inconsistency between BMS inputs and bridge agencies' existing data for a BMS in terms of compatibility, and to the enormous number of bridge datasets that include historical structural information. Among these, historical bridge element condition ratings are some of the key pieces of information required for bridge asset prioritisation, but in most cases only limited data is available. This study addresses the abovementioned difficulties faced by bridge management agencies by using limited historical bridge inspection records to model time-series element-level data. This paper presents an Artificial Neural Network (ANN) based prediction model, called the Backward Prediction Model (BPM), for generating historical bridge condition ratings using limited bridge inspection records. The BPM employs historical non-bridge datasets such as traffic volumes, populations and climates, to establish correlations with existing bridge condition ratings from very limited bridge inspection records. The resulting model predicts the missing historical condition ratings of individual bridge elements. The outcome of this study can contribute to reducing the uncertainty in predicting future bridge condition ratings and so improve the reliability of various BMS analysis outcomes. © 2008 Elsevier B.V. All rights reserved.
Lee, T & Mittal, R 2008, 'Product theorems via semidefinite programming', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5125 LNCS, no. PART 1, pp. 674-685.
View/Download from: Publisher's site
View description>>
The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovasz to determine the Shannon capacity of the pentagon; to show a direct sum theorem for non-deterministic communication complexity and direct product theorems for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games. Despite all these examples of product theorems--some going back nearly thirty years--it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain--namely, an early parallel repetition result of Feige and Lovasz, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Spalek. We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature.
Lee, T & Shraibman, A 2008, 'An approximation algorithm for approximation rank', Proceedings of the Annual IEEE Conference on Computational Complexity, pp. 351-357.
View/Download from: Publisher's site
View description>>
One of the strongest techniques available for showing lower bounds on quantum communication complexity is the logarithm of the approximation rank of the communication matrix--the minimum rank of a matrix which is entrywise close to the communication matrix. This technique has two main drawbacks: it is difficult to compute, and it is not known to lower bound quantum communication complexity with entanglement. Linial and Shraibman recently introduced a norm, called gamma_2^{alpha}, to quantum communication complexity, showing that it can be used to lower bound communication with entanglement. Here the parameter alpha is a measure of approximation which is related to the allowable error probability of the protocol. This bound can be written as a semidefinite program and gives bounds at least as large as many techniques in the literature, although it is smaller than the corresponding alpha-approximation rank, rk_alpha. We show that in fact log gamma_2^{alpha}(A) and log rk_alpha(A) agree up to small factors. As corollaries we obtain a constant factor polynomial time approximation algorithm to the logarithm of approximate rank, and that the logarithm of approximation rank is a lower bound for quantum communication complexity with entanglement.
Li, S & Ying, M 2008, 'Soft constraint abstraction based on semiring homomorphism', THEORETICAL COMPUTER SCIENCE, vol. 403, no. 2-3, pp. 192-201.
View/Download from: Publisher's site
View description>>
The semiring-based constraint satisfaction problems (semiring CSPs), proposed by Bistarelli, Montanari and Rossi [S. Bistarelli, U. Montanari, F. Rossi, Semiring-based constraint satisfaction and optimization, Journal of the ACM 44 (2) (1997) 201-236], are a very general framework of soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphism. To find optimal solutions of the concrete problem, we first work in the abstract problem and find its optimal solutions, and then use them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. Moreover, for a semiring homomorphism α and a problem P over S, if t is optimal in α(P), then there is an optimal solution t̄ of P such that t̄ has the same value as t in α(P). © 2008 Elsevier B.V. All rights reserved.
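As a minimal illustration of the abstraction scheme described above (a sketch in Python, not code from the paper; the semiring, homomorphism and problem instance are invented for the example):

```python
# Sketch: a one-variable soft-constraint problem over the weighted semiring
# (costs combined by +, compared by min), abstracted to the boolean semiring
# via alpha(c) = (c < INF).  alpha is a semiring homomorphism:
#   alpha(min(a, b)) = alpha(a) or alpha(b),  alpha(a + b) = alpha(a) and alpha(b).
INF = float('inf')

costs = {0: 3, 1: INF, 2: 1}        # concrete problem: cost of each value of x
alpha = lambda c: c < INF           # homomorphism into ({True, False}, or, and)

# Work in the abstract (boolean) problem first: keep only feasible values,
# then optimise the concrete problem over the surviving candidates.
feasible = [v for v, c in costs.items() if alpha(c)]
best = min(feasible, key=lambda v: costs[v])
print(best, costs[best])            # prints: 2 1
```

Note that this particular alpha is not order-reflecting (costs 3 and 1 both map to True), so the abstract problem only prunes candidates and the concrete optimisation step is still required, which is exactly the distinction the paper's main theorem draws.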
Scheidl, T, Ursin, R, Kofler, J, Ramelow, S, Ma, X-S, Herbst, T, Ratschbacher, L, Fedrizzi, A, Langford, NK, Jennewein, T & Zeilinger, A 2008, 'Violation of local realism with freedom of choice', Proc. Natl. Acad. Sci. USA, vol. 107, no. 46, pp. 19708-19713.
View/Download from: Publisher's site
View description>>
Bell's theorem shows that local realistic theories place strong restrictions on observable correlations between different systems, giving rise to Bell's inequality which can be violated in experiments using entangled quantum states. Bell's theorem is based on the assumptions of realism, locality, and the freedom to choose between measurement settings. In experimental tests, 'loopholes' arise which allow observed violations to still be explained by local realistic theories. Violating Bell's inequality while simultaneously closing all such loopholes is one of the most significant still open challenges in fundamental physics today. In this paper, we present an experiment that violates Bell's inequality while simultaneously closing the locality loophole and addressing the freedom-of-choice loophole, also closing the latter within a reasonable set of assumptions. We also explain that the locality and freedom-of-choice loopholes can be closed only within non-determinism, i.e. in the context of stochastic local realism.
Schirmer, SG, Oi, DKL & Devitt, SJ 2008, 'Physics-based mathematical models for quantum devices via experimental system identification', Journal of Physics: Conference Series, vol. 107, p. 012011.
View/Download from: Publisher's site
Stephens, AM, Evans, ZWE, Devitt, SJ & Hollenberg, LCL 2008, 'Asymmetric quantum error correction via code conversion', Physical Review A, vol. 77, no. 6.
View/Download from: Publisher's site
Stephens, AM, Evans, ZWE, Devitt, SJ, Greentree, AD, Fowler, AG, Munro, WJ, O’Brien, JL, Nemoto, K & Hollenberg, LCL 2008, 'Deterministic optical quantum computer using photonic modules', Physical Review A, vol. 78, no. 3.
View/Download from: Publisher's site
Tomamichel, M, Colbeck, R & Renner, R 2008, 'A Fully Quantum Asymptotic Equipartition Property', IEEE Transactions on Information Theory, vol. 55, no. 12, pp. 5840-5847.
View/Download from: Publisher's site
View description>>
The classical asymptotic equipartition property is the statement that, in the limit of a large number of identical repetitions of a random experiment, the output sequence is virtually certain to come from the typical set, each member of which is almost equally likely. In this paper, we prove a fully quantum generalization of this property, where both the output of the experiment and side information are quantum. We give an explicit bound on the convergence, which is independent of the dimensionality of the side information. This naturally leads to a family of Renyi-like quantum conditional entropies, for which the von Neumann entropy emerges as a special case.
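For orientation, the classical property referred to above, and the fully quantum generalization the paper establishes, can be written in standard smooth-entropy notation (my paraphrase, not a quotation from the paper):

```latex
% Classical AEP: the empirical entropy rate concentrates around H(X)
\Pr\left[\,\left|-\tfrac{1}{n}\log p(X_1,\dots,X_n) - H(X)\right| > \varepsilon\right]
  \;\xrightarrow{\;n\to\infty\;}\; 0,
% Fully quantum AEP: the smooth conditional min-entropy rate converges to the
% von Neumann conditional entropy, with quantum side information B
\lim_{n\to\infty}\frac{1}{n}\,H_{\min}^{\varepsilon}\!\left(A^n \middle| B^n\right)_{\rho^{\otimes n}}
  \;=\; H(A|B)_{\rho}.
```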
Witzigmann, B, Tomamichel, M, Steiger, S, Veprek, RG, Kojima, K & Schwarz, UT 2008, 'Analysis of Gain and Luminescence in Violet and Blue GaInN–GaN Quantum Wells', IEEE Journal of Quantum Electronics, vol. 44, no. 2, pp. 144-149.
View/Download from: Publisher's site
Xin, Y & Duan, R 2008, 'Local distinguishability of orthogonal 2 ⊗ 3 pure states', PHYSICAL REVIEW A, vol. 77, no. 1, pp. 1-10.
View/Download from: Publisher's site
View description>>
We present a complete characterization for the local distinguishability of orthogonal 2 ⊗ 3 pure states except for some special cases of three states. Interestingly, we find there is a large class of four or three states that are indistinguishable by local projective measurements and classical communication (LPCC), but can be perfectly distinguished by LOCC. That indicates the ability of LOCC for discriminating 2 ⊗ 3 states is strictly more powerful than that of LPCC, which is strikingly different from the case of multiqubit states. We also show that classical communication plays a crucial role for local distinguishability by constructing a class of m ⊗ n states which require at least 2 min{m,n} - 2 rounds of classical communication in order to achieve a perfect local discrimination. © 2008 The American Physical Society.
Greentree, AD, Jong, LM, Van Donkelaar, JA, Devitt, SJ, Cole, JH, Stephens, AM, Jamieson, DN & Hollenberg, LCL 2008, 'Spatial adiabatic passage as a quantum wire', 2008 International Conference on Nanoscience and Nanotechnology (ICONN), IEEE, Melbourne, AUSTRALIA, pp. 156-159.
View/Download from: Publisher's site
Hsieh, M-H, Luo, Z & Brun, T 2008, 'Secret Keys Assisted Private Classical Communication Capacity over Quantum Channels', Phys. Rev. A, The 8th Asian Conference on Quantum Information Science (AQIS08), AMER PHYSICAL SOC, Seoul, Korea, p. 042306.
View/Download from: Publisher's site
View description>>
We prove a regularized formula for the secret key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity [DHW05RI]. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes private classical communication without secret key assistance as a child protocol.
Lee, J, Guan, H, Blumenstein, M & Loo, Y-C 2008, 'An ANN-Based Backward Prediction Model for Reliable Bridge Management System Implementations Using Limited Inspection Records – Case Studies', IABSE Congress, Chicago 2008: Creating and Renewing Urban Structures – Tall Buildings, Bridges and Infrastructure, International Association for Bridge and Structural Engineering (IABSE).
View/Download from: Publisher's site
View description>>
Computer-aided Bridge Management Systems (BMSs) as Decision Support Systems (DSSs) for effective bridge asset management are used to establish feasible bridge maintenance, repair and rehabilitation (MR&R) strategies which ensure an adequate level of safety at the lowest possible bridge life-cycle cost. To achieve this goal, keeping bridge condition ratings up to date is crucial for a BMS software package. Although most bridge agencies in the past have conducted inspections and maintenance, the form of such bridge inspection records is dissimilar to those required by BMSs. These data inconsistencies inevitably slow down BMS implementations. This paper presents an Artificial Neural Network (ANN) based prediction model, called the Backward Prediction Model (BPM), for generating unavailable years of historical bridge condition ratings using very limited existing inspection records. The BPM employed historical non-bridge datasets such as traffic volumes, populations and climates, to establish correlations with the existing bridge condition ratings from the very limited bridge inspection records. Such correlations can help fill the condition rating gaps required for an effective and accurate BMS implementation. This paper covers a brief description of the BPM methodology and presents nine case studies. The outcome of this study can help establish a comprehensive condition rating database, which will in turn assist in predicting reliable future bridge depreciations.
Lee, T, Shraibman, A & Spalek, R 2008, 'A Direct Product Theorem for Discrepancy', 2008 23rd Annual IEEE Conference on Computational Complexity (CCC 2008), IEEE.
View/Download from: Publisher's site
Mathieson, L & Szeider, S 2008, 'Parameterized Graph Editing with Chosen Vertex Degrees', COMBINATORIAL OPTIMIZATION AND APPLICATIONS, PROCEEDINGS, 2nd International Conference on Combinatorial Optimization and Applications, Springer Berlin Heidelberg, St Johns, CANADA, pp. 13-22.
View/Download from: Publisher's site
Mathieson, L & Szeider, S 2008, 'The parameterized complexity of regular subgraph problems and generalizations', Conferences in Research and Practice in Information Technology Series.
View description>>
We study variants and generalizations of the problem of finding an r-regular subgraph (where r ≥ 3) in a given graph by deleting at most k vertices. Moser and Thilikos (2006) have shown that the problem is fixed-parameter tractable (FPT) if parameterized by (k, r). They asked whether the problem remains fixed-parameter tractable if parameterized by k alone. We answer this question negatively: we show that if parameterized by k alone the problem is W[1]-hard and therefore very unlikely to be fixed-parameter tractable. We also give W[1]-hardness results for variants of the problem where the parameter is the number of vertex and edge deletions allowed, and for a new generalized form of the problem where the obtained subgraph is not necessarily regular but its vertices have certain prescribed degrees. Following this we demonstrate fixed-parameter tractability for the considered problems if the parameter includes the regularity r or an upper bound on the prescribed degrees in the generalized form of the problem. These FPT results are obtained via kernelization, so also provide a practical approach to the problems presented. © 2008, Australian Computer Society, Inc.
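To make the base problem concrete, here is an illustrative brute-force checker (exponential in the number of vertices, so no contradiction with the hardness results above; the function name and example graph are mine, not the authors'):

```python
from itertools import combinations

def has_r_regular_subgraph(adj, r, k):
    """Brute-force check: can deleting at most k vertices of the graph
    (given as {vertex: set_of_neighbours}) leave an r-regular graph?"""
    vertices = list(adj)
    for d in range(k + 1):
        for deleted in combinations(vertices, d):
            kept = set(vertices) - set(deleted)
            # every remaining vertex must have exactly r neighbours remaining
            if kept and all(len(adj[v] & kept) == r for v in kept):
                return True
    return False

# A 4-cycle 0-1-2-3 with a pendant vertex 4 attached to 3:
# deleting the pendant leaves a 2-regular graph.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2, 4}, 4: {3}}
print(has_r_regular_subgraph(adj, r=2, k=1))  # prints: True
```

The FPT results in the paper say, in effect, that when r (or the degree bound) is part of the parameter, instances can first be shrunk by kernelization before any such exhaustive search.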
Qiao, Y & Tartary, C 2008, 'Counting Method for Multi-party Computation over Non-abelian Groups', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Cryptology and Network Security, Springer Berlin Heidelberg, Hong Kong, pp. 162-177.
View/Download from: Publisher's site
View description>>
In the Crypto'07 paper [5], Desmedt et al. studied the problem of achieving secure n-party computation over non-Abelian groups. The function to be computed is f_G(x_1, ..., x_n) := x_1 · ... · x_n, where each participant P_i holds an input x_i from the non-commutative group G. The settings of their study are the passive adversary model, information-theoretic security and black-box group operations over G. They presented three results. The first is that an honest majority is needed to ensure security when computing f_G. Second, when the number of adversaries t satisfies t < n/2, they reduced building such a secure protocol to a graph coloring problem and showed that there exists a deterministic secure protocol computing f_G using exponential communication complexity. Finally, Desmedt et al. turned to analyzing random colorings of a graph to show the existence of a probabilistic protocol with polynomial complexity when t < n/μ, in which μ is a constant less than 2.948. We call their analysis method of random coloring the counting method, as it is based on counting the number of a specific type of random walk. This method is inspiring because, as far as we know, it is the first instance in which the theory of self-avoiding walks appears in multiparty computation. In this paper, we first give an altered exposition of their proof. This modification allows us to adapt the method to a different lattice and reduce the communication complexity by 1/3, which is an important saving for practical implementations of the protocols. We also show the limitation of the counting method by presenting a lower bound for this technique. In particular, we deduce that this approach would not achieve optimal collusion resistance. © 2008 Springer Berlin Heidelberg.
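The function f_G in this abstract is simply the ordered group product, and it is the non-commutativity of G that makes the protocol design difficult. A minimal sketch over the symmetric group S_3, with permutations represented as tuples (all names here are ours, not from the paper):

```python
def compose(p, q):
    """Composition of permutations given as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def f_G(*inputs):
    """f_G(x_1, ..., x_n) = x_1 · x_2 · ... · x_n, in input order."""
    result = inputs[0]
    for x in inputs[1:]:
        result = compose(result, x)
    return result

# Two elements of S_3 that do not commute:
a = (1, 0, 2)  # swap positions 0 and 1
b = (0, 2, 1)  # swap positions 1 and 2
print(f_G(a, b) == f_G(b, a))  # False: in a non-abelian group, order matters
```

Because reordering the inputs changes the result, the parties cannot simply aggregate shares in an arbitrary order as in the abelian case, which is what leads to the graph-coloring reductions described above.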
Schirmer, SG, Kandasamy, G & Devitt, SJ 2008, 'Control paradigms for quantum engineering', 2008 3rd International Symposium on Communications, Control and Signal Processing, 2008 3rd International Symposium on Communications, Control and Signal Processing (ISCCSP), IEEE, St Julians, MALTA, pp. 966+.
View/Download from: Publisher's site
Thornton, J, Faichney, J, Blumenstein, M & Hine, T 2008, 'Character Recognition Using Hierarchical Vector Quantization and Temporal Pooling', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Berlin Heidelberg, pp. 562-572.
View/Download from: Publisher's site
View description>>
In recent years, there has been a cross-fertilization of ideas between computational neuroscience models of the operation of the neocortex and artificial intelligence models of machine learning. Much of this work has focussed on the mammalian visual cortex, treating it as a hierarchically-structured pattern recognition machine that exploits statistical regularities in retinal input. It has further been proposed that the neocortex represents sensory information probabilistically, using some form of Bayesian inference to disambiguate noisy data. In the current paper, we focus on a particular model of the neocortex developed by Hawkins, known as hierarchical temporal memory (HTM). Our aim is to evaluate an important and recently implemented aspect of this model, namely its ability to represent temporal sequences of input within a hierarchically structured vector quantization algorithm. We test this temporal pooling feature of HTM on a benchmark of cursive handwriting recognition problems and compare it to a current state-of-the-art support vector machine implementation. We also examine whether two pre-processing techniques can enhance the temporal pooling algorithm's performance. Our results show that a relatively simple temporal pooling approach can produce recognition rates that approach the current state-of-the-art without the need for extensive tuning of parameters. We also show that temporal pooling performance is surprisingly unaffected by the use of preprocessing techniques. © 2008 Springer Berlin Heidelberg.
Zhang, X, Liu, W, Li, S & Ying, M 2008, 'Reasoning with Cardinal Directions: An Efficient Algorithm.', AAAI, National Conference of the American Association for Artificial Intelligence, AAAI Press, Chicago, Illinois, USA, pp. 387-392.
View description>>
Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model, called Cardinal Direction Calculus (CDC), for representing direction relations between connected plane regions. CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking consistency of basic CDC constraint networks. As one byproduct, we also show that any consistent network of CDC constraints has a canonical realization in the digital plane. The cubic algorithm can also be adapted to cope with disconnected regions, in which case the current best algorithm is of time complexity O(n^5).
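The basic CDC relations this abstract refers to can be sketched for the special case of axis-aligned rectangles: the reference region's bounding box induces nine tiles (NW, N, NE, W, O, E, SW, S, SE), and the relation records which tiles the target region intersects. This is a simplified illustration only (the calculus itself handles arbitrary connected regions, and we test just interior overlap); all names are ours:

```python
def cdc_relation(target, reference):
    """Basic CDC relation of `target` w.r.t. `reference`, both given as
    axis-aligned rectangles (xmin, ymin, xmax, ymax): the set of the
    nine tiles induced by the reference's bounding box that the
    target's interior intersects."""
    tx0, ty0, tx1, ty1 = target
    rx0, ry0, rx1, ry1 = reference
    inf = float('inf')
    # Vertical bands (west / middle / east) and horizontal bands.
    x_bands = [(-inf, rx0, 'W'), (rx0, rx1, ''), (rx1, inf, 'E')]
    y_bands = [(ry1, inf, 'N'), (ry0, ry1, ''), (-inf, ry0, 'S')]
    tiles = set()
    for ylo, yhi, ytag in y_bands:
        for xlo, xhi, xtag in x_bands:
            # open-interval overlap test on the target's interior
            if min(tx1, xhi) > max(tx0, xlo) and min(ty1, yhi) > max(ty0, ylo):
                tiles.add((ytag + xtag) or 'O')  # '' + '' is the centre tile O
    return tiles

# A target strictly above and to the right of the reference:
print(cdc_relation((3, 3, 4, 4), (0, 0, 2, 2)))  # {'NE'}
```

A consistency checker such as the cubic algorithm in the paper would then decide whether a whole network of such tile-set constraints is simultaneously realizable.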