Budka, M, Eastwood, M, Gabrys, B, Kadlec, P, Martin Salvador, M, Schwan, S, Tsakonas, A & Žliobaitė, I 2014, 'From Sensor Readings to Predictions: On the Process of Developing Practical Soft Sensors' in Advances in Intelligent Data Analysis XIII, Springer International Publishing, pp. 49-60.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Bioinformatics-Based Drug Discovery for Protein Kinases' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 255-270.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Conclusion and Future Works' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 271-277.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Data Resources and Applications' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 45-59.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Detecting Inconsistency in Biological Molecular Databases Using Ontologies' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 61-83.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Discovering Conserved and Diverged Patterns of MiRNA Families' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 235-253.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Discovery of Structural and Functional Features Bind to RNA Pseudoknots' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 193-217.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Exploration of Positive Frequent Patterns for AMP-Activated Protein Kinase Regulation' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 85-105.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Interval Based Similarity for Function Classification of RNA Pseudoknots' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 175-192.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Introduction' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 1-43.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Mining Featured Patterns of MiRNA Interaction Based on Sequence and Structure Similarity' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 219-233.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Mining Inhibition Pathways for Protein Kinases on Skeletal Muscle' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 127-150.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Mining Protein Kinase Regulation Using Graphical Models' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 107-126.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Modeling Conserved Structure Patterns for Functional Noncoding RNA' in Intelligent Strategies for Pathway Mining, Springer International Publishing, pp. 151-173.
View/Download from: Publisher's site
Li, M, Li, J, Ou, Y, Zhang, Y, Luo, D, Bahtia, M & Cao, L 2014, 'Coupled K-Nearest Centroid Classification for Non-iid Data' in Transactions on Computational Collective Intelligence XV, Springer Berlin Heidelberg, pp. 89-100.
View/Download from: Publisher's site
Moreno-Mas, M, Estelles-Miguel, S, Merigo, JM & González-Vázquez, E 2014, 'Management by Processes: An Effective Tool for Employee Motivation' in Action-Based Quality Management, Springer International Publishing, pp. 43-51.
View/Download from: Publisher's site
View description>>
© 2014 Springer International Publishing Switzerland. All rights reserved. This article shows, through different case studies, how the introduction of the Management by Processes methodology has not only contributed to improving service quality and efficiency at different types of organizations, but has also reinforced team spirit, achieving high levels of employee motivation. When a radical change is needed in a company, and a process management culture is chosen to replace the function culture, the Management by Processes concept becomes an effective way of introducing behaviour changes. Innovation is generally defined as the development or use of new products, services, production processes, organizational structures or administrative systems that are new to the adopting organization, and Management by Processes is a kind of innovation. It has been verified that explaining the methodology, skills and techniques involved in the Management by Processes approach to employees, and involving them from the first steps by means of training and communication, encourages employees to cooperate actively in the whole process.
Román, CP, García, MPC & Merigó, JM 2014, 'The Integration of Social Networks in the Competitiveness of Cooperation Networks: An Analysis to be Applied in Pharmacies' in Strategies in E-Business, Springer US, pp. 105-113.
View/Download from: Publisher's site
View description>>
© 2014 Springer Science+Business Media New York. All rights are reserved. Concepts such as virtual spaces networks, interactivity and interconnectivity are an essential part of the literature used by authors to refer to the future of the information society and knowledge. These days, there is considerable discussion about the use and impact of ICT in the development of enterprises. The research carried out in this paper is based on the evaluation of key variables for the implementation of social network and communication solutions in the pharmaceutical sector (and in other companies in the same industry). This research aims to validate the proposed model and thus generalise its use. The application of the model shows us coherent results, which encourages its validation and facilitates the analysis and diagnosis of the impact of the diverse solutions of social networks on the different needs of a range of sectorial groups.
Xu, G, Wu, Z, Cao, J & Tao, H 2014, 'Models for Community Dynamics' in Alhajj, R & Rokne, J (eds), Encyclopedia of Social Network Analysis and Mining, Springer New York, pp. 969-982.
View/Download from: Publisher's site
View description>>
In the realm of network science, a complex network is a graph with non-trivial topological features. A social network, one of the important real-world complex networks, is usually modeled as a graph, where the nodes are called actors and the edges connecting nodes represent various ties. A dynamic network is usually defined as a sequence of snapshot graphs indexed by time. Community dynamics aims to process such a dynamic network to produce a sequence of communities, that is, one community for each timestamp. Different from traditional community detection methods on static networks, community dynamics assumes that obtaining the communities of the current timestamp relies on the results of the previous timestamps.
Agarwal, N, Zhou, A & Xu, G 2014, 'Social cyber systems—Challenges, opportunities, and beyond', Journal of Systems and Software, vol. 94, pp. 1-3.
View/Download from: Publisher's site
Apeh, E, Gabrys, B & Schierz, A 2014, 'Customer profile classification: To adapt classifiers or to relabel customer profiles?', Neurocomputing, vol. 132, pp. 3-13.
View/Download from: Publisher's site
View description>>
Customer profiles are, by definition, made up of factual and transactional data. It is often the case that due to reasons such as high cost of data acquisition and/or protection, only the transactional data are available for data mining operations. Transactional data, however, tend to be highly sparse and skewed due to a large proportion of customers engaging in very few transactions. This can result in a bias in the prediction accuracy of classifiers built using them. The problem is even more so when identifying and classifying changing customer profiles whose classification may change either due to a concept drift or due to a change in buying behaviour. This paper presents a comparative investigation of 4 approaches for classifying dynamic customer profiles built using evolving transactional data over time. The changing class values of the customer profiles were analysed together with the challenging problem of deciding whether to change the class label or adapt the classifier. The results from the experiments we conducted on a highly sparse and skewed real-world transactional data show that adapting the classifiers leads to more stable classification of customer profiles in the shorter time windows; while relabelling the changed customer profile classes leads to more accurate and stable classification in the longer time windows. © 2013 Elsevier B.V.
Arsene, CTC & Gabrys, B 2014, 'Mixed simulation-state estimation of water distribution systems based on a least squares loop flows state estimator', Applied Mathematical Modelling, vol. 38, no. 2, pp. 599-619.
View/Download from: Publisher's site
View description>>
This paper presents a combined simulation and state estimation algorithm for water distribution systems that takes the loop corrective flows and the variation of nodal demands as independent variables and optimizes the Least Squares (LS) criterion. The combination of the two algorithms for simulation and state estimation is based on the delimitation of regions in the water network that are state estimated, while for the remaining parts of the water network the simulation task is performed. The sizes of the respective delimitations can be based either on the hydraulic or topological distances from the real pressure measurements, flow measurements or measured nodal consumptions. The delimitations are realized through modifications of the inverse of the upper form tree incidence matrix, which is used to construct the respective state estimated or simulated water network areas: the simulated nodes and pipes have the corresponding incidence columns zeroed in the inverse of the upper form tree incidence matrix, while the state estimated nodes and pipes keep the values of their incidence described in the corresponding columns of that matrix. The combined novel algorithm can also be applied to regions of water distribution systems that contain low pipe flows, so as to avoid convergence problems in the numerical algorithm. The result is an efficient and effective novel mixed simulation-state estimation, which is implemented on realistic water distribution systems. © 2013 Elsevier Inc.
Ashraf, J, Hussain, OK & Hussain, FK 2014, 'Empirical analysis of domain ontology usage on the Web: eCommerce domain in focus', CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, vol. 26, no. 5, pp. 1157-1184.
View/Download from: Publisher's site
View description>>
In the recent past, there has been an exponential growth in Resource Description Framework data on the web, known as the web of data. The emergence of the web of data is transforming the existing web from a document-sharing medium to a decentralized knowledge platform for publishing and sharing information between humans and computers. To enable common understanding between different users, domain ontologies are being developed and deployed to annotate information on the web. This semantically annotated information is then accessed by machines to extract and aggregate information, on the basis of the underlying ontologies used. To effectively and efficiently access data on the web, insight into ontology usage is pivotal, because this assists users in experiencing the benefits offered by the Semantic Web. However, such an approach has not been proposed in the literature. In this paper, we present a pragmatic approach to the analysis of domain ontology usage on the web. We propose metrics to measure the use of domain ontology constructs on the web from different aspects. To comprehensively understand the usage patterns of conceptual knowledge, instance data, and ontology co-usability, we considered the GoodRelations ontology as the domain ontology and built a dataset by collecting structured data from 211 web-based data sources that have published information using the domain ontology. The dataset is analyzed using the proposed metrics, and observations are presented along with their usability and applicability to the different users of the Semantic Web. Copyright © 2013 John Wiley & Sons, Ltd.
Belles-Sampera, J, Merigó, JM, Guillén, M & Santolino, M 2014, 'Indicators for the characterization of discrete Choquet integrals', Information Sciences, vol. 267, pp. 201-216.
View/Download from: Publisher's site
Bo Liu, Yanshan Xiao, Yu, PS, Zhifeng Hao & Longbing Cao 2014, 'An Efficient Approach for Outlier Detection with Imperfect Data Labels', IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 7, pp. 1602-1616.
View/Download from: Publisher's site
View description>>
The task of outlier detection is to identify data objects that are markedly different from or inconsistent with the normal set of data. Most existing solutions typically build a model using the normal data and identify outliers that do not fit the represented model very well. However, in addition to normal data, there also exist limited negative examples or outliers in many applications, and data may be corrupted such that the outlier detection data is imperfectly labeled. These make outlier detection far more difficult than the traditional ones. This paper presents a novel outlier detection approach to address data with imperfect labels and incorporate limited abnormal examples into learning. To deal with data with imperfect labels, we introduce likelihood values for each input data which denote the degree of membership of an example toward the normal and abnormal classes respectively. Our proposed approach works in two steps. In the first step, we generate a pseudo training dataset by computing likelihood values of each example based on its local behavior. We present kernel k-means clustering method and kernel LOF-based method to compute the likelihood values. In the second step, we incorporate the generated likelihood values and limited abnormal examples into SVDD-based learning framework to build a more accurate classifier for global outlier detection. By integrating local and global outlier detection, our proposed method explicitly handles data with imperfect labels and enhances the performance of outlier detection. Extensive experiments on real life datasets have demonstrated that our proposed approaches can achieve a better tradeoff between detection rate and false alarm rate as compared to state-of-the-art outlier detection approaches.
Bouwmans, T, Gonzàlez, J, Shan, C, Piccardi, M & Davis, L 2014, 'Special issue on background modeling for foreground detection in real-world dynamic scenes', Machine Vision and Applications, vol. 25, no. 5, pp. 1101-1103.
View/Download from: Publisher's site
Brandl, MB, Pasquier, E, Li, F, Beck, D, Zhang, S, Zhao, H, Kavallaris, M & Wong, STC 2014, 'Computational analysis of image-based drug profiling predicts synergistic drug combinations: Applications in triple-negative breast cancer', MOLECULAR ONCOLOGY, vol. 8, no. 8, pp. 1548-1560.
View/Download from: Publisher's site
View description>>
An imaged-based profiling and analysis system was developed to predict clinically effective synergistic drug combinations that could accelerate the identification of effective multi-drug therapies for the treatment of triple-negative breast cancer and other challenging malignancies. The identification of effective drug combinations for the treatment of triple-negative breast cancer (TNBC) was achieved by integrating high-content screening, computational analysis, and experimental biology. The approach was based on altered cellular phenotypes induced by 55 FDA-approved drugs and biologically active compounds, acquired using fluorescence microscopy and retained in multivariate compound profiles. Dissimilarities between compound profiles guided the identification of 5 combinations, which were assessed for qualitative interaction on TNBC cell growth. The combination of the microtubule-targeting drug vinblastine with KSP/Eg5 motor protein inhibitors monastrol or ispinesib showed potent synergism in 3 independent TNBC cell lines, which was not substantiated in normal fibroblasts. The synergistic interaction was mediated by an increase in mitotic arrest with cells demonstrating typical ispinesib-induced monopolar mitotic spindles, which translated into enhanced apoptosis induction. The antitumour activity of the combination vinblastine/ispinesib was confirmed in an orthotopic mouse model of TNBC. Compared to single drug treatment, combination treatment significantly reduced tumour growth without causing increased toxicity. Image-based profiling and analysis led to the rapid discovery of a drug combination effective against TNBC in vitro and in vivo, and has the potential to lead to the development of new therapeutic options in other hard-to-treat cancers.
Bubna-Litic, K & Stoianoff, NP 2014, 'Carbon Pricing and Renewable Energy Innovation: A Comparison of Australian, British and Canadian Carbon Pricing Policies', Environmental and Planning Law Journal, vol. 31, no. 5, pp. 368-384.
Cao, L 2014, 'Non-IIDness Learning in Behavioral and Social Data', The Computer Journal, vol. 57, no. 9, pp. 1358-1370.
View/Download from: Publisher's site
View description>>
Most of the classic theoretical systems and tools in statistics, data mining and machine learning are built on the fundamental assumption of IIDness, which assumes the independence and identical distribution of underlying objects, attributes and/or values. However, complex behavioral and social problems often exhibit strong couplings and heterogeneity between values, attributes and objects (i.e., non-IIDness). This fundamentally challenges the IIDness-based learning methodologies and techniques. This paper presents a high-level overview of the needs, challenges and opportunities of non-IIDness learning for handling complex behavioral and social problems. By reviewing the nature and issues of applying classic IIDness-based algorithms in frequent pattern mining, clustering and classification to complex behavioral and social applications, concepts, structures, frameworks and exemplar techniques are discussed for non-IIDness learning. Case studies, related work and prospects of non-IIDness learning are presented. Non-IIDness learning is also a fundamental issue in big data analytics. © The British Computer Society 2013.
Cao, L & Joachims, T 2014, 'Behavior Computing', IEEE Intelligent Systems, vol. 29, no. 4, pp. 62-66.
Cao, L, Joachims, T, Wang, C, Gaussier, E, Li, J, Ou, Y, Luo, D, Zafarani, R, Liu, H, Xu, G, Wu, Z, Pasi, G, Zhang, Y, Yang, X, Zha, H, Serra, E & Subrahmanian, VS 2014, 'Behavior Informatics: A New Perspective', IEEE Intelligent Systems, vol. 29, no. 4, pp. 62-80.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. This installment of Trends & Controversies provides an array of perspectives on the latest research in behavior informatics. Longbing Cao introduces the work in 'Behavior Informatics: A New Perspective.' Then, in 'Behavior Computing,' Longbing Cao and Thorsten Joachims provide a basic overview of the topic. Next is 'Coupled Behavior Representation, Modeling, Analysis, and Reasoning' by Can Wang, Longbing Cao, Eric Gaussier, Jinjiu Li, Yuming Ou, and Dan Luo. The fourth article is 'Behavior Analysis in Social Media,' by Reza Zafarani and Huan Liu. The fifth article is 'Group Recommendation and Behavior,' by Guandong Xu and Zhiang Wu. Gabriella Pasi wrote the sixth article, 'Web Search and Behavior.' The seventh article, 'Behaviors of IPTV Users,' is by Ya Zhang, Xiaokang Yang, and Hongyuan Zha. Finally, 'Should Behavioral Models of Terror Groups Be Disclosed?' is by Edoardo Serra and V.S. Subrahmanian.
Chacon, D, Beck, D, Perera, D, Wong, JWH & Pimanda, JE 2014, 'BloodChIP: a database of comparative genome-wide transcription factor binding profiles in human blood cells', Nucleic Acids Research, vol. 42, no. D1, pp. D172-D177.
View/Download from: Publisher's site
View description>>
The BloodChIP database (http://www.med.unsw.edu.au/CRCWeb.nsf/page/ BloodChIP) supports exploration and visualization of combinatorial transcription factor (TF) binding at a particular locus in human CD34-positive and other normal and leukaemic cells or retrieval of target gene sets for user-defined combinations of TFs across one or more cell types. Increasing numbers of genome-wide TF binding profiles are being added to public repositories, and this trend is likely to continue. For the power of these data sets to be fully harnessed by experimental scientists, there is a need for these data to be placed in context and easily accessible for downstream applications. To this end, we have built a user-friendly database that has at its core the genome-wide binding profiles of seven key haematopoietic TFs in human stem/progenitor cells. These binding profiles are compared with binding profiles in normal differentiated and leukaemic cells. We have integrated these TF binding profiles with chromatin marks and expression data in normal and leukaemic cell fractions. All queries can be exported into external sites to construct TF-gene and protein-protein networks and to evaluate the association of genes with cellular processes and tissue expression. © 2013 The Author(s). Published by Oxford University Press.
Chang, X, Nie, F, Wang, S, Yang, Y, Zhou, X & Zhang, C 2014, 'Compound Rank-k Projections for Bilinear Analysis', IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 7, pp. 1502-1513.
View/Download from: Publisher's site
View description>>
In many real-world applications, data are represented by matrices or high-order tensors. Despite the promising performance, the existing two-dimensional discriminant analysis algorithms employ a single projection model to exploit the discriminant information for projection, making the model less flexible. In this paper, we propose a novel Compound Rank-k Projection (CRP) algorithm for bilinear analysis. CRP deals with matrices directly without transforming them into vectors, and it therefore preserves the correlations within the matrix and decreases the computation complexity. Different from the existing two-dimensional discriminant analysis algorithms, objective function values of CRP increase monotonically. In addition, CRP utilizes multiple rank-k projection models to enable a larger search space in which the optimal solution can be found. In this way, the discriminant ability is enhanced.
Chang, X, Nie, F, Yang, Y & Huang, H 2014, 'A Convex Sparse PCA for Feature Analysis', ACM Transactions on Knowledge Discovery from Data, vol. 11, no. 1, pp. 1-16.
View/Download from: Publisher's site
View description>>
Principal component analysis (PCA) has been widely applied to dimensionality reduction and data pre-processing for different applications in engineering, biology and social science. Classical PCA and its variants seek linear projections of the original variables to obtain a low-dimensional feature representation with maximal variance. One limitation is that it is very difficult to interpret the results of PCA. In addition, the classical PCA is vulnerable to certain noisy data. In this paper, we propose a convex sparse principal component analysis (CSPCA) algorithm and apply it to feature analysis. First we show that PCA can be formulated as a low-rank regression optimization problem. Based on the discussion, the l2,1-norm minimization is incorporated into the objective function to make the regression coefficients sparse, thereby robust to the outliers. In addition, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which in turn provides the output with good interpretability. With the output of our CSPCA, we can effectively analyze the importance of each feature under the PCA criteria. The objective function is convex, and we propose an iterative algorithm to optimize it. We apply the CSPCA algorithm to feature selection and conduct extensive experiments on six different benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms.
Chen, X, Liu, L, Luo, D, Xu, G, Lu, Y, Liu, M & Gao, R 2014, 'A Spectral Clustering Algorithm Based on Hierarchical Method', pp. 111-123.
View/Download from: Publisher's site
View description>>
Most clustering algorithms were designed to cluster data in a convex spherical sample space, but they perform poorly on more complex structures. In the past few years, several spectral clustering algorithms were proposed to cluster arbitrarily shaped data in various real applications, including image processing and web analysis. However, most of these algorithms were based on k-means, which is a randomized algorithm, making them prone to falling into locally optimal solutions. Hierarchical methods can handle the local optimum well because they organize data into different groups at different levels. In this paper, we propose a novel clustering algorithm called spectral clustering algorithm based on hierarchical clustering (SCHC), which combines the advantages of hierarchical clustering and spectral clustering algorithms to avoid local optimum issues. The experiments on both synthetic data sets and real data sets show that SCHC outperforms six other popular clustering algorithms. The method is simple but is shown to be efficient in clustering both convex shaped data and arbitrarily shaped data.
D'Agostino, R, Day, S, Greenhouse, J & Ryan, L 2014, 'Editors' Note', Statistics in Medicine, vol. 33, no. 1, pp. 1-1.
View/Download from: Publisher's site
Deng, S, Huang, L & Xu, G 2014, 'Social network-based service recommendation with trust enhancement', Expert Systems with Applications, vol. 41, no. 18, pp. 8075-8084.
View/Download from: Publisher's site
View description>>
Given the increasing applications of service computing and cloud computing, a large number of Web services are deployed on the Internet, triggering research on Web service recommendation. Beyond service QoS, the use of user feedback is becoming the current trend in service recommendation. As in traditional recommender systems, sparsity, cold-start and trustworthiness are major issues challenging service recommendation when adopting similarity-based approaches. Meanwhile, with the prevalence of social networks, people nowadays interact actively with various computers and users, resulting in a huge volume of available data, such as service information, user-service ratings, interaction logs, and user relationships. Therefore, how to incorporate the trust relationships in social networks with user feedback for service recommendation motivates this work. In this paper, we propose a social network-based service recommendation method with trust enhancement known as RelevantTrustWalker. First, a matrix factorization method is utilized to assess the degree of trust between users in a social network. Next, an extended random walk algorithm is proposed to obtain recommendation results. To evaluate the accuracy of the algorithm, experiments on a real-world dataset are conducted, and the experimental results indicate that both the quality of the recommendation and the speed of the method are improved compared with existing algorithms. © 2014 Elsevier Ltd. All rights reserved.
Deng, Z, Choi, K-S, Cao, L & Wang, S 2014, 'T2FELA: Type-2 Fuzzy Extreme Learning Algorithm for Fast Training of Interval Type-2 TSK Fuzzy Logic System', IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 4, pp. 664-676.
View/Download from: Publisher's site
View description>>
A challenge in modeling type-2 fuzzy logic systems is the development of efficient learning algorithms to cope with the ever increasing size of real-world data sets. In this paper, the extreme learning strategy is introduced to develop a fast training algorithm for interval type-2 Takagi-Sugeno-Kang fuzzy logic systems. The proposed algorithm, called type-2 fuzzy extreme learning algorithm (T2FELA), has two distinctive characteristics. First, the parameters of the antecedents are randomly generated and parameters of the consequents are obtained by a fast learning method according to the extreme learning mechanism. In addition, because the obtained parameters are optimal in the sense of minimizing the norm, the resulting fuzzy systems exhibit better generalization performance. The experimental results clearly demonstrate that the training speed of the proposed T2FELA algorithm is superior to that of the existing state-of-the-art algorithms. The proposed algorithm also shows competitive performance in generalization abilities. © 2013 IEEE.
Diffner, E, Beck, D, Gudgin, E, Thoms, JAI, Knezevic, K, Pridans, C, Foster, S, Goode, D, Khong Lim, W, Boelen, L, Metzeler, KH, Micklem, G, Bohlander, SK, Buske, C, Burnett, A, Ottersbach, K, Vassiliou, GS, Olivier, J, Wong, JWH, Gottgens, B, Huntly, BJ & Pimanda, JE 2014, 'Activity of a heptad of transcription factors is associated with stem cell programs and clinical outcome in acute myeloid leukemia (correction to Blood, 2013, vol. 121, no. 12, pp. 2289-2300)', Blood, vol. 123, no. 18, pp. 2901-2901.
View/Download from: Publisher's site
Dong, H & Hussain, FK 2014, 'Self-Adaptive Semantic Focused Crawler for Mining Services Information Discovery', IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, vol. 10, no. 2, pp. 1616-1626.
View/Download from: Publisher's site
Dong, XJ, Liu, EQ, Yang, J & Wu, Q 2014, 'Visible and infrared automatic image registration based on SLER', Hongwai Yu Haomibo Xuebao/Journal of Infrared and Millimeter Waves, vol. 33, no. 1, pp. 90-97.
View/Download from: Publisher's site
View description>>
A novel approach to the problem of visible and infrared automatic image registration is proposed. The registration is performed by extracting affine covariant regions through a same level extremal region (SLER) detector on a gray gradient image. Then, a hypergraph matching algorithm is employed to obtain identical key points. The approach is especially suitable for registering multi-sensor infrared images where the quality of the images or the corresponding edge maps is worse than that of their counterparts in a common optical image. Experiments performed on several challenging real image pairs show that our proposed method achieves better performance than other approaches.
Durao, F, Bayyapu, K, Xu, G, Dolog, P & Lage, R 2014, 'Expanding user’s query with tag-neighbors for effective medical information retrieval', Multimedia Tools and Applications, vol. 71, no. 2, pp. 905-929.
View/Download from: Publisher's site
Fong, S, Deb, S, Yang, X-S & Li, J 2014, 'Feature Selection in Life Science Classification: Metaheuristic Swarm Search', IT Professional, vol. 16, no. 4, pp. 24-29.
View/Download from: Publisher's site
Fu, B, Wang, Z, Xu, G & Cao, L 2014, 'Multi-label learning based on iterative label propagation over graph', PATTERN RECOGNITION LETTERS, vol. 42, pp. 85-90.
View/Download from: Publisher's site
View description>>
One key challenge in multi-label learning is how to exploit label dependency effectively, and existing methods mainly address this issue by training a prediction model for each label based on the combination of the original features and the labels on which it depends. However, the influence of label dependency may be suppressed by the significant imbalance in dimensionality between the feature set and the dependent label set, and the dynamic interaction between labels cannot be utilized effectively in this way. In this paper, we propose a new framework to exploit the dependencies between labels iteratively and interactively. Every label's prediction is updated through an iterative process of propagation, rather than being determined directly by a prediction model. Specifically, we utilize a graph model to encode the dependencies between labels, and employ the random-walk with restart (RWR) strategy to propagate the dependency among all labels iteratively until the predictions for all the labels converge. We validate our approach by experiments, and the results demonstrate that it yields significant improvements compared with several state-of-the-art algorithms.
Ghous, H, Kennedy, PJ, Ho, N & Catchpoole, DR 2014, 'Comparing Functional Visualisations of Lists of Genes using Singular Value Decomposition', Journal of Research and Practice in Information Technology, vol. 47, no. 1, pp. 47-76.
View description>>
Progress in understanding core pathways of cancer requires analysis of many genes. New insights are hampered due to the lack of tools to make sense of large lists of genes identified using high throughput technology. Data mining, particularly visualisation that finds relationships between genes and the Gene Ontology (GO), can assist in functional understanding. This paper addresses the question using GO annotations for functional understanding of genes. We augment genes with GO terms using two similarity measures: a Hop-based measure and an Information Content based measure, and visualise with Singular Value Decomposition (SVD). The results demonstrate that SVD visualisation of GO augmented genes matches the biological understanding expected in simulated and real-life data. Differences are observed in visualisation of GO terms, where the information content method produces more tightly-packed clusters than the hop-based method.
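The SVD projection step used for visualisation in the abstract above can be sketched as follows. This is a generic sketch under assumptions: the gene-by-feature matrix is a toy stand-in for the GO-augmented similarity features the paper builds.

```python
import numpy as np

def svd_project(X, k=2):
    """Project rows of a gene-by-feature matrix onto the top-k
    singular directions for 2-D visualisation (illustrative sketch)."""
    # centre the features, then take the thin SVD
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # n_genes x k plotting coordinates

# toy matrix: 4 "genes" x 3 "GO-derived features"; the first two genes
# share annotations, as do the last two
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 1.0],
              [0.1, 0.9, 1.0]])
coords = svd_project(X, k=2)
```

Genes with similar annotation profiles land close together in the 2-D plot, which is the property the paper's visualisations rely on.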
Gil-Lafuente, AM, Merigó, JM & Vizuete, E 2014, 'Analysis of luxury resort hotels by using the Fuzzy Analytic Hierarchy Process and the Fuzzy Delphi Method', Economic Research-Ekonomska Istraživanja, vol. 27, no. 1, pp. 244-266.
View/Download from: Publisher's site
Gong, C, Fu, K, Wu, Q, Tu, E & Yang, J 2014, 'Semi-supervised classification with pairwise constraints', Neurocomputing, vol. 139, pp. 130-137.
View/Download from: Publisher's site
View description>>
Graph-based semi-supervised learning has been intensively investigated over a long history. However, existing algorithms only utilize the similarity information between examples for graph construction, so their discriminative ability is rather limited. In order to overcome this limitation, this paper considers both similarity and dissimilarity constraints, and constructs a signed graph with positive and negative edge weights to improve the classification performance. The proposed algorithm is therefore termed the Constrained Semi-supervised Classifier (CSSC). A novel smoothness regularizer is proposed to make the 'must-linked' examples obtain similar labels, and 'cannot-linked' examples obtain totally different labels. Experiments on a variety of synthetic and real-world datasets demonstrate that CSSC achieves better performance than some state-of-the-art semi-supervised learning algorithms, such as Harmonic Functions, Linear Neighborhood Propagation, LapRLS, LapSVM, and Safe Semi-supervised Support Vector Machines. © 2014 Elsevier B.V.
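The effect of a signed graph described in the abstract above can be illustrated with a minimal propagation sketch: positive edges pull labels together ('must-link') and negative edges push them apart ('cannot-link'). This is a simplified stand-in for CSSC's regularized formulation, with a toy graph as the assumption.

```python
import numpy as np

def signed_propagate(W, labels, n_iter=100):
    """Label propagation on a signed graph (illustrative sketch).
    labels: +1/-1 for labelled nodes, 0 for unlabelled ones."""
    f = labels.astype(float).copy()
    labelled = labels != 0
    denom = np.abs(W).sum(axis=1)
    denom[denom == 0] = 1.0
    for _ in range(n_iter):
        f_new = W @ f / denom   # negative weights flip the neighbour's label
        f_new[labelled] = labels[labelled]  # clamp known labels
        f = f_new
    return f

# node 2 is must-linked to node 0 (labelled +1) and
# cannot-linked to node 1 (labelled -1)
W = np.array([[0.,  0.,  1.],
              [0.,  0., -1.],
              [1., -1.,  0.]])
labels = np.array([1, -1, 0])
f = signed_propagate(W, labels)
```

Both constraints agree here, so the unlabelled node receives a confidently positive score; with similarity-only weights the dissimilarity evidence would be lost.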
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2014, 'Discovering a vaccine against neosporosis using computers: is it feasible?', Trends in Parasitology, vol. 30, no. 8, pp. 401-411.
View/Download from: Publisher's site
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2014, 'Enhancing In Silico Protein-Based Vaccine Discovery for Eukaryotic Pathogens Using Predicted Peptide-MHC Binding and Peptide Conservation Scores', PLOS ONE, vol. 9, no. 12.
View/Download from: Publisher's site
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2014, 'Vacceed: a high-throughput in silico vaccine candidate discovery pipeline for eukaryotic pathogens based on reverse vaccinology', Bioinformatics, vol. 30, no. 16, pp. 2381-2383.
View/Download from: Publisher's site
View description>>
Summary: We present Vacceed, a highly configurable and scalable framework designed to automate the process of high-throughput in silico vaccine candidate discovery for eukaryotic pathogens. Given thousands of protein sequences from the target pathogen as input, the main output is a ranked list of protein candidates determined by a set of machine learning algorithms. Vacceed has the potential to save time and money by reducing the number of false candidates allocated for laboratory validation. Vacceed, if required, can also predict protein sequences from the pathogen's genome. © The Author 2014.
Gu, Y, Yang, Z, Xu, G, Nakano, M, Toyoda, M & Kitsuregawa, M 2014, 'Exploration on efficient similar sentences extraction', World Wide Web, vol. 17, no. 4, pp. 595-626.
View/Download from: Publisher's site
View description>>
Measuring the semantic similarity between sentences is an essential issue for many applications, such as text summarization, Web page retrieval, question-answer models, image extraction, and so forth. A few studies have explored this issue with several techniques, e.g., knowledge-based strategies, corpus-based strategies, hybrid strategies, etc. Most of these studies focus on how to improve the effectiveness of the problem. In this paper, we address the efficiency issue, i.e., for a given sentence collection, how to efficiently discover the top-k semantically similar sentences to a query. Previous methods cannot handle big data efficiently, i.e., applying such strategies directly is time consuming because every candidate sentence needs to be tested. In this paper, we propose efficient strategies to tackle this problem based on a general framework. The basic idea is that for each similarity we build a corresponding index during preprocessing. Traversing these indices in the querying process avoids testing many candidates, so as to improve efficiency. Moreover, an optimal aggregation algorithm is introduced to assemble these similarities. Our framework is general enough that many similarity metrics can be incorporated, as will be discussed in the paper. We conduct extensive experimental evaluation on three real datasets to evaluate the efficiency of our proposal. In addition, we illustrate the trade-off between effectiveness and efficiency. The experimental results demonstrate that our proposal outperforms the state-of-the-art techniques on efficiency while keeping the same high precision. © 2013 Springer Science+Business Media New York.
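The "traverse per-similarity indices with early termination" idea in the abstract above is the setting of Fagin's Threshold Algorithm, a classic optimal aggregation algorithm of the kind referred to. The sketch below is illustrative, not the paper's method; the two toy score-sorted indexes and the sum aggregate are assumptions.

```python
def threshold_topk(indexes, k):
    """Threshold Algorithm sketch: each index is a list of (doc, score)
    pairs sorted by descending score for one similarity; the aggregate
    score is the sum over indexes. Stop scanning once k found documents
    already beat the best possible score of any unseen document."""
    lookup = [dict(ix) for ix in indexes]       # random access per index
    best = {}                                   # doc -> aggregate score
    for depth in range(max(len(ix) for ix in indexes)):
        threshold = 0.0
        for ix, lk in zip(indexes, lookup):
            if depth < len(ix):
                doc, score = ix[depth]
                threshold += score              # upper bound for unseen docs
                if doc not in best:             # probe full aggregate once
                    best[doc] = sum(l.get(doc, 0.0) for l in lookup)
        top = sorted(best.items(), key=lambda kv: -kv[1])[:k]
        if len(top) == k and top[-1][1] >= threshold:
            return top                          # early termination
    return sorted(best.items(), key=lambda kv: -kv[1])[:k]

ix1 = [('a', 0.9), ('b', 0.8), ('c', 0.1)]
ix2 = [('b', 0.7), ('a', 0.6), ('c', 0.2)]
top2 = threshold_topk([ix1, ix2], k=2)
```

On this toy data the scan stops at depth 2 without ever probing document `c`, which is how index traversal avoids testing every candidate.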
Guglyuvatyy, E & Stoianoff, NP 2014, 'Applying the Delphi Method As a Research Technique in Law and Policy'.
Guglyuvatyy, E & Stoianoff, NP 2014, 'Applying the Delphi Method as a Research Technique in Tax Law and Policy', Australian Tax Forum, vol. 30, no. 1, pp. 179-204.
View description>>
This article examines the Delphi method as a tool for legal research that can be used to facilitate transparent and informative policy-making in a variety of fields including tax policy. It points to strengths and limitations of the technique based on the findings of the Delphi study conducted to assist in the assessment of fiscal and more general market-based instruments (referred to in this article as carbon pricing instruments) that could be used to tackle climate change in Australia. Whether the Delphi method is utilised in empirical or theoretical legal research or in legal and policy decision-making, this article demonstrates the strength of the technique in providing transparent and justified results, which in turn reinforces the utility of the method as a legal research and/or decision-making tool.
He, W, Xu, G & Kruck, SE 2014, 'Online is education for the 21st century', Journal of Information Systems Education, vol. 25, no. 2, pp. 101-105.
View description>>
Online teaching and learning have become increasingly common in higher educational institutions. These higher educational institutions realize the growing importance of online learning in information systems/information technology (IS/IT) education and are now offering online IS/IT courses and programs to students. However, designing, developing, teaching, and assessing an online IS/IT course effectively is often a challenge. Many IS/IT instructors are new to online teaching and need orientation and training for their own readiness in designing, developing, teaching, and assessing IS/IT courses in the online environment. It is recognized that effective faculty are key to student success in online courses and to the success of online programs (Meyer and Jones, 2012). Therefore, it is imperative that administrators and instructors of IS/IT courses and programs learn more of the best practices of online teaching for high student success. This support to instructors and administrators is the purpose of the Special Issue of the Journal of Information Systems Education.
Huque, MH, Bondell, HD & Ryan, L 2014, 'On the impact of covariate measurement error on spatial regression modelling', Environmetrics, vol. 25, no. 8, pp. 560-570.
View/Download from: Publisher's site
Kennedy, PJ 2014, 'Redesign of Data Analytics Major: Challenges and Lessons Learned', Procedia - Social and Behavioral Sciences, vol. 116, pp. 1373-1377.
View/Download from: Publisher's site
Kusakunniran, W, Wu, Q, Zhang, J, Li, H & Wang, L 2014, 'Recognizing Gaits Across Views Through Correlated Motion Co-Clustering', IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 696-709.
View/Download from: Publisher's site
View description>>
Human gait is an important biometric feature, which can be used to identify a person remotely. However, view change can cause significant difficulties for gait recognition because it will alter available visual features for matching substantially. Moreover, it is observed that different parts of gait will be affected differently by view change. By exploring relations between two gaits from two different views, it is also observed that a part of gait in one view is more related to a typical part than any other parts of gait in another view. A new method proposed in this paper considers such variance of correlations between gaits across views that is not explicitly analyzed in the other existing methods. In our method, a novel motion co-clustering is carried out to partition the most related parts of gaits from different views into the same group. In this way, relationships between gaits from different views will be more precisely described based on multiple groups of the motion co-clustering instead of a single correlation descriptor. Inside each group, a linear correlation between gait information across views is further maximized through canonical correlation analysis (CCA). Consequently, gait information in one view can be projected onto another view through a linear approximation under the trained CCA subspaces. In the end, a similarity between gaits originally recorded from different views can be measured under the approximately same view. Comprehensive experiments based on widely adopted gait databases have shown that our method outperforms the state-of-the-art. © 2013 IEEE.
Li, CX, Chen, P, Wang, RJ, Wang, XJ, Su, YR & Li, J 2014, 'PPI-IRO: a two-stage method for protein-protein interaction extraction based on interaction relation ontology', International Journal of Data Mining and Bioinformatics, vol. 10, no. 1, pp. 98-98.
View/Download from: Publisher's site
View description>>
Mining Protein-Protein Interactions (PPIs) from the fast-growing biomedical literature resources has been proven an effective approach for the identification of biological regulatory networks. This paper presents a novel method based on the idea of an Interaction Relation Ontology (IRO), which specifies and organises words describing various protein interaction relationships. Our method is a two-stage PPI extraction method. At first, the IRO is applied in a binary classifier to determine whether sentences contain a relation or not. Then, the IRO is taken to guide PPI extraction by building a sentence dependency parse tree. Comprehensive and quantitative evaluations and detailed analyses are used to demonstrate the significant performance of IRO on relation sentence classification and PPI extraction. Our PPI extraction method yielded a recall of around 80% and 90% and an F1 of around 54% and 66% on the AIMed and BioInfer corpora, respectively, which is superior to most existing extraction methods. Copyright © 2014 Inderscience Enterprises Ltd.
Liang, G, Zhu, X & Zhang, C 2014, 'The effect of varying levels of class distribution on bagging for different algorithms: An empirical study', International Journal of Machine Learning and Cybernetics, vol. 5, no. 1, pp. 63-71.
View/Download from: Publisher's site
View description>>
Many real world applications involve highly imbalanced class distribution. Research into learning from imbalanced class distribution is considered to be one of ten challenging problems in data mining research, and it has increasingly captured the attention of both academia and industry. In this work, we study the effects of different levels of imbalanced class distribution on bagging predictors by using under-sampling techniques. Despite the popularity of bagging in many real-world applications, some questions have not been clearly answered in the existing research, such as the effect of varying the levels of class distribution on different bagging predictors, e.g., whether bagging is superior to single learners when the levels of class distribution change. Most classification learning algorithms are designed to maximize the overall accuracy rate and assume that training instances are uniformly distributed; however, the overall accuracy does not represent correct prediction on the minority class, which is the class of interest to users. The overall accuracy metric is therefore ineffective for evaluating the performance of classifiers in extremely imbalanced data. This study investigates the effect of varying levels of class distribution on different bagging predictors based on the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) as a performance metric, using an under-sampling technique on 14 data-sets with imbalanced class distributions. Our experimental results indicate that Decision Table (DTable) and RepTree are the learning algorithms with the best bagging AUC performance. The AUC performances of bagging predictors are statistically superior to single learners, with the exception of Support Vector Machines (SVM) and Decision Stump (DStump).
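Two building blocks of the study described above, under-sampling to a chosen class distribution and AUC as the evaluation metric, can be sketched as follows. This is a generic illustration under assumptions (synthetic data, sum-of-ranks AUC), not the paper's experimental code.

```python
import numpy as np

def undersample(X, y, rng):
    """Randomly under-sample the majority class down to the minority
    class size, i.e. one level of class distribution in the study."""
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

def auc(scores, y):
    """AUC via the rank-sum (Mann-Whitney U) statistic, which, unlike
    overall accuracy, is insensitive to the class ratio."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = (y == 1).sum()
    n_neg = (y == 0).sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))   # toy features
y = np.zeros(100, dtype=int)
y[:10] = 1                      # 10% minority class
Xb, yb = undersample(X, y, rng) # balanced training sample
```

Repeating the `undersample` step with different majority sizes produces the varying levels of class distribution whose effect on bagging the paper measures.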
Liu, B, Xiao, Y, Yu, PS, Cao, L, Zhang, Y & Hao, Z 2014, 'Uncertain One-Class Learning and Concept Summarization Learning on Uncertain Data Streams', IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, pp. 468-484.
View/Download from: Publisher's site
View description>>
This paper presents a novel framework for uncertain one-class learning and concept summarization learning on uncertain data streams. Our proposed framework consists of two parts. First, we put forward uncertain one-class learning to cope with data of uncertainty. We first propose a local kernel-density-based method to generate a bound score for each instance, which refines the location of the corresponding instance, and then construct an uncertain one-class classifier (UOCC) by incorporating the generated bound score into a one-class SVM-based learning phase. Second, we propose a support vectors (SVs)-based clustering technique to summarize the concept of the user from the history chunks by representing the chunk data using support vectors of the uncertain one-class classifier developed on each chunk, and then extend the k-means clustering method to cluster history chunks into clusters so that we can summarize the concept from the history chunks. Our proposed framework explicitly addresses the problem of one-class learning and concept summarization learning on uncertain one-class data streams. Extensive experiments on uncertain data streams demonstrate that our proposed uncertain one-class learning method performs better than others, and our concept summarization method can summarize the evolving interests of the user from the history chunks. © 1989-2012 IEEE.
Liu, B, Xiao, Y, Yu, PS, Hao, Z & Cao, L 2014, 'An efficient orientation distance–based discriminative feature extraction method for multi-classification', Knowledge and Information Systems, vol. 39, no. 2, pp. 409-433.
View/Download from: Publisher's site
View description>>
Feature extraction is an important step before actual learning. Although many feature extraction methods have been proposed for clustering, classification and regression, very limited work has been done on multi-class classification problems. This paper proposes a novel feature extraction method, called orientation distance–based discriminative (ODD) feature extraction, particularly designed for multi-class classification problems. Our proposed method works in two steps. In the first step, we extend the Fisher Discriminant idea to determine an appropriate kernel function and map the input data with all classes into a feature space where the classes of the data are well separated. In the second step, we put forward two variants of ODD features, i.e., one-vs-all-based ODD and one-vs-one-based ODD features. We first construct hyper-plane (SVM) based on one-vs-all scheme or one-vs-one scheme in the feature space; we then extract one-vs-all-based or one-vs-one-based ODD features between a sample and each hyper-plane. These newly extracted ODD features are treated as the representative features and are thereafter used in the subsequent classification phase. Extensive experiments have been conducted to investigate the performance of one-vs-all-based and one-vs-one-based ODD features for multi-class classification. The statistical results show that the classification accuracy based on ODD features outperforms that of the state-of-the-art feature extraction methods.
Liu, H-D, Yang, M, Gao, Y & Cao, L 2014, 'Fast Local Histogram Specification', IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 11, pp. 1833-1843.
View/Download from: Publisher's site
View description>>
Local histogram specification (LHS) is a useful technique for image processing. However, LHS faces a critical computational challenge when it is applied to high-resolution high-precision images. The calculation of the values in the cumulative distribution function (CDF) and the mapped value for the central pixel in each sliding window is time consuming, with a computational complexity of O(s + L) for the state-of-the-art techniques, where s is the side length of the square window and L is the number of gray levels. In this paper, we propose a fast algorithm for LHS, called fast local histogram specification (FLHS). FLHS reduces the complexity of calculating the CDF value for the central pixel in each sliding window to O(s + √L), and the time complexity for the mapping procedure in each window to O(log L). This results in the overall time complexity of LHS being reduced from O(s + L) to O(s + √L) in each sliding window. Theoretical analysis shows that the newly developed algorithm is efficient. Experimental results on the 8-bit and high-resolution high-precision (16-bit) images demonstrate the efficiency of our proposed algorithm.
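The core operation the paper accelerates, histogram specification by CDF matching, can be sketched in its plain global form. The FLHS speedups are not reproduced here (the paper applies this per sliding window with reduced complexity); the 8-level toy image and uniform target histogram are assumptions.

```python
import numpy as np

def specify_histogram(source, target_hist, L=256):
    """Histogram specification by CDF matching (global version;
    LHS repeats this per sliding window for the central pixel)."""
    src_hist = np.bincount(source.ravel(), minlength=L)
    src_cdf = np.cumsum(src_hist) / source.size
    tgt_cdf = np.cumsum(target_hist) / target_hist.sum()
    # map each grey level g to the smallest level whose target CDF
    # reaches the source CDF at g
    mapping = np.searchsorted(tgt_cdf, src_cdf, side='left')
    mapping = np.clip(mapping, 0, L - 1)
    return mapping[source]

# toy 8-level image pushed toward a uniform target histogram
img = np.array([[0, 0, 1, 1],
                [2, 2, 3, 3]], dtype=np.int64)
uniform = np.ones(8)
out = specify_histogram(img, uniform, L=8)
```

The four occupied grey levels are spread across the full range, as a uniform target demands; the per-window CDF and mapping steps here are exactly the O(s + L) costs that FLHS reduces.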
Liu, Q, Li, Z & Li, J 2014, 'Use B-factor related features for accurate classification between protein binding interfaces and crystal packing contacts', BMC Bioinformatics, vol. 15, no. S16, pp. S3-S3.
View/Download from: Publisher's site
View description>>
© 2014 Liu et al.; licensee BioMed Central Ltd. Background: Distinction between true protein interactions and crystal packing contacts is important for structural bioinformatics studies to respond to the need of accurate classification of the rapidly increasing protein structures. There are many unannotated crystal contacts and there also exist false annotations in this rapidly expanding volume of data. Previous tools have been proposed to address this problem. However, challenging issues still remain, such as low performance when the training and test data contain mixed interfaces having diverse sizes of contact areas. Methods and results: B factor is a measure to quantify the vibrational motion of an atom, a more relevant feature than interface size to characterize protein binding. We propose to use three features related to B factor for the classification between biological interfaces and crystal packing contacts. The first feature is the sum of the normalized B factors of the interfacial atoms in the contact area, the second is the average of the interfacial B factor per residue in the chain, and the third is the average number of interfacial atoms with a negative normalized B factor per residue in the chain. We investigate the distribution properties of these basic features and a compound feature on four datasets of biological binding and crystal packing, and on a protein binding-only dataset with known binding affinity. We also compare the cross-dataset classification performance of these features with existing methods and with the widely-used and most effective feature, interface area. The results demonstrate that our features outperform the interface area approach and the existing prediction methods remarkably for many tests on all of these datasets. Conclusions: The proposed B factor related features are more effective than interface area to distinguish crystal packing from biological binding interfaces. Our computational methods have a potent...
Liu, X, Wang, L, Zhang, J, Yin, J & Liu, H 2014, 'Global and local structure preservation for feature selection', IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 6, pp. 1083-1095.
View/Download from: Publisher's site
View description>>
The recent literature indicates that preserving global pairwise sample similarity is of great importance for feature selection and that many existing selection criteria essentially work in this way. In this paper, we argue that besides global pairwise sample similarity, the local geometric structure of data is also critical and that these two factors play different roles in different learning scenarios. In order to show this, we propose a global and local structure preservation framework for feature selection (GLSPFS) which integrates both global pairwise sample similarity and local geometric data structure to conduct feature selection. To demonstrate the generality of our framework, we employ methods that are well known in the literature to model the local geometric data structure and develop three specific GLSPFS-based feature selection algorithms. Also, we develop an efficient optimization algorithm with proven global convergence to solve the resulting feature selection problem. A comprehensive experimental study is then conducted in order to compare our feature selection algorithms with many state-of-the-art ones in supervised, unsupervised, and semisupervised learning scenarios. The result indicates that: 1) our framework consistently achieves statistically significant improvement in selection performance when compared with the currently used algorithms; 2) in supervised and semisupervised learning scenarios, preserving global pairwise similarity is more important than preserving local geometric data structure; 3) in the unsupervised scenario, preserving local geometric data structure becomes clearly more important; and 4) the best feature selection performance is always obtained when the two factors are appropriately integrated. In summary, this paper not only validates the advantages of the proposed GLSPFS framework but also gains more insight into the information to be preserved in different feature selection tasks. © 2012 IEEE.
Liu, Z, Li, J, Yang, L, Chen, Q, Chu, Y & Dai, N 2014, 'Efficient near-infrared quantum cutting in Ce3+–Yb3+ codoped glass for solar photovoltaic', Solar Energy Materials and Solar Cells, vol. 122, pp. 46-50.
View/Download from: Publisher's site
Llopis-Albert, C, Palacios-Marqués, D & Merigó, JM 2014, 'A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty', Journal of Hydrology, vol. 511, pp. 10-16.
View/Download from: Publisher's site
Lu, S, Mei, T, Wang, J, Zhang, J, Wang, Z & Li, S 2014, 'Browse-to-Search', ACM Transactions on Information Systems, vol. 32, no. 4, pp. 1-27.
View/Download from: Publisher's site
View description>>
With the development of image search technology, users are no longer satisfied with searching for images using just metadata and textual descriptions. Instead, more search demands are focused on retrieving images based on similarities in their contents (textures, colors, shapes, etc.). Nevertheless, one image may deliver rich or complex content and multiple interests. Sometimes users do not sufficiently define or describe their seeking demands for images even when general search interests appear, owing to a lack of specific knowledge to express their intents. A new form of information seeking activity, referred to as exploratory search, is emerging in the research community, which generally combines browsing and searching content together to help users gain additional knowledge and form accurate queries, thereby assisting the users with their seeking and investigation activities. However, there have been few attempts at addressing integrated exploratory search solutions when image browsing is incorporated into the exploring loop. In this work, we investigate the challenges of understanding users' search interests from the images being browsed and infer their actual search intentions. We develop a novel system to explore an effective and efficient way for allowing users to seamlessly switch between browse and search processes, and naturally complete visual-based exploratory search tasks. The system, called Browse-to-Search, enables users to specify their visual search interests by circling any visual objects in the webpages being browsed, and then the system automatically forms the visual entities to represent users' underlying intent. One visual entity is not limited by the original image content, but also encapsulated by the textual-based browsing context and the associated heterogeneous attributes. We use large-scale image search technology to find the associated textual attributes ...
Maity, A, Williams, PL, Ryan, L, Missmer, SA, Coull, BA & Hauser, R 2014, 'Analysis of in vitro fertilization data with multiple outcomes using discrete time-to-event analysis', Statistics in Medicine, vol. 33, no. 10, pp. 1738-1749.
View/Download from: Publisher's site
Marin, L, Valls, A, Isern, D, Moreno, A & Merigó, JM 2014, 'Induced Unbalanced Linguistic Ordered Weighted Average and Its Application in Multiperson Decision Making', The Scientific World Journal, vol. 2014, pp. 1-19.
View/Download from: Publisher's site
View description>>
Linguistic variables are very useful to evaluate alternatives in decision making problems because they provide a vocabulary in natural language rather than numbers. Some aggregation operators for linguistic variables force the use of a symmetric and uniformly distributed set of terms. The need to relax these conditions has recently been posited. This paper presents the induced unbalanced linguistic ordered weighted average (IULOWA) operator. This operator can deal with a set of unbalanced linguistic terms that are represented using fuzzy sets. We propose a new order-inducing criterion based on the specificity and fuzziness of the linguistic terms. Different relevancies are given to the fuzzy values according to their uncertainty degree. To illustrate the behaviour of the precision-based IULOWA operator, we present an environmental assessment case study in which a multiperson multicriteria decision making model is applied.
Merigó, JM 2014, 'Decision-making under risk and uncertainty and its application in strategic management', Journal of Business Economics and Management, vol. 16, no. 1, pp. 93-116.
View/Download from: Publisher's site
View description>>
We introduce a new decision-making model that unifies risk and uncertain environments in the same formulation. For doing so, we present the induced probabilistic ordered weighted averaging (IPOWA) operator. It is an aggregation operator that unifies the probability with the OWA operator in the same formulation and considering the degree of importance of each concept in the aggregation. Moreover, it also uses induced aggregation operators that provide a more general representation of the attitudinal character of the decision-maker. We study its applicability and we see that it is very broad because all the previous studies that use the probability or the OWA operator can be revised and extended with this new approach. We briefly analyze some basic applications in statistics such as the implementation of this approach with the variance, the covariance, the Pearson coefficient and in a simple linear regression model. We focus on a multi-person decision-making problem in strategic management. Thus, we are able to construct a new aggregation operator that we call the multi-person IPOWA operator. Its main advantage is that it can deal with the opinion of several persons in the analysis so we can represent the information in a more complete way.
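The IPOWA aggregation described in the abstract above can be sketched numerically: the arguments are reordered by the order-inducing variables, and each reordered value is weighted by a mixture v_j = β·w_j + (1−β)·p̂_j of OWA weights and probabilities. The sketch below is an illustrative reading of that definition; the toy values, probabilities, weights, and β are assumptions.

```python
import numpy as np

def ipowa(values, probs, owa_weights, inducing, beta=0.5):
    """Induced probabilistic OWA sketch: reorder arguments by the
    order-inducing variables, then aggregate with weights that mix
    OWA weights and probabilities via beta."""
    order = np.argsort(inducing)[::-1]      # largest inducing value first
    b = values[order]                       # induced-ordered arguments
    p_hat = probs[order]                    # probabilities follow their arguments
    v = beta * np.asarray(owa_weights) + (1 - beta) * p_hat
    return float(v @ b)

a = np.array([60., 40., 80.])   # payoffs of one alternative
p = np.array([0.3, 0.3, 0.4])   # probabilities (risk part)
w = np.array([0.2, 0.3, 0.5])   # OWA weights (uncertainty attitude)
u = np.array([3., 1., 2.])      # order-inducing variables
result = ipowa(a, p, w, u, beta=0.4)
```

Setting β = 1 recovers an induced OWA and β = 0 recovers the probabilistic expectation, which is the sense in which the operator unifies risk and uncertainty in one formulation.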
Merigó, JM & Gil-Lafuente, AM 2014, 'Computational Intelligence in Business Administration', Computer Science and Information Systems, vol. 11, no. 2.
Merigó, JM & Peris-Ortiz, M 2014, 'Entrepreneurship and Decision-Making in Latin America', Innovar, vol. 24, no. 1Spe, pp. 101-111.
View/Download from: Publisher's site
View description>>
The principal purpose of this paper is to analyze different methods for decision making, with a focus on entrepreneurship in Latin America. Decision-making methods may be informed by aggregation operators that are based on the use of probabilities, weighted averages (WAs) and generalized aggregation operators. The paper presents a new generalized probabilistic weighted averaging (GPWA) operator that unifies WAs and probability in the same formulation, considering the degree of importance of each concept used in the analysis. The fundamental advantage of this approach is that it includes a wide range of particular cases including the probabilistic weighted averaging (PWA) operator, the probabilistic weighted geometric averaging (PWGA) operator and the probabilistic weighted quadratic averaging (PWQA) operator. Quasi-arithmetic means are used to obtain the Quasi-PWA operator and to generalize the approach, which is then applied to a set of hypothetical entrepreneurial investment decisions in a politically unified Latin American region.
Merigó, JM, Casanovas, M & Liu, P 2014, 'Decision making with fuzzy induced heavy ordered weighted averaging operators', International Journal of Fuzzy Systems, vol. 16, no. 3, pp. 277-289.
View description>>
This paper presents the fuzzy induced heavy ordered weighted averaging (FIHOWA) operator. It is an aggregation operator that uses the main characteristics of three well known aggregation operators: the heavy OWA, the induced OWA and the fuzzy OWA operator. Therefore, this operator provides a parameterized family of aggregation operators that includes the OWA operator and the total operator as special cases. It uses order inducing variables in the reordering of its arguments and it deals with uncertain information represented in the form of fuzzy numbers. Some of the main properties of this operator are studied including a wide range of families of FIHOWA operators such as the fuzzy heavy weighted average and the fuzzy heavy average. An illustrative example in investment selection is also presented.
Merigó, JM, Casanovas, M & Palacios-Marqués, D 2014, 'Linguistic group decision making with induced aggregation operators and probabilistic information', Applied Soft Computing, vol. 24, pp. 669-678.
View/Download from: Publisher's site
Merigó, JM, Casanovas, M & Yang, J-B 2014, 'Group decision making with expertons and uncertain generalized probabilistic weighted aggregation operators', European Journal of Operational Research, vol. 235, no. 1, pp. 215-224.
View/Download from: Publisher's site
Merigó, JM, Casanovas, M & Zeng, S 2014, 'Distance measures with heavy aggregation operators', Applied Mathematical Modelling, vol. 38, no. 13, pp. 3142-3153.
View/Download from: Publisher's site
Merigó, JM, Engemann, KJ & Palacios-Marqués, D 2014, 'Decision making with Dempster-Shafer belief structure and the OWAWA operator', Technological and Economic Development of Economy, vol. 19, no. Supplement 1, pp. S100-S118.
View/Download from: Publisher's site
View description>>
A new decision making model that uses the weighted average and the ordered weighted averaging (OWA) operator in the Dempster-Shafer belief structure is presented. Thus, we are able to represent the decision making problem considering objective and subjective information and the attitudinal character of the decision maker. For doing so, we use the ordered weighted averaging – weighted average (OWAWA) operator. It is an aggregation operator that unifies the weighted average and the OWA in the same formulation. This approach is generalized by using quasi-arithmetic means and group decision making techniques. An application of the new approach in a group decision making problem concerning political management of a country is also developed.
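A minimal sketch of the unification described above (the parameter name `beta` and the exact weight-combination rule are assumptions for illustration): the result is a convex combination of an OWA over the descending-ordered arguments and a plain weighted average.

```python
def owawa(values, wa_weights, owa_weights, beta=0.5):
    """Ordered weighted averaging - weighted average (illustrative sketch).

    beta=1 recovers a pure OWA (attitudinal) aggregation over the
    descending-ordered arguments; beta=0 recovers the plain weighted
    average, so the result interpolates between the two.
    """
    # Sort arguments in descending order, carrying each argument's WA weight.
    ordered = sorted(zip(values, wa_weights), key=lambda t: -t[0])
    return sum((beta * v_j + (1 - beta) * w) * b
               for (b, w), v_j in zip(ordered, owa_weights))
```

For instance, with scores (70, 50, 90), importance weights (0.5, 0.2, 0.3) and attitudinal OWA weights (0.6, 0.3, 0.1), `beta=0` yields the weighted average 72 and `beta=1` the (optimistic) OWA value 80; intermediate `beta` values mix the two linearly.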
Moghaddam, Z & Piccardi, M 2014, 'Training Initialization of Hidden Markov Models in Human Action Recognition', IEEE Transactions on Automation Science and Engineering, vol. 11, no. 2, pp. 394-408.
View/Download from: Publisher's site
View description>>
Human action recognition in video is often approached by means of sequential probabilistic models as they offer a natural match to the temporal dimension of the actions. However, effective estimation of the models' parameters is critical if one wants to achieve significant recognition accuracy. Parameter estimation is typically performed over a set of training data by maximizing objective functions such as the data likelihood or the conditional likelihood. However, such functions are nonconvex in nature and subject to local maxima. This problem is major since any solution algorithm (expectation-maximization, gradient ascent, variational methods and others) requires an arbitrary initialization and can only find a corresponding local maximum. Exhaustive search is otherwise impossible since the number of local maxima is unknown. While no theoretical solutions are available for this problem, the only practicable mollification is to repeat training with different initializations until satisfactory cross-validation accuracy is attained. Such a process is overall empirical and highly time-consuming. In this paper, we propose two methods for one-off initialization of hidden Markov models achieving interesting tradeoffs between accuracy and training time. Experiments over three challenging human action video datasets (Weizmann, MuHAVi and Hollywood Human Actions) and with various feature sets measured from the frames (STIP descriptors, projection histograms, notable contour points) prove that the proposed one-off initializations are capable of achieving accuracy above the average of repeated random initializations and comparable to the best. In addition, the methods proposed are not restricted solely to human action recognition as they suit time series classification as a general problem. © 2004-2012 IEEE.
Movassaghi, S, Abolhasan, M, Lipman, J, Smith, D & Jamalipour, A 2014, 'Wireless Body Area Networks: A Survey', IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, vol. 16, no. 3, pp. 1658-1686.
View/Download from: Publisher's site
View description>>
Recent developments and technological advancements in wireless communication, MicroElectroMechanical Systems (MEMS) technology and integrated circuits have enabled low-power, intelligent, miniaturized, invasive/non-invasive micro and nano-technology sensor nodes strategically placed in or around the human body to be used in various applications, such as personal health monitoring. This exciting new area of research is called Wireless Body Area Networks (WBANs) and leverages the emerging IEEE 802.15.6 and IEEE 802.15.4j standards, specifically standardized for medical WBANs. The aim of WBANs is to simplify and improve the speed, accuracy, and reliability of communication of sensors/actuators within, on, and in the immediate proximity of a human body. The vast scope of challenges associated with WBANs has led to numerous publications. In this paper, we survey the current state of the art of WBANs based on the latest standards and publications. Open issues and challenges within each area are also explored as a source of inspiration towards future developments in WBANs. © 2014 IEEE.
Musial, K, Bródka, P, Kazienko, P & Gaworecki, J 2014, 'Extraction of Multi-layered Social Networks from Activity Data', The Scientific World Journal, vol. 2014, p. 3.
View/Download from: Publisher's site
View description>>
The data gathered in all kinds of web-based systems, which enable users to interact with each other, provides an opportunity to extract social networks that consist of people and relationships between them. The emerging structures are very complex due to the number and type of discovered connections. In web-based systems, the characteristic element of each interaction between users is that there is always an object that serves as a communication medium. This can be, e.g., an email sent from one user to another, or a post at a forum authored by one user and commented on by others. Based on these objects and the activities that users perform towards them, different kinds of relationships can be identified and extracted. An additional challenge arises from the fact that hierarchies can exist between objects, e.g. a forum consists of one or more groups of topics, and each of them contains topics that finally include posts. In this paper, we propose a new method for creation of a multi-layered social network based on data about user activities towards different types of objects between which a hierarchy exists. Due to the flattening preprocessing procedure, new layers and new relationships in the multi-layered social network can be identified and analysed.
Perera, D, Chacon, D, Thoms, JAI, Poulos, RC, Shlien, A, Beck, D, Campbell, PJ, Pimanda, JE & Wong, JWH 2014, 'OncoCis: annotation of cis-regulatory mutations in cancer', GENOME BIOLOGY, vol. 15, no. 10, pp. 1-14.
View/Download from: Publisher's site
View description>>
Whole genome sequencing has enabled the identification of thousands of somatic mutations within non-coding genomic regions of individual cancer samples. However, identification of mutations that potentially alter gene regulation remains a major challenge. Here we present OncoCis, a new method that enables identification of potential cis-regulatory mutations using cell type-specific genome and epigenome-wide datasets along with matching gene expression data. We demonstrate that the use of cell type-specific information and gene expression can significantly reduce the number of candidate cis-regulatory mutations compared with existing tools designed for the annotation of cis-regulatory SNPs.
Poon, AH, Houseman, EA, Ryan, L, Sparrow, D, Vokonas, PS & Litonjua, AA 2014, 'Variants of Asthma and Chronic Obstructive Pulmonary Disease Genes and Lung Function Decline in Aging', JOURNALS OF GERONTOLOGY SERIES A-BIOLOGICAL SCIENCES AND MEDICAL SCIENCES, vol. 69, no. 7, pp. 907-913.
View/Download from: Publisher's site
View description>>
Background. A substantial proportion of the general population has low lung function, and lung function is known to decrease as we age. Low lung function is a feature of several pulmonary disorders, such as uncontrolled asthma and chronic obstructive pulmonary disease. The objective of this study is to investigate the association of polymorphisms in asthma and chronic obstructive pulmonary disease candidate genes with rates of lung function decline in a general population sample of aging men. Methods. We analyzed data from a cohort of 1,047 Caucasian men without known lung disease, who had a mean of 25 years of lung function data, and on whom DNA was available. The cohort was randomly divided into two groups, and we tested a total of 940 single-nucleotide polymorphisms in 44 asthma and chronic obstructive pulmonary disease candidate genes in the first group (testing cohort, n = 545) for association with change in forced expiratory volume in 1 second over time. Results. One hundred nineteen single-nucleotide polymorphisms that showed nominal associations in the testing cohort were then genotyped and tested in the second group (replication cohort, n = 502). Evidence for association from the testing and replication cohorts were combined, and after adjustment for multiple testing, seven variants of three genes (DPP10, NPSR1, and ADAM33) remained significantly associated with change in forced expiratory volume in 1 second over time. Conclusions. Our findings that genetic variants of genes involved in asthma and chronic obstructive pulmonary disease are associated with lung function decline in normal aging participants suggest that similar genetic mechanisms may underlie lung function decline in both disease and normal aging processes. © The Author 2013.
Rehman, ZU, Hussain, OK & Hussain, FK 2014, 'Parallel Cloud Service Selection and Ranking Based on QoS History', INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, vol. 42, no. 5, pp. 820-852.
View/Download from: Publisher's site
Sargent-Cox, KA, Anstey, KJ & Luszcz, MA 2014, 'Longitudinal Change of Self-Perceptions of Aging and Mortality', The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, vol. 69, no. 2, pp. 168-173.
View/Download from: Publisher's site
Shouzhen, Z, Qifeng, W, Merigó, J & Tiejun, P 2014, 'Induced intuitionistic fuzzy ordered weighted averaging: Weighted average operator and its application to business decision-making', Computer Science and Information Systems, vol. 11, no. 2, pp. 839-857.
View/Download from: Publisher's site
View description>>
We present the induced intuitionistic fuzzy ordered weighted averaging-weighted average (I-IFOWAWA) operator. It is a new aggregation operator that uses the intuitionistic fuzzy weighted average (IFWA) and the induced intuitionistic fuzzy ordered weighted averaging (I-IFOWA) operator in the same formulation. We study some of its main properties and we have seen that it has a lot of particular cases such as the IFWA and the intuitionistic fuzzy ordered weighted averaging (IFOWA) operator. We also study its applicability in a decision-making problem concerning strategic selection of investments. We see that depending on the particular type of I-IFOWAWA operator used, the results may lead to different decisions.
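For readers unfamiliar with the intuitionistic fuzzy building blocks used here, a commonly cited form of the intuitionistic fuzzy weighted average (IFWA) aggregates (membership, non-membership) pairs as sketched below; this is a generic illustration, not the I-IFOWAWA construction itself:

```python
import math

def ifwa(pairs, weights):
    """Intuitionistic fuzzy weighted average (a commonly cited form).

    Each argument is a (membership, non-membership) pair with
    membership + non-membership <= 1; the weights sum to 1.
    """
    mu = 1 - math.prod((1 - m) ** w for (m, _), w in zip(pairs, weights))
    nu = math.prod(n ** w for (_, n), w in zip(pairs, weights))
    return mu, nu
```

The induced and ordered variants discussed in the paper change how arguments are reordered before aggregation, while the output remains an intuitionistic fuzzy value of this kind.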
Sun, L, Dong, H, Hussain, FK, Hussain, OK & Chang, E 2014, 'Cloud service selection: State-of-the-art and future research directions', JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, vol. 45, no. 1, pp. 134-150.
View/Download from: Publisher's site
Tafavogh, S, Catchpoole, DR & Kennedy, PJ 2014, 'Cellular quantitative analysis of neuroblastoma tumor and splitting overlapping cells', BMC BIOINFORMATICS, vol. 15, no. 1.
View/Download from: Publisher's site
View description>>
© 2014 Tafavogh et al.; licensee BioMed Central Ltd. Background: Neuroblastoma Tumor (NT) is one of the most aggressive types of infant cancer. Essential to accurate diagnosis and prognosis is cellular quantitative analysis of the tumor. Counting enormous numbers of cells under an optical microscope is error-prone. There is therefore an urgent demand from pathologists for robust and automated cell counting systems. However, the main challenge in developing these systems is their inability to distinguish between overlapping cells and single cells, and to split the overlapping cells. We address this challenge in two stages by: 1) distinguishing overlapping cells from single cells using the morphological differences between them such as area, uniformity of diameters and cell concavity; and 2) splitting overlapping cells into single cells. We propose a novel approach by using the dominant concave regions of cells as markers to identify the overlap region. We then find the initial splitting points at the critical points of the concave regions by decomposing the concave regions into their components such as arcs, chords and edges, and the distance between the components is analyzed using the developed seed growing technique. Lastly, a shortest path determination approach is developed to determine the optimum splitting route between two candidate initial splitting points. Results: We compare the cell counting results of our system with those of a pathologist as the ground-truth. We also compare the system with three state-of-the-art methods, and the results of statistical tests show a significant improvement in the performance of our system compared to state-of-the-art methods. The F-measure obtained by our system is 88.70%. To evaluate the generalizability of our algorithm, we apply it to images of follicular lymphoma, which has similar histological regions to NT. Of the algorithms tested, our algorithm obtains the highest F-measure of 92.79%. Conclusion:...
Thi, TH, Wang, L, Ye, N, Zhang, J, Maurer-Stroh, S & Cheng, L 2014, 'Recognizing flu-like symptoms from videos', BMC Bioinformatics, vol. 15, no. 1, pp. 1-10.
View/Download from: Publisher's site
View description>>
BACKGROUND: Vision-based surveillance and monitoring is a potential alternative for early detection of respiratory disease outbreaks in urban areas complementing molecular diagnostics and hospital and doctor visit-based alert systems. Visible actions representing typical flu-like symptoms include sneeze and cough that are associated with changing patterns of hand to head distances, among others. The technical difficulties lie in the high complexity and large variation of those actions as well as numerous similar background actions such as scratching head, cell phone use, eating, drinking and so on. RESULTS: In this paper, we make a first attempt at the challenging problem of recognizing flu-like symptoms from videos. Since there was no related dataset available, we created a new public health dataset for action recognition that includes two major flu-like symptom related actions (sneeze and cough) and a number of background actions. We also developed a suitable novel algorithm by introducing two types of Action Matching Kernels, where both types aim to integrate two aspects of local features, namely the space-time layout and the Bag-of-Words representations. In particular, we show that the Pyramid Match Kernel and Spatial Pyramid Matching are both special cases of our proposed kernels. Besides experimenting on standard testbed, the proposed algorithm is evaluated also on the new sneeze and cough set. Empirically, we observe that our approach achieves competitive performance compared to the state-of-the-arts, while recognition on the new public health dataset is shown to be a non-trivial task even with simple single person unobstructed view. CONCLUSIONS: Our sneeze and cough video dataset and newly developed action recognition algorithm is the first of its kind and aims to kick-start the field of action recognition of flu-like symptoms from videos. It will be challenging but necessary in future developments to consider more complex real-life scenario of detecting ...
Tu, E, Cao, L, Yang, J & Kasabov, N 2014, 'A novel graph-based k-means for nonlinear manifold clustering and representative selection', NEUROCOMPUTING, vol. 143, pp. 109-122.
View/Download from: Publisher's site
Wang, C, Cao, L, Gaussier, E, Li, J, Ou, Y & Luo, D 2014, 'Coupled Behavior Representation, Modeling, Analysis, and Reasoning', IEEE INTELLIGENT SYSTEMS, vol. 29, no. 4, pp. 66-69.
View/Download from: Publisher's site
View description>>
Behavior refers to the action, reaction, or property of an entity, human or otherwise, to situations or stimuli in its environment. The in-depth analysis of behavior has been increasingly recognized as a crucial means for understanding and disclosing interior driving forces and intrinsic cause-effects in business and social applications, including Web community analysis, counter-terrorism, fraud detection, and customer relationship management. With the deepening and widening of social/business intelligence and their networking, the concept of behavior is in great demand to be consolidated and formalized to deeply scrutinize the native behavior intention, lifecycle, and impact on complex problems and business issues. Although there's an emerging focus on deep behavior studies, such as social network analysis, periodic behavior analysis and the behavior informatics approach, previous research has mainly focused on individual behaviors without considering the interactions between them. However, with increasing network- and community-based events as well as their applications, such as group-based crime and social network interactions, coupling relationships between behaviors contribute to the intrinsic causes and impacts of eventual business and social problems. In real-world applications, group behavior interactions (that is, coupled behaviors) are widely seen in natural, social, and artificial behavior-related problems. Complex behavior and social applications often exhibit strong explicit or implicit coupling relationships both between their entities and properties. Moreover, it's also quite difficult to model, analyze, and check behaviors coupled with one another due to the complexity from data, domain, context, and impact perspectives. Due to the emerging popularity and importance of coupled behaviors, the representation, modeling, analysis, mining and learning, and determination of coupled behaviors are becoming increasingly essential yet challenging in ub...
Wang, D, Yuan, C, Sun, Y, Zhang, J & Jin, X 2014, 'A fast mode decision algorithm applied to Coarse-Grain quality Scalable Video Coding', Journal of Visual Communication and Image Representation, vol. 25, no. 7, pp. 1631-1639.
View/Download from: Publisher's site
View description>>
© 2014 Elsevier Inc. All rights reserved. A fast mode decision algorithm is proposed for a Coarse-Grain Scalable (CGS) video encoder based on the encoding characteristics of quality Scalable Video Coding (SVC). First, candidate modes and coding orders are predicted, based on inter-layer and spatial correlations. Three early termination methods are then proposed based on CGS encoding structure. Finally, all candidate modes are checked sequentially, according to their predicted order with three early termination conditions, to improve the coding speed. Experimental results have demonstrated that the proposed algorithm could reduce the encoding time by an average of 84.39%, with negligible coding efficiency losses.
Wang, H, Xu, Y & Merigó, JM 2014, 'Prioritized aggregation for non-homogeneous group decision making in water resource management', Economic Computation and Economic Cybernetics Studies and Research, vol. 48, no. 1, pp. 247-257.
View description>>
This paper deals with non-homogeneous group decision making problems in water resource management, in which there exists a prioritization of decision makers. The group decision makers are partitioned into three sets: the officials from government, the experts in water resource management, and the users of water resources. There exists a prioritization relationship over the different sets of decision makers. In order to aggregate a collective preference based on the aggregation of different individual preferences, we suggest that prioritization between decision makers can be modeled by making the weights associated with a decision maker dependent upon the satisfaction of the higher priority decision maker. Then, a so-called prioritized weighted aggregation operator based on ordered weighted averaging (OWA) is utilized to aggregate the preference values provided by different decision makers. Finally, an application in water resource management is provided to illustrate its usefulness and how the prioritized aggregation works in practice.
Welsh, P, Woodward, M, Hillis, GS, Li, Q, Marre, M, Williams, B, Poulter, N, Ryan, L, Harrap, S, Patel, A, Chalmers, J & Sattar, N 2014, 'Do Cardiac Biomarkers NT-proBNP and hsTnT Predict Microvascular Events in Patients With Type 2 Diabetes? Results From the ADVANCE Trial', DIABETES CARE, vol. 37, no. 8, pp. 2202-2210.
View/Download from: Publisher's site
Xiao, Y, Liu, B, Hao, Z & Cao, L 2014, 'A K-Farthest-Neighbor-based approach for support vector data description', Applied Intelligence, vol. 41, no. 1, pp. 196-211.
View/Download from: Publisher's site
Xiao, Y, Liu, B, Hao, Z & Cao, L 2014, 'A Similarity-Based Classification Framework for Multiple-Instance Learning', IEEE Transactions on Cybernetics, vol. 44, no. 4, pp. 500-515.
View/Download from: Publisher's site
View description>>
Multiple-instance learning (MIL) is a generalization of supervised learning that attempts to learn useful information from bags of instances. In MIL, the true labels of instances in positive bags are not available for training. This leads to a critical challenge, namely, handling the instances of which the labels are ambiguous (ambiguous instances). To deal with these ambiguous instances, we propose a novel MIL approach, called similarity-based multiple-instance learning (SMILE). Instead of eliminating a number of ambiguous instances in positive bags from training the classifier, as done in some previous MIL works, SMILE explicitly deals with the ambiguous instances by considering their similarity to the positive class and the negative class. Specifically, a subset of instances is selected from positive bags as the positive candidates and the remaining ambiguous instances are associated with two similarity weights, representing the similarity to the positive class and the negative class, respectively. The ambiguous instances, together with their similarity weights, are thereafter incorporated into the learning phase to build an extended SVM-based predictive classifier. A heuristic framework is employed to update the positive candidates and the similarity weights for refining the classification boundary. Experiments on real-world datasets show that SMILE demonstrates highly competitive classification accuracy and shows less sensitivity to labeling noise than the existing MIL methods. © 2013 IEEE.
Xu, C, Liu, Y, Sun, Q, Li, J & He, Y 2014, 'Polyline‐sourced Geodesic Voronoi Diagrams on Triangle Meshes', Computer Graphics Forum, vol. 33, no. 7, pp. 161-170.
View/Download from: Publisher's site
View description>>
This paper studies the Voronoi diagrams on 2-manifold meshes based on the geodesic metric (a.k.a. geodesic Voronoi diagrams or GVDs), which have polyline generators. We show that our general setting leads to situations more complicated than conventional 2D Euclidean Voronoi diagrams as well as point-source based GVDs, since a typical bisector contains line segments, hyperbolic segments and parabolic segments. To tackle this challenge, we introduce a new concept, called local Voronoi diagram (LVD), which is a combination of additively weighted Voronoi diagram and line-segment Voronoi diagram on a mesh triangle. We show that when restricting on a single mesh triangle, the GVD is a subset of the LVD and only two types of mesh triangles can contain GVD edges. Based on these results, we propose an efficient algorithm for constructing the GVD with polyline generators. Our algorithm runs in O(nN log N) time and takes O(nN) space on an n-face mesh with m generators, where N = max{m, n}. Computational results on real-world models demonstrate the efficiency of our algorithm.
Xu, G, Zhou, A & Agarwal, N 2014, 'Special Issue on Social Computing and its Applications', The Computer Journal, vol. 57, no. 9, pp. 1279-1280.
View/Download from: Publisher's site
Xu, J, Wu, Q, Zhang, J & Tang, Z 2014, 'Exploiting Universum data in AdaBoost using gradient descent', Image and Vision Computing, vol. 32, no. 8, pp. 550-557.
View/Download from: Publisher's site
View description>>
Recently, Universum data that does not belong to any class of the training data, has been applied for training better classifiers. In this paper, we address a novel boosting algorithm called UAdaBoost that can improve the classification performance of AdaBoost with Universum data. UAdaBoost chooses a function by minimizing the loss for labeled data and Universum data. The cost function is minimized by a greedy, stagewise, functional gradient procedure. Each training stage of UAdaBoost is fast and efficient. The standard AdaBoost weights labeled samples during training iterations while UAdaBoost gives an explicit weighting scheme for Universum samples as well. In addition, this paper describes the practical conditions for the effectiveness of Universum learning. These conditions are based on the analysis of the distribution of ensemble predictions over training samples. Experiments on handwritten digits classification and gender classification problems are presented. As exhibited by our experimental results, the proposed method can obtain superior performances over the standard AdaBoost by selecting proper Universum data. © 2014 Elsevier B.V.
Xu, J, Wu, Q, Zhang, J, Shen, F & Tang, Z 2014, 'Boosting Separability in Semisupervised Learning for Object Classification', IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 7, pp. 1197-1208.
View/Download from: Publisher's site
View description>>
Boosting algorithms, especially AdaBoost, have attracted great attention in computer vision. In the early version of boosting algorithms, the weak classifier selection and the strong classifier learning are linked together. It has been demonstrated that decoupling of these two processes can provide more flexibility for training a better classifier. In these studies, linear discriminant analysis (LDA) has been adopted to select weak classifiers independently based on class separability rather than a training error that occurs normally in AdaBoost. It is observed that LDA is successful only if a large number of labeled training samples is available. However, a large-scale labeled training set is not always available in many computer vision applications such as object classification. To tackle this problem, this paper proposes semisupervised subspace learning combined with a boosting framework for object classification, through which unlabeled data can participate in the boosting training to compensate for the lack of enough labeled data. With the proposed framework, this paper develops three various approaches that utilize unlabeled data in different ways. According to the experiments on several public image data sets, the proposed methods achieve superior performance over AdaBoost and existing semisupervised algorithms. © 1991-2012 IEEE.
Xu, Y, Wang, H & Merigó, JM 2014, 'Intuitionistic fuzzy Einstein Choquet integral operators for multiple attribute decision making', Technological and Economic Development of Economy, vol. 20, no. 2, pp. 227-253.
View/Download from: Publisher's site
View description>>
In this paper, we propose some new aggregation operators which are based on the Choquet integral and Einstein operations. The operators not only consider the importance of the elements or their ordered positions, but also consider the interactions phenomena among the decision making criteria or their ordered positions. It is shown that the proposed operators generalize several intuitionistic fuzzy Einstein aggregation operators. Moreover, some of their properties are investigated. We also study the relationship between the proposed operators and the existing intuitionistic fuzzy Choquet aggregation operators. Furthermore, an approach based on intuitionistic fuzzy Einstein Choquet integral operators is presented for multiple attribute decision-making problem. Finally, a practical decision making problem involving the water resource management is given to illustrate the multiple attribute decision making process.
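The Einstein operations referred to here are the standard Einstein t-conorm and t-norm on [0, 1]; a minimal sketch (the operators in the paper compose these with the Choquet integral, which is not shown):

```python
def einstein_sum(a, b):
    """Einstein t-conorm on [0, 1]: (a + b) / (1 + a*b)."""
    return (a + b) / (1 + a * b)

def einstein_product(a, b):
    """Einstein t-norm on [0, 1]: a*b / (1 + (1 - a)*(1 - b))."""
    return (a * b) / (1 + (1 - a) * (1 - b))
```

For example, `einstein_sum(0.5, 0.5)` gives 0.8 rather than the algebraic-sum value 0.75, which is what makes Einstein-based aggregation a distinct family from the usual algebraic operations.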
Xu, Z, Zhang, Y & Cao, L 2014, 'Social Image Analysis From a Non-IID Perspective', IEEE Transactions on Multimedia, vol. 16, no. 7, pp. 1986-1998.
View/Download from: Publisher's site
View description>>
An image in social media, termed a social image, exhibits characteristics different from images widely discussed in image processing. They can be described by both content and social related attributes, called social image attributes, including visual contents, users, tags, and timestamps. There are strong coupling relationships between social image attributes, which make social images not independent and identically distributed (non-IID). By analyzing the relationships among these attributes, we can better understand the semantic activities conducted on such non-IID social images, hence enabling new applications including content organization, recommendation, and social activity understanding. In this article, we present a novel algorithm to analyze the coupling relationships between social images, which involves not only intra-coupled similarity within a social image attribute, but also inter-coupled similarity between attributes, in analyzing the non-IIDness of the similarity between social images. In particular, we propose a multi-entry version of the coupled similarity metric to deal with attributes (i.e., tags) which have a many-to-one relationship with respect to images. Experimental results on a Flickr group dataset show that the proposed algorithm captures coupling relationships and therefore achieves promising results in various applications, including image clustering and tagging.
Yang, W, Gao, Y, Cao, L, Yang, M & Shi, Y 2014, 'mPadal: a joint local-and-global multi-view feature selection method for activity recognition', Applied Intelligence, vol. 41, no. 3, pp. 776-790.
View/Download from: Publisher's site
Yin, H, Cui, B, Sun, Y, Hu, Z & Chen, L 2014, 'LCARS', ACM Transactions on Information Systems, vol. 32, no. 3, pp. 1-37.
View/Download from: Publisher's site
View description>>
Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge to the traditional collaborative filtering-based recommender systems. The problem becomes even more challenging when people travel to a new city where they have no activity information. In this article, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants and shopping malls) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item cooccurrence patterns and exploiting item contents. The online recommendation part takes a querying user along with a querying city as input, and automatically combines the learned interest of the querying user and the local preference of the querying city to produce the top- k recommendations. To speed up the online process, a scalable query processing technique is developed by extending both the Threshold Algorithm (TA) and TA-approximation algorithm. We evaluate the performance of our recommender system on two real datasets, that is, DoubanEvent and Foursquare, and one large-scale synthetic dataset. The results show the superiority of LCARS in recommending spatial it...
Yue, XD, Miao, DQ, Cao, LB, Wu, Q & Chen, YF 2014, 'An efficient color quantization based on generic roughness measure', Pattern Recognition, vol. 47, no. 4, pp. 1777-1789.
View/Download from: Publisher's site
View description>>
Color quantization is a process to compress image color space while minimizing visual distortion. Quantization based on preclustering has low computational complexity but cannot guarantee quantization precision. Quantization based on postclustering can produce high quality quantization results; however, it has to traverse image pixels iteratively and suffers a heavy computational burden. Revised versions have improved the precision but have not reduced the computational complexity. In color quantization, balancing quantization quality against computational complexity is a persistent challenge. In this paper, a two-stage quantization framework is proposed to achieve this balance. In the first stage, the high-resolution color space is initially compressed to a condensed color space by thresholding roughness indices. Instead of linear compression, we propose a generic roughness measure to generate a delicate segmentation of image color. In this way, it causes less distortion to the image. In the second stage, the initially compressed colors are further clustered to a palette using Weighted Rough K-means to obtain the final quantization results. Our objective is to design a postclustering quantization strategy at the color space level rather than the pixel level. By applying quantization in the precisely compressed color space, the computational cost is greatly reduced while the quantization quality is maintained. Substantial experimental results validate the high efficiency of the proposed quantization method, which produces high quality color quantization while possessing low computational complexity. © 2013 Elsevier Ltd.
Wu, Y, Ma, B, Yang, M, Zhang, J & Jia, Y 2014, 'Metric Learning Based Structural Appearance Model for Robust Visual Tracking', IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 5, pp. 865-877.
View/Download from: Publisher's site
Zhou, T, Lu, Y, Lv, F, Di, H, Zhao, Q & Zhang, J 2014, 'Abrupt Motion Tracking via Nearest Neighbor Field Driven Stochastic Sampling', Neurocomputing, vol. 165, pp. 350-360.
View/Download from: Publisher's site
View description>>
Stochastic sampling based trackers have shown good performance for abrupt motion tracking, so they have gained popularity in recent years. However, conventional methods tend to use a two-stage sampling paradigm, in which the search space needs to be uniformly explored with an inefficient preliminary sampling phase. In this paper, we propose a novel sampling-based method in the Bayesian filtering framework to address the problem. Within the framework, nearest neighbor field estimation is utilized to compute the importance proposal probabilities, which guide the Markov chain search towards promising regions and thus enhance the sampling efficiency; given the motion priors, a smoothing stochastic sampling Monte Carlo algorithm is proposed to approximate the posterior distribution through a smoothing weight-updating scheme. Moreover, to track the abrupt and the smooth motions simultaneously, we develop an abrupt-motion detection scheme which can discover the presence of abrupt motions during online tracking. Extensive experiments on challenging image sequences demonstrate the effectiveness and the robustness of our algorithm in handling the abrupt motions.
Zhou, Y, Tang, M, Pan, W, Li, J, Wang, W, Shao, J, Wu, L, Li, J, Yang, Q & Yan, B 2014, 'Bird Flu Outbreak Prediction via Satellite Tracking', IEEE Intelligent Systems, vol. 29, no. 4, pp. 10-17.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Advanced satellite tracking technologies have collected huge amounts of wild bird migration data. Biologists use these data to understand dynamic migration patterns, study correlations between habitats, and predict global spreading trends of avian influenza. The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs. H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN). The learning algorithm HELEN-p then predicts potential H5N1 outbreaks at habitats. This prediction method is more accurate than traditional methods used on a migration dataset obtained through a real satellite bird-tracking system. Empirical analysis shows that H5N1 spreads in a manner of high-weight closed cliques and frequent cliques.
Zhu, L, Cao, L, Yang, J & Lei, J 2014, 'Evolving soft subspace clustering', Applied Soft Computing, vol. 14, part B, pp. 210-228.
View/Download from: Publisher's site
Zliobaite, I & Gabrys, B 2014, 'Adaptive Preprocessing for Streaming Data', IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, pp. 309-321.
View/Download from: Publisher's site
Al-Jubouri, B & Gabrys, B 2014, 'Multicriteria approaches for predictive model generation: A comparative experimental study', 2014 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM), IEEE, Orlando, FL, pp. 64-71.
View/Download from: Publisher's site
Azadeh, A, Kokabi, R, Saberi, M, Hussain, FK & Hussain, OK 2014, 'Trust Prediction Using Z-numbers and Artificial Neural Networks', 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Beijing, China, pp. 522-528.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Trust modeling of both the interacting parties in a virtual world is a critical element of business intelligence. A key aspect of trust modeling is the ability to accurately predict the future trust value of an interacting party. In this paper, we propose an intelligent method for predicting the future trust value of a trusted entity. We propose the use of Z-numbers to represent both the trust value and its corresponding reliability. Subsequently, we apply an Artificial Neural Network (ANN) to predict future trust values. We generate a large number of synthetic time series, with a view to modeling the real-world trust values of a trusted entity. We validate the working of our methodology using the generated time series.
Azadeh, A, Zadeh, SA, Saberi, M, Hussain, FK & Hussain, OK 2014, 'A trust-based performance measurement modeling using DEA, T-norm and S-norm operators', 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Beijing, China, pp. 1913-1920.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In today's highly dynamic economy and society, the performance evaluation of Decision Making Units (DMUs) is of high importance. This study presents an efficient model for analyzing the outputs of performance measurement methodologies by means of trust, which provides explicit qualitative scales instead of representing pure numerical data. The efficiency rates of the current, previous and coming years, as well as the average efficiency and standard deviation, are the five inputs for this model. These efficiency rates are calculated using Data Envelopment Analysis (DEA). The approach uses time series forecasting to predict the future efficiency rate. Furthermore, the implemented Auto Regressive (AR) model includes an Auto Correlation Function (ACF) for input selection. The model utilizes T-norms and S-norms as the final modeling tools. To illustrate the applicability of the proposed model, we apply it to a data set of DMUs. Ultimately, modified trust values for these DMUs are determined using the proposed approach.
Azadeh, A, Zia, NP, Saberi, M, Hussain, FK, Hussain, OK & Chang, E 2014, 'Trust-Based Performance Measurement Using Fuzzy Operators', Proceedings of the 2014 9th IEEE Conference on Industrial Electronics and Applications (ICIEA), IEEE, Hangzhou, China, pp. 1701-1706.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Performance assessment is a critical aspect for any organization, as it provides the means to measure performance. Decision makers and top management need to gain a comprehensive view of the capabilities and performance of decision making units (DMUs) in order to make efficient decisions and beneficial improvements. In this study, a novel model is proposed to express performance assessment outputs in linguistic form using appropriate trust labels. Trust labels provide explicit qualitative scales, instead of representing pure numerical data, which are more meaningful for top managers. Fifteen scenarios are formed based on two main factors: the number of decision making units and the number of timeslots, which together form the basis of the proposed method for performance assessment. The efficiency rates of the current, previous and following years, along with the average efficiency and standard deviation, are the five inputs to this model. The approach uses time series forecasting to predict the future efficiency rate and is armed with an Auto Correlation Function (ACF) for input selection. The model utilizes fuzzy t-norms and s-norms as the final modeling tools. To show the applicability and superiority of the proposed model, it is applied to a data set generated by simulation.
Bargi, A, Da Xu, RY & Piccardi, M 2014, 'An Infinite Adaptive Online Learning Model for Segmentation and Classification of Streaming Data', 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Stockholm, Sweden, pp. 3440-3445.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In recent years, the desire and need to understand streaming data have been increasing. Along with the constant flow of data, it is critical to classify and segment the observations on-the-fly without being limited to a rigid number of classes. In other words, the system needs to be adaptive to the streaming data and capable of updating its parameters to comply with natural changes. This interesting problem, however, is poorly addressed in the literature, as many of the common studies focus on offline classification over a pre-defined class set. In this paper, we propose a novel adaptive online system based on Markov switching models with hierarchical Dirichlet process priors. This infinite adaptive online approach is capable of segmenting and classifying the streaming data over infinite classes, while meeting the memory and delay constraints of streaming contexts. The model is further enhanced by a 'predictive batching' mechanism that is able to divide the flowing data into batches of variable size, imitating the ground-truth segments. Experiments on two video datasets show the strong performance of the proposed approach in frame-level accuracy, segmentation recall and precision, while determining the correct number of classes in acceptable computational time.
Bargi, A, Da Xu, RY, Ghahramani, Z & Piccardi, M 2014, 'A non-parametric conditional factor regression model for multi-dimensional input and response', Journal of Machine Learning Research, International Conference on Artificial Intelligence and Statistics, JMLR, Reykjavik, Iceland, pp. 77-85.
View description>>
In this paper, we propose a non-parametric conditional factor regression (NCFR) model for domains with multi-dimensional input and response. NCFR enhances linear regression in two ways: a) introducing low-dimensional latent factors leading to dimensionality reduction and b) integrating the Indian Buffet Process (IBP) as a prior for the latent layer to dynamically derive an optimal number of sparse factors. Thanks to the IBP's enhancements to the latent factors, NCFR can largely avoid over-fitting even when the sample size is very small compared to the dimensionality. Experimental results on three diverse datasets comparing NCFR to a few baseline alternatives give evidence of its robust learning, remarkable predictive performance, good mixing and computational efficiency.
Beck, D, Palu, C, Shah, A, Herold, T, Olivier, J, Valk, PJM, Delwel, R, Bohlander, SK, Wong, JW & Pimanda, JE 2014, 'Integrative Analysis of LincRNA Expression and Clinical Annotations Reveals a Signature of 17 Genes with Prognostic Significance in Acute Myeloid Leukemia (AML)', Blood, 56th Annual Meeting of the American Society of Hematology, American Society of Hematology, San Francisco, CA.
Borzeshi, EZ, Dehghan, A, Piccardi, M & Shah, M 2014, 'Complex event recognition by latent temporal models of concepts', 2014 IEEE International Conference on Image Processing (ICIP), IEEE, Paris, pp. 2373-2377.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Complex event recognition is an expanding research area aiming to recognize entities of high-level semantics in videos. Typical approaches exploit the so-called 'bags' of spatiotemporal features such as STIP, ISA and DTF-HOG; yet, more recently, the notion of concept has emerged as an alternative, intermediate representation with greater descriptive power, and 'bags of concepts' have been used for recognition. In this paper we argue that concepts in an event tend to articulate over a discernible temporal structure and we exploit a temporal model using the scores of concept detectors as measurements. In addition, we propose several heuristics to improve the initialization of the model's latent states and take advantage of the time-sparsity of the concepts. Experimental results on videos from the challenging TRECVID MED 2012 dataset show that the proposed approach achieves an improvement in average precision of 8.92% over comparable bags of concepts, thus validating the use of temporal structure over concepts for complex event recognition.
Brodka, P, Magnani, M & Musial, K 2014, 'Message from SNAA 2014 program chairs', 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, p. xxxiv.
View/Download from: Publisher's site
Bu, Z, Wu, Z, Qian, L, Cao, J & Xu, G 2014, 'A backbone extraction method with Local Search for complex weighted networks', 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, Beijing, China, pp. 85-88.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The backbone is a natural abstraction of a complex network, which can help people to understand it in a more simplified form. Backbone extraction becomes more challenging as many networks evolve to large scale and their weight distributions span several orders of magnitude. Traditional filter-based methods tend to include many outliers in the backbone. Moreover, they often suffer from computational inefficiency: the exhaustive search of all nodes or edges is often prohibitively expensive. In this work, we propose a Local Search based Backbone Extraction Heuristic (LS-BEH) to find the backbone in a complex weighted network. First, a strict filtering rule is carefully designed to determine which edges are preserved or discarded. Second, we present a local search model to examine a subset of edges in an iterative way. Experimental results on two real-life networks demonstrate the advantage of LS-BEH over the classic disparity filter method in terms of both effectiveness and efficiency.
Chotipant, S, Hussain, FK, Dong, H & Hussain, OK 2014, 'A Fuzzy VSM-Based Approach for Semantic Service Retrieval', Neural Information Processing (ICONIP 2014), Part III, Springer Verlag, Kuching, Malaysia, pp. 682-689.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2014. A vast number of business services have been published on the Web in an attempt to achieve cost reductions and satisfy user demand. Service retrieval consequently plays an important role, but unfortunately existing research focuses on crisp service retrieval techniques which are unsuitable for vague real-world information. In this paper, we propose a new fuzzy service retrieval approach which consists of two modules: service annotation and service retrieval. Related service concepts for a given query are semantically retrieved, following which services that are annotated with those concepts are retrieved. Both the retrieval degree of the retrieval module and the similarity between a service, a concept, and a query are fuzzy. Our experiment shows that the proposed approach outperforms a non-fuzzy approach on the Recall measure.
Cuzzocrea, A & Xu, G 2014, 'A novel heuristic scheme for modeling and managing time bound constraints in data-intensive grid and cloud infrastructures', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), On the Move to Meaningful Internet Systems, Springer Berlin Heidelberg, Amantea, Italy, pp. 172-191.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2014. Inspired by the emerging Cloud Computing challenge, in this paper we provide a comprehensive framework for modeling and managing time bound constraints in data-intensive Grid and Cloud infrastructures, along with its experimental assessment and analysis. We provide both conceptual and theoretical contributions of the proposed framework, along with a heuristic scheme, called RGDTExec, that solves all possible instances of the underlying problem by exploiting a suitable greedy algorithm, called RGDTExecRun. As we demonstrate throughout the paper, the framework offers several research innovations that are beneficial in a wide range of application scenarios.
Cuzzocrea, A & Xu, G 2014, 'Towards a framework for supporting web search of complex objects via multidimensional paradigms', Proceedings of the 14th International Conference on Computational Science and Its Applications (ICCSA 2014), IEEE, Portugal, pp. 217-220.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this paper we present WebClustCube, an innovative framework for supporting Web search of complex objects via multidimensional paradigms. WebClustCube focuses on the issue of empowering traditional Web search methodologies by means of novel paradigms. In particular, WebClustCube supports the building and the interactive manipulation of OLAP-enabled Web views over complex objects extracted from distributed databases. The data management and OLAP-like support of WebClustCube is provided by ClustCube, a state-of-the-art framework for coupling OLAP methodologies and clustering algorithms with the goal of analyzing and mining complex database objects. We complement our analytical contribution with a case study that clearly shows the potential of WebClustCube in the context of next-generation Web search environments.
Deng, Z, Jiang, Y, Cao, L & Wang, S 2014, 'Knowledge-leverage based TSK fuzzy system with improved knowledge transfer', 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Beijing, China, pp. 178-185.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In this study, an improved knowledge-leverage based TSK fuzzy system modeling method is proposed to overcome the weaknesses of the existing knowledge-leverage based TSK fuzzy system (TSK-FS) modeling method. In particular, two improved knowledge-leverage strategies are introduced for the parameter learning of the antecedents and consequents of the TSK-FS constructed in the current scene, by transfer learning from the reference scene. With the improved knowledge-leverage learning abilities, the proposed method shows a more adaptive modeling effect than traditional TSK fuzzy modeling methods and some related methods on synthetic and real-world datasets.
Dong, H, Hussain, FK & Bouguettaya, A 2014, 'Discovering Plain-Text-Described Services Based on Ontology Learning', Neural Information Processing (ICONIP 2014), Part III, ICONIP, Kuching, Malaysia, pp. 673-681.
View/Download from: Publisher's site
Gabrys, B 2014, 'Robust Adaptive Predictive Modeling and Data Deluge (Extended Abstract)', Man-Machine Interactions 3, 3rd International Conference on Man-Machine Interactions (ICMMI), Springer International Publishing, Brenna, Poland, pp. 39-41.
View/Download from: Publisher's site
Ghosh, S, Feng, M, Nguyen, H & Li, J 2014, 'Risk prediction for acute hypotensive patients by using gap constrained sequential contrast patterns', AMIA Annual Symposium Proceedings, AMIA, United States, pp. 1748-1757.
View description>>
The development of acute hypotension in a critical care patient causes decreased tissue perfusion, which can lead to multiple organ failures. Existing systems that employ population-level prognostic scores to stratify the risks of critical care patients based on hypotensive episodes are suboptimal in predicting impending critical conditions, or in directing an effective goal-oriented therapy. In this work, we propose a sequential pattern mining approach which targets novel and informative sequential contrast patterns for the detection of hypotension episodes. Our results demonstrate the competitiveness of the approach, in terms of both prediction performance and knowledge interpretability. Hence, sequential pattern-based computational biomarkers can help comprehend unusual episodes in critical care patients ahead of time for early warning systems. Sequential patterns can thus aid in the development of a powerful critical care knowledge discovery framework for facilitating novel patient treatment plans.
Guo, D, Zhang, J, Liu, X, Cui, Y & Zhao, C 2014, 'Multiple Kernel Learning Based Multi-view Spectral Clustering', 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Stockholm, Sweden, pp. 3774-3779.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. For a given data set, exploring its multi-view instances under a clustering framework is a practical way to boost clustering performance. This is because each view might reflect only partial information about the existing data. Furthermore, due to noise and other impact factors, exploring these instances from different views will enhance the mining of the real structure and feature information within the data set. In this paper, we propose a multiple kernel spectral clustering algorithm based on the multi-view instances of the given data set. By combining kernel matrix learning and spectral clustering optimization into one process framework, the algorithm can determine the kernel weights and cluster the multi-view data simultaneously. We compare the proposed algorithm with some recently published methods on real-world datasets to show its efficiency.
Guo, D, Zhang, J, Xu, M, He, X, Li, M & Zhao, C 2014, 'A Multiple Features Distance Preserving (MFDP) Model for Saliency Detection', 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Wollongong.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Playing a vital role, saliency has been widely applied to various image analysis tasks, such as content-aware image retargeting, image retrieval and object detection. It is generally accepted that saliency detection can benefit from the integration of multiple visual features. However, most existing works fuse multiple features at the saliency map level without considering cross-feature information, i.e. they generate a saliency map based on several maps computed from individual features. In this paper, we propose a Multiple Feature Distance Preserving (MFDP) model to seamlessly integrate multiple visual features through an alternating optimization process. Our method outperforms state-of-the-art methods on saliency detection. Saliency detected by our method is further combined with a seam carving algorithm and significantly improves performance on image retargeting.
Guo, T, Zhu, X, Pei, J & Zhang, C 2014, 'SNOC: Streaming Network Node Classification', 2014 IEEE International Conference on Data Mining (ICDM), IEEE, Shenzhen, China, pp. 150-159.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Many real-world networks are featured with dynamic changes, such as new nodes and edges, and modification of the node content. Because changes are continuously introduced to the network in a streaming fashion, we refer to such dynamic networks as streaming networks. In this paper, we propose a new classification method for streaming networks, namely streaming network node classification (SNOC). For streaming networks, the essential challenge is to properly capture the dynamic changes of the node content and node interactions to support node classification. While streaming networks are dynamically evolving, for a short temporal period, a subset of salient features are essentially tied to the network content and structures, and therefore can be used to characterize the network for classification. To achieve this goal, we propose to carry out streaming network feature selection (SNF) from the network, and use the selected features as a gauge to classify unlabeled nodes. A Laplacian based quality criterion is proposed to guide the node classification, where the Laplacian matrix is generated based on node labels and structures. Node classification is achieved by finding the class that results in the minimal gauging value with respect to the selected features. By frequently updating the features selected from the network, node classification can quickly adapt to the changes in the network for maximal performance gain. Experiments demonstrate that SNOC is able to capture changes in network structures and node content, and outperforms baseline approaches with significant performance gain.
Hanh, LTM, Binh, NT & Tung, KT 2014, 'Applying the meta-heuristic algorithms for mutation-based test data generation for Simulink models', Proceedings of the Fifth Symposium on Information and Communication Technology (SoICT '14), ACM Press, pp. 102-109.
View/Download from: Publisher's site
View description>>
Test data generation is one of the most important steps in the testing process, as it reveals faults in software. This activity is time-consuming and labor-intensive. With the development of modeling tools such as Simulink, testing can be carried out early, at the design level. It is therefore desirable to seek effective techniques for automating the testing process for Simulink models, in order to ensure the correctness of systems built from these models. Mutation testing can be used as a criterion to generate test data for Simulink models. In this paper, we evaluate the application of different meta-heuristic algorithms, such as the genetic algorithm, simulated annealing and artificial immune systems, to optimize mutation-based test data generation in terms of the number of generated mutants killed for Simulink models. We discuss the effectiveness of these approaches and also propose an improvement of the genetic algorithm. These approaches have been applied to several case studies and the obtained results are very promising.
Huang, S, Zhang, J, Liu, X & Wang, L 2014, 'A Method of Discriminative Information Preservation and In-Dimension Distance Minimization Method for Feature Selection', 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Stockholm, Sweden, pp. 1615-1620.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Preserving samples' pairwise similarity is essential for feature selection. In supervised learning, labels can be used as a direct measure to check whether two samples are similar to each other. In unsupervised learning, however, such similarity information is usually unavailable. In this paper, we propose a new feature selection method through spectral clustering based on discriminative information as an underlying data structure. The Laplacian matrix is used to obtain more partitioning information than other previously proposed structures such as the eigenspace of the original data. The high dimension of the sample data is projected into a low-dimensional space. The in-dimension distance is also considered to get a more compact clustering result. The proposed method can be solved efficiently by updating the projection matrix and its inverse normalized diagonal matrix. A comprehensive experimental study has demonstrated that the proposed method outperforms many state-of-the-art feature selection algorithms on different criteria, including the accuracy of clustering/classification and the Jaccard score.
Huang, Y, Fu, K, Yao, L, Wu, Q & Yang, J 2014, 'Saliency Detection Based on Spread Pattern and Manifold Ranking', Proceedings of the Chinese Conference on Pattern Recognition (CCPR), Springer Berlin Heidelberg, Changsha, China, pp. 283-292.
View/Download from: Publisher's site
Hussain, W, Hussain, FK & Hussain, OK 2014, 'Maintaining Trust in Cloud Computing through SLA Monitoring', Neural Information Processing (ICONIP 2014), Part III, Springer Verlag, Kuching, Malaysia, pp. 690-697.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2014. Maintaining trust in cloud computing is a significant challenge due to the dynamic nature of cloud computing and the fragility of trust. Trust can be established by conducting successful transactions and meeting all the parameters of the Service Level Agreement (SLA) drawn up between two interacting parties. Trust can be maintained by continuous monitoring of these predefined SLA parameters. There are a number of commentaries on SLA monitoring that describe different frameworks for the proactive or reactive detection of SLA violations. The aim of this research is to present an overview of the literature and make a comparative analysis of SLA monitoring in respect of trust maintenance in cloud computing.
Jiang, Z, Dai, N, Yang, L, Peng, J, Li, H, Li, J & Liu, W 2014, 'Effects of Al2O3 composition on the near-infrared emission in Bi-doped and Yb–Bi-codoped silicate glasses for broadband optical amplification', Journal of Non-Crystalline Solids, Elsevier BV, pp. 196-199.
View/Download from: Publisher's site
Krol, D, Budka, M & Musial, K 2014, 'Simulating the Information Diffusion Process in Complex Networks Using Push and Pull Strategies', 2014 European Network Intelligence Conference, 2014 European Network Intelligence Conference (ENIC), IEEE, Wroclaw, POLAND, pp. 1-8.
View/Download from: Publisher's site
Le, M, Nauck, D, Gabrys, B & Martin, T 2014, 'Sequential Clustering for Event Sequences and Its Impact on Next Process Step Prediction', INFORMATION PROCESSING AND MANAGEMENT OF UNCERTAINTY IN KNOWLEDGE-BASED SYSTEMS, PT I, 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU), SPRINGER-VERLAG BERLIN, Montpellier, FRANCE, pp. 168-178.
Le, S, Dong, H, Hussain, FK, Hussain, OK, Ma, J, Zhang, Y & IEEE 2014, 'Multicriteria Decision Making with Fuzziness and Criteria Interdependence in Cloud Service Selection', 2014 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), IEEE International Conference on Fuzzy Systems, IEEE - Institute of Electrical and Electronics Engineers, Beijing, China, pp. 1929-1936.
View/Download from: Publisher's site
Li, J, Fong, S, Zhuang, Y & Khoury, R 2014, 'Hierarchical Classification in Text Mining for Sentiment Analysis', 2014 International Conference on Soft Computing and Machine Intelligence, 2014 International Conference on Soft Computing & Machine Intelligence (ISCMI), IEEE, New Delhi, INDIA, pp. 46-51.
View/Download from: Publisher's site
Li, M, Li, J, Ou, Y, Zhang, Y, Luo, D, Bahtia, M & Cao, L 2014, 'Coupled K-nearest centroid classification for non-iid data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Transactions on Computational Collective Intelligence XV: International Conference on Practical Applications on Agents and Multi-Agent Systems, Springer Verlag, Salamanca, pp. 89-100.
View/Download from: Publisher's site
View description>>
Most traditional classification methods assume the independence and identical distribution (iid) of objects, attributes and values. However, real-world data, such as multi-agent data and behavioral data, usually contain strong couplings among values, attributes and objects, which greatly challenges existing methods and tools. This work targets the coupling similarities from these three perspectives and designs a novel classification method that applies a weighted K-Nearest Centroid to obtain the coupled similarity for non-iid data. From the value and attribute perspectives, coupled similarity serves as a metric for nominal objects, which considers not only intra-coupled similarity within an attribute but also inter-coupled similarity between attributes. From the object perspective, we propose a more effective method that measures the centroid object by connecting all related objects. Extensive experiments on UCI and student data sets reveal that the proposed method outperforms classical methods in accuracy, especially on imbalanced data.
Li, M, Li, J, Ou, Y, Zhang, Y, Luo, D, Bahtia, M & Cao, L 2014, 'Learning Heterogeneous Coupling Relationships Between Non-IID Terms', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Workshop on Agents and Data Mining Interaction, Springer Berlin Heidelberg, Saint Paul, MN, pp. 79-91.
View/Download from: Publisher's site
View description>>
With the rapid proliferation of social media and online communities, a vast amount of text data has been generated. Discovering the insightful value of this text data has become increasingly important, and a variety of text mining and processing algorithms, such as classification, clustering and similarity comparison, have been created in recent years. Most previous research uses a vector-space model for text representation and analysis. However, the vector-space model does not utilise information about term-to-term relationships. Moreover, classic classification methods also ignore the relationships between text documents. In other words, traditional text mining techniques assume that the relations between terms and between documents are independent and identically distributed (iid). In this paper, we introduce a novel term representation that incorporates coupled term-to-term relations. This coupled representation provides much richer information, enabling us to create a coupled similarity metric for measuring document similarity, and a K-Nearest centroid classifier based on coupled document similarity is applied to the classification task. Experiments verify that the proposed approach outperforms the classic vector-space based classifier, and show its potential advantages and richness for other text mining tasks. © 2014 Springer-Verlag.
Li, X, Zhang, L, Luo, P, Chen, E, Xu, G, Zong, Y & Guan, C 2014, 'Mining user tasks from print logs', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 1250-1257.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. With numerous applications emerging on the World Wide Web, much user interaction data is collected and exploited to discover user behavior or interest patterns. In this paper, we exploit a new kind of interaction data, namely print logs, where each record consists of the URLs printed by a user with a popular web printing tool. Users usually print web content based on an intention (subtask or task). Mining common print tasks from print logs thus captures users' intentions, which undoubtedly benefits many web applications, such as task-oriented recommendation and behavior targeting. However, this is not an easy job due to the difficulty of URL topic representation and task formulation. To this end, we propose a general framework, named UPT (Users Print Tasks mining framework), for mining print tasks from print logs. Specifically, we leverage Delicious (a social bookmarking web service) as an external thesaurus to expand the expression of each URL by selecting tags associated with the domain of each URL. Then, we construct a tag co-occurrence graph in which similar tags can be clustered as subtasks. If we view each subtask as an item, the print log is transformed into a transaction database, on which an efficient pattern mining algorithm is proposed to induce tasks. Finally, we evaluate the effectiveness of the proposed framework through experiments on a real print log.
Liu, C, Cao, L & Yu, PS 2014, 'A hybrid coupled k-nearest neighbor algorithm on imbalance data', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 2011-2018.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. State-of-the-art classification algorithms rarely consider the relationships between attributes in data sets and assume the attributes are independent of each other (IID). However, in real-world data, these attributes interact to a greater or lesser extent via explicit or implicit relationships. Although classifiers for class-balanced data are relatively well developed, the classification of class-imbalanced data is not straightforward, especially for mixed-type data which has both categorical and numerical features. Limited research has been conducted on class-imbalanced data. Some algorithms mainly synthesize or remove instances to make the class sizes comparable, which may change the inherent data structure or introduce noise into the source data. The distance- or similarity-based algorithms, for their part, ignore the relationships between features when computing similarity. This paper proposes a hybrid coupled k-nearest neighbor classification algorithm (HC-kNN) for mixed-type data, which discretizes numerical features so that inter-coupled similarity can be computed as for categorical features, and then combines this coupled similarity with the original similarity or distance, overcoming the shortcomings of the previous algorithms. The experimental results demonstrate that our proposed algorithm achieves higher average performance than the relevant algorithms (e.g. variants of kNN, Decision Tree, SMOTE and NaiveBayes).
Liu, C, Cao, L & Yu, PS 2014, 'Coupled fuzzy k-nearest neighbors classification of imbalanced non-IID categorical data', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 1122-1129.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Mining imbalanced data has recently received increasing attention due to its challenge and wide applications in the real world. Most existing work focuses on numerical data, either by manipulating the data structure, which essentially changes the data characteristics, or by developing new distance or similarity measures designed for data with the so-called IID assumption, namely that data is independent and identically distributed. This is not consistent with real-life data and business needs, which require fully respecting the data structure and the coupling relationships embedded in data objects, features and feature values. In this paper, we propose a novel coupled fuzzy similarity-based classification approach that captures the difference between classes via fuzzy membership and the couplings via coupled object similarity, and incorporate them into the most popular classifier, kNN, to form a coupled fuzzy kNN (i.e. CF-kNN). We test the approach on 14 categorical data sets against several kNN variants and classic classifiers including C4.5 and NaiveBayes. The experimental results show that CF-kNN outperforms the baselines, and that classifiers incorporating the proposed coupled fuzzy similarity perform better than their original versions.
Liu, L, Chen, S, Hsu, CH, Xu, G, Zhang, X, Li, L, Su, G, Liu, M, Huang, Z, Zhu, T, Jin, J, Carlson, D, Chen, W, Wang, B, An, N & Yang, Y 2014, 'Message from the PUDA 2014 Workshop Chairs', 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE 11th Intl Conf on Autonomic and Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops, 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence & Computing and 2014 IEEE 11th Intl Conf on Autonomic & Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), IEEE, p. xxxvii.
View/Download from: Publisher's site
Liu, N, Li, L, Xu, G & Yang, Z 2014, 'Identifying domain-dependent influential microblog users: A post-feature based approach', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Quebec, Canada, pp. 3122-3123.
View description>>
Users of a social network like to follow the posts published by influential users. Such posts are usually delivered quickly and thus produce a strong influence on public opinion. In this paper, we focus on the problem of identifying domain-dependent influential users (or topic experts). Some traditional approaches identify influential users based on users' post contents, which may be biased by spammers who make posts related to some topics through simple copy and paste. Others make use of user authentication information given by a service platform, or user self-descriptions (introductions or labels), to find influential users. However, what users have published is not necessarily related to what they have registered and described. In addition, if there are no comments from other users, it is less objective to assess a user's post quality. To improve the effectiveness of recognizing influential users on a microblog topic, we propose a post-feature based approach which is complementary to post-content based approaches. Our experimental results show that the post-feature based approach produces relatively higher precision than the content-based approach.
Liu, W, Sarda, A, Chen, F & Geers, G 2014, 'Forecasting changes of traffic flow caused by road incidents', 21st World Congress on Intelligent Transport Systems, ITSWC 2014: Reinventing Transportation in Our Connected World.
View description>>
This paper explores the potential of supervised machine learning techniques for forecasting changes of traffic flow caused by road incidents, based on incident features. Data fusion approaches are carried out on a high-quality SCATS dataset measuring traffic flow in a major Australian city, and on an incident log data set covering four months of road incidents. Based on incident features, a range of both prevalent and advanced machine learning algorithms are applied to these data, and their accuracies are evaluated. We then examine the effectiveness of such models in categorizing changes of traffic flow as either trivial or non-trivial in the extent of their responses to incidents. The models are promising and are able to correctly predict with more than 70% accuracy that a change of traffic flow will be major. This has significant implications for determining the optimal allocation of resources for both road traffic control and incident response units.
Liu, W, Xue, H, Gu, Y, Yang, J, Wu, Q & Jia, Z 2014, 'Shape Preserving RGB-D Depth Map Restoration', Proceedings, Part III 21st International Conference, ICONIP 2014., International Conference on Neural Information Processing, Springer International Publishing, Kuching, Malaysia, pp. 150-158.
View/Download from: Publisher's site
View description>>
RGB-D cameras have enjoyed great popularity in recent years. However, the quality of the depth maps obtained by such cameras is far from perfect. In this paper, we propose a framework for shape-preserving depth map restoration for RGB-D cameras. The quality of the depth map is improved in three respects: 1) the proposed region adaptive bilateral filter (RA-BF) adaptively smooths depth noise across the depth map, 2) by associating color information with depth information, incorrect depth values are adjusted properly, and 3) a selective joint bilateral filter (SJBF) is proposed to successfully fill in the holes caused by low-quality depth sensing. Encouraging performance is obtained in our experiments.
Liu, X, Wang, L, Zhang, J & Yin, J 2014, 'Sample-adaptive multiple kernel learning', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Publication, Québec, Canada, pp. 1975-1981.
View description>>
Existing multiple kernel learning (MKL) algorithms indiscriminately apply the same set of kernel combination weights to all samples. However, the utility of base kernels can vary across samples, and a base kernel useful for one sample may become noisy for another. In this case, rigidly applying the same set of kernel combination weights can adversely affect learning performance. To improve this situation, we propose a sample-adaptive MKL algorithm in which base kernels are allowed to be adaptively switched on/off with respect to each sample. We achieve this goal by assigning a latent binary variable to each base kernel when it is applied to a sample. The kernel combination weights and the latent variables are jointly optimized via the margin maximization principle. As demonstrated on five benchmark data sets, the proposed algorithm consistently outperforms comparable ones in the literature.
Meng, Q, Tafavogh, S, Kennedy, PJ & IEEE 2014, 'Community Detection on Heterogeneous Networks by Multiple Semantic-Path Clustering', 2014 6TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL ASPECTS OF SOCIAL NETWORKS (CASON), International Conference on Computational Aspects of Social Networks (CASoN), IEEE, Porto, PORTUGAL, pp. 7-12.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Heterogeneous networks have become a commonly used model to represent complex and abstract social phenomena. They allow objects to have many different relationships and represent relationships by semantic paths which connect object types via a sequence of relations. A major challenge in community detection on heterogeneous networks is how to organize and combine different semantic paths. In order to acquire the desired clustering, we propose a novel community detection method for heterogeneous networks based on matrix decomposition and semantic paths. The major advantage of this method is that it treats objects individually and assigns them different combinations of semantic-path weights so as to improve the clustering quality. Comparative experiments against two state-of-the-art methods, spectral clustering and path-selection clustering, confirm that the proposed method achieves better clustering results.
Meng, X, Cao, L & Shao, J 2014, 'Semantic Approximate Keyword Query Based on Keyword and Query Coupling Relationship Analysis', Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '14: 2014 ACM Conference on Information and Knowledge Management, ACM, Shanghai, China, pp. 529-538.
View/Download from: Publisher's site
View description>>
Due to imprecise query intentions, Web database users often use a limited number of keywords that are not directly related to their precise query to search for information. Semantic approximate keyword query is challenging but helpful for specifying such query intent and providing more relevant answers. By extracting the semantic relationships both between keywords and between keyword queries, this paper proposes a new keyword query approach which generates semantic approximate answers by identifying a set of keyword queries from the query history whose semantics are related to the given keyword query. To capture the semantic relationships between keywords, a semantic coupling relationship analysis model is introduced to model both the intra- and inter-keyword couplings. Building on the coupling relationships between keywords, the semantic similarity of different keyword queries is then measured by a semantic matrix. The representative queries in the query history are identified, and an a priori order of the remaining queries corresponding to each representative query is created in an off-line preprocessing step. These representative queries and associated orders are then used to expeditiously generate the top-k ranked semantically related keyword queries. We demonstrate that our coupling relationship analysis model can accurately capture the semantic relationships both between keywords and between queries. The efficiency of the top-k keyword query selection algorithm is also demonstrated.
Merigo, JM & Yang, J-B 2014, 'Bibliometric analysis in financial research', 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), IEEE, London, ENGLAND, pp. 223-230.
View/Download from: Publisher's site
Merigó, JM, Casanovas, M & Xu, Y 2014, 'Fuzzy group decision-making with generalized probabilistic OWA operators', Journal of Intelligent & Fuzzy Systems, IOS Press, pp. 783-792.
View/Download from: Publisher's site
Merigó, JM, Peris-Ortiz, M & Palacios-Marqués, D 2014, 'Entrepreneurial fuzzy group decision-making under complex environments', Journal of Intelligent & Fuzzy Systems, IOS Press, pp. 901-912.
View/Download from: Publisher's site
MERIGÓ, JM, ZHOU, L & YU, D 2014, 'DISTANCE MEASURES WITH PROBABILITIES, OWA OPERATORS AND WEIGHTED AVERAGES', Decision Making and Soft Computing, The 11th International FLINS Conference (FLINS 2014), WORLD SCIENTIFIC, Joao Pessoa, BRAZIL, pp. 324-329.
View/Download from: Publisher's site
Nowak, P, Czeczot, J, Klopot, T, Szymura, M & Gabrys, B 2014, 'Linearizing Controller for Higher-degree Nonlinear Processes with Compensation for Modeling Inaccuracies - Practical Validation and Future Developments', Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics, 11th International Conference on Informatics in Control, Automation and Robotics, SCITEPRESS - Science and and Technology Publications, pp. 691-698.
View/Download from: Publisher's site
View description>>
This work shows the results of the practical implementation of the linearizing controller for the example laboratory pneumatic process of the third relative degree. Controller design is based on the Lie algebra framework but in contrast to the previous attempts, the on-line model update method is suggested to ensure offset-free control. The paper details the proposed concept and reports the experiences from the practical implementation of the suggested controller. The superiority of the proposed approach over the conventional PI controller is demonstrated by experimental results. Based on the experiences and the validation results, the possibilities of the potential application of the data-driven soft sensors for further improvement of the control performance are discussed.
Peng, F, Wu, Q, Fan, L, Zhang, J, You, Y, Lu, J & Yang, J-Y 2014, 'Street view cross-sourced point cloud matching and registration', 2014 IEEE International Conference on Image Processing (ICIP), 2014 IEEE International Conference on Image Processing (ICIP), IEEE, Paris, France, pp. 2026-2030.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Object registration has been widely discussed with the development of various range sensing technologies. In most work, however, the reference and target point clouds are generated by the same technology, such as a Kinect range camera, a LiDAR sensor, or the Structure from Motion technique. Cases in which the reference and target point clouds are generated by different technologies are rarely discussed. Due to the significant differences across various point cloud data in terms of density, sensing noise, scale, occlusion etc., object registration between such different point clouds becomes extremely difficult. In this study, we address for the first time an even more challenging case in which the differently-sourced point clouds are acquired from a real street view. One is generated from an image sequence through the SfM process, and the other is produced directly by the LiDAR system. We propose a two-stage matching and registration algorithm to achieve object registration between these two different point clouds. The experiments are based on real building object point cloud data and demonstrate the effectiveness and efficiency of the proposed solution. The newly proposed solution can be further developed to contribute to several related applications, such as Location Based Services.
Qi Gu, Yan Zhang, Jian Cao, Guandong Xu & Cuzzocrea, A 2014, 'A confidence-based entity resolution approach with incomplete information', 2014 International Conference on Data Science and Advanced Analytics (DSAA), 2014 International Conference on Data Science and Advanced Analytics (DSAA), IEEE, China, pp. 97-103.
View/Download from: Publisher's site
View description>>
Entity resolution identifies entities from different data sources that refer to the same real-world entity, and is an important prerequisite for integrating data from multiple sources. Entity resolution mainly relies on similarity measures over data records. Unfortunately, the quality of data sources is often poor in practice. Web data sources in particular often provide only incomplete information, which makes it difficult to directly apply similarity measures to identify the same entities. To address this problem, the concept of confidence is introduced to measure the trustworthiness of the similarity calculation. An adaptive rule-based approach is used to calculate the similarity between records, and its confidence is also derived. The similarity and confidence are then propagated over the entity relational graph until a fixed point is reached. Finally, any pair of records can be determined as matched or unmatched based on a threshold. We performed a series of experiments on real data sets, and the results show that our approach performs better than others.
Qin, L, Yu, JX, Chang, L, Cheng, H, Zhang, C & Lin, X 2014, 'Scalable big graph processing in MapReduce.', SIGMOD Conference, ACM Special Interest Group on Management of Data Conference, ACM, Utah, USA, pp. 827-838.
View/Download from: Publisher's site
View description>>
MapReduce has become one of the most popular parallel computing paradigms in the cloud, due to the high scalability, reliability, and fault-tolerance it achieves for a large variety of applications in big data processing. In the literature, there are the MapReduce Class MRC and the Minimal MapReduce Class MMC to define the memory consumption, communication cost, CPU cost, and number of MapReduce rounds for an algorithm to execute in MapReduce. However, neither is designed for big graph processing in MapReduce, since the constraints in MMC can hardly be achieved simultaneously on graphs and the conditions in MRC may induce scalability problems when processing big graph data. In this paper, we study scalable big graph processing in MapReduce. We introduce a Scalable Graph processing Class SGC by relaxing some constraints in MMC to make it suitable for scalable graph processing. We define two graph join operators in SGC, namely EN join and NE join, with which a wide range of graph algorithms can be designed, including PageRank, breadth first search, graph keyword search, Connected Component (CC) computation, and Minimum Spanning Forest (MSF) computation. Remarkably, to the best of our knowledge, for the two fundamental graph problems of CC and MSF computation, this is the first work to achieve O(log(n)) MapReduce rounds with O(n + m) total communication cost in each round and constant memory consumption on each machine, where n and m are the number of nodes and edges in the graph respectively. We conducted extensive performance studies using two web-scale graphs, Twitter-2010 and Friendster, with different graph characteristics. The experimental results demonstrate that our algorithms achieve high scalability in big graph processing. © 2014 ACM.
Rahman, ZU, Hussain, OK & Hussain, FK 2014, 'Time Series QoS Forecasting for Management of Cloud Services', 2014 Ninth International Conference on Broadband and Wireless Computing, Communication and Applications, 2014 Ninth International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), IEEE, Guangdong.
View/Download from: Publisher's site
View description>>
Managing cloud services is important for cloud service users in order to ensure that they achieve their required outcomes. There is wide interest in this problem in the literature, but most of that work has approached it from the service provider's (platform) viewpoint. While having techniques to monitor a service from this viewpoint is important, it is also important to monitor the QoS of a cloud service as received at the user side. This is because the service user may be unable to obtain the promised service with the required characteristics due to factors beyond the platform side which affect the QoS received at run time. One of the main requirements for user-side service monitoring is the accurate forecasting of the QoS of cloud services over a period of time in the future, based on the past observed pattern or history. In this paper we investigate the use of exponential smoothing and autoregressive moving average models for forecasting the QoS of cloud services. We propose a forecasting mechanism which uses past QoS values collected through QoS monitoring to forecast the future QoS of cloud services.
Ramezani, F, Lu, J & Hussain, F 2014, 'Task Based System Load Balancing Approach in Cloud Environments', Knowledge Engineering and Management, International Conference on Intelligent Systems and Knowledge Engineering, Springer Berlin Heidelberg, Beijing, China, pp. 31-42.
View/Download from: Publisher's site
View description>>
Live virtual machine (VM) migration is a technique for transferring an active VM from one physical host to another without disrupting the VM. This technique has been proposed to reduce the downtime of migrated overloaded VMs. As VM migration takes much more time and cost than task migration, this study develops a novel approach to address the problem of overloaded VMs and achieve system load balancing by assigning the arriving task to another similar VM in a cloud environment. In addition, we propose a multi-objective optimization model to migrate these tasks to a new VM host by applying a multi-objective genetic algorithm (MOGA). In the proposed approach, there is no need to pause a VM during migration. Because, in contrast to task migration, live VM migration takes longer to complete and needs more idle capacity on the host physical machine (PM), the proposed approach significantly reduces time, downtime, memory, and cost consumption.
Salvador, MM, Gabrys, B & Žliobaitė, I 2014, 'Online Detection of Shutdown Periods in Chemical Plants: A Case Study', Procedia Computer Science, 18th Annual International Conference on Knowledge-Based and Intelligent Information and Engineering Systems (KES), Elsevier BV, Pomeranian Sci & Technol, Gdynia, POLAND, pp. 580-588.
View/Download from: Publisher's site
Subrahmanian, VS, Chen, SH, Zaiane, O, Martin, H, Jo, GS, Cao, J, Liu, H, Xu, G & Nejdl, W 2014, 'Welcome from BESC 2014 chairs', 2014 International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC2014), 2014 International Conference on Behavior, Economic and Social Computing (BESC), IEEE.
View/Download from: Publisher's site
Sun, L, Dong, H, Hussain, FK, Hussain, OK, Ma, J & Zhang, Y 2014, 'A Hybrid Fuzzy Framework for Cloud Service Selection', 2014 IEEE International Conference on Web Services, 2014 IEEE International Conference on Web Services (ICWS), IEEE, Alaska, USA, pp. 313-320.
View/Download from: Publisher's site
Tafavogh, S, Meng, Q, Catchpoole, DR & Kennedy, PJ 2014, 'Automated Quantitative and Qualitative Analysis of Whole Neuroblastoma Tumour Images for Prognosis', Biomedical Engineering / 817: Robotics Applications, Biomedical Engineering / Robotics Applications, ACTAPRESS, Zurich, Switzerland, pp. 244-251.
View/Download from: Publisher's site
View description>>
Manual quantitative and qualitative microscopic analysis of cancerous tumours is subject to inter- and intra-observer variability in pathology. Neuroblastoma is an infant cancer with one of the lowest survival rates. Choosing a proper therapeutic regime for the tumour is highly dependent on determining the tumour's aggressiveness level, which requires extensive microscopic analysis. There is an urgent demand from pathologists to reduce the role of microscopic analysis in the process of prognosis and to use an automated system to determine tumour aggressiveness. In this paper, we develop an automated system to address this demand. We propose a novel four-stage hybrid algorithm. First, we develop novel whole slide image partitioning and zooming techniques. Second, we introduce an image enhancement technique to reduce the intensity variation within tissue images. Third, we deploy a thresholding technique for segmenting the regions of interest. Fourth, we develop a prognosis decision making engine based on a robust clinical prognosis scheme to classify the aggressiveness level using the segmented regions of interest. The performance of the system is evaluated by a pathologist. The system is compared against a state-of-the-art system, and the results indicate the superiority of our system in grading the tumour, with an average F-measure of 86.77%.
Taib, R, Yee, D, Chen, F & Liu, W 2014, 'Improved incident management through anomaly detection in historical records', 21st World Congress on Intelligent Transport Systems, ITSWC 2014: Reinventing Transportation in Our Connected World.
View description>>
Real-time decision support can significantly help Transport Management Centre (TMC) operators respond to incidents more efficiently and reduce congestion. However, the complexity of road networks, changing demand patterns, and the massive volumes of data recorded to date have prevented a deep analysis of the situation. The NSW TMC and the research organisation NICTA in Sydney have collaborated to identify patterns in historical incident response records, leading to the identification of both anomalies and common patterns among past incidents using advanced machine learning techniques. These techniques were used to process 15,465 incident logs, comparing and clustering responses along 15 key characteristics. Abnormally effective or ineffective responses were unveiled, as well as seven generic incident profiles, allowing the TMC to improve its procedures, response plans, and resource allocations. These mechanisms also helped boost early incident outcome prediction, promising benefits for TMCs around the world.
Wan, Y, Wu, Q & He, X 2014, 'Dense feature correspondence for video-based endoscope three-dimensional motion tracking', IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), 2014 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), IEEE, Valencia, pp. 49-52.
View/Download from: Publisher's site
View description>>
This paper presents an improved video-based endoscope tracking approach on the basis of dense feature correspondence. Current video-based methods often fail to track the endoscope motion due to low-quality endoscopic video images. To address such failure, we use image texture information to boost the tracking performance. A local image descriptor, DAISY, is introduced to efficiently detect dense texture or feature information from endoscopic images. After establishing dense feature correspondences, we compute relative motion parameters between the previous and current endoscopic images in terms of epipolar geometric analysis. Initializing with the relative motion information, we perform 2-D/3-D or video-volume registration and determine the current endoscope pose with six-degrees-of-freedom (6DoF) position and orientation parameters. We evaluate our method on clinical datasets. Experimental results demonstrate that our proposed method outperforms state-of-the-art approaches, with the tracking error significantly reduced from 7.77 mm to 4.78 mm. © 2014 IEEE.
Wang, D, Yuan, C, Sun, Y, Zhang, J & Zhou, H 2014, 'Fast Mode and Depth Decision Algorithm for Intra Prediction of Quality SHVC', Intelligent Computing Theory, International Conference on Intelligent Computing, Springer International Publishing, Taiyuan, China, pp. 693-699.
View/Download from: Publisher's site
View description>>
Scalable High-Efficiency Video Coding (SHVC) is an extension of High Efficiency Video Coding (HEVC). Since the coding procedure for HEVC is already very complex, and that for SHVC even more so, improving its coding speed is very important. In this paper, we propose a fast mode and depth decision algorithm for Intra prediction of Quality SHVC. Initially, only partial modes are checked to determine the local minimum points (LMPs) based on the relationships between the modes and their corresponding Hadamard Costs (HC); then only partial depths are checked, skipping depths with low possibilities as indicated by their inter-layer correlations and textural features. The experimental results show that the proposed algorithm improves coding speed by 61.31% on average with negligible coding efficiency losses.
Wang, Y, Di, H, Wang, B, Liang, W, Zhang, J & Jia, Y 2014, 'Depth Super-resolution by Fusing Depth Imaging and Stereo Vision with Structural Determinant Information Inference', 2014 22nd International Conference on Pattern Recognition, 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE, Stockholm, Sweden, pp. 4212-4217.
View/Download from: Publisher's site
View description>>
In this paper, we present a depth super-resolution framework by fusing depth imaging and stereo vision for high-resolution and high-accuracy depth maps. Depth cameras and stereo vision have their own limitations in some aspects, but their characteristics of range sensing are complementary. Thus, combining both approaches can produce more satisfactory results than either one. Unlike previous fusion methods, we initially take the noisy depth observation from the depth camera as prior information of scene structure. This prior information is also utilized to infer structural determinant information, such as depth discontinuity and occlusion, which is essential to improve the quality of the depth map in the fusion process. In succession, the prior knowledge helps to overcome difficulties of intensity inconsistency in image observation from the stereo vision component. Experimental results dem…
Wang, Z, Luo, T, Xu, G & Wang, X 2014, 'The Application of Cartesian-Join of Bloom Filters to Supporting Membership Query of Multidimensional Data', 2014 IEEE International Congress on Big Data, 2014 IEEE International Congress on Big Data (BigData Congress), IEEE, USA, pp. 288-295.
View/Download from: Publisher's site
Wei, W, Yin, J, Li, J & Cao, L 2014, 'Modeling Asymmetry and Tail Dependence among Multiple Variables by Using Partial Regular Vine', Proceedings of the 2014 SIAM International Conference on Data Mining, Proceedings of the 2014 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics, Philadelphia, USA, pp. 776-784.
View/Download from: Publisher's site
View description>>
Modeling high-dimensional dependence is widely studied to explore deep relations among multiple variables, and is particularly useful for financial risk assessment. Very often, existing high-dimensional dependence models apply strong restrictions on the dependence structure. These restrictions prevent the detection of sophisticated structures such as asymmetry and upper and lower tail dependence between multiple variables. This paper proposes a partial regular vine copula model to relax these restrictions. The new model employs partial correlation to construct the regular vine structure, which is algebraically independent. The model is also able to capture the asymmetric characteristics among multiple variables by using two-parameter copulas with flexible lower and upper tail dependence. Our method is tested on a cross-country stock market data set to analyse asymmetry and tail dependence. The high prediction performance is examined by the Value at Risk, a commonly adopted evaluation measure in financial markets.
Wu, J, Cai, Z, Pan, S, Zhu, X & Zhang, C 2014, 'Attribute weighting: How and when does it work for Bayesian Network Classification', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 4076-4083.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. A Bayesian Network (BN) is a graphical model which can be used to represent conditional dependency between random variables, such as diseases and symptoms. A Bayesian Network Classifier (BNC) uses a BN to characterize the relationships between attributes and class labels, where a simplified approach is to employ a conditional independence assumption between attributes and the corresponding class labels, i.e., the Naive Bayes (NB) classification model. One major approach to mitigate NB's primary weakness (the conditional independence assumption) is attribute weighting, and this type of approach has been proved to be effective for NB with its simple structure. However, for weighted BNCs involving complex structures, in which attribute weighting is embedded into the model, there is no existing study on whether the weighting will work for complex BNCs and how effectively it will impact the learning of a given task. In this paper, we first survey several complex structure models for BNCs, and then carry out experimental studies to investigate the effectiveness of attribute weighting strategies for complex BNCs, with a focus on Hidden Naive Bayes (HNB) and Averaged One-Dependence Estimation (AODE). Our studies use classification accuracy (ACC), area under the ROC curve (AUC), and conditional log likelihood (CLL) as performance metrics. Experiments and comparisons on 36 benchmark data sets demonstrate that attribute weighting techniques only slightly outperform unweighted complex BNCs with respect to ACC and AUC, but significant improvement can be observed using CLL.
Wu, J, Hong, Z, Pan, S, Zhu, X, Cai, Z & Zhang, C 2014, 'Exploring Features for Complicated Objects', Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '14: 2014 ACM Conference on Information and Knowledge Management, ACM, Shanghai, China, pp. 1699-1708.
View/Download from: Publisher's site
View description>>
Copyright 2014 ACM. In traditional multi-instance learning (MIL), instances are typically represented by using a single feature view. As MIL becomes popular in domain-specific learning tasks, aggregating multiple feature views to represent multi-instance bags has recently shown promising results, mainly because multiple views provide extra information for MIL tasks. Nevertheless, multiple views also increase the risk of involving redundant views and irrelevant features for learning. In this paper, we formulate a new cross-view feature selection problem that aims to identify the most representative features across all feature views for MIL. To achieve this goal, we design a new optimization problem by integrating both multi-view representation and multi-instance bag constraints. The solution to the objective function ensures that the identified top-m features are the most informative ones across all feature views. Experiments on two real-world applications demonstrate the performance of cross-view feature selection for content-based image retrieval and social media content recommendation.
Wu, J, Hong, Z, Pan, S, Zhu, X, Cai, Z & Zhang, C 2014, 'Multi-graph-view Learning for Graph Classification', Proceedings - IEEE International Conference on Data Mining, ICDM, IEEE International Conference on Data Mining, IEEE, Shenzhen, China, pp. 590-599.
View/Download from: Publisher's site
View description>>
Graph classification has traditionally focused on graphs generated from a single feature view. In many applications, it is common to have useful information from different channels/views to describe objects, which naturally results in a new representation with multiple graphs generated from different feature views being used to describe one object. In this paper, we formulate a new multi-graph-view learning task for graph classification, where each object to be classified contains graphs from multiple graph-views. This problem setting is essentially different from traditional single-graph-view graph classification, where graphs are from one single feature view. To solve the problem, we propose a Cross Graph-View Subgraph Feature based Learning (gCGVFL) algorithm that explores an optimal set of subgraphs, across multiple graph-views, as features to represent graphs. Specifically, we derive an evaluation criterion to estimate the discriminative power and the redundancy of subgraph features across all views, and assign proper weight values to each view to indicate its importance for graph classification. The iterative cross graph-view subgraph scoring and graph-view weight updating form a closed loop to find optimal subgraphs to represent graphs for multi-graph-view learning. Experiments and comparisons on real-world tasks demonstrate the algorithm's performance.
Wu, J, Hong, Z, Pan, S, Zhu, X, Zhang, C & Cai, Z 2014, 'Multi-Graph Learning with Positive and Unlabeled Bags', Proceedings of the 2014 SIAM International Conference on Data Mining, Proceedings of the 2014 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, USA, pp. 217-225.
View/Download from: Publisher's site
View description>>
© SIAM. In this paper, we formulate a new multi-graph learning task with only positive and unlabeled bags, where labels are only available for bags but not for individual graphs inside the bags. This problem setting raises significant challenges because the bag-of-graphs setting does not have features to directly represent graph data, and no negative bags exist for deriving discriminative classification models. To solve the challenge, we propose a puMGL learning framework which relies on two iteratively combined processes for multi-graph learning: (1) deriving features to represent graphs for learning; and (2) deriving discriminative models with only positive and unlabeled graph bags. For the former, we derive a subgraph scoring criterion to select a set of informative subgraphs to convert each graph into a feature space. To handle unlabeled bags, we assign a weight value to each bag and use the adjusted weight values to select the most promising unlabeled bags as negative bags. A margin graph pool (MGP), which contains some representative graphs from positive bags and identified negative bags, is used for selecting subgraphs and training graph classifiers. The iterative subgraph scoring, bag weight updating, and MGP-based graph classification form a closed loop to find optimal subgraphs and the most suitable unlabeled bags for multi-graph learning. Experiments and comparisons on real-world multi-graph data demonstrate the algorithm's performance.
Wu, J, Pan, S, Cai, Z, Zhu, X & Zhang, C 2014, 'Dual instance and attribute weighting for Naive Bayes classification', 2014 International Joint Conference on Neural Networks (IJCNN), 2014 International Joint Conference on Neural Networks (IJCNN), IEEE, Beijing, China, pp. 1675-1679.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Naive Bayes (NB) is a popular classification technique for data mining and machine learning. Many methods exist to improve the performance of NB by overcoming its primary weakness, the assumption that attributes are conditionally independent given the class, using techniques such as backwards sequential elimination and lazy elimination. Some weighting techniques, including attribute weighting and instance weighting, have also been proposed to improve the accuracy of NB. In this paper, we propose a dual weighted model, namely DWNB, for NB classification. In DWNB, we first employ an instance-similarity-based method to weight each training instance. After that, we build an attribute weighted model based on the new training data, where the calculation of the probability value is based on the embedded instance weights. The dual instance and attribute weighting allows DWNB to tackle the conditional independence assumption for accurate classification. Experiments and comparisons on 36 benchmark data sets demonstrate that DWNB outperforms existing weighted NB algorithms.
Wu, J, Zhu, X, Zhang, C & Cai, Z 2014, 'Multi-Instance Learning from Positive and Unlabeled Bags', The 18th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Taiwan, pp. 237-248.
View/Download from: Publisher's site
Wu, X, Ester, M & Xu, G 2014, 'Welcome from the ASONAM 2014 program chairs', 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, p. xiv.
View/Download from: Publisher's site
Xu, J, Wu, Q, Zhang, J, Silk, B, Ngo, GT & Tang, Z 2014, 'Efficient People Counting with Limited Manual Interferences', 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Wollongong, NSW, Australia.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. People counting is a topic with various practical applications. Over the last decade, two general approaches have been proposed to tackle this problem: a) counting based on individual human detection; b) counting by measuring the regression relation between crowd density and the number of people. Because the regression-based method can avoid explicit people detection, which faces several well-known challenges, it has been considered a robust method, particularly in complicated environments. An efficient regression-based method is proposed in this paper, which can be readily adopted into any existing video surveillance system. It adopts color-based segmentation to extract foreground regions in images. Regression is established between the foreground density and the number of people. This method is fast and can deal with lighting condition changes. Experiments on public datasets and one captured dataset have shown the effectiveness and robustness of the method.
Xu, L, Gong, C, Yang, J, Wu, Q & Yao, L 2014, 'Violent Video Detection Based on MoSIFT Feature and Sparse Coding', 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Florence, Italy, pp. 3538-3542.
View/Download from: Publisher's site
View description>>
To detect violence in a video, a common video description method is to apply a local spatio-temporal descriptor to the query video. The low-level description is then summarized into a high-level feature based on the Bag-of-Words (BoW) model. However, traditional spatio-temporal descriptors are not discriminative enough. Moreover, the BoW model roughly assigns each feature vector to only one visual word, inevitably causing quantization error. To tackle these constraints, this paper employs the Motion SIFT (MoSIFT) algorithm to extract the low-level description of a query video. To eliminate feature noise, Kernel Density Estimation (KDE) is exploited for feature selection on the MoSIFT descriptors. To obtain a highly discriminative video feature, this paper adopts a sparse coding scheme to further process the selected MoSIFT descriptors. Encouraging experimental results are obtained on two challenging datasets which record both crowded and non-crowded scenes. © 2014 IEEE.
Yin, H, Cui, B, Chen, L, Hu, Z & Huang, Z 2014, 'A temporal context-aware model for user behavior modeling in social media systems', Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, SIGMOD/PODS'14: International Conference on Management of Data, ACM, Snowbird, UT, USA, pp. 1543-1554.
View/Download from: Publisher's site
View description>>
Social media provides valuable resources to analyze user behaviors and capture user preferences. This paper focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. To further improve the performance of TCAM, an item-weighting scheme is proposed to enable TCAM to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on TCAM, we design an efficient query processing technique to support fast online recommendation for large social media data. Extensive experiments have been conducted to evaluate the performance of TCAM on four real-world datasets crawled from different social media sites. The experimental results demonstrate the superiority of the TCAM models, compared with the state-of-the-art competitor methods, by modeling user behaviors more precisely and making more effective and efficient recommendations. © 2014 ACM.
Yusoff, B & Merigó Lindahl, JM 2014, 'Analytical Hierarchy Process under Group Decision Making with Some Induced Aggregation Operators', Information Processing and Management of Uncertainty in Knowledge-Based Systems, Pt I, 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU), Springer International Publishing, Montpellier, France, pp. 476-485.
View/Download from: Publisher's site
Zhang, G & Piccardi, M 2014, 'Sequential labeling with structural SVM under the F1 loss', 2014 IEEE International Conference on Image Processing (ICIP), 2014 IEEE International Conference on Image Processing (ICIP), IEEE, Paris.
View/Download from: Publisher's site
View description>>
Sequential labeling addresses the classification of sequential data and is of increasing importance for the classification and segmentation of video data. The model traditionally used for sequential labeling is the hidden Markov model where the sequence of class labels to be predicted is encoded as a Markov chain. In recent years, hidden Markov models and other structural models have benefited from minimum-loss training approaches which in many cases lead to greater classification accuracy. However, the loss functions available for training are restricted to decomposable cases such as the zero-one loss and the Hamming loss. Other useful losses such as the F1 loss, equal error rates and others are not available for sequential labeling. For this reason, in this paper we propose a training algorithm that can cater for the F1 loss and any other loss function based on the contingency table. Experimental results over the challenging TUM Kitchen Dataset depicting human actions in a kitchen scenario show that the proposed training approach leads to significant improvement of different performance metrics such as the classification accuracy (4.3 percentage points) and the F1 measure (8.9 percentage points).
Zhang, Y, Zhang, W, Lin, X, Cheema, MA & Zhang, C 2014, 'Matching dominance', Proceedings of the 26th International Conference on Scientific and Statistical Database Management, SSDBM '14: Conference on Scientific and Statistical Database Management, ACM, Denmark, pp. 18-18.
View/Download from: Publisher's site
View description>>
The dominance operator plays an important role in a wide spectrum of multi-criteria decision making applications. Generally speaking, a dominance operator is a partial order on a set O of objects, and we say the dominance operator has the monotonic property regarding a family of ranking functions F if o1 dominates o2 implies f(o1) ≥ f(o2) for any ranking function f ∈ F and objects o1, o2 ∈ O. The dominance operator on multi-dimensional points is well defined and has the monotonic property regarding any monotonic ranking (scoring) function. Due to the uncertain nature of data in many emerging applications, a variety of existing works have studied the semantics of ranking queries on uncertain objects. However, the problem of a dominance operator over multi-dimensional uncertain objects remains open. Although there have been several attempts to propose dominance operators on multi-dimensional uncertain objects, none of them claims the monotonic property on these ranking approaches. Motivated by this, in this paper we propose a novel matching-based dominance operator, namely matching dominance, to capture the semantics of dominance for multi-dimensional uncertain objects so that the new dominance operator has the monotonic property regarding the monotonic parameterized ranking function, which can unify other popular ranking approaches for uncertain objects. We then develop a layer indexing technique, Matching Dominance based Band (MDB), to facilitate top-k queries on multi-dimensional uncertain objects based on the matching dominance operator proposed in this paper. Efficient algorithms are proposed to compute the MDB index. Comprehensive experiments convincingly demonstrate the effectiveness and efficiency of our indexing techniques. © 2014 ACM.
Rai, T & Ryan, L 2014, Vein Visualization Trial - 2nd Interim Report, pp. 1-17, Sydney.
Rai, T & Ryan, L 2014, Vein Visualization Trial - Interim Analysis, pp. 1-20, Sydney.
Stoianoff, NP, Cahill, A, Wright, E & Marshall, V (UTS – Indigenous Knowledge Forum & North West Local Land Services (NSW)) 2014, Recognising and Protecting Aboriginal Knowledge Associated with Natural Resource Management - White Paper for the Office of Environment and Heritage, NSW, 2014, pp. 1-137, Sydney.
Chang, X, Nie, F, Wang, S, Yang, Y, Zhou, X & Zhang, C 2014, 'Compound Rank-k Projections for Bilinear Analysis', arXiv.
View/Download from: Publisher's site
Chen, Q, Chen, B & Zhang, C 2014, 'Intelligent Strategies for Pathway Mining', Springer International Publishing, Switzerland, pp. 1-316.
View/Download from: Publisher's site