Stoianoff, NP, Roy, A, Reynolds, R & Adrian, A 2012, Intellectual Property - Text and Essential Cases, 4th edn, The Federation Press.
Apeh, E, Žliobaitė, I, Pechenizkiy, M & Gabrys, B 2012, 'Predicting Multi-class Customer Profiles Based on Transactions: a Case Study in Food Sales' in Research and Development in Intelligent Systems XXIX, Springer London, pp. 213-218.
Hussain, O, Sangka, KB & Hussain, FK 2012, 'Determining the Significance of Assessment Criteria for Risk Analysis in Business Associations' in Lu, J, Jain, LC & Zhang, G (eds), Handbook on Decision Making, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 403-416.
Risk assessment in business associations is the process that determines the likelihood of negative outcomes according to a given set of desired criteria. When there is more than one desired criterion to be achieved in a business association, risk assessment needs to capture the importance that each criterion has for the successful completion of the business activity. In this paper, we present an approach that determines the significance of each criterion with respect to the goal of the business association, taking into account the inter-dependencies that may exist between the different assessment criteria. This analysis provides important insights during the process of risk management, where the occurrence of such negative outcomes can be managed, according to their significance, to ensure the successful completion of a business activity. © Springer-Verlag Berlin Heidelberg 2012.
Le, M, Gabrys, B & Nauck, D 2012, 'A Hybrid Model for Business Process Event Prediction' in Research and Development in Intelligent Systems XXIX, Springer London, pp. 179-192.
Stoianoff, NP & Blazey, P 2012, 'Intellectual Property Laws and Governance' in Blazey, P & Chan, KW (eds), Commercial Law of the People's Republic of China, LAWBOOK CO., Sydney, pp. 167-180.
Commercial law plays a large part in China's transition to its status as a major trading nation. This book contains chapters that focus on areas of the law pertinent to China's continuing economic development. It provides an analysis of the Five Year Plans and their effect on the development of and changes in commercial law. China is focused on developing its internal market and Commercial Law of the People's Republic of China provides an examination of a number of highly relevant topics, such as Company Law, Labour Law, Property Law, Intellectual Property Law, Consumer Law, Energy Law and Renewable Energy Law. Chapters on Tax Law, Competition Law and Policy, and Commercial Arbitration Law written by experts in their field provide an up-to-date and in-depth coverage of other important commercial law subjects. This book acknowledges that China's rapid development is affected by policy changes on issues such as urbanisation, the structure of the industrial sector and the environment. These changes and their effect on the national economy and the legal system are discussed in the book.
Arsene, CTC, Gabrys, B & Al-Dabass, D 2012, 'Decision support system for water distribution systems based on neural networks and graphs theory for leakage detection', Expert Systems with Applications, vol. 39, no. 18, pp. 13214-13224.
Assis‐Dorr, H, Palacios‐Marques, D & Merigó, JM 2012, 'Social networking as an enabler of change in entrepreneurial Brazilian firms', Journal of Organizational Change Management, vol. 25, no. 5, pp. 699-708.
Purpose: This paper aims to research the effects of market orientation in the use of social networking and its relationship with organisational learning. Design/methodology/approach: The empirical study was carried out in 132 recently created Brazilian biotechnology companies. Structural equation models were used to test the hypotheses. Findings: The findings suggest that market orientation is positively related to social networking and organizational learning. The study also examines businesses that employ social networks and generate learning procedures within the organisations. Practical implications: Statistically speaking, the use of social networking platforms such as Facebook and Twitter has significant effects on the internal variables of the organisation, which is why businesses should develop new profiles that better reflect the company's corporate strategy. Originality/value: Currently, studies carried out on technologically based social networks are a new feature of the field of management. This article brings together classic management constructs, such as organizational learning and market orientation, with the incorporation of technological social networks.
Beck, D, Brandl, MB, Boelen, L, Unnikrishnan, A, Pimanda, JE & Wong, JWH 2012, 'Signal analysis for genome-wide maps of histone modifications measured by ChIP-seq', Bioinformatics, vol. 28, no. 8, pp. 1062-1069.
Motivation: Chromatin structure, including post-translational modifications of histones, regulates gene expression, alternative splicing and cell identity. ChIP-seq is an increasingly used assay to study chromatin function. However, tools for downstream bioinformatics analysis are limited and are only based on the evaluation of signal intensities. We reasoned that new methods taking into account other signal characteristics such as peak shape, location and frequencies might reveal new insights into chromatin function, particularly in situations where differences in read intensities are subtle. Results: We introduce an analysis pipeline, based on linear predictive coding (LPC), which allows the capture and comparison of ChIP-seq histone profiles. First, we show that the modeled signal profiles distinguish differentially expressed genes with accuracy comparable to signal intensities. The method was robust against parameter variations and performed well up to a signal-to-noise ratio of 0.55. Additionally, we show that LPC profiles of activating and repressive histone marks cluster into distinct groups and can be used to predict their function. Availability and implementation: A Matlab implementation along with usage instructions and an example input file is available from: http://www.cancerresearch.unsw.edu.au/crcweb.nsf/page/LPCHP Contact: d.beck@student.unsw.edu.au; jpimanda@unsw.edu.au; jason.wong@unsw.edu.au Supplementary information: Supplementary data are available at Bioinformatics online.
Bródka, P, Kazienko, P, Musiał, K & Skibicki, K 2012, 'Analysis of Neighbourhoods in Multi-layered Dynamic Social Networks', International Journal of Computational Intelligence Systems, vol. 5, no. 3, pp. 582-596.
Social networks existing among employees, customers or users of various IT systems have become one of the research areas of growing importance. A social network consists of nodes - social entities - and edges linking pairs of nodes. In regular, one-layered social networks, two nodes - i.e. people - are connected with a single edge, whereas in multi-layered social networks there may be many links of different types for a pair of nodes. Nowadays data about people and their interactions, which exists in all social media, provides information about many different types of relationships within one network. Analysing this data, one can obtain knowledge not only about the structure and characteristics of the network but also gain understanding of the semantics of human relations. Are they direct or not? Do people tend to sustain single or multiple relations with a given person? What type of communication is the most important for them? Answers to these and more questions enable us to draw conclusions about the semantics of human interactions. Unfortunately, most of the methods used for social network analysis (SNA) may be applied only to one-layered social networks. Thus, some new structural measures for multi-layered social networks are proposed in the paper, in particular: cross-layer clustering coefficient, cross-layer degree centrality and various versions of multi-layered degree centralities. The authors also investigated the dynamics of the multi-layered neighbourhood for five different layers within the social network. The evaluation of the presented concepts on a real-world dataset is presented. The measures proposed in the paper may directly be used in various methods for collective classification, in which nodes are assigned labels according to their structural input features.
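To make one of the proposed measures concrete, here is a minimal sketch (not the authors' code; the dict-of-layers representation and the toy network are invented) of a cross-layer degree centrality that averages a node's degree across the layers of a multi-layered network:

```python
# Hypothetical sketch: cross-layer degree centrality in a multi-layered network.
# A network is a dict mapping layer name -> dict of node -> set of neighbours.

def cross_layer_degree(network, node):
    """Average the node's degree over all layers of the multi-layered network."""
    layers = network.values()
    return sum(len(layer.get(node, set())) for layer in layers) / len(network)

# Two layers (e.g. 'email' and 'phone' communication) over the same node set.
net = {
    "email": {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}},
    "phone": {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}},
}
print(cross_layer_degree(net, "a"))  # (2 + 1) / 2 = 1.5
```

Variants of the multi-layered degree centralities proposed in the paper differ in how the per-layer degrees are weighted and combined; the plain average above is only the simplest choice.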
Cao, L 2012, 'Actionable knowledge discovery and delivery', WIREs Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 149-163.
Actionable knowledge has been qualitatively and intensively studied in the social sciences. Its marriage with data mining is only a recent story. On the one hand, data mining has been booming for a while and has attracted an increasing variety of applications. On the other, it is a reality that the so-called knowledge discovered from data by following the classic frameworks often cannot support meaningful decision-making actions. This shows the poor relationship and significant gap between data mining research and practice, and between knowledge, power and action, and forms an increasing imbalance between research outcomes and business needs. Thorough and innovative retrospection and thinking are timely in bridging the gaps and promoting data mining toward next-generation research and development: namely, the paradigm shift from knowledge discovery from data to actionable knowledge discovery and delivery. © 2012 Wiley Periodicals, Inc.
Cao, L, Ou, Y & Yu, PS 2012, 'Coupled Behavior Analysis with Applications', IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 8, pp. 1378-1392.
Coupled behaviors refer to the activities of one to many actors who are associated with each other in terms of certain relationships. With increasing network and community-based events and applications, such as group-based crime and social network interactions, behavior coupling contributes to the causes of eventual business problems. Effective approaches for analyzing coupled behaviors are not available, since existing methods mainly focus on individual behavior analysis. This paper discusses the problem of Coupled Behavior Analysis (CBA) and its challenges. A Coupled Hidden Markov Model (CHMM)-based approach is illustrated to model and detect abnormal group-based trading behaviors. The CHMM models cater for: 1) multiple behaviors from a group of people, 2) behavioral properties, 3) interactions among behaviors, customers, and behavioral properties, and 4) significant changes between coupled behaviors. We demonstrate and evaluate the models on order-book-level stock tick data from a major Asian exchange and show that the proposed CHMMs outperform HMMs that model a single sequence or combine multiple single sequences without considering coupling relationships to detect anomalies. Finally, we discuss interaction relationships and modes between coupled behaviors, which are worthy of substantial study. © 1989-2012 IEEE.
Cao, L, Weiss, G & Yu, PS 2012, 'A brief introduction to agent mining', Autonomous Agents and Multi-Agent Systems, vol. 25, no. 3, pp. 419-424.
Agent mining is an emerging interdisciplinary area that integrates multiagent systems, data mining and knowledge discovery, machine learning and other relevant areas. It brings new opportunities for tackling issues in relevant fields more efficiently.
Casanovas, M & Merigó, JM 2012, 'Fuzzy aggregation operators in decision making with Dempster–Shafer belief structure', Expert Systems with Applications, vol. 39, no. 8, pp. 7138-7149.
Chen, P, Wong, L & Li, J 2012, 'Detection of Outlier Residues for Improving Interface Prediction in Protein Heterocomplexes', IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 9, no. 4, pp. 1155-1165.
Sequence-based understanding and identification of protein binding interfaces is a challenging research topic due to the complexity of protein systems and the imbalanced distribution between interface and noninterface residues.
Eastwood, M & Gabrys, B 2012, 'Generalised bottom-up pruning: A model level combination of decision trees', Expert Systems with Applications, vol. 39, no. 10, pp. 9150-9158.
Ellis, J, Goodswen, S, Kennedy, PJ & Bush, S 2012, 'The Core Mouse Response to Infection by Neospora Caninum Defined by Gene Set Enrichment Analyses', Bioinformatics and Biology Insights, vol. 6, pp. BBI.S9954-BBI.S9954.
In this study, the BALB/c and Qs mouse responses to infection by the parasite Neospora caninum were investigated in order to identify host response mechanisms. Investigation was done using gene set (enrichment) analyses of microarray data. GSEA, MANOVA, Romer, subGSE and SAM-GS were used to study the contrasts Neospora strain type, Mouse type (BALB/c and Qs) and time post infection (6 hours post infection and 10 days post infection). The analyses show that the major signal in the core mouse response to infection is from time post infection and can be defined by gene ontology terms Protein Kinase Activity, Cell Proliferation and Transcription Initiation. Several terms linked to signaling, morphogenesis, response and fat metabolism were also identified. At 10 days post infection, genes associated with fatty acid metabolism were identified as up regulated in expression. The value of gene set (enrichment) analyses in the analysis of microarray data is discussed.
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2012, 'Evaluating High-Throughput Ab Initio Gene Finders to Discover Proteins Encoded in Eukaryotic Pathogen Genomes Missed by Laboratory Techniques', PLOS ONE, vol. 7, no. 11.
Hussain, OK, Dillon, T, Hussain, FK & Chang, E 2012, 'Probabilistic assessment of loss in revenue generation in demand-driven production', Journal of Intelligent Manufacturing, vol. 23, no. 6, pp. 2069-2084.
In demand-driven production with just-in-time inputs, there are several sources of uncertainty which impact on the manufacturer's ability to meet the required customer demand within the given time frame. This can result in a loss of revenue and customers, which will have undesirable impacts on the financial aspects and on the viability of the manufacturer. Hence, a key concern for manufacturers in just-in-time production is to determine whether they can meet a specific level of demand within a given time frame, to meet the customers' orders and also to achieve the required revenue target for that period of time. In this paper, we propose a methodology by which a manufacturer can ascertain the probability of not meeting the required demand within a given period by considering the uncertainties in the availability of production units and raw materials, and the loss of financial revenue that it would experience as a result.
Janjua, NK & Hussain, FK 2012, 'Web@IDSS - Argumentation-enabled Web-based IDSS for reasoning over incomplete and conflicting information', Knowledge-Based Systems, vol. 32, no. 1, pp. 9-27.
Over the past few decades, there has been a resurgence of interest in using high-level software intelligence for business intelligence (BI). The objective is to produce actionable information that is delivered at the right time, easily comprehensible and exportable to other software to assist business decision-making processes. Although the design and development of decision support systems (DSS) has been carried out for over 40 years, DSS still suffer from many limitations such as poor maintainability, poor flexibility and low reusability. The development of the Internet and WWW has helped information systems to overcome those limitations, and Web DSS is now an active area of research in business intelligence, impacting significantly on the way information is exchanged and businesses are conducted. To remain competitive, companies rely on BI to continually monitor and analyze the operating environment (both internal and external), to identify potential risks, and to devise competitive business strategies. However, current Web DSS applications are not able to reason over information present across organizational boundaries, which could be incomplete and conflicting. The use of an argumentation-based mechanism has not been explored to address such shortcomings in Web DSS. Argumentation is a kind of commonsense reasoning used by human beings to reach a justifiable conclusion when available information is incomplete and/or inconsistent among participants. In this paper, we propose and elaborate in detail a conceptual framework and formal argumentation-based semantics for Web-enabled Intelligent DSS (Web@IDSS). We evaluate the use of argumentative reasoning in Web DSS with the help of a case study, prototype development and future directions. Applications built according to the proposed framework will provide more practical, understandable results to decision makers.
Kusakunniran, W, Wu, Q, Zhang, J & Li, H 2012, 'Gait Recognition Across Various Walking Speeds Using Higher Order Shape Configuration Based on a Differential Composition Model', IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 6, pp. 1654-1668.
Gait has been known as an effective biometric feature to identify a person at a distance. However, variation of walking speeds may lead to significant changes to human walking patterns. It causes many difficulties for gait recognition. A comprehensive analysis has been carried out in this paper to identify such effects. Based on the analysis, Procrustes shape analysis is adopted for gait signature description and relevant similarity measurement. To tackle the challenges raised by speed change, this paper proposes a higher order shape configuration for gait shape description, which deliberately conserves discriminative information in the gait signatures and is still able to tolerate the varying walking speed. Instead of simply measuring the similarity between two gaits by treating them as two unified objects, a differential composition model (DCM) is constructed. The DCM differentiates the different effects caused by walking speed changes on various human body parts. In the meantime, it also balances well the different discriminabilities of each body part on the overall gait similarity measurements. In this model, the Fisher discriminant ratio is adopted to calculate weights for each body part. Comprehensive experiments based on widely adopted gait databases demonstrate that our proposed method is efficient for cross-speed gait recognition and outperforms other state-of-the-art methods. © 1996-2012 IEEE.
Kusakunniran, W, Wu, Q, Zhang, J & Li, H 2012, 'Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron', Pattern Recognition Letters, vol. 33, no. 7, pp. 882-889.
Gait has been shown to be an efficient biometric feature for human identification at a distance. However, performance of gait recognition can be affected by view variation. This leads to a consequent difficulty of cross-view gait recognition. A novel method is proposed to solve the above difficulty by using view transformation model (VTM). VTM is constructed based on regression processes by adopting multi-layer perceptron (MLP) as a regression tool. VTM estimates gait feature from one view using a well selected region of interest (ROI) on gait feature from another view. Thus, trained VTMs can normalize gait features from across views into the same view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition which estimates gait feature on one view using selected gait features from several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms other baseline methods in literature for both cross-view and multi-view gait recognitions. In our experiments, particularly, average accuracies of 99%, 98% and 93% are achieved for multiple views gait recognition by using 5 cameras, 4 cameras and 3 cameras respectively. © 2011 Elsevier B.V. All rights reserved.
Kusakunniran, W, Wu, Q, Zhang, J & Li, H 2012, 'Gait Recognition Under Various Viewing Angles Based on Correlated Motion Regression', IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 6, pp. 966-980.
It is well recognized that gait is an important biometric feature to identify a person at a distance, e.g., in video surveillance application. However, in reality, change of viewing angle causes significant challenge for gait recognition. A novel approach using regression-based view transformation model (VTM) is proposed to address this challenge. Gait features from across views can be normalized into a common view using learned VTM(s). In principle, a VTM is used to transform gait feature from one viewing angle (source) into another viewing angle (target). It consists of multiple regression processes to explore correlated walking motions, which are encoded in gait features, between source and target views. In the learning processes, sparse regression based on the elastic net is adopted as the regression function, which is free from the problem of overfitting and results in more stable regression models for VTM construction. Based on widely adopted gait database, experimental results show that the proposed method significantly improves upon existing VTM-based methods and outperforms most other baseline methods reported in the literature. Several practical scenarios of applying the proposed method for gait recognition under various views are also discussed in this paper. © 2012 IEEE.
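As a rough illustration of the regression idea behind a view transformation model (not the paper's implementation: a ridge closed form stands in for the elastic net for brevity, and all data below are synthetic), one can learn a linear map from source-view gait features to target-view features:

```python
import numpy as np

# Hedged sketch of a regression-based view transformation model (VTM):
# learn a mapping from source-view gait features to a target view.
# A ridge closed form stands in here for the paper's elastic net regression.

def train_vtm(X_src, X_tgt, lam=1.0):
    """Return weights W so that X_src @ W approximates X_tgt (ridge solution)."""
    d = X_src.shape[1]
    return np.linalg.solve(X_src.T @ X_src + lam * np.eye(d), X_src.T @ X_tgt)

rng = np.random.default_rng(0)
X_src = rng.normal(size=(50, 8))       # 50 training gaits from the source view
W_true = rng.normal(size=(8, 8))       # hidden source-to-target mapping
X_tgt = X_src @ W_true + 0.01 * rng.normal(size=(50, 8))

W = train_vtm(X_src, X_tgt)
probe = rng.normal(size=(1, 8))        # unseen source-view gait feature
err = np.abs(probe @ W - probe @ W_true).max()
print(err)                             # small: transformed feature is close to the target view
```

After transformation, gait similarity can be measured in the common view; the paper's contribution lies in the sparse elastic-net regression, which avoids overfitting where the naive least-squares map would not.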
Lambert, M & Kennedy, P 2012, 'Using Artificial Intelligence to Build with Unprocessed Rock', Key Engineering Materials, vol. 517, pp. 939-945.
Unprocessed rock is a massive resource of very cheap building material with very low embodied energy. However, it is highly underutilised due to the difficulty of dealing with irregular shaped blocks. We have developed a novel software application using the artificial intelligence methods of search and optimisation to simulate building three-dimensional structures in a virtual world. The aim of our software is to help builders solve the 3-dimensional jigsaw puzzle of building with rock rubble with an emphasis on its potential use for building sustainable housing and infrastructure. This paper describes our approach and the design of our software including an overview of the rock digitising, optimisation software and building methods. We present simulation results of building and testing several small drystone structures using the prototype software.
Li, L, Zhong, L, Xu, G & Kitsuregawa, M 2012, 'A feature-free search query classification approach using semantic distance', Expert Systems with Applications, vol. 39, no. 12, pp. 10739-10748.
When classifying search queries into a set of target categories, machine learning based conventional approaches usually make use of external sources of information to obtain additional features for search queries and training data for target categories. Unfortunately, these approaches rely on a large amount of training data for high classification precision. Moreover, they are known to suffer from an inability to adapt to different target categories, which may be caused by the dynamic changes observed in both Web topic taxonomy and Web content. In this paper, we propose a feature-free classification approach using semantic distance. We analyze queries and categories themselves and utilize the number of Web pages containing both a query and a category as a semantic distance to determine their similarity. The most attractive feature of our approach is that it only utilizes the Web page counts estimated by a search engine to provide the search query classification with respectable accuracy. In addition, it can easily adapt to changes in the target categories, whereas machine learning based approaches require an extensive updating process, e.g., re-labeling outdated training data and re-training classifiers, which is time consuming and high-cost. We conduct an experimental study on the effectiveness of our approach using a set of rank measures and show that our approach performs competitively with some popular state-of-the-art solutions which, however, frequently use external sources and are inherently insufficient in flexibility. © 2012 Elsevier Ltd. All rights reserved.
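The page-count idea can be sketched in a few lines. The formula below is a normalized-distance-style heuristic for illustration only, not the paper's definition, and every count is made up:

```python
import math

# Hypothetical sketch of a co-occurrence-based semantic distance: the fewer
# pages that contain both the query and the category relative to each alone,
# the larger the distance. (Illustrative formula; not the paper's.)

def semantic_distance(count_q, count_c, count_qc, total_pages):
    """Normalized distance computed purely from web page counts."""
    lq, lc, lqc = math.log(count_q), math.log(count_c), math.log(count_qc)
    return (max(lq, lc) - lqc) / (math.log(total_pages) - min(lq, lc))

# Query 'jaguar' vs. category 'Cars', with made-up page counts.
d = semantic_distance(count_q=9_000_000, count_c=50_000_000,
                      count_qc=800_000, total_pages=10_000_000_000)
print(round(d, 3))  # smaller distance -> more likely to belong to the category
```

Classification then amounts to assigning the query to the category (or categories) with the smallest distance, so no labeled training data is needed.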
Li, Y & Li, J 2012, 'Disease gene identification by random walk on multigraphs merging heterogeneous genomic and phenotype data', BMC Genomics, vol. 13, no. Suppl 7, pp. S27-S27.
Background: High-throughput experiments resulted in many genomic datasets and hundreds of candidate disease genes. To discover the real disease genes from a set of candidate genes, computational methods have been proposed that work on various types of genomic data sources. As a single source of genomic data is prone to bias, incompleteness and noise, integration of different genomic data sources is highly demanded to accomplish reliable disease gene identification. Results: In contrast to the commonly adopted data integration approach, which integrates separate lists of candidate genes derived from each single data source, we merge various genomic networks into a multigraph which is capable of connecting multiple edges between a pair of nodes. This novel approach provides a data platform with strong noise tolerance to prioritize the disease genes. A new idea of random walk is then developed to work on multigraphs using a modified step to calculate the transition matrix. Our method is further enhanced to deal with heterogeneous data types by allowing cross-walk between phenotype and gene networks. Compared on benchmark datasets, our method is shown to be more accurate than the state-of-the-art methods in disease gene identification. We also conducted a case study to identify disease genes for Insulin-Dependent Diabetes Mellitus. Some of the newly identified disease genes are supported by recently published literature.
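A plain random walk with restart over a multigraph collapsed to edge multiplicities gives the flavor of this family of methods (a sketch with an invented toy graph; the paper's modified transition step and cross-walk are not reproduced here):

```python
import numpy as np

# Hedged sketch of random walk with restart (RWR) for gene prioritization.
# The multigraph is approximated by summing parallel-edge multiplicities
# into weights before column-normalizing the transition matrix.

def rwr(adj, seed, restart=0.7, tol=1e-10):
    """Steady-state visiting probabilities of a restarting walk from `seed`."""
    W = adj / adj.sum(axis=0)            # column-normalized transition matrix
    p0 = np.zeros(adj.shape[0])
    p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy multigraph collapsed to weights: two parallel edges between 0 and 1.
A = np.array([[0, 2, 1],
              [2, 0, 0],
              [1, 0, 0]], dtype=float)
scores = rwr(A, seed=0)
print(scores.argsort()[::-1])  # candidate genes ranked by proximity to the seed
```

Candidates are ranked by their steady-state probability: genes connected to the seed by more (or heavier) edges score higher, which is why merging networks into a multigraph, rather than intersecting candidate lists, tolerates noise in any single source.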
Li, Z, He, Y, Cao, L, Wong, L & Li, J 2012, 'Conservation of water molecules in protein binding interfaces', International Journal of Bioinformatics Research and Applications, vol. 8, no. 3/4, pp. 228-228.
The conservation of interfacial water molecules has only been studied in small data sets consisting of interfaces of a specific function. So far, no general conclusions have been drawn from large-scale analysis, due to the challenges of using structural alignment in large data sets. To avoid using structural alignment, we propose a solvated sequence method to analyse water conservation properties in protein interfaces. We first use water information to label the residues, and then align interfacial residues in a fashion similar to normal sequence alignment. Our results show that, for a water-contacting interfacial residue, substituting it with hydrophobic residues tends to desolvate the local area. Surprisingly, residues with short side chains also tend not to lose their contacting water, emphasising the role of water in shaping binding sites. Deeply buried water molecules are found to be more conserved in terms of their contacts with interfacial residues.
Li, Z, He, Y, Wong, L & Li, J 2012, 'Progressive dry-core-wet-rim hydration trend in a nested-ring topology of protein binding interfaces', BMC Bioinformatics, vol. 13.
Background: Water is an integral part of protein complexes. It shapes protein binding sites by filling cavities and it bridges local contacts by hydrogen bonds. However, water molecules have usually not been included in protein interface models in the past, and few distribution profiles of water molecules in protein binding interfaces are known. Results: In this work, we use a tripartite protein-water-protein interface model and a nested-ring atom re-organization method to detect hydration trends and patterns from an interface data set which involves immobilized interfacial water molecules. This data set consists of 206 obligate interfaces, 160 non-obligate interfaces, and 522 crystal packing contacts. The two types of biological interfaces are found to be drier than the crystal packing interfaces in our data, consistent with a hydration pattern reported earlier, although the previous definition of immobilized water was purely distance-based. The biological interfaces in our data set are also found to be subject to stronger water exclusion in their formation. To study the overall hydration trend in protein binding interfaces, atoms at the same burial level in each tripartite protein-water-protein interface are organized into a ring. The rings of an interface are then ordered with the core atoms placed at the middle of the structure to form a nested-ring topology. We find that water molecules on the rings of an interface are generally configured in a dry-core-wet-rim pattern, with a progressive level-wise solvation towards the rim of the interface. This solvation trend becomes even sharper when counterexamples are separated.
Liu, Q, Wong, L & Li, J 2012, 'Z-score biological significance of binding hot spots of protein interfaces by using crystal packing as the reference state', Biochimica et Biophysica Acta - Proteins and Proteomics, vol. 1824, no. 12, pp. 1457-1467.
Characterization of binding hot spots of protein interfaces is a fundamental study in molecular biology. Many computational methods have been proposed to identify binding hot spots. However, there are few studies that assess the biological significance of binding hot spots. We introduce the notion of biological significance of a contact residue for capturing the probability of the residue occurring in or contributing to protein binding interfaces. We take a statistical Z-score approach to the assessment of the biological significance. The method has three main steps. First, the potential score of a residue is defined by using a knowledge-based potential function with relative accessible surface area calculations. A null distribution of this potential score is then generated from artifact crystal packing contacts. Finally, the Z-score significance of a contact residue with a specific potential score is determined according to this null distribution. We hypothesize that residues at binding hot spots have large absolute Z-scores as they contribute greatly to binding free energy. Thus, we propose to use the Z-score to predict whether a contact residue is a hot spot residue. Comparison with previously reported methods on two benchmark datasets shows that this Z-score method is mostly superior to earlier methods. This article is part of a Special Issue entitled: Computational Methods for Protein Interaction and Structural Prediction.
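The final scoring step can be sketched as follows (all numbers are invented, and the knowledge-based potential function itself is not reproduced; only the Z-score-against-null idea is shown):

```python
import statistics

# Hypothetical sketch: Z-score significance of a residue's potential score
# against a null distribution drawn from crystal packing contacts.

def z_score(score, null_scores):
    """How many null standard deviations `score` lies from the null mean."""
    mu = statistics.mean(null_scores)
    sigma = statistics.stdev(null_scores)
    return (score - mu) / sigma

# Made-up potential scores sampled from artifact crystal packing contacts.
null = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.02]
print(z_score(-1.8, null))  # large |Z| -> candidate hot spot residue
```

Because the null is built from crystal packing artifacts rather than biological interfaces, a residue whose score sits far out in the tail is unlikely to be explained by packing alone, which is the paper's criterion for biological significance.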
Liu, T, Lipnicki, DM, Zhu, W, Tao, D, Zhang, C, Cui, Y, Jin, JS, Sachdev, PS & Wen, W 2012, 'Cortical Gyrification and Sulcal Spans in Early Stage Alzheimer's Disease', PLoS ONE, vol. 7, no. 2, pp. e31083-e31083.
View/Download from: Publisher's site
View description>>
Alzheimer's disease (AD) is characterized by an insidious onset of progressive cerebral atrophy and cognitive decline. Previous research suggests that cortical folding and sulcal width are associated with cognitive function in elderly individuals, and the aim of the present study was to investigate these morphological measures in patients with AD. The sample contained 161 participants, comprising 80 normal controls, 57 patients with very mild AD, and 24 patients with mild AD. From 3D T1-weighted brain scans, automated methods were used to calculate an index of global cortex gyrification and the width of five individual sulci: superior frontal, intra-parietal, superior temporal, central, and Sylvian fissure. We found that global cortex gyrification decreased with increasing severity of AD, and that the width of all individual sulci investigated other than the intra-parietal sulcus was greater in patients with mild AD than in controls. We also found that cognitive functioning, as assessed by Mini-Mental State Examination (MMSE) scores, decreased as global cortex gyrification decreased. MMSE scores also decreased in association with a widening of all individual sulci investigated other than the intra-parietal sulcus. The results suggest that abnormalities of global cortex gyrification and regional sulcal span are characteristic of patients with even very mild AD, and could thus facilitate the early diagnosis of this condition. © 2012 Liu et al.
Liu, Z, Chen, Q, Dai, N, Yu, Y, Yang, L & Li, J 2012, 'Tunable white light emitting glass suitable for long-wavelength ultraviolet excitation', Journal of Non-Crystalline Solids, vol. 358, no. 23, pp. 3289-3293.
View/Download from: Publisher's site
Longbing Cao 2012, 'Social Security and Social Welfare Data Mining: An Overview', IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 837-853.
View/Download from: Publisher's site
View description>>
The importance of social security and social welfare business has been increasingly recognized in more and more countries. It impinges on a large proportion of the population and affects government service policies and people's quality of life. Typical welfare countries, such as Australia and Canada, have accumulated a huge amount of social security and social welfare data. Emerging business issues such as fraudulent outlays, and customer service and performance improvements, challenge existing policies as well as techniques and systems, including data matching and business intelligence reporting systems. The need for a deep understanding of customers and customer-government interactions through advanced data analytics has been increasingly recognized by the community at large. So far, however, no substantial work on the mining of social security and social welfare data has been reported. For the first time in data mining and machine learning, and to the best of our knowledge, this paper draws a comprehensive overall picture and summarizes the corresponding techniques and illustrations to analyze social security/welfare data, namely, social security data mining (SSDM), based on a thorough review of a large number of related references from the past half century. In particular, we introduce an SSDM framework, including business and research issues, social security/welfare services and data, as well as challenges, goals, and tasks in mining social security/welfare data. A summary of SSDM case studies is also presented, with substantial citations that direct readers to more specific techniques and practices in SSDM.
Melli, G, Wu, X, Beinat, P, Bonchi, F, Cao, L, Duan, R, Faloutsos, C, Ghani, R, Kitts, B, Goethals, B, Mclachlan, G, Pei, J, Srivastava, A & Zaiane, O 2012, 'TOP-10 DATA MINING CASE STUDIES', INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY & DECISION MAKING, vol. 11, no. 2, pp. 389-400.
View/Download from: Publisher's site
View description>>
We report on the panel discussion held at the ICDM'10 conference on the top 10 data mining case studies in order to provide a snapshot of where and how data mining techniques have made significant real-world impact. The tasks covered by 10 case studies r
Merigó, J & Gil-Lafuente, A 2012, 'A method for decision making with the OWA operator', Computer Science and Information Systems, vol. 9, no. 1, pp. 357-380.
View/Download from: Publisher's site
View description>>
A new method for decision making that uses the ordered weighted averaging (OWA) operator in the aggregation of information is presented. It uses a concept known in the literature as the index of maximum and minimum level (IMAM). This index is based on distance measures and other techniques that are useful for decision making. By using the OWA operator in the IMAM, we form a new aggregation operator that we call the ordered weighted averaging index of maximum and minimum level (OWAIMAM) operator. Its main advantage is that it provides a parameterized family of aggregation operators between the minimum and the maximum, together with a wide range of special cases. The decision maker may then make decisions according to their degree of optimism, while considering ideals in the decision process. A further extension of this approach is presented using hybrid averages and Choquet integrals. We also develop an application of the new approach to a multi-person decision-making problem regarding the selection of strategies.
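The OWA aggregation underlying the abstract above can be sketched in a few lines. The values and weights below are hypothetical; the sketch only illustrates the operator's defining step of weighting by ordered position rather than by argument.

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending
    order, then take the weighted sum with position-based weights."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to one
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

# weight concentrated on the top positions reflects an optimistic attitude
print(owa([0.4, 0.9, 0.6], [0.5, 0.3, 0.2]))
```

The weight vector (1, 0, ..., 0) recovers the maximum and (0, ..., 0, 1) the minimum, which is the "parameterized family between the minimum and the maximum" the abstract refers to.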
Merigó, JM 2012, 'OWA operators in the weighted average and their application in decision making', Control and Cybernetics, vol. 41, no. 3, pp. 605-643.
View description>>
We introduce a new aggregation operator that unifies the weighted average (WA) and the ordered weighted averaging (OWA) operator in a single formulation. We call it the ordered weighted averaging - weighted average (OWAWA) operator. This aggregation operator provides a more complete representation of the weighted average and the OWA operator because it considers the degree of importance that each concept has in the aggregation and includes them as particular cases of a more general context. We study different properties and families of the OWAWA operator. The applicability of this method is very broad because any study that uses the weighted average or the OWA can be revised and extended with our approach. We focus on a multi-person decision-making application in the selection of financial strategies.
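A minimal sketch of the unification described above, assuming the common convex-combination reading of the OWAWA operator; all values, weights and the mixing parameter below are hypothetical.

```python
def owawa(values, wa_weights, owa_weights, beta):
    """OWAWA sketch: convex combination of the weighted average
    (weights attached to arguments) and the OWA operator
    (weights attached to ordered positions)."""
    wa = sum(w * v for w, v in zip(wa_weights, values))
    ordered = sorted(values, reverse=True)
    owa = sum(w * b for w, b in zip(owa_weights, ordered))
    return beta * wa + (1 - beta) * owa

# beta = 1 recovers the weighted average; beta = 0 recovers the OWA
print(owawa([3, 1, 2], [0.2, 0.5, 0.3], [0.5, 0.3, 0.2], 0.5))
```

This shows how both classical operators appear as particular cases of the more general formulation.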
Merigó, JM 2012, 'Probabilities in the OWA operator', Expert Systems with Applications, vol. 39, no. 13, pp. 11456-11467.
View/Download from: Publisher's site
Merigó, JM 2012, 'The probabilistic weighted average and its application in multiperson decision making', International Journal of Intelligent Systems, vol. 27, no. 5, pp. 457-476.
View/Download from: Publisher's site
Merigó, JM & Casanovas, M 2012, 'Decision-making with uncertain aggregation operators using the Dempster-Shafer belief structure', International Journal of Innovative Computing, Information and Control, vol. 8, no. 2, pp. 1037-1061.
View description>>
We develop a new decision-making model using the Dempster-Shafer (D-S) belief structure when available information is uncertain and can be assessed with interval numbers. We use a wide range of aggregation operators involving interval numbers, such as the uncertain weighted average (UWA), the uncertain ordered weighted average (UOWA), the uncertain generalized weighted average (UGWA) and the uncertain generalized ordered weighted average (UGOWA). We present a new approach to using interval weights in these uncertain aggregation operators. By using these aggregation operators within a D-S framework, we obtain various belief-structure (BS) operators, including the BS-UWA, the BS-UOWA, the BS-UGWA and the BS-UGOWA. We also use more complete formulations based on induced, hybrid and quasi-arithmetic aggregation operators. We end the paper by applying these operators to a decision-making problem regarding strategic management. © 2012 ICIC International.
Merigo, JM & Gil-Lafuente, AM 2012, 'Decision-making techniques with similarity measures and OWA operators', SORT, vol. 36, no. 1, pp. 81-102.
View description>>
We analyse the use of the ordered weighted average (OWA) in decision-making giving special attention to business and economic decision-making problems. We present several aggregation techniques that are very useful for decision-making such as the Hamming distance, the adequacy coefficient and the index of maximum and minimum level. We suggest a new approach by using immediate weights, that is, by using the weighted average and the OWA operator in the same formulation. We further generalize them by using generalized and quasi-arithmetic means. We also analyse the applicability of the OWA operator in business and economics and we see that we can use it instead of the weighted average. We end the paper with an application in a business multi-person decision-making problem regarding production management.
Merigó, JM, Carral, CL & Castillo, AC 2012, 'Decision making in the European Union under risk and uncertainty', European J. of International Management, vol. 6, no. 5, p. 590.
View/Download from: Publisher's site
Merigó, JM, Casanovas, M & Engemann, KJ 2012, 'Group decision-making with generalized and probabilistic aggregation operators', International Journal of Innovative Computing, Information and Control, vol. 8, no. 7 A, pp. 4823-4835.
View description>>
The aim of this paper is to introduce a unified model between the generalized ordered weighted averaging (GOWA) operator and generalized probabilistic aggregation. We present the generalized probabilistic OWA (GPOWA) operator, a new aggregation operator that unifies the probability with the OWA operator, considering the degree of importance that each concept has in the analysis. It includes a wide range of particular cases, including the GOWA operator and the probabilistic OWA (POWA) operator. We also study the applicability of this new approach and see that it is very broad, because all previous studies that use the probability or the OWA operator can be revised with it. We develop an application in multi-person decision making concerning the selection of optimal strategies. © ICIC International 2012.
Merigó, JM, Gil-Lafuente, AM & Martorell, O 2012, 'Uncertain induced aggregation operators and its application in tourism management', Expert Systems with Applications, vol. 39, no. 1, pp. 869-880.
View/Download from: Publisher's site
Merigó, JM, Gil-Lafuente, AM, Zhou, L-G & Chen, H-Y 2012, 'Induced and Linguistic Generalized Aggregation Operators and Their Application in Linguistic Group Decision Making', Group Decision and Negotiation, vol. 21, no. 4, pp. 531-549.
View/Download from: Publisher's site
Merigó-Lindahl, JM 2012, 'Bibliometric Analysis of Business and Economics in the Web of Science', Studies in Fuzziness and Soft Computing, vol. 287, pp. 3-17.
View/Download from: Publisher's site
View description>>
We present a general overview of the most influential results found in the Web of Science in the subject area of Business & Economics, which includes the categories of Business, Economics, Business Finance and Management. We analyse the most cited papers in history and rank the most influential institutions by number of papers published. We analyse the most relevant journals, the temporal evolution and the countries with the highest number of publications. We also develop a similar analysis for the Spanish case, studying the most cited papers, the most influential institutions and the temporal evolution. Note that this study is based only on the results found in the Web of Science, with the objective of giving a general overview of the research done in Business & Economics, especially over the last half century. However, many exceptions and particularities may be found throughout the results. © 2012 Springer-Verlag Berlin Heidelberg.
Mirtalaei, MS, Saberi, M, Hussain, OK, Ashjari, B & Hussain, FK 2012, 'A trust-based bio-inspired approach for credit lending decisions', COMPUTING, vol. 94, no. 7, pp. 541-577.
View/Download from: Publisher's site
View description>>
Credit scoring computation essentially involves taking into account various financial factors and the previous behavior of the credit-requesting person. There is a strong degree of correlation between the compliance level and the credit score of a given entity. The concept of trust has been widely used and applied in the existing literature to determine the compliance level of an entity; however, it has not been studied in the context of the credit scoring literature. To address this shortcoming, in this paper we propose a six-step bio-inspired methodology for trust-based credit lending decisions by credit institutions. The proposed methodology makes use of an artificial neural network-based model to classify (potential) customers into various categories. To show the applicability and superiority of the proposed algorithm, it is applied to a credit-card dataset obtained from the UCI repository. Due to the varying spectrum of trust levels, we are able to solve the problem of binary credit lending decisions. A trust-based credit scoring approach allows financial institutions to grant credit based on the level of trust in potential customers. © Springer-Verlag 2012.
Parvin, S, Hussain, FK, Hussain, OK & Faruque, AA 2012, 'Trust-based Throughput in Cognitive Radio Networks', Procedia Computer Science, vol. 10, pp. 713-720.
View/Download from: Publisher's site
View description>>
Cognitive Radio Networks (CRNs) deal with opportunistic spectrum access in order to fully utilize scarce spectrum resources, with cognitive radio technologies developed to achieve greater utilization of the spectrum. Nowadays, Cognitive Radio (CR) is a promising concept for improving the utilization of limited radio spectrum resources for future wireless communications and mobile computing. In this paper, we propose two approaches. First, we propose a trust-aware model to authenticate the secondary users (SUs) in CRNs, which provides a reliable technique for establishing trust in CRNs. Second, we propose a trust-based throughput mechanism to measure throughput in CRNs.
Parvin, S, Hussain, FK, Hussain, OK, Han, S, Tian, B & Chang, E 2012, 'Cognitive radio network security: A survey', JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, vol. 35, no. 6, pp. 1691-1708.
View/Download from: Publisher's site
View description>>
Recent advancements in wireless communication are creating a spectrum shortage problem on a daily basis. Recently, Cognitive Radio (CR), a novel technology, has attempted to minimize this problem by dynamically using the free spectrum in wireless communications and mobile computing. Cognitive radio networks (CRNs) can be formed using cognitive radios by extending the radio link features to network layer functions. The objective of CRN architecture is to improve whole-network operation to fulfil the user's demands anytime and anywhere, through accessing CRNs in a more efficient way, rather than by just linking spectral efficiency. CRNs are more flexible, and more exposed, than other traditional radio networks; hence, there are more security threats to CRNs than to other traditional radio environments. The unique characteristics of CRNs make security more challenging. Several crucial issues have not yet been investigated in the area of security for CRNs. A typical public key infrastructure (PKI) scheme, which achieves secure routing and other purposes in typical ad hoc networks, is not enough to guarantee the security of CRNs under limited communication and computation resources. However, there has been increasing research attention on security threats caused specifically by CR techniques and the special characteristics of CR in CRNs. Therefore, this paper presents a broad survey of CRNs, their architectures and their security issues.
Parvin, S, Hussain, FK, Park, JS & Kim, DS 2012, 'A survivability model in wireless sensor networks', COMPUTERS & MATHEMATICS WITH APPLICATIONS, vol. 64, no. 12, pp. 3666-3682.
View/Download from: Publisher's site
View description>>
In this paper, we present a survivability evaluation model and analyze the performance of Wireless Sensor Networks (WSNs) under attack and key compromise. First, we present a survivability evaluation model of WSNs by representing the states of WSNs under
Raza, M, Hussain, FK & Hussain, OK 2012, 'Neural Network-Based Approach for Predicting Trust Values Based on Non-uniform Input in Mobile Applications', COMPUTER JOURNAL, vol. 55, no. 3, pp. 347-378.
View/Download from: Publisher's site
View description>>
Recently, there has been much research focus on trust and reputation modelling as one of the key strategies for the formation of successful business intelligence strategies, particularly for services in mobile applications. One of the key trust modelling activities is trust prediction. During this process, the accuracy and reliability of the predicted trust values play an important role in the making of informed business decisions. Key factors to be considered at this stage are the variability and the high levels of distortion in the input series that have to be captured when predicting the trust values at a point in time in the future. In this paper, we propose a multi-layer feed-forward artificial neural network to predict the future trust values of entities (services, agents, products, etc.) for a future point in time based on data series input. We use four different 'non-uniform' data input series and measure the accuracy of the predicted values under different experimental scenarios for benchmarking and comparison with existing approaches. Results indicate that the model is reliable in predicting trust values even in scenarios where only limited data are available for training the neural network and a high level of distortion is present in the input series. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
Song, M, Tao, D, Chen, C, Bu, J, Luo, J & Zhang, C 2012, 'Probabilistic Exposure Fusion.', IEEE Trans. Image Process., vol. 21, no. 1, pp. 341-357.
View/Download from: Publisher's site
View description>>
The luminance of a natural scene is often of high dynamic range (HDR). In this paper, we propose a new scheme to handle HDR scenes by integrating locally adaptive scene detail capture and suppressing gradient reversals introduced by the local adaptation. The proposed scheme is novel for capturing an HDR scene by using a standard dynamic range (SDR) device and synthesizing an image suitable for SDR displays. In particular, we use an SDR capture device to record scene details (i.e., the visible contrasts and the scene gradients) in a series of SDR images with different exposure levels. Each SDR image responds to a fraction of the HDR and partially records scene details. With the captured SDR image series, we first calculate the image luminance levels, which maximize the visible contrasts, and then the scene gradients embedded in these images. Next, we synthesize an SDR image by using a probabilistic model that preserves the calculated image luminance levels and suppresses reversals in the image luminance gradients. The synthesized SDR image contains far more scene detail than any of the captured SDR images. Moreover, the proposed scheme also functions as the tone mapping of an HDR image to an SDR image, and it is superior to both global and local tone mapping operators. This is because global operators fail to preserve visual details when the contrast ratio of a scene is large, whereas local operators often produce halos in the synthesized SDR image. The proposed scheme does not require any human interaction or parameter tuning for different scenes. Subjective evaluations have shown that it is preferred over a number of existing approaches. © 2011 IEEE.
Stoianoff, NP 2012, 'The Influence of the WTO over China’s Intellectual Property Regime', The Sydney Law Review, vol. 34, no. 1, pp. 65-89.
View description>>
This article commences with a brief history of China's intellectual property policy and international relations over the past 150 years. China's engagement with the western construct of intellectual property rights is strongly aligned with China's international trade relations. In particular, this article will consider the influence of the enquiries into transparency that followed China's first review after accession to the WTO and then the dispute resolution process initiated by the United States specifically on issues of intellectual property enforcement. Despite the numerous international treaties and agreements on intellectual property rights that exist and to which China acceded in the early days of the Open Door Policy period, it was the need to become a member of the WTO and with that the expectation of compliance with the prescriptive requirements found in the WTO Agreement on Trade Related Aspects of Intellectual Property Rights ("the TRIPS Agreement") that provided the greatest influence on the shaping of China's intellectual property regime today. Recent developments highlight a counterpoint in China's engagement with the TRIPS Agreement. This is indicated in China's willingness to align itself with the views of developing nations in the way that the TRIPS Agreement is interpreted and this is most evident in the recent Patent Law amendments which demonstrate China's desire to be an innovator, not a copier.
Thi, TH, Cheng, L, Zhang, J, Wang, L & Satoh, S 2012, 'Integrating local action elements for action analysis', Computer Vision and Image Understanding, vol. 116, no. 3, pp. 378-395.
View/Download from: Publisher's site
View description>>
In this paper, we propose a framework for human action analysis from video footage. A video action sequence in our perspective is a dynamic structure of sparse local spatial-temporal patches termed action elements, so the problems of action analysis in video are carried out here based on the set of local characteristics as well as the global shape of a prescribed action. We first detect a set of action elements that are the most compact entities of an action, then we extend the idea of the Implicit Shape Model to space time, in order to properly integrate the spatial and temporal properties of these action elements. In particular, we consider two different recipes to construct action elements: one uses a Sparse Bayesian Feature Classifier to choose action elements from all detected Spatial Temporal Interest Points, and is termed discriminative action elements. The other detects affine-invariant local features from the holistic Motion History Images and picks action elements according to their compactness scores, and is called generative action elements. Action elements detected either way are then used to construct a voting space based on their local feature representations as well as their global configuration constraints. Our approach is evaluated in the two main contexts of current human action analysis challenges: action retrieval and action classification. Comprehensive experimental results show that our proposed framework marginally outperforms all existing state-of-the-art techniques on a range of different datasets. © 2011 Elsevier Inc. All rights reserved.
Thi, TH, Cheng, L, Zhang, J, Wang, L & Satoh, S 2012, 'Structured learning of local features for human action classification and localization', Image and Vision Computing, vol. 30, no. 1, pp. 1-14.
View/Download from: Publisher's site
View description>>
Human action recognition is a promising yet non-trivial computer vision field with many potential applications. Current advances in bag-of-feature approaches have brought significant insights into recognizing human actions within complex context. It is, however, a common practice in the literature to consider an action as merely an orderless set of local salient features. This representation has been shown to be oversimplified, which inherently limits traditional approaches from robust deployment in real-life scenarios. In this work, we propose and show that, by taking into account the global configuration of local features, we can greatly improve recognition performance. We first introduce a novel feature selection process called the Sparse Hierarchical Bayes Filter, which selects only the most contributive features of each action type based on neighboring structure constraints. We then present the application of structured learning in human action analysis. That is, by representing a human action as a complex set of local features, we can incorporate different spatial and temporal feature constraints into the learning tasks of human action classification and localization. In particular, we tackle the problem of action localization in video using structured learning with two alternatives: one is the Dynamic Conditional Random Field from a probabilistic perspective; the other is the Structural Support Vector Machine from a max-margin point of view. We evaluate our modular classification-localization framework on various testbeds, on which our proposed framework is proven to be highly effective and robust compared with bag-of-feature methods. © 2011 Elsevier B.V. All rights reserved.
Tsakonas, A & Gabrys, B 2012, 'GRADIENT: Grammar-driven genetic programming framework for building multi-component, hierarchical predictive systems', Expert Systems with Applications, vol. 39, no. 18, pp. 13253-13266.
View/Download from: Publisher's site
Wang, C, Cao, L & Miao, B 2012, 'Optimal feature selection for sparse linear discriminant analysis and its applications in gene expression data', Computational Statistics and Data Analysis, vol. 66, pp. 140-149.
View/Download from: Publisher's site
View description>>
This work studies the theoretical rules of feature selection in linear discriminant analysis (LDA), and a new feature selection method is proposed for sparse linear discriminant analysis. An $l_1$ minimization method is used to select the important features from which the LDA will be constructed. The asymptotic results of this proposed two-stage LDA (TLDA) are studied, demonstrating that TLDA is an optimal classification rule whose convergence rate is the best compared to existing methods. The experiments on simulated and real datasets are consistent with the theoretical results and show that TLDA performs favorably in comparison with current methods. Overall, TLDA uses a lower minimum number of features or genes than other approaches to achieve a better result with a reduced misclassification rate.
Wang, C, Tong, T, Cao, L & Miao, B 2012, 'Non-parametric shrinkage mean estimation for quadratic loss functions with unknown covariance matrices', Journal of Multivariate Analysis, vol. 125, pp. 222-232.
View/Download from: Publisher's site
View description>>
In this paper, a shrinkage estimator for the population mean is proposed under known quadratic loss functions with unknown covariance matrices. The new estimator is non-parametric in the sense that it does not assume a specific parametric distribution for the data and it does not require prior information on the population covariance matrix. Analytical results on the improvement of the proposed shrinkage estimator are provided and some corresponding asymptotic properties are also derived. Finally, we demonstrate the practical improvement of the proposed method over existing methods through extensive simulation studies and real data analysis. Keywords: High-dimensional data; shrinkage estimator; large $p$ small $n$; $U$-statistic.
Wang, C, Yang, J, Miao, B & Cao, L 2012, 'On Identity Tests for High Dimensional Data Using RMT', Journal of Multivariate Analysis, vol. 118, pp. 128-137.
View/Download from: Publisher's site
View description>>
In this work, we redefine two important statistics, the CLRT test (Bai et al., Ann. Stat. 37 (2009) 3822-3840) and the LW test (Ledoit and Wolf, Ann. Stat. 30 (2002) 1081-1102), on identity tests for high dimensional data using random matrix theories. Compared with the existing CLRT and LW tests, the new tests can accommodate data which has unknown means and non-Gaussian distributions. Simulations demonstrate that the new tests have good properties in terms of size and power. What is more, even for Gaussian data, our new tests perform favorably in comparison to existing tests. Finally, we find the CLRT is more sensitive to eigenvalues less than 1, while the LW test has more advantages in relation to detecting eigenvalues larger than 1.
Wang, T, Qin, Z, Zhang, S & Zhang, C 2012, 'Cost-sensitive classification with inadequate labeled data', Information Systems, vol. 37, no. 5, pp. 508-516.
View/Download from: Publisher's site
View description>>
It is a real and challenging issue to learn cost-sensitive models from datasets with few labeled data and plentiful unlabeled data, because labeled data are sometimes very difficult, time-consuming and/or expensive to obtain. To address this issue, in this paper we propose two classification strategies for learning cost-sensitive classifiers from training datasets with both labeled and unlabeled data, based on Expectation Maximization (EM). The first method, Direct-EM, uses EM to build a semi-supervised classifier, then directly computes the optimal class label for each test example using the class probability produced by the learning model. The second method, CS-EM, modifies EM by incorporating misclassification cost into the probability estimation process. We conducted extensive experiments to evaluate efficiency, and the results show that, when using only a small number of labeled training examples, CS-EM outperforms the other competing methods on the majority of the selected UCI data sets across different cost ratios, especially when the cost ratio is high. © 2011 Elsevier Ltd. All rights reserved.
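The step of computing the optimal class label from estimated class probabilities and misclassification costs can be illustrated with a minimal expected-cost rule. The probabilities and cost matrix below are hypothetical; the paper's EM-based probability estimation is not shown.

```python
def min_cost_label(probs, cost):
    """Pick the class that minimizes expected misclassification cost.
    probs[k]: estimated P(class k | x); cost[i][j]: cost of
    predicting class i when the true class is j."""
    n = len(probs)
    expected = [sum(cost[i][j] * probs[j] for j in range(n)) for i in range(n)]
    return min(range(n), key=lambda i: expected[i])

# hypothetical two-class example with asymmetric costs
probs = [0.7, 0.3]        # classifier says class 0 is more likely
cost = [[0, 10], [1, 0]]  # missing class 1 is ten times worse
print(min_cost_label(probs, cost))
```

With asymmetric costs the rule can pick the less probable class, which is exactly what distinguishes cost-sensitive classification from plain maximum-probability labeling.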
Wei, GW & Merigó, JM 2012, 'Methods for strategic decision-making problems with immediate probabilities in intuitionistic fuzzy setting', Scientia Iranica, vol. 19, no. 6, pp. 1936-1946.
View/Download from: Publisher's site
Wu, Z, Xu, G, Yu, Z, Yi, X, Chen, E & Zhang, Y 2012, 'Executing SQL queries over encrypted character strings in the Database-As-Service model', Knowledge-Based Systems, vol. 35, pp. 332-348.
View/Download from: Publisher's site
View description>>
Rapid advances in networking technologies have prompted the emergence of the 'software as service' model for enterprise computing, which is quickly becoming a key industry. The 'database as service' model gives users the power to store, modify and retrieve data from anywhere in the world, as long as they have access to the Internet, and is thus increasingly popular in current enterprise data management systems. However, this model introduces several challenges, an essential issue being how to implement SQL queries over encrypted data efficiently. To ensure data security, this model generally encrypts sensitive data at the trusted client's site before storing it at the non-trusted database service provider's site, which, unfortunately, means that SQL queries cannot be executed over the encrypted data directly at the database service provider. In this paper, we focus only on how to query encrypted character strings efficiently. Our strategy is that, when storing character strings at the database service provider, we not only store the encrypted character strings themselves, but also generate some characteristic index values for these character strings and store them in an additional field; when querying the encrypted character strings, we first execute a coarse query over the characteristic index fields at the database service provider, in order to filter out most of the tuples unrelated to the query conditions, and then we decrypt the remaining tuples and execute a refined query over them at the client site. In our strategy, we define an n-phase reachability matrix for a character string and use it as the characteristic index values, and based on this definition, we present some theorems for splitting a SQL query into its server-side and client-side representations, partitioning the computation of a query across the client and the server and thus improving query performance. Finally, experimental resul...
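The coarse-then-refine query pattern described above can be sketched as follows. Note the stand-ins: a toy bigram set plays the role of the characteristic index (the paper uses an n-phase reachability matrix), and base64 is a reversible placeholder, not a real cipher.

```python
import base64

def encrypt(s):  # placeholder for a real cipher
    return base64.b64encode(s.encode()).decode()

def decrypt(c):
    return base64.b64decode(c.encode()).decode()

def bigrams(s):
    """Toy characteristic index: the set of 2-character substrings."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

# server-side store: (ciphertext, characteristic index field)
rows = [(encrypt(s), bigrams(s)) for s in ["alice", "bob", "malice"]]

def query_contains(sub):
    qs = bigrams(sub)
    # server: coarse filter using only the index field (no decryption)
    coarse = [c for c, idx in rows if qs <= idx]
    # client: decrypt the surviving tuples and run the refined check
    return [p for p in map(decrypt, coarse) if sub in p]

print(query_contains("lic"))  # -> ['alice', 'malice']
```

The server never sees plaintext; it only prunes candidates, so the client decrypts far fewer tuples than a full scan would require.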
Wu, Z, Xu, G, Zhang, Y, Cao, Z, Li, G & Hu, Z 2012, 'GMQL: A graphical multimedia query language', Knowledge-Based Systems, vol. 26, pp. 135-143.
View/Download from: Publisher's site
View description>>
The rapid increase of multimedia data makes multimedia query more and more important. To better satisfy users' query requirements, developing a functional multimedia query language has become a promising and interesting task. In this paper, we propose a graphical multimedia query language called GMQL, which is developed based on a semi-structured data organization model. In GMQL, we combine the advantages of graphs and texts, making the query language clear, easy to use and highly expressive. In this paper, we first present the notations and basic capabilities of GMQL through query examples. Second, we discuss GMQL query processing techniques. Last, we evaluate and analyze our multimedia query language through comparison with other existing multimedia query languages. The evaluation results show that GMQL has powerful expressiveness and is thus well suited to multimedia information retrieval. © 2011 Elsevier B.V. All rights reserved.
Wu, Z, Xu, G, Zhang, Y, Dolog, P & Lu, C 2012, 'An Improved Contextual Advertising Matching Approach based on Wikipedia Knowledge', The Computer Journal, vol. 55, no. 3, pp. 277-292.
View/Download from: Publisher's site
View description>>
The current boom of the Web is associated with the revenues originated from Web advertising. As one prevalent type of Web advertising, contextual advertising refers to the placement of the most relevant commercial textual ads within the content of a Web page, so as to provide a better user experience and thereby increase the revenues of Web site owners and an advertising platform. Therefore, in contextual advertising, the relevance of the selected ads to a Web page is essential. However, some problems, such as homonymy and polysemy, low intersection of keywords and context mismatch, can lead to the selection of irrelevant textual ads for a Web page, meaning that a simple keyword matching technique generally gives poor accuracy. To overcome these problems and thus improve the relevance of contextual ads, in this paper we propose a novel Wikipedia-based matching technique which, using selective matching strategies, selects a certain number of relevant articles from Wikipedia as an intermediate semantic reference model for matching Web pages and textual ads. We call this technique SIWI: Selective Wikipedia Matching. Instead of using the whole set of Wikipedia articles, it matches only the most relevant articles for a page (or a textual ad), resulting in an effective improvement of the overall matching performance. An experimental evaluation is conducted over a set of real textual ads, a set of Web pages from the Internet and a dataset of more than 260,000 articles from Wikipedia. The experimental results show that our method performs better than existing matching strategies, can deal with matching over the large dataset of Wikipedia articles efficiently, and achieves a satisfactory contextual advertising effect. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
Xu, J, Wu, Q, Zhang, J & Tang, Z 2012, 'Fast and Accurate Human Detection Using a Cascade of Boosted MS-LBP Features', IEEE Signal Processing Letters, vol. 19, no. 10, pp. 676-679.
View/Download from: Publisher's site
View description>>
In this letter, a new scheme for generating local binary patterns (LBP) is presented. This Modified Symmetric LBP (MS-LBP) feature takes advantage of both LBP and gradient features. It is then applied within a boosted cascade framework for human detection. By combining MS-LBP with Haar-like features in the boosted framework, the performance of detectors based on heterogeneous features is evaluated for the best trade-off between accuracy and speed. Two feature training schemes, namely the Single AdaBoost Training Scheme (SATS) and the Dual AdaBoost Training Scheme (DATS), are proposed and compared. On top of AdaBoost, two multidimensional feature projection methods are described. A comprehensive experiment is presented. Apart from obtaining higher detection accuracy, the detection speed based on DATS is 17 times faster than the HOG method. © 1994-2012 IEEE.
Xu, Y, Merigó, JM & Wang, H 2012, 'Linguistic power aggregation operators and their application to multiple attribute group decision making', Applied Mathematical Modelling, vol. 36, no. 11, pp. 5427-5444.
View/Download from: Publisher's site
Yeh, WC, Cao, L & Jin, JS 2012, 'A cellular automata hybrid quasi-random Monte Carlo simulation for estimating the one-to-all reliability of acyclic multi-state information networks', International Journal of Innovative Computing, Information and Control, vol. 8, no. 3 B, pp. 2001-2014.
View description>>
Many real-world systems (such as cellular telephones and transportation) are acyclic multi-state information networks (AMIN). These networks are composed of multi-state nodes, with different states determined by a set of nodes that receive a signal directly from these multi-state nodes, without satisfying the conservation law. Evaluating AMIN reliability arises at the design and exploitation stage of many types of technical systems. However, existing analytical methods fail to estimate AMIN reliability in a realistic time frame, even for smaller-sized AMINs. Hence, the main purpose of this article is to present a cellular automata hybrid quasi-Monte Carlo simulation (CA-HMC) by combining cellular automata (CA, to rapidly determine network states), pseudo-random sequences (PRS, to obtain the flexibility of the network) and quasi-random sequences (QRS, to improve the accuracy) to obtain a high-quality estimation of AMIN reliability and improve calculation efficiency. We use one benchmark example from well-known algorithms in the literature to show the utility and performance of the proposed CA-HMC simulation when evaluating the one-to-all AMIN reliability. © 2012 ISSN 1349-4198.
Yue, XD, Miao, DQ, Zhang, N, Cao, LB & Wu, Q 2012, 'Multiscale roughness measure for color image segmentation', Information Sciences, vol. 216, pp. 93-112.
View/Download from: Publisher's site
View description>>
Color image segmentation is an important technique in image processing systems. Highly precise segmentation with low computational complexity can be achieved through roughness measurement, which approximates the color histogram based on rough set theory. However, due to its imprecise description of neighborhood similarity, the existing roughness measure tends to over-focus on trivial homogeneous regions and is not accurate enough to measure color homogeneity. This paper aims to construct a multiscale roughness measure by simulating human vision. We apply the theories of linear scale-space and rough sets to generate the hierarchical roughness of color distribution under multiple scales. This multiscale roughness can tolerate the disturbance of trivial regions and can also provide a multilevel homogeneity representation in vision, which therefore produces precise and intuitive segmentation results. Furthermore, we propose a roughness entropy for scale selection. The optimal scale for segmentation is decided by the entropy variation. The proposed method shows encouraging performance in experiments based on the Berkeley segmentation database. © 2012 Elsevier Inc. All rights reserved.
Zhang, S, Chen, F, Wu, X, Zhang, C & Wang, R 2012, 'Mining bridging rules between conceptual clusters', Applied Intelligence, vol. 36, no. 1, pp. 108-118.
View/Download from: Publisher's site
View description>>
Bridging rules take the antecedent and action from different conceptual clusters. They are distinguished from association rules (frequent itemsets) because (1) they can be generated by the infrequent itemsets that are pruned in association rule mining, and (2) they are measured by their importance including the distance between two conceptual clusters, whereas frequent itemsets are measured only by their support. In this paper, we first design two algorithms for mining bridging rules between clusters, and then propose two non-linear metrics to measure their interestingness. We evaluate these algorithms experimentally and demonstrate that our approach is promising. © 2010 Springer Science+Business Media, LLC.
Zhao, L, Hoi, SCH, Wong, L, Hamp, T & Li, J 2012, 'Structural and Functional Analysis of Multi-Interface Domains', PLoS ONE, vol. 7, no. 12, pp. e50821-e50821.
View/Download from: Publisher's site
View description>>
A multi-interface domain is a domain that can shape multiple and distinctive binding sites to contact with many other domains, forming a hub in domain-domain interaction networks. The functions played by the multiple interfaces are usually different, but there is no strict bijection between the functions and interfaces, as some subsets of the interfaces play the same function. This work applies graph theory and algorithms to discover fingerprints for the multiple interfaces of a domain and to establish associations between the interfaces and functions, based on a huge set of multi-interface proteins from PDB. We found that about 40% of proteins have the multi-interface property; however, the involved multi-interface domains account for only a tiny fraction (1.8%) of the total number of domains. The interfaces of these domains are distinguishable in terms of their fingerprints, indicating the functional specificity of the multiple interfaces in a domain. Furthermore, we observed that both cooperative and distinctive structural patterns, which will be useful for protein engineering, exist in the multiple interfaces of a domain.
Zhao, L, Wong, L, Lu, L, Hoi, SCH & Li, J 2012, 'B-cell epitope prediction through a graph model', BMC Bioinformatics, vol. 13, no. suppl 17, pp. 1-12.
View/Download from: Publisher's site
View description>>
Background: Prediction of B-cell epitopes from antigens is useful for understanding the immune basis of antibody-antigen recognition, and is helpful in vaccine design and drug development. Tremendous efforts have been devoted to this long-studied problem; however, existing methods have at least two common limitations. One is that they only favor prediction of those epitopes with protrusive conformations, but show poor performance in dealing with planar epitopes. The other is that they predict all of the antigenic residues of an antigen as belonging to one single epitope even when multiple non-overlapping epitopes of an antigen exist. Results: In this paper, we propose to divide an antigen surface graph into subgraphs by using a Markov Clustering algorithm, and then construct a classifier to distinguish these subgraphs as epitope or non-epitope subgraphs. This classifier is then used to predict epitopes for a test antigen. On a large data set comprising 92 antigen-antibody PDB complexes, our method significantly outperforms the state-of-the-art epitope prediction methods, achieving a 24.7% higher averaged f-score than the best existing models. In particular, our method can successfully identify those epitopes with a non-planarity that is too small to be addressed by the other models. Our method can also detect multiple epitopes whenever they exist.
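The Markov Clustering step used above to divide the antigen surface graph into subgraphs can be illustrated with a minimal sketch. This is a generic textbook-style MCL (expansion by matrix squaring, inflation by elementwise powering), not the authors' implementation, and the two-triangle graph is a made-up toy:

```python
import numpy as np

def mcl(adjacency, inflation=2.0, iters=20):
    """Toy Markov Clustering: alternate expansion and inflation steps."""
    M = adjacency + np.eye(len(adjacency))   # add self-loops
    M = M / M.sum(axis=0)                    # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, 2)     # expansion: spread random-walk flow
        M = M ** inflation                   # inflation: strengthen strong flows
        M = M / M.sum(axis=0)                # renormalise columns
    return M

# Two disconnected triangles stand in for two candidate surface subgraphs.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
flow = mcl(A)
```

Clusters are read off from the converged flow matrix; flow never crosses between the two components, so they end up as separate subgraphs.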
Zhou, L-G, Chen, H-Y, Merigó, JM & Gil-Lafuente, AM 2012, 'Uncertain generalized aggregation operators', Expert Systems with Applications, vol. 39, no. 1, pp. 1105-1117.
View/Download from: Publisher's site
Zliobaite, I, Bifet, A, Gaber, M, Gabrys, B, Gama, J, Minku, L & Musial, K 2012, 'Next challenges for adaptive learning systems', ACM SIGKDD Explorations Newsletter, vol. 14, no. 1, pp. 48-55.
View/Download from: Publisher's site
View description>>
Learning from evolving streaming data has become a 'hot' research topic in the last decade and many adaptive learning algorithms have been developed. This research was stimulated by rapidly growing amounts of industrial, transactional, sensor and other business data that arrives in real time and needs to be mined in real time. Under such circumstances, constant manual adjustment of models is inefficient and, with increasing amounts of data, is becoming infeasible. Nevertheless, adaptive learning models are still rarely employed in business applications in practice. In the light of rapidly growing structurally rich 'big data', a new generation of parallel computing solutions and cloud computing services, as well as recent advances in portable computing devices, this article aims to identify the current key research directions to be taken to bring adaptive learning closer to application needs. We identify six forthcoming challenges in designing and building adaptive learning (prediction) systems: making adaptive systems scalable, dealing with realistic data, improving usability and trust, integrating expert knowledge, taking into account various application needs, and moving from adaptive algorithms towards adaptive tools. Those challenges are critical for evolving stream settings, as the process of model building needs to be fully automated and continuous.
Apeh, E, Žliobaite, I, Pechenizkiy, M & Gabrys, B 2012, 'Predicting multi-class customer profiles based on transactions: A case study in food sales', Res. and Dev. in Intelligent Syst. XXIX: Incorporating Applications and Innovations in Intel. Sys. XX - AI 2012, 32nd SGAI Int. Conf. on Innovative Techniques and Applications of Artificial Intel., pp. 213-218.
View/Download from: Publisher's site
View description>>
Predicting the class of customer profiles is a key task in marketing, which enables businesses to approach customers in the right way to satisfy their evolving needs. However, due to costs, privacy and/or data protection, only the business's own transactional data is typically available for constructing customer profiles. We present a new approach that is designed to efficiently and accurately handle the multi-class classification of customer profiles built using sparse and skewed transactional data. Our approach first bins the customer profiles on the basis of the number of items transacted. The discovered bins are then partitioned, and prototypes within each bin are selected to build the multi-class classifier models. The results obtained from using four multi-class classifiers on real-world transactional data consistently show the critical numbers of items at which the predictive performance of customer profiles can be substantially improved. © Springer-Verlag London 2012.
Bargi, A, Da Xu, RY & Piccardi, M 2012, 'An online HDP-HMM for joint action segmentation and classification in motion capture data', 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), IEEE, Providence RI, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
Since its inception, action recognition research has mainly focused on recognizing actions from closed, predefined sets of classes. Conversely, the problem of recognizing actions from open, possibly incremental sets of classes is still largely unexplored. In this paper, we propose a novel online method based on the 'sticky' hierarchical Dirichlet process and the hidden Markov model [11, 5]. This approach, labelled as the online HDP-HMM, provides joint segmentation and classification of actions while a) processing the data in an online, recursive manner, b) discovering new classes as they occur, and c) adjusting its parameters over the streaming data. In a set of experiments, we have applied the online HDP-HMM to recognize actions from motion capture data from the TUM kitchen dataset, a challenging dataset of manipulation actions in a kitchen [12]. The results show significant accuracy in action classification, time segmentation and determination of the number of action classes.
Bródka, P, Skibicki, K, Kazienko, P & Musiał, K 2011, 'A degree centrality in multi-layered social network', Proceedings of the 2011 International Conference on Computational Aspects of Social Networks, CASoN'11, pp. 237-242.
View/Download from: Publisher's site
View description>>
Multi-layered social networks reflect complex relationships existing in modern interconnected IT systems. In such a network, each pair of nodes may be linked by many edges that correspond to different communication or collaboration user activities. A multi-layered degree centrality for multi-layered social networks is presented in the paper. Experimental studies were carried out on data collected from a real Web 2.0 site. The multi-layered social network extracted from this data consists of ten distinct layers, and the network analysis was performed for different degree centrality measures.
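The paper studies several multi-layered degree centralities; the sketch below shows just one simple cross-layer aggregation (averaging a node's degree over the layers) as an illustration of the idea, not the authors' exact definitions. The edge list and layer names are made up:

```python
from collections import defaultdict

def layer_degrees(edges):
    """Per-layer degree of every node; edges are (u, v, layer) triples."""
    deg = defaultdict(lambda: defaultdict(int))
    layers = set()
    for u, v, layer in edges:
        layers.add(layer)
        deg[u][layer] += 1
        deg[v][layer] += 1
    return deg, layers

def multilayer_degree(edges):
    """One simple variant: average a node's degree over all layers."""
    deg, layers = layer_degrees(edges)
    return {node: sum(by_layer.values()) / len(layers)
            for node, by_layer in deg.items()}

# Hypothetical two-layer network: email edges and comment edges.
edges = [("a", "b", "email"), ("a", "c", "email"),
         ("a", "b", "comment"), ("b", "c", "comment")]
centrality = multilayer_degree(edges)
```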
Budka, M, Musial, K & Juszczyszyn, K 2012, 'Predicting the Evolution of Social Networks: Optimal Time Window Size for Increased Accuracy', 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing, 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT), IEEE, pp. 21-30.
View/Download from: Publisher's site
View description>>
This study investigates the data preparation process for predictive modelling of the evolution of complex networked systems, using an e-mail based social network as an example. In particular, we focus on the selection of the optimal time window size for building a time series of network snapshots, which forms the input of the chosen predictive models. We formulate this issue as a constrained multi-objective optimization problem, where the constraints are specific to the particular application and predictive algorithm used. The optimization process is guided by the proposed Window Incoherence Measures, defined as averaged Jensen-Shannon divergences between distributions of a range of network characteristics for the individual time windows and the network covering the whole considered period of time. The experiments demonstrate that the informed choice of window size according to the proposed approach allows us to boost the prediction accuracy of all examined prediction algorithms, and can also be used for optimally defining the prediction problems if some flexibility in their definition is allowed. © 2012 IEEE.
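The core quantity behind the incoherence measures above, the Jensen-Shannon divergence between a window's distribution of some network characteristic and the whole-period distribution, can be sketched in a few lines (a generic base-2 implementation; the example distributions are hypothetical):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence (base 2), with the 0 * log 0 = 0 convention."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: a characteristic's distribution in one window vs. whole period.
window = [0.7, 0.2, 0.1]
whole = [0.5, 0.3, 0.2]
incoherence = jensen_shannon(window, whole)
```

With base-2 logarithms the divergence is symmetric and bounded in [0, 1], which makes it convenient to average across characteristics and windows.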
Chen, X, Li, L, Xiao, H, Xu, G, Yang, Z & Kitsuregawa, M 2012, 'Recommending related microblogs: A comparison between topic and WordNet based approaches', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Toronto, pp. 2417-2418.
View description>>
Computing similarity between short microblogs is an important step in microblog recommendation. In this paper, we investigate a topic-based approach and a WordNet-based approach to estimate similarity scores between microblogs and recommend the top related ones to users. An empirical study is conducted to compare their recommendation effectiveness using two evaluation measures. The results show that the WordNet-based approach has relatively higher precision than the topic-based approach on a dataset of 548 tweets. In addition, the Kendall tau distance between the two lists recommended by the WordNet and topic approaches is calculated. Its average over all 548 list pairs tells us that the two approaches disagree considerably in the ranking of related tweets. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
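The Kendall tau distance used above to compare the two recommendation lists counts the item pairs that the two rankings order differently. A minimal sketch, assuming both lists rank the same set of items (a simplification of the general measure):

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Number of item pairs ordered differently by the two rankings."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(1 for x, y in combinations(rank_a, 2)
               if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)
```

A distance of 0 means the two approaches agree perfectly; the maximum, n(n-1)/2, means one ranking is the reverse of the other.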
Dong, H, Hussain, FK & Chang, E 2012, 'Ontology-Learning-Based Focused Crawling for Online Service Advertising Information Discovery and Classification', Proceedings of the 10th International Conference on Service-Oriented Computing, 10th International Conference on Service-Oriented Computing, Springer Berlin Heidelberg, Shanghai, China, pp. 591-598.
View/Download from: Publisher's site
Esfijani, A, Hussain, FK & Chang, E 2012, 'An Approach to University Social Responsibility Ontology Development Through Text Analyses', 2012 5TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTIONS (HSI 2012), International Conference on Human System Interactions (HSI), IEEE, Perth, WA, pp. 1-7.
View/Download from: Publisher's site
View description>>
The main purpose of this paper is to propose a content analysis approach for developing an ontology of university social responsibility (USR). The proposed approach comprises four main phases, in which two content analysis software tools were utilized to extract the main USR components and to identify the domain of this concept. To achieve the goal, the existing body of knowledge of USR definitions and specifications - using a variety of terms - has been considered to identify the main notions of USR and their relationships. The developed ontology can be applied to define a formal, explicit description of the USR concept and to construct a more reliable basis for measurement purposes. © 2012 IEEE.
Fan, X & Cao, L 2012, 'A theoretical framework of the graph shift algorithm', Proceedings of the National Conference on Artificial Intelligence, ACM, Toronto, Ontario, Canada, pp. 2419-2420.
View description>>
Since no theoretical foundations for proving the convergence of the Graph Shift Algorithm have been reported, we provide a generic framework consisting of three key GS components that fits Zangwill's convergence theorem. We show that the sequence set generated by the GS procedures always terminates at a local maximum or, at worst, contains a subsequence that converges to a local maximum of the similarity measure function. Moreover, a theoretical framework is proposed to extend our proof to a more general case. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
Fan, X, Zhu, L, Cao, L, Cui, X & Ong, Y-S 2012, 'Maximum margin clustering on evolutionary data', Proceedings of the 21st ACM international conference on Information and knowledge management, CIKM'12: 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, pp. 625-634.
View/Download from: Publisher's site
View description>>
Evolutionary data, such as topic-changing blogs and evolving trading behaviors in capital markets, is widely seen in business and social applications. The time factor and intrinsic change embedded in evolutionary data greatly challenge evolutionary clustering. To incorporate the time factor, existing methods mainly regard the evolutionary clustering problem as a linear combination of snapshot cost and temporal cost, and reflect the time factor through the temporal cost. Although promising results have been obtained, these methods still face accuracy and scalability challenges. This paper proposes a novel evolutionary clustering approach, evolutionary maximum margin clustering (e-MMC), to cluster large-scale evolutionary data from the maximum margin perspective. e-MMC incorporates two frameworks: Data Integration, from the data-changing perspective, and Model Integration, corresponding to model adjustment, to tackle the time factor and change, with an adaptive label allocation mechanism. Three e-MMC clustering algorithms are proposed based on the two frameworks. Extensive experiments are performed on synthetic data, UCI data and real-world blog data, which confirm that e-MMC outperforms the state-of-the-art clustering algorithms in terms of accuracy, computational cost and scalability. It shows that e-MMC is particularly suitable for clustering large-scale evolving data. © 2012 ACM.
Fang, M, Zhu, X & Zhang, C 2012, 'Active Learning from Oracle with Knowledge Blind Spot', Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012, pp. 2421-2422.
View description>>
Active learning traditionally assumes that an oracle is capable of providing labeling information for each query instance. This paper formulates a new research problem which allows an oracle to admit that he/she is incapable of labeling some query instances, or to simply answer "I don't know the label". We define a unified objective function to ensure that each query instance submitted to the oracle is the one most needed for labeling and one that the oracle also has the knowledge to label. Experiments based on different types of knowledge blind spot (KBS) models demonstrate the effectiveness of the proposed design.
Fu, B, Wang, Z, Pan, R, Xu, G & Dolog, P 2012, 'Learning Tree Structure of Label Dependency for Multi-label Learning', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer Berlin Heidelberg, Kuala Lumpur, Malaysia, pp. 159-170.
View/Download from: Publisher's site
View description>>
There always exists some kind of label dependency in multi-label data. Learning and utilizing those dependencies can further improve learning performance. Therefore, an approach for multi-label learning is proposed in this paper, which first quantifies the dependencies of pairwise labels and then builds a tree structure over the labels to describe them. The approach can thus find potentially strong label dependencies and produce more generalized dependent relationships. The experimental results validate that, compared with other state-of-the-art algorithms, the method is not only a competitive alternative but also shows notably better performance after ensemble learning. © 2012 Springer-Verlag.
Ghous, H, Kennedy, PJ, Ho, N & Catchpoole, DR 2012, 'Functional visualisation of genes using singular value decomposition', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, Australian Computer Society, Sydney, Australia, pp. 53-59.
View description>>
Progress in understanding core pathways and processes of cancer requires thorough analysis of many coding regions of the genome. New insights are hampered due to the lack of tools to make sense of large lists of genes identified using high throughput technology. Data mining, particularly visualisation that finds relationships between genes and the Gene Ontology (GO), has the potential to assist in functional understanding. This paper addresses the question of how well GO annotations can help in functional understanding of genes. We augment genes with associated GO terms and visualise with Singular Value Decomposition (SVD). Meaning of derived components is further interpreted using correlations to GO terms. The results demonstrate that SVD visualisation of GO-augmented genes matches the biological understanding expected in the simulated data and presents understanding of childhood cancer genes that aligns with published results.
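The SVD projection of GO-augmented genes described above can be sketched as follows. The gene names and the binary gene-by-GO-term annotation matrix are hypothetical toy data; the projection onto the first two singular components is the standard construction, not the authors' full pipeline:

```python
import numpy as np

# Rows: genes; columns: binary GO-term annotations (made-up toy matrix).
genes = ["g1", "g2", "g3", "g4"]
annotations = np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 1, 1],
                        [0, 1, 1, 1]], dtype=float)

# SVD of the annotation matrix; project genes onto the first two components.
U, s, Vt = np.linalg.svd(annotations, full_matrices=False)
coords = U[:, :2] * s[:2]   # 2-D coordinates, one row per gene, for plotting
```

Genes with identical annotation profiles land on the same point, so functionally similar genes cluster together in the 2-D view; correlating the columns of `Vt` with GO terms then gives the components an interpretation.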
Hasan, MA, Xu, M, He, X & Chen, L 2012, 'Shot Classification Using Domain Specific Features for Movie Management', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on DASFAA, Springer Berlin Heidelberg, Busan, South Korea, pp. 314-318.
View/Download from: Publisher's site
View description>>
Among many video types, movie content indexing and retrieval is a significantly challenging task because of the wide variety of shooting techniques and the broad range of genres. A movie consists of a series of video shots. Managing a movie at the shot level provides a feasible way for movie understanding and summarization. Consequently, effective shot classification is greatly desired for advanced movie management. In this demo, we explore novel domain-specific features for effective shot classification. Experimental results show that the proposed method classifies movie shots from a wide range of movie genres with improved accuracy compared to existing work. © 2012 Springer-Verlag.
Hu, L, Cao, J, Xu, G & Gu, Z 2012, 'Latent informative links detection', Frontiers in Artificial Intelligence and Applications, 16th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, IOS Press, San Sebastian, Spain, pp. 1233-1242.
View/Download from: Publisher's site
View description>>
Sometimes, explicit relationships between entities do not provide sufficient information or can be unavailable in the real world. Unseen latent relationships may be more informative than explicit relationships. We therefore provide a method for constructing latent informative links between entities using their common features, where entities are regarded as vertices on a graph. First, we employ a hierarchical nonparametric model to infer shared latent features for entities. Then, we define a filter function based on information theory to extract significant features and control the density of links. Finally, a couple of stochastic interaction processes are introduced to simulate dynamics on the networks so that link strength can be retrieved from statistics in a natural way. In experiments, we evaluate the usage of the filter function. The results of two examples based on mixture networks show how our method is capable of providing latent informative relationships in comparison to explicit relationships. © 2012 The authors and IOS Press. All rights reserved.
Juszczyszyn, K, Gonczarek, A, Tomczak, JM, Musial, K & Budka, M 2012, 'A Probabilistic Approach to Structural Change Prediction in Evolving Social Networks', 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012), IEEE, Kadir Has Univ, Istanbul, TURKEY, pp. 996-1001.
View/Download from: Publisher's site
Li, B, Zhu, X, Chi, L & Zhang, C 2012, 'Nested Subtree Hash Kernels for Large-Scale Graph Classification over Streams', 2012 IEEE 12th International Conference on Data Mining, 2012 IEEE 12th International Conference on Data Mining (ICDM), IEEE, Brussels, Belgium, pp. 399-408.
View/Download from: Publisher's site
View description>>
Most studies on graph classification focus on designing fast and effective kernels. Several fast subtree kernels have achieved a linear time-complexity w.r.t. the number of edges under the condition that a common feature space (e.g., a subtree pattern list) is needed to represent all graphs. This will be infeasible when graphs are presented in a stream with rapidly emerging subtree patterns. In this case, computing a kernel matrix for graphs over the entire stream is difficult, since the graphs in the expired chunks cannot be projected onto the unlimitedly expanding feature space again. This leads to serious trouble for graph classification over streams: different portions of graphs have different feature spaces. In this paper, we aim to enable large-scale graph classification over streams using the classical ensemble learning framework, which requires the data in different chunks to be in the same feature space. To this end, we propose a Nested Subtree Hashing (NSH) algorithm to recursively project the multi-resolution subtree patterns of different chunks onto a set of common low-dimensional feature spaces. We theoretically analyze the derived NSH kernel and obtain a number of favorable properties: 1) The NSH kernel is an unbiased and highly concentrated estimator of the fast subtree kernel. 2) The bound of the convergence rate tends to be tighter as the NSH algorithm steps into a higher resolution. 3) The NSH kernel is robust in tolerating concept drift between chunks over a stream. We also empirically test the NSH kernel on both a large-scale synthetic graph data set and a real-world chemical compounds data set for anticancer activity prediction. The experimental results validate that the NSH kernel is indeed efficient and robust for graph classification over streams. © 2012 IEEE.
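The core idea behind hashing subtree patterns onto a common low-dimensional space can be sketched generically. This is plain one-level feature hashing over a bag of pattern strings, not the authors' nested multi-resolution scheme, and the pattern strings are invented examples:

```python
def hash_features(pattern_counts, dim=16):
    """Project a variable-size bag of subtree patterns onto a fixed space."""
    vec = [0] * dim
    for pattern, count in pattern_counts.items():
        vec[hash(pattern) % dim] += count
    return vec

# Graphs from different stream chunks, with different subtree vocabularies,
# now share one 16-dimensional feature space and can feed one ensemble.
g1 = hash_features({"A(B)": 2, "B(C)": 1})
g2 = hash_features({"C(D,E)": 4})
```

Because every chunk hashes into the same `dim` buckets, models trained on expired chunks can still score graphs from later chunks, which is exactly the property ensemble learning over streams needs.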
Liang, G & Zhang, C 2012, 'A Comparative Study of Sampling Methods and Algorithms for Imbalanced Time Series Classification', Lecture Notes in Computer Science, Australasian Joint Conference on Artificial Intelligence, Springer Berlin Heidelberg, Sydney, pp. 637-648.
View/Download from: Publisher's site
View description>>
Mining time series data and imbalanced data are two of the ten challenging problems in data mining research. Imbalanced time series classification (ITSC) involves both of these challenging problems, which arise in many real-world applications. In existing research, the structure-preserving over-sampling (SPO) method has been proposed for solving ITSC problems. It is claimed by its authors to achieve better performance than other over-sampling and state-of-the-art methods in time series classification (TSC). However, it is unclear whether an under-sampling method with various learning algorithms is more effective than over-sampling methods such as SPO for ITSC, because research has shown that under-sampling methods are more effective and efficient than over-sampling methods. We propose a comparative study between an under-sampling method with various learning algorithms and over-sampling methods such as SPO. Statistical tests, the Friedman test and a post-hoc test, are applied to determine whether there is a statistically significant difference between methods. The experimental results demonstrate that the under-sampling technique with KNN is the most effective method and can achieve results that are superior to the existing complicated SPO method for ITSC.
Liang, G & Zhang, C 2012, 'An efficient and simple under-sampling technique for imbalanced time series classification', ACM International Conference Proceeding Series, CIKM 2012, ACM, Maui, Hawaii, pp. 2339-2342.
View/Download from: Publisher's site
View description>>
Imbalanced time series classification (TSC) involving many real-world applications has increasingly captured the attention of researchers. Previous work has proposed an intelligent structure-preserving over-sampling method (SPO), which the authors claimed achieves better performance than other existing over-sampling and state-of-the-art methods in TSC. The main disadvantage of over-sampling methods is that they significantly increase the computational cost of training a classification model due to the addition of new minority-class instances to balance data-sets with high-dimensional features. These challenging issues have motivated us to find a simple and efficient solution for imbalanced TSC. Statistical tests are applied to validate our conclusions. The experimental results demonstrate that the proposed simple random under-sampling technique with SVM is efficient and can achieve results that compare favorably with the existing complicated SPO method for imbalanced TSC. © 2012 ACM.
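The random under-sampling step described in this abstract is simple to illustrate. The following sketch (numpy only; an illustrative simplification, not the authors' code, and the SVM classification step is omitted) balances a toy imbalanced dataset by randomly dropping majority-class instances:

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance a binary dataset by randomly dropping majority-class
    instances until every class has as many instances as the smallest."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        # sample without replacement down to the minority-class size
        keep.append(rng.choice(idx, size=n_min, replace=False))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

# toy imbalanced set: 90 majority vs 10 minority instances
X = np.arange(100).reshape(100, 1).astype(float)
y = np.array([0] * 90 + [1] * 10)
Xb, yb = random_undersample(X, y)
print(len(yb), int((yb == 0).sum()), int((yb == 1).sum()))  # 20 10 10
```

Unlike over-sampling, the balanced set is smaller than the original, which is what keeps training cost down.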
Zhu, L, Cao, L & Yang, J 2012, 'Multiobjective evolutionary algorithm-based soft subspace clustering', 2012 IEEE Congress on Evolutionary Computation, 2012 IEEE Congress on Evolutionary Computation (CEC), IEEE, Brisbane, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
In this paper, a multiobjective evolutionary algorithm-based soft subspace clustering method, MOSSC, is proposed to simultaneously optimize the weighting within-cluster compactness and weighting between-cluster separation incorporated within two different clustering validity criteria. The main advantage of MOSSC lies in the fact that it effectively integrates the merits of soft subspace clustering and the good properties of the multiobjective optimization-based approach for fuzzy clustering. This makes it possible to avoid becoming trapped in local minima and thus obtain more stable clustering results. Substantial experimental results on both synthetic and real data sets demonstrate that MOSSC is generally effective in subspace clustering and can achieve superior performance over existing state-of-the-art soft subspace clustering algorithms.
Linares-Mustarós, S, Merigó, JM & Ferrer-Comalat, JC 2012, 'A Method for Uncertain Sales Forecast by Using Triangular Fuzzy Numbers', Modeling and Simulation in Engineering, Economics, and Management, MS 2012, International Conference of Modeling and Simulation in Engineering, Economics, and Management, Springer Berlin Heidelberg, New Rochelle, NY, pp. 98-113.
View/Download from: Publisher's site
Liu, L, Fan, D, Liu, M, Xu, G, Chen, S, Zhou, Y, Chen, X, Wang, Q & Wei, Y 2012, 'A MapReduce-Based Parallel Clustering Algorithm for Large Protein-Protein Interaction Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Advanced Data Mining and Applications, Springer Berlin Heidelberg, Nanjing, China, pp. 138-148.
View/Download from: Publisher's site
View description>>
Clustering proteins or identifying functionally related proteins in Protein-Protein Interaction (PPI) networks is one of the most computation-intensive problems in the proteomic community. Most research has focused on improving the accuracy of the clustering algorithms. However, the high computation cost of these clustering algorithms, such as Girvan and Newman's clustering algorithm, has been an obstacle to their use on large-scale PPI networks. In this paper, we propose an algorithm, called Clustering-MR, to address the problem. Our solution can effectively parallelize Girvan and Newman's clustering algorithm based on edge-betweenness using MapReduce. We evaluated the performance of our Clustering-MR algorithm in a cloud environment with different sizes of testing datasets and different numbers of worker nodes. The experimental results show that our Clustering-MR algorithm can achieve high performance for large-scale PPI networks with more than 1000 proteins or 5000 interactions. © Springer-Verlag 2012.
Long, G, Chen, L, Zhu, X & Zhang, C 2012, 'TCSST', Proceedings of the 21st ACM international conference on Information and knowledge management, CIKM'12: 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, pp. 764-772.
View/Download from: Publisher's site
View description>>
Short & sparse text is becoming more prevalent on the web, such as search snippets, micro-blogs and product reviews. Accurately classifying short & sparse text has emerged as an important yet challenging task. Existing work has considered utilizing external data (e.g. Wikipedia) to alleviate data sparseness, by appending topics detected from external data as new features. However, training a classifier on features concatenated from different spaces is not easy considering the features have different physical meanings and different significance to the classification task. Moreover, it exacerbates the 'curse of dimensionality' problem. In this study, we propose a transfer classification method, TCSST, to exploit the external data to tackle the data sparsity issue. The transfer classifier will be learned in the original feature space. Considering that the labels of the external data may not be readily available or sufficient, TCSST further exploits the unlabeled external data to aid the transfer classification. We develop novel strategies to allow TCSST to iteratively select high-quality unlabeled external data to help with the classification. We evaluate the performance of TCSST on both benchmark as well as real-world data sets. Our experimental results demonstrate that the proposed method is effective in classifying very short & sparse text, consistently outperforming existing and baseline methods. © 2012 ACM.
Memon, T, Lu, J & Hussain, FK 2012, 'Semantic De-biased Associations (SDA) Model to Improve Ill-Structured Decision Support', Neural Information Processing, ICONIP 2012, Part II, International Conference on Neural Information Processing, Springer Verlag, Doha, Qatar, pp. 483-490.
View/Download from: Publisher's site
Meng, Q & Kennedy, PJ 2012, 'Determining the Number of Clusters in Co-authorship Networks Using Social Network Theory', 2012 Second International Conference on Cloud and Green Computing, 2012 International Conference on Cloud and Green Computing (CGC), IEEE, Xiangtan, Hunan, China, pp. 337-343.
View/Download from: Publisher's site
View description>>
Spectral clustering is a modern data clustering methodology with many notable advantages. However, this method has a weakness in that it requires researchers to specify a priori the number of clusters. In most cases, it is a challenge to know the number of clusters accurately. Here, we propose a novel way to solve this problem by drawing on the concept of group leaders and members from social network theory. From the perspective of social networks, groups are organized by leaders, and this can provide a hint to finding the number of clusters in social networks by identifying group leaders. However, due to the fact that a group can have more than one leader, we also propose an algorithm to combine leaders from the same group. The number of leaders after the combination is expected to be the number of clusters in a network. We validate this proposed approach by using spectral clustering to cluster data comprising the co-authorship network from the University of Technology, Sydney (UTS). The experimental results show that our proposed method is effective in determining the number of clusters and can facilitate spectral clustering to achieve better clusters compared with other methods of calculating the number of clusters.
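The leader-counting idea in this abstract can be sketched in a few lines. The following toy example is a heavy simplification under assumed rules (a "leader" is a node whose degree is at least that of every neighbour, and directly connected leaders are merged), not the authors' actual algorithm:

```python
import numpy as np

def estimate_num_clusters(A):
    """Estimate the number of groups in an undirected network by
    counting 'leaders' (nodes whose degree >= every neighbour's degree)
    and merging leaders that are directly connected."""
    deg = A.sum(axis=1)
    n = len(deg)
    leaders = [i for i in range(n)
               if all(deg[i] >= deg[j] for j in range(n) if A[i, j])]
    merged = []
    for i in leaders:
        # skip a leader already adjacent to an accepted leader
        if not any(A[i, j] for j in merged):
            merged.append(i)
    return len(merged)

# two hub-centred groups joined by a single bridge edge -> 2 groups
A = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (3, 4), (3, 5), (2, 4)]:
    A[a, b] = A[b, a] = 1
print(estimate_num_clusters(A))  # 2
```

The estimate would then be handed to spectral clustering as its cluster-count parameter.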
Meng, Q & Kennedy, PJ 2012, 'Using network evolution theory and singular value decomposition method to improve accuracy of link prediction in social networks', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, Australian Computer Society, Sydney, pp. 175-181.
View description>>
Link prediction in large networks, especially social networks, has received significant recent attention. Although there are many papers contributing methods for link prediction, the accuracy of most predictors is generally low as they treat all nodes equally. We propose an effective approach to identifying the level of activity of nodes in networks by observing their behaviour during network evolution. It is clear that nodes that have been active previously contribute more to the changes in a network than stable nodes, which have low activity. We apply truncated singular value decomposition (SVD) to exclude the interference of stable nodes by treating them as noise in our dataset. Finally, in order to test the effectiveness of our proposed method, we use co-authorship networks from an Australian university between 2006 and 2011 as an experimental dataset. The results show that our proposed method achieves higher accuracy in link prediction than previous methods, especially in predicting new links.
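The truncated-SVD step used here for denoising is a standard technique and easy to illustrate. This sketch (numpy only; the toy adjacency matrix and rank choice are illustrative assumptions, not the paper's data) reconstructs an adjacency matrix at low rank, so that entries for absent edges act as link-prediction scores:

```python
import numpy as np

def truncated_svd_scores(A, k):
    """Rank-k reconstruction of an adjacency matrix. Directions beyond
    the top k singular values are discarded as noise; large entries at
    positions with no current edge suggest likely future links."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# toy co-authorship adjacency matrix (undirected, 4 authors in a chain)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = truncated_svd_scores(A, k=2)
print(S.shape)  # (4, 4)
```

Ranking the off-diagonal entries of S at positions where A is zero gives the predicted new links.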
Merigó, JM 2012, 'Decision making in complex environments with generalized aggregation operators', 2012 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), 2012 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), IEEE, New York, NY, pp. 113-119.
View/Download from: Publisher's site
Merigó, JM 2012, 'Decision Making in Complex Environments with Uncertain Generalized Unified Aggregation Operators', Uncertainty Modeling in Knowledge Engineering and Decision Making, 10th International Conference on Fuzzy Logic and Intelligent Technologies in Nuclear Science (FLINS), World Scientific, Istanbul, Turkey, pp. 357-862.
View/Download from: Publisher's site
Merigó, JM 2012, 'Measuring Errors with the OWA Operator', Modeling and Simulation in Engineering, Economics, and Management, MS 2012, International Conference of Modeling and Simulation in Engineering, Economics, and Management, Springer Berlin Heidelberg, New Rochelle, NY, pp. 24-33.
View/Download from: Publisher's site
Merigó, JM & Casanovas, M 2012, 'Linguistic decision making with probabilistic information and induced aggregation operators', 2012 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), 2012 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), IEEE, New York, NY, pp. 25-31.
View/Download from: Publisher's site
Merigó, JM & Gil-Lafuente, AM 2012, 'Complex Group Decision Making under Risk and Uncertainty', Studies in Fuzziness and Soft Computing, Springer Berlin Heidelberg, pp. 81-93.
View/Download from: Publisher's site
View description>>
We develop a new method for group decision making under risk and uncertain environments. We introduce the uncertain induced generalized probabilistic ordered weighted averaging weighted average (UIGPOWAWA) operator. It is an aggregation operator that unifies the probabilistic aggregation, the weighted average and the ordered weighted average in the same formulation and considering the degree of importance that each concept has in the analysis. It also deals with uncertain environments where the information is imprecise and can be assessed with interval numbers. Moreover, it deals with complex attitudinal characters represented with order inducing variables and generalizes the aggregation with generalized means. We study some of its main properties and develop an application in a group decision making problem concerning the selection of the optimal strategies. © 2012 Springer-Verlag Berlin Heidelberg.
Merigó, JM & Xu, Y 2012, 'Induced and Heavy Aggregation Operators', Uncertainty Modeling in Knowledge Engineering and Decision Making, 10th International Conference on Fuzzy Logic and Intelligent Technologies in Nuclear Science (FLINS), World Scientific, Istanbul, Turkey, pp. 812-817.
View/Download from: Publisher's site
Moemeng, C, Wang, C & Cao, L 2012, 'Obtaining an Optimal MAS Configuration for Agent-Enhanced Mining Using Constraint Optimization', Lecture Notes in Computer Science, International Workshop on Agents and Data Mining Interaction, Springer Berlin Heidelberg, Taipei, Taiwan, pp. 46-57.
View/Download from: Publisher's site
View description>>
We investigate an interaction mechanism between agents and data mining, and focus on agent-enhanced mining. Existing data mining tools use workflow to capture user requirements. The workflow enactment can be improved with a suitable underlying execution layer, which is a Multi-Agent System (MAS). From this perspective, we propose a strategy to obtain an optimal MAS configuration from a given workflow when resource access restrictions and communication cost constraints are concerned, which is essentially a constraint optimization problem. In this paper, we show how the workflow is modeled in a way that can be optimized, and how the optimized model is used to obtain an optimal MAS configuration. Finally, we demonstrate that our strategy can improve load balancing and reduce the communication cost during the workflow enactment.
Movassaghi, S, Abolhasan, M & Lipman, J 2012, 'Energy Efficient Thermal and Power Aware (ETPA) Routing in Body Area Networks', 2012 IEEE 23rd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), IEEE International Symposium on Personal and Indoor Mobile Radio Conference, IEEE, Sydney, pp. 1108-1113.
View/Download from: Publisher's site
View description>>
Research on routing in a network of intelligent, lightweight, micro and nano-technology sensors deployed in or around the body, namely a Body Area Network (BAN), has gained great interest in recent years. In this paper, we present an energy efficient, thermal and power aware routing algorithm for BANs named Energy Efficient Thermal and Power Aware routing (ETPA). ETPA considers a node's temperature, energy level and received power from adjacent nodes in the cost function calculation. An optimization problem is also defined in order to minimize average temperature rise in the network. Our analysis demonstrates that ETPA can significantly decrease temperature rise and power consumption, as well as providing more efficient usage of the available resources, compared to the most efficient routing protocol proposed so far in BANs, namely PRPLC. Also, ETPA has a considerably higher depletion time, which guarantees longer lasting communication among nodes. © 2012 IEEE.
Musial, K & Sastry, N 2012, 'Social media', Proceedings of the Fourth Annual Workshop on Simplifying Complex Networks for Practitioners, SIMPLEX '12: Simplifying Complex Networks for Practitioners, ACM, pp. 1-6.
View/Download from: Publisher's site
View description>>
On many social media and user-generated content sites, users can not only upload content but also create links with other users to follow their activities. It is interesting to ask whether the resulting user-user Followers' Network is based more on social ties, or on shared interests in similar content. This paper reports our preliminary progress in answering this question using around five years of data from the social video-sharing site vimeo. Many links in the Followers' Network are between users who do not have any videos in common, which would imply the network is not interest-based, but rather has a social character. However, the Followers' Network also exhibits properties unlike other social networks: for instance, the clustering coefficient is low, links are frequently not reciprocated, and users form links across vast geographical distances. In addition, an analysis of relationship strength, calculated as the number of commonly liked videos, shows that people who follow each other and share some "likes" have more video likes in common than the general population. We conclude by speculating on the reasons for these differences and proposals for further work. © 2012 ACM.
Pan, R, Xu, G & Dolog, P 2012, 'Improving Recommendations in Tag-Based Systems with Spectral Clustering of Tag Neighbors', Lecture Notes in Electrical Engineering, International Symposium on Computer Science and Its Applications (CSA), Springer Netherlands, Jeju Island, Korea, pp. 355-364.
View/Download from: Publisher's site
View description>>
Tags, as useful metadata, reflect the collaborative and conceptual features of documents in social collaborative annotation systems. In this paper, we propose a collaborative approach for expanding tag neighbors and investigate the spectral clustering algorithm to filter out noisy tag neighbors in order to get appropriate recommendations for users. Preliminary experiments have been conducted on the MovieLens dataset to compare our proposed approach with the traditional collaborative filtering recommendation approach and the naive tag-neighbors expansion approach in terms of precision, and the result demonstrates that our approach could considerably improve the performance of recommendations. © 2012 Springer Science+Business Media B.V.
Pan, R, Xu, G, Dolog, P & Zong, Y 2012, 'Group Division for Recommendation in Tag-Based Systems', 2012 Second International Conference on Cloud and Green Computing, 2012 International Conference on Cloud and Green Computing (CGC), IEEE, Xiangtan, China, pp. 399-404.
View/Download from: Publisher's site
View description>>
The common usage of tags in these systems is to add the tagging attribute as an additional feature to re-model users or resources over the tag vector space and, in turn, make tag-based or personalized recommendations. With the help of tagging data, user annotation preference and document topical tendency are substantially coded into the profiles of users or documents. However, obtaining the proper relationship among user, resource and tag is still a challenge in social annotation-based recommendation research. In this paper, we utilize the relationships between tags and resources and between tags and users to extract group information. With the help of these relationships, we can obtain Topic-Groups based on the bipartite relationship between tags and resources, and Interest-Groups based on the bipartite relationship between tags and users. Preliminary experiments have been conducted on the MovieLens dataset to compare our proposed approach with the traditional collaborative filtering recommendation approach in terms of precision, and the result demonstrates that our approach could considerably improve the performance of recommendations. © 2012 IEEE.
Parvin, S & Hussain, FK 2012, 'Trust-based Security for Community-based Cognitive Radio Networks', 2012 IEEE 26th International Conference on Advanced Information Networking and Applications (AINA), International Conference on Advanced Information Networking and Applications (was ICOIN), IEEE, Fukuoka, Japan, pp. 518-525.
View/Download from: Publisher's site
View description>>
Cognitive Radio (CR) is considered to be a necessary mechanism to detect whether a particular segment of the radio spectrum is currently in use, and to rapidly occupy the temporarily unused spectrum without interfering with the transmissions of other users. As Cognitive Radio has dynamic properties, a member of a Cognitive Radio Network may join or leave the network at any time. These properties mean that the issue of secure communication in CRNs is more critical than for other conventional wireless networks. This work thus proposes a trust-based security system for community-based CRNs. A CR node's trust value is analyzed according to its previous behavior in the network and, depending on this trust value, it is decided whether this member node can take part in the communication of CRNs. For security purposes, we have designed our model to ensure that the proposed approach is secure in different contexts.
Meng, Q & Kennedy, PJ 2012, 'Using Field of Research Codes to Discover Research Groups from Co-authorship Networks', 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012), IEEE, Istanbul, Turkey, pp. 289-293.
View/Download from: Publisher's site
View description>>
Nowadays, academic collaboration has become more prevalent and crucial than ever before, and many studies of academic collaboration analysis are based on co-authorship networks. This paper aims to build a novel co-authorship network by importing field of research codes based on Newman's model, and then to analyze and extract research groups via spectral clustering. In order to demonstrate the effectiveness of this revised network, we take academic collaboration at the University of Technology, Sydney (UTS) as an example. The result of this study advances methods for selecting the most prolific research groups and individuals in research institutions, and provides scientific evidence for policymakers to manage laboratories and research groups more efficiently in the future.
Rehman, ZU, Hussain, OK & Hussain, FK 2012, 'IaaS Cloud Selection using MCDM Methods', 2012 Ninth IEEE International Conference on e-Business Engineering (ICEBE), IEEE International Conference on e-Business Engineering, IEEE, Hangzhou, China, pp. 246-251.
View/Download from: Publisher's site
View description>>
The popularity of cloud computing and IaaS has spawned numerous cloud service providers which offer various cloud services, including IaaS, to cloud users. These services vary considerably in terms of their performance and cost, and the selection of a suitable cloud service becomes a complex decision-making issue for a cloud service user. Furthermore, cloud services have several attributes, all of which are criteria that have to be taken into account when making a service selection decision. In the presence of these multiple criteria, a compromise has to be made because in most real-world situations no single service exceeds all other services in all criteria; one service may be better in terms of some criteria while other services outperform it on the remaining criteria. Multi-criteria decision-making is a sub-field of operations research that deals with techniques to solve such multi-criteria problems. There are several methods of multi-criteria decision-making. In this paper, we use key multi-criteria decision-making methods for IaaS cloud service selection in a case study which contains five basic performance measurements of thirteen cloud services by a third-party monitoring service. We demonstrate the use of these multi-criteria methods for cloud service selection and compare the results obtained by using each method to find out how the choice of a particular MCDM method affects the outcome of the decision-making process for IaaS cloud service selection.
Shangguan, Q, Hu, L, Cao, J & Xu, G 2012, 'Book Recommendation Based on Joint Multi-relational Model', 2012 Second International Conference on Cloud and Green Computing, 2012 International Conference on Cloud and Green Computing (CGC), IEEE, Xiangtan, China, pp. 523-530.
View/Download from: Publisher's site
She, Z, Wang, C & Cao, L 2012, 'CCE: A coupled framework of clustering ensembles', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Toronto, Ontario, Canada, pp. 2455-2456.
View description>>
Clustering ensembles mainly rely on pairwise similarity to capture the consensus function. However, they usually consider each base clustering independently, and treat the similarity measure roughly as either 0 or 1. To address these two issues, we propose a coupled framework of clustering ensembles, CCE, and exemplify it with the coupled version CCSPA for CSPA. Experiments demonstrate the superiority of CCSPA over baseline approaches in terms of clustering accuracy. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
Shen, Y, Miao, Z & Zhang, J 2012, 'Unsupervised online learning trajectory analysis based on weighted directed graph', Proceedings - International Conference on Pattern Recognition, International Conference on Pattern Recognition, IEEE, Tsukuba, Japan, pp. 1306-1309.
View description>>
In this paper, we propose a novel unsupervised online learning trajectory analysis method based on weighted directed graph. Each trajectory can be represented as a sequence of key points. In the training stage, unsupervised expectation-maximization algorithm (EM) is applied for training data to cluster key points. Each class is a Gaussian distribution. It is considered as a node of the graph. According to the classification of key points, we can build a weighted directed graph to represent the trajectory network in the scene. Each path is a category of trajectories. In the test stage, we adopt online EM algorithm to classify trajectories and update the graph. In the experiments, we test our approach and obtain a good performance compared with state-of-the-art approaches. © 2012 ICPR Org Committee.
Shi, S, Li, JY & Gu, XM 2012, 'A Novel Method of High Frequency Weak Signal Detection Based on Chaotic Oscillator System and Wavelet Transform System', Applied Mechanics and Materials, 3rd International Conference on Mechanical and Electronics Engineering (ICMEE 2011), Trans Tech Publications, Ltd., Hefei, China, pp. 2770-2773.
View/Download from: Publisher's site
View description>>
Based on a chaotic oscillator system and a wavelet transform system, this paper proposes a novel method for high frequency weak signal detection. A chaotic system is a typical non-linear system which is sensitive to certain signals and immune to noise at the same time. Its properties demonstrate its potential application to weak signal detection. Due to its good localization in both the time domain and the frequency domain, the wavelet transform method can automatically adjust to different frequency components and increase the Signal-to-Noise Ratio. Starting from an analysis of the advantages and disadvantages of the two signal detection methods, we put forward a combined method that takes advantage of each method to detect weak signals with high frequency. The simulation results show that the novel method can detect weak signals with frequency on the order of magnitude of 10^7 Hz, and the input Signal-to-Noise Ratio threshold can be as low as -42.5 dB.
Song, Y, Cao, L, Wu, X, Wei, G, Ye, W & Ding, W 2012, 'Coupled behavior analysis for capturing coupling relationships in group-based market manipulations', Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '12: The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, pp. 976-984.
View/Download from: Publisher's site
View description>>
In stock markets, an emerging challenge for surveillance is that a group of hidden manipulators collaborate with each other to manipulate the price movement of securities. Recently, the coupled hidden Markov model (CHMM)-based coupled behavior analysis (CBA) has been proposed to consider the coupling relationships in the above group-based behaviors for manipulation detection. From the modeling perspective, however, this requires overall aggregation of the behavioral data to cater for the CHMM modeling, which does not differentiate the coupling relationships presented in different forms within the aggregated behaviors and degrades the capability for further anomaly detection. Thus, this paper suggests a general CBA framework for detecting group-based market manipulation by capturing more comprehensive couplings and proposes two variant implementations, which are hybrid coupling (HC)-based and hierarchical grouping (HG)-based respectively. The proposed framework consists of three stages. The first stage, qualitative analysis, generates possible qualitative coupling relationships between behaviors with or without domain knowledge. In the second stage, a quantitative representation of coupled behaviors is learned via proper methods. For the third stage, anomaly detection algorithms are proposed to cater for different application scenarios. Experimental results on data from a major Asian stock market show that the proposed framework outperforms the CHMM-based analysis in terms of detecting abnormal collaborative market manipulations. Additionally, the two different implementations are compared in terms of their effectiveness for different application scenarios.
Su, G, Ying, M & Zhang, C 2012, 'Semantic Analysis of Component-aspect Dynamism for Connector-based Architecture Styles', WICSA/ECSA, IEEE/IFIP Working Conference on Software Architecture (now with ECSA), IEEE, Helsinki, Finland, pp. 151-160.
View/Download from: Publisher's site
View description>>
Architecture Description Languages usually specify software architectures at the levels of types and instances. Components instantiate component types by parameterization and type conformance. Behavioral analysis of dynamic architectures needs to deal with the uncertainty of actual configurations of components, even if the type-level architectural descriptions are explicitly provided. This paper addresses this verification difficulty for connector-based architecture styles, in which all communication channels of a system are between components and a connector. The contribution of this paper is two-fold: (1) We propose a process-algebraic model, in which the main architectural concepts (such as component type and component conformance) and several fundamental architectural properties (i.e. deadlock-freedom, non-starvation, conservation, and completeness) are formulated. (2) We demonstrate that the state space of verification of these properties can be reduced from the entire universe of possible configurations to specific configurations that are fixed according to the type-level architectural descriptions.
Tafavogh, S, Kennedy, PJ & Catchpoole, DR 2012, 'Determining Cellularity Status of Tumors based on Histopathology using Hybrid Image Segmentation', 2012 International Joint Conference on Neural Networks (IJCNN), IEEE International Joint Conference on Neural Networks, IEEE, Brisbane, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
A Computer Aided Diagnosis (CAD) system is developed to determine the cellularity status of a tumor. The system helps pathologists to distinguish a tumor with cell proliferation from normal tumors. The developed CAD system implements a hybrid segmentation method to identify and extract the morphological features that are used by pathologists for determining the cellularity status of a tumor. Adaptive Mean Shift (AMS) clustering, as a non-parametric technique, is integrated with Color Template Matching (CTM) to construct the segmentation approach. We used Expectation Maximization (EM) clustering, a parametric technique, for the sake of comparison with our proposed approach. The outputs of our proposed system and of EM are validated by two pathologists as ground truth. The result of our developed system is quite close to the decision of the pathologists, and it significantly outperforms EM in terms of accuracy. © 2012 IEEE.
ur Rehman, Z, Hussain, OK, Parvin, S & Hussain, FK 2012, 'A Framework for User Feedback Based Cloud Service Monitoring', 2012 Sixth International Conference on Complex, Intelligent, and Software Intensive Systems, 2012 Sixth International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS), IEEE, Palermo, Italy, pp. 257-262.
View/Download from: Publisher's site
View description>>
The increasing popularity of the cloud computing paradigm and the emerging concept of federated cloud computing have motivated research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit by selecting services which provide optimal performance at the lowest possible cost. Given the intricate and heterogeneous nature of current clouds, the cloud service selection process is, in effect, a multi-criteria optimization or decision-making problem. The possible criteria for this process relate to both functional and non-functional attributes of cloud services. In this context, the two major issues are: (1) the choice of a criteria set and (2) mechanisms for the assessment of cloud services against each criterion for thorough, continuous cloud service monitoring. In this paper, we focus on the issue of cloud service monitoring, wherein the existing monitoring and assessment mechanisms are entirely dependent on various benchmark tests which are unable to accurately determine or reliably predict the performance of actual cloud applications under a real workload. We discuss the recent research aimed at achieving this objective and propose a novel user-feedback-based approach which can monitor cloud performance more reliably and accurately than the existing mechanisms.
Vizuete Luciano, E, Merigó, JM, Gil-Lafuente, AM & Boria Reverté, S 2012, 'OWA Operators in the Assignment Process: The Case of the Hungarian Algorithm', Modeling and Simulation in Engineering, Economics, and Management, MS 2012, International Conference of Modeling and Simulation in Engineering, Economics, and Management, Springer Berlin Heidelberg, New Rochelle, NY, pp. 166-177.
View/Download from: Publisher's site
Wang, C, Wang, M, She, Z & Cao, L 2012, 'CD: A Coupled Discretization Algorithm', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer Berlin Heidelberg, Kuala Lumpur, Malaysia, pp. 407-418.
View/Download from: Publisher's site
View description>>
Discretization techniques play an important role in data mining and machine learning. While numeric data is predominant in the real world, many algorithms in supervised learning are restricted to discrete variables. Thus, a variety of research has been conducted on discretization, the process of converting continuous attribute values into a limited number of intervals. Recent work derived from entropy-based discretization methods, which has produced impressive results, introduces information attribute dependency to reduce the uncertainty level of a decision table; but no attention is given to the increase of certainty degree from the aspect of the positive domain ratio. This paper proposes a discretization algorithm based on both the positive domain and its coupling with information entropy, which not only considers information attribute dependency but also deterministic feature relationships. Substantial experiments on extensive UCI data sets provide evidence that our proposed coupled discretization algorithm generally outperforms seven other existing methods and the positive domain based algorithm proposed in this paper, in terms of simplicity, stability, consistency, and accuracy. © 2012 Springer-Verlag.
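The entropy-based baseline this work builds on can be illustrated with a single supervised cut-point search (a minimal sketch only, not the paper's coupled positive-domain algorithm; the function names are ours):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Pick the boundary that minimises the weighted class entropy of the
    two resulting intervals (one step of recursive entropy discretization)."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [l for _, l in pairs]
    n = len(ys)
    best = (float("inf"), None)
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue  # no boundary between equal values
        w = (i / n) * entropy(ys[:i]) + ((n - i) / n) * entropy(ys[i:])
        best = min(best, (w, (xs[i] + xs[i - 1]) / 2))
    return best[1]
```

Applied recursively to each interval (with a stopping rule such as MDL), this yields the entropy-based discretization the coupled algorithm is compared against.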
Wei, W, Fan, X, Li, J & Cao, L 2012, 'Model the complex dependence structures of financial variables by using canonical vine', Proceedings of the 21st ACM international conference on Information and knowledge management, CIKM'12: 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, pp. 1382-1391.
View/Download from: Publisher's site
View description>>
Financial variables such as asset returns in the massive market contain various hierarchical and horizontal relationships forming complicated dependence structures. Modeling and mining of these structures is challenging due to their high structural complexity as well as the stylized facts of the market data. This paper introduces a new canonical vine dependence model to identify the asymmetric and non-linear dependence structures of asset returns without any prior independence assumptions. To simplify the model while maintaining its merit, a partial correlation based method is proposed to optimize the canonical vine. Compared with the original canonical vine, the new model still maintains the most important dependencies, but many unimportant nodes are removed to simplify the canonical vine structure. Our model is applied to construct and analyze dependence structures of European stocks as case studies. Its performance is evaluated by measuring the portfolio Value at Risk, a widely used risk management measure. In comparison to a very recent canonical vine model and the 'full' model, our experimental results demonstrate that our model yields a much better quality of Value at Risk, providing insightful knowledge for investors to control and reduce the aggregation risk of the portfolio. © 2012 ACM.
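The partial-correlation idea used to prune the vine can be sketched in a few lines: the partial correlation of every pair of variables, controlling for all the others, falls out of the inverse of the sample correlation matrix (a generic illustration only; the paper's vine-optimisation procedure is more involved):

```python
import numpy as np

def partial_corr(data):
    """Pairwise partial correlations (controlling for all other variables),
    computed from the precision matrix of the sample correlation matrix."""
    prec = np.linalg.inv(np.corrcoef(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)  # rescale precision entries to correlations
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

Vine edges whose partial correlation is near zero are then natural candidates for removal when simplifying the structure.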
Wu, Y, Lu, S, Mei, T, Zhang, J & Li, S 2012, 'Local visual words coding for low bit rate mobile visual search', Proceedings of the 20th ACM international conference on Multimedia, MM '12: ACM Multimedia Conference, ACM, Nara, Japan, pp. 989-992.
View/Download from: Publisher's site
View description>>
Mobile visual search has attracted extensive attention for its huge potential in numerous applications. Research on this topic has focused on two schemes: sending query images, and sending compact descriptors extracted on mobile phones. The first scheme requires about 30-40KB of data to transmit, while the second can reduce the bit rate by 10 times. In this paper, we propose a third scheme for extremely low bit rate mobile visual search, which sends compressed visual words consisting of a vocabulary tree histogram and descriptor orientations rather than the descriptors themselves. This scheme can further reduce the bit rate with little extra computational cost on the client. Specifically, we store a vocabulary tree and extract visual descriptors on the mobile client. A light-weight pre-retrieval is performed to obtain the visited leaf nodes in the vocabulary tree. The orientation of each local descriptor and the tree histogram are then encoded and transmitted to the server. Our new scheme transmits less than 1KB of data, which reduces the bit rate of the second scheme by 3 times, and obtains about 30% improvement in search accuracy over the traditional Bag-of-Words baseline. The time cost is only 1.5 secs on the client and 240 msecs on the server. © 2012 ACM.
Xu, G & Wu, Z 2012, 'On Smart and Accurate Contextual Advertising', Lecture Notes in Computer Science, Database Systems for Advanced Applications, Springer Berlin Heidelberg, Busan, South Korea, pp. 104-104.
View/Download from: Publisher's site
View description>>
Web advertising, which uses the World Wide Web to attract customers, has become one of the most important marketing channels. As one prevalent type of Web advertising, contextual advertising refers to the placement of the most relevant commercial ads within the content of a Web page, so as to increase the number of ad clicks. However, problems such as homonymy and polysemy, low intersection of keywords, and context mismatch can lead to the selection of irrelevant ads for a generic page, so traditional keyword matching techniques generally achieve poor accuracy. Furthermore, existing contextual advertising techniques only consider how to select ads that are as relevant to a generic page as possible, without considering the positional effect of ad placement in the page. In this paper, we propose a new contextual advertising framework to tackle these problems, which (1) uses Wikipedia concept and category information to enrich the semantic representation of a page (or a textual ad) and (2) takes the placement position of an embedded ad into account. To accomplish these steps, we first map each page (or ad) into three feature vectors: a keyword vector, a concept vector and a category vector. Second, we determine the relevant ads for a given page based on a similarity measure which combines the above three feature vectors. In dealing with position-wise contextual advertising, the relevant ads are selected based not only on global context relevance but also on local context relevance, so that the embedded ads are contextually relevant to both the whole targeted page and the insertion positions where the ads are placed. We experimentally validate our approach using a real ad set, a real page set, and a set of more than 260,000 concepts and 12,000 categories from Wikipedia. The experimental results show that our approach performs better than simple keyword matching and can effectively improve the precision of ad selection.
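The three-vector matching described above reduces, in its simplest form, to a weighted sum of cosine similarities between the corresponding vectors of page and ad (an illustrative sketch; the weights and field names are assumptions, not taken from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length dense vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def ad_page_similarity(page, ad, weights=(0.5, 0.3, 0.2)):
    """Combine keyword-, concept- and category-level similarities
    between a page and a candidate ad into one relevance score."""
    return sum(w * cosine(page[k], ad[k])
               for w, k in zip(weights, ("keyword", "concept", "category")))
```

Ads would then be ranked by this score, with a second, position-local score computed the same way over the text surrounding each insertion point.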
Xu, J, Wu, Q, Zhang, J & Tang, Z 2012, 'Object Detection Based on Co-occurrence GMuLBP Features', 2012 IEEE International Conference on Multimedia and Expo, 2012 IEEE International Conference on Multimedia and Expo (ICME), IEEE, Melbourne, Australia, pp. 943-948.
View/Download from: Publisher's site
View description>>
Image co-occurrence has shown great power in object classification because it captures the characteristics of individual features and the spatial relationships between them simultaneously. For example, Co-occurrence Histogram of Oriented Gradients (CoHOG) has achieved great success on the human detection task. However, the gradient orientation in CoHOG is sensitive to noise. In addition, CoHOG does not take gradient magnitude into account, which is a key component for reinforcing feature detection. In this paper, we propose a new LBP feature detector based on image co-occurrence. Building on uniform Local Binary Patterns, the new feature detector detects Co-occurrence Orientation through Gradient Magnitude calculation, and is known as CoGMuLBP. An extended version of CoGMuLBP is also presented. The experimental results on the UIUC car data set show that the proposed features outperform state-of-the-art methods. © 2012 IEEE.
Xu, Y, Luo, T, Xu, G & Pan, R 2012, 'A Topic-Oriented Syntactic Component Extraction Model for Social Media', Lecture Notes in Electrical Engineering, Human Centric Technology and Service in Smart Space, Springer Netherlands, Gwangju, Korea, pp. 221-229.
View/Download from: Publisher's site
View description>>
Topic-oriented understanding is to extract information from various language instances, which reflects the characteristics or trends of semantic information related to the topic via statistical analysis. Syntax analysis and modeling is the basis of such work. Traditional syntactic formalization approaches widely used in natural language understanding cannot simply be applied to text modeling in the context of topic-oriented understanding. In this paper, we review the information extraction mode, and summarize its inherent relationship with the "Subject-Predicate" syntactic structure in Aryan languages. We then propose a syntactic element extraction model based on the "topic-description" structure, which contains six kinds of core elements, satisfying the desired requirement for topic-oriented understanding. This paper also describes the model composition, the theoretical framework of the understanding process, the extraction method of syntactic components, and a prototype system for generating syntax diagrams. The proposed model is evaluated on the Reuters 21578 and SocialCom2009 data sets, and the results show that the recall and precision of syntactic component extraction are up to 93.9% and 88%, respectively, which further justifies the feasibility of generating syntactic components through word dependencies. © 2012 Springer Science+Business Media.
Song, Y & Cao, L 2012, 'Graph-based coupled behavior analysis: A case study on detecting collaborative manipulations in stock markets', The 2012 International Joint Conference on Neural Networks (IJCNN), 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane), IEEE, Brisbane, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
Coupled behaviors, which refer to behaviors having some relationships between them, are usually seen in many real-world scenarios, especially in stock markets. Recently, the coupled hidden Markov model (CHMM)-based coupled behavior analysis has been proposed to consider the coupled relationships in a hidden state space. However, it requires aggregation of the behavioral data to cater for the CHMM modeling, which may overlook the couplings within the aggregated behaviors to some extent. In addition, the Markov assumption limits its capability to capture temporal couplings. Thus, this paper proposes a novel graph-based framework for detecting abnormal coupled behaviors. The proposed framework represents the coupled behaviors in a graph view without aggregating the behavioral data and is flexible enough to capture richer coupling information about the behaviors (not necessarily temporal relations). On top of that, the couplings are learned via relational learning methods, and an efficient anomaly detection algorithm is proposed as well. Experimental results on a real-world data set from stock markets show that the proposed framework outperforms the CHMM-based one in both technical and business measures. © 2012 IEEE.
Yin, J, Zheng, Z & Cao, L 2012, 'USpan: an efficient algorithm for mining high utility sequential patterns.', KDD, ACM International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, pp. 660-668.
View/Download from: Publisher's site
View description>>
Sequential pattern mining plays an important role in many applications, such as bioinformatics and consumer behavior analysis. However, the classic frequency-based framework often leads to many patterns being identified, most of which are not informative enough for business decision-making. In frequent pattern mining, a recent effort has been to incorporate utility into the pattern selection framework, so that high utility (frequent or infrequent) patterns are mined which address typical business concerns such as dollar value associated with each pattern. In this paper, we incorporate utility into sequential pattern mining, and a generic framework for high utility sequence mining is defined. An efficient algorithm, USpan, is presented to mine for high utility sequential patterns. In USpan, we introduce the lexicographic quantitative sequence tree to extract the complete set of high utility sequences and design concatenation mechanisms for calculating the utility of a node and its children with two effective pruning strategies. Substantial experiments on both synthetic and real datasets show that USpan efficiently identifies high utility sequences from large scale data with very low minimum utility. © 2012 ACM.
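The utility notion behind this framework can be made concrete with a toy calculation: each item in an occurrence of a pattern contributes its quantity times its unit profit, and a pattern qualifies as high utility when its utility reaches the minimum-utility threshold (a simplified sketch; USpan itself operates on a lexicographic quantitative sequence tree with pruning strategies not shown here):

```python
def occurrence_utility(occurrence, unit_profit):
    """Utility of one occurrence of a sequential pattern:
    sum of quantity * unit profit over every item in its itemsets."""
    return sum(qty * unit_profit[item]
               for itemset in occurrence
               for item, qty in itemset.items())

def is_high_utility(occurrences, unit_profit, min_util):
    """Take the pattern's utility as its best (maximum-utility) occurrence,
    then compare it against the minimum utility threshold."""
    return max(occurrence_utility(o, unit_profit) for o in occurrences) >= min_util
```

For example, with unit profits {'a': 2, 'b': 1}, the occurrence [{'a': 3}, {'b': 2}] has utility 3*2 + 2*1 = 8.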
Zong, Y, Xu, G, Jin, P, Yi, X, Chen, E & Wu, Z 2012, 'A projective clustering algorithm based on significant local dense areas', The 2012 International Joint Conference on Neural Networks (IJCNN), 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane), IEEE, Brisbane, Australia, pp. 1-8.
View/Download from: Publisher's site
Jiang, Y, Tsai, P, Hao, Z & Cao, L 2012, 'A novel auto-parameters selection process for image segmentation', 2012 IEEE Congress on Evolutionary Computation, 2012 IEEE Congress on Evolutionary Computation (CEC), IEEE, Brisbane, Australia, pp. 1-7.
View/Download from: Publisher's site
View description>>
Segmentation is a process to obtain the desirable features in image processing. However, existing techniques that use the multilevel thresholding method in image segmentation are computationally demanding due to the lack of an automatic parameter selection process. This paper proposes an automatic parameter selection technique, called the automatic multilevel thresholding algorithm using stratified sampling and Tabu Search (AMTSSTS), to remedy these limitations. It automatically determines the appropriate threshold number and values by (1) dividing an image into even strata (blocks) to extract samples; (2) applying a Tabu Search-based optimization technique on these samples to maximize the ratios of their means and variances; (3) preliminarily determining the threshold number and values based on the optimized samples; and (4) further optimizing these samples using a novel local criterion function that exploits the local continuity of an image. Experiments on Berkeley datasets show that AMTSSTS is an efficient and effective technique which provides smoother results than several recently developed methods.
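Step (1) above, stratified sampling over image blocks, can be sketched as follows (an illustrative fragment only; the Tabu Search optimisation of the mean/variance ratios is not shown, and the function name is ours):

```python
import random

def stratified_block_samples(image, block, k, seed=0):
    """Split a 2-D grey-level image into equal blocks (strata) and draw
    up to k pixel samples from each block, as in stratified sampling."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    samples = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            pixels = [image[i][j]
                      for i in range(r, min(r + block, h))
                      for j in range(c, min(c + block, w))]
            samples.append(rng.sample(pixels, min(k, len(pixels))))
    return samples
```

Each stratum's sample would then feed the search that maximises the mean/variance ratios before thresholds are derived.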
Zare Borzeshi, E, Perez Concha, O & Piccardi, M 2012, 'Human Action Recognition in Video by Fusion of Structural and Spatio-temporal Features', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Joint IAPR International Workshop on SSPR & SPR, Springer Berlin Heidelberg, Hiroshima, Japan, pp. 474-482.
View/Download from: Publisher's site
View description>>
The problem of human action recognition has received increasing attention in recent years for its importance in many applications. Local representations and in particular STIP descriptors have gained increasing popularity for action recognition. Yet, the main limitation of those approaches is that they do not capture the spatial relationships in the subject performing the action. This paper proposes a novel method based on the fusion of global spatial relationships provided by graph embedding and the local spatio-temporal information of STIP descriptors. Experiments on an action recognition dataset reported in the paper show that recognition accuracy can be significantly improved by combining the structural information with the spatio-temporal features. © 2012 Springer-Verlag Berlin Heidelberg.
Zhang, J, Schonfeld, D & Feng, DD 2012, 'Message from ICME 2012 General Chairs', 2012 IEEE International Conference on Multimedia and Expo Workshops, 2012 IEEE International Conference on Multimedia & Expo Workshops (ICMEW 2012), IEEE.
View/Download from: Publisher's site
View description>>
ICME 2012 is the thirteenth in the series of ICME conferences, which have been held annually since 2000 in various cities throughout the world. The success of this conference would not have been possible without the generous help of sponsors. Paper prizes and Student Travel Grants are sponsored by National Information and Communications Technology Australia (NICTA), Microsoft Research, IBM Research, Canon Information Systems Research Australia (CiSRA), and the Advanced Analytics Institute (AAI) at the University of Technology, Sydney (UTS). ICME 2012 features a new plenary session - Time Machine! The session consists of a series of expert presentations that re-introduce ideas published "before their time" and whose impact, as a result, has not yet been fully realized. ICME 2012 also has outstanding lectures, including keynote lectures and research overviews. ICME 2012 will offer several paper prizes, including a Best Paper Award, Best Student Paper Award, and Best Demo Award. © 2012 IEEE.
Zhang, J, Schonfeld, D & Feng, DD 2012, 'Message from the ICME 2012 General Chairs', 2012 IEEE International Conference on Multimedia and Expo, 2012 IEEE International Conference on Multimedia and Expo (ICME), IEEE.
View/Download from: Publisher's site
Zhao, Y, Li, J, Christen, P & Kennedy, PJ 2012, 'Preface', Conferences in Research and Practice in Information Technology Series, p. vii.
Zhou, A, Xu, G, Agarwal, N, King, I, Nejdl, W & Wang, F 2012, 'Message from the SCA2012 Chairs', 2012 Second International Conference on Cloud and Green Computing, 2012 International Conference on Cloud and Green Computing (CGC), IEEE.
View/Download from: Publisher's site
View description>>
The 2nd International Conference on Social Computing and Its Applications (SCA2012) was held in Xiangtan, China, November 1-3, 2012. SCA (Social Computing and its Applications) was created to provide a prime international forum for researchers, industry practitioners and environment experts to exchange the latest fundamental advances in the state of the art and practice of social computing and broadly related areas. SCA2012 consisted of the main conference and three workshops: the 2012 International Workshop on Social Network Analysis and Information Diffusion Modelling (SNAIDM2012), the 2012 International Workshop on Web Wisdom (WW2012), and the 2012 International Workshop on Social Network Service on Databases (SNSDB2012). We greatly thank the Workshop Chairs for their valuable time and effort in organizing the workshops. SCA2012 was held jointly with the 2nd International Conference on Cloud and Green Computing (CGC2012). SCA2012 received 98 submissions from Germany, Canada, Japan, Australia, Sweden, South Korea, Portugal, Denmark, Poland and Mainland China. Each paper was peer reviewed by at least three program committee members, and final decisions were made after a high-quality review process. Forty-five papers were accepted, giving a regular paper acceptance rate of about 32%. © 2012 IEEE.
Zhou, J, Luo, T & Xu, G 2012, 'Academic Recommendation on Graph with Dynamic Transfer Chain', 2012 Second International Conference on Cloud and Green Computing, 2012 International Conference on Cloud and Green Computing (CGC), IEEE, Xiangtan, China, pp. 331-336.
View/Download from: Publisher's site
View description>>
Academic content and learners' capabilities change over time, but current academic recommendation systems do not take time factors into account. There are two challenges in capturing a learner's preferences and learning context accurately and dynamically. First, modeling the academic trend and the user's cognitive level as they shift over time is a hard problem. Second, designing a dynamic algorithm that improves recommendation accuracy using implicit behavior data is difficult. In this paper, we propose the Dynamic Transfer Chain (DTC) to model a user's preferences and academic context over time on transaction data. Based on the DTC model, we present a novel algorithm, Dynamic Academic Recommendation on Graph (DARG). We evaluate the effectiveness of our method using an open dataset named CiteULike, comprising 9,170 users, 11,343 papers, and 194,596 user-paper pairs. The evaluation metric we use is Hit Ratio. The results show that our proposed approach gives a 12.873% to 33.852% improvement over previous counterparts, including User-KNN, Item-KNN, TUser-KNN and TItem-KNN. © 2012 IEEE.
Cao, L & Yu, PS 2012, 'Behavior computing: Modeling, analysis, mining and decision', pp. 1-374.
View/Download from: Publisher's site
View description>>
'Behavior' is an increasingly important concept in the scientific, societal, economic, cultural, political, military, living and virtual worlds. Behavior computing, or behavior informatics, consists of methodologies, techniques and practical tools for examining and interpreting behaviors in these various worlds. Behavior computing contributes to the in-depth understanding, discovery, applications and management of behavior intelligence. With contributions from leading researchers in this emerging field, Behavior Computing: Modeling, Analysis, Mining and Decision includes chapters on: representation and modeling of behaviors; behavior ontology; behavior analysis; behavior pattern mining; clustering complex behaviors; classification of complex behaviors; behavior impact analysis; social behavior analysis; organizational behavior analysis; and behavior computing applications. Behavior Computing: Modeling, Analysis, Mining and Decision provides a dedicated source of reference for the theory and applications of behavior informatics and behavior computing. Researchers, research students and practitioners in behavior studies, including the computer science, behavioral science, and social science communities, will find this state-of-the-art volume invaluable.
Engemann, KJ, Gil-Lafuente, AM & Merigó, JM 2012, 'Lecture Notes in Business Information Processing: Preface'.
Gil-Lafuente, AM, Gil-Lafuente, J & José, MML 2012, 'Soft Computing in Management and Business Economics', Springer Berlin Heidelberg.
View/Download from: Publisher's site
Su, G, Ying, M & Zhang, C 2012, 'Session Communication and Integration'.
View description>>
The scenario-based specification of a large distributed system is usually naturally decomposed into various modules. The integration of specification modules contrasts to the parallel composition of program components, and includes various ways such as scenario concatenation, choice, and nesting. The recent development of multiparty session types for process calculi provides useful techniques to accommodate the protocol modularisation, by encoding fragments of communication protocols in the usage of private channels for a class of agents. In this paper, we extend foregoing session type theories by enhancing the session integration mechanism. More specifically, we propose a novel synchronous multiparty session type theory, in which sessions are separated into the communicating and integrating levels. Communicating sessions record the message-based communications between multiple agents, whilst integrating sessions describe the integration of communicating ones. A two-level session type system is developed for pi-calculus with syntactic primitives for session establishment, and several key properties of the type system are studied. Applying the theory to system description, we show that a channel safety property and a session conformance property can be analysed. Also, to improve the utility of the theory, a process slicing method is used to help identify the violated sessions in the type checking.
Xu, G 2012, 'Health Information Science', Springer Berlin Heidelberg.
View/Download from: Publisher's site
XU, G 2012, 'Web Technologies and Applications', Springer Berlin Heidelberg.
View/Download from: Publisher's site