Cao, L 2015, Metasynthetic Computing and Engineering of Complex Systems, Springer, London, UK.
M-computing consists of the engineering methodologies and techniques [70–75] for reifying the theory of qualitative-to-quantitative metasynthesis, carrying out M-interaction, and constructing the M-space to tackle OCGS. From the computing ...
Reynolds, R, Stoianoff, NP & Roy, A 2015, Intellectual Property: Text and Essential Cases, 5th edn, The Federation Press, Sydney.
Braytee, A, Gill, AQ, Kennedy, PJ & Hussain, FK 2015, 'A Review and Comparison of Service E-Contract Architecture Metamodels' in Neural Information Processing, Springer International Publishing, pp. 583-595.
Devece, C, Peris-Ortiz, M, Merigó, JM & Fuster, V 2015, 'Linking the Development of Teamwork and Communication Skills in Higher Education' in Sustainable Learning in Higher Education, Springer International Publishing, pp. 63-73.
García, JÁ, de la Cruz del Río Rama, M, González-Vázquez, E & Lindahl, JMM 2015, 'Motivations for Implementing a System of Quality Management in Spanish Thalassotherapy Centers' in Health and Wellness Tourism, Springer International Publishing, pp. 101-115.
This article presents results from an empirical study of 31 thalassotherapy centers out of the total national number of 44 (as of 2011). The objective was to identify motivations that drive these centers to implement and certify a Quality Management System (QMS). Following a comprehensive theoretical review, the empirical research method consisted of descriptive and factor analyses to determine the importance and structure of motivations. Results show that the key motivations driving thalassotherapy centers to implement a QMS are enhancing service quality, improving processes and procedures, and creating awareness of quality in centers.
Guglyuvatyy, E & Stoianoff, NP 2015, 'Climate change law and policymaking: the utility of the Delphi method' in Kreiser, L, Andersen, MK, Olsen, BE, Speck, S, Milne, JE & Ashiabor, H (eds), Carbon Pricing, Edward Elgar Publishing, UK, pp. 177-190.
This chapter uses a policy evaluation study as an example of the utility of the Delphi method in climate change policymaking. In this study, the Delphi method assisted in prioritizing the criteria used in the evaluation. The need for policy evaluation is emphasized not only within environmental research; policymakers and administrators are also increasingly articulating the necessity for environmental policy evaluations. This chapter discusses the Delphi method as a useful instrument in environmental policy research. Based on the findings of a Delphi study conducted to facilitate the assessment of climate change policies in Australia, the authors analyse the strengths and limitations of the method.
Li, M, Li, J, Ou, Y & Luo, D 2015, 'A coupled similarity kernel for pairwise support vector machine' in Agents and Data Mining Interaction (LNCS), Springer, Germany, pp. 114-123.
The support vector machine (SVM) is a supervised learning model, with associated learning algorithms, that analyzes data and recognizes patterns. In various applications the SVM demonstrates strong classification performance; however, the original SVM was designed for numerical data. To apply the SVM to nominal data, most previous research replaced each nominal value with a number or transformed it into a one-hot vector. Neither method preserves the structure of the original nominal data or the similarity between its values, which leads to information loss and reduced classification performance. In this work, we design a novel coupled similarity metric between nominally attributed data. Because this metric is pairwise, we also propose an adapted SVM that can handle it. Experimental results show that the proposed method outperforms the traditional SVM and other popular classification methods on various public data sets.
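The limitation described above can be illustrated with a small sketch: one-hot encoding makes every pair of distinct nominal values equally dissimilar, whereas a similarity-aware measure can distinguish them. The `freq_similarity` function below is a hypothetical frequency-based measure used only for illustration; it is not the coupled metric proposed in the paper.

```python
from collections import Counter

def one_hot(value, categories):
    # One-hot encoding: every pair of distinct values is equally
    # dissimilar, so the structure of the nominal attribute is lost.
    return [1.0 if value == c else 0.0 for c in categories]

def freq_similarity(x, y, column):
    # Illustrative frequency-based intra-attribute similarity
    # (NOT the paper's coupled metric): distinct values get a graded
    # similarity derived from how often each occurs in the column.
    counts = Counter(column)
    fx, fy = counts[x], counts[y]
    return 1.0 if x == y else (fx * fy) / (fx * fy + fx + fy)

column = ["red", "red", "blue", "green", "red", "blue"]
print(one_hot("blue", ["red", "blue", "green"]))   # [0.0, 1.0, 0.0]
print(freq_similarity("blue", "green", column))    # 0.4
```

With one-hot vectors, ("blue", "green") and ("blue", "red") are equally far apart; the graded measure instead reflects the value frequencies, which is the kind of structure a coupled similarity aims to capture.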
Anaissi, A, Goyal, M, Catchpoole, DR, Braytee, A & Kennedy, PJ 2015, 'Case-Based Retrieval Framework for Gene Expression Data', Cancer Informatics, vol. 14, pp. CIN.S22371-CIN.S22371.
Background: The process of retrieving similar cases in a case-based reasoning system is considered a big challenge for gene expression data sets. The huge number of gene expression values generated by microarray technology leads to complex data sets, and similarity measures for high-dimensional data are problematic. Hence, gene expression similarity measurements require numerous machine-learning and data-mining techniques, such as feature selection and dimensionality reduction, to be incorporated into the retrieval process. Methods: This article proposes a case-based retrieval framework that uses a k-nearest-neighbor classifier with a weighted-feature-based similarity to retrieve previously treated patients based on their gene expression profiles. Results: The proposed methodology is validated on several data sets: a childhood leukemia data set collected from The Children's Hospital at Westmead, as well as the Colon cancer, the National Cancer Institute (NCI), and the Prostate cancer data sets. Results obtained by the proposed framework in retrieving patients of the data sets who are similar to new patients are as follows: 96% accuracy on the childhood leukemia data set, 95% on the NCI data set, 93% on the Colon cancer data set, and 98% on the Prostate cancer data set. Conclusion: The designed case-based retrieval framework is an appropriate choice for retrieving previous patients who are similar to a new patient, on the basis of their gene expression data, for better diagnosis and treatment of childhood leukemia. Moreover, this framework can be applied to other gene expression data sets using some or all of its steps.
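The core retrieval step, a k-nearest-neighbour search under a weighted-feature similarity, can be sketched as follows. The toy expression profiles, labels, and weights are invented for illustration; the actual framework additionally applies feature selection and dimensionality reduction before retrieval.

```python
import math

def weighted_distance(a, b, weights):
    # Weighted Euclidean distance over already-selected features;
    # larger weights make a feature count more in the comparison.
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def retrieve_k_nearest(query, cases, weights, k=3):
    # Rank stored (profile, label) cases by weighted distance to the
    # query profile and return the k most similar previous patients.
    ranked = sorted(cases, key=lambda c: weighted_distance(query, c[0], weights))
    return ranked[:k]

# Hypothetical two-gene profiles with diagnosis labels.
cases = [([0.9, 0.1], "ALL"), ([0.8, 0.2], "ALL"), ([0.1, 0.9], "AML")]
weights = [1.0, 0.5]
print(retrieve_k_nearest([0.85, 0.15], cases, weights, k=2))  # the two 'ALL' cases
```

In the full framework the retrieved neighbours' labels (or treatments) then inform the diagnosis of the new patient.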
Ashraf, J, Chang, E, Hussain, OK & Hussain, FK 2015, 'Ontology usage analysis in the ontology lifecycle: A state-of-the-art review', Knowledge-Based Systems, vol. 80, pp. 34-47.
Ashraf, J, Hussain, OK & Hussain, FK 2015, 'Making sense from Big RDF Data: OUSAF for measuring ontology usage', Software: Practice and Experience, vol. 45, no. 8, pp. 1051-1071.
Recent growth and advancements in the Semantic Web have shifted the research focus from being knowledge-centered to data-centered. This has led to the increased use of ontologies to structurally represent the data, thereby generating huge amounts of RDF data, which we term Big RDF Data. Nevertheless, the literature lacks the tools to analyze Big RDF Data and make sense of it. Access to such tools would enable pragmatic inputs and insights for users in respect of such tasks as the usage and adoption of ontologies, their uptake by different users in the community, and the identification of prevalent patterns. This analysis, which we term Ontology Usage, is important from the viewpoint of users who need informed inputs in the various stages of the ontology engineering lifecycle, such as ontology evolution, ontology population, and ontology deployment. In this paper, we propose the Ontology USage Analysis Framework (OUSAF), which performs analysis of Ontology Usage on Big RDF Data and synthesizes the usage knowledge acquired. OUSAF provides a methodological approach to performing the various phases, such as identifying, analyzing, representing, and utilizing the Ontology Usage results from Big RDF Data. We describe in detail each of those phases and the metrics required to perform the analysis of each phase. The utilization of the OUSAF results obtained by users such as data publishers and ontology developers is demonstrated using a dataset collected in the e-business domain.
Azadeh, A, Zia, NP, Saberi, M, Hussain, FK, Yoon, JH, Hussain, OK & Sadri, S 2015, 'A trust-based performance measurement modeling using t-norm and t-conorm operators', Applied Soft Computing, vol. 30, pp. 491-500.
Bonilla, CA, Merigó, JM & Torres-Abad, C 2015, 'Economics in Latin America: a bibliometric analysis', Scientometrics, vol. 105, no. 2, pp. 1239-1252.
Bibliometrics is a research field that quantitatively studies bibliographic material. This study analyzes the academic research developed in Latin America in economics between 1994 and 2013. The article uses the Web of Science database to collect the information and provides several bibliometric indicators, including the total number of publications and citations, and the h-index. The results indicate that Brazil, Mexico, Chile, Argentina and Colombia are the only countries with a significant number of publications in economics in Web of Science, although Costa Rica and Uruguay have considerable results in per capita terms. The annual evolution shows a significant increase during the last 5 years that seems likely to continue in the future, probably with the objective of reaching standards similar to those of the most competitive countries around the world. The results also show that development, agricultural and health economics are the most significant topics in the region.
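Of the indicators mentioned, the h-index has the least obvious definition: a unit (author, country, journal) has index h if h of its publications have received at least h citations each. A minimal sketch of the computation:

```python
def h_index(citations):
    # h = largest h such that at least h papers have >= h citations each.
    # Sort descending, then walk down: at rank i the paper needs >= i cites.
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note that a single highly cited paper barely moves the index, which is why it is often paired with raw publication and citation counts, as in this study.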
Cao, L, Yu, PS & Kumar, V 2015, 'Nonoccurring Behavior Analytics: A New Area', IEEE Intelligent Systems, vol. 30, no. 6, pp. 4-11.
Casanovas, M, Torres-Martínez, A & Merigó, JM 2015, 'Decision making processes of non-life insurance pricing using fuzzy logic and OWA operators', Economic Computation and Economic Cybernetics Studies and Research, vol. 49, no. 2, pp. 169-187.
Setting a commercial premium for an insurance policy is a complex process, even though statistical tools provide fairly reliable information on the behavior of the frequency and cost of claims, differentiated by the risk profiles reflected in pure premium calculations. However, setting the price the customer must pay has lately not been easy because of the uncertainty of having to use subjective criteria to analyze how demand may be affected by different price alternatives and economic situations. This article aims to develop this process in two stages. The first stage draws on the opinion of experts, applied to uncertain numbers, and Ordered Weighted Average (OWA) operators to assess the overall benefits of each profile and choose the best alternative. The second stage, which uses Heavy OWA (HOWA) operators, is based on the results obtained in the first stage and chooses a general price alternative for all profiles.
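The OWA aggregation underlying the first stage can be sketched as below; the premium figures and weight vectors are invented for illustration. An OWA operator sorts its arguments in descending order and applies a weight vector summing to one to the ordered positions, so the weights model the decision maker's optimism or pessimism rather than the importance of individual sources; the Heavy OWA variant relaxes the unit-sum constraint on the weights.

```python
def owa(values, weights):
    # OWA: reorder the arguments in descending order, then take the
    # weighted sum; weights attach to ranked positions, not to sources.
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

premiums = [120.0, 100.0, 80.0]   # hypothetical expert assessments of one profile
optimistic = [0.6, 0.3, 0.1]      # emphasis on the larger values
pessimistic = [0.1, 0.3, 0.6]     # emphasis on the smaller values
print(owa(premiums, optimistic))   # 110.0
print(owa(premiums, pessimistic))  # 90.0
```

The same inputs thus yield different aggregate premiums depending on the attitude encoded in the weights, which is what lets the method express different demand and economic scenarios.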
Chen, Q, Luo, H, Zhang, C & Chen, Y-PP 2015, 'Bioinformatics in protein kinases regulatory network and drug discovery', Mathematical Biosciences, vol. 262, pp. 147-156.
Protein kinases have been implicated in a number of diseases, as kinases participate in many processes that control cell growth, movement and death. Deregulated kinase activities, and knowledge of these disorders, are of great clinical interest for drug discovery. The most critical issue is the development of safe and efficient disease diagnosis and treatment at lower cost and in less time, and it is critical to develop innovative approaches that aim at the root cause of a disease, not just its symptoms. Bioinformatics, which includes genetic, genomic, mathematical and computational technologies, has become the most promising option for effective drug discovery, and has shown its potential in the early stages of drug-target identification and target validation. It is essential that these aspects are understood and integrated into new methods used in drug discovery for diseases arising from deregulated kinase activity. This article reviews bioinformatics techniques for protein kinase data management and analysis, kinase pathways and drug targets, and describes their potential application in the pharmaceutical industry.
Cui, Y, Zhang, J, Guo, D & Jin, Z 2015, 'Robust facial landmark localization using classified random ferns and pose-based initialization', Signal Processing, vol. 110, pp. 46-53.
Deng, S, Wang, D, Li, X & Xu, G 2015, 'Exploring user emotion in microblogs for music recommendation', Expert Systems with Applications, vol. 42, no. 23, pp. 9284-9293.
Context-aware recommendation has become increasingly important and popular in recent years as users are immersed in enormous music content and have difficulty making choices. User emotion, as one of the most important contexts, has the potential to improve music recommendation, but has not yet been fully explored due to the great difficulty of emotion acquisition. This article utilizes users' microblogs to extract their emotions at different granularity levels and during different time windows. The approach then correlates three elements: the user, the music, and the user's emotion when he/she is listening to the music piece. Based on the associations extracted from a data set crawled from a Chinese Twitter service, we develop several emotion-aware methods to perform music recommendation. We conduct a series of experiments and show that considering user emotional context can indeed improve recommendation performance in terms of hit rate, precision, recall, and F1 score.
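The evaluation measures named at the end (hit rate, precision, recall, F1) can be computed per user for a top-N recommendation list as in this sketch; the song identifiers are invented, and these are the standard definitions of the metrics rather than code from the paper.

```python
def topn_metrics(recommended, relevant):
    # Evaluate one user's top-N recommendation list against the set of
    # items the user actually consumed (the ground truth).
    hits = [item for item in recommended if item in relevant]
    precision = len(hits) / len(recommended) if recommended else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    hit_rate = 1.0 if hits else 0.0   # did the list contain any relevant item?
    return hit_rate, precision, recall, f1

recommended = ["song_a", "song_b", "song_c", "song_d"]
relevant = {"song_b", "song_e"}
print(topn_metrics(recommended, relevant))  # (1.0, 0.25, 0.5, 0.3333...)
```

Averaging these per-user values over all test users gives the aggregate figures typically reported in recommendation experiments.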
Deng, Z, Cao, L, Jiang, Y & Wang, S 2015, 'Minimax Probability TSK Fuzzy System Classifier: A More Transparent and Highly Interpretable Classification Model', IEEE Transactions on Fuzzy Systems, vol. 23, no. 4, pp. 813-826.
When an intelligent model is used for medical diagnosis, it is desirable to have a high level of interpretability and transparent model reliability for users. Compared with most of the existing intelligence models, fuzzy systems have shown a distinctive advantage in their interpretabilities. However, how to determine the model reliability of a fuzzy system trained for a recognition task is still an unsolved problem at present. In this study, a minimax probability Takagi-Sugeno-Kang (TSK) fuzzy system classifier called MP-TSK-FSC is proposed to train a fuzzy system classifier and determine the model reliability simultaneously. For the proposed MP-TSK-FSC, a lower bound of correct classification can be presented to the users to characterize the reliability of the trained fuzzy classifier. Thus, the obtained classifier has the distinctive characteristics of both a high level of interpretability and transparent model reliability inherited from the fuzzy system and minimax probability learning strategy, respectively. Our experiments on synthetic datasets and several real-world datasets for medical diagnosis have confirmed the distinctive characteristics of the proposed method.
Dong, H & Hussain, FK 2015, 'Service-requester-centered service selection and ranking model for digital transportation ecosystems', Computing, vol. 97, no. 1, pp. 79-102.
Transport services are a fundamental utility that drives human society. A Digital Transportation Ecosystem is a sub-system of the Digital Ecosystem which uses ICT resources to facilitate transport service transactions. This research focuses on the selection and ranking of online transport service information. Previous research in this area has been unable to achieve satisfactory performance or give sufficient freedom to service requesters to rank services based on their preferences. User-centered design is a broad term describing how end-users influence system design. In this research, we propose a Service-Requester-Centered Service Selection and Ranking Model, guided by the philosophy of user-centered design. Three major sub-models are involved: a model for assisting service requesters to search appropriate transport service ontology concepts to denote their service requests, a model for enhancing the accuracy of automatic transport service concept recommendation by observing service requesters' click behaviours, and a model for enabling service-requester-preference-based service ranking. Implementations and empirical experiments are conducted to evaluate the three sub-models; conclusions are drawn and directions for future work are outlined.
Fan, H, Hussain, FK & Hussain, OK 2015, 'Semantic client-side approach for web personalization of SaaS-based cloud services', Concurrency and Computation: Practice and Experience, vol. 27, no. 8, pp. 2144-2169.
Fan, H, Hussain, FK, Younas, M & Hussain, OK 2015, 'An integrated personalization framework for SaaS-based cloud services', Future Generation Computer Systems, vol. 53, pp. 157-173.
Software as a Service (SaaS) has recently emerged as one of the most popular service delivery models in cloud computing. The number of SaaS services and their users is continuously increasing, and new SaaS service providers emerge on a regular basis. As users are exposed to a wide range of SaaS services, they may soon become more demanding when receiving/consuming such services. As in web and/or mobile applications, personalization can play a critical role in modern SaaS-based cloud services. This paper introduces a fully designed, cloud-enabled personalization framework to facilitate the collection of preferences and the delivery of corresponding SaaS services. The approach we adopt in the design and development of the proposed framework is to synthesize various models and techniques in a novel way. The objective is to provide an integrated and structured environment wherein SaaS services can be provisioned with enhanced personalization quality and performance.
Fan, X & Cao, L 2015, 'A convergence theorem for graph shift-type algorithms', Pattern Recognition, vol. 48, no. 8, pp. 2751-2760.
The Robust Graph mode seeking by Graph Shift (RGGS) algorithm (Liu and Yan, 2010) represents a recent promising approach for discovering dense subgraphs in noisy data. However, there are no theoretical foundations for proving the convergence of the RGGS algorithm, leaving open the question of whether the algorithm works for solid reasons. In this paper, we propose a generic theoretical framework consisting of three key Graph Shift (GS) components: the simplex of a generated sequence set, the monotonic and continuous objective function, and closed mapping. We prove that GS-type algorithms built on such components can be transformed to fit Zangwill's theory, and that the sequence set generated by the GS procedures always terminates at a local maximum or, at worst, contains a subsequence which converges to a local maximum of the similarity measure function. The framework is verified by theoretical analysis and experimental results of several typical GS-type algorithms.
Fan, X, Cao, L & Da Xu, RY 2015, 'Dynamic Infinite Mixed-Membership Stochastic Blockmodel', IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 9, pp. 2072-2085.
Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its capabilities have since been extended. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, with an unknown number of communities. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely mixture time variant and mixture time invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies and their results are presented, respectively, using synthetic and real-world data.
Fariha, A, Ahmed, CF, Leung, CK, Samiullah, M, Pervin, S & Cao, L 2015, 'A new framework for mining frequent interaction patterns from meeting databases', Engineering Applications of Artificial Intelligence, vol. 45, pp. 103-118.
Fournier-Viger, P, Wu, C-W, Tseng, VS, Cao, L & Nkambou, R 2015, 'Mining Partially-Ordered Sequential Rules Common to Multiple Sequences', IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 8, pp. 2203-2216.
Gao, F, Musial, K, Cooper, C & Tsoka, S 2015, 'Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics', Scientific Programming, vol. 2015, pp. 1-13.
Currently, we are experiencing rapid growth in the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges that we face when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods have not been comprehensively analysed. In this paper we investigate the correlation between network metrics and the accuracy of different prediction methods. We selected six time-stamped real-world social networks and the ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish “prediction friendly” networks, for which most of the prediction methods give good performance, as well as “prediction unfriendly” networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and the prediction accuracy of prediction methods may form the basis of a metalearning system which, based on network characteristics, recommends the right prediction method for a given network.
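Three of the most widely used neighbourhood-based link prediction scores can be sketched on a toy graph; the graph itself is invented for illustration, and which ten methods the paper actually benchmarks is not reproduced here.

```python
import math

def common_neighbours(graph, u, v):
    # Score a candidate link (u, v) by how many neighbours u and v share.
    return len(graph[u] & graph[v])

def jaccard(graph, u, v):
    # Shared neighbours normalised by the size of the combined neighbourhood.
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

def adamic_adar(graph, u, v):
    # Shared neighbours with small degree contribute more to the score.
    return sum(1.0 / math.log(len(graph[z]))
               for z in graph[u] & graph[v] if len(graph[z]) > 1)

# Adjacency sets for a toy undirected network.
g = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
print(common_neighbours(g, "b", "d"))   # 2
print(round(jaccard(g, "b", "d"), 3))   # 1.0
```

Node pairs are ranked by such scores, and the highest-scoring non-edges are predicted as future links; comparing these rankings against time-stamped ground truth is how prediction accuracy is measured.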
Ghosh, S & Li, J 2015, 'Using sequential patterns as features for classification models to make accurate predictions on ICU events.', Annu Int Conf IEEE Eng Med Biol Soc, vol. 2015, pp. 8157-8160.
Pattern mining algorithms have previously been utilized to extract informative rules in various clinical contexts. However, the number of generated patterns is typically very large. In most cases, the extracted rules are directly investigated by clinicians for understanding disease diagnoses. The elicitation of important patterns for clinical investigation places a significant demand on precision and interpretability. Hence, it is essential to obtain a set of informative, interpretable patterns for building advanced learning models of a patient's physiological condition, especially in critical care units. In this study, a two-stage classification framework based on sequential contrast patterns is presented, which is used to detect critical patient events like hypotension. In the first stage, we obtain a set of sequential patterns by using a contrast mining algorithm. In the second stage, these sequential patterns undergo post-processing for conversion into binary-valued and frequency-based features for developing a classification model. Our results on eight critical care datasets demonstrate better predictive capabilities when sequential patterns are used as features.
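The second-stage feature construction, turning each mined sequential pattern into a 0/1 feature for a classifier, can be sketched as follows. The event names are invented, and the subsequence-matching semantics shown is one common choice in sequential pattern mining rather than necessarily the paper's exact definition.

```python
def contains_pattern(sequence, pattern):
    # True if 'pattern' occurs in 'sequence' as a (not necessarily
    # contiguous) subsequence, preserving the order of events.
    it = iter(sequence)
    return all(event in it for event in pattern)

def to_binary_features(sequence, patterns):
    # Second-stage feature construction: one binary column per mined pattern.
    return [1 if contains_pattern(sequence, p) else 0 for p in patterns]

# Hypothetical mined patterns and one patient episode of physiological events.
patterns = [("bp_drop", "hr_rise"), ("hr_rise", "bp_drop")]
episode = ["hr_rise", "stable", "bp_drop", "hr_rise"]
print(to_binary_features(episode, patterns))  # [1, 1]
```

The resulting binary vectors (optionally alongside frequency-based counts) then serve as the feature matrix for an ordinary classifier that predicts events such as hypotension.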
Goodswen, SJ, Barratt, JLN, Kennedy, PJ & Ellis, JT 2015, 'Improving the gene structure annotation of the apicomplexan parasite Neospora caninum fulfils a vital requirement towards an in silico-derived vaccine', International Journal for Parasitology, vol. 45, no. 5, pp. 305-318.
Neospora caninum is an apicomplexan parasite which can cause abortion in cattle, instigating major economic burden. Vaccination has been proposed as the most cost-effective control measure to alleviate this burden. Consequently the overriding aspiration for N. caninum research is the identification and subsequent evaluation of vaccine candidates in animal models. To save time, cost and effort, it is now feasible to use an in silico approach for vaccine candidate prediction. Precise protein sequences, derived from the correct open reading frame, are paramount and arguably the most important factor determining the success or failure of this approach. The challenge is that publicly available N. caninum sequences are mostly derived from gene predictions. Annotated inaccuracies can lead to erroneously predicted vaccine candidates by bioinformatics programs. This study evaluates the current N. caninum annotation for potential inaccuracies. Comparisons with annotation from a closely related pathogen, Toxoplasma gondii, are also made to distinguish patterns of inconsistency. More importantly, a mRNA sequencing (RNA-Seq) experiment is used to validate the annotation. Potential discrepancies originating from a questionable start codon context and exon boundaries were identified in 1943 protein coding sequences. We conclude, where experimental data were available, that the majority of N. caninum gene sequences were reliably predicted. Nevertheless, almost 28% of genes were identified as questionable. Given the limitations of RNA-Seq, the intention of this study was not to replace the existing annotation but to support or oppose particular aspects of it. Ideally, many studies aimed at improving the annotation are required to build a consensus. We believe this study, in providing a new resource on gene structure and annotation, is a worthy contributor to this endeavour.
Hasan, MM, Zhou, Y, Lu, X, Li, J, Song, J & Zhang, Z 2015, 'Computational Identification of Protein Pupylation Sites by Using Profile-Based Composition of k-Spaced Amino Acid Pairs', PLOS ONE, vol. 10, no. 6, pp. e0129635-e0129635.
Prokaryotic proteins are regulated by pupylation, a type of post-translational modification that contributes to cellular function in bacterial organisms. In the pupylation process, prokaryotic ubiquitin-like protein (Pup) tagging is functionally analogous to ubiquitination and tags target proteins for proteasomal degradation. To date, several experimental methods have been developed to identify pupylated proteins and their pupylation sites, but these experimental methods are generally laborious and costly. Therefore, computational methods that can accurately predict potential pupylation sites based on protein sequence information are highly desirable. In this paper, a novel predictor termed pbPUP has been developed for accurate prediction of pupylation sites. In particular, a sophisticated sequence encoding scheme [i.e. the profile-based composition of k-spaced amino acid pairs (pbCKSAAP)] is used to represent the sequence patterns and evolutionary information of the sequence fragments surrounding pupylation sites. Then, a Support Vector Machine (SVM) classifier is trained using the pbCKSAAP encoding scheme. The final pbPUP predictor achieves an AUC value of 0.849 in 10-fold cross-validation tests and outperforms other existing predictors on a comprehensive independent test dataset. The proposed method is anticipated to be a helpful computational resource for the prediction of pupylation sites. The web server and curated datasets in this study are freely available at http://protein.cau.edu.cn/pbPUP/.
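The plain (non-profile-based) CKSAAP encoding that pbCKSAAP builds on counts residue pairs separated by exactly k positions and normalises by the number of such pairs in the fragment; the profile-based variant in the paper additionally weights the counts with PSSM-derived evolutionary information, which is not reproduced here. A minimal sketch of the plain encoding:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def cksaap(fragment, k):
    # Composition of k-spaced amino acid pairs: for gap k, count every
    # residue pair (x, y) with exactly k residues between them, then
    # normalise by the number of such pairs. The fragment must be
    # longer than k + 1 residues.
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    counts = dict.fromkeys(pairs, 0)
    total = len(fragment) - k - 1
    for i in range(total):
        counts[fragment[i] + fragment[i + k + 1]] += 1
    return [counts[p] / total for p in pairs]

vec = cksaap("MKKAGG", k=1)  # 1-spaced pairs: MK, KA, KG, AG
print(len(vec))                          # 400 features (20 x 20 pairs)
print(sum(1 for v in vec if v > 0))      # 4
```

Concatenating the vectors for several gap sizes (e.g. k = 0..5) gives the fixed-length feature vector that is fed to the SVM classifier.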
Hussain, OK, Zia-ur-Rahman, Hussain, FK, Singh, J, Janjua, NK & Chang, E 2015, 'A User-Based Early Warning Service Management Framework in Cloud Computing', The Computer Journal, vol. 58, no. 3, pp. 472-496.
Cloud computing is a very attractive option for service users and service providers for their businesses because of the benefits it provides. A major concern among service users regarding cloud adoption, however, is the unpredictability of performance in relation to the services provided. Even though guarantees in the form of service-level agreements are provided to users by service providers, real-time service-level degradation remains a critical concern; hence, there is a need for an approach that assists users to manage a service before it fails. The approaches proposed in the literature assess and evaluate the performance of the cloud infrastructure of providers, but this does not guarantee that a given service instance will meet the desired quality level because there may be factors other than the provider's infrastructure that will affect the level of quality of the service instance. In this paper, we present an approach that measures the quality of a service instance in real time and provides important analysis for service users as to whether they will achieve their desired objectives. This analysis also constitutes an important input for service users in the assessment and management of a service to avoid the failure to achieve objectives.
Janjua, NK, Hussain, OK, Hussain, FK & Chang, E 2015, 'Philosophical and Logic-Based Argumentation-Driven Reasoning Approaches and their Realization on the WWW: A Survey', The Computer Journal, vol. 58, no. 9, pp. 1967-1999.
Argumentation is the practice of systematic conscious reasoning involving the construction and evaluation of arguments to justify or support a particular conclusion. This article discusses, compares, contrasts and categorizes existing argumentation-based frameworks and applications as either philosophical or logic-based, and provides critical analysis that emphasizes the structure of arguments and the interactions between them. This review compares and contrasts the frameworks and applications of argumentation-based approaches on Web 2.0 and the Semantic Web, and subsequently highlights the importance and challenges of attaining monological argumentation on the Semantic Web.
Jeong, Y-S, Shyu, M-L, Xu, G & Wagner, RR 2015, 'Guest Editorial: Advanced Technologies and Services for Multimedia Big Data Processing', Multimedia Tools and Applications, vol. 74, no. 10, pp. 3413-3418.
Jiang, Y, Tsai, P, Hao, Z & Cao, L 2015, 'Automatic multilevel thresholding for image segmentation using stratified sampling and Tabu Search', Soft Computing, vol. 19, no. 9, pp. 2605-2617.
View/Download from: Publisher's site
View description>>
Image segmentation techniques have been widely applied in many fields such as pattern recognition and feature extraction. In the primate visual attention model, perceptual organization is an important process for automatically extracting the desired features. In this article, we propose a new method, an automatic multilevel thresholding algorithm using stratified sampling and Tabu Search (AMTSSTS), that imitates primate visual perceptual behavior. In the AMTSSTS algorithm, a gray image is treated as a population whose individuals are the gray values of its pixels. First, the image is evenly divided into several strata (blocks) and a sample is drawn from each stratum. Second, a Tabu Search-based optimization is applied to each sample to maximize its ratio of mean to variance. The threshold number and threshold values are preliminarily determined from the optimized samples and further refined by a deterministic method that includes a new local criterion function exploiting the local continuity of an image. Results of extensive simulations on the Berkeley datasets indicate that AMTSSTS obtains more effective, efficient and smooth segmentations, and can be applied in complex and real-time environments. © 2014 Springer-Verlag Berlin Heidelberg.
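The sampling-and-search pipeline described in the abstract above can be sketched in outline. The following Python is an illustrative toy, not the AMTSSTS implementation: the synthetic image, the `separation` criterion (a simplification of the paper's mean/variance ratio) and all parameters are invented for the sketch.

```python
import random

def stratified_sample(pixels, n_strata, per_stratum, seed=0):
    """Divide the pixel sequence into equal strata and draw a small
    random sample from each stratum."""
    rng = random.Random(seed)
    size = len(pixels) // n_strata
    sample = []
    for i in range(n_strata):
        stratum = pixels[i * size:(i + 1) * size]
        sample.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return sample

def tabu_search(candidates, score, iters=30, tabu_len=5, seed=0):
    """Generic Tabu Search over a finite candidate set, keeping a
    short-term tabu list of recently visited solutions."""
    rng = random.Random(seed)
    current = rng.choice(candidates)
    best, best_score = current, score(current)
    tabu = [current]
    for _ in range(iters):
        moves = [c for c in candidates if c not in tabu]
        if not moves:
            break
        current = max(moves, key=score)
        if score(current) > best_score:
            best, best_score = current, score(current)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
    return best

# Toy bimodal "image": dark pixels around 50-59, bright pixels around 200-209.
pixels = [50 + i % 10 for i in range(500)] + [200 + i % 10 for i in range(500)]
sample = stratified_sample(pixels, n_strata=10, per_stratum=5)

def separation(t):
    # Difference of class means induced by threshold t on the sample
    # (a simplification of the paper's mean/variance criterion).
    lo = [p for p in sample if p <= t]
    hi = [p for p in sample if p > t]
    return abs(sum(hi) / len(hi) - sum(lo) / len(lo)) if lo and hi else 0.0

best_t = tabu_search(list(range(0, 256, 8)), separation)
# best_t lands between the two modes of the toy image
```

The search operates on the small stratified sample rather than all pixels, which is the source of the speed-up the abstract claims.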
Jin, D, Gabrys, B & Dang, J 2015, 'Combined node and link partitions method for finding overlapping communities in complex networks', Scientific Reports, vol. 5, no. 1, p. 8600.
View/Download from: Publisher's site
View description>>
AbstractCommunity detection in complex networks is a fundamental data analysis task in various domains and how to effectively find overlapping communities in real applications is still a challenge. In this work, we propose a new unified model and method for finding the best overlapping communities on the basis of the associated node and link partitions derived from the same framework. Specifically, we first describe a unified model that accommodates node and link communities (partitions) together and then present a nonnegative matrix factorization method to learn the parameters of the model. Thereafter, we infer the overlapping communities based on the derived node and link communities, i.e., determine each overlapped community between the corresponding node and link community with a greedy optimization of a local community function conductance. Finally, we introduce a model selection method based on consensus clustering to determine the number of communities. We have evaluated our method on both synthetic and real-world networks with ground-truths and compared it with seven state-of-the-art methods. The experimental results demonstrate the superior performance of our method over the competing ones in detecting overlapping communities for all analysed data sets. Improved performance is particularly pronounced in cases of more complicated networked community structures.
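The factorization step can be illustrated with a much simpler, textbook variant: symmetric NMF of an adjacency matrix with damped multiplicative updates. This is a stand-in for the paper's unified node/link model, not its method; the toy two-clique graph and all parameters are invented.

```python
import numpy as np

def symm_nmf(A, k, iters=200, seed=0):
    """Symmetric NMF A ~ W W^T via damped multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k)) + 0.1   # strictly positive init
    for _ in range(iters):
        num = A @ W
        den = W @ (W.T @ W) + 1e-9
        W = W * (0.5 + 0.5 * num / den)     # damped update keeps W positive
    return W

# Toy network: two 4-node cliques joined by one bridge edge.
A = np.zeros((8, 8))
for block in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # bridge

W = symm_nmf(A, k=2)
labels = W.argmax(axis=1)   # hard node communities; the two cliques typically separate
err = np.linalg.norm(A - W @ W.T)
```

The paper goes further by factorizing node and link partitions jointly and then resolving overlaps by optimizing conductance; the sketch only shows the node side.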
Jing, D, Bhadri, VA, Beck, D, Thoms, JAI, Yakob, NA, Wong, JWH, Knezevic, K, Pimanda, JE & Lock, RB 2015, 'Opposing regulation of BIM and BCL2 controls glucocorticoid-induced apoptosis of pediatric acute lymphoblastic leukemia cells', Blood, vol. 125, no. 2, pp. 273-283.
View/Download from: Publisher's site
Kemp, M & Xu, RYD 2015, 'Geometrically-constrained balloon fitting for multiple connected ellipses', Pattern Recognition, vol. 48, no. 7, pp. 2198-2208.
View/Download from: Publisher's site
Le Thi My, H, Nguyen Thanh, B & Khuat Thanh, T 2015, 'Survey on Mutation-based Test Data Generation', International Journal of Electrical and Computer Engineering (IJECE), vol. 5, no. 5, p. 1164.
View/Download from: Publisher's site
View description>>
The critical activity of testing is the systematic selection of suitable test cases that are able to reveal faults effectively. Mutation coverage is therefore an effective criterion for generating test data. Since the test data generation process is labor-intensive, time-consuming and error-prone when done manually, automating it is highly desirable. Research on automatic test data generation has contributed a set of tools, approaches, developments and empirical results. In this paper, we analyse and conduct a comprehensive survey of mutation-based test data generation. The paper also analyses the trends in this field.
Lemke, C, Budka, M & Gabrys, B 2015, 'Metalearning: a survey of trends and technologies', Artificial Intelligence Review, vol. 44, no. 1, pp. 117-130.
View/Download from: Publisher's site
Li, X, Xu, G, Chen, E & Zong, Y 2015, 'Learning recency based comparative choice towards point-of-interest recommendation', Expert Systems with Applications, vol. 42, no. 9, pp. 4274-4283.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. With the prevalence of GPS-enabled smart phones, the Location Based Social Network (LBSN) has emerged and become a hot research topic during the past few years. As one of the most important components in LBSN, Points-of-Interest (POIs) have been extensively studied by both academia and industry, yielding POI recommendations that enhance user experience in exploring the city. In conventional methods, rating vectors for both users and POIs are utilized for similarity calculation, which might yield inaccuracy due to differences in user biases. In our opinion, the rating values themselves do not give exact preferences of users; however, the numeric order of ratings given by a user within a certain period provides a hint of that user's preference order over POIs. First, we propose an approach to model user preferences by employing utility theory. Second, we devise a collection-wise learning method over partial orders through an effective stochastic gradient descent algorithm. We test our model on two real world datasets, i.e., Yelp and TripAdvisor, by comparing with some state-of-the-art approaches including PMF and several user preference modeling methods. In terms of MAP and Recall, we achieve an average improvement of 15% over the baseline methods. The results show the significance of comparative choice within a certain time window and its superiority over existing methods.
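The idea of learning from the order of ratings rather than their values can be illustrated with a much simpler pairwise model. The sketch below fits Bradley-Terry-style utilities by SGD on observed "i preferred over j" pairs; `fit_pairwise` and the toy data are hypothetical, not the paper's collection-wise method over partial orders.

```python
import math, random

def fit_pairwise(prefs, n_items, epochs=200, lr=0.1, seed=0):
    """Fit latent item utilities from observed pairwise choices
    (i preferred over j) with logistic loss and SGD."""
    rng = random.Random(seed)
    s = [0.0] * n_items
    for _ in range(epochs):
        rng.shuffle(prefs)
        for i, j in prefs:               # i was chosen over j
            p = 1.0 / (1.0 + math.exp(s[j] - s[i]))  # P(i beats j)
            s[i] += lr * (1.0 - p)       # gradient step on -log p
            s[j] -= lr * (1.0 - p)
    return s

# Observed choices: item 0 beats 1, 1 beats 2, 0 beats 2.
prefs = [(0, 1), (1, 2), (0, 2)] * 5
scores = fit_pairwise(prefs, n_items=3)
# Recovered utilities respect the observed order: scores[0] > scores[1] > scores[2]
```

Note that only the comparisons enter the loss; absolute rating values (and hence user rating biases) play no role, which is the point the abstract makes.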
Liao, H, Xu, Z, Zeng, X-J & Merigo, JM 2015, 'Framework of Group Decision Making With Intuitionistic Fuzzy Preference Information', IEEE Transactions on Fuzzy Systems, vol. 23, no. 4, pp. 1211-1227.
View/Download from: Publisher's site
Liao, H, Xu, Z, Zeng, X-J & Merigó, JM 2015, 'Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets', Knowledge-Based Systems, vol. 76, pp. 127-138.
View/Download from: Publisher's site
Liu, H, Zhang, J, Ngo, HH, Guo, W, Wu, H, Cheng, C, Guo, Z & Zhang, C 2015, 'Carbohydrate-based activated carbon with high surface acidity and basicity for nickel removal from synthetic wastewater', RSC Advances, vol. 5, no. 64, pp. 52048-52056.
View/Download from: Publisher's site
View description>>
The feasibility of preparing activated carbon from carbohydrates (glucose, sucrose and starch) with H3PO4 activation was evaluated by comparing its physicochemical properties and Ni(II) adsorption ability with a reference Phragmites australis-based activated carbon.
Liu, Q, Ren, J, Song, J & Li, J 2015, 'Co-Occurring Atomic Contacts for the Characterization of Protein Binding Hot Spots', PLOS ONE, vol. 10, no. 12, p. e0144486.
View/Download from: Publisher's site
View description>>
© 2015 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. A binding hot spot is a small area at a protein-protein interface that can make significant contribution to binding free energy. This work investigates the substantial contribution made by some special co-occurring atomic contacts at a binding hot spot. A co-occurring atomic contact is a pair of atomic contacts that are close to each other with no more than three covalent-bond steps. We found that two kinds of co-occurring atomic contacts can play an important part in the accurate prediction of binding hot spot residues. One is the co-occurrence of two nearby hydrogen bonds. For example, mutations of any residue in a hydrogen bond network consisting of multiple co-occurring hydrogen bonds could disrupt the interaction considerably. The other kind of co-occurring atomic contact is the co-occurrence of a hydrophobic carbon contact and a contact between a hydrophobic carbon atom and a π ring. In fact, this co-occurrence signifies the collective effect of hydrophobic contacts. We also found that the B-factor measurements of several specific groups of amino acids are useful for the prediction of hot spots. Taking the B-factor, individual atomic contacts and the co-occurring contacts as features, we developed a new prediction method and thoroughly assessed its performance via cross-validation and independent dataset test. The results show that our method achieves higher prediction performance than well-known methods such as Robetta, FoldX and Hotpoint. We conclude that these contact descriptors, in particular the novel co-occurring atomic contacts, can be used to facilitate accurate and interpretable characterization of protein binding hot spots.
Liu, Q, Song, R & Li, J 2015, 'Inference of gene interaction networks using conserved subsequential patterns from multiple time course gene expression datasets', BMC Genomics, vol. 16, no. S12, pp. 1-16.
View/Download from: Publisher's site
View description>>
© 2015 Liu et al. Motivation: Deciphering gene interaction networks (GINs) from time-course gene expression (TCGx) data is highly valuable for understanding gene behaviors (e.g., activation, inhibition, time-lagged causality) at the system level. Existing methods usually use a global or local proximity measure to infer GINs from a single dataset. As the noise contained in a single dataset can hardly be resolved using that dataset alone, the results are sometimes unreliable. Also, these proximity measures cannot handle the co-existence of the various in vivo positive, negative and time-lagged gene interactions. Methods and results: We propose to infer reliable GINs from multiple TCGx datasets using a novel conserved subsequential pattern of gene expression. A subsequential pattern is a maximal subset of genes sharing positive, negative or time-lagged correlations of one expression template on their own subsets of time points. Based on these patterns, a GIN can be built from each of the datasets. It is assumed that reliable gene interactions would be detected repeatedly. We thus use conserved gene pairs from the individual GINs of the multiple TCGx datasets to construct a reliable GIN for a species. We apply our method to six TCGx datasets related to the yeast cell cycle, and validate the reliable GINs using protein interaction networks, biopathways and transcription factor-gene regulations. We also compare the reliable GINs with those reconstructed from single datasets by the Pearson correlation coefficient, a global proximity measure. It has been demonstrated that our reliable GINs achieve much better prediction performance, especially much higher precision. The functional enrichment analysis also suggests that gene sets in a reliable GIN are more functionally significant. Our method is especially useful for deciphering GINs from multiple TCGx datasets related to less studied organisms where little knowledge is available except gene expression data.
Liu, W, Deng, Z-H, Cao, L, Xu, X, Liu, H & Gong, X 2015, 'Mining Top K Spread Sources for a Specific Topic and a Given Node', IEEE Transactions on Cybernetics, vol. 45, no. 11, pp. 2472-2483.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In social networks, nodes (or users) interested in specific topics are often influenced by others. The influence is usually associated with a set of nodes rather than a single one. An interesting but challenging task, for any given topic and node, is to find the set of nodes that represents the source or trigger for the topic, and thus to identify the nodes that have the greatest influence on the given node as the topic spreads. We find that this is an NP-hard problem. This paper proposes an effective framework to deal with it. First, the topic propagation is represented as a Bayesian network. We then construct the propagation model by a variant of the voter model. The probability transition matrix (PTM) algorithm is presented to conduct the probability inference with complexity O(θ³log²θ), where θ is the number of nodes in the given graph. To evaluate the PTM algorithm, we conduct extensive experiments on real datasets. The experimental results show that the PTM algorithm is both effective and efficient.
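The transition-matrix machinery can be illustrated crudely: propagate a probability vector through a row-stochastic matrix built from a toy graph. This is only a generic random-walk sketch, not the PTM algorithm; the graph and step count are invented.

```python
import numpy as np

def walk_distribution(adj, start, steps):
    """Propagate a probability vector through a row-stochastic
    transition matrix derived from an adjacency matrix."""
    P = adj / adj.sum(axis=1, keepdims=True)
    dist = np.zeros(adj.shape[0])
    dist[start] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return dist

# Line graph 0-1-2-3 with self-loops so every row has a nonzero sum.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
d = walk_distribution(adj, start=0, steps=3)
# Mass stays concentrated near the start; the far node 3 receives the least.
```

In the paper, inference over the voter-model propagation plays the role that repeated multiplication by `P` plays here.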
Liu, W, Jia, S, Li, P, Chen, X, Yang, J & Wu, Q 2015, 'An MRF-Based Depth Upsampling: Upsample the Depth Map With Its Own Property', IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1708-1712.
View/Download from: Publisher's site
Liu, W, Xue, H, Yu, Z, Wu, Q & Yang, J 2015, 'RGB-D depth-map restoration using smooth depth neighborhood supports', Journal of Electronic Imaging, vol. 24, no. 3, p. 033015.
View/Download from: Publisher's site
Liu, X, Wang, L, Huang, G-B, Zhang, J & Yin, J 2015, 'Multiple kernel extreme learning machine', Neurocomputing, vol. 149, pp. 253-264.
View/Download from: Publisher's site
Liu, Z, Zhang, Z, Wu, Q & Wang, Y 2015, 'Enhancing person re-identification by integrating gait biometric', Neurocomputing, vol. 168, pp. 1144-1156.
View/Download from: Publisher's site
View description>>
Person re-identification is an important problem for associating the behavior of people monitored in surveillance camera networks. The fundamental challenges of person re-identification are the large appearance distortions caused by view angles, illumination and occlusions. To address these challenges, a method is proposed in this paper to enhance person re-identification by integrating gait biometric. The proposed framework consists of hierarchical feature extraction and descriptor matching with learned metric matrices. Because the appearance feature alone is not discriminative in some cases, the descriptor in this work combines appearance features with a gait feature that captures shape and temporal information. To solve the view-angle change problem and measure similarity, data are mapped into a metric space so that distances between people can be measured more accurately. Two fusion strategies are then adopted. Score-level fusion computes distances on the appearance feature and the gait feature separately, then combines them into the final distance between samples. Feature-level fusion first concatenates the two types of features and then computes distances on the fused feature. Finally, our method is tested on the CASIA gait dataset. Experiments show that integrating gait biometric is an effective way to enhance person re-identification.
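The two fusion strategies can be contrasted in a few lines. The vectors below are made-up stand-ins for appearance and gait descriptors, and plain Euclidean distance replaces the learned metric of the paper.

```python
def euclid(a, b):
    # Euclidean distance between two same-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def score_level(app_a, gait_a, app_b, gait_b, w=0.5):
    """Score-level fusion: one distance per feature type, then a weighted sum."""
    return w * euclid(app_a, app_b) + (1 - w) * euclid(gait_a, gait_b)

def feature_level(app_a, gait_a, app_b, gait_b):
    """Feature-level fusion: concatenate the features, then one distance."""
    return euclid(app_a + gait_a, app_b + gait_b)

# Made-up appearance and gait vectors for two detections.
d_score = score_level([1.0, 0.0], [0.5], [0.0, 0.0], [0.0])   # 0.5*1.0 + 0.5*0.5
d_feat = feature_level([1.0, 0.0], [0.5], [0.0, 0.0], [0.0])  # ||(1, 0, 0.5)||
```

The two fused distances generally differ, which is why the paper evaluates both strategies.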
Llopis-Albert, C, Merigó, JM & Palacios-Marqués, D 2015, 'Structure Adaptation in Stochastic Inverse Methods for Integrating Information', Water Resources Management, vol. 29, no. 1, pp. 95-107.
View/Download from: Publisher's site
Lu, S, Mei, T, Wang, J, Zhang, J, Wang, Z & Li, S 2015, 'Exploratory Product Image Search With Circle-to-Search Interaction', IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 7, pp. 1190-1202.
View/Download from: Publisher's site
Ma, X, Liu, D, Zhang, J & Xin, J 2015, 'A fast affine-invariant features for image stitching under large viewpoint changes', Neurocomputing, vol. 151, no. P3, pp. 1430-1438.
View/Download from: Publisher's site
View description>>
© 2014 Elsevier B.V. Image alignment and stitching is a popular application on many smart phones, but it is time-consuming and creates a critical bottleneck in implementation. In this paper, a fast, high-quality image stitching method is proposed. First, a series of simulated images is obtained by simulating the latitude and longitude angles of a raw image; second, the FAST detector is used to detect features in all the simulated images, which are described by Fast Retina Keypoint (FREAK) before all the feature information is projected to the raw image; third, Hamming distance is used as the feature similarity metric and all the features are matched directly, instead of using the repetitive projection in Affine-SIFT (ASIFT). RANSAC is then used to achieve the optimal affine transformations, and lastly, a weighted average blending algorithm is used to smooth the intensities of the overlapping regions. The experimental results demonstrate that the proposed image stitching method greatly increases the speed of the image alignment process and produces a satisfactory result.
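The Hamming-distance matching step is the easiest part of the pipeline to sketch. Below, binary descriptors are stored as Python ints (real FREAK descriptors are 512-bit) and `match` does brute-force nearest-neighbour search; this is an illustration of the similarity metric only, not the ASIFT-style simulation pipeline.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(descs_a, descs_b):
    """Brute-force nearest neighbour under Hamming distance."""
    return [min(range(len(descs_b)), key=lambda j: hamming(d, descs_b[j]))
            for d in descs_a]

# 8-bit stand-ins for 512-bit FREAK descriptors.
a = [0b10110010, 0b00001111]
b = [0b00001110, 0b10110011]
pairs = match(a, b)   # each descriptor in a matched to its closest in b
```

XOR plus popcount is what makes binary descriptors like FREAK so cheap to match compared with floating-point descriptors.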
Merigó, JM, Engemann, KJ & Gil-Lafuente, AM 2015, 'Guest Editorial: Intelligent Systems in Business and Economics', Cybernetics and Systems, vol. 46, no. 3-4, pp. 145-149.
View/Download from: Publisher's site
Merigó, JM, Gil-Lafuente, AM & Yager, RR 2015, 'An overview of fuzzy research with bibliometric indicators', Applied Soft Computing, vol. 27, pp. 420-433.
View/Download from: Publisher's site
Merigó, JM, Guillén, M & Sarabia, JM 2015, 'The Ordered Weighted Average in the Variance and the Covariance', International Journal of Intelligent Systems, vol. 30, no. 9, pp. 985-1005.
View/Download from: Publisher's site
Merigó, JM, Mas-Tur, A, Roig-Tierno, N & Ribeiro-Soriano, D 2015, 'A bibliometric overview of the Journal of Business Research between 1973 and 2014', Journal of Business Research, vol. 68, no. 12, pp. 2645-2653.
View/Download from: Publisher's site
View description>>
The Journal of Business Research is a leading international journal in business research dating back to 1973. This study analyzes all the publications in the journal since its creation by using a bibliometric approach. The objective is to provide a complete overview of the main factors that affect the journal. This analysis includes key issues such as the publication and citation structure of the journal, the most cited articles, and the leading authors, institutions, and countries in the journal. Unsurprisingly, the USA is the leading region in the journal, although considerable dispersion exists, especially in recent years as European and Asian universities take a more significant position.
Merigó, JM, Palacios-Marqués, D & del Mar Benavides-Espinosa, M 2015, 'Aggregation methods to calculate the average price', Journal of Business Research, vol. 68, no. 7, pp. 1574-1580.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Ribeiro-Navarrete, B 2015, 'Aggregation systems for sales forecasting', Journal of Business Research, vol. 68, no. 11, pp. 2299-2304.
View/Download from: Publisher's site
Musial, K, Brodka, P & Magnani, M 2015, 'Social Network Analysis in Applications', AI Communications, vol. 29, no. 1, pp. 55-56.
View/Download from: Publisher's site
Ni, W, Collings, IB, Lipman, J, Wang, X, Tao, M & Abolhasan, M 2015, 'Graph Theory and its Applications to Future Network Planning: Software-Defined Online Small Cell Management', IEEE Wireless Communications, vol. 22, no. 1, pp. 52-60.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Network planning is facing new and critical challenges due to ad hoc deployment, unbalanced and drastically varying traffic demands, as well as limited backhaul and hardware resources in emerging small cell architectures. We discuss the application of graph theory to address the challenges. A clique-based software-defined online network management approach is proposed that captures traffic imbalance and fluctuation of small cells and optimally plans frequencies, infrastructures, and network structure at any instant. Its applications to three important small cell scenarios of cloud radio, point-to-point microwave backhaul, and interoperator spectrum sharing are demonstrated. Comparison studies show that in each of the scenarios, this new approach is able to significantly outperform conventional static offline network planning schemes in terms of throughput and satisfaction levels of small cells with regard to allocated bandwidths. Specifically, the throughput can be improved by 155 percent for the cloud radio scenario and 110.95 percent for the microwave backhaul scenario. The satisfaction level can be improved by 40 percent for interoperator spectrum sharing.
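The paper works with cliques; the closely related conflict-graph view of frequency planning can be sketched with a greedy coloring, where a "color" stands for a frequency. Everything below (cell names, conflict graph, ordering) is invented for illustration and is not the paper's clique-based scheme.

```python
def greedy_coloring(conflicts, order):
    """Assign the smallest available 'frequency' (color) to each cell so
    that no two conflicting cells share one."""
    color = {}
    for cell in order:
        used = {color[n] for n in conflicts.get(cell, ()) if n in color}
        color[cell] = next(c for c in range(len(order)) if c not in used)
    return color

# Hypothetical small-cell interference graph: edges join conflicting cells.
conflicts = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
plan = greedy_coloring(conflicts, order=["A", "B", "C", "D"])
```

The online aspect in the paper corresponds to re-running such an assignment as traffic demands fluctuate.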
Noguera, M, Alvarez, C, Merigó, JM & Urbano, D 2015, 'Determinants of female entrepreneurship in Spain: an institutional approach', Computational and Mathematical Organization Theory, vol. 21, no. 4, pp. 341-355.
View/Download from: Publisher's site
Palacios-Marqués, D, Merigó, JM & Soto-Acosta, P 2015, 'Online social networks as an enabler of innovation in organizations', Management Decision, vol. 53, no. 9, pp. 1906-1920.
View/Download from: Publisher's site
View description>>
Purpose – The purpose of this paper is to study the effect of online social networks on firm performance and how this technology can help to create value. The authors approach the problem from the Resource-Based View in order to analyze whether online social networks can be considered a source of competitive advantage and how they can enhance or complement essential marketing competences. Design/methodology/approach – The data were obtained from a survey of Spanish hospitality firms. This sector was chosen because Web 2.0 is becoming an important marketing channel in the tourism industry, and especially in hospitality firms. In addition, Spain is one of the largest tourist destinations in the world and has a strong presence of social media and Web 2.0 use by the population and hospitality enterprises. Between February and June 2012, the questionnaire was sent to all top managers of four-star and five-star Spanish hospitality firms. The authors received 197 questionnaires, but four of them were eliminated due to errors or because they were received too late. Findings – Results show that there is a statistically significant positive relationship between online social networks and innovation capacity and that the relationship between online social networks and firm performance is fully mediated by innovation capacity. In turn, the authors find a statistically significant positive relationship between innovation capacity and performance in the hotel industry.
Palacios-Marqués, D, Soto-Acosta, P & Merigó, JM 2015, 'Analyzing the effects of technological, organizational and competition factors on Web knowledge exchange in SMEs', Telematics and Informatics, vol. 32, no. 1, pp. 23-32.
View/Download from: Publisher's site
Pan, S, Wu, J, Zhu, X & Zhang, C 2015, 'Graph Ensemble Boosting for Imbalanced Noisy Graph Stream Classification', IEEE Transactions on Cybernetics, vol. 45, no. 5, pp. 940-954.
View/Download from: Publisher's site
Pan, S, Wu, J, Zhu, X, Long, G & Zhang, C 2015, 'Finding the best not the most: regularized loss minimization subgraph selection for graph classification', Pattern Recognition, vol. 48, no. 11, pp. 3783-3796.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Classification on structured data, such as graphs, has drawn wide interest in recent years. Due to the lack of explicit features to represent graphs for training classification models, extensive studies have focused on extracting the most discriminative subgraph features from the training graph dataset to transform graphs into vector data. However, such filter-based methods suffer from two major disadvantages: (1) the subgraph feature selection is separated from the model learning process, so the selected most discriminative subgraphs may not best fit the subsequent learning model, resulting in deteriorated classification results; (2) all these methods rely on users to specify the number of subgraph features K, and suboptimally specified K values often result in significantly reduced classification accuracy. In this paper, we propose a new graph classification paradigm which overcomes the above disadvantages by formulating subgraph feature selection as learning a K-dimensional feature space from an implicit and large subgraph space, with the optimal K value being automatically determined. To achieve this goal, we propose a regularized loss minimization-driven (RLMD) feature selection method for graph classification. RLMD integrates subgraph selection and model learning into a unified framework to find discriminative subgraphs with guaranteed minimum loss w.r.t. the objective function. To automatically determine the optimal number of subgraphs K from the exponentially large subgraph space, an effective elastic net and a subgradient method are proposed to derive the stopping criterion, so that K can be automatically obtained once RLMD converges. The proposed RLMD method enjoys desirable properties, including proven convergence and applicability to various loss functions. Experimental results on real-life graph datasets demonstrate significant performance gain.
Peris-Ortiz, M & Merigó Lindahl, JM 2015, 'Preface', Innovation, Technology and Knowledge Management, pp. ix-xv.
Poulos, RC, Thoms, JAI, Shah, A, Beck, D, Pimanda, JE & Wong, JWH 2015, 'Systematic Screening of Promoter Regions Pinpoints Functional Cis-Regulatory Mutations in a Cutaneous Melanoma Genome', Molecular Cancer Research, vol. 13, no. 8, pp. 1218-1226.
View/Download from: Publisher's site
View description>>
With the recent discovery of recurrent mutations in the TERT promoter in melanoma, identification of other somatic causal promoter mutations is of considerable interest. Yet, the impact of sequence variation on the regulatory potential of gene promoters has not been systematically evaluated. This study assesses the impact of promoter mutations on promoter activity in the whole-genome sequenced malignant melanoma cell line COLO-829. Combining somatic mutation calls from COLO-829 with genome-wide chromatin accessibility and histone modification data revealed mutations within promoter elements. Interestingly, a high number of potential promoter mutations (n = 23) were found, a result mirrored in subsequent analysis of TCGA whole-melanoma genomes. The impact of wild-type and mutant promoter sequences was evaluated by subcloning into luciferase reporter vectors and testing their transcriptional activity in COLO-829 cells. Of the 23 promoter regions tested, four mutations significantly altered reporter activity relative to wild-type sequences. These data were then subjected to multiple computational algorithms that score the cis-regulatory altering potential of mutations. These analyses identified one mutation, located within the promoter region of NDUFB9, which encodes the mitochondrial NADH dehydrogenase (ubiquinone) 1 beta subcomplex 9, to be recurrent in 4.4% (19 of 432) of TCGA whole-melanoma exomes. The mutation is predicted to disrupt a highly conserved SP1/KLF transcription factor binding motif and its frequent co-occurrence with mutations in the coding sequence of NF1 supports a pathologic role for this mutation in melanoma. Taken together, these data show the relatively high prevalence of promoter mutations in the COLO-829 melanoma genome, and indicate that a proportion of these significantly alter the regulatory potential of gene promoters. Implications: Genom...
Qiao, M, Bian, W, Xu, RYD & Tao, D 2015, 'Diversified Hidden Markov Models for Sequential Labeling.', IEEE Trans. Knowl. Data Eng., vol. 27, no. 11, pp. 2947-2960.
View/Download from: Publisher's site
Ramezani, F, Lu, J, Taheri, J & Hussain, FK 2015, 'Evolutionary algorithm-based multi-objective task scheduling optimization model in cloud environments', World Wide Web, vol. 18, no. 6, pp. 1737-1757.
View/Download from: Publisher's site
View description>>
© 2015, Springer Science+Business Media New York. Optimizing task scheduling in a distributed heterogeneous computing environment, which is a nonlinear multi-objective NP-hard problem, plays a critical role in decreasing service response time and cost, and boosting Quality of Service (QoS). This paper considers four conflicting objectives, namely minimizing task transfer time, task execution cost, power consumption, and task queue length, to develop a comprehensive multi-objective optimization model for task scheduling. This model reduces costs from both the customer and provider perspectives by considering execution and power cost. We evaluate our model by applying two multi-objective evolutionary algorithms, namely Multi-Objective Particle Swarm Optimization (MOPSO) and Multi-Objective Genetic Algorithm (MOGA). To implement the proposed model, we extend the Cloudsim toolkit by using MOPSO and MOGA as its task scheduling algorithms which determine the optimal task arrangement among VMs. The simulation results show that the proposed multi-objective model finds optimal trade-off solutions amongst the four conflicting objectives, which significantly reduces the job response time and makespan. This model not only increases QoS but also decreases the cost to providers. From our experimental results, we find that MOPSO is a faster and more accurate evolutionary algorithm than MOGA for solving such problems.
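The trade-off notion underlying both MOPSO and MOGA is Pareto dominance over the four objectives. A minimal non-dominated filter, with invented candidate schedules, looks like this (the full evolutionary search is well beyond a sketch):

```python
def dominates(a, b):
    """a dominates b: no worse on every minimized objective,
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Invented (transfer time, execution cost, power, queue length) per schedule.
candidates = [(3, 2, 5, 1), (2, 4, 4, 2), (4, 3, 6, 2), (2, 2, 5, 1)]
front = pareto_front(candidates)
```

Evolutionary multi-objective algorithms repeatedly apply such a filter to their populations; the surviving front is the set of "optimal trade-off solutions" the abstract refers to.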
Rehman, Z-U, Hussain, OK & Hussain, FK 2015, 'User-side cloud service management: State-of-the-art and future directions', Journal of Network and Computer Applications, vol. 55, pp. 108-122.
View/Download from: Publisher's site
Song, R, Catchpoole, DR, Kennedy, PJ & Li, J 2015, 'Identification of lung cancer miRNA–miRNA co-regulation networks through a progressive data refining approach', Journal of Theoretical Biology, vol. 380, pp. 271-279.
View/Download from: Publisher's site
Song, R, Liu, Q, Liu, T & Li, J 2015, 'Connecting rules from paired miRNA and mRNA expression data sets of HCV patients to detect both inverse and positive regulatory relationships', BMC Genomics, vol. 16, no. S2.
View/Download from: Publisher's site
View description>>
© 2015 Song et al.; licensee BioMed Central Ltd. Background: Intensive research based on the inverse expression relationship has been undertaken to discover the miRNA-mRNA regulatory modules involved in the infection of Hepatitis C virus (HCV), the leading cause of chronic liver diseases. However, biological studies in other fields have found that the inverse expression relationship is not the only regulatory relationship between miRNAs and their targets, and some miRNAs can positively regulate an mRNA by binding at the 5' UTR of the mRNA. Results: This work focuses on the detection of both inverse and positive regulatory relationships from a paired miRNA and mRNA expression data set of HCV patients through a 'change-to-change' method which can derive connected discriminatory rules. Our study uncovered many novel miRNA-mRNA regulatory modules. In particular, it was revealed that GFRA2 is positively regulated by miR-557, miR-765 and miR-17-3p, which probably bind at different locations of the 5' UTR of this mRNA. The expression relationship between GFRA2 and any of these three miRNAs has not been studied before, although separate research for this gene and these miRNAs has drawn conclusions linked to hepatocellular carcinoma. This suggests that the binding of mRNA GFRA2 with miR-557, miR-765, or miR-17-3p, or their combinations, is worthy of further investigation by experimentation. We also report another mRNA, QKI, which has a strong inverse expression relationship with miR-129 and miR-493-3p, which may bind at the 3' UTR of QKI with a perfect sequence match. Furthermore, the interaction between hsa-miR-129-5p (previous ID: hsa-miR-129) and QKI is supported with CLIP-Seq data from starBase. Our method can be easily extended for the expression data analysis of other diseases. Conclusion: Our rule discovery method is useful for integrating binding information and expression profiles for identifying HCV miRNA-mRNA regulatory modules and can be applied to the study...
Stoianoff, NP & Roy, A 2015, 'Indigenous Knowledge and Culture In Australia — The Case for Sui Generis Legislation', Monash University Law Review, vol. 41, no. 3, pp. 745-784.
Tursky, ML, Beck, D, Thoms, JAI, Huang, Y, Kumari, A, Unnikrishnan, A, Knezevic, K, Evans, K, Richards, LA, Lee, E, Morris, J, Goldberg, L, Izraeli, S, Wong, JWH, Olivier, J, Lock, RB, MacKenzie, KL & Pimanda, JE 2015, 'Overexpression of ERG in cord blood progenitors promotes expansion and recapitulates molecular signatures of high ERG leukemias', Leukemia, vol. 29, no. 4, pp. 819-827.
View/Download from: Publisher's site
ur Rehman, Z, Hussain, OK, Hussain, FK, Chang, E & Dillon, T 2015, 'User-side QoS forecasting and management of cloud services', World Wide Web, vol. 18, no. 6, pp. 1677-1716.
View/Download from: Publisher's site
Versendaal, J & Merigó, JM 2015, 'Service business track at INBAM, Barcelona, 2014 “Service Design and Technology”', Service Business, vol. 9, no. 2, pp. 183-184.
View/Download from: Publisher's site
Vizuete-Luciano, E, Merigó, JM, Gil-Lafuente, AM & Boria-Reverter, S 2015, 'Decision making in the assignment process by using the Hungarian algorithm with OWA operators', Technological and Economic Development of Economy, vol. 21, no. 5, pp. 684-704.
View/Download from: Publisher's site
View description>>
Assignment processes coordinate two sets of variables so that each variable of the first set is connected to a variable of the second set. This paper develops a new assignment algorithm by using a wide range of aggregation operators in the Hungarian algorithm. A new process based on the use of the ordered weighted averaging distance (OWAD) operator and the induced OWAD (IOWAD) operator in the Hungarian algorithm is introduced. We refer to these as the Hungarian algorithm with the OWAD operator (HAOWAD) and the Hungarian algorithm with the IOWAD operator (HAIOWAD). The main advantage of this approach is that we can provide a parameterized family of aggregation operators between the minimum and the maximum. Thus, the information can be represented in a more complete way. Furthermore, we also present a general framework by using generalized and quasi-arithmetic means. Therefore, we can consider a wide range of particular cases, including the Euclidean and the Minkowski distance. The paper ends with a practical application of the new approach in a financial decision-making problem regarding the assignment of investments.
Wang, S, Wu, Q, He, X, Yang, J & Wang, Y 2015, 'Local $N$-Ary Pattern and Its Extension for Texture Classification', IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 9, pp. 1495-1506.
View/Download from: Publisher's site
View description>>
© 1991-2012 IEEE. Texture image classification is important in computer vision research. To effectively capture texture patterns, a distinctive feature such as a local binary pattern (LBP) is needed. An LBP is robust against monotonic gray-scale variations and it computes quickly. Its robustness and speed advantage have made it popular in various texture analysis applications. However, an LBP is sensitive to noise, particularly smooth weak illumination gradients in near-uniform regions. To mitigate the effect of noise and increase distinctiveness, a local ternary pattern (LTP) is proposed. Compared with a binary coding LBP, an LTP adopts ternary coding. As a result, an LTP can better tolerate noise and is significantly more distinctive. These advantages of an LTP effectively improve its classification accuracy. However, the potential of ternary coding is not fully explored in LTPs because a ternary pattern is split into a pair of binary patterns. In this paper, to fully explore the distinctiveness in the local pattern, the feature extraction process is formulated as an integer decomposition problem, which is a generalized version of the Bachet de Meziriac weight problem (BMWP). Following this generalization, a local n-ary pattern (LNP) is proposed, for which the LBP is a special case parametrized under n=2. The LTP is not a special case of the LNP. Both LBP and LTP are used as benchmark methods to evaluate the LNP's performance due to their well-recognized success. In addition, a rotation-invariant and uniform LNP is also proposed and compared with a rotation-invariant and uniform LBP. The proposed LNP achieves significantly improved texture classification accuracy compared with the LBP and also demonstrates considerable improvement over the LTP.
Wang, S, Zhang, J, Han, TX & Miao, Z 2015, 'Sketch-Based Image Retrieval Through Hypothesis-Driven Object Boundary Selection With HLR Descriptor', IEEE Transactions on Multimedia, vol. 17, no. 7, pp. 1045-1057.
View/Download from: Publisher's site
View description>>
The appearance gap between sketches and photo-realistic images is a fundamental challenge in sketch-based image retrieval (SBIR) systems. The existence of noisy edges on photo-realistic images is a key factor in the enlargement of the appearance gap and significantly degrades retrieval performance. To bridge the gap, we propose a framework consisting of a new line-segment-based descriptor named histogram of line relationship (HLR) and a new noise impact reduction algorithm known as object boundary selection. HLR treats sketches and extracted edges of photo-realistic images as a series of piece-wise line segments and captures the relationship between them. Based on the HLR, the object boundary selection algorithm aims to reduce the impact of noisy edges by selecting the shaping edges that best correspond to the object boundaries. Multiple hypotheses are generated for descriptors by hypothetical edge selection. The selection algorithm is formulated to find the best combination of hypotheses to maximize the retrieval score; a fast method is also proposed. To reduce the distraction of false matches in the scoring process, two constraints on spatial and coherent aspects are introduced. We tested the HLR descriptor and the proposed framework on public datasets and a new image dataset of three million images, which we recently collected for SBIR evaluation purposes. We compared the proposed HLR with state-of-the-art descriptors (SHoG, GF-HOG). The experimental results show that our HLR descriptor outperforms them. Combined with the object boundary selection algorithm, our framework significantly improves SBIR performance.
Wu, Z, Shi, J, Lu, C, Chen, E, Xu, G, Li, G, Xie, S & Yu, PS 2015, 'Constructing plausible innocuous pseudo queries to protect user query intention', Information Sciences, vol. 325, pp. 215-226.
View/Download from: Publisher's site
Xu, G, Wu, Z, Li, G & Chen, E 2015, 'Improving contextual advertising matching by using Wikipedia thesaurus knowledge', Knowledge and Information Systems, vol. 43, no. 3, pp. 599-631.
View/Download from: Publisher's site
View description>>
As a prevalent type of Web advertising, contextual advertising refers to the placement of the most relevant commercial ads within the content of a Web page, to provide a better user experience and as a result increase the user's ad-click rate. However, due to the intrinsic problems of homonymy and polysemy, the low intersection of keywords, and a lack of sufficient semantics, traditional keyword matching techniques are not able to effectively handle contextual matching and retrieve relevant ads for the user, resulting in unsatisfactory performance in ad selection. In this paper, we introduce a new contextual advertising approach to overcome these problems, which uses Wikipedia thesaurus knowledge to enrich the semantic expression of a target page (or an ad). First, we map each page into a keyword vector, upon which two additional feature vectors, the Wikipedia concept and category vectors derived from the Wikipedia thesaurus structure, are then constructed. Second, to determine the relevant ads for a given page, we propose a linear similarity fusion mechanism, which combines the above three feature vectors in a unified manner. Last, we validate our approach using a set of real ads and real pages, along with the external Wikipedia thesaurus. The experimental results show that our approach outperforms the conventional contextual advertising matching approaches and can substantially improve the performance of ad selection.
Xu, G, Wu, Z, Zhang, Y & Cao, J 2015, 'Social networking meets recommender systems: survey', International Journal of Social Network Mining, vol. 2, no. 1, pp. 64-64.
View/Download from: Publisher's site
View description>>
Today, the emergence of web-based communities and hosted services such as social networking sites, wikis and folksonomies brings tremendous freedom of web autonomy and facilitates collaboration and knowledge sharing between users. Along with the interaction between users and computers, social media is rapidly becoming an important part of our digital experience, ranging from digital textual information to diverse multimedia forms. These aspects and characteristics constitute the core of the second generation of the Web. Social networking (SN) and recommender systems (RS) are two popular topics in the current Web 2.0 era, where the former emphasises the generation, dissemination and evolution of user relations, and the latter focuses on the use of the collective preferences of users to provide a better experience and improve user loyalty in various web applications. Leveraging users' social connections can alleviate the common problems of sparsity and cold start encountered in RS. This paper aims to summarise the research progress and findings in these two areas and showcase the benefits of integrating these two kinds of research strengths.
Xu, G, Zong, Y, Jin, P, Pan, R & Wu, Z 2015, 'KIPTC: a kernel information propagation tag clustering algorithm', Journal of Intelligent Information Systems, vol. 45, no. 1, pp. 95-112.
View/Download from: Publisher's site
View description>>
In social annotation systems, users annotate digital data sources with tags, which are freely chosen textual descriptions. Tags are used to index, annotate and retrieve resources as additional resource metadata. Poor retrieval performance remains a major challenge for most social annotation systems, resulting from tag ambiguity, redundancy and the limited semantics of tags. Clustering is a useful tool to handle these problems in social annotation systems. In this paper, we propose a novel tag clustering algorithm based on kernel information propagation. This approach uses kernel density estimation on the kNN neighborhood directed graph as a starting point to reveal the prestige rank of tags in tagging data. The random walk with restart algorithm is then employed to determine the center points of tag clusters. The main strength of the proposed approach is its capability of partitioning tags from the perspective of tag prestige rank rather than intuitive similarity calculation alone. Experimental studies on six real-world data sets demonstrate the effectiveness and superiority of the proposed method against other state-of-the-art clustering approaches in terms of various evaluation metrics.
Xu, Y, Xu, A, Merigó, JM & Wang, H 2015, 'Hesitant fuzzy linguistic ordered weighted distance operators for group decision making', Journal of Applied Mathematics and Computing, vol. 49, no. 1-2, pp. 285-308.
View/Download from: Publisher's site
View description>>
Since the concept of hesitant fuzzy sets was put forward, different types of extensions have been proposed to deal with practical problems. A hesitant fuzzy linguistic term set provides a linguistic and computational basis to increase the flexibility and richness of linguistic elicitation based on the fuzzy linguistic approach. In this paper, we consider the concept of the distance operator and develop a hesitant fuzzy linguistic ordered weighted distance (HFLOWD) operator. The HFLOWD operator is well suited to dealing with uncertain situations involving linguistic information. Moreover, it is also a new aggregation operator that provides parameterized families of distance aggregation operators between the minimum and the maximum distance. Some of its main properties and different families of HFLOWD operators are investigated. Finally, an application of the new approach is offered, and comparative analyses are provided to show its advantages over existing methods.
Yang, W, Gao, Y, Shi, Y & Cao, L 2015, 'MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis', IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2801-2815.
View/Download from: Publisher's site
Yin, H, Cui, B, Chen, L, Hu, Z & Zhang, C 2015, 'Modeling Location-Based User Rating Profiles for Personalized Recommendation', ACM Transactions on Knowledge Discovery from Data, vol. 9, no. 3, pp. 1-41.
View/Download from: Publisher's site
View description>>
This article proposes LA-LDA, a location-aware probabilistic generative model that exploits location-based ratings to model user profiles and produce recommendations. Most of the existing recommendation models do not consider the spatial information of users or items; however, LA-LDA supports three classes of location-based ratings, namely spatial user ratings for nonspatial items, nonspatial user ratings for spatial items, and spatial user ratings for spatial items. LA-LDA consists of two components, ULA-LDA and ILA-LDA, which are designed to take into account user and item location information, respectively. The component ULA-LDA explicitly incorporates and quantifies the influence from local public preferences to produce recommendations by considering user home locations, whereas the component ILA-LDA recommends items that are closer in both taste and travel distance to the querying users by capturing item co-occurrence patterns, as well as item location co-occurrence patterns. The two components of LA-LDA can be applied either separately or collectively, depending on the available types of location-based ratings. To demonstrate the applicability and flexibility of the LA-LDA model, we deploy it to both top-k recommendation and cold start recommendation scenarios. Experimental evidence on large-scale real-world data, including the data from Gowalla (a location-based social network), DoubanEvent (an event-based social network), and MovieLens (a movie recommendation system), reveals that LA-LDA models user profiles more accurately by outperforming existing recommendation models for top-k recommendation and the cold start problem.
Yin, H, Cui, B, Chen, L, Hu, Z & Zhou, X 2015, 'Dynamic User Modeling in Social Media Systems', ACM Transactions on Information Systems, vol. 33, no. 3, pp. 1-44.
View/Download from: Publisher's site
View description>>
Social media provides valuable resources to analyze user behaviors and capture user preferences. This article focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. Considering that users' interests are not always stable and may change over time, we extend TCAM to a dynamic temporal context-aware mixture model (DTCAM) to capture users' changing interests. To alleviate the problem of data sparsity, we exploit the social and temporal correlation information by integrating a social-temporal regularization framework into the DTCAM model. To further improve the performance of our proposed models (TCAM and DTCAM), an item-weighting scheme is proposed to enable them to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on our proposed models, we design a temporal context-aware recommender system (TCARS). To speed up the process of producing the top-k recommendations from large-scale social media data, we develop an efficient query-processing technique to support TCARS. Extensive experiments have been conducted to evaluate the performance of our models on four real-world dataset...
Yue, XD, Cao, LB, Miao, DQ, Chen, YF & Xu, B 2015, 'Multi-view attribute reduction model for traffic bottleneck analysis', Knowledge-Based Systems, vol. 86, pp. 1-10.
View/Download from: Publisher's site
Wu, Y, Jia, Y, Li, P, Zhang, J & Yuan, J 2015, 'Manifold Kernel Sparse Representation of Symmetric Positive-Definite Matrices and Its Applications', IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3729-3741.
View/Download from: Publisher's site
Zarzour, P, Boelen, L, Luciani, F, Beck, D, Sakthianandeswaren, A, Mouradov, D, Sieber, OM, Hawkins, NJ, Hesson, LB, Ward, RL & Wong, JWH 2015, 'Single Nucleotide Polymorphism Array Profiling Identifies Distinct Chromosomal Aberration Patterns Across Colorectal Adenomas and Carcinomas', Genes, Chromosomes & Cancer, vol. 54, no. 5, pp. 303-314.
View/Download from: Publisher's site
Zeng, Y, Chen, C, Liu, W, Fu, Q, Han, Z, Li, Y, Feng, S, Li, X, Qi, C, Wu, J, Wang, D, Corbett, C, Chan, BP, Ruan, D & Du, Y 2015, 'Injectable microcryogels reinforced alginate encapsulation of mesenchymal stromal cells for leak-proof delivery and alleviation of canine disc degeneration', Biomaterials, vol. 59, pp. 53-65.
View/Download from: Publisher's site
View description>>
In situ crosslinked thermo-responsive hydrogels applied for the minimally invasive treatment of intervertebral disc degeneration (IVDD) may not prevent extrusion of the cell suspension from the injection site due to the high internal pressure of the intervertebral disc (IVD), causing treatment failure or osteophyte formation. In this study, mesenchymal stromal cells (MSCs) were encapsulated in alginate precursor and loaded into previously developed macroporous PGEDA-derived microcryogels (PMs) to form three-dimensional (3D) microscale cellular niches, enabling the non-thermo-responsive alginate hydrogel to be injectable. The PMs-reinforced alginate hydrogel showed superior elasticity compared to alginate hydrogel alone and could well protect encapsulated cells through injection. Chondrogenic committed MSCs in the injectable microniches expressed a higher level of nucleus pulposus (NP) cell markers compared to 2D cultured cells. In an ex vivo organ culture model, injection of MSCs-laden PMs into NP tissue prevented cell leakage and improved cell retention and survival compared to free cell injection. In canine IVDD models, alleviated degeneration was observed in the MSCs-laden PMs-treated group after six months, which was superior to the other treated groups. Our results provide an in-depth demonstration of injectable alginate hydrogel reinforced by PMs as a leak-proof cell delivery system for augmented regenerative therapy of IVDD in canine models.
Zhang, G & Piccardi, M 2015, 'Structural SVM with Partial Ranking for Activity Segmentation and Classification', IEEE Signal Processing Letters, vol. 22, no. 12, pp. 2344-2348.
View/Download from: Publisher's site
View description>>
© 1994-2012 IEEE. Structural SVM is an extension of the support vector machine for the joint prediction of structured labels from multiple measurements. Following a large margin principle, the training of structural SVM ensures that the ground-truth labeling of each sample receives a score higher than that of any other labeling. However, no specific score ranking is imposed among the other labelings. In this letter, we extend the standard constraint set of structural SVM with constraints between 'almost-correct' labelings and less desirable ones to obtain a partial-ranking structural SVM (PR-SSVM) approach. Experimental results on action segmentation and classification with two challenging datasets (the TUM Kitchen mocap dataset and the CMU-MMAC video dataset) show that the proposed method achieves better detection and false alarm rates and higher F1 scores than both the conventional structural SVM and a comparable unstructured predictor. The proposed method also achieves accuracy higher than the state of the art on these datasets by more than 14 and 31 percentage points, respectively.
Zhang, T, Yang, Z, Jia, W, Wu, Q, Yang, J & He, X 2015, 'Fast and robust head detection with arbitrary pose and occlusion', Multimedia Tools and Applications, vol. 74, no. 21, pp. 9365-9385.
View/Download from: Publisher's site
View description>>
© 2014, Springer Science+Business Media New York. Head detection in images and videos plays an important role in a wide range of computer vision and surveillance applications. Aiming to detect heads with arbitrarily occluded faces and arbitrary head poses, in this paper, we propose a novel Gaussian energy function-based algorithm for elliptical head contour detection. Starting with the localization of head and shoulder by an improved Gaussian Mixture Model (GMM) approach, the precise head contour is obtained by making use of the Omega shape formed by the head and shoulder. Experimental results on several benchmark datasets demonstrate the superiority of the proposed idea over the state-of-the-art in both detection accuracy and processing speed, even though there are various types of severe occlusions in faces.
Zhang, Z, Concha, OP & Piccardi, M 2015, 'Tracking people under heavy occlusions by layered data association', Multimedia Tools and Applications, vol. 74, no. 17, pp. 7239-7259.
View/Download from: Publisher's site
View description>>
© 2014, Springer Science+Business Media New York. One of the main difficulties in video tracking of people arises in scenarios where targets are repeatedly and extensively occluded by other moving objects. These types of occlusions significantly affect the measurements of the person’s position, motion, shape and appearance, posing major challenges to correct tracking and data association. In this paper, we present a method for tracking people in videos based on a simplified part-based model only loosely associated with body parts. Data association is provided by a layered data association approach which performs association at feature, part and global levels in a hierarchical fashion. Occlusions are detected and managed at the part level, with corresponding model update strategies. In addition, the tracker does not make any assumption on the target’s motion direction, thus allowing tracking to withstand abrupt sideways movements and changes of directions that frequently occur in busy scenes. Experimental results against popular trackers such as mean shift, particle filters and the recent k-shortest paths (KSP) tracker based on a variety of performance indicators and datasets including ETISEO, AVSS 2007 and PETS 2009 show the effectiveness of the proposed tracker.
Zhao, Z-Q, Han, G-S, Yu, Z-G & Li, J 2015, 'Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization', Computational Biology and Chemistry, vol. 57, pp. 21-28.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Random walk on heterogeneous networks is a recently emerging approach to effective disease gene prioritization. Laplacian normalization is a technique capable of normalizing the weight of edges in a network. We use this technique to normalize the gene matrix and the phenotype matrix before the construction of the heterogeneous network, and also use this idea to define the transition matrices of the heterogeneous network. Our method has remarkably better performance than the existing methods for recovering known gene-phenotype relationships. The Shannon information entropy of the distribution of the transition probabilities in our networks is found to be smaller than in the networks constructed by the existing methods, implying that a higher number of top-ranked genes can be verified as disease genes. In fact, the most probable gene-phenotype relationships ranked within the top 3 or top 5 in our gene lists can be confirmed by the OMIM database in many cases. Our algorithms have shown remarkably superior performance over the state-of-the-art algorithms for recovering gene-phenotype relationships. All Matlab code is available upon email request.
Abdullaev, S, McBurney, P & Musial, K 2015, 'Direct Exchange Mechanisms for Option Pricing', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 269-284.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. This paper presents the design and simulation of direct exchange mechanisms for pricing European options. It extends McAfee's single-unit double auction to a multi-unit format, and then applies it to pricing options by aggregating agent predictions of future asset prices. We also propose the design of a combinatorial exchange for the simulation of agents using option trading strategies. We present several option trading strategies that are commonly used in real option markets to minimise the risk of future loss, and assume that agents can submit them as a combinatorial bid to the market maker. We provide simulation results for the proposed mechanisms, and compare them with the existing Black-Scholes model most commonly used for option pricing. The simulation also tests the effect of supply and demand changes on option prices, and takes into account agents with different implied volatility. We also observe how option prices are affected by the agents' choices of option trading strategies.
Alkalbani, A, Shenoy, A, Hussain, FK, Hussain, OK & Xiang, Y 2015, 'Design and Implementation of the Hadoop-based Crawler for SaaS Service Discovery', 2015 IEEE 29th International Conference on Advanced Information Networking and Applications (IEEE AINA 2015), International Conference on Advanced Information Networking and Applications (was ICOIN), IEEE, Gwangju, South Korea, pp. 785-790.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Software as a Service (SaaS) is the most adopted cloud service (46%) compared with Infrastructure as a Service (IaaS) (35%) and Platform as a Service (PaaS) (34%) [1]. Currently, the capability of discovering a SaaS of interest online across multiple cloud providers and review websites is a significant challenge, especially when using general search mechanisms (Google and Yahoo!) and the search tools provided by existing review sites and directories. Discovering a SaaS is time-consuming, requiring consumers to browse several websites to select the appropriate service. This paper addresses the issues related to the efficient discovery of SaaS across review websites by developing the SaaS Nutch Hadoop-based Crawler Engine - SaaS Nhbased Crawler. The crawler is capable of crawling cloud reviews to find SaaSs of interest and enables the establishment of a central repository that could be used to discover SaaSs much more efficiently. The results show that the SaaS Nhbased crawler can effectively crawl review websites and provide a list of the latest SaaS being offered.
Alshehri, MD & Hussain, FK 2015, 'A Comparative Analysis of Scalable and Context-Aware Trust Management Approaches for Internet of Things', Neural Information Processing, ICONIP 2015, Part IV, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 596-605.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. The Internet of Things (IoT) is a new paradigm in technology that allows most physical 'things' to contact each other. Trust between IoT devices is a critical factor. Trust in the IoT environment can be modeled using various approaches, such as confidence level and reputation parameters. Furthermore, trust is an important element in engineering reliable and scalable networks. In this paper, we survey scalable and context-aware trust management for the IoT from three perspectives. First, we present an overview of the IoT and the importance of trust in relation to it, and then we provide an in-depth trust/reliability management protocol for the IoT and evaluate comparable trust management protocols. We also investigate a scalable solution for trust management in the IoT and provide a comparative evaluation of existing trust solutions. We then present a context-aware assessment for the IoT and compare the different trust solutions. Lastly, we give a full comparative analysis of trust/reliability management in the IoT. Our results are drawn from this comparative analysis, and directions for future research are outlined.
Awwad, S, Hussein, F & Piccardi, M 2015, 'Local Depth Patterns for Tracking in Depth Videos', MM'15: Proceedings of the 2015 ACM Multimedia Conference, ACM International Conference on Multimedia, ACM, Brisbane, Australia, pp. 1115-1118.
View/Download from: Publisher's site
View description>>
Conventional video tracking operates over RGB or grey-level data which contain significant clues for the identification of the targets. While this is often desirable in a video surveillance context, the use of video tracking in privacy-sensitive environments such as hospitals and care facilities is often perceived as intrusive. Therefore, in this work we present a tracker that provides effective target tracking based solely on depth data. The proposed tracker is an extension of the popular Struck algorithm which leverages a structural SVM framework for tracking. The main contributions of this work are novel depth features based on local depth patterns and a heuristic for effectively handling occlusions. Experimental results over the challenging Princeton Tracking Benchmark (PTB) dataset report a remarkable accuracy compared to the original Struck tracker and other state-of-the-art trackers using depth and RGB data.
Bakirov, R, Gabrys, B & Fay, D 2015, 'On sequences of different adaptive mechanisms in non-stationary regression problems', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Existing adaptive predictive methods often use multiple adaptive mechanisms as part of their coping strategy in non-stationary environments. These mechanisms are usually deployed in a prescribed order which does not change. In this work we investigate and provide a comparative analysis of the effects of using a flexible order of adaptive mechanisms' deployment, resulting in varying adaptation sequences. As a vehicle for this comparison, we use an adaptive ensemble method for regression in batch learning mode which employs several adaptive mechanisms to react to changes in data. Using real-world data from the process industry, we demonstrate that such flexible deployment of available adaptive methods embedded in a cross-validatory framework can benefit the predictive accuracy over time.
Blanco-Mesa, FR, Gil-Lafuente, AM & Merigó, JM 2015, 'New Aggregation Methods for Decision-Making in the Selection of Business Opportunities', Scientific Methods for the Treatment of Uncertainty in Social Sciences, 18th International SIGEF Congress on Scientific Methods for the Treatment of Uncertainty in Social Sciences, Springer International Publishing, Girona, Spain, pp. 3-18.
View/Download from: Publisher's site
Braytee, A, Gill, AQ, Kennedy, PJ & Hussain, FK 2015, 'A Review and Comparison of Service E-Contract Architecture Metamodels.', ICONIP (4), International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 583-595.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. An adaptive service e-contract is an electronic agreement which is required to enable adaptive or agile service sourcing and provisioning. There are a number of e-contract metamodels that can be used to create a context-specific adaptive service e-contract. The challenge is which one to choose and adopt for adaptive services. This paper presents a review and comparison of well-known e-contract metamodels using the architecture theory. The architecture theory allows the analysis of the e-contract metamodels using a three-dimension analytical lens: structure, behavior and technology. The results of this paper highlight the metamodels' structural, behavioral and technological differences and similarities. This paper will help researchers and practitioners to observe whether the existing e-contract metamodels are appropriate for adaptive services or whether there is a need to merge and integrate the concepts of these metamodels to propose a new unifying adaptive service e-contract metamodel. This paper is limited to the number of compared metamodels.
Braytee, A, Hussain, FK, Anaissi, A & Kennedy, PJ 2015, 'ABC-sampling for Balancing Imbalanced Datasets Based on Artificial Bee Colony Algorithm', 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), IEEE, Miami, Florida, pp. 594-599.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Class imbalanced data is a common problem for predictive modelling in domains such as bioinformatics. It occurs when the distribution of classes is not uniform among samples and results in a biased prediction of learning towards majority classes. In this study, we propose the ABC-Sampling algorithm based on a swarm optimization method called Artificial Bee Colony, which models the natural foraging behaviour of honeybees. Our algorithm lessens the effects of imbalanced classes by selecting the most informative majority samples using a forward search and storing them in a ranked subset. Then we construct a balanced dataset with a planned undersampling strategy to extract the most frequent majority samples from the top ranked subset and combine them with all minority samples. Our algorithm is superior to a state-of-the-art method on nine benchmark datasets with various levels of imbalance ratios.
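As a rough illustration of the undersampling step the abstract describes (rank the majority class by informativeness, keep only the top-ranked samples, and combine them with all minority samples), a minimal sketch follows. The `score` function is a purely hypothetical stand-in for the ranking the bee-colony search produces:

```python
import random

def balance_by_ranked_undersampling(majority, minority, score):
    """Rank majority samples by an informativeness score, keep only the
    top-ranked ones so both classes end up the same size, then combine
    them with all minority samples (label 1 = minority)."""
    ranked = sorted(majority, key=score, reverse=True)  # most informative first
    kept = ranked[:len(minority)]                       # planned undersampling
    balanced = [(x, 0) for x in kept] + [(x, 1) for x in minority]
    random.shuffle(balanced)
    return balanced
```

With, say, 100 majority and 10 minority samples this yields a balanced 20-sample training set; the actual ABC-Sampling ranking is far more sophisticated than any single score function.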
Cao, L, Zhang, C, Joachims, T, Webb, G, Margineantu, D, Williams, G, Parekh, R, Fayyad, U, Eliassi-Rad, T, Fürnkranz, J, Pei, J, Zhou, ZH, Bekkerman, R & Tang, J 2015, 'Foreword', Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. iii-iv.
Chandrakanthan, V, Jair, K, Oliver, R, Qiao, Q, Kang, YC, Zarzour, P, Beck, D, Boelen, L, Unnikrihnan, A, Villanueva, J, Nunez, A, Knezevic, K, Palu, C, Nasrallah, R, Hardy, P, Grey, S, Whan, R, Walkley, C, Purton, LE, Ward, R, Wong, J, Hesson, L, Ittner, L, Walsh, W & Pimanda, J 2015, 'PDGF-AB AND AZACITIDINE INDUCED REPROGRAMMING OF SOMATIC CELLS INTO TISSUE REGENERATIVE MULTIPOTENT STEM CELLS', EXPERIMENTAL HEMATOLOGY, 44th Annual Scientific Meeting of the International-Society-for-Experimental-Hematology (ISEH), ELSEVIER SCIENCE INC, Kyoto, JAPAN, pp. S89-S89.
View/Download from: Publisher's site
Chen, Q, Hu, L, Xu, J, Liu, W & Cao, L 2015, 'Document similarity analysis via involving both explicit and implicit semantic couplings', 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), IEEE, Paris.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Document similarity analysis is increasingly critical since roughly 80% of big data is unstructured. Accordingly, semantic couplings (relatedness) have been recognized as valuable for capturing the relationships between terms (words or phrases). Existing work focuses more on explicit relatedness, with respective models built. In this paper, we propose a comprehensive semantic similarity measure: Semantic Coupling Similarity (SCS), which (1) captures intra-term pair couplings within term pairs, represented by patterns of explicit term co-occurrences in a document set; (2) extracts inter-term pair couplings between term pairs, indicated by implicit couplings through indirectly linked terms and paths between terms after term connections are converted to a graph representation; and (3) integrates intra- and inter-term pair couplings towards a comprehensive capturing of explicit and implicit couplings between terms across documents. SCS caters for both synonymy and polysemy, and outperforms baseline methods consistently on all real data sets.
Cheng, H, Zhang, J, An, P & Liu, Z 2015, 'A Novel Saliency Model for Stereoscopic Images', 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Adelaide, pp. 1-7.
View/Download from: Publisher's site
View description>>
In this paper, we propose a novel saliency model for stereoscopic images. To better exploit depth information for stereo saliency analysis, this model uses depth information in three ways: 1) we extract the low-level features based on the color-depth contrast features in a local and global search range (local-global contrast); 2) to extract the topological structure from a depth map, a surrounding map based on a Boolean map is obtained as a weight value to enhance the local-global contrast features; and 3) based on the saliency probability distribution in depth information, we employ stereo center prior enhancement to compute the final saliency. Experimental results on two recent eye-tracking databases show that our proposed method outperforms the state-of-the-art saliency models.
Chotipant, S, Hussain, FK & Hussain, OK 2015, 'An Automated and Fuzzy Approach for Semantically Annotating Services', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, TURKEY.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. In the recent past, semantic technologies have played a significant role in service retrieval and service querying. Annotating services semantically enables machines to understand the purpose of services and can further assist in intelligent and precise service retrieval, selection and composition. A key issue in semantically annotating services is the manual nature of service annotation. Manual service annotation requires a large amount of time and updating happens infrequently, hence annotations may become out-of-date as service descriptions change. Although some researchers have studied semantic service annotation, they have only focused on web services, not business service information. Moreover, their approaches are semi-automated, and still require service providers to select appropriate service annotations. In this paper, we propose a completely automated semantic annotation approach for e-services. The aim of this paper is to semantically annotate a service to relevant service concepts in domain-specific ontologies. Services and service concepts are represented by an extended VSM model, based on fuzzy rules. Then, we link a service to a concept, based on the similarity value of the representing vectors. We found during the experimentation process that the performances of the proposed approach and the VSM-based approach were quite similar and, as a result, developed a system to retrieve services that are annotated to relevant concepts. Experiments using a high service retrieval threshold demonstrated that a retrieval approach based on extended VSM annotation performed much better than an approach based on VSM annotation.
Chotipant, S, Hussain, FK, Dong, H & Hussain, OK 2015, 'A Neural Network Based Approach for Semantic Service Annotation', NEURAL INFORMATION PROCESSING, PT II, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 292-300.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Nowadays, a large number of business owners provide advertising for their services on the web. Semantically annotating those services, which assists machines to understand their purpose, is a significant factor for improving the performance of automated service retrieval, selection, and composition. Unfortunately, most of the existing research into semantic service annotation focuses on annotating web services, not on business service information. Moreover, all are semi-automated approaches that require service providers to select proper annotations. As a result, those approaches are unsuitable for annotating very large numbers of services that have accrued or been updated over time. This paper outlines our proposal for a Neural Network (NN)-based approach to annotate business services. Its aim is to link a given service to a relevant service concept. In this case, we treat the task as a service classification problem. We apply a feed-forward neural network and a radial basis function network to determine relevance scores between service information and service concepts. A service is then linked to a service concept if its relevance score reaches the threshold. To evaluate the performance of this approach, it is compared with the ECBR algorithm. The experimental results demonstrate that the NN-based approach performs significantly better than the ECBR approach.
Curiskis, SA, Osborn, TR & Kennedy, PJ 2015, 'Link prediction and topological feature importance in social networks', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, Australian Computer Society, Sydney, pp. 39-50.
View description>>
The problem of link prediction describes how to account for the development of connection structure in a graph. There are many applications of link prediction, such as predicting missing links and future links in online social networks. Much of the literature has focused on limited characteristics of the graph topology or on node attributes, rather than a broad range of measures. There is a rich spectrum of topological features associated with a graph, such as neighbourhood similarity scores, node centrality measures, community structure and path-based distance measures. In this paper we formulate a supervised learning approach to link prediction using a feature set of graph measures chosen to capture a wide range of topological structure. This approach has the advantage that it can be applied to any graph where the connection structure is known. Random forest learning models are used for their high accuracy and measures of feature importance. The feature importance scores reveal the strength of contribution of the topological predictors for link prediction in a variety of synthetically generated network datasets, as well as three real world citation networks. We investigate both undirected and directed cases. Our results show that this approach can deliver very high model precision and recall performance in certain graphs, and good performance generally. Our models also consistently outperform a simpler comparison model we developed to resemble earlier work. In addition, our analysis of variable importance for each dataset reveals meaningful information regarding deep network properties.
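The supervised formulation described above (topological scores as features, link presence as the label) can be sketched roughly as follows; the two neighbourhood measures shown are illustrative stand-ins for the paper's much richer feature set of centrality, community and path-based measures:

```python
def common_neighbours(adj, u, v):
    """Number of neighbours shared by u and v."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Jaccard similarity of the two neighbourhoods."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def training_set(adj):
    """Turn an undirected graph (dict of node -> set of neighbours) into
    (feature vector, label) pairs: the label records whether the link
    exists, ready for a supervised learner such as a random forest."""
    nodes = sorted(adj)
    data = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            features = [common_neighbours(adj, u, v), jaccard(adj, u, v)]
            data.append((features, int(v in adj[u])))
    return data
```

Feeding these pairs to a random forest then yields both a link predictor and per-feature importance scores, which is the analysis the paper performs at scale.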
Cuzzocrea, A, Moussa, R, Xu, G & Grasso, GM 2015, 'Cloud-Based OLAP over Big Data: Application Scenarios and Performance Analysis', 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), IEEE, Shen Zhen, pp. 921-927.
View/Download from: Publisher's site
View description>>
Following our previous research results, in this paper we provide two authoritative application scenarios that build on top of OLAP*, a middleware for parallel processing of OLAP queries that realizes effective and efficient OLAP over Big Data. We have provided two authoritative case studies, namely parallel OLAP data cube processing and virtual OLAP data cube design, for which we also propose a comprehensive performance evaluation and analysis. The derived analysis clearly confirms the benefits of our proposed framework.
Cuzzocrea, A, Xu, G & Grasso, GM 2015, 'OLAP-enabled web search of complex objects', Proceedings of the 17th International Conference on Information Integration and Web-based Applications & Services, iiWAS '15: The 17th International Conference on Information Integration and Web-based Application & Services, ACM, Brussels, Belgium.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Inspired by the actual trend of empowering traditional Web search methodologies by means of novel computational paradigms, in this paper we propose and experimentally assess WebClustCube, a novel system that allows OLAP-enabled Web search of complex objects, thus adding new value to the potentialities of current Web search paradigms. In particular, WebClustCube supports the building and the interactive manipulation of OLAP-enabled Web views over complex objects extracted from distributed databases. The data management, OLAP-like support of WebClustCube is provided by ClustCube, a state-of-the-art framework for coupling OLAP methodologies and clustering algorithms with the goal of analyzing and mining complex database objects. A case study that clearly shows the potentialities of WebClustCube in the context of next-generation Web search environments is provided. We complement our analytical contribution with an experimental assessment and analysis of WebClustCube according to several metric perspectives.
Fu, B, Xu, G, Cao, L, Wang, Z & Wu, Z 2015, 'Coupling Multiple Views of Relations for Recommendation', Advances in Knowledge Discovery and Data Mining - LNCS, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Ho Chi Minh City, Vietnam, pp. 732-743.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Learning user/item relation is a key issue in recommender system, and existing methods mostly measure the user/item relation from one particular aspect, e.g., historical ratings, etc. However, the relations between users/items could be influenced by multifaceted factors, so any single type of measure could get only a partial view of them. Thus it is more advisable to integrate measures from different aspects to estimate the underlying user/item relation. Furthermore, the estimation of underlying user/item relation should be optimal for current task. To this end, we propose a novel model to couple multiple relations measured on different aspects, and determine the optimal user/item relations via learning the optimal way of integrating these relation measures. Specifically, matrix factorization model is extended in this paper by considering the relations between latent factors of different users/items. Experiments are conducted and our method shows good performance and outperforms other baseline methods.
Gil-Aluja, J, Terceño-Gómez, A, Ferrer-Comalat, JC, Merigó-Lindahl, JM & Linares-Mustarós, S 2015, 'Scientific Methods for the Treatment of Uncertainty in Social Sciences', Advances in Intelligent Systems and Computing, Springer International Publishing.
View/Download from: Publisher's site
Gong, C, Tao, D, Liu, W, Maybank, SJ, Fang, M, Fu, K & Yang, J 2015, 'Saliency Propagation from Simple to Difficult', 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, pp. 2531-2539.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Saliency propagation has been widely adopted for identifying the most attractive object in an image. The propagation sequence generated by existing saliency detection methods is governed by the spatial relationships of image regions, i.e., the saliency value is transmitted between two adjacent regions. However, for the inhomogeneous difficult adjacent regions, such a sequence may incur wrong propagations. In this paper, we attempt to manipulate the propagation sequence for optimizing the propagation quality. Intuitively, we postpone the propagations to difficult regions and meanwhile advance the propagations to less ambiguous simple regions. Inspired by the theoretical results in educational psychology, a novel propagation algorithm employing the teaching-to-learn and learning-to-teach strategies is proposed to explicitly improve the propagation quality. In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and then assign the simplest regions to the learner. In the learning-to-teach step, the learner delivers its learning confidence to the teacher to assist the teacher to choose the subsequent simple regions. Due to the interactions between the teacher and learner, the uncertainty of original difficult regions is gradually reduced, yielding manifest salient objects with optimized background suppression. Extensive experimental results on benchmark saliency datasets demonstrate the superiority of the proposed algorithm over twelve representative saliency detectors.
Guo, M, Yang, K, Musial-Gabrys, K, Min, G, Yin, H, Nguyen, NP, Jiang, Y, Kourtellis, N, Cheng, X, Leng, S, Wang, H & Dokoohaki, N 2015, 'Message from the MSNCom 2015 Workshop Chairs', 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), IEEE, p. lvi.
View/Download from: Publisher's site
Wang, H, Zhang, P, Chen, L, Liu, H & Zhang, C 2015, 'Online diffusion source detection in social networks', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. In this paper we study a new problem of online diffusion source detection in social networks. Existing work on diffusion source detection focuses on offline learning, which assumes data collected from network detectors are static and a snapshot of network is available before learning. However, an offline learning model does not meet the needs of early warning, real-time awareness, and real-time response of malicious information spreading in social networks. In this paper, we combine online learning and regression-based detection methods for real-time diffusion source detection. Specifically, we propose a new ℓ1 non-convex regression model as the learning function, and an Online Stochastic Sub-gradient algorithm (OSS for short). The proposed model is empirically evaluated on both synthetic and real-world networks. Experimental results demonstrate the effectiveness of the proposed model.
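A minimal sketch of one online sub-gradient update, using a convex least-absolute-deviation loss as a simplified stand-in for the paper's ℓ1 non-convex model; the learning rate and data are illustrative, not from the paper:

```python
def oss_step(w, x, y, lr):
    """One online sub-gradient update for least-absolute-deviation regression:
    a sub-gradient of |y - w.x| with respect to w is -sign(y - w.x) * x,
    so we move w a step of size lr against it."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    residual = y - pred
    sign = (residual > 0) - (residual < 0)  # sub-gradient sign; 0 at the kink
    return [wi + lr * sign * xi for wi, xi in zip(w, x)]
```

Applied repeatedly to a stream of observations, the single weight in the toy example below walks to the target value, which is the kind of real-time update an online detector needs.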
Hanh, LTM, Binh, NT & Tung, KT 2015, 'A Novel Test Data Generation Approach Based Upon Mutation Testing by Using Artificial Immune System for Simulink Models', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 169-181.
View/Download from: Publisher's site
View description>>
Software testing is a costly, labor-intensive and time-consuming activity. Test data generation is one of the most important steps in the testing process in terms of revealing faults in software. A set of test data is considered to be of good quality if it is highly capable of discovering possible faults. Mutation analysis is an effective way to assess the quality of a test set. Nowadays, high-level models such as Simulink are widely used to reduce the time of software development in many industrial fields. This also allows faults to be detected at earlier stages. Verification and validation of Simulink models are becoming vital to users. In this paper, we propose an automated test data generation approach based on mutation testing for Simulink models, using an Artificial Immune System (AIS) to evolve test data. The approach was integrated into the MuSimulink tool [15]. It has been applied to several case studies and the obtained results are very promising.
Hazber, MAG, Li, R, Gu, X, Xu, G & Li, Y 2015, 'Semantic SPARQL Query in a Relational Database Based on Ontology Construction', 2015 11th International Conference on Semantics, Knowledge and Grids (SKG), 2015 11th International Conference on Semantics, Knowledge and Grids (SKG), IEEE, China, pp. 25-32.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Constructing an ontology from RDBs and querying it through ontologies is a fundamental problem for the development of the semantic web. This paper proposes an approach to extract an ontology directly from an RDB in the form of OWL/RDF triples, to ensure its availability on the semantic web. We automatically construct an OWL ontology from an RDB schema using direct mapping rules. The mapping rules provide the basic rules for generating RDF triples from RDB data, even for columns containing null values, and enable semantic query engines to answer more relevant queries. Then we rewrite SQL queries as SPARQL by translating SQL relational algebra into an equivalent SPARQL expression. The proposed method is demonstrated with examples and the effectiveness of the proposed approach is evaluated by experimental results.
Hazber, MAG, Li, R, Zhang, Y & Xu, G 2015, 'An Approach for Mapping Relational Database into Ontology', 2015 12th Web Information System and Application Conference (WISA), 2015 12th Web Information System and Application Conference (WISA), IEEE, Jinan, China, pp. 120-125.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Sharing and reusing the big data in relational databases in a semantic way has become a big challenge. In this paper, we propose a new approach to enable semantic web applications to access relational databases (RDBs) and their contents by semantic methods. Domain ontologies can be used to formulate the RDB schema and data in order to simplify the mapping of the underlying data sources. Our method consists of two main phases: building an ontology from an RDB schema, and automatically generating ontology instances from RDB data. In the first phase, we studied different cases of RDB schema to be mapped into an ontology represented in RDF(S)-OWL, while in the second phase, the mapping rules are used to transform RDB data into ontological instances represented as RDF triples. Our approach is demonstrated with examples and validated by an ontology validator.
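The data-phase mapping (each row becomes ontology instances expressed as RDF triples) can be sketched roughly as follows; the namespace, IRI scheme and null handling are simplifying assumptions, not the paper's exact rules:

```python
def row_to_triples(table, pk_col, row, base="http://example.org/"):
    """Mint a subject IRI from the table name and primary key, emit one
    rdf:type triple plus one triple per non-null, non-key column.
    (Skipping nulls is a simplification of the paper's fuller rules.)"""
    subject = f"<{base}{table}/{row[pk_col]}>"
    triples = [(subject, "rdf:type", f"<{base}{table}>")]
    for col, val in row.items():
        if col != pk_col and val is not None:
            triples.append((subject, f"<{base}{table}#{col}>", f'"{val}"'))
    return triples
```

For example, the row `{"id": 1, "name": "Ann"}` of a `person` table yields a type triple plus one literal triple for `name`; the real mapping additionally handles foreign keys and schema-level constructs.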
Huang, S, Zhang, J, Lu, S & Hua, X-S 2015, 'Social Friend Recommendation Based on Network Correlation and Feature Co-Clustering', Proceedings of the 5th ACM on International Conference on Multimedia Retrieval, ICMR '15: International Conference on Multimedia Retrieval, ACM, Shanghai, pp. 315-322.
View/Download from: Publisher's site
View description>>
Friend recommendation is an important recommender application in social media. Major social websites such as Twitter and Facebook are all capable of recommending friends to individuals. However, friend recommendation is a difficult task and most social websites use simple friend recommendation algorithms such as similarity and popularity, whose level of accuracy does not satisfy the majority of users. In this paper we propose a two-stage procedure for more accurate friend recommendation. In the first stage, based on the relationship of different social networks, the Flickr tag network and contact network are aligned to generate a "possible friend list". In the second stage, making the assumption that a friend's friends also tend to be friends, co-clustering is applied to the tag and image information of the list to refine the recommendation result from the first stage. Experimental results show that the proposed method achieves good performance and every stage contributes to the recommendation.
Huang, X, Yuan, C & Zhang, J 2015, 'Graph Cuts Stereo Matching Based on Patch-Match and Ground Control Points Constraint', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Rim Conference on Multimedia, Springer International Publishing, Gwangju, South Korea, pp. 14-23.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Stereo matching methods based on Patch-Match obtain good results on complex texture regions but show poor ability on low texture regions. In this paper, a new method that integrates Patch-Match and graph cuts (GC) is proposed in order to achieve good results in both complex and low texture regions. A label is randomly assigned for each pixel and the label is optimized through propagation process. All these labels constitute a label space for each iteration in GC. Also, a Ground Control Points (GCPs) constraint term is added to the GC to overcome the disadvantages of Patch-Match stereo in low texture regions. The proposed method has the advantage of the spatial propagation of Patch-Match and the global property of GC. The results of experiments are tested on the Middlebury evaluation system and outperform all the other Patch-Match based methods.
Huang, X, Zhang, J, Wu, Q, Yuan, C & Fan, L 2015, 'Dense Correspondence Using Non-Local DAISY Forest', 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Adelaide, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Dense correspondence computation is a critical computer vision task with many applications. Most existing dense correspondence methods consider all the neighbors connected to the center pixels and use a local support region. However, such an approach might only achieve a locally optimal solution. In this paper, we propose a non-local dense correspondence computation method by calculating the match cost on a tree structure. It is non-local because all other nodes on the tree contribute to the match cost computation for the current node. The proposed method consists of three steps, namely: 1) DAISY descriptor computation; 2) edge-preserving segmentation and forest construction; 3) PatchMatch fast search. We test our algorithm on the Middlebury and Moseg datasets. The results show that the proposed method outperforms the state-of-the-art methods in dense correspondence computation and has low computational complexity.
Hussain, W, Hussain, FK & Hussain, OK 2015, 'Comparative Analysis of Consumer Profile-based Methods to Predict SLA Violation', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. A Service Level Agreement (SLA) is a contract between a service provider and a consumer which specifies in detail the level of service expected from the service provider, obligations, commitment and objectives. In the cloud computing environment, both the cloud provider and the cloud consumer want to know of a likely service violation before the actual violation occurs and to adjust the scaling of the cloud resources appropriately. A consumer's previous resource usage profile is a key element in determining the possibility of service violation in the cloud computing environment, which has not been an area of research focus so far. In this paper, we analyze and compare QoS prediction by considering the consumer's previous resource usage profile in various conditions. From comparative analysis, we observe that by combining a consumer's previous resource usage profile history along with the previous resource usage profile history of its nearest neighbors, we obtain an optimal result.
Hussain, W, Hussain, FK & Hussain, OK 2015, 'Towards Soft Computing Approaches for Formulating Viable Service Level Agreements in Cloud', NEURAL INFORMATION PROCESSING, ICONIP 2015, PT IV, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 639-646.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. A service level agreement (SLA) is a legal document that binds consumers and providers together for the delivery of specific services for a certain period of time. Providers need a viable SLA to maintain successful relationships with consumers. A viable SLA, based on the previous profile of a consumer, will help a service provider determine whether to accept or reject a consumer’s request and the amount of resources to offer them. In this paper we propose a soft computing based approach to form a personalized and viable SLA. This process is carried out in the pre-interaction time phase. We build a Fuzzy Inference System (FIS) and consider a consumer’s reliability value and contract duration as the input factors to determine the amount of resources to offer to the consumer. In addition to the Fuzzy Inference System, we tested various Neural Network-based methods for viable SLA formation and compared their prediction accuracy with the output of the FIS.
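A toy Mamdani-style sketch of the idea, with reliability and contract duration as inputs and a fraction of the requested resources as output; the membership shapes and two-rule base are invented for illustration and are not the paper's FIS:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sla_resource_offer(reliability, duration):
    """Fuzzify reliability (0-1) and contract duration (months), fire two
    illustrative rules, and defuzzify by weighted average to a resource
    fraction. Rule base and memberships are assumptions for this sketch."""
    low_rel = tri(reliability, -0.5, 0.0, 0.6)
    high_rel = tri(reliability, 0.4, 1.0, 1.5)
    short = tri(duration, -6, 0, 12)
    long_ = tri(duration, 6, 24, 36)
    # Rule 1: high reliability AND long contract -> offer full resources (1.0)
    r1 = min(high_rel, long_)
    # Rule 2: low reliability OR short contract -> offer half resources (0.5)
    r2 = max(low_rel, short)
    total = r1 + r2
    return (r1 * 1.0 + r2 * 0.5) / total if total else 0.5
```

A highly reliable consumer on a long contract is offered the full request, while an unknown consumer on a short contract is offered half, mirroring the risk-based offer the abstract describes.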
Hussain, W, Hussain, FK, Hussain, O & Chang, E 2015, 'Profile-based viable Service Level Agreement (SLA) Violation Prediction Model in the Cloud', 2015 10TH INTERNATIONAL CONFERENCE ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING (3PGCIC), International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, IEEE, Krakow, Poland, pp. 268-272.
View/Download from: Publisher's site
View description>>
The World Wide Web (WWW) provides a platform that enables service providers to transcend barriers and engage with current or potential customers globally, resulting in their economic growth and expanded business horizons, thereby creating the internet economy. It enables customers to receive desired services in a cost-effective way, but given the open and ubiquitous nature of the WWW, particularly in cloud computing, both service providers and service consumers need efficient approaches that guarantee their business requirements will be met. Additionally, all stakeholders need an efficient system that predicts any violation before it occurs and recommends how to mitigate those violations to avoid any penalties. In this paper we propose an intelligent, profile-based SLA violation prediction model from the provider's perspective. The model begins monitoring an SLA in the pre-interaction time phase, before finalizing the SLA. It intelligently predicts the consumer's likely resource usage by considering the consumer's reputation from its previous transaction history, and determines the level of required resources based on their reliability. The framework helps service providers make decisions about whether to form SLAs, maximize profit, and avoid service violations in the post-interaction time phase.
Ikram, MA, Alshehri, MD & Hussain, FK 2015, 'Architecture of an IoT-based System for Football Supervision (IoT Football)', 2015 IEEE 2ND WORLD FORUM ON INTERNET OF THINGS (WF-IOT), IEEE World Forum on Internet of Things (WF-IoT), IEEE, Milan, Italy, pp. 69-74.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Football, also called soccer, is one of the most popular sports in the world, if one considers the number of fans as well as the number of players. However, footballers face serious injuries during the match and even during training. Concussion, hypoglycemia, swallowing the tongue and shortness of breath are examples of the health problems footballers face, and in extreme cases, may lead to death. In addition, many sport clubs and sport academies spend millions of dollars contracting new professional footballers or even developing new professional footballers. The Internet of Things (IoT) is a new paradigm that combines various technologies to enhance our lives. Today's technology can protect footballers by diagnosing any health problems, which may occur during the match or training session, which, if detected early, may prevent any adverse effects on their long-term health. This paper proposes an IoT-based architecture for the sport of football, called IoT Football. Our proposal aims to embed sensing devices (e.g. sensors and RFID), telecommunication technologies (e.g. ZigBee) and cloud computing in the sport of football in order to monitor the health of footballers and reduce the occurrence of adverse health conditions. The aim is to integrate the IoT environment, in particular the IoT application, into the field of sport in the form of a new application.
Jiang, X, Liu, W, Cao, L & Long, G 2015, 'Coupled collaborative filtering for context-aware recommendation', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI, Austin, Texas, USA, pp. 4172-4173.
View description>>
Context-aware features have been widely recognized as important factors in recommender systems. However, as a major technique in recommender systems, traditional Collaborative Filtering (CF) does not provide a straightforward way of integrating context-aware information into personalized recommendation. We propose a Coupled Collaborative Filtering (CCF) model to measure the contextual information and use it to improve recommendations. In the proposed approach, a coupled similarity computation is designed, calculated from inter-item, intra-context and inter-context interactions among item, user and context-aware factors. Experiments based on different types of CF models demonstrate the effectiveness of our design.
Kajdanowicz, T, Michalski, R, Musial, K & Kazienko, P 2015, 'Learning in unlabeled networks – An active learning and inference approach', AI Communications, IOS Press, pp. 123-148.
View/Download from: Publisher's site
La Paz, A, Merigó, JM, Ramaprasad, A & Syn, T 2015, 'Impact aspirations of MIS journals: An ontological analysis', Pacific Asia Conference on Information Systems, PACIS 2015 - Proceedings.
View description>>
Journal impact is an ill-structured, complex construct. Present bibliometric and survey measures do not capture it fully. The paper deconstructs the combinatorial complexity of the construct using an ontology which encapsulates 2500 potential components of the construct. The ontology is a parsimonious, systemic, and systematic representation of journal impact. The paper presents an ontological analysis of the impact aspirations of 31 top MIS journals (from one of the published surveys) based on their editorial statements. These statements were mapped to the ontology by the authors using consensus coding. The ontological and heat maps derived from the editorial statements reveal significant 'bright', 'light', and 'blank/blind' spots - aspects with heavy, light, and no emphasis. The differences in luminosity pose a number of questions about the impact these journals seek in the emerging turbulent, competitive research publication market. A comparison of these maps with the journals' bibliometric and survey impact measures highlights the differences between the impact measures, their strengths and weaknesses. The ontology and ontological mapping can be used by the journal editors to realign their impact aspirations and strategies in the emerging marketplace.
Laengle, S, Loyola, G & Merigó, JM 2015, 'OWA Operators in Portfolio Selection', Scientific Methods for the Treatment of Uncertainty in Social Sciences, 18th International SIGEF Congress on Scientific methods for the treatment of uncertainty in social sciences, Springer International Publishing, Girona, Spain, pp. 53-64.
View/Download from: Publisher's site
Li, L, Su, C, Sun, Y, Xiong, S & Xu, G 2015, 'Hashtag Biased Ranking for Keyword Extraction from Microblog Posts', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Knowledge Science, Engineering and Management, Springer International Publishing, Chongqing, China, pp. 348-359.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Nowadays, a huge amount of text is being generated for social networking purposes on the Web. Keyword extraction from such text benefits many applications such as advertising, search, and content filtering. Recent studies show that graph-based ranking is more effective than traditional term or document frequency based approaches. However, most work in the literature constructs a word-to-word graph within a document or a collection of documents before applying a kind of random walk. Such a graph does not consider the influence of document importance on keyword extraction. Moreover, social text like a microblog post usually has special social features, such as hashtags, which can help us understand its topic. In this paper, we propose hashtag-biased ranking for keyword extraction from a collection of microblog posts. We first build a word-post weighted graph by taking into account the posts themselves. Then, a hashtag-biased random walk is applied on this graph, which guides our approach to extract keywords according to the hashtag topic. Last, the final ranking of a word is determined by its stationary probability after a number of iterations. We evaluate our proposed method on real Chinese microblog posts. Experiments show that our method is more effective than traditional word-to-word graph-based ranking in terms of precision.
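A topic-biased random walk of the kind this abstract describes behaves like personalized PageRank: the restart mass is concentrated on the hashtag-related nodes rather than spread uniformly. The following is a minimal generic sketch of that idea, not the authors' implementation; the graph, the bias vector and all names are illustrative.

```python
import numpy as np

def biased_pagerank(adj, bias, damping=0.85, iters=100):
    """Generic topic-biased PageRank: on restart, the walker jumps
    according to `bias` (e.g. mass on hashtag words) instead of the
    uniform distribution. Illustrative sketch only."""
    n = adj.shape[0]
    # column-normalize the adjacency matrix into a transition matrix
    col = adj.sum(axis=0).astype(float)
    col[col == 0] = 1.0          # avoid division by zero for sinks
    P = adj / col
    r = np.full(n, 1.0 / n)      # start from the uniform distribution
    b = bias / bias.sum()        # normalized restart distribution
    for _ in range(iters):
        r = damping * (P @ r) + (1 - damping) * b
    return r

# toy word graph of 3 fully connected words; word 2 is a "hashtag" word
adj = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [1., 1., 0.]])
bias = np.array([0., 0., 1.])    # restart only at the hashtag word
ranks = biased_pagerank(adj, bias)
```

On this symmetric toy graph the biased word ends up with the highest stationary probability, which is exactly the effect a hashtag-biased walk is meant to produce.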
Li, M, Da Xu, RY & He, X 2015, 'Face hallucination based on nonparametric Bayesian learning', 2015 IEEE International Conference on Image Processing (ICIP), 2015 IEEE International Conference on Image Processing (ICIP), IEEE, Quebec City, Canada, pp. 986-990.
View/Download from: Publisher's site
View description>>
In this paper, we propose a novel example-based face hallucination method through nonparametric Bayesian learning, based on the assumption that human faces have similar local pixel structure. We cluster the low-resolution (LR) face image patches with the nonparametric distance-dependent Chinese Restaurant Process (ddCRP) and calculate the centres of the clusters (i.e., subspaces). Then, we learn the mapping coefficients from the LR patches to high-resolution (HR) patches in each subspace. Finally, the HR patches of an input low-resolution face image can be efficiently generated by a simple linear regression. The spatial distance constraint is employed to aid the learning of subspace centres so that every subspace better reflects the detailed information of image patches. Experimental results show our method is efficient and promising for face hallucination.
Li, X, Xu, G, Chen, E & Li, L 2015, 'Learning User Preferences across Multiple Aspects for Merchant Recommendation', 2015 IEEE International Conference on Data Mining, 2015 IEEE International Conference on Data Mining (ICDM), IEEE, Atlantic City, NJ, pp. 865-870.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. With the pervasive use of mobile devices, Location Based Social Networks (LBSNs) have emerged in recent years. These LBSNs, which allow their users to share personal experiences and opinions on visited merchants, contain very rich and useful information that enables a new breed of location-based services, namely, Merchant Recommendation. Existing techniques for merchant recommendation simply treat each merchant as an item and apply conventional recommendation algorithms, e.g., Collaborative Filtering, to recommend merchants to a target user. However, they do not differentiate the user's real preferences on various aspects, and thus can only achieve limited success. In this paper, we aim to address this problem by utilizing and analyzing user reviews to discover user preferences on different aspects. Following the intuition that a user rating represents a personalized rational choice, we propose a novel utility-based approach that combines collaborative and individual views to estimate user preference (i.e., rating). An optimization algorithm based on a Gaussian model is developed to train our merchant recommendation approach. Lastly, we evaluate the proposed approach in terms of effectiveness, efficiency and cold-start performance using two real-world datasets. The experimental results show that our approach outperforms the state-of-the-art methods. Meanwhile, a real mobile application has been implemented to demonstrate the practicability of our method.
Li, X, Xu, G, Chen, E & Li, L 2015, 'MARS: A multi-aspect Recommender system for Point-of-Interest', 2015 IEEE 31st International Conference on Data Engineering, 2015 IEEE 31st International Conference on Data Engineering (ICDE), IEEE, Seoul, Korea, pp. 1436-1439.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. With the pervasive use of GPS-enabled smart phones, location-based services, e.g., Location Based Social Networking (LBSN), have emerged. Point-of-Interest (POI) Recommendation, as a typical component of LBSN, provides additional value to both customers and merchants in terms of user experience and business turnover. Existing POI recommendation systems mainly adopt Collaborative Filtering (CF), which only exploits user-given ratings (i.e., the user's overall evaluation) of a merchant while disregarding the differences in user preference across multiple aspects, which exist commonly in real scenarios. Meanwhile, besides ratings, most LBSNs also provide a review function that allows customers to give their opinions when dealing with merchants, which is often overlooked by these recommender systems. In this demo, we present MARS, a novel POI recommender system based on multi-aspect user preference learning from reviews using utility theory. We first introduce the organization of our system, and then show how user preferences across multiple aspects are integrated into our system, alongside several case studies of mining user preferences and POI recommendations.
Linares-Mustarós, S, Merigó, JM & Ferrer-Comalat, JC 2015, 'Processing Extreme Values in Sales Forecasting', Cybernetics and Systems, Informa UK Limited, pp. 207-229.
View/Download from: Publisher's site
Liu, B, Chen, L, Liu, C, Zhang, C & Qiu, W 2015, 'RCP Mining: Towards the Summarization of Spatial Co-location Patterns', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Symposium on Advances in Spatial and Temporal Databases, Springer International Publishing, Hong Kong, China, pp. 451-469.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Co-location pattern mining is an important task in spatial data mining. However, the traditional framework of co-location pattern mining produces an exponential number of patterns because of the downward closure property, which makes it hard for users to understand or apply them. To address this issue, in this paper, we study the problem of mining representative co-location patterns (RCP). We first define a covering relationship between two co-location patterns by finding a new measure to appropriately quantify the distance between patterns in terms of their prevalence, based on which the problem of RCP mining is formally formulated. To solve the problem of RCP mining, we first propose an algorithm called RCPFast, adopting the post-mining framework that is commonly used by existing distance-based pattern summarization techniques. To address the peculiar challenge in spatial data mining, we further propose another algorithm, RCPMS, which employs the mine-and-summarize framework that pushes pattern summarization into the co-location mining process. Optimization strategies are also designed to further improve the performance of RCPMS. Our experimental results on both synthetic and real-world data sets demonstrate that RCP mining effectively summarizes spatial co-location patterns, and RCPMS is more efficient than RCPFast, especially on dense data sets.
Liu, C & Cao, L 2015, 'A Coupled k-Nearest Neighbor Algorithm for Multi-label Classification', Advances in Knowledge Discovery and Data Mining - LNCS, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Ho Chi Minh City, Vietnam, pp. 176-187.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. ML-kNN is a well-known algorithm for multi-label classification. Although effective in some cases, ML-kNN has some defects due to the fact that it is a binary relevance classifier which only considers one label at a time. In this paper, we present a new method for multi-label classification based on lazy learning, which classifies an unseen instance on the basis of its k nearest neighbors. By introducing a coupled similarity between class labels, the proposed method exploits the correlations between class labels, which overcomes the shortcoming of ML-kNN. Experiments on benchmark data sets show that our proposed Coupled Multi-Label k Nearest Neighbor algorithm (CML-kNN) achieves superior performance to some existing multi-label classification algorithms.
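The lazy-learning scheme the abstract builds on — predict an unseen instance's label set from the label vectors of its k nearest training instances — can be sketched generically as follows. This is a plain Euclidean-distance baseline for illustration only; it does not reproduce the paper's coupled label similarity, and all data and names are made up.

```python
import numpy as np

def multilabel_knn(X_train, Y_train, x, k=3):
    """Predict a binary label vector for x by majority vote over the
    label vectors of its k nearest training instances (Euclidean
    distance). A plain baseline; the CML-kNN label coupling is not
    reproduced here."""
    d = np.linalg.norm(X_train - x, axis=1)   # distance to each training point
    nn = np.argsort(d)[:k]                    # indices of the k nearest
    votes = Y_train[nn].mean(axis=0)          # fraction of neighbours per label
    return (votes >= 0.5).astype(int)         # label on if most neighbours have it

# toy data: two clusters, each carrying one of two labels
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
pred = multilabel_knn(X, Y, np.array([0.05, 0.05]), k=3)
```

A query near the first cluster inherits that cluster's label set; replacing the per-label majority vote with a similarity measure over label co-occurrences is where a coupled approach would differ from this baseline.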
Liu, X, Wang, L, Yin, J, Dou, Y & Zhang, J 2015, 'Absent multiple kernel learning', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Publications, Austin, Texas, pp. 2807-2813.
View description>>
Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Different from existing approaches, where missing channels are first imputed and then a standard MKL algorithm is deployed on the imputed data, our algorithm directly classifies each sample with its observed channels. Specifically, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, which leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem, making it readily solvable via existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. As observed, our algorithm achieves superior performance, and the improvement is more significant as the missing ratio increases.
Lu, Q, Huang, Y, Li, L & Xu, G 2015, 'Learning to rank domain experts in microblogging by combining text and non-text features', 2015 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2015 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Nanjing, China, pp. 28-31.
View/Download from: Publisher's site
View description>>
Currently, microblog search engines offer the function of finding related users according to input topic keywords. Traditional approaches rank users by their authentication information or their self-descriptions (introductions or labels). However, many users may not publish posts closely related to their certification profile. In this paper, we study the problem of identifying domain-dependent influential users (or topic experts). We propose fusing non-text features and text features to analyse the influence of users. In addition, we compare three kinds of ranking methods, i.e., order-based rank aggregation, greedy-selection-based rank aggregation, and the SVM Rank method. Our experimental results show that the highest precision is achieved by the SVM Rank method.
Medvediev, K, Berkovsky, S, Xu, G & Onikienko, Y 2015, 'An Analysis of New Visitors' Website Behaviour before & after TV Advertising', Proceedings of 2015 IEEE International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), International Conference on Behavioral, Economic and Socio-cultural Computing, IEEE, Nanjing, China, pp. 109-115.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. This paper explores and analyses the actions of users on an e-commerce website after they have watched TV advertising. The analysis considers factors such as the month, day and time of the website visit. This article utilises visualization tools for the analysis of the frequency ratios (probabilities) of searches, conversions, and bookings made by new visitors to the website.
Merigó, JM, Yang, J-B & Xu, D-L 2015, 'A Bibliometric Overview of Financial Studies', Scientific Methods for the Treatment of Uncertainty in Social Sciences, 18th International SIGEF Congress on Scientific methods for the treatment of uncertainty in social sciences, Springer International Publishing, Girona, Spain, pp. 245-254.
View/Download from: Publisher's site
Rossetti, M, Stella, F, Cao, L & Zanker, M 2015, 'Analysing User Reviews in Tourism with Topic Models', Springer International Publishing, pp. 47-58.
View/Download from: Publisher's site
Shao, J, Yin, J, Liu, W & Cao, L 2015, 'Actionable combined high utility itemset mining', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Austin, Texas, USA, pp. 4206-4207.
View description>>
The itemsets discovered by traditional High Utility Itemset Mining (HUIM) methods are more useful than frequent itemset mining outcomes; however, they are usually disordered, not actionable, and sometimes accidental, because utility is the only criterion and no relations among itemsets are considered. In this paper, we introduce the concept of combined mining to select combined itemsets that are not only high utility and high frequency, but also involve relations between itemsets. An effective method for mining such actionable combined high utility itemsets is proposed. The experimental results are promising compared to those from a traditional HUIM algorithm (UP-Growth).
Song, K, Feng, S, Gao, W, Wang, D, Chen, L & Zhang, C 2015, 'Build Emotion Lexicon from Microblogs by Combining Effects of Seed Words and Emoticons in a Heterogeneous Graph', Proceedings of the 26th ACM Conference on Hypertext & Social Media - HT '15, the 26th ACM Conference, ACM Press, Guzelyurt, Northern Cyprus, pp. 283-292.
View/Download from: Publisher's site
View description>>
© 2015 ACM. As an indispensable resource for emotion analysis, emotion lexicons have attracted increasing attention in recent years. Most existing methods focus on capturing the single emotional effect of words rather than the emotion distributions which are helpful to model multiple complex emotions in a subjective text. Meanwhile, automatic lexicon building methods are overly dependent on seed words but neglect the effect of emoticons which are natural graphical labels of fine-grained emotion. In this paper, we propose a novel emotion lexicon building framework that leverages both seed words and emoticons simultaneously to capture emotion distributions of candidate words more accurately. Our method overcomes the weakness of existing methods by combining the effects of both seed words and emoticons in a unified three-layer heterogeneous graph, in which a multi-label random walk (MLRW) algorithm is performed to strengthen the emotion distribution estimation. Experimental results on real-world data reveal that our constructed emotion lexicon achieves promising results for emotion classification compared to the state-of-the-art lexicons.
Tsakonas, A & Gabrys, B 2015, 'Application of Base Learners as Conditional Input for Fuzzy Rule-Based Combined System', Studies in Computational Intelligence, Springer International Publishing, pp. 19-32.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2014. The aim of this work is to examine the possibility of using the output of base learners as antecedents for fuzzy rule-based hybrid ensembles. We select a flexible, grammar-driven framework for generating ensembles that combines multilayer perceptrons and support vector machines by means of genetic programming. We assess the proposed model on three real-world regression problems and test it against multi-level, hierarchical ensembles. Our first results show that, for a given large pool of base learners, the outputs of some of them can be useful in the antecedent parts to produce accurate ensembles, while at the same time other, more accurate members of the same pool contribute to the consequent part.
Wang, H, Zhang, P, Chen, L & Zhang, C 2015, 'SocialAnalysis: A Real-Time Query and Mining System from Social Media Data Streams', Databases Theory and Applications (LNCS), Australasian Database Conference, Springer International Publishing, Melbourne, Australia, pp. 318-322.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. In this paper, we present our recent progress in designing a real-time system, SocialAnalysis, to discover and summarize emergent social events from social media data streams. In the social networks era, people frequently post messages or comments about their activities and opinions. Hence, there exist temporal correlations between the physical world and virtual social networks, which can help us to monitor and track social events, detecting and locating anomalous events before they break out, so as to provide early warning. The key technologies in the system include: (1) data denoising methods based on multiple features, which screen out query-related event data from massive background data; (2) abnormal event detection methods based on statistical learning, which detect anomalies by analyzing and mining a series of observations and statistics on the time axis; and (3) geographical position recognition, which is used to recognize the regions where abnormal events may happen.
Wang, H, Zhang, P, Tsang, I, Chen, L & Zhang, C 2015, 'Defragging Subgraph Features for Graph Classification', Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM'15: 24th ACM International Conference on Information and Knowledge Management, ACM, Melbourne, VIC, Australia, pp. 1687-1690.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Graph classification is an important tool for analysing structured and semi-structured data, where subgraphs are commonly used as the feature representation. However, the number and size of subgraph features crucially depend on the threshold parameters of frequent subgraph mining algorithms. Any improper setting of the parameters will generate many trivial short-pattern subgraph fragments which dominate the feature space, distort graph classifiers and bury interesting long-pattern subgraphs. In this paper, we propose a new Subgraph Join Feature Selection (SJFS) algorithm. The SJFS algorithm, by forcing graph classifiers to join short-pattern subgraph fragments, can defrag trivial subgraph features and deliver long-pattern interesting subgraphs. Experimental results on both synthetic and real-world social network graph data demonstrate the performance of the proposed method.
Wang, W, Yin, H, Chen, L, Sun, Y, Sadiq, S & Zhou, X 2015, 'Geo-SAGE: A Geographical Sparse Additive Generative Model for Spatial Item Recommendation', Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM International Conference on Knowledge Discovery and Data Mining, ACM, Sydney, NSW, Australia, pp. 1255-1264.
View/Download from: Publisher's site
View description>>
With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important means to help people discover attractive and interesting venues and events, especially when users travel out of town. However, this recommendation is very challenging compared to traditional recommender systems. A user can visit only a limited number of spatial items, leading to a very sparse user-item matrix. Most of the items visited by a user are located within a short distance from where he/she lives, which makes it hard to recommend items when the user travels to a faraway place. Moreover, user interests and behavior patterns may vary dramatically across different geographical regions. In light of this, we propose Geo-SAGE, a geographical sparse additive generative model for spatial item recommendation, in this paper. Geo-SAGE considers both the user's personal interests and the preference of the crowd in the target region, by exploiting both the co-occurrence pattern of spatial items and the content of spatial items. To further alleviate the data sparsity issue, Geo-SAGE exploits the geographical correlation by smoothing the crowd's preferences over a well-designed spatial index structure called the spatial pyramid. We conduct extensive experiments to evaluate the performance of our Geo-SAGE model on two real large-scale datasets. The experimental results clearly demonstrate that our Geo-SAGE model outperforms the state-of-the-art in the two tasks of out-of-town and home-town recommendation.
Wang, Y, Zhang, J, Liu, Z, Wu, Q, Chou, P, Zhang, Z & Jia, Y 2015, 'Completed Dense Scene Flow in RGB-D Space', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Asian Conference on Computer Vision, Springer International Publishing, Singapore, pp. 191-205.
View/Download from: Publisher's site
View description>>
© 2015, Springer International Publishing Switzerland. Conventional scene flow, containing only translational vectors, is not able to model 3D motion with rotation properly. Moreover, the accuracy of 3D motion estimation is restricted by several challenges such as large displacement, noise, and missing data (caused by sensing techniques or occlusion). In terms of solutions, there are two kinds of approaches: local approaches and global approaches. However, local approaches cannot generate a smooth motion field, and global approaches have difficulty handling large-displacement motion. In this paper, a completed dense scene flow framework is proposed, which models both rotation and translation for general motion estimation. It combines a local method and a global method, considering their complementary characteristics, to handle large-displacement motion and enforce smoothness respectively. The proposed framework is applied in the RGB-D image space, where the computation efficiency is further improved. According to a quantitative evaluation based on the Middlebury dataset, our method outperforms other published methods. The improved performance is further confirmed on real data acquired by a Kinect sensor.
Wang, Z, Yang, Y, Chang, S, Li, J, Fong, S & Huang, TS 2015, 'A joint optimization framework of sparse coding and discriminative clustering', IJCAI International Joint Conference on Artificial Intelligence, 1st International Workshop on Social Influence Analysis / 24th International Joint Conference on Artificial Intelligence (IJCAI), IJCAI, Buenos Aires, Argentina, pp. 3932-3938.
View description>>
Many clustering methods highly depend on extracted features. In this paper, we propose a joint optimization framework in terms of both feature extraction and discriminative clustering. We utilize graph regularized sparse codes as the features, and formulate sparse coding as the constraint for clustering. Two cost functions are developed based on entropy-minimization and maximum-margin clustering principles, respectively, as the objectives to be minimized. Solving such a bi-level optimization mutually reinforces both sparse coding and clustering steps. Experiments on several benchmark datasets verify remarkable performance improvements led by the proposed joint optimization.
Xie, K, Fu, K, Zhou, T, Yang, J, Wu, Q & He, X 2015, 'Small target detection using an optimization-based filter', 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, South Brisbane, Australia, pp. 1583-1587.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Small target detection is a critical problem in the Infrared Search And Track (IRST) system. Although it has been studied for years, some challenges remain, e.g. cloud edges and horizontal lines are likely to cause false alarms. This paper proposes a novel method using an optimization-based filter to detect infrared small targets in heavy clutter. First, we designate a certain pixel area as the active area. Second, a weighted quadratic cost function is formulated over the active area. Finally, a filter based on the statistics of the active area is derived from the cost function. Our method preserves heterogeneous areas while removing the target region. Experimental results show our method achieves satisfactory performance in heavy clutter.
Xu, W, Miao, Z, Zhang, J & Tian, Y 2015, 'Learning Spatio-Temporal Features for Action Recognition with Modified Hidden Conditional Random Field', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), European Conference on Computer Vision, Springer International Publishing, Zurich, Switzerland, pp. 786-801.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Previous work on human action analysis mainly focuses on designing hand-crafted local features and combining their context information. In this paper, we propose using supervised feature learning as a way to learn spatio-temporal features. More specifically, a modified hidden conditional random field is applied to learn two high-level features conditioned on a certain action label. Among them, the individual features describe the appearance of local parts and the interaction features capture their spatial constraints. In order to make the best of what has been learned, a new categorization model is proposed for action matching. It is inspired by the Deformable Part Model, and the intuition is that actions can be modeled by local features in a changeable spatial and temporal dependency. Experimental results show that our algorithm can successfully recognize human actions with high accuracy on both simple atomic action databases (KTH and Weizmann) and a complex interaction activity database (CASIA).
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2015, 'Infinite Author Topic Model based on Mixed Gamma-Negative Binomial Process', 2015 IEEE International Conference on Data Mining (ICDM), IEEE International Conference on Data Mining, IEEE, Atlantic City, USA, pp. 489-498.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Incorporating the side information of a text corpus, i.e., authors, time stamps, and emotional tags, into traditional text mining models has gained significant interest in the areas of information retrieval, statistical natural language processing, and machine learning. One branch of these works is the so-called Author Topic Model (ATM), which incorporates the authors' interests as side information into the classical topic model. However, the existing ATM needs to predefine the number of topics, which is difficult and inappropriate in many real-world settings. In this paper, we propose an Infinite Author Topic (IAT) model to resolve this issue. Instead of assigning a discrete probability to a fixed number of topics, we use a stochastic process to determine the number of topics from the data itself. To be specific, we extend a gamma-negative binomial process to three levels in order to capture the author-document-keyword hierarchical structure. Furthermore, each document is assigned a mixed gamma process that accounts for the multiple authors' contributions towards this document. An efficient Gibbs sampling inference algorithm with each conditional distribution being closed-form is developed for the IAT model. Experiments on several real-world datasets show the capabilities of our IAT model to learn the hidden topics, the authors' interests on these topics and the number of topics simultaneously.
Yan, Y, Tan, M, Tsang, I, Yang, Y, Zhang, C & Shi, Q 2015, 'Scalable maximum margin matrix factorization by active Riemannian subspace search', IJCAI International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence, AAAI, Buenos Aires, Argentina, pp. 3988-3994.
View description>>
The user ratings in recommendation systems are usually in the form of ordinal discrete values. To give more accurate predictions of such rating data, maximum margin matrix factorization (M3F) was proposed. Existing M3F algorithms, however, either have massive computational cost or require expensive model selection procedures to determine the number of latent factors (i.e. the rank of the matrix to be recovered), making them less practical for large-scale data sets. To address these two challenges, in this paper we formulate M3F with a known number of latent factors as a Riemannian optimization problem on a fixed-rank matrix manifold and present a block-wise nonlinear Riemannian conjugate gradient method to solve it efficiently. We then apply a simple and efficient active subspace search scheme to automatically detect the number of latent factors. Empirical studies on both synthetic data sets and large real-world data sets demonstrate the superior efficiency and effectiveness of the proposed method.
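As background to the abstract above, the all-threshold maximum-margin ordinal loss that M3F builds on can be sketched compactly. This is a hedged illustration only: the function name and threshold handling are assumptions, and the paper's actual contribution is the Riemannian solver and active subspace search, not this loss.

```python
def m3f_ordinal_loss(score, rating, thresholds):
    """Sketch of the all-threshold maximum-margin ordinal loss behind
    M3F: the predicted score for a user-item pair (the dot product of
    the latent factors) should fall on the correct side of every
    rating threshold with margin 1."""
    loss = 0.0
    for r, theta in enumerate(thresholds, start=1):
        sign = 1.0 if rating > r else -1.0  # required side of threshold r
        loss += max(0.0, 1.0 - sign * (score - theta))
    return loss

# A score well inside the band for rating 4 incurs no loss;
# a score on the wrong side of several thresholds is penalised.
ok = m3f_ordinal_loss(2.5, 4, [-1.0, 0.0, 1.0])
bad = m3f_ordinal_loss(0.0, 1, [-1.0, 0.0, 1.0])
```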
Yusoff, B & Merigo Lindahl, JM 2015, 'Heavy weighted geometric aggregation operators in analytic hierarchy process-group decision making', 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Istanbul, Turkey.
View/Download from: Publisher's site
Zhang, C, Huang, W, Shi, Y, Yu, PS, Zhu, Y, Tian, Y, Zhang, P & He, J 2015, 'Data Science', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing.
View/Download from: Publisher's site
Zhang, Q, Zhang, P, Long, G, Ding, W, Zhang, C & Wu, X 2015, 'Towards Mining Trapezoidal Data Streams', 2015 IEEE International Conference on Data Mining, 2015 IEEE International Conference on Data Mining (ICDM), IEEE, Atlantic City, New Jersey, United States, pp. 1111-1116.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. We study a new problem of learning from doubly-streaming data where both data volume and feature space increase over time. We refer to the problem as mining trapezoidal data streams. The problem is challenging because both data volume and feature space are increasing, to which existing online learning, online feature selection and streaming feature selection algorithms are inapplicable. We propose a new Sparse Trapezoidal Streaming Data mining algorithm (STSD) and its two variants, which combine online learning and online feature selection to enable learning from trapezoidal data streams with infinite training instances and features. Specifically, when new training instances carrying new features arrive, the classifier updates the existing features by following the passive-aggressive update rule used in online learning and updates the new features with the structural risk minimization principle. Feature sparsity is also introduced using projected truncation techniques. Extensive experiments on UCI data sets demonstrate the performance of the proposed algorithms.
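The passive-aggressive update on a growing feature space described in this abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: here new dimensions are simply initialised at zero, whereas STSD initialises them via structural risk minimization and adds projected-truncation sparsification.

```python
import numpy as np

def pa_update_growing(w, x, y, C=1.0):
    """One PA-I step on a feature space that may have grown since the
    last instance: dimensions of x beyond len(w) are treated as new
    features and padded into w with zeros before the update."""
    d_old, d_new = len(w), len(x)
    if d_new > d_old:  # new features arrived with this instance
        w = np.concatenate([w, np.zeros(d_new - d_old)])
    loss = max(0.0, 1.0 - y * np.dot(w, x))          # hinge loss
    if loss > 0:
        tau = min(C, loss / (np.dot(x, x) + 1e-12))  # PA-I step size
        w = w + tau * y * x
    return w

# Usage: the second instance carries two new features,
# so the weight vector grows from 3 to 5 dimensions.
w = pa_update_growing(np.zeros(3), np.array([1.0, 0.5, -0.2]), 1)
w = pa_update_growing(w, np.array([1.0, 0.5, -0.2, 0.8, 0.1]), -1)
```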
Zhao, M, Zhang, C, Zhang, W, Li, W & Zhang, J 2015, 'Decorrelation-stretch based cloud detection for total sky images', 2015 Visual Communications and Image Processing (VCIP), 2015 Visual Communications and Image Processing (VCIP), IEEE, Singapore.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Cloud detection plays an important role in total-sky-image-based solar forecasting and has received more attention in recent years. Accurate cloud detection for complicated total-sky images is especially challenging due to the low contrast and vague boundaries between cloud and sky regions. Unlike existing cloud detection methods, which do not use any preprocessing, a novel decorrelation-stretch (DS) based method is proposed in this work, in which the total-sky images are first preprocessed using the DS algorithm. With this enhancement, the color-feature disparity between cloud and sky can be intensified notably, and a more accurate threshold can then be obtained by applying Minimum Cross Entropy (MCE) thresholding to the preprocessed image. Experimental results demonstrate that the proposed scheme achieves better performance than existing cloud detection methods on total-sky images, especially for images with low contrast or vague boundaries between cloud and sky regions.
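The two stages named in this abstract, decorrelation stretch followed by MCE thresholding, can be sketched as below. Both functions are hedged approximations: the paper's exact DS formulation and the feature fed to the MCE threshold (assumed here to be a positive 1-D quantity such as a red/blue channel ratio) may differ.

```python
import numpy as np

def decorrelation_stretch(img):
    """Sketch of a decorrelation stretch: whiten the colour channels
    via the eigendecomposition of their covariance, then restore each
    channel's original spread and mean."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # rotate -> equalise variances -> rotate back
    whiten = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-9))) @ evecs.T
    out = (flat - mean) @ whiten * np.sqrt(np.diag(cov)) + mean
    return out.reshape(h, w, c)

def mce_threshold(values, levels=64):
    """Minimum Cross Entropy (Li-style) threshold over a grid of
    candidate cut points on a positive 1-D feature."""
    best_t, best_ce = values.min(), np.inf
    for t in np.linspace(values.min(), values.max(), levels)[1:-1]:
        below, above = values[values <= t], values[values > t]
        if below.size == 0 or above.size == 0:
            continue
        ce = (-below.sum() * np.log(below.mean() + 1e-12)
              - above.sum() * np.log(above.mean() + 1e-12))
        if ce < best_ce:
            best_ce, best_t = ce, t
    return best_t
```

The stretch decorrelates the channels (off-diagonal covariance goes to zero) while keeping each channel's mean and variance, which is what amplifies the cloud/sky colour disparity before thresholding.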
Zhou, X, Chen, L, Zhang, Y, Cao, L, Huang, G & Wang, C 2015, 'Online Video Recommendation in Sharing Community', Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, SIGMOD/PODS'15: International Conference on Management of Data, ACM, Melbourne, Victoria, Australia, pp. 1645-1656.
View/Download from: Publisher's site
View description>>
The creation of sharing communities has resulted in an astonishing increase in digital videos and their wide application in domains such as entertainment and online news broadcasting. The improvement of these applications relies on effective solutions for social users to access video data. This fact has driven recent research interest in social recommendation in shared communities. Although some effort has been put into video recommendation in shared communities, the contextual information on social users has not been well exploited for effective recommendation. In this paper, we propose an approach based on the content and social information of videos for recommendation in sharing communities. Specifically, we first exploit a robust video cuboid signature together with the Earth Mover's Distance to capture the content relevance of videos. We then propose to identify the social relevance of clips using the set of users belonging to a video. We fuse the content relevance and social relevance to identify relevant videos for recommendation. Following that, we propose a novel scheme called sub-community-based approximation, together with a hash-based optimization, to improve the efficiency of our solution. Finally, we propose an algorithm for efficiently maintaining social updates in dynamic shared communities. Extensive experiments are conducted to demonstrate the high effectiveness and efficiency of our proposed video recommendation approach.
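The fusion of content and social relevance described in this abstract can be sketched minimally. Both functions are assumptions for illustration: the paper's social relevance measure over user sets and its fusion weighting are not specified here, so Jaccard overlap and a linear blend stand in for them.

```python
def social_relevance(users_a, users_b):
    """Jaccard overlap of two videos' user sets -- one possible
    reading of 'social relevance of clips using the set of users
    belonging to a video'."""
    a, b = set(users_a), set(users_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def fused_score(content_rel, social_rel, alpha=0.5):
    """Linear fusion of content relevance (e.g. from the cuboid
    signature + Earth Mover's Distance) and social relevance."""
    return alpha * content_rel + (1 - alpha) * social_rel

# Usage: two videos sharing 2 of 4 distinct users.
s = social_relevance([1, 2, 3], [2, 3, 4])
score = fused_score(0.8, s)
```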
Bargi, A, Xu, RYD & Piccardi, M 2015, 'An Adaptive Online HDP-HMM for Segmentation and Classification of Sequential Data'.
Gil-Aluja, J, Terceño-Gómez, A, Ferrer-Comalat, JC, Merigó-Lindahl, JM & Linares-Mustarós, S 2015, 'Preface', pp. v-vi.
Kajdanowicz, T, Michalski, R, Musiał, K & Kazienko, P 2015, 'Learning in Unlabeled Networks - An Active Learning and Inference Approach'.
View description>>
The task of determining the labels of all network nodes based on knowledge of the network structure and the labels of some training subset of nodes is called within-network classification. It may happen that none of the node labels is known and, additionally, there is no information about the number of classes to which nodes can be assigned. In such a case a subset of nodes has to be selected for initial label acquisition. The question that arises is: labels of which nodes should be collected and used for learning in order to provide the best classification accuracy for the whole network? Active learning and inference is a practical framework in which to study this problem.
A set of methods for active learning and inference for within-network classification is proposed and validated. The first step in the process is the calculation of a utility score for each node based on the network structure. The scores make it possible to rank the nodes. Based on the ranking, a set of nodes for which the labels are acquired is selected (e.g. by taking the top or bottom N from the ranking). The new measure-neighbour methods proposed in the paper suggest not obtaining the labels of the ranked nodes themselves but rather acquiring the labels of their neighbours. The paper examines 29 distinct formulations of utility score and selection methods, reporting their impact on the results of two collective classification algorithms: the Iterative Classification Algorithm and Loopy Belief Propagation.
We argue that the accuracy of the presented methods depends on the structural properties of the examined network. We claim that measure-neighbour methods will work better than the regular methods for networks with a higher clustering coefficient, and worse than the regular methods for networks with a low clustering coefficient. According to this hypothesis, the clustering coefficient allows us to recommend an appropriate active learning and inference method.
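The ranking-then-acquisition process described in this abstract can be sketched with one simple structural utility score. Degree is only an illustrative choice (the paper examines 29 formulations), and this selection logic is an assumption, not the authors' code.

```python
from collections import defaultdict

def degree_utility(edges):
    """Node degree as one simple structural utility score."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return dict(deg)

def select_for_labelling(edges, n, measure_neighbour=False):
    """Pick the top-n nodes by utility; in the measure-neighbour
    variant, acquire labels of the ranked nodes' neighbours instead
    of the ranked nodes themselves."""
    deg = degree_utility(edges)
    ranked = sorted(deg, key=deg.get, reverse=True)
    if not measure_neighbour:
        return ranked[:n]
    picked, out = set(), []
    neigh = defaultdict(set)
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    for node in ranked:
        for nb in sorted(neigh[node]):
            if nb not in picked:
                picked.add(nb)
                out.append(nb)
            if len(out) == n:
                return out
    return out
```

On a star graph the two strategies diverge immediately: the regular method labels the hub, while the measure-neighbour variant labels a leaf adjacent to it.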
Peris-Ortiz, M & Merigó-Lindahl, JM 2015, 'Entrepreneurship, Regional Development and Culture', Springer International Publishing, pp. 1-216.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. The aim of this book is to analyze the relationships among entrepreneurship, regional development and culture in the current economy. Using an institutional approach, it examines the main theoretical issues and practices and their effect on different dimensions of society and the economy. Business creation is considered a key element of economic growth, innovation and employment. In recent years, entrepreneurial scholars have studied the factors that affect entrepreneurship and drive economic growth. In doing so, these scholars have aimed to understand what promotes entrepreneurial activity and also how to improve the development of regions or countries to increase wealth in society. The institutional approach can be applied to the entrepreneurship field to understand the phenomenon of entrepreneurship. This view considers the role of environment in the decision to create a company, which is critical to entrepreneurship, innovation and economic growth. Environment relates to legal aspects, public policy and support services (formal institutions) but is especially important in terms of sociocultural context (informal institutions). The creation of new ventures is greatly influenced by culture. Furthermore, it is important to highlight the influence of entrepreneurship on regional development, specifically through job creation, stimulation of economic growth and innovation. Thus, entrepreneurship, regional development and culture are fundamental for understanding economic growth and development as well as other phenomena such as technology transfer or women's entrepreneurship. Featuring contributions and case studies from various countries and sectors, this volume provides an essential reference for scholars, academics, and researchers in entrepreneurship, business management, innovation and economics.