Cancino, CA, Nuñez, A & Merigó, JM 2019, 'Influence of a seed capital program for supporting high growth firms in Chile', Contaduría y Administración, vol. 64, no. 1, pp. 65-65.
<p>The main economic development agency in Chile, CORFO, implemented a Seed Capital Program (SCP) in 2001 to promote the development of high-growth firms. The SCP provides not only financial aid to entrepreneurs but also technical and administrative assistance through the support of incubators. Incubators may be university incubators (UI) or private firms (NUI). The aim of this paper is to assess the performance of beneficiaries according to whether they were assisted by a UI or an NUI. A total of 238 new firms benefiting from the CORFO program were surveyed (84 supported by UI and 154 supported by NUI). Two logistic regression models were used: a first model to assess the probability that a new firm achieves positive sales, and a second model to assess the probability that the new firm reaches high growth during the first five years from its inception. Overall, mixed results were found. SCP beneficiaries supported by either UI or NUI have the same probability of having positive sales when starting their operations. However, five years after starting their operations, businesses supported by UI have higher probabilities of achieving high growth than businesses supported by NUI. The results highlight a positive interaction between private entrepreneurs, public agencies and university incubators.</p>
Adak, C, Chaudhuri, BB, Lin, C-T & Blumenstein, M 2019, 'Intra-Variable Handwriting Inspection Reinforced with Idiosyncrasy Analysis', IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3567-3579.
In this paper, we work on intra-variable handwriting, where the writing samples of an individual can vary significantly. Such within-writer variation throws a challenge for automatic writer inspection, where the state-of-the-art methods do not perform well. To deal with intra-variability, we analyze the idiosyncrasy in individual handwriting. We identify/verify the writer from highly idiosyncratic text-patches. Such patches are detected using a deep recurrent reinforcement learning-based architecture. An idiosyncratic score is assigned to every patch, which is predicted by employing deep regression analysis. For writer identification, we propose a deep neural architecture, which makes the final decision by the idiosyncratic score-induced weighted average of patch-based decisions. For writer verification, we propose two algorithms for patch-fed deep feature aggregation, which assist in authentication using a triplet network. The experiments were performed on two databases, where we obtained encouraging results.
Afzal, MK, Khan, WZ, Umer, T, Kim, B-S & Yu, S 2019, 'Editorial of cross-layer design issues, challenges and opportunities for future intelligent heterogeneous networks', Journal of Ambient Intelligence and Humanized Computing, vol. 10, no. 11, pp. 4207-4208.
Agarwal, A, Dowsley, R, McKinney, ND, Wu, D, Lin, C-T, De Cock, M & Nascimento, ACA 2019, 'Protecting Privacy of Users in Brain-Computer Interface Applications', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 8, pp. 1546-1555.
Machine learning (ML) is revolutionizing research and industry. Many ML applications rely on the use of large amounts of personal data for training and inference. Among the most intimate exploited data sources is electroencephalogram (EEG) data, a kind of data that is so rich with information that application developers can easily gain knowledge beyond the professed scope from unprotected EEG signals, including passwords, ATM PINs, and other intimate data. The challenge we address is how to engage in meaningful ML with EEG data while protecting the privacy of users. Hence, we propose cryptographic protocols based on secure multiparty computation (SMC) to perform linear regression over EEG signals from many users in a fully privacy-preserving (PP) fashion, i.e., such that each individual's EEG signals are not revealed to anyone else. To illustrate the potential of our secure framework, we show how it allows estimating the drowsiness of drivers from their EEG signals as would be possible in the unencrypted case, and at a very reasonable computational cost. Our solution is the first application of commodity-based SMC to EEG data, as well as the largest documented experiment of secret sharing-based SMC in general, namely, with 15 players involved in all the computations.
Alfaro-García, VG, Merigó, JM, Plata-Pérez, L, Alfaro-Calderón, GG & Gil-Lafuente, AM 2019, 'Induced and logarithmic distances with multi-region aggregation operators', Technological and Economic Development of Economy, vol. 0, no. 0, pp. 1-29.
This paper introduces the induced ordered weighted logarithmic averaging distance (IOWLAD) and multiregion induced ordered weighted logarithmic averaging distance (MR-IOWLAD) operators. The distinctive characteristic of these operators lies in the notion of distance measures combined with the complex reordering mechanism of inducing variables and the properties of the logarithmic averaging operators. The main advantage of MR-IOWLAD operators is their design, which is specifically conceived to aid decision-making when a set of diverse regions with different properties must be considered. Moreover, the induced weighting vector and the distance measure mechanisms of the operator allow for the wider modeling of problems, including heterogeneous information and the complex attitudinal character of experts, when aiming for an ideal scenario. Along with analyzing the main properties of the IOWLAD operators, their families and specific cases, we also introduce some extensions, such as the induced generalized ordered weighted logarithmic averaging distance (IGOWLAD) operator and Choquet integrals. We present the induced Choquet logarithmic distance averaging (ICLD) operator and the generalized induced Choquet logarithmic distance averaging (IGCLD) operator. Finally, an illustrative example is proposed, including real-world information retrieved from the United Nations World Statistics for global regions.
Aliyu, A, El-Sayed, H, Abdullah, AH, Alam, I, Li, J & Prasad, M 2019, 'Video Streaming in Urban Vehicular Environments: Junction-Aware Multipath Approach', Electronics, vol. 8, no. 11, pp. 1239-1239.
In multipath video streaming transmission, selecting the best vehicle for video packet forwarding in the junction area is a challenging task due to the several diversions in that area. Vehicles in the junction area change direction based on the different diversions, which leads to video packet drops. Existing works have not explicitly considered the different positions within junction areas when selecting the forwarding vehicle. To address these challenges, a Junction-Aware vehicle selection for Multipath Video Streaming (JA-MVS) scheme is proposed. The JA-MVS scheme considers three different cases in the junction area, namely the vehicle after the junction, before the junction and inside the junction area, with an evaluation of the vehicle signal strength based on the signal to interference plus noise ratio (SINR), building on the multipath data forwarding concept using greedy-based geographic routing. The performance of the proposed scheme is evaluated using the Packet Loss Ratio (PLR), Structural Similarity Index (SSIM) and End-to-End Delay (E2ED) metrics. JA-MVS is compared against two baseline schemes, Junction-Based Multipath Source Routing (JMSR) and Adaptive Multipath geographic routing for Video Transmission (AMVT), in urban Vehicular Ad-Hoc Networks (VANETs).
Alkalbani, AM, Hussain, W & Kim, JY 2019, 'A Centralised Cloud Services Repository (CCSR) Framework for Optimal Cloud Service Advertisement Discovery From Heterogenous Web Portals', IEEE Access, vol. 7, pp. 128213-128223.
A cloud service marketplace is the first point for a consumer to discover, select and possibly compose different services. Some private cloud service marketplaces, such as Microsoft Azure, allow consumers to search service advertisements belonging to a given vendor. However, due to the increase in the number of cloud service advertisements, a consumer needs to find related services across the worldwide web (WWW). A consumer mostly uses a search engine, such as Google or Bing, for service advertisement discovery. However, these search engines are insufficient in retrieving related cloud service advertisements on time. There is a need for a framework that effectively and efficiently discovers related service advertisements for ordinary users. This paper addresses the issue by proposing a user-friendly harvester and a centralised cloud service repository framework. The proposed Centralised Cloud Service Repository (CCSR) framework has two modules: Harvesting as-a-Service (HaaS) and the service repository module. The HaaS module allows users to extract real-time data from the web and make it available in different file formats without the need to write any code. The service repository module provides a centralised cloud service repository that enables efficient and effective cloud service discovery. We validate and demonstrate the suitability of our framework by comparing its efficiency and feasibility with three widely used open-source harvesters. From the evaluation results, we observe that when harvesting a large number of service advertisements, HaaS is more efficient than traditional harvesting tools. Our cloud service advertisements dataset is publicly available for future research at: http://cloudmarketregistry.com/cloud-market-registry/home.html.
Al-Najjar, HAH, Kalantar, B, Pradhan, B, Saeidi, V, Halin, AA, Ueda, N & Mansor, S 2019, 'Land Cover Classification from fused DSM and UAV Images Using Convolutional Neural Networks', Remote Sensing, vol. 11, no. 12, pp. 1461-1461.
In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. Currently, no (or only a few) studies have been done to increase land cover classification accuracy via unmanned aerial vehicle (UAV)–digital surface model (DSM) fused datasets. Therefore, this study looks at improving the accuracy of these datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of two datasets with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) orthomosaic image data only (Red, Green and Blue channel data), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. The CNN is a promising classification method due to its hierarchical learning structure, regularization and weight sharing with respect to training data, generalization, optimization and parameter reduction, automatic feature extraction, and robust discrimination ability with high performance. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the CNN with DSM against the CNN without DSM revealed improvements of 1.2%, 1.8% and 1.5% in overall accuracy, average accuracy and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation specifically where plants wer...
Alshehri, MD & Hussain, FK 2019, 'A fuzzy security protocol for trust management in the internet of things (Fuzzy-IoT)', Computing, vol. 101, no. 7, pp. 791-818.
Recently, the Internet of things (IoT) has received a lot of attention from both industry and academia. A reliable and secure IoT connection and communication is essential for the proper working of the IoT network as a whole. One of the ways to achieve robust security in an IoT network is to enable and build trusted communication among the things (nodes). In this area, the existing IoT literature faces many critical issues, such as the lack of intelligent cluster-based trust approaches for IoT networks and the detection of attacks on the IoT trust system from malicious nodes, such as bad service providers. The existing literature either does not address these issues or only addresses them partially. Our proposed solution can firstly detect on-off attacks using the proposed fuzzy-logic-based approach, as well as contradictory behaviour attacks and other malicious nodes. Secondly, we develop a fuzzy-logic-based approach to detect malicious nodes involved in bad service provisioning. Finally, to maintain the security of the IoT network, we develop a secure messaging system that enables secure communication between nodes. This messaging system uses hexadecimal values with a structure similar to serial communication. We carried out extensive experimentation under varying network sizes to validate the working of our proposed solution and also to test the efficiency of the proposed methods in relation to various types of malicious behaviour. The experiment results demonstrate the effectiveness of our approach under various conditions.
Altaee, A, Braytee, A, Millar, GJ & Naji, O 2019, 'Energy efficiency of hollow fibre membrane module in the forward osmosis seawater desalination process', Journal of Membrane Science, vol. 587, pp. 117165-117165.
This study provided new insights regarding the energy efficiency of hollow fibre forward osmosis modules for seawater desalination, and as a consequence an approach was developed to improve the process performance. Previous analyses overlooked the relationship between the energy efficiency and operating modes of the hollow fibre forward osmosis membrane when the process was scaled up. In this study, the module length and operating parameters were incorporated in the design of an energy-efficient forward osmosis system. The minimum specific power consumption for seawater desalination was calculated at the thermodynamic limits. Two forward osmosis operating modes, (1) draw solution in the lumen and (2) feed solution in the lumen, were evaluated in terms of the desalination energy requirements at a minimum draw solution flow rate. The results revealed that the operating mode of the forward osmosis membrane was important in terms of reducing the desalination energy. In addition, the length of the forward osmosis module was also a significant factor; surprisingly, increasing the module length was not always advantageous in improving the performance. The study outcomes also showed that seawater desalination by the forward osmosis process was less energy efficient at low and high osmotic draw solution concentrations and performed better at 1.2–1.4 M sodium chloride draw solution concentrations. The findings of this study provide a platform for the manufacturers and operators of hollow fibre forward osmosis membranes to improve the energy efficiency of the desalination process.
Amirbagheri, K, Núñez-Carballosa, A, Guitart-Tarrés, L & Merigó, JM 2019, 'Research on green supply chain: a bibliometric analysis', Clean Technologies and Environmental Policy, vol. 21, no. 1, pp. 3-22.
Recently, the emergent concept of the green supply chain (GSC) has received increasing attention. Although the topic is popular among scholars, many literature reviews have only examined GSC from a general point of view or focused on a specific issue related to GSC. This study presents a comprehensive analysis of the influence and productivity of research on GSC from 1995 to 2017 by reporting trends among authors, countries and institutions based on a bibliometric approach. To this end, the study analyzes around 1900 papers on GSC. This study uses the Web of Science Core Collection database to analyze the bibliometric data and the visualization of similarities (VOS) viewer method to graphically map those data. The graphical analysis uses bibliographic coupling, co-citation, co-authorship and co-occurrence of keywords.
Andrade-Valbuena, NA, Merigó-Lindahl, JM, Fernández, LV & Nicolas, C 2019, 'Mapping leading universities in strategy research: Three decades of collaborative networks', Cogent Business & Management, vol. 6, no. 1, pp. 1632569-1632569.
This paper presents a longitudinal classification of the impact that universities have on strategy research across three decades of publications, between 1987 and 2016, using bibliometric techniques and distance-based analysis of networks applied at the level of universities. Using the WoS database, this study proposes a general overview of three decades of strategic management research. Using these techniques we (i) categorize the last 30 years of academic production of research institutions in terms of strategy, evaluating their impact; (ii) analyze which universities are publishing the most in journals whose scope covers strategic management; and (iii) map the network of collaboration structures among research organizations, determining their relationships and analyzing their evolution over those three decades. We found that the University of Pennsylvania was the most prominent institution throughout the years, showing the broadest network of citations according to our network analysis. There was also a remarkable presence of international universities from the UK, Canada, France and the Netherlands; however, the citation pattern among them is still low. We also observed evidence of knowledge flowing among different fields, reflecting the deliberately multidisciplinary nature of research in strategy, as shown by the strong coincidence with the ranking of the main journals in the marketing field when comparing the bibliometric studies of both fields. This analysis contributes to strategy research, first by delivering insights based on the impact of academic production and second through the evolution of collaborative network linkages in strategy research undertaken to build collective knowledge.
Asadabadi, MR, Chang, E & Saberi, M 2019, 'Are MCDM methods useful? A critical review of Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP)', Cogent Engineering, vol. 6, no. 1.
Although Multi Criteria Decision Making (MCDM) methods have been applied in numerous case studies, many companies still avoid employing these methods in making their decisions and prefer to decide intuitively. There are studies claiming that MCDM methods provide better rankings for companies than intuitive approaches. This study argues that this claim may have low validity from a company's perspective. For this purpose, it focuses on one of the MCDM methods, the Analytic Hierarchy Process (AHP), and shows that AHP is very likely to provide a ranking of options that would not be acceptable to a rational person. The main reason that many companies do not rely on current MCDM methods may be that managers intuitively notice ranking errors. Future studies should end the promotion of outdated approaches, pay closer attention to the deficiencies of current MCDM processes, and develop more useful methods.
Atov, I, Chen, K-C & Yu, S 2019, 'Data Science and Artificial Intelligence for Communications', IEEE Communications Magazine, vol. 57, no. 5, pp. 56-56.
Atov, I, Chen, K-C, Kamal, A & Yu, S 2019, 'Data Science and Artificial Intelligence for Communications', IEEE Communications Magazine, vol. 57, no. 11, pp. 82-83.
Baier-Fuentes, H, Merigó, JM, Amorós, JE & Gaviria-Marín, M 2019, 'International entrepreneurship: a bibliometric overview', International Entrepreneurship and Management Journal, vol. 15, no. 2, pp. 385-429.
The aim of this paper is to provide an overview of the academic research on International Entrepreneurship (IE). To accomplish this, an exhaustive bibliometric analysis was carried out, involving a bibliometric performance analysis and graphic mapping of the references in this field. Our analysis focuses on journals, papers, authors, institutions and countries. For the performance analysis, the work uses a series of bibliometric indicators such as the h-index, productivity and citations. Furthermore, the VOS viewer is used to graphically map the bibliographic material. The graphical analysis uses co-citation, bibliographic coupling and co-occurrence of keywords. The results of both analyses are consistent with each other and show that the USA is the most influential country in IE research, as it houses the main authors and institutions in this research field. Moreover, continued global growth of the field is observed and expected. Our research plays an informative and complementary role as it presents most of the key aspects in International Entrepreneurship research.
Bano, M, Zowghi, D, Ferrari, A, Spoletini, P & Donati, B 2019, 'Teaching requirements elicitation interviews: an empirical study of learning from mistakes', Requirements Engineering, vol. 24, no. 3, pp. 259-289.
Interviews are the most widely used elicitation technique in requirements engineering (RE). However, conducting a requirements elicitation interview is challenging. Mistakes made in the design or conduct of interviews can create problems in the later stages of requirements analysis. Empirical evidence about effective pedagogical approaches for training novices in conducting requirements elicitation interviews is scarce. In this paper, we present a novel pedagogical approach for training student analysts in the art of elicitation interviews. Our study is conducted in two parts: first, we perform an observational study of interviews performed by novices and present a classification of the most common mistakes made; second, we utilize this list of mistakes and monitor the students' progress across three sets of interviews to discover the individual areas for improvement. We conducted an empirical study involving role-playing and authentic assessment over two semesters with two different cohorts of students. In the first semester, we had 110 students, teamed up in 28 groups, conduct three interviews with stakeholders. We qualitatively analysed the data to identify and classify the mistakes made in their first interview only. In the second semester, we had 138 students in 34 groups and monitored and analysed their progress across all three interviews by utilizing the list of mistakes from the first study. First, we identified 34 unique mistakes classified into seven high-level themes, namely question formulation, question omission, interview order, communication skills, analyst behaviour, customer interaction, and teamwork and planning. In the second study, we discovered that the students struggled mostly in the areas of question formulation, question omission and interview order and did not manage to improve their skills throughout the three interviews. Our study presents a novel and repeatable pe...
Beydoun, G, Abedin, B, Merigó, JM & Vera, M 2019, 'Twenty Years of Information Systems Frontiers', Information Systems Frontiers, vol. 21, no. 2, pp. 485-494.
Information Systems Frontiers is a leading international journal that publishes research at the interface between information systems and information technology. The journal was launched in 1999 and celebrates its 20th anniversary in 2019. Motivated by this event, this paper reviews the first twenty years of the journal's publication record to uncover the trends most influential on ISF. The analysis considers various metrics, including the citation structure of the journal, the most-cited papers, the most influential authors, institutions and countries, and citing articles. Importantly, the paper presents a thematic analysis of the publications that appeared in ISF over the past 20 years. The thematic analysis is evidenced by two sources of data: first, a bibliometric analysis highlighting core topics within the past 20 years is presented; second, a semantic analysis of keywords introduced by the authors themselves is applied.
Bharill, N, Patel, OP, Tiwari, A, Mu, L, Li, D-L, Mohanty, M, Kaiwartya, O & Prasad, M 2019, 'A Generalized Enhanced Quantum Fuzzy Approach for Efficient Data Clustering', IEEE Access, vol. 7, pp. 50347-50361.
Data clustering is a challenging task for gaining insights into data in various fields. In this paper, an Enhanced Quantum-Inspired Evolutionary Fuzzy C-Means (EQIE-FCM) algorithm is proposed for data clustering. In EQIE-FCM, the quantum computing concept is utilized in combination with the FCM algorithm to improve the clustering process by evolving the clustering parameters. The improvement in the clustering process leads to an improvement in the quality of clustering results. To validate the quality of clustering results achieved by the proposed EQIE-FCM approach, its performance is compared with other quantum-based fuzzy clustering approaches and also with other evolutionary clustering approaches. To evaluate the performance of these approaches, extensive experiments were carried out on various benchmark datasets and on a protein database that comprises four superfamilies. The results indicate that the proposed EQIE-FCM approach finds the optimal value of the fitness function and the fuzzifier parameter for the reported datasets. In addition, the proposed EQIE-FCM approach also finds the optimal number of clusters and more accurate locations of the initial cluster centers for these benchmark datasets. Thus, it can be regarded as a more efficient approach for data clustering.
Blanco-Mesa, F, León-Castro, E & Merigó, JM 2019, 'A bibliometric analysis of aggregation operators', Applied Soft Computing, vol. 81, pp. 105488-105488.
Aggregation operators are mathematical functions that enable the combining and processing of different types of information. The aim of this work is to present the main contributions in this field through a bibliometric review approach. The paper employs an extensive range of bibliometric indicators using the Web of Science (WoS) Core Collection and Scopus datasets. The work considers leading journals, articles, authors, institutions, countries and patterns. This paper highlights that Xu is the most productive author and Yager is the most influential author in the field. Likewise, China is leading the field, with many new researchers having entered it in recent years. This discipline has been strengthening to create a unique theory and will continue to expand with many new theoretical developments and applications.
Blanco‐Mesa, F, León‐Castro, E, Merigó, JM & Herrera‐Viedma, E 2019, 'Variances with Bonferroni means and ordered weighted averages', International Journal of Intelligent Systems, vol. 34, no. 11, pp. 3020-3045.
The variance is a statistical measure frequently used for the analysis of dispersion in data. This paper presents new types of variances that use Bonferroni means and ordered weighted averages in the aggregation process of the variance. The main advantage of this approach is that we can underestimate or overestimate the variance according to the attitudinal character of the decision-maker. The work considers several particular cases, including the minimum and the maximum variance, and presents some numerical examples. The article also develops some extensions and generalizations by using induced aggregation operators and generalized and quasi-arithmetic means. These approaches provide a more general framework that can consider many other particular cases and a complex attitudinal character that may be affected by a wide range of variables. The study ends with an application of the new approach to a business decision-making problem regarding strategic analysis in enterprise risk management.
Blanco-Mesa, F, León-Castro, E, Merigó, JM & Xu, Z 2019, 'Bonferroni means with induced ordered weighted average operators', International Journal of Intelligent Systems, vol. 34, no. 1, pp. 3-23.
The induced ordered weighted average is an averaging aggregation operator that provides a parameterized family of aggregation operators between the minimum and the maximum. This paper presents some new generalizations by using Bonferroni means (BM), forming induced BM. The main advantage of this approach is the possibility of reordering the results according to complex ranking processes based on order-inducing variables. The work also presents some additional extensions using the weighted ordered weighted average, immediate weights, and hybrid averages. Some further generalizations with generalized and quasi-arithmetic means are also developed to cover a wide range of particular cases, including quadratic and geometric aggregations. The article also considers the applicability of the new approach in group decision-making, developing an application in sales forecasting.
Braytee, A, Liu, W, Anaissi, A & Kennedy, PJ 2019, 'Correlated Multi-label Classification with Incomplete Label Space and Class Imbalance', ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 5, pp. 1-26.
Multi-label classification is defined as the problem of identifying the multiple labels or categories of new observations based on labeled training data. Multi-labeled data presents several challenges, including class imbalance, label correlation, incomplete multi-label matrices, and noisy and irrelevant features. In this article, we propose an integrated multi-label classification approach with incomplete label space and class imbalance (ML-CIB) for simultaneously training the multi-label classification model and addressing the aforementioned challenges. The model learns a new label matrix and captures new label correlations, because it is difficult to find a complete label vector for each instance in real-world data. We also propose a label regularization to handle the imbalanced multi-label issue in the new label matrix, and an l1 regularization norm is incorporated in the objective function to select the relevant sparse features. A multi-label feature selection method (ML-CIB-FS) is presented as a variant of the proposed ML-CIB to show the efficacy of the proposed method in selecting the relevant features. ML-CIB is formulated as a constrained objective function. We use the accelerated proximal gradient method to solve the proposed optimisation problem. Lastly, extensive experiments are conducted on 19 regular-scale and large-scale imbalanced multi-labeled datasets. The promising results show that our method significantly outperforms the state-of-the-art.
Bródka, P, Musial, K & Jankowski, J 2019, 'Interacting spreading processes in multilayer networks', IEEE Access, vol. 8, pp. 10316-10341.
View/Download from: Publisher's site
View description>>
The world of network science is fascinating and filled with complex phenomena that we aspire to understand. One of them is the dynamics of spreading processes over complex networked structures. Building the knowledge base in the field, where we can face more than one spreading process propagating over a network that has more than one layer, is a challenging task, as the complexity comes both from the environment in which the spread happens and from the characteristics and interplay of the spreads' propagation. As this cross-disciplinary field bringing together computer science, network science, biology and physics has rapidly grown over the last decade, there is a need to comprehensively review the current state-of-the-art and offer to the research community a roadmap that helps to organise the future research in this area. Thus, this survey is a first attempt to present the current landscape of the multi-processes spread over multilayer networks and to suggest the potential ways forward.
Brown, P, Tan, A-C, El-Esawi, MA, Liehr, T, Blanck, O, Gladue, DP, Almeida, GMF, Cernava, T, Sorzano, CO, Yeung, AWK, Engel, MS, Chandrasekaran, AR, Muth, T, Staege, MS, Daulatabad, SV, Widera, D, Zhang, J, Meule, A, Honjo, K, Pourret, O, Yin, C-C, Zhang, Z, Cascella, M, Flegel, WA, Goodyear, CS, van Raaij, MJ, Bukowy-Bieryllo, Z, Campana, LG, Kurniawan, NA, Lalaouna, D, Hüttner, FJ, Ammerman, BA, Ehret, F, Cobine, PA, Tan, E-C, Han, H, Xia, W, McCrum, C, Dings, RPM, Marinello, F, Nilsson, H, Nixon, B, Voskarides, K, Yang, L, Costa, VD, Bengtsson-Palme, J, Bradshaw, W, Grimm, DG, Kumar, N, Martis, E, Prieto, D, Sabnis, SC, Amer, SEDR, Liew, AWC, Perco, P, Rahimi, F, Riva, G, Zhang, C, Devkota, HP, Ogami, K, Basharat, Z, Fierz, W, Siebers, R, Tan, K-H, Boehme, KA, Brenneisen, P, Brown, JAL, Dalrymple, BP, Harvey, DJ, Ng, G, Werten, S, Bleackley, M, Dai, Z, Dhariwal, R, Gelfer, Y, Hartmann, MD, Miotla, P, Tamaian, R, Govender, P, Gurney-Champion, OJ, Kauppila, JH, Zhang, X, Echeverría, N, Subhash, S, Sallmon, H, Tofani, M, Bae, T, Bosch, O, Cuív, PO, Danchin, A, Diouf, B, Eerola, T, Evangelou, E, Filipp, FV, Klump, H, Kurgan, L, Smith, SS, Terrier, O, Tuttle, N, Ascher, DB, Janga, SC, Schulte, LN, Becker, D, Browngardt, C, Bush, SJ, Gaullier, G, Ide, K, Meseko, C, Werner, GDA, Zaucha, J, Al-Farha, AA, Greenwald, NF, Popoola, SI, Rahman, MS, Xu, J, Yang, SY, Hiroi, N, Alper, OM, Baker, CI, Bitzer, M, Chacko, G, Debrabant, B, Dixon, R, Forano, E, Gilliham, M, Kelly, S, Klempnauer, K-H, Lidbury, BA, Lin, MZ, Lynch, I, Ma, W, Maibach, EW, Mather, DE, Nandakumar, KS, Ohgami, RS, Parchi, P, Tressoldi, P, Xue, Y, Armitage, C, Barraud, P, Chatzitheochari, S, Coelho, LP, Diao, J, Doxey, AC, Gobet, A, Hu, P, Kaiser, S, Mitchell, KM, Salama, MF, Shabalin, IG, Song, H, Stevanovic, D, Yadollahpour, A, Zeng, E, Zinke, K, Alimba, CG, Beyene, TJ, Cao, Z, Chan, SS, Gatchell, M, Kleppe, A, Piotrowski, M, Torga, G, Woldesemayat, AA, Cosacak, MI, Haston, S, Ross, SA, Williams, R, Wong, 
A, Abramowitz, MK, Effiong, A, Lee, S, Abid, MB, Agarabi, C, Alaux, C, Albrecht, DR, Atkins, GJ, Beck, CR, Bonvin, AMJJ, Bourke, E, Brand, T, Braun, RJ, Bull, JA, Cardoso, P, Carter, D, Delahay, RM, Ducommun, B, Duijf, PHG, Epp, T, Eskelinen, E-L, Fallah, M, Farber, DB, Fernandez-Triana, J, Feyerabend, F, Florio, T, Friebe, M, Furuta, S, Gabrielsen, M, Gruber, J, Grybos, M, Han, Q & et al. 2019, 'Large expert-curated database for benchmarking document similarity detection in biomedical literature search', Database, vol. 2019, pp. 1-66.
View/Download from: Publisher's site
View description>>
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
Cancino, CA, Amirbagheri, K, Merigó, JM & Dessouky, Y 2019, 'A bibliometric analysis of supply chain analytical techniques published in Computers & Industrial Engineering', Computers & Industrial Engineering, vol. 137, pp. 106015-106015.
View/Download from: Publisher's site
View description>>
Computers & Industrial Engineering (CAIE) is a leading international journal that publishes manuscripts in the field of supply chain management. Due to the recent advances of different analytical techniques applied in order to address supply-chain-related problems, the aim of this work is to study CAIE publications with a focus on the supply chain using a bibliometric approach that can identify the leading trends in this area by analysing the most significant papers, keywords, authors, institutions and countries. The work also develops a graphical mapping of the bibliographic material by using the visualization of similarities (VOS) viewer software. With this software, the study analyses bibliographic coupling, co-occurrence of author keywords and how the journal is connected with other journals through co-citation analysis. The results indicate that Computers & Industrial Engineering has the fourth-highest number of publications in this area among leading journals that publish in supply chain, and China and Iran are the leading publishing countries, while Taiwan and Singapore have the highest publications per capita. Finally, supply chain optimization modelling received the highest number of publications in the study.
Cao, X, Qiu, B & Xu, G 2019, 'BorderShift: toward optimal MeanShift vector for cluster boundary detection in high-dimensional data', Pattern Analysis and Applications, vol. 22, no. 3, pp. 1015-1027.
View/Download from: Publisher's site
View description>>
We present a cluster boundary detection scheme that exploits MeanShift and the Parzen window in high-dimensional space. To reduce noise interference in the Parzen-window density estimation process, a kNN window is first introduced to replace the fixed-size sliding window. Then, we take the density of a sample as the weight of its drift vector to further improve the stability of the MeanShift vector, which can be used to separate boundary points from core points, noise points and isolated points according to the vector models in multi-density data sets. Under such circumstances, our proposed BorderShift algorithm does not need multiple iterations to reach the optimal detection result. Instead, the Shift value computed for each data point helps to obtain it in a linear way. Experimental results on both synthetic and real data sets demonstrate that the F-measure evaluation of BorderShift is higher than that of other algorithms.
Cao, X, Qiu, B, Li, X, Shi, Z, Xu, G & Xu, J 2019, 'Multidimensional Balance-Based Cluster Boundary Detection for High-Dimensional Data', IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 6, pp. 1867-1880.
View/Download from: Publisher's site
View description>>
The balance of the neighborhood space around a central point is an important concept in cluster analysis. It can be used to effectively detect cluster boundary objects. The existing neighborhood analysis methods focus on the distribution of data, i.e., analyzing the characteristics of the neighborhood space from a single perspective, and could not obtain rich data characteristics. In this paper, we analyze the high-dimensional neighborhood space from multiple perspectives. By simulating each dimension of a data point's k-nearest-neighbors space (kNNs) as a lever, we apply the lever principle to compute the balance fulcrum of each dimension after proving its inevitability and uniqueness. Then, we model the distance between the projected coordinate of the data point and the balance fulcrum on each dimension and construct the DHBlan coefficient to measure the balance of the neighborhood space. Based on this theoretical model, we propose a simple yet effective cluster boundary detection algorithm called Lever. Experiments on both low- and high-dimensional data sets validate the effectiveness and efficiency of our proposed algorithm.
Cao, Z, Chuang, C-H, King, J-K & Lin, C-T 2019, 'Multi-channel EEG recordings during a sustained-attention driving task', Scientific Data, vol. 6, no. 1, p. 19.
View/Download from: Publisher's site
View description>>
We describe driver behaviour and brain dynamics acquired from a 90-minute sustained-attention task in an immersive driving simulator. The data included 62 sessions of 32-channel electroencephalography (EEG) data for 27 subjects driving on a four-lane highway who were instructed to keep the car cruising in the centre of the lane. Lane-departure events were randomly induced to cause the car to drift from the original cruising lane towards the left or right lane. A complete trial included events with deviation onset, response onset, and response offset. The next trial, in which the subject was instructed to drive back to the original cruising lane, began 5–10 seconds after finishing the previous trial. We believe that this dataset will lead to the development of novel neural processing methodology that can be used to index brain cortical dynamics and detect driving fatigue and drowsiness. This publicly available dataset will be beneficial to the neuroscience and brain-computer interface communities.
Cao, Z, Lin, C-T, Ding, W, Chen, M-H, Li, C-T & Su, T-P 2019, 'Identifying Ketamine Responses in Treatment-Resistant Depression Using a Wearable Forehead EEG', IEEE Transactions on Biomedical Engineering, vol. 66, no. 6, pp. 1668-1679.
View/Download from: Publisher's site
View description>>
This study explores responses to ketamine in patients with treatment-resistant depression (TRD) using a wearable forehead electroencephalography (EEG) device. We recruited and randomly assigned 55 outpatients with TRD into three approximately equal-sized groups (A: 0.5-mg/kg ketamine; B: 0.2-mg/kg ketamine; and C: normal saline) under double-blind conditions. The ketamine responses were measured by EEG signals and Hamilton depression rating scale scores. At baseline, the responders showed significantly weaker EEG theta power than the non-responders (p < 0.05). Compared to the baseline, the responders exhibited higher EEG alpha power but lower EEG alpha asymmetry and theta cordance post-treatment (p < 0.05). Furthermore, our baseline EEG predictor classified the responders and non-responders with 81.3 ± 9.5% accuracy, 82.1 ± 8.6% sensitivity, and 91.9 ± 7.4% specificity. In conclusion, the rapid antidepressant effects of mixed doses of ketamine are associated with prefrontal EEG power, asymmetry, and cordance at baseline and early post-treatment changes. Prefrontal EEG patterns at baseline may serve as indicators of ketamine effects. Our randomized double-blind placebo-controlled study provides information regarding the clinical impacts on the potential targets underlying baseline identification and early changes from the effects of ketamine in patients with TRD.
Chacon, D, Braytee, A, Huang, Y, Thoms, J, Subramanian, S, Sauerland, MC, Bohlander, SK, Braess, J, Wörmann, BJ, Berdel, WE, Hiddemann, W, Gabrys, B, Metzeler, KH, Herold, T, Pimanda, J & Beck, D 2019, 'Prospective Identification of Acute Myeloid Leukemia Patients Who Benefit from Gene-Expression Based Risk Stratification', Blood, vol. 134, no. Supplement_1, pp. 1397-1397.
View/Download from: Publisher's site
View description>>
Background: Acute myeloid leukemia (AML) is a highly heterogeneous malignancy and risk stratification based on genetic and clinical variables is standard practice. However, current models incorporating these factors accurately predict clinical outcomes for only 64-80% of patients and fail to provide clear treatment guidelines for patients with intermediate genetic risk. A plethora of prognostic gene expression signatures (PGES) have been proposed to improve outcome predictions but none of these have entered routine clinical practice and their role remains uncertain. Methods: To clarify clinical utility, we performed a systematic evaluation of eight highly-cited PGES i.e. Marcucci-7, Ng-17, Li-24, Herold-29, Eppert-LSCR-48, Metzeler-86, Eppert-HSCR-105, and Bullinger-133. We investigated their constituent genes, methodological frameworks and prognostic performance in four cohorts of non-FAB M3 AML patients (n= 1175). All patients received intensive anthracycline and cytarabine based chemotherapy and were part of studies conducted in the United States of America (TCGA), the Netherlands (HOVON) and Germany (AMLCG). Results: There was a minimal overlap of individual genes and component pathways between different PGES and their performance was inconsistent when applied across different patient cohorts. Concerningly, different PGES often assigned the same patient into opposing adverse- or favorable- risk groups (Figure 1A: Rand index analysis; RI=1 if all patients were assigned to equal risk groups and RI =0 if all patients were assigned to different risk groups). Differences in the underlying methodological framework of different PGES and the molecular heterogeneity between AMLs contributed to these low-fidelity risk assignments. However, all PGES consistently assigned a significant subset of patients into the same adverse- or favorable-risk groups (40%-70%; Figure 1B: Principal componen...
Chen, J, Su, S & Wang, X 2019, 'Towards Privacy-Preserving Location Sharing over Mobile Online Social Networks', IEICE Transactions on Information and Systems, vol. E102.D, no. 1, pp. 133-146.
View/Download from: Publisher's site
Chen, J, Tian, Z, Cui, X, Yin, L & Wang, X 2019, 'Trust architecture and reputation evaluation for internet of things', Journal of Ambient Intelligence and Humanized Computing, vol. 10, no. 8, pp. 3099-3107.
View/Download from: Publisher's site
Chen, S, Wang, Y, Lin, C-T, Ding, W & Cao, Z 2019, 'Semi-supervised feature learning for improving writer identification', Information Sciences, vol. 482, pp. 156-170.
View/Download from: Publisher's site
View description>>
Data augmentation is typically used by supervised feature learning approaches for offline writer identification, but such approaches require a mass of additional training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline is proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we propose a weighted label smoothing regularization (WLSR) method for data augmentation, which assigns a weighted uniform label distribution to the extra unlabeled data. The WLSR method regularizes the convolutional neural network (CNN) baseline to allow more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semi-supervised feature learning approach significantly improves the baseline measurement and performs competitively with existing writer identification approaches. Our findings provide new insights into offline writer identification.
Chen, Z, Li, J & You, X 2019, 'Learn to focus on objects for visual detection', Neurocomputing, vol. 348, pp. 27-39.
View/Download from: Publisher's site
View description>>
State-of-the-art visual detectors utilize object proposals as the reference of objects to achieve higher efficiency. However, the number of proposals needed to ensure full coverage of potential objects is still large because the proposals are generated indiscriminately, exposing proposal computation as a bottleneck. This paper presents a complementary technique that aims to work with any existing proposal-generating system, amending the workflow from “propose-assess” to “propose-adjust-assess”. Inspired by biological processing, we propose to improve the quality of object proposals by analyzing visual contexts and gradually focusing proposals on targets. In particular, the proposed method can be employed with existing proposal generation algorithms based on both hand-crafted features and Convolutional Neural Network (CNN) features. For the former, we realize the focusing function by two learning-based transformation models, which are trained for identifying generic objects using image cues. For the latter, a Focus Proposal Net (FoPN) with cascaded layers, which can be directly injected into CNN models in an end-to-end manner, is developed as the implementation of the focusing operation. Experiments on real-life image data sets demonstrate that the quality of the proposals is improved by the proposed technique. Moreover, it can reduce the number of proposals needed to achieve a high recall rate of the objects based on both hand-crafted and CNN features, and can boost the performance of state-of-the-art detectors.
Cheng, C, Xiao, F & Cao, Z 2019, 'A New Distance for Intuitionistic Fuzzy Sets Based on Similarity Matrix', IEEE Access, vol. 7, pp. 70436-70446.
View/Download from: Publisher's site
Cheng, E-J, Chou, K-P, Rajora, S, Jin, B-H, Tanveer, M, Lin, C-T, Young, K-Y, Lin, W-C & Prasad, M 2019, 'Deep Sparse Representation Classifier for facial recognition and detection system', Pattern Recognition Letters, vol. 125, pp. 71-77.
View/Download from: Publisher's site
View description>>
This paper proposes a two-layer Convolutional Neural Network (CNN) to learn high-level features, which are utilized for face identification via sparse representation. Feature extraction plays a vital role in real-world pattern recognition and classification tasks. A detailed description of the given input face image significantly improves the performance of the facial recognition system. The Sparse Representation Classifier (SRC) is a popular face classifier that sparsely represents the face image by a subset of training data and is known to be insensitive to the choice of feature space. The proposed method shows the performance improvement of SRC via a precisely selected feature extractor. The experimental results show that the proposed method outperforms other methods on the given datasets.
Cheng, EJ, Young, K-Y & Lin, C-T 2019, 'Temporal EEG Imaging for Drowsy Driving Prediction', Applied Sciences, vol. 9, no. 23, pp. 5078-5078.
View/Download from: Publisher's site
View description>>
As a major cause of vehicle accidents, the prevention of drowsy driving has received increasing public attention. Precisely identifying the drowsy state of drivers is difficult since it is an ambiguous event that does not occur at a single point in time. In this paper, we use an electroencephalography (EEG) image-based method to estimate the drowsiness state of drivers. The driver’s EEG measurement is transformed into an RGB image that contains the spatial knowledge of the EEG. Moreover, for considering the temporal behavior of the data, we generate these images using the EEG data over a sequence of time points. The generated EEG images are passed into a convolutional neural network (CNN) to perform the prediction task. In the experiment, the proposed method is compared with an EEG image generated from a single data time point, and the results indicate that the approach of combining EEG images in multiple time points is able to improve the performance for drowsiness prediction.
Cui, L, Qu, Y, Nosouhi, MR, Yu, S, Niu, J-W & Xie, G 2019, 'Improving Data Utility Through Game Theory in Personalized Differential Privacy', Journal of Computer Science and Technology, vol. 34, no. 2, pp. 272-286.
View/Download from: Publisher's site
View description>>
Due to dramatically increasing information published in social networks, privacy issues have given rise to public concerns. Although the presence of differential privacy provides privacy protection with theoretical foundations, the trade-off between privacy and data utility still demands further improvement. However, most existing studies do not consider the quantitative impact of the adversary when measuring data utility. In this paper, we first propose a personalized differential privacy method based on social distance. Then, we analyze the maximum data utility when users and adversaries are blind to the strategy sets of each other. We formalize all the payoff functions in the differential privacy sense, which is followed by the establishment of a static Bayesian game. The trade-off is calculated by deriving the Bayesian Nash equilibrium with a modified reinforcement learning algorithm. The proposed method achieves fast convergence by reducing the cardinality from n to 2. In addition, the in-place trade-off can maximize the user's data utility if the action sets of the user and the adversary are public while the strategy sets are unrevealed. Our extensive experiments on a real-world dataset prove the proposed model is effective and feasible.
Cutler, RL, Torres-Robles, A, Wiecek, E, Drake, B, Van der Linden, N, Benrimoj, SIC & Garcia-Cardenas, V 2019, 'Pharmacist-led medication non-adherence intervention: reducing the economic burden placed on the Australian health care system', Patient Preference and Adherence, vol. 13, pp. 853-862.
View/Download from: Publisher's site
View description>>
Background: Scarcity of prospective medication non-adherence cost measurements for the Australian population, with no directly measured estimates, makes it difficult to determine the burden medication non-adherence places on the Australian health care system. This study aims to indirectly estimate the national cost of medication non-adherence in Australia, comparing the cost prior to and following a community pharmacy-led intervention. Methods: Retrospective observational study. A de-identified database of dispensing data from 20,335 patients (n=11,257 on rosuvastatin, n=6,797 on irbesartan and n=2,281 on desvenlafaxine) was analyzed and the average adherence rate was determined through calculation of the proportion of days covered (PDC). Included patients received a pharmacist-led medication adherence intervention and had twelve months of dispensing records: six months before and six months after the intervention. The national cost estimate of medication non-adherence in hypertension, dyslipidemia and depression pre- and post-intervention was determined through utilization of disease prevalence and comorbidity, non-adherence rates and per-patient disease-specific adherence-related costs. Results: The total national cost of medication non-adherence across three prevalent conditions, hypertension, dyslipidemia and depression, was $10.4 billion, equating to $517 per adult. Following enrollment in the pharmacist-led intervention, medication non-adherence costs per adult decreased by $95, saving the Australian health care system and patients $1.9 billion annually. Conclusion: In the absence of a directly measured national cost of medication non-adherence, this estimate demonstrates that pharmacists are ideally placed to improve patient adherence and reduce the financial burden placed on the health care system due to non-adherence. Funding of medication adherence programs should be considered by policy and decision makers to ease the current burden and improve patient health outcomes moving forward.
Daniel, J, Naderpour, M & Lin, C-T 2019, 'A Fuzzy Multilayer Assessment Method for EFQM', IEEE Transactions on Fuzzy Systems, vol. 27, no. 6, pp. 1252-1262.
View/Download from: Publisher's site
View description>>
Although the European Foundation for Quality Management (EFQM) is one of the best-known business excellence frameworks, its inherent self-assessment approaches have several limitations. A critical review of self-assessment models reveals that most models are ambiguous and limited to precise data. In addition, the impact of expert knowledge on scoring is overly subjective, and most methodologies assume the relationships between variables are linear. This paper presents a new fuzzy multilayer assessment method that relies on fuzzy inference systems to accommodate imprecise data and varying assessor experiences to overcome uncertainty and complexity in the EFQM model. The method was implemented, tested, and verified under real conditions at a regional electricity company. The case was assessed by internal company experts and external assessors from an EFQM business excellence organization and the model was implemented using MATLAB software. When comparing the classical model with the new model, assessors and experts favored outputs from the new model.
Dekhtyar, A, Huffman Hayes, J, Hadar, I, Combs, E, Ferrari, A, Gregory, S, Horkoff, J, Levy, M, Nayebi, M, Paech, B, Payne, J, Primrose, M, Spoletini, P, Clarke, S, Brophy, C, Amyot, D, Maalej, W, Ruhe, G, Cleland-Huang, J & Zowghi, D 2019, 'Requirements Engineering (RE) for Social Good: RE Cares [Requirements]', IEEE Software, vol. 36, no. 1, pp. 86-94.
View/Download from: Publisher's site
View description>>
As researchers and teachers and practitioners, we software types excel at multitasking. This, in part, led us to ask the question: Can one attend a software engineering conference and do something good for society? We found the answer to be a resounding yes. In this article, we present our first experience of running RE Cares, a conference-collocated event. This event included a workshop, conference sessions, and a hackathon for developing an application to support emergency field activity for Mutual Aid Alberta, a nonprofit organization coordinating natural disaster responses in the Canadian province of Alberta.
Ding, W, Lin, C-T & Cao, Z 2019, 'Deep Neuro-Cognitive Co-Evolution for Fuzzy Attribute Reduction by Quantum Leaping PSO With Nearest-Neighbor Memeplexes', IEEE Transactions on Cybernetics, vol. 49, no. 7, pp. 2744-2757.
View/Download from: Publisher's site
View description>>
Attribute reduction with many patterns and indicators has been regarded as an important approach for large-scale data mining and machine learning tasks. However, it is extremely difficult for researchers to adequately extract knowledge and insights from multiple overlapping and interdependent fuzzy datasets drawn from today's changing and interconnected big data sources. This paper proposes a deep neuro-cognitive co-evolution for fuzzy attribute reduction (DNCFAR) that combines quantum leaping particle swarm optimization with nearest-neighbor memeplexes. A key element of DNCFAR resides in its deep neuro-cognitive cooperative co-evolution structure, which is explicitly permitted to identify interdependent variables and adaptively decompose them into the same neuro-subpopulation, minimizing the complexity and nonseparability of interdependent variables among different fuzzy attribute subsets. Next, DNCFAR allows the different types of quantum leaping particles with nearest-neighbor memeplexes to share their respective solutions and deeply cooperate to evolve the assigned fuzzy attribute subsets. The experimental results demonstrate that DNCFAR can achieve competitive performance in terms of average computational efficiency and classification accuracy while reinforcing noise tolerance. Furthermore, it can be well applied to clearly identify different longitudinal surfaces of infant cerebrum regions, which indicates its great potential for brain disorder prediction based on fMRI.
Ding, W, Lin, C-T & Cao, Z 2019, 'Shared Nearest-Neighbor Quantum Game-Based Attribute Reduction With Hierarchical Coevolutionary Spark and Its Application in Consistent Segmentation of Neonatal Cerebral Cortical Surfaces', IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 7, pp. 2013-2027.
View/Download from: Publisher's site
View description>>
The unprecedented increase in data volume has become a severe challenge for conventional patterns of data mining and learning systems tasked with handling big data. The recently introduced Spark platform is a new processing method for big data analysis and related learning systems, which has attracted increasing attention from both the scientific community and industry. In this paper, we propose a shared nearest-neighbor quantum game-based attribute reduction (SNNQGAR) algorithm that incorporates the hierarchical coevolutionary Spark model. We first present a shared coevolutionary nearest-neighbor hierarchy with self-evolving compensation that considers the features of nearest-neighborhood attribute subsets and calculates the similarity between attribute subsets according to the shared neighbor information of attribute sample points. We then present a novel attribute weight tensor model to generate ranking vectors of attributes and apply them to balance the relative contributions of different neighborhood attribute subsets. To optimize the model, we propose an embedded quantum equilibrium game paradigm (QEGP) to ensure that noisy attributes do not degrade the big data reduction results. A combination of the hierarchical coevolutionary Spark model and an improved MapReduce framework is then constructed so that it can better parallelize the SNNQGAR to efficiently determine the preferred reduction solutions of the distributed attribute subsets. The experimental comparisons demonstrate the superior performance of the SNNQGAR, which outperforms most of the state-of-the-art attribute reduction algorithms. Moreover, the results indicate that the SNNQGAR can be successfully applied to segment overlapping and interdependent fuzzy cerebral tissues, and it exhibits a stable and consistent segmentation performance for neonatal cerebral cortical surfaces.
Do, T-TN, Chuang, C-H, Hsiao, S-J, Lin, C-T & Wang, Y-K 2019, 'Neural Comodulation of Independent Brain Processes Related to Multitasking', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 6, pp. 1160-1169.
View/Download from: Publisher's site
View description>>
Distracted driving is regarded as an integrated task requiring different regions of the brain to receive sensory data, coordinate information, make decisions, and synchronize movements. In this paper, we applied an independent modulator analysis (IMA) method to temporally independent electroencephalography (EEG) components to understand how the human executive control system coordinates different brain regions to simultaneously perform multiple tasks with distractions presented in different modalities. The behavioral results showed that the reaction time (RT) in response to traffic events increased while multitasking. Moreover, the RT was longer when the distractor was presented in an auditory form versus a visual form. The IMA results showed that there were performance-related IMs coordinating different brain regions during distracted driving. The component spectral fluctuations affected by the modulators were distinct between the single- and dual-task conditions. Specifically, more modulatory weight was projected to the occipital region to address the additional distracting stimulus in both visual and auditory modality in the dual-task conditions. A comparison of modulatory weights between auditory and visual distractors showed that more modulatory weight was projected to the frontal region during the processing of the auditory distractor. This paper provides valuable insights into the temporal dynamics of attentional modulation during multitasking as well as an understanding of the underlying brain mechanisms that mediate the synchronization across brain regions and govern the allocation of attention in distracted driving.
Dou, W, Tang, W, Li, S, Yu, S & Raymond Choo, K-K 2019, 'A heuristic line piloting method to disclose malicious taxicab driver’s privacy over GPS big data', Information Sciences, vol. 483, pp. 247-261.
View/Download from: Publisher's site
View description>>
© 2018 While privacy preservation is important, there are occasions when an individual's privacy should not be preserved (e.g., those involved in a terrorist attack). Existing works do not generally make such a distinction. We posit the importance of classifying an individual's privacy as positive or negative, say in the case of a misbehaving driver (e.g., a driver involved in a hit-and-run or terrorist attack). This allows us to revoke the misbehaving driver's right to privacy to facilitate investigation. Hence, we propose a heuristic line piloting method, hereafter referred to as HelpMe. Using taxi services as a case study, we explain how the proposed method constantly accumulates knowledge of taxi routes from related historical GPS datasets using machine-learning techniques. Hence, a taxi deviating from the typical route can be detected in real time, which may be used to raise an alert (e.g., the taxi may be hijacked by criminals). We also evaluate the utility of our method on real-life GPS datasets.
Etchebarne, MS, Cancino, CA & Merigó, JM 2019, 'Evolution of the business and management research in Chile', International Journal of Technology, Policy and Management, vol. 19, no. 2, pp. 108-108.
View/Download from: Publisher's site
View description>>
Copyright © 2019 Inderscience Enterprises Ltd. Different aspects have enhanced the development of scientific research in business and management in Chile. The aim of this paper is to analyse the characterisation of this scientific evolution. The method used is a bibliometric analysis. Our sample examines any paper published between 1991 and 2015 in the Web of Science (WoS) database in the area of business and management. The main results show that publications have increased significantly. The increase in scientific productivity may be related, among other factors, to the efforts of Chilean universities that reward and incentivise publications in WoS, the participation of academics in competitive grants (Fondecyt), and international accreditations that demand more productive universities in terms of research. The results of the study could be interesting for universities from developing countries wishing to generate policies to increase productivity in the areas of business and management.
Fang, XS, Sheng, QZ, Wang, X, Chu, D & Ngu, AHH 2019, 'SmartVote: a full-fledged graph-based model for multi-valued truth discovery', World Wide Web, vol. 22, no. 4, pp. 1855-1885.
View/Download from: Publisher's site
View description>>
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. In the era of Big Data, truth discovery has emerged as a fundamental research topic, which estimates data veracity by determining the reliability of multiple, often conflicting data sources. Although considerable research efforts have been conducted on this topic, most current approaches assume only one true value for each object. In reality, objects with multiple true values widely exist and the existing approaches that cope with multi-valued objects still lack accuracy. In this paper, we propose a full-fledged graph-based model, SmartVote, which models two types of source relations with additional quantification to precisely estimate source reliability for effective multi-valued truth discovery. Two graphs are constructed and further used to derive different aspects of source reliability (i.e., positive precision and negative precision) via random walk computations. Our model incorporates four important implications, including two types of source relations, object popularity, loose mutual exclusion, and long-tail phenomenon on source coverage, to pursue better accuracy in truth discovery. Empirical studies on two large real-world datasets demonstrate the effectiveness of our approach.
Feng, B, Li, G, Li, G, Zhang, Y, Zhou, H & Yu, S 2019, 'Enabling Efficient Service Function Chains at Terrestrial-Satellite Hybrid Cloud Networks', IEEE Network, vol. 33, no. 6, pp. 94-99.
View/Download from: Publisher's site
View description>>
The great improvements in both satellite and terrestrial networks have motivated the academic and industrial communities to rethink their integration. As a result, there is increasing interest in new-generation hybrid satellite-terrestrial networks, where sufficient flexibility should be enabled to deploy customized SFCs to satisfy the growing diversity of user needs. However, it is still challenging to achieve this vision, since many key issues remain to be comprehensively addressed, such as framework design, communication procedures, and resource optimization. Therefore, in this article, we focus on how to efficiently deploy customized SFCs at terrestrial-satellite hybrid cloud networks. In particular, we first propose an elastic framework used for SFC deployment at clouds, and second propose an efficient SFC mapping approach to improve system resource utilization. Finally, we verify the proposed framework in a proof-of-concept prototype via a number of use cases, and evaluate the proposed mapping approach through extensive simulations based on a real-world topology. The experimental and simulation results confirm the feasibility and benefits of our proposed framework and mapping approach.
Feng, S, Shen, S, Huang, L, Champion, AC, Yu, S, Wu, C & Zhang, Y 2019, 'Three-dimensional robot localization using cameras in wireless multimedia sensor networks', Journal of Network and Computer Applications, vol. 146, pp. 102425-102425.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd We consider three-dimensional (3D) localization in wireless multimedia sensor networks (WMSNs) and seek optimal localization accuracy in order to ensure real-time data fusion of mobile robots in WMSNs. To this end, we propose a real-time 3D localization algorithm realized by a distributed architecture with various smart devices to overcome network instability and the bottleneck channel at the coordinator. We then employ the recursive least squares (RLS) algorithm to fuse the 2D image coordinates from multiple views synchronously in WMSNs and determine the mobile robot's 3D location in an indoor environment. To minimize wireless data transmission, we also develop a distributed architecture that combines various smart devices by defining the data content transmitted from multiple wireless visual sensors. Moreover, we analyze the factors influencing the network instability of various smart devices, and factors influencing the localization performance of mobile robots in a multiple-view system. Experimental results show the proposed algorithm can achieve reliable, efficient, and real-time 3D localization in indoor WMSNs.
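The recursive least squares (RLS) step at the core of the fusion described above can be sketched generically. This is a minimal, illustrative RLS with a forgetting factor, not the paper's multi-view localization code; the function name, the forgetting factor `lam`, and the toy fitting target are all assumptions made for the example:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam:
    refine parameter estimate theta from regressor x and observation y."""
    x = x.reshape(-1, 1)
    gain = P @ x / (lam + (x.T @ P @ x).item())   # Kalman-style gain vector
    err = y - (x.T @ theta).item()                # innovation (prediction error)
    theta = theta + gain.flatten() * err
    P = (P - gain @ x.T @ P) / lam                # shrink the covariance proxy
    return theta, P

# Recover y = 2*x0 + 3*x1 from a stream of noiseless samples.
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 1000.0
for _ in range(200):
    x = rng.standard_normal(2)
    y = 2.0 * x[0] + 3.0 * x[1]
    theta, P = rls_update(theta, P, x, y)
```

After the stream is consumed, `theta` is close to `[2, 3]`; in the paper's setting the regressors would instead come from synchronized 2D image coordinates across camera views.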
Fis, AM & Cetindamar, D 2019, 'Unlocking the Relationship between Corporate Entrepreneurship and Firm Performance', Entrepreneurship Research Journal, vol. 0, no. 0, pp. 1-47.
View/Download from: Publisher's site
View description>>
This paper explores the relationship between corporate entrepreneurship and performance by developing a comprehensive theoretical model based on a Schumpeterian understanding of entrepreneurship, supported by the Theory of Planned Behavior from social psychology. The model shows how organizational culture (value) triggers a chain effect through its influence on entrepreneurial orientation (attitude) and managerial support (intentions) that ultimately generates an impact on corporate entrepreneurship (behavior). We test our model in an emerging economy context and present our results with implications for theory and practice.
Fu, C, Liu, X-Y, Yang, J, Yang, LT, Yu, S & Zhu, T 2019, 'Wormhole: The Hidden Virus Propagation Power of the Search Engine in Social Networks', IEEE Transactions on Dependable and Secure Computing, vol. 16, no. 4, pp. 693-710.
View/Download from: Publisher's site
View description>>
© 2004-2012 IEEE. Today search engines are tightly coupled with social networks, and present users with a double-edged sword: They are able to acquire information interesting to users but are also capable of spreading viruses introduced by hackers. It is challenging to characterize how a search engine spreads viruses, since the search engine serves as a virtual virus pool and creates propagation paths over the underlying network structure. In this paper, we quantitatively analyze virus propagation effects and the stability of the virus propagation process in the presence of a search engine. First, although social networks have a community structure that impedes virus propagation, we find that a search engine generates a propagation wormhole. Second, we propose an epidemic feedback model and quantitatively analyze propagation effects based on a model employing four metrics: infection density, the propagation wormhole effect, the epidemic threshold, and the basic reproduction number. Third, we verify our analyses on four real-world data sets and two simulated data sets. Moreover, we prove that the proposed model has the property of partial stability. Evaluation results show that, compared to the cases without a search engine, virus propagation with the search engine has a higher infection density, shorter network diameter, greater propagation velocity, lower epidemic threshold, and larger basic reproduction number.
Garcia, JA 2019, 'A Virtual Reality Game-Like Tool for Assessing the Risk of Falling in the Elderly.', Stud Health Technol Inform, vol. 266, pp. 63-69.
View/Download from: Publisher's site
View description>>
In recent years, the use of interactive game technology has gained much interest in the research community as a means to measure indicators associated with the risk of falling in the elderly. Input devices used for gaming offer an inexpensive yet reliable alternative to the costly apparatuses used in clinics and medical centers. In this paper, we explore the feasibility of using virtual reality technology as a tool to assess the risk of falling in the senior community in a more immersive, intuitive and descriptive manner. Our VR-based tool captures stepping performance parameters in order to fulfill the requirements of a well-established clinical test for fall risk assessment. The use of virtual reality allows for an immersive experience where elderly users can fully concentrate on the motor and cognitive functions being assessed rather than the technology being used.
Gaviria-Marin, M, Merigó, JM & Baier-Fuentes, H 2019, 'Knowledge management: A global examination based on bibliometric analysis', Technological Forecasting and Social Change, vol. 140, pp. 194-220.
View/Download from: Publisher's site
View description>>
© 2018 Knowledge management (KM) is a field of research that has gained wide acceptance in the scientific community and management literature. This article presents a bibliometric overview of the academic research on KM in the business and management areas. Various bibliometric methods are used to perform this overview, including performance analysis and science mapping of the KM field. The performance analysis uses a series of bibliometric indicators, such as the h-index, productivity and citations. In addition, the VOSviewer software is used to map the bibliographic material. Science mapping uses co-citations and the co-occurrence of keywords. References were obtained from the Web of Science database. We identified and classified the most relevant research in the field according to journals, articles, authors, institutions and countries. The results show that research in this field has increased significantly in the last ten years and that the USA is the most influential country in all aspects of this field. It is important to consider, however, that science continues to advance in this and in all fields and that data rapidly change over time. Therefore, this paper fulfills an informational role that shows that most of the fundamental research of KM is in business and management areas.
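For readers unfamiliar with the h-index mentioned among the bibliometric indicators, it can be computed in a few lines. This is a generic illustration of the standard definition, not part of the paper's tooling:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have h or more citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank          # this paper still counts toward h
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# the 4th-most-cited paper has 4 citations, the 5th has only 3.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```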
Gay, VC, Garcia, JA & Leong, TW 2019, 'Using Asynchronous Exergames to Encourage an Active Ageing Lifestyle: Solitaire Fitness Study Protocol.', Stud Health Technol Inform, vol. 266, pp. 70-75.
View/Download from: Publisher's site
View description>>
A healthy and active lifestyle can significantly improve well-being and quality of life; however, some elderly people struggle to stay motivated and engaged with any form of exercise. The project Elaine (Elderly, AI and New Experiences) addresses this problem by seeking to improve the quality of life of the elderly through exergames. Currently, the project explores a novel approach in the field of health informatics called asynchronous exergaming. This approach, a new trend in games in the health domain, allows the elderly to work out at their own pace, and in their own time, with their physical activity linked asynchronously to a game. This paper presents the study protocol for Solitaire Fitness, a new asynchronous exergame developed by the team, which aims at increasing the motivation of the elderly to engage in physical exercise whilst helping to maintain their cognitive abilities. The paper also describes the protocol for the trial. The results of this research have the potential to benefit elderly people who need nudging to be motivated to exercise, health care providers treating people with sedentary lifestyles, and researchers investigating ways to encourage the elderly to exercise.
Ghantous, GB & Gill, AQ 2019, 'An Agile-DevOps Reference Architecture for Teaching Enterprise Agile', International Journal of Learning, Teaching and Educational Research, vol. 18, no. 7, pp. 128-144.
View/Download from: Publisher's site
View description>>
©2019 The authors and IJLTER.ORG. All rights reserved. DevOps emerged as an important extension to support Agile development for frequent and continuous software delivery. The adoption of Agile-DevOps for large-scale enterprise agility depends on key human capabilities such as people's competency and experience. Hence, academic education and professional training are key to the successful adoption of the Agile-DevOps approach. Thus, education and training providers need to teach Agile-DevOps. However, the challenge is: how to establish and simulate an effective Agile-DevOps technology environment for teaching Enterprise Agile? This paper introduces the integrated Adaptive Enterprise Project Management (AEPM) and DevOps Reference Architecture (DRA) approach for adopting and teaching Agile-DevOps with the help of a teaching case study from the University of Technology - Sydney (UTS), Australia. These learnings can be utilised by educators to develop and teach practice-oriented Agile-DevOps for software engineering courses. Furthermore, the experience and observations can be employed by researchers and practitioners aiming to integrate Agile-DevOps at the large enterprise scale.
Gill, AQ & Chew, E 2019, 'Configuration information system architecture: Insights from applied action design research.', Inf. Manag., vol. 56, no. 4, pp. 507-525.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. One of the critical information systems that enables service resilience is the service configuration information system (CiS). The fundamental challenge for organisations is the effective designing and implementation of the CiS architecture. This paper addresses this important research problem and reports insights from a completed applied action design research (ADR) project in an Australian financial services organisation. This paper aims to provide guidance to researchers and practitioners contemplating ADR, rooted in the organisational context, for practice-oriented academia-industry collaborative research. This research also contributes in terms of the CiS reference architecture design knowledge and demonstrates the applicability of the ADR method.
Guan, Z, Zhang, Y, Zhu, L, Wu, L & Yu, S 2019, 'EFFECT: an efficient flexible privacy-preserving data aggregation scheme with authentication in smart grid', Science China Information Sciences, vol. 62, no. 3.
View/Download from: Publisher's site
View description>>
© 2019, Science China Press and Springer-Verlag GmbH Germany, part of Springer Nature. Smart grid is considered as a promising approach to solve the problems of carbon emission and energy crisis. In smart grid, the power consumption data are collected to optimize the energy utilization. However, security issues in communications still present practical concerns. To cope with these challenges, we propose EFFECT, an efficient flexible privacy-preserving aggregation scheme with authentication in smart grid. Specifically, in the proposed scheme, we achieve both data source authentication and data aggregation in high efficiency. Besides, in order to adapt to the dynamic smart grid system, the threshold for aggregation is adjusted according to the energy consumption information of each particular residential area and the time period, which can support fault-tolerance while ensuring individual data privacy during aggregation. Detailed security analysis shows that our scheme can satisfy the desired security requirements of smart grid. In addition, we compare our scheme with existing schemes to demonstrate the effectiveness of our proposed scheme in terms of low computational complexity and communication overhead.
Gupta, D, Pratama, M, Ma, Z, Li, J & Prasad, M 2019, 'Financial time series forecasting using twin support vector regression', PLOS ONE, vol. 14, no. 3, pp. e0211402-e0211402.
View/Download from: Publisher's site
View description>>
© 2019 Gupta et al. Financial time series forecasting is a crucial measure for improving and making more robust financial decisions throughout the world. Noisy data and non-stationary information are the two key factors in financial time series prediction. This paper proposes twin support vector regression for financial time series prediction to deal with noisy data and non-stationary information. Various interesting financial time series datasets across a wide range of industries, such as information technology, the stock market, the banking sector, and the oil and petroleum sector, are used for numerical experiments. Further, to test the accuracy of the prediction of the time series, the root mean squared error and the standard deviation are computed, which clearly indicate the usefulness and applicability of the proposed method. The twin support vector regression is computationally faster than other standard support vector regression methods on the given 44 datasets.
Halasi, Z, Maróti, A, Pyber, L & Qiao, Y 2019, 'An improved diameter bound for finite simple groups of Lie type', Bulletin of the London Mathematical Society, vol. 51, no. 4, pp. 645-657.
View/Download from: Publisher's site
View description>>
© 2019 London Mathematical Society For a finite group (Formula presented.), let (Formula presented.) denote the maximum diameter of a connected Cayley graph of (Formula presented.). A well-known conjecture of Babai states that (Formula presented.) is bounded by (Formula presented.) in case (Formula presented.) is a non-abelian finite simple group. Let (Formula presented.) be a finite simple group of Lie type of Lie rank (Formula presented.) over the field (Formula presented.). Babai's conjecture has been verified in case (Formula presented.) is bounded, but it is wide open in case (Formula presented.) is unbounded. Recently, Biswas and Yang proved that (Formula presented.) is bounded by (Formula presented.). We show that in fact (Formula presented.) holds. Note that our bound is significantly smaller than the order of (Formula presented.) for (Formula presented.) large, even if (Formula presented.) is large. As an application, we show that more generally (Formula presented.) holds for any subgroup (Formula presented.) of (Formula presented.), where (Formula presented.) is a vector space of dimension (Formula presented.) defined over the field (Formula presented.).
Han, B, Tsang, IW, Chen, L, Zhou, JT & Yu, CP 2019, 'Beyond Majority Voting: A Coarse-to-Fine Label Filtration for Heavily Noisy Labels', IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 12, pp. 3774-3787.
View/Download from: Publisher's site
View description>>
Crowdsourcing has become the most appealing way to provide a plethora of labels at a low cost. Nevertheless, labels from amateur workers are often noisy, which inevitably degenerates the robustness of subsequent learning models. To improve the label quality for subsequent use, majority voting (MV) is widely leveraged to aggregate crowdsourced labels due to its simplicity and scalability. However, when crowdsourced labels are "heavily" noisy (e.g., 40% of noisy labels), MV may not work well because of the fact "garbage (heavily noisy labels) in, garbage (full aggregated labels) out." This issue inspires us to think: if the ultimate target is to learn a robust model using noisy labels, why not provide partial aggregated labels and ensure that these labels are reliable enough for learning models? To solve this challenge by improving MV, we propose a coarse-to-fine label filtration model called double filter machine (DFM), which consists of a (majority) voting filter and a sparse filter serially. Specifically, the DFM refines crowdsourced labels from coarse filtering to fine filtering. In the stage of coarse filtering, the DFM aggregates crowdsourced labels by voting filter, which yields (quality-acceptable) full aggregated labels. In the stage of fine filtering, DFM further digs out a set of high-quality labels from full aggregated labels by sparse filter, since this filter can identify high-quality labels by the methodology of support selection. Based on the insight of compressed sensing, DFM recovers a ground-truth signal from heavily noisy data under a restricted isometry property. To sum up, the primary benefits of DFM are to keep the scalability by voting filter, while improving the robustness by sparse filter. We also derive theoretical guarantees for the convergence and recovery of DFM and reveal its complexity. We conduct comprehensive experiments on both the UCI simulated and the AMT crowdsourced datasets. Empirical results show that partial aggregated labels...
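The voting filter that forms the DFM's coarse stage is plain majority voting over crowdsourced labels, which can be sketched in a few lines. This is a generic illustration of MV aggregation, not the authors' code, and the fine-stage sparse filter is omitted; the input format is an assumption for the example:

```python
from collections import Counter

def majority_vote(worker_labels):
    """Aggregate crowdsourced labels: each item keeps the label
    that most workers agreed on (ties go to the first-seen label)."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in worker_labels.items()}

votes = {"img1": [1, 1, 0], "img2": [0, 0, 0, 1], "img3": [1, 0, 1]}
print(majority_vote(votes))  # → {'img1': 1, 'img2': 0, 'img3': 1}
```

Under heavy noise this full aggregation is exactly what the DFM's second stage then filters down to a reliable subset.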
Hao, P, Zhang, G, Martinez, L & Lu, J 2019, 'Regularizing Knowledge Transfer in Recommendation With Tag-Inferred Correlation', IEEE Transactions on Cybernetics, vol. 49, no. 1, pp. 83-96.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Traditional recommender systems suffer from the data sparsity problem. However, user knowledge acquired in one domain can be transferred and exploited in several other relevant domains. In this context, cross-domain recommender systems have been proposed to create a new and effective recommendation paradigm in which to exploit rich data from auxiliary domains to assist recommendations in a target domain. Before knowledge transfer takes place, building reliable and concrete domain correlation is key to ensuring that only relevant knowledge will be transferred. Social tags are used to explicitly link different domains, especially when neither users nor items overlap. However, existing models only exploit a subset of tags that are shared by heterogeneous domains. In this paper, we propose a complete tag-induced cross-domain recommendation (CTagCDR) model, which infers interdomain and intradomain correlations from tagging history and applies the learned structural constraints to regularize joint matrix factorization. Compared to similar models, CTagCDR is able to fully explore knowledge encoded in both shared and domain-specific tags. We demonstrate the performance of our proposed model on three public datasets and compare it with five state-of-the-art single and cross-domain recommendation approaches. The results show that CTagCDR works well in both rating prediction and item recommendation tasks, and can effectively improve recommendation performance.
Hesamian, MH, Jia, W, He, X & Kennedy, P 2019, 'Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges', Journal of Digital Imaging, vol. 32, no. 4, pp. 582-596.
View/Download from: Publisher's site
View description>>
© 2019, The Author(s). Deep learning is by now firmly established as a robust tool for image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of the diagnosis and treatment pipeline. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions.
Hu, Y, Manzoor, A, Ekparinya, P, Liyanage, M, Thilakarathna, K, Jourjon, G & Seneviratne, A 2019, 'A Delay-Tolerant Payment Scheme Based on the Ethereum Blockchain', IEEE Access, vol. 7, pp. 33159-33172.
View/Download from: Publisher's site
Huang, K, Chuang, C, Wang, Y, Hsieh, C, King, J & Lin, C 2019, 'The effects of different fatigue levels on brain–behavior relationships in driving', Brain and Behavior, vol. 9, no. 12, p. e01379.
View/Download from: Publisher's site
View description>>
Background: In the past decade, fatigue has been regarded as one of the main factors impairing task performance and increasing behavioral lapses during driving, even leading to fatal car crashes. Although previous studies have explored the impact of acute fatigue through electroencephalography (EEG) signals, it is still unclear how different fatigue levels affect brain–behavior relationships. Methods: A longitudinal study was performed to investigate the brain dynamics and behavioral changes in individuals under different fatigue levels by a sustained attention task. This study used questionnaires in combination with actigraphy, a noninvasive means of monitoring human physiological activity cycles, to conduct longitudinal assessment and tracking of the objective and subjective fatigue levels of recruited participants. In this study, degrees of effectiveness score (fatigue rating) are divided into three levels (normal, reduced, and high risk) by the SAFTE fatigue model. Results: Results showed that those objective and subjective indicators were negatively correlated to behavioral performance. In addition, increased response times were accompanied by increased alpha and theta power in most brain regions, especially the posterior regions. In particular, the theta and alpha power dramatically increased in the high-fatigue (high-risk) group. Additionally, the alpha power of the occipital regions showed an inverted U-shaped change. Conclusion: Our results help to explain the inconsistent findings among existing studies, which considered the effects of only acute fatigue on driving performance while ignoring different levels of resident fatigue, and potentially lead to practical and precise biomathematical mode...
Huang, L, Zhang, G, Yu, S, Fu, A & Yearwood, J 2019, 'SeShare: Secure cloud data sharing based on blockchain and public auditing', Concurrency and Computation: Practice and Experience, vol. 31, no. 22, pp. 1-15.
View/Download from: Publisher's site
View description>>
Summary: In a data sharing group, each user can upload, modify, and access group files, and a user is required to generate a new signature for a modified file after modification. Two or more users may modify the same file at almost the same time, which should be avoided as it gives rise to a signature conflict; however, the existing schemes do not take this into consideration. In this paper, we propose a new blockchain-based data storage mechanism, SeShare, to realize signature uniqueness, which solves the problem of different group users generating signatures for the same file at the same time. Specifically, we record every signature of a file in a blockchain in chronological order, and only one user is allowed to add a new signature at the end of the blockchain when modification conflicts occur. On the other hand, to provide a secure data sharing service, SeShare introduces an efficient public auditing scheme for file integrity verification when a group user leaves the group. We also prove the security of the proposed scheme and evaluate its performance at the end of this paper. Our experimental results demonstrate the efficiency of public auditing for user leaving.
Hussain, W & Sohaib, O 2019, 'Analysing Cloud QoS Prediction Approaches and Its Control Parameters: Considering Overall Accuracy and Freshness of a Dataset.', IEEE Access, vol. 7, pp. 82649-82671.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Service level agreement (SLA) management is one of the key issues in cloud computing. The primary goal of a service provider is to minimize the risk of service violations, as these result in penalties, both monetary and in terms of decreased trustworthiness. To avoid SLA violations, the service provider needs to predict the likelihood of violation for each SLO and its measurable characteristics (QoS parameters) and take immediate action to avoid violations occurring. Several approaches discussed in the literature predict service violations; however, none of these explores how a change in control parameters and the freshness of data impact prediction accuracy and the effective management of the cloud service provider's SLA. The contribution of this paper is two-fold. First, we analyzed the accuracy of six widely used prediction algorithms - simple exponential smoothing, simple moving average, weighted moving average, Holt-Winter double exponential smoothing, extrapolation, and the autoregressive integrated moving average - by varying their individual control parameters. Each of the approaches is compared across 10 different datasets at different time intervals between 5 min and 4 weeks. Second, we analyzed the prediction accuracy of the simple exponential smoothing method by considering the freshness of the data; i.e., how the accuracy varies in the initial time period of prediction compared to later ones. To achieve this, we divided the cloud QoS dataset into sets of input values that range from 100 to 500 intervals in sets of 1-100, 1-200, 1-300, 1-400, and 1-500. From the analysis, we observed that different prediction methods behave differently based on the control parameter and the nature of the dataset. The analysis helps service providers choose a suitable prediction method with optimal control parameters so that they can obtain accurate prediction results, manage SLAs intelligently, and avoid violation penalties.
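Of the six methods compared, simple exponential smoothing is the easiest to sketch. This is a minimal illustration of the standard recurrence with an assumed smoothing parameter alpha and made-up QoS values, not the authors' experimental setup:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: level s_t = alpha*y_t + (1-alpha)*s_{t-1};
    the one-step-ahead forecast is the final smoothed level."""
    s = series[0]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

# A larger alpha weights recent QoS observations more heavily,
# which matters when only the "freshest" part of the dataset is trusted.
print(ses_forecast([0.91, 0.94, 0.90, 0.95], alpha=0.8))
```

The control parameter alpha is exactly the kind of knob whose effect on accuracy the paper analyzes across datasets.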
Islam, MR, Lu, H, Hossain, MJ & Li, L 2019, 'Mitigating unbalance using distributed network reconfiguration techniques in distributed power generation grids with services for electric vehicles: A review', Journal of Cleaner Production, vol. 239, pp. 117932-117932.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd With the rapid movement to combat climate change by reducing greenhouse gases, there is an increasing trend to use more electric vehicles (EVs) and renewable energy sources (RES). With more EVs integrated into the electricity grid, distribution service operators (DSOs) face many challenges in integrating such RES-based distributed generation (DG) and EV-like distributed loads into distribution grids. Effective management of distribution network imbalance is one of these challenges. Distribution network reconfiguration (DNR) techniques are promising for addressing imbalance, along with other techniques such as the optimal distributed generation placement and allocation (OPDGA) method. This paper presents a systematic and thorough review of DNR techniques for mitigating unbalance in distribution networks, based on papers published in peer-reviewed journals in the last three decades. It focuses on how DNR techniques have been used to manage network imbalance due to distributed loads and DG units. To the best of our knowledge, this is the first attempt to review research on using DNR techniques to mitigate unbalance in distribution networks. This paper will therefore serve as a prime source of guidance on mitigating network imbalance using DNR techniques for new researchers in this field.
Ivanyos, G & Qiao, Y 2019, 'Algorithms Based on *-Algebras, and Their Applications to Isomorphism of Polynomials with One Secret, Group Isomorphism, and Polynomial Identity Testing', SIAM Journal on Computing, vol. 48, no. 3, pp. 926-963.
View/Download from: Publisher's site
View description>>
© 2019 Society for Industrial and Applied Mathematics. We consider two basic algorithmic problems concerning tuples of (skew-)symmetric matrices. The first problem asks us to decide, given two tuples of (skew-)symmetric matrices (B1,..., Bm) and (C1,..., Cm), whether there exists an invertible matrix A such that for every i ∈ {1,..., m}, A^t Bi A = Ci. We show that this problem can be solved in randomized polynomial time over finite fields of odd size, the reals, and the complex numbers. The second problem asks us to decide, given a tuple of square matrices (B1,..., Bm), whether there exist invertible matrices A and D, such that for every i ∈ {1,..., m}, A Bi D is (skew-)symmetric. We show that this problem can be solved in deterministic polynomial time over fields of characteristic not 2. For both problems we exploit the structure of the underlying ∗-algebras (algebras with an involutive antiautomorphism) and utilize results and methods from the module isomorphism problem. Applications of our results range from multivariate cryptography to group isomorphism and to polynomial identity testing. Specifically, these results imply efficient algorithms for the following problems. (1) Test isomorphism of quadratic forms with one secret over a finite field of odd size. This problem belongs to a family of problems that serves as the security basis of certain authentication schemes proposed by Patarin [J. Patarin, in Advances in Cryptology, EUROCRYPT'96, Springer, Berlin, 1996, pp. 33-48]. (2) Test isomorphism of p-groups of class 2 and exponent p (p odd) in time polynomial in the group order, when the commutator subgroup is of order p^O(1). (3) Deterministically reveal two families of singularity witnesses caused by the skew-symmetric structure. This represents a natural next step for the polynomial identity testing problem, in the direction set up by the recent resolution of the noncommutative rank problem [A. Garg et al., in Proceedings of the 57th Annu...
Jiang, J, Gao, L, Jin, J, Luan, TH, Yu, S, Xiang, Y & Garg, S 2019, 'Sustainability Analysis for Fog Nodes With Renewable Energy Supplies', IEEE Internet of Things Journal, vol. 6, no. 4, pp. 6725-6735.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. There is a growing interest in the use of renewable energy sources to power fog networks in order to mitigate the detrimental effects of conventional energy production. However, renewable energy sources, such as solar and wind, are by nature unstable in their availability and capacity. The dynamics of energy supply hence impose new challenges for network planning and resource management. In this paper, the sustainable performance of a fog node powered by renewable energy sources is studied. We develop a generic analytical model to study the energy sustainability of fog nodes powered by renewable energy sources, by generalizing the leaky bucket model used to shape and police traffic sources for rate-based congestion control in high-speed fog networks. Based on the closed-form solutions of the energy buffer analysis, i.e., the energy depletion probability and the mean energy length, we study energy sustainability in two special but realistic scenarios. The experimental results show that, with proper design, the leaky bucket model effectively reflects the energy sustainability of data traffic in fog networks. Numerical results also reveal that the model performance is sensitive to certain traffic source characteristics in fog networks.
Jiang, P, Wang, B, Li, H & Lu, H 2019, 'Modeling for chaotic time series based on linear and nonlinear framework: Application to wind speed forecasting', Energy, vol. 173, pp. 468-482.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Wind-speed forecasting plays a crucial part in improving the operational efficiency of wind power generation. However, accurate forecasts are difficult owing to the uncertainty of the wind speed. Although numerous investigations of wind-speed forecasting have been performed, many of the previous studies used wind-speed data directly to make forecasts, which were rarely based on the structural characteristics of the data. Therefore, in this study, a hybrid linear-nonlinear modeling method based on the chaos theory was successfully employed to capture the linear and nonlinear factors hidden in chaotic time series. Before the forecast, the noise in the data was removed using a decomposition algorithm. Then, through the phase-space reconstruction, the one-dimensional time series were extended to the multi-dimensional space to determine the utilization form of the data. Finally, Holt's exponential smoothing based on the firefly optimization algorithm and support vector regression were combined to predict the wind speed. The experimental results show that the proposed model is not only better than the comparison models but also has great application potential in the wind power generation system.
Jin, X, Gu, F, Niu, J, Yu, S & Ouyang, Z 2019, 'HRCal: An effective calibration system for heart rate detection during exercising', Journal of Network and Computer Applications, vol. 136, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Heart rate directly reflects heart health, and the detection of heart rate contributes to finding abnormal heart activity in a timely manner. Nevertheless, there is scope for significant improvement in current heart rate detection systems and devices, especially during strenuous exercise. Motion compensation algorithms are used in most current systems to improve monitoring accuracy, but they are limited by the sensors and their performance is not satisfactory. In this paper, we propose HRCal, a novel Heart Rate Calibration System, which establishes a Long Short-Term Memory (LSTM) model to calibrate the detection of heart rate based on multisensor data fusion. Specifically, HRCal utilizes the built-in sensors (e.g. accelerometer, gyroscope and magnetometer) of smart devices (smartphones and sports watches) to collect users' motion data. Then an LSTM model is proposed and trained with different features to improve the accuracy and reliability of heart rate detection. In addition, we elaborately design an evaluation scheme to compare HRCal with other approaches. We have fully implemented HRCal on the Android platform, and the experimental results (8 subjects) demonstrate that HRCal has a remarkable effect on common sports watches, improving their accuracy of heart rate detection in physical training (up to 12.5% for the Moto 360 and 6.8% for the Mio Alpha).
Jin, Y, Wu, H, Merigó, JM & Peng, B 2019, 'Generalized Hamacher Aggregation Operators for Intuitionistic Uncertain Linguistic Sets: Multiple Attribute Group Decision Making Methods', Information, vol. 10, no. 6, pp. 206-206.
View/Download from: Publisher's site
View description>>
In this paper, we consider multiple attribute group decision making (MAGDM) problems in which the attribute values take the form of intuitionistic uncertain linguistic variables. Based on Hamacher operations, we developed several Hamacher aggregation operators, which generalize the arithmetic aggregation operators and geometric aggregation operators, and extend the algebraic aggregation operators and Einstein aggregation operators. A number of special cases for the two operators with respect to the parameters are discussed in detail. Also, we developed an intuitionistic uncertain linguistic generalized Hamacher hybrid weighted average operator to reflect the importance degrees of both the given intuitionistic uncertain linguistic variables and their ordered positions. Based on the generalized Hamacher aggregation operator, we propose a method for MAGDM for intuitionistic uncertain linguistic sets. Finally, a numerical example and comparative analysis with related decision making methods are provided to illustrate the practicality and feasibility of the proposed method.
Kacprzyk, J, Yager, RR & Merigo, JM 2019, 'Towards Human-Centric Aggregation via Ordered Weighted Aggregation Operators and Linguistic Data Summaries: A New Perspective on Zadeh's Inspirations', IEEE Computational Intelligence Magazine, vol. 14, no. 1, pp. 16-30.
View/Download from: Publisher's site
View description>>
© 2005-2012 IEEE. This work presents a new perspective on how Zadeh's ideas related to fuzzy logic and computing with words have influenced the crucial issue of information aggregation and have led to what may be called a human-centric aggregation. We indicate a need to develop tools and techniques to reflect some fine shades of meaning regarding what can be considered the very purpose of human-centric aggregation, notably stated by various modalities in natural language specifications, in particular the usuality. We advocate the use of the ordered weighted average (OWA) operator, which is a formidable tool that can easily be tailored to a user's intention as to the purpose and method of aggregation, generalizing many simple and natural aggregation types, such as the arithmetic mean, maximum and minimum, and probability. We show some of the most representative extensions and generalizations, including the induced OWA, the generalized OWA, the probabilistic OWA, and the OWA distance. We show their use in the basic case of the aggregation of numerical values and in social choice (voting) results. Then, we claim that linguistic data summaries in Yager's sense can be considered an "ultimately human consistent" form of human-centric aggregation and show how the OWA operators can be used therein.
Kalantar, Al-Najjar, Pradhan, Saeidi, Halin, Ueda & Naghibi 2019, 'Optimized Conditioning Factors Using Machine Learning Techniques for Groundwater Potential Mapping', Water, vol. 11, no. 9, pp. 1909-1909.
View/Download from: Publisher's site
View description>>
Assessment of the most appropriate groundwater conditioning factors (GCFs) is essential when performing analyses for groundwater potential mapping. For this reason, in this work, we look at three statistical factor analysis methods—Variance Inflation Factor (VIF), Chi-Square Factor Optimization, and Gini Importance—to measure the significance of GCFs. From a total of 15 frequently used GCFs, the 11 most effective ones (i.e., altitude, slope angle, plan curvature, profile curvature, topographic wetness index, distance from river, distance from fault, river density, fault density, land use, and lithology) were finally selected. In addition, 917 spring locations were identified and used to train and test three machine learning algorithms, namely Mixture Discriminant Analysis (MDA), Linear Discriminant Analysis (LDA) and Random Forest (RF). The resultant trained models were then applied for groundwater potential prediction and mapping in the Haraz basin of Mazandaran province, Iran. MDA has been successfully applied for soil erosion and landslide mapping, but has not yet been fully explored for groundwater potential mapping (GPM). Although other discriminant methods, such as LDA, exist, MDA is worth exploring due to its capability to model multivariate nonlinear relationships between variables; it also undertakes a mixture of unobserved subclasses with regularization of non-linear decision boundaries, which could potentially provide more accurate classification. For the validation, areas under Receiver Operating Characteristics (ROC) curves (AUC) were calculated for the three algorithms. RF performed best with an AUC value of 84.4%, while MDA and LDA yielded 75.2% and 74.9%, respectively. Although MDA's performance is lower than RF's, the result is satisfactory, because it is within the acceptable standard of environmental modeling. The outcome of the factor analysis and groundwater maps emphasizes the optimization of multicollinearity factors for faster spatial m...
Kieferová, M, Scherer, A & Berry, DW 2019, 'Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series', Physical Review A, vol. 99, no. 4.
View/Download from: Publisher's site
Ko, L-W, Lin, C-T, Lu, Y-C, Bustince, H, Chang, Y-C, Chang, Y, Ferandez, J, Wang, Y-K, Sanz, JA & Pereira Dimuro, G 2019, 'Multimodal Fuzzy Fusion for Enhancing the Motor-Imagery-Based Brain Computer Interface', IEEE Computational Intelligence Magazine, vol. 14, no. 1, pp. 96-106.
View/Download from: Publisher's site
View description>>
© 2005-2012 IEEE. Brain-computer interface technologies, such as steady-state visually evoked potential, P300, and motor imagery are methods of communication between the human brain and the external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been illustrated in several machine intelligent systems in motor imagery-based brain-computer interface studies, the performance remains unsatisfactory. There is increasing interest in the use of the fuzzy integrals, the Choquet and Sugeno integrals, that are appropriate for use in applications in which fusion of data must consider possible data interactions. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, after employing the classification method of traditional brain-computer interfaces, to consider possible links between the data. Subsequently, we proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment, and we acquired electroencephalography signals simultaneously. The multimodal fuzzy fusion-based brain-computer interface system enhanced performance compared with traditional brain-computer interface systems. Furthermore, when using the motor imagery-relevant electroencephalography frequency alpha and beta bands for the input features, the system achieved the highest accuracy, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. Herein, we present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion for classifying brain-computer interface commands.
Kocaballi, AB, Berkovsky, S, Quiroz, JC, Laranjo, L, Tong, HL, Rezazadegan, D, Briatore, A & Coiera, E 2019, 'The Personalization of Conversational Agents in Health Care: Systematic Review', Journal of Medical Internet Research, vol. 21, no. 11, pp. e15360-e15360.
View/Download from: Publisher's site
View description>>
Background The personalization of conversational agents with natural language user interfaces is seeing increasing use in health care applications, shaping the content, structure, or purpose of the dialogue between humans and conversational agents. Objective The goal of this systematic review was to understand the ways in which personalization has been used with conversational agents in health care and characterize the methods of its implementation. Methods We searched on PubMed, Embase, CINAHL, PsycInfo, and ACM Digital Library using a predefined search strategy. The studies were included if they: (1) were primary research studies that focused on consumers, caregivers, or health care professionals; (2) involved a conversational agent with an unconstrained natural language interface; (3) tested the system with human subjects; and (4) implemented personalization features. Results The search found 1958 publications. After abstract and full-text screening, 13 studies were included in the review. Common examples of personalized content included feedback, daily health reports, alerts, warnings, and recommendations. The personalization features were implemented without a theoretical framework of customization and with limited evaluation of its impact. While conversational agents with personalization features were reported to improve user satisfaction, user engagement and dialogue quality, the role of personalization in improving health outcomes was not assessed directly. Conclusions ...
Kocaballi, AB, Coiera, E, Tong, HL, White, SJ, Quiroz, JC, Rezazadegan, F, Willcock, S & Laranjo, L 2019, 'A network model of activities in primary care consultations', Journal of the American Medical Informatics Association, vol. 26, no. 10, pp. 1074-1082.
View/Download from: Publisher's site
View description>>
Objective: The objective of this study is to characterize the dynamic structure of primary care consultations by identifying typical activities and their inter-relationships to inform the design of automated approaches to clinical documentation using natural language processing and summarization methods. Materials and Methods: This is an observational study in Australian general practice involving 31 consultations with 4 primary care physicians. Consultations were audio-recorded, and computer interactions were recorded using screen capture. Physical interactions in consultation rooms were noted by observers. Brief interviews were conducted after consultations. Conversational transcripts were analyzed to identify different activities and their speech content as well as verbal cues signaling activity transitions. An activity transition analysis was then undertaken to generate a network of activities and transitions. Results: Observed activity classes followed those described in well-known primary care consultation models. Activities were often fragmented across consultations, did not necessarily flow in a defined order, and the flow between activities was nonlinear. Modeling activities as a network revealed that discussing a patient's present complaint was the most central activity and was highly connected to medical history taking, physical examination, and assessment, forming a highly interrelated bundle. Family history, allergy, and investigation discussions were less connected, suggesting less dependency on other activities. Clear verbal signs were often identifiable at transitions between activities. Discussion: Primary care consultations do not appear to follow a classic linear model of defined inform...
Kocaballi, AB, Laranjo, L & Coiera, E 2019, 'Understanding and Measuring User Experience in Conversational Interfaces', Interacting with Computers, vol. 31, no. 2, pp. 192-207.
View/Download from: Publisher's site
View description>>
Although various methods have been developed to evaluate conversational interfaces, there has been a lack of methods specifically focusing on evaluating user experience. This paper reviews the understandings of user experience (UX) in the conversational interfaces literature and examines the six questionnaires commonly used for evaluating conversational systems in order to assess the potential suitability of these questionnaires to measure different UX dimensions in that context. The method to examine the questionnaires involved developing an assessment framework for main UX dimensions with relevant attributes and coding the items in the questionnaires according to the framework. The results show that (i) the understandings of UX notably differed in the literature; (ii) four questionnaires included assessment items, to varying extents, to measure hedonic, aesthetic and pragmatic dimensions of UX; (iii) while the dimension of affect was covered by two questionnaires, the playfulness, motivation, and frustration dimensions were covered by one questionnaire only. The largest coverage of UX dimensions was provided by the Subjective Assessment of Speech System Interfaces (SASSI). We recommend using multiple questionnaires to obtain a more complete measurement of user experience or to improve the assessment of a particular UX dimension. Research Highlights: Varying understandings of UX in the conversational interfaces literature. A UX assessment framework with UX dimensions and their relevant attributes. Descriptions of the six main questionnaires for evaluating conversational interfaces. A comparison of the six questionnaires based on their coverage of UX dimensions.
Krivtsov, AV, Evans, K, Gadrey, JY, Eschle, BK, Hatton, C, Uckelmann, HJ, Ross, KN, Perner, F, Olsen, SN, Pritchard, T, McDermott, L, Jones, CD, Jing, D, Braytee, A, Chacon, D, Earley, E, McKeever, BM, Claremon, D, Gifford, AJ, Lee, HJ, Teicher, BA, Pimanda, JE, Beck, D, Perry, JA, Smith, MA, McGeehan, GM, Lock, RB & Armstrong, SA 2019, 'A Menin-MLL Inhibitor Induces Specific Chromatin Changes and Eradicates Disease in Models of MLL-Rearranged Leukemia', Cancer Cell, vol. 36, no. 6, pp. 660-673.e11.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Inc. Inhibition of the Menin (MEN1) and MLL (MLL1, KMT2A) interaction is a potential therapeutic strategy for MLL-rearranged (MLL-r) leukemia. Structure-based design yielded the potent, highly selective, and orally bioavailable small-molecule inhibitor VTP50469. Cell lines carrying MLL rearrangements were selectively responsive to VTP50469. VTP50469 displaced Menin from protein complexes and inhibited chromatin occupancy of MLL at select genes. Loss of MLL binding led to changes in gene expression, differentiation, and apoptosis. Patient-derived xenograft (PDX) models derived from patients with either MLL-r acute myeloid leukemia or MLL-r acute lymphoblastic leukemia (ALL) showed dramatic reductions of leukemia burden when treated with VTP50469. Multiple mice engrafted with MLL-r ALL remained disease free for more than 1 year after treatment. These data support rapid translation of this approach to clinical trials.
Kuang, B, Fu, A, Yu, S, Yang, G, Su, M & Zhang, Y 2019, 'ESDRA: An Efficient and Secure Distributed Remote Attestation Scheme for IoT Swarms', IEEE Internet of Things Journal, vol. 6, no. 5, pp. 8372-8383.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. An Internet of Things (IoT) system generally contains thousands of heterogeneous devices which often operate in swarms - large, dynamic, and self-organizing networks. Remote attestation is an important cornerstone for the security of these IoT swarms, as it ensures the software integrity of swarm devices and protects them from attacks. However, current attestation schemes suffer from a single point of failure at the verifier. In this paper, we propose an Efficient and Secure Distributed Remote Attestation (ESDRA) scheme for IoT swarms. We present the first many-to-one attestation scheme for device swarms, which reduces the possibility of a single point of failure at the verifier. Moreover, we utilize distributed attestation to verify the integrity of each node and apply an accusation mechanism to report invaded nodes, which makes it much easier for ESDRA to report particular compromised nodes and reduces the run-time of attestation. We analyze the security of ESDRA and conduct simulation experiments to show its practicality and efficiency. In particular, ESDRA can significantly reduce the attestation time and performs better in energy consumption compared with list-based attestation schemes.
León-Castro, E, Espinoza-Audelo, LF, Aviles-Ochoa, E, Merigó, JM & Kacprzyk, J 2019, 'A NEW MEASURE OF VOLATILITY USING INDUCED HEAVY MOVING AVERAGES', Technological and Economic Development of Economy, vol. 25, no. 4, pp. 576-599.
View/Download from: Publisher's site
View description>>
Volatility is a dispersion measure widely used in statistics and economics. This paper presents a new way to calculate volatility by using different extensions of the ordered weighted average (OWA) operator. This approach is called the induced heavy ordered weighted moving average (IHOWMA) volatility. The main advantage of this operator is that, while the classical volatility formula only takes into account the standard deviation and the average, with this formulation it is possible to aggregate information according to the decision maker's knowledge, expectations and attitude about the future. Some particular cases are also presented in which the aggregation process is applied only to the standard deviation or only to the average. An example with three different exchange rates for 2016 is presented: USD/MXN, EUR/MXN and EUR/USD.
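For orientation, here is a minimal sketch of the basic OWA operator that the IHOWMA volatility extends; the induced ordering and heavy weighting of the paper are omitted, and the data and weights are hypothetical.

```python
def owa(values, weights):
    """Ordered weighted average: sort arguments in descending order, then weight them."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Special cases: weights [1, 0, ..., 0] recover the maximum,
# uniform weights [1/n, ..., 1/n] recover the arithmetic mean.
```

The weight vector is where a decision maker's attitude enters: shifting weight toward the top-ranked values makes the aggregation more optimistic, toward the bottom more pessimistic.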
León-Castro, E, Merigó, JM, Avilés-Ochoa, E, Gil-Lafuent, AM & Herrera-Viedma, E 2019, 'MODELLING AND SIMULATION IN BUSINESS, ECONOMICS AND MANAGEMENT', Technological and Economic Development of Economy, vol. 25, no. 4, pp. 571-575.
View/Download from: Publisher's site
View description>>
Modelling and Simulation in Business, Economics and Management. Technological and Economic Development of Economy, 25(4), pp. 571-575.
Li, B, Xiong, J, Liu, B, Gui, L, Qiu, M & Shi, Z 2019, 'Cache-Based Popular Services Pushing on High-Speed Train by Using Converged Broadcasting and Cellular Networks', IEEE Transactions on Broadcasting, vol. 65, no. 3, pp. 577-588.
View/Download from: Publisher's site
View description>>
© 1963-2012 IEEE. This paper presents a cache-based popular services pushing solution for high-speed trains (HST) using converged wireless broadcasting and cellular networks. Pushing and caching popular services on the HST is a very efficient way to improve the capacity of the network, and it can also bring a better user experience. In the proposed model, the most popular services are transmitted and cached on the vehicle relay station of the train ahead of the departure time. Then, the most popular services are broadcasted and cached on the user equipment after all the passengers are on the train; the less popular services are delivered to the passengers in P2P mode through the relayed cellular network on the train. Specifically, we first use a dynamic programming algorithm to maximize the network capacity in limited pushing time, which can be converted to the 0-1 knapsack problem. Furthermore, we propose three greedy algorithms to approximate the optimal solution, given the high time complexity of dynamic programming as the input scale grows. Simulation results show that the proposed popularity-based greedy algorithm performs well. Moreover, as passengers may get on and off the HST at intermediate stations, a services rebroadcast algorithm is employed when more intermediate stations are considered. A U-shaped distribution is adopted to model the number of passengers getting on and off the train. Simulations also show that the proposed rebroadcast algorithm can efficiently improve the capacity of the converged networks.
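As a self-contained illustration of the 0-1 knapsack formulation the paper reduces its pushing problem to (services play the role of items, pushing time the role of capacity; the data here are hypothetical, not from the paper), the standard dynamic program is:

```python
def knapsack(values, weights, capacity):
    """Maximum total value achievable within the weight budget (0-1 knapsack)."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downward so each item is taken at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

The O(n × capacity) cost of this exact solution is why the paper falls back on greedy approximations as the input scale grows.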
Li, D, Ye, D, Gao, N & Wang, S 2019, 'Service Selection With QoS Correlations in Distributed Service-Based Systems', IEEE Access, vol. 7, pp. 88718-88732.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Service selection is an important research problem in distributed service-based systems, which aims to select proper services to meet user requirements. A number of service selection approaches have been proposed in recent years. Most of them, however, overlook quality-of-service (QoS) correlations, which broadly exist in distributed service-based systems. The concept of QoS correlations involves two aspects: 1) QoS correlations among services and 2) QoS correlations of user requirements. The first aspect means that some QoS attributes of a service not only depend on the service itself but also have correlations with other services, e.g., buying service 1 and then getting service 2 at half price. The second aspect means the relationships among QoS attributes of user requirements, e.g., a user can accept a service with fast response time and high service cost, or a service with slow response time and low service cost (Fig. 1). These correlations significantly affect user selection of services. Currently, only a few existing approaches have considered QoS correlations among services, i.e., the first aspect, but they still overlook QoS correlations of user requirements, i.e., the second aspect, which are also very important in distributed service-based systems. In this paper, a novel service selection approach is proposed, which not only considers QoS correlations of services but also accounts for QoS correlations of user requirements. This approach, to the best of our knowledge, is the first to consider QoS correlations of user requirements. The approach is also decentralized, which avoids a single point of failure. The experimental results demonstrate the effectiveness of the proposed approach.
Li, G, He, J, Peng, S, Jia, W, Wang, C, Niu, J & Yu, S 2019, 'Energy Efficient Data Collection in Large-Scale Internet of Things via Computation Offloading', IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4176-4187.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Internet of Things (IoT) can be used to promote many advanced applications by utilizing the sensed data collected from various settings. To reduce the energy consumption of IoT devices and to extend the lifetime of the network, the sensed data are usually compressed before transmission using compressed sensing theory. By reconstructing the sensed data at the edge of the network with more resourceful devices, such as laptops and servers, the intensive computation and energy consumption of the IoT nodes can be effectively offloaded. However, most existing data collection schemes are limited in their scalability, because their unified data reconstruction models are not suitable for large-scale surveillance scenarios. In our proposed scheme, the whole network is first partitioned into a number of data-correlated clusters based on spatial correlation. Then, a data collection tree is built to collect the compressed data in a hybrid mode. Finally, the data reconstruction problem is modelled as a group sparse problem and solved using an alternating direction method of multipliers (ADMM)-based algorithm. The data communication and reconstruction performance of the proposed scheme is evaluated through experiments with a real data set. The experimental results show that the proposed scheme can indeed lower the amount of data transmission, prolong the network life, and achieve a higher level of accuracy in data collection compared to existing data collection schemes.
Li, H, Wang, J, Li, R & Lu, H 2019, 'Novel analysis–forecast system based on multi-objective optimization for air quality index', Journal of Cleaner Production, vol. 208, pp. 1365-1383.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd The air quality index (AQI) is an important indicator of air quality. Owing to the randomness and non-stationarity inherent in AQI, it is still a challenging task to establish a reasonable analysis–forecast system for AQI. Previous studies primarily focused on enhancing either forecasting accuracy or stability and failed to improve both aspects simultaneously, leading to unsatisfactory results. In this study, a novel analysis–forecast system is proposed that consists of complexity analysis, data preprocessing, and optimize–forecast modules and addresses the problems of air quality monitoring. The proposed system performs a complexity analysis of the original series based on sample entropy and data preprocessing using a novel feature selection model that integrates a decomposition technique and an optimization algorithm for removing noise and selecting the optimal input structure, and then forecasts hourly AQI series by utilizing a modified least squares support vector machine optimized by a multi-objective multi-verse optimization algorithm. Experiments based on datasets from eight major cities in China demonstrated that the proposed system can simultaneously obtain high accuracy and strong stability and is thus efficient and reliable for air quality monitoring.
Li, M, Sun, Y, Su, S, Tian, Z, Wang, Y & Wang, X 2019, 'DPIF: A Framework for Distinguishing Unintentional Quality Problems From Potential Shilling Attacks', Computers, Materials & Continua, vol. 59, no. 1, pp. 331-344.
View/Download from: Publisher's site
View description>>
Copyright © 2019 Tech Science Press. Maliciously manufactured user profiles are often generated in batches for shilling attacks. These profiles may introduce many quality problems but are not worth repairing. Since repairing data is always expensive, we need to scrutinize the data and pick out the data that really deserve to be repaired. In this paper, we focus on how to distinguish unintentional data quality problems from the batch-generated fake users of shilling attacks. A two-step framework named DPIF is proposed to make this distinction. Based on the framework, the metrics of homology and suspicious degree are proposed. Homology can be used to represent both the similarities of text and the data quality problems contained in different profiles. The suspicious degree can be used to identify potential attacks. Experiments on real-life data verified that the proposed framework and the corresponding metrics are effective.
Li, Q, Cao, Z, Zhong, J & Li, Q 2019, 'Graph representation learning with encoding edges', Neurocomputing, vol. 361, pp. 29-39.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Network embedding aims at learning low-dimensional representations of nodes. These representations can be widely used for network mining tasks, such as link prediction, anomaly detection, and classification. Recently, a great deal of meaningful research has been carried out on this emerging network analysis paradigm. Real-world networks contain clusters of different sizes because their edges carry different relationship types. These clusters also reflect features of the nodes, which can contribute to optimizing the feature representation of nodes. However, existing network embedding methods do not distinguish these relationship types. In this paper, we propose an unsupervised network representation learning model that can encode edge relationship information. First, an objective function is defined, which learns the edge vectors by implicit clustering. Then, a biased random walk is designed to generate a series of node sequences, which are fed into Skip-Gram to learn the low-dimensional node representations. Extensive experiments are conducted on several network datasets. Compared with state-of-the-art baselines, the proposed method achieves favorable and stable results in multi-label classification and link prediction tasks.
Li, Q, Zhong, J, Li, Q, Wang, C & Cao, Z 2019, 'A Community Merger of Optimization Algorithm to Extract Overlapping Communities in Networks', IEEE Access, vol. 7, pp. 3994-4005.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. A community in a network is a subset of vertices that are densely connected internally yet sparsely connected to external vertices. Existing algorithms aim to extract communities from the topological features of networks. However, the edges of practical complex networks carry weights that represent the tightness and robustness of connections, which significantly influences the accuracy of community detection. In our study, we propose an overlapping community detection method based on a seed expansion strategy, called OCSE, that applies to both unweighted and weighted networks. First, it redefines the edge weight and the vertex weight based on the influence of the network topology and the original edge weight, and then it selects the seed vertices and updates the edge weights. Comparing the OCSE approach with existing community detection methods on synthetic and real-world networks, the experimental results show that our proposed approach performs significantly better in terms of accuracy.
Liang, H, Wang, H, Li, Q, Wang, J, Xu, G, Chen, J, Wei, J-M & Yang, Z 2019, 'A general framework for learning prosodic-enhanced representation of rap lyrics', World Wide Web, vol. 22, no. 6, pp. 2267-2289.
View/Download from: Publisher's site
Liang, T, Chen, L, Wu, J, Xu, G & Wu, Z 2019, 'SMS: A Framework for Service Discovery by Incorporating Social Media Information', IEEE Transactions on Services Computing, vol. 12, no. 3, pp. 384-397.
View/Download from: Publisher's site
View description>>
With the explosive growth of services, including Web services, cloud services, APIs and mashups, discovering the appropriate services for consumers is becoming an imperative issue. Traditional service discovery approaches mainly face two challenges: 1) the single source of description documents limits the effectiveness of discovery due to insufficient semantic information; 2) more factors should be considered given the generally increasing functional and non-functional requirements of consumers. In this paper, we propose a novel framework, called SMS, for effectively discovering appropriate services by incorporating social media information. Specifically, we present different methods to measure four social factors (semantic similarity, popularity, activity, decay factor) collected from Twitter. A Latent Semantic Indexing (LSI) model is applied to mine semantic information of services from the meta-data of the Twitter Lists that contain them. In addition, we model the target query-service matching function as a linear combination of multiple social factors and design a weight learning algorithm to learn an optimal combination of the measured social factors. Comprehensive experiments based on a real-world dataset crawled from Twitter demonstrate the effectiveness of the proposed SMS framework in comparison with several baseline approaches.
Lin, A, Li, J & Ma, Z 2019, 'On Learning and Learned Data Representation by Capsule Networks', IEEE Access, vol. 7, pp. 50808-50822.
View/Download from: Publisher's site
View description>>
Capsule networks (CapsNet) are recently proposed neural network models containing a newly introduced processing layer that is specialized in entity representation and discovery in images. CapsNet is motivated by a parse tree-like view of information processing and employs an iterative routing operation that dynamically determines connections between layers composed of capsule units, in which information ascends through different levels of interpretation, from raw sensory observation to semantically meaningful entities represented by active capsules. The CapsNet architecture is plausible and has been proven effective in some image data processing tasks; the newly introduced routing operation is mainly required for determining the capsules' activation status during the forward pass. However, its influence on model fitting and the resulting representation is barely understood. In this work, we investigate the following: 1) how the routing affects CapsNet model fitting; 2) how representation using capsules helps discover global structures in the data distribution; and 3) how the learned data representation adapts and generalizes to new tasks. Our investigation yielded results, some of which were mentioned in the original CapsNet paper: 1) the routing operation determines the certainty with which a layer of capsules passes information to the layer above, and the appropriate level of certainty is related to the model fitness; 2) in a designed experiment using data with a known 2D structure, capsule representations enable a more meaningful 2D manifold embedding than neurons do in a standard convolutional neural network (CNN); and 3) compared with neurons of the standard CNN, capsules of successive layers are less coupled and more adaptive to new data distributions.
Lin, C-T, Chiu, C-Y, Singh, AK, King, J-T, Ko, L-W, Lu, Y-C & Wang, Y-K 2019, 'A Wireless Multifunctional SSVEP-Based Brain-Computer Interface Assistive System.', IEEE Trans. Cogn. Dev. Syst., vol. 11, no. 3, pp. 375-383.
View/Download from: Publisher's site
View description>>
Several kinds of brain-computer interface (BCI) systems have been proposed to compensate for the lack of medical technology for assisting patients who have lost the ability to use motor functions to communicate with the outside world. However, most of the proposed systems are limited by their non-portability, impracticality and inconvenience because of the adoption of wired or invasive electroencephalography (EEG) acquisition devices. Another common limitation is the shortage of functions provided, owing to the difficulty of integrating multiple functions into one BCI system. In this study, we propose a wireless, non-invasive and multifunctional assistive system which integrates a steady-state visually evoked potential (SSVEP)-based BCI and a robotic arm to assist patients in feeding themselves. Patients are able to control the robotic arm via the BCI to serve themselves food. Three other functions, video entertainment, video calling, and active interaction, are also integrated. This is achieved by designing a functional menu and integrating multiple subsystems. A refinement decision-making mechanism is incorporated to ensure the accuracy and applicability of the system. Fifteen participants were recruited to validate the usability and performance of the system. The average accuracy and information transfer rate (ITR) achieved are 90.91% and 24.94 bits per minute, respectively. The feedback from the participants demonstrates that this assistive system is able to significantly improve the quality of daily life.
Lin, C-T, King, J-T, Bharadwaj, P, Chen, C-H, Gupta, A, Ding, W & Prasad, M 2019, 'EOG-Based Eye Movement Classification and Application on HCI Baseball Game', IEEE Access, vol. 7, pp. 96166-96176.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Electrooculography (EOG) is considered the most stable physiological signal for developing human-computer interfaces (HCI) that detect eye-movement variations. EOG signal classification has gained more traction in recent years as a way to overcome physical inconvenience in paralyzed patients. In this paper, a robust technique for classifying eight directional eye movements is investigated by introducing the concept of a buffer, along with slope variation, to avoid misclassification effects in EOG signals. Blink detection becomes complicated when the magnitude of the signals is considered. Hence, a correction technique is introduced to avoid misclassification of oblique eye movements. As a case study, these correction techniques are applied to an HCI baseball game to learn eye movements.
Lin, C-T, Liu, C-H, Wang, P-S, King, J-T & Liao, L-D 2019, 'Design and Verification of a Dry Sensor-Based Multi-Channel Digital Active Circuit for Human Brain Electroencephalography Signal Acquisition Systems', Micromachines, vol. 10, no. 11, pp. 720-720.
View/Download from: Publisher's site
View description>>
A brain–computer interface (BCI) is a type of interface/communication system that can help users interact with their environments. Electroencephalography (EEG) has become the most common application of BCIs and provides a way for disabled individuals to communicate. While wet sensors are the most commonly used sensors for traditional EEG measurements, they require considerable preparation time, including the time needed to prepare the skin and to use the conductive gel. Additionally, the conductive gel dries over time, leading to degraded performance. Furthermore, requiring patients to wear wet sensors to record EEG signals is considered highly inconvenient. Here, we report a wireless 8-channel digital active-circuit EEG signal acquisition system that uses dry sensors. Active-circuit systems for EEG measurement allow people to engage in daily life while using these systems, and the advantages of these systems can be further improved by utilizing dry sensors. Moreover, the use of dry sensors can help both disabled and healthy people enjoy the convenience of BCIs in daily life. To verify the reliability of the proposed system, we designed three experiments in which we evaluated eye blinking and teeth gritting, measured alpha waves, and recorded event-related potentials (ERPs) to compare our developed system with a standard Neuroscan EEG system.
Linares-Mustarós, S, Ferrer-Comalat, JC, Corominas-Coll, D & Merigó, JM 2019, 'The ordered weighted average in the theory of expertons', International Journal of Intelligent Systems, vol. 34, no. 3, pp. 345-365.
View/Download from: Publisher's site
View description>>
© 2018 Wiley Periodicals, Inc. This work presents a data-fusion mathematical object that incorporates the optimism level of a decision-making agent. The new fusion object is constructed by extending the ordered weighted averaging (OWA) operator in the process of creating an experton. The main advantage of this approach is that it can represent the attitudinal character of the decision maker in the construction of the experton. Therefore, this approach represents a new method for addressing multiperson problems by using optimistic and pessimistic perspectives. The work presents different practical examples based on the absolute hierarchical relationships of the “minimum of the bottom end of the intervals,” “minimum of the top end of the intervals,” and “minimum size of the interval.” The work also considers a wide range of particular cases of the OWA-experton, including the minimum experton, the maximum experton, the average experton, and the olympic experton. In addition, the study presents software for the calculation of OWA-expertons. Finally, the paper ends with an application in business decision-making regarding the calculation of expected benefits.
Liu, B, Chen, L, Zhu, X & Qiu, W 2019, 'Encrypted data indexing for the secure outsourcing of spectral clustering', Knowledge and Information Systems, vol. 60, no. 3, pp. 1307-1328.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag London Ltd., part of Springer Nature. Spectral clustering is one of the most popular clustering methods and is particularly useful for pattern recognition and image analysis. When using spectral clustering for analysis, users are either required to implement their own platforms, which requires strong data analytics and machine learning skills, or allow a third party to access and analyze their data, which may compromise their data privacy or security. Traditionally, this problem is solved by privacy-preserving data mining using randomization perturbation or secure multi-party computation. However, the existing methods suffer from the problems of inaccurate results or high computational requirements on the data owner’s side. To address these problems, in this paper, we propose a new secure outsourcing data mining (SODM) paradigm, which allows data owners to encrypt their data to ensure maximum data security. After the encryption, data owners can outsource their encrypted data to data analytics service providers (i.e., data analytics agent) for knowledge discovery, with a guarantee that neither the data analytics agent nor the other parties can compromise data privacy. To allow data mining to be efficiently carried out on encrypted data, we design a secure KD-tree to index all the encrypted data. Based on the SODM framework, a secure spectral clustering algorithm is proposed. The experiments on real-world datasets demonstrate the effectiveness and the efficiency of the system for the secure outsourcing of data mining.
Liu, B, Ding, M, Zhu, T, Xiang, Y & Zhou, W 2019, 'Adversaries or allies? Privacy and deep learning in big data era', Concurrency and Computation: Practice and Experience, vol. 31, no. 19.
View/Download from: Publisher's site
View description>>
Deep learning methods have become the basis of new AI-based services on the Internet in the big data era because of their unprecedented accuracy. Meanwhile, this raises obvious privacy issues. Deep learning-assisted privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools, along with two new metrics that measure image privacy. Moreover, we propose two different image privacy protection schemes based on the two metrics, utilizing the adversarial example idea. The performance of our solution is validated by simulations on two different datasets. Our research shows that we can protect image privacy by adding a small amount of noise that has a humanly imperceptible impact on image quality, especially for images with complex structures and textures.
Liu, G, Quan, W, Cheng, N, Zhang, H & Yu, S 2019, 'Efficient DDoS attacks mitigation for stateful forwarding in Internet of Things', Journal of Network and Computer Applications, vol. 130, pp. 1-13.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd The stateful forwarding plane is widely regarded as a novel forwarding paradigm, proven to be beneficial to delivery efficiency and resilient to certain types of attacks. However, this fresh attempt also introduces a "varietal" Denial-of-Service attack due to complicated forwarding state operations, which may cause long-term memory exhaustion of forwarding nodes, especially resource-limited IoT nodes. This new distributed exhaustion attack is extremely well hidden, and there is currently no effective defense against it. In this paper, we first establish a game model to analyze the attack benefit between attacker and defender. To enable the defender to obtain more utility, it is worthwhile for the defender to manage expired state entries during stateful forwarding. To this end, we propose an enhanced distributed low-rate attack mitigating (eDLAM) mechanism. In particular, eDLAM maintains a lightweight malicious request table (MRT), which is very small, to offload the burden of the practical forwarding state table. When a packet request is matched in the MRT, it is marked and dropped directly without any impact on the forwarding state table. Based on this, eDLAM adopts an optimal threshold update method for the MRT to achieve maximum defender utility. We evaluate eDLAM's performance in terms of false negative rate (FNR) and false positive rate (FPR). Extensive experimental results show that eDLAM reduces the FNR by 10.5% and the FPR by 44% on average compared with state-of-the-art mechanisms.
Liu, M, Luo, Y, Nanda, P, Yu, S & Zhang, J 2019, 'Efficient solution to the millionaires' problem based on asymmetric commutative encryption scheme', Computational Intelligence, vol. 35, no. 3, pp. 555-576.
View/Download from: Publisher's site
View description>>
Secure multiparty computation is an important scheme in cryptography and can be applied to various real-life problems. The first secure multiparty computation problem is the millionaires' problem, and its protocol is an important building block. Because public key encryption schemes are relatively inefficient, most existing solutions to this problem based on public key cryptography are inefficient. Thus, a solution based on a symmetric encryption scheme has been proposed. In this paper, we formally analyse the vulnerability of this solution and propose a new scheme based on the decisional Diffie-Hellman assumption. Our solution also uses the 0-encoding and 1-encoding generated by our modified encoding method to reduce the computation cost. We implement both the solution based on the symmetric encryption scheme and our protocol. Extensive experiments are conducted to evaluate the efficiency of our solution, and the experimental results show that our solution can be much more efficient, approximately 8000 times faster than the solution based on the symmetric encryption scheme for a 32-bit input and short-term security. Moreover, our solution is also more efficient than the state-of-the-art solution without precomputation and compares well with the state-of-the-art protocol when the bit length of the private inputs is large enough.
Liu, X, Iftikhar, N, Huo, H, Li, R & Nielsen, PS 2019, 'Two approaches for synthesizing scalable residential energy consumption data', Future Generation Computer Systems, vol. 95, pp. 586-600.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Many fields require scalable and detailed energy consumption data for different study purposes. However, due to privacy issues, it is often difficult to obtain sufficiently large datasets. This paper proposes two different methods for synthesizing fine-grained energy consumption data for residential households, namely a regression-based method and a probability-based method. Each uses a supervised machine learning method, which trains models on a relatively small real-world dataset and then generates large-scale time series based on the models. This paper describes the two methods in detail, including the data generation process, optimization techniques, and parallel data generation. It evaluates the performance of the two methods by comparing the resulting consumption profiles with real-world data, including patterns, statistics, and parallel data generation in a cluster. The results demonstrate the effectiveness of the proposed methods and their efficiency in generating large-scale datasets.
Liu, X-P, Zhang, G-Q, Lu, J & Zhang, J-Q 2019, 'Risk assessment using transfer learning for grassland fires', Agricultural and Forest Meteorology, vol. 269-270, pp. 102-111.
View/Download from: Publisher's site
View description>>
© 2019 A new direction of risk assessment research in grassland fire management is data-driven prediction, in which data are collected from particular regions. Since some regions have rich datasets that can easily generate knowledge for risk prediction, and some have no data available, this study addresses how we can leverage the knowledge learned from one grassland risk assessment to assist with a current assessment task. In this paper, we first introduce the transfer learning methodology to map and update risk maps in grassland fire management, and we propose a new grassland fire risk analysis method. In this study, two major grassland areas (Xilingol and Hulunbuir) in northern China are selected as the study areas, and five representative indicators (features) are extracted from grassland fuel, fire climate, accessibility, human and social economy. Taking Xilingol as the source domain (where sufficient labelled data are available) and Hulunbuir as the target domain (which contains insufficient data but requires risk assessment/prediction), we then establish the mapping relationship between grassland fire indicators and the degrees of grassland fire risk by using a transfer learning method. Finally, the fire risk in the Hulunbuir grassland is assessed using the transfer learning method. Experiments show that the prediction accuracy reached 87.5% by using the transfer learning method, representing a significant increase over existing methods.
Llanos-Herrera, GR & Merigo, JM 2019, 'Overview of brand personality research with bibliometric indicators', Kybernetes, vol. 48, no. 3, pp. 546-569.
View/Download from: Publisher's site
View description>>
PurposeThe purpose of this paper is to present a global view of the research that has been conducted regarding brand personality by using the Core Collection of the Web of Science (WoS) as a reference. The main bibliometric indicators considered are number of articles, number of citations, main authors, principal journals, institutions, countries and keywords.Design/methodology/approachThrough a bibliometric investigation, this paper performs an analysis of investigations of brand personality that have been conducted to date. In particular, the analysis focuses on the papers that have generated the greatest impact in the scientific community, the journals that have given the most attention to this concept and the authors who have most strongly influenced the academic world in this field. The analysis reveals a series of relationships between the bases of knowledge considered for different authors and journals and the structure of those relationships based on the keywords considered in each contribution.FindingsThis analysis allows to obtain a general and impartial view of brand personality research, and it reveals the most relevant contributions to the academic world in terms of authors, journals, institutions, countries and keywords. The analysis shows that the concept under study seems to still be in an early stage of development and there may well be an important amount of development ahead. Although there have been important contributions to this field, work is still required to consolidate this knowledge.Research limitations/implicationsThe information provided pertains to a re...
Lu, J, Yan, Z, Han, J & Zhang, G 2019, 'Data-Driven Decision-Making (D3M): Framework, Methodology, and Directions', IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 3, no. 4, pp. 286-296.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. A decision problem, according to traditional principles, is approached by finding an optimal solution to an analytical programming decision model, which is known as model-driven decision-making. The fidelity of the model determines the quality and reliability of the decision-making; however, the intrinsic complexity of many real-world decision problems leads to significant model mismatch or infeasibility in deriving a model using first principles. To overcome the challenges of the big data era, both researchers and practitioners emphasize the importance of making decisions that are backed up by data related to decision tasks, a process called data-driven decision-making (D3M). By building on data science, not only can decision models be predicted in the presence of uncertainty or unknown dynamics, but inherent rules or knowledge can also be extracted from data and directly utilized to generate decision solutions. This position paper systematically discusses the basic concepts and prevailing techniques in data-driven decision-making and clusters related developments in technique into two main categories: programmable data-driven decision-making (P-D3M) and nonprogrammable data-driven decision-making (NP-D3M). This paper establishes a D3M technical framework, main methodologies, and approaches for both categories of D3M, and identifies potential methods and procedures for using data to support decision-making. It also provides examples of how D3M is implemented in practice and identifies five further research directions in the D3M area. We believe that this paper will directly support researchers and professionals in their understanding of the fundamentals of D3M and of the developments in technical methods.
Lu, S, Oberst, S, Zhang, G & Luo, Z 2019, 'Bifurcation analysis of dynamic pricing processes with nonlinear external reference effects', Communications in Nonlinear Science and Numerical Simulation, vol. 79, pp. 104929-104929.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Dynamic pricing has been widely implemented to hedge against volatile demand. One challenging problem is the study of optimal price choices under the influence of this volatility. Stochastic demand is a prevalent assumption when modelling the effect of volatility on pricing decisions. However, the demand volatility might also be produced by deterministic chaos, which has rarely been studied in this field of research to date. We propose deterministic dynamic pricing processes that aim to maximise revenue and to mimic real pricing decisions. Our model includes nonlinear consumer expectations that capture the effects of external information on consumers, and discrete optimisations due to a non-smooth demand function that considers asymmetries in consumers' perceptions of gains or losses and the finite price choices of companies. Volatile markets can arise from non-periodic consumer expectations, period-adding bifurcations, codimension-2 points and coexisting solutions. The results highlight that optimal pricing strategies should agree with the dynamics of consumer expectations. Disregarding deterministic dynamics may not only cause revenue losses in practice but might also mislead regulators about the underlying mechanisms that consumers and companies respond to. We introduce for the first time an irregular pricing strategy: a company can make the first return iteration of each sales price non-periodic to follow non-periodic consumer expectations when it has finite price choices. These results may justify implementing irregular pricing strategies in practical pricing decisions. Here, the existence of coexisting solutions can assist in identifying potential market manipulations within a monopoly market. This not only contributes a fresh look at volatile markets but also emphasises the importance of initial conditions to pricing decisions and price regulations.
Luo, F, Jiang, C, Yu, S, Wang, J, Li, Y & Ren, Y 2019, 'Stability of Cloud-Based UAV Systems Supporting Big Data Acquisition and Processing', IEEE Transactions on Cloud Computing, vol. 7, no. 3, pp. 866-877.
View/Download from: Publisher's site
View description>>
Unmanned Aerial Vehicle (UAV) technology has been widely applied in both military and civilian applications. Recent research on UAV systems features a dramatic increase in the variety and number of equipped sensors, with the result that multiple UAVs cannot afford to handle the big data generated by a range of sensors in the air. Considering this practical problem, in this paper we propose a cloud-based UAV system which incorporates the computing capability of the terrestrial cloud into the UAV system. For the proposed cloud-based UAV system, one critical theoretical issue is how to acquire the big data generated by the sensors while guaranteeing a stable operating state of the system. First, we analyze the cloud-based system's on-demand service ability as well as its impact on the UAVs' control procedure. Second, the UAV cloud control system is modeled as a network control system. Moreover, the stability condition of the UAV cloud control system is derived, which reveals the relationship between the acquisition rate of sensor data and the stability of the cloud-based UAV system. Finally, simulations are conducted to verify the effectiveness of our theoretical analysis.
Luo, M, Yan, C, Zheng, Q, Chang, X, Chen, L & Nie, F 2019, 'Discrete Multi-Graph Clustering', IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4701-4712.
View/Download from: Publisher's site
View description>>
© 1992-2012 IEEE. Spectral clustering plays a significant role in applications that rely on multi-view data due to its well-defined mathematical framework and excellent performance on arbitrarily-shaped clusters. Unfortunately, directly optimizing the spectral clustering inevitably results in an NP-hard problem due to the discrete constraints on the clustering labels. Hence, conventional approaches intuitively include a relax-and-discretize strategy to approximate the original solution. However, there are no principles in this strategy that prevent the possibility of information loss between each stage of the process. This uncertainty is aggravated when a procedure of heterogeneous features fusion has to be included in multi-view spectral clustering. In this paper, we avoid an NP-hard optimization problem and develop a general framework for multi-view discrete graph clustering by directly learning a consensus partition across multiple views, instead of using the relax-and-discretize strategy. An effective re-weighting optimization algorithm is exploited to solve the proposed challenging problem. Further, we provide a theoretical analysis of the model's convergence properties and computational complexity for the proposed algorithm. Extensive experiments on several benchmark datasets verify the effectiveness and superiority of the proposed algorithm on clustering and image segmentation tasks.
Ma, L, Pei, Q, Xiang, Y, Yao, L & Yu, S 2019, 'A reliable reputation computation framework for online items in E-commerce', Journal of Network and Computer Applications, vol. 134, pp. 13-25.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Most online trading platforms allow consumers to give personal ratings to online items. By computing the weighted mean of the ratings, the reputation values of online items can be derived to assist consumers in making purchasing decisions. However, deriving a reliable reputation value for any given item is never a simple task, and existing works fail to achieve this. Thus, in this paper, we propose a reliable reputation computation framework for online items which can be adopted by online trading platforms or run by a third party to provide reputation computation as a service. First, a fine-grained two-phase detection method is proposed to detect malicious ratings. After filtering out the ratings detected as malicious, the weights of the remaining ratings are determined by computing the degrees to which the users giving these ratings are interested in the target item. Extensive experiments verify that the proposed reliable reputation computation framework is effective in detecting different kinds of malicious ratings and in determining the interest degrees of users.
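The core computation the abstract describes, a weighted mean over the ratings that survive malicious-rating detection, can be sketched as follows. The filtering predicate and the interest-degree weights below are hypothetical stand-ins for the paper's two-phase detection and interest-degree estimation, not its actual algorithms.

```python
def reputation(ratings, weights, is_malicious):
    """Weighted-mean reputation over the ratings judged benign.

    ratings      -- numeric ratings for one item
    weights      -- interest-degree weight per rating (same length)
    is_malicious -- predicate flagging a rating index as malicious
                    (placeholder for the paper's two-phase detection)
    """
    kept = [(r, w) for i, (r, w) in enumerate(zip(ratings, weights))
            if not is_malicious(i)]
    if not kept:
        return None  # no trustworthy ratings remain
    total_w = sum(w for _, w in kept)
    return sum(r * w for r, w in kept) / total_w

# toy usage: flag the third rating as malicious, weight the rest equally
print(reputation([5, 4, 1, 5], [1.0, 1.0, 1.0, 1.0], lambda i: i == 2))
```

With the suspicious 1-star rating filtered out, the reputation is the plain mean of the remaining ratings; unequal weights would tilt it toward the more interested users.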
Ma, W, Cai, L, He, T, Chen, L, Cao, Z & Li, R 2019, 'Local Expansion and Optimization for Higher-Order Graph Clustering', IEEE Internet of Things Journal, vol. 6, no. 5, pp. 8702-8713.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Graph clustering aims to identify clusters that feature tighter connections between internal nodes than external nodes. We noted that conventional clustering approaches based on a single vertex or edge cannot meet the requirements of clustering in a higher-order mixed structure formed by multiple nodes in a complex network. Given this limitation, we observe that a clustering coefficient can measure the degree to which nodes in a graph tend to cluster, even if only a small area of the graph is given. In this paper, we introduce a new cluster quality score, the local motif rate, which effectively reflects the density of clusters in a higher-order graph. We also propose a motif-based local expansion and optimization algorithm (MLEO) to improve local higher-order graph clustering. This algorithm is purely local and can be applied directly to higher-order graphs without conversion to a weighted graph, thus avoiding distortion from the transform. In addition, we propose a new seed-processing strategy for higher-order graphs. The experimental results show that our proposed strategy achieves better performance than existing approaches when using a quadrangle as the motif in the LFR network and the value of the mixing parameter μ exceeds 0.6.
Manzoor, M, Hussain, W, Sohaib, O, Hussain, FK & Alkhalaf, S 2019, 'Methodological investigation for enhancing the usability of university websites.', J. Ambient Intell. Humaniz. Comput., vol. 10, no. 2, pp. 531-549.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. For university websites to be successful and to increase the chance of converting a prospective student into a current student, it is necessary to increase the visibility and accessibility of all related content so that a student can achieve their desired task in the fastest possible time. The criteria for evaluating university websites are very vague and are usually unknown to most developers, which adversely impacts the user experience of the students visiting such websites. To solve this problem, we devised a usability metric and examined leading university websites to analyze whether these websites were able to meet the requirements of students. In this research, we applied qualitative and quantitative approaches, surveying 300 students and evaluating 86 university websites (26 from Canada, 30 from the United States, and 30 from Europe) based on a six-attribute metric comprising navigation, organization, ease of use (simplicity), design (layout), communication and content. From the evaluation results, we find that 88% of the students are satisfied with our proposed usability attributes, but that most universities fail to meet the basic standards of usability desired by the students. The findings also show that the usability evaluation score for each usability feature varies from country to country: for (1) multiple language support, 23% of the Canadian websites, 63% of the European websites and none of the USA websites have the feature; for (2) a Scholarships/Funding/Financial Aid link, 24% of the Canadian websites and 80% of the European and the USA websites have the feature; for (3) an admission link, 88% of the Canadian websites, 20% of the European websites and 90% of the USA websites have the feature. In addition, from the evaluative results we find that our proposed approach will not only increase the usability of academic websites but will also provide an easier way to ...
Mao, M, Lu, J, Han, J & Zhang, G 2019, 'Multiobjective e-commerce recommendations based on hypergraph ranking', Information Sciences, vol. 471, pp. 269-287.
View/Download from: Publisher's site
View description>>
© 2018 Recommender systems are emerging in e-commerce as important promotion tools to assist customers to discover potentially interesting items. Currently, most of these are single-objective and search for items that fit the overall preference of a particular user. In real applications, such as restaurant recommendations, however, users often have multiple objectives, such as group preferences and restaurant ambiance. This paper highlights the need for multi-objective recommendations and provides a solution using hypergraph ranking. A general User–Item–Attribute–Context data model is proposed to summarize different information resources and high-order relationships for the construction of a multipartite hypergraph. This study develops an improved balanced hypergraph ranking method to rank different types of objects in hypergraph data. An overall framework is then proposed as a guideline for the implementation of multi-objective recommender systems. Empirical experiments are conducted with a dataset from the review site Yelp.com, and the outcomes demonstrate that the proposed model performs very well for multi-objective recommendations. The experiments also demonstrate that this framework remains compatible with traditional single-objective recommendations and can improve accuracy significantly. In conclusion, the proposed multi-objective recommendation framework is able to handle complex and changing demands from e-commerce customers.
MARICRUZ, O-L, ERNESTO, L-C, LUIS FERNANDO, E-A, JOSE MARIA, M & ANNA MARÍA, GL 2019, 'Forgotten Effects and Heavy Moving Averages in Exchange Rate Forecasting', ECONOMIC COMPUTATION AND ECONOMIC CYBERNETICS STUDIES AND RESEARCH, vol. 53, no. 4/2019, pp. 79-96.
View/Download from: Publisher's site
Mashat, MEM, Lin, C-T & Zhang, D 2019, 'Effects of Task Complexity on Motor Imagery-Based Brain–Computer Interface', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 10, pp. 2178-2185.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. The performance of electroencephalogram (EEG)-based brain-computer interfaces (BCIs) still needs improvement for real-world applications. An improvement in BCIs could be achieved by enhancing brain signals at the source via subject intention-based modulation. In this work, we aim to investigate the effects of task complexity on the performance of motor imagery (MI)-based BCIs. Specifically, we studied the effects of motor imagery of a complex task versus a simple task on the discriminability of brain activation patterns using EEG. The results show an increase of up to 7.25% in BCI classification accuracy for motor imagery of the complex task in comparison to the simple task. Furthermore, spectral power analysis in the low-frequency alpha and beta bands shows a significant decrease in power for the complex task. However, analysis of the high-frequency gamma band reveals a significant increase for the complex task. These findings may lead to the design of better BCIs with high performance.
Mastio, E & Dovey, K 2019, 'Power dynamics in organizational change: an Australian case', International Journal of Sociology and Social Policy, vol. 39, no. 9/10, pp. 796-811.
View/Download from: Publisher's site
View description>>
Purpose: The purpose of this paper is to contribute to the understanding of the role of abstract forms of power in organizational change by exploring the role of such forms of power in the recent structural transformation of an iconic Australian Intellectual Property law firm. The research literature reflects relatively few studies on the increasing complexity of power dynamics in organizational and institutional arrangements. Design/methodology/approach: The complexity of the investigated phenomena led to the adoption of three qualitative methods in order to access the specific forms of data that were perceived to be relevant to answering the research question (“How did abstract power dynamics influence the nature and outcomes of the firm’s structural transformation?”). Ethnography was used in the attempt to discern, through participation and observation, the assumptions that manifested in action and/or inaction; phenomenology in the exploration through unstructured interviews with 41 staff members and 4 clients of the firm, of their interpretation and “sense-making” of their “lived experience” of “what was going on” in the firm; and narrative enquiry in establishing a narrative of critical events, and their impact on “what was going on” in the firm, including those that had occurred over the years prior to this research initiative. Findings: The research shows the effects of contradicting forms of abstract power (namely, hegemonic (ideological) power, dominant institutional logic and structural power) as the firm struggled to address challenges to its existence. The impact of these forms of power upon the partners’ apprehension and interpretation of the ...
Mastio, E, Chew, E & Dovey, KA 2019, 'The learning organization as a context for value co-creation', The Learning Organization, vol. 27, no. 4, pp. 291-303.
View/Download from: Publisher's site
View description>>
Purpose: This paper aims to explore the relationship between the concept of the learning organization and that of the co-creation of value. Design/methodology/approach: The paper is conceptual in nature and draws on data from a case study of a small highly innovative Australian company. Findings: The authors show that, from a value co-creation perspective, the learning organization can be viewed as an open, collaborative, social/economic actor engaged in social/economic activities with other interdependent actors (organizations or stakeholders) in a network or ecosystem of actors to serve its mission/purpose and the well-being of the ecosystem. Research limitations/implications: As a conceptual paper, the authors rely primarily on previous research as the basis for the argument. The implications of the findings are that, as value co-creation practices are founded upon the generation and leveraging of specific intangible capital resources, more research located in alternative research paradigms is required. Practical implications: There are important implications for organizational leadership in that the practices that underpin value co-creation require the leadership to be able to work constructively with multiple forms of systemic and agentic power. Social implications: In increasingly turbulent and hyper-competitive global operational contexts, sustainable value creation is becoming recogn...
Mas-Tur, A, Modak, NM, Merigó, JM, Roig-Tierno, N, Geraci, M & Capecchi, V 2019, 'Half a century of Quality & Quantity: a bibliometric review', Quality & Quantity, vol. 53, no. 2, pp. 981-1020.
View/Download from: Publisher's site
View description>>
© 2018, Springer Nature B.V. Quality & Quantity was established in 1967, and in 2017 it completed its half century. The journal is interdisciplinary in nature and mainly discusses the methodological application of mathematics and statistics in the social sciences, particularly sociology, economics, and social psychology. It was created with the idea of advancing the methodology of the various social studies. This study looks back on the journal's journey from 1967 to 2017 and aims to develop a bibliometric analysis of all of its publications. The Web of Science Core Collection database is used to collect the data. The present study discovered the significant contributions of the journal in terms of impact, topics, authors, universities and countries. Utrecht University in the Netherlands is the most productive university, and Asian universities have been emerging and growing quickly in recent years. Although the USA leads among countries, Europe leads among the six supranational regions. Finally, the visualization of similarities viewer software is used to present a network visualization of the bibliographic coupling, co-citation, citation, co-authorship and co-occurrence of keywords.
Melnikov, A, Chiang, YK, Quan, L, Oberst, S, Alù, A, Marburg, S & Powell, D 2019, 'Acoustic meta-atom with experimentally verified maximum Willis coupling', Nature Communications, vol. 10, no. 1, pp. 3148-3148.
View/Download from: Publisher's site
View description>>
Acoustic metamaterials are structures with exotic acoustic properties, with promising applications in acoustic beam steering, focusing, impedance matching, absorption and isolation. Recent work has shown that the efficiency of many acoustic metamaterials can be enhanced by controlling an additional parameter known as Willis coupling, which is analogous to bianisotropy in electromagnetic metamaterials. The magnitude of Willis coupling in a passive acoustic meta-atom has been shown theoretically to have an upper limit; however, the feasibility of reaching this limit has not been experimentally investigated. Here we introduce a meta-atom with Willis coupling which closely approaches this theoretical limit, and which is much simpler and less prone to thermo-viscous losses than previously reported structures. We perform two-dimensional experiments to measure the strong Willis coupling, supported by numerical calculations. Our meta-atom geometry is readily modeled analytically, enabling the strength of Willis coupling and its peak frequency to be easily controlled.
Merigó, JM & Yager, RR 2019, 'Aggregation operators with moving averages', Soft Computing, vol. 23, no. 21, pp. 10601-10615.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag GmbH Germany, part of Springer Nature. A moving average is an average that aggregates a subset of variables from the set and moves across the sample. It is widely used in time-series forecasting. This paper studies the use of moving averages in some representative aggregation operators. The ordered weighted averaging weighted moving averaging (OWAWMA) operator is introduced. It is a new approach based on the use of the moving average in a unified model between the weighted average and the ordered weighted average. Its main advantage is that it provides a parameterized family of moving aggregation operators between the moving minimum and the moving maximum. Moreover, it also includes the weighted moving average and the ordered weighted moving average as particular cases. This approach is further extended by using generalized aggregation operators, obtaining the generalized OWAWMA operator. The construction of interval and fuzzy numbers with these operators, yielding the concepts of the moving interval number and the moving fuzzy number, is also studied. The paper ends by analyzing the applicability of this new approach to some key statistical concepts, such as the variance and the covariance, and with a numerical example regarding sales forecasting.
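A minimal sketch of one special case mentioned above, the ordered weighted moving average, may make the idea concrete: slide a window across the series and, within each window, apply OWA weights to the sorted values. This is only one ingredient; the full OWAWMA unification of positional and ordered weights is richer than shown here. The weights are assumed to sum to one.

```python
def owma(series, owa_weights):
    """Ordered weighted moving average.

    Slide a window of len(owa_weights) across `series`; in each window,
    sort the values in descending order and take the dot product with
    the OWA weights (assumed to sum to 1).
    """
    n = len(owa_weights)
    out = []
    for start in range(len(series) - n + 1):
        window = sorted(series[start:start + n], reverse=True)
        out.append(sum(w * v for w, v in zip(owa_weights, window)))
    return out

# weight vector (1, 0, 0) recovers the moving maximum
print(owma([3, 1, 4, 1, 5], [1.0, 0.0, 0.0]))
```

Setting the weight vector to (1, 0, ..., 0) yields the moving maximum and (0, ..., 0, 1) the moving minimum, the two extremes of the parameterized family the abstract describes; uniform weights recover the ordinary moving average.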
Merigó, JM, Cobo, MJ, Laengle, S, Rivas, D & Herrera-Viedma, E 2019, 'Twenty years of Soft Computing: a bibliometric overview', Soft Computing, vol. 23, no. 5, pp. 1477-1497.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. The journal Soft Computing was launched in 1997 and is dedicated to promoting advances in soft computing theories, including fuzzy set theory, neural networks, evolutionary computation, probabilistic reasoning and hybrid theories. 2017 marked the 20th anniversary of the journal. Motivated by this anniversary, this study presents a bibliometric analysis of the journal's publications in order to identify the leading trends ruling the journal. The paper also develops a mapping analysis of the bibliographic material using the visualization of similarities viewer software. The results show that researchers from all over the world publish regularly in the journal, and that Soft Computing has grown significantly in recent years, becoming one of the leading journals in the field.
Merigó, JM, Etchebarne, MS & Cancino, CA 2019, 'Evolution of the business and management research in Chile', International Journal of Technology, Policy and Management, vol. 19, no. 2, pp. 108-108.
View/Download from: Publisher's site
Merigó, JM, Miranda, J, Modak, NM, Boustras, G & de la Sotta, C 2019, 'Forty years of Safety Science: A bibliometric overview', Safety Science, vol. 115, pp. 66-88.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Safety Science was established in 1976 as the Journal of Occupational Accidents, with the vision of promoting multidisciplinary research in the science and technology of human and industrial safety and of serving as a guide for the safety of people at work and in other spheres, such as transportation, energy or infrastructure, as well as in every other field of hazardous human activities. To celebrate 40 years of publishing outstanding research, this study develops a bibliometric analysis of the publications of the journal between 1976 and 2016. The purpose is to identify the leading trends of the journal in terms of impact, topics, authors, universities and countries. This study uses the most reliable database, the Web of Science Core Collection. Moreover, the work analyses the mapping of bibliographic couplings, co-citations, citations, co-authorships and co-occurrences of keywords.
Merigó, JM, Mulet-Forteza, C, Valencia, C & Lew, AA 2019, 'Twenty years of Tourism Geographies: a bibliometric overview', Tourism Geographies, vol. 21, no. 5, pp. 881-910.
View/Download from: Publisher's site
View description>>
© 2019, © 2019 Informa UK Limited, trading as Taylor & Francis Group. Tourism Geographies is a prominently ranked journal that emerged from activities of the Tourism Commission of the International Geographical Union. It is indexed in the ‘Tourism, Leisure and Hospitality Management’ and ‘Geography, Planning and Development’ fields in the Scopus database and published its 20th volume in 2018. A bibliometric assessment of the articles and authors who have contributed to Tourism Geographies over its first two decades highlights major trends and dominant issues covered by the journal’s content. Key indicators include the most published and most cited authors and articles, the institutions and countries that those authors are affiliated with, other academic journals that are closely linked to the journal through citations, and the most used keywords in the journal. The Scopus database provides access to these basic bibliometric data, while the VOSviewer software enables graphical analyses and displays of co-citations, co-occurrences of keywords, and bibliographic couplings (shared references) across papers and authors. Overall, Tourism Geographies is closely linked to other leading journals indexed by Scopus in the ‘Tourism’ and ‘Geography’ fields and publishes papers from around the world. Research topics that have been most prominent in the journal include tourism development, tourist destinations, tourist attractions, heritage tourism, tourism perceptions, sustainable tourism, and travel behavior. Among the most viewed individual papers have been those addressing issues related to sustainability, poverty issues (related to tourism in poor areas, volunteering, sustainable tourism, and the environment), and community planning (sustainable tourism planning, tourist routes and movement, and new locations for tourism development).
Merigó, JM, Muller, C, Modak, NM & Laengle, S 2019, 'Research in Production and Operations Management: A University-Based Bibliometric Analysis', Global Journal of Flexible Systems Management, vol. 20, no. 1, pp. 1-29.
View/Download from: Publisher's site
View description>>
© 2018, Global Institute of Flexible Systems Management. Universities across the world are contributing greatly to production and operations management (POM) research and playing significant roles in social and economic development. This article analyzes the performance of universities in POM research and development between 1990 and 2014. The Web of Science core collection database is used to collect all the necessary data. The results show a wide diversity among the countries of origin of the top universities, with some of them being in Asia, Europe, and North America. These results are quite different from many other management areas where English-speaking countries, especially the USA, tend to be dominant. Hong Kong Polytechnic University is the most productive university, while Michigan State University is the most influential one. Time-based evolution reveals that the USA previously had a more dominant position, while now there is more distribution of top universities around the world. The analysis of selected journals indicates that many journals tend to be more influenced by their respective countries of origin. However, other journals show a more general profile by publishing papers from most of the countries around the world.
Milfont, TL, Amirbagheri, K, Hermanns, E & Merigó, JM 2019, 'Celebrating Half a Century of Environment and Behavior: A Bibliometric Review', Environment and Behavior, vol. 51, no. 5, pp. 469-501.
View/Download from: Publisher's site
View description>>
Environment and Behavior is a leading international journal that has published research examining the relationships between human behavior and the built and natural environments since 1969. Motivated by its half-century anniversary, the present article uses the Web of Science Core Collection database to provide a bibliometric overview of the leading trends in the journal during the 1969-2018 period. The impact of the journal has increased over the years; Gary W. Evans is the author with the most published papers; articles by Paul C. Stern and Thomas Dietz have made a notable scientific impact; the University of Michigan is the institution with the highest number of publications; and there is a growing trend in the number of women and international contributors to the journal. This bibliographic review provides strong evidence of the scientific impact of the journal, and the wider Environment-and-Behavior community should be proud of its success story.
Mills, PW, Rundle, RP, Samson, JH, Devitt, SJ, Tilma, T, Dwyer, VM & Everitt, MJ 2019, 'Quantum invariants and the graph isomorphism problem', Physical Review A, vol. 100, no. 5.
View/Download from: Publisher's site
Milne, DN, McCabe, KL & Calvo, RA 2019, 'Improving Moderator Responsiveness in Online Peer Support Through Automated Triage', Journal of Medical Internet Research, vol. 21, no. 4, pp. e11410-e11410.
View/Download from: Publisher's site
View description>>
© 2019 Journal of Medical Internet Research. All rights reserved. Background: Online peer support forums require oversight to ensure they remain safe and therapeutic. As online communities grow, they place a greater burden on their human moderators, which increases the likelihood that people at risk may be overlooked. This study evaluated the potential for machine learning to assist online peer support by directing moderators' attention where it is most needed. Objective: This study aimed to evaluate the accuracy of an automated triage system and the extent to which it influences moderator behavior. Methods: A machine learning classifier was trained to prioritize forum messages as green, amber, red, or crisis depending on how urgently they require attention from a moderator. This was then launched as a set of widgets injected into a popular online peer support forum hosted by ReachOut.com, an Australian Web-based youth mental health service that aims to intervene early in the onset of mental health problems in young people. The accuracy of the system was evaluated using a holdout test set of manually prioritized messages. The impact on moderator behavior was measured as response ratio and response latency, that is, the proportion of messages that receive at least one reply from a moderator and how long it took for these replies to be made. These measures were compared across 3 periods: before launch, after an informal launch, and after a formal launch accompanied by training. Results: The algorithm achieved 84% f-measure in identifying content that required a moderator response. Between prelaunch and post-training periods, response ratios increased by 0.9, 4.4, and 10.5 percentage points for messages labelled as crisis, red, and green, respectively, but decreased by 5.0 percentage points for amber messages. Logistic regression indicated that the triage system was a significant contributor to response ratios for green, amber, and red messages, but not fo...
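The headline "84% f-measure" above is an F1 score. As a reminder of how that metric combines precision and recall, a standard F1 computation (illustrative only, not code from the study, and the counts below are made up) looks like:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall.

    tp -- true positives (urgent messages correctly flagged)
    fp -- false positives (non-urgent messages flagged)
    fn -- false negatives (urgent messages missed)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts where precision and recall are both 0.84
print(f_measure(84, 16, 16))
```

Because F1 is a harmonic mean, a triage classifier can only score well when it both catches most urgent messages (recall) and rarely raises false alarms (precision).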
Ming, Y, Ding, W, Pelusi, D, Wu, D, Wang, Y-K, Prasad, M & Lin, C-T 2019, 'Subject adaptation network for EEG data analysis', Applied Soft Computing, vol. 84, pp. 105689-105689.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Biosignals tend to display manifest intra- and cross-subject variance, which generates numerous challenges for electroencephalograph (EEG) data analysis. For instance, in the context of classification, the discrepancy between EEG data can cause the trained model to generalise poorly on new test subjects. In this paper, a subject adaptation network (SAN), inspired by the generative adversarial network (GAN), is proposed to mitigate these variances when analysing EEG data. First, the challenges faced by traditional approaches to EEG signal processing are emphasised. Second, the problem is formulated from a mathematical perspective to highlight the key points in resolving such discrepancies. Third, the motivation behind and design principle of the SAN are described in an intuitive manner to reflect its suitability for analysing EEG data. Finally, after depicting the overall architecture of the SAN, several experiments are used to justify the practicality and efficiency of the proposed model from different perspectives. For instance, an EEG dataset captured during a stereotypical neurophysiological experiment, the VEP oddball task, is utilised to demonstrate the performance improvement achieved by the SAN.
Ming, Y, Lin, C-T, Bartlett, SD & Zhang, W-W 2019, 'Quantum topology identification with deep neural networks and quantum walks', npj Computational Materials, vol. 5, no. 1.
View/Download from: Publisher's site
View description>>
Topologically ordered materials may serve as a platform for new quantum technologies, such as fault-tolerant quantum computers. To fulfil this promise, efficient and general methods are needed to discover and classify new topological phases of matter. We demonstrate that deep neural networks augmented with external memory can use the density profiles formed in quantum walks to efficiently identify properties of a topological phase as well as phase transitions. On a trial topologically ordered model, our method’s accuracy of topological phase identification reaches 97.4%, and is shown to be robust to noise on the data. Furthermore, we demonstrate that our trained DNN is able to identify topological phases of a perturbed model, and predict the corresponding shift of topological phase transitions without learning any information about the perturbations in advance. These results demonstrate that our approach is generally applicable and may be used to identify a variety of quantum topological materials.
Modak, NM, Merigó, JM, Weber, R, Manzor, F & Ortúzar, JDD 2019, 'Fifty years of Transportation Research journals: A bibliometric overview', Transportation Research Part A: Policy and Practice, vol. 120, pp. 188-223.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Transportation Research (TR) was established in 1967 with the vision of promoting multi-disciplinary (economics, engineering, sociology, psychology) research on transport systems. The journal has continuously expanded its wings becoming a world-leading journal, now publishing research work through six parts, A to F, respectively addressing Policy and Practice, Methodological, Emerging Technologies, Transport and Environment, Logistics and Transportation Review, and Traffic Psychology and Behaviour. This study aims to celebrate the first half century of the journal through a bibliometric study of the publications on all six parts between 1967 and 2016. It uses the most reliable database for academic research, the Web of Science Core Collection, to identify the leading trends in all TR journals in terms of impact, topics, authors, universities, and countries. Moreover, it uses the Visualization of Similarities (VOS) viewer software to analyse bibliographic coupling, co-citation, citation, co-authorship, and co-occurrence of keywords.
Mulet-Forteza, C, Genovart-Balaguer, J, Mauleon-Mendez, E & Merigó, JM 2019, 'A bibliometric research in the tourism, leisure and hospitality fields', Journal of Business Research, vol. 101, pp. 819-827.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Inc. This paper presents a study of the most cited papers, the most productive and influential institutions and countries, and the most influential authors in the tourism, leisure, and hospitality fields. The number of publications in journals focused on these areas has increased exponentially over the past 40 years. This paper examines the fundamental contributions in these areas using a bibliometric approach. This paper also uses the visualization of similarities to graphically map the main topics and keywords. No study has examined all journals indexed in the Web of Science in these fields over a period as wide as the one considered in this study. This study is valuable for several reasons. It can help scholars and researchers to identify the countries and institutions with the most potential to develop and share research, as well as where it would be interesting to carry out their doctoral studies and develop their careers.
Mulet-Forteza, C, Genovart-Balaguer, J, Merigó, JM & Mauleon-Mendez, E 2019, 'Bibliometric structure of IJCHM in its 30 years', International Journal of Contemporary Hospitality Management, vol. 31, no. 12, pp. 4574-4604.
View/Download from: Publisher's site
View description>>
Purpose: The International Journal of Contemporary Hospitality Management is a leading international journal in the field of hospitality and tourism management. It was started in 1989, and it turns 30 years old this year. To celebrate this anniversary, this paper presents a bibliometric overview of the publication and citation structure of the journal over the past 30 years. The purpose of this paper is to identify the relevant issues in terms of keywords and topics and who is achieving better results in terms of authors, universities and countries. Design/methodology/approach: The Scopus database is used to collect the bibliographical material. A graphical mapping of the bibliographic data is developed by using VOSviewer software. It produces graphical maps with several bibliometric techniques, including co-citation, bibliographic coupling and co-occurrence of keywords. Findings: The results indicate that English-speaking countries are producing the highest number of articles in the journal, followed by Asian institutions, with the Hong Kong Polytechnic University as the most productive institution. Originality/value: To the best of the authors’ knowledge, there are no papers that present a general overview of the publication and citation structure of this journal. Its 30th anniversary is a good moment to develop this study.
Musial, K, Bródka, P & De Meo, P 2019, 'Analysis and Applications of Complex Social Networks 2018', Complexity, vol. 2019, pp. 1-2.
View/Download from: Publisher's site
Natgunanathan, I, Mehmood, A, Xiang, Y, Gao, L & Yu, S 2019, 'Location Privacy Protection in Smart Health Care System', IEEE Internet of Things Journal, vol. 6, no. 2, pp. 3055-3069.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In a smart health system, patients' location information is periodically sent to hospitals, and this information helps hospitals to provide improved health care services. The location information together with a time stamp alone can reveal a patient's private information, such as the person's lifestyle, places frequently visited by the person, and personal interests. Thus, it is important to protect the location privacy of a patient. In the existing privacy protection mechanisms, trusted third party (TTP) and location perturbation techniques are used. However, in the TTP-based mechanism, an adversary who illegally gets access to the TTP server will have access to the private location information. On the other hand, in the location perturbation technique, the utility of the location information is significantly compromised. In this paper, we propose a location privacy protection mechanism in which location privacy is protected while maintaining the utility of the location data. In the proposed mechanism, a main processing unit attached to a patient's body generates the perturbed location by considering the distance between the patient's location and the pre-identified sensitive locations. This adaptive generation of perturbed locations removes the necessity to trust other parties while preserving the privacy and utility of the location data. The validity of the proposed mechanism is demonstrated by simulation results.
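As a rough illustration of the adaptive idea described above, the sketch below scales planar noise by proximity to sensitive locations. The function name, the linear scaling rule and the parameter values are illustrative assumptions, not the mechanism from the paper:

```python
import math
import random

def perturb_location(loc, sensitive_sites, base_noise=10.0, radius=500.0):
    """Add planar noise whose magnitude grows near pre-identified
    sensitive locations, so visits to those sites are obscured while
    ordinary locations keep most of their utility."""
    d_min = min(math.dist(loc, s) for s in sensitive_sites)
    # Within `radius` of a sensitive site the noise scale grows up to
    # twice the base level; beyond it, only base noise is applied.
    scale = base_noise * (1.0 + max(0.0, (radius - d_min) / radius))
    angle = random.uniform(0.0, 2.0 * math.pi)
    r = random.uniform(0.0, scale)
    return (loc[0] + r * math.cos(angle), loc[1] + r * math.sin(angle))
```

Points far from any sensitive site receive only the base perturbation, so routine location reports retain most of their utility.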
Nawaz, F, Hussain, O, Hussain, FK, Janjua, NK, Saberi, M & Chang, E 2019, 'Proactive management of SLA violations by capturing relevant external events in a Cloud of Things environment', Future Generation Computer Systems, vol. 95, pp. 26-44.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. The cloud of things (CoT) is an emerging paradigm that merges cloud computing and the Internet of Things (IoT). Such a paradigm has enabled service providers to provide on-demand computing resources from devices spread across different locations, and service users to be dynamically connected to them. While this benefits the CoT service providers and users in many ways, it also brings a key challenge of ensuring that the service is delivered according to the promised quality. Failure to ensure this will result in the service provider experiencing penalties of different types and the service user experiencing disruptions. The literature addresses this problem by proactively managing SLA violations. However, given the geographically dispersed region of a formed CoT service, in this paper we argue that proactive SLA violation identification needs specialized techniques that also consider events that are outside the usual control of service providers and users, but will impact the CoT environment and the quality of service. We propose a framework that identifies such external events of interest and ascertains their impact on achieving the service according to the promised quality. We explain the working of our proposed framework in detail and demonstrate its superiority in proactively determining SLA violations as compared to existing approaches.
Nicolas, C, Valenzuela-Fernandez, L & Merigó, JM 2019, 'Mapping retailing research with bibliometric indicators', Journal of Promotion Management, vol. 25, no. 5, pp. 664-680.
View/Download from: Publisher's site
View description>>
© 2019 Taylor & Francis Group, LLC. Our study aims to give a global perspective on scientific research in retailing for the 1990–2014 period. The research shows a knowledge-domain map that identifies the collaboration networks between authors and the links between journals. This was conducted through a bibliometric study that can be viewed with Visualization of Similarities (VOS) viewer software. The results show that the Journal of Retailing and Management Science are the current leaders in the field. In addition, Morgan and Hunt’s (1994) article in the Journal of Marketing is the most cited source to date.
Nuvoli, S, Hernandez, A, Esperança, C, Scateni, R, Cignoni, P & Pietroni, N 2019, 'QuadMixer: layout preserving blending of quadrilateral meshes.', ACM Trans. Graph., vol. 38, no. 6, pp. 180:1-180:1.
View/Download from: Publisher's site
View description>>
© 2019 Copyright held by the owner/author(s). We propose QuadMixer, a novel interactive technique to compose quad mesh components preserving the majority of the original layouts. Quad Layout is a crucial property for many applications since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving shape. Our technique keeps untouched all the quads in the patches which are not involved in the blending. We first perform robust boolean operations on the corresponding triangle meshes. Then we use this result to identify and build new surface patches for small regions neighboring the intersection curves. These blending patches are carefully quadrangulated respecting boundary constraints and stitched back to the untouched parts of the original models. The resulting mesh preserves the designed edge flow that, by construction, is captured and incorporated to the new quads as much as possible. We present our technique in an interactive tool to show its usability and robustness.
Oberst, S, Lenz, M, Lai, JCS & Evans, TA 2019, 'Termites manipulate moisture content of wood to maximize foraging resources', Biology Letters, vol. 15, no. 7, pp. 20190365-20190365.
View/Download from: Publisher's site
View description>>
Animals use cues to find their food, in microhabitats within their physiological tolerances. Termites build and modify their microhabitat, to transform hostile environments into benign ones, which raises questions about the relative importance of cues. Termites are desiccation intolerant and foraging termites are attracted to water, so most research has considered moisture to be a cue. However, termites can also transport water to food, and so moisture may play other roles than previously considered. To examine the role of moisture, we compared Coptotermes acinaciformis termite foraging decisions in laboratory experiments when they were offered dry and moist wood, with and without load. Without load, termites preferred moist wood and ate it without any building, whereas they moistened dry wood after wrapping it in a layer of clay. For the ‘With load’ units, termites substituted some of the wood for load-bearing clay walls, and kept the wood drier than on the unloaded units. As drier wood has higher compressive strength and higher rigidity, it allows more of the wood to be consumed. These results suggest that moisture plays a more important role in termite ecology than previously thought. Termites manipulate the moisture content according to the situational context and use it for multiple purposes: increased moisture levels soften the fibre, which facilitates foraging, yet keeping the wood dry provides higher structural stability against buckling which is especially important when foraging on wood under load.
Odriozola-Fernández, I, Berbegal-Mirabent, J & Merigó-Lindahl, JM 2019, 'Open innovation in small and medium enterprises: a bibliometric analysis', Journal of Organizational Change Management, vol. 32, no. 5, pp. 533-557.
View/Download from: Publisher's site
View description>>
PurposeThe open innovation (OI) paradigm suggests that firms should use inflows and outflows of knowledge in order to accelerate innovation and leverage markets. Literature examining how firms are adopting OI practices is rich; notwithstanding, little research has addressed this topic from the perspective of small- and medium-sized enterprises (SMEs). Given the relevance of SMEs in worldwide economies, the purpose of this paper is to provide a comprehensive overview of research on OI in SMEs.Design/methodology/approachIn total, 112 academic articles were selected from the Web of Science database. Following a bibliometric analysis, the most relevant authors, journals, institutions and countries are presented. Additionally, the main areas these articles cover are summarized.FindingsResults are consistent in that the most prolific authors are affiliated with the universities leading the ranking of institutions. However, it is remarkable that top authors in this field do not possess a large number of publications on OI in SMEs, but combine this research topic with other related ones. At the country level, European countries are on the top together with South Korea.Research limitations/implicationsDespite following a rigorous method, other relevant documents not included in the selected databases might have been ignored.Practical implicationsThis paper outlines the main topics of interest within this area: impact of OI on firm performance and on organizations’ structure, OI as a mechanism to haste...
Oltra-Badenes, R, Gil-Gomez, H, Merigo, JM & Palacios-Marques, D 2019, 'Methodology and model-based DSS to managing the reallocation of inventory to orders in LHP situations. Application to the ceramics sector', PLOS ONE, vol. 14, no. 7, pp. e0219433-e0219433.
View/Download from: Publisher's site
View description>>
© 2019 Oltra-Badenes et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Lack of homogeneity in the product (LHP) is a problem when customers require homogeneous units of a single product. In such cases, the optimal allocation of inventory to orders becomes much more complex. Furthermore, in an MTS environment, an optimal initial allocation may become less than ideal over time, due to different circumstances. This problem occurs in the ceramics sector, where the final product varies in tone and calibre. This paper proposes a methodology for the reallocation of inventory to orders in LHP situations (MERIO-LHP) and a model-based decision-support system (DSS) to support the methodology, which enables an optimal reallocation of inventory to order lines to be carried out in real business environments in which LHP is inherent. The proposed methodology and model-based DSS were validated by applying them to a real case at a ceramics company. The analysis of the results indicates that considerable improvements can be obtained with regard to the quantity of orders fulfilled and sales turnover.
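The allocation problem can be made concrete with a minimal best-fit sketch, assuming each order line must be served from a single homogeneous lot (one tone/calibre combination). All names here are hypothetical, and the actual MERIO-LHP methodology and DSS are considerably richer:

```python
def allocate_orders(lots, orders):
    """Greedily assign each order line to a single homogeneous lot.

    lots:   dict mapping (tone, calibre) -> available quantity
    orders: list of (order_id, qty) in priority order; LHP means a
            line must be served from one homogeneous lot only.
    Returns (assignments, unserved order ids).
    """
    assignments, unserved = {}, []
    for order_id, qty in orders:
        # Best fit: pick the smallest lot that still covers the line,
        # keeping large homogeneous lots intact for later big orders.
        candidates = [(q, k) for k, q in lots.items() if q >= qty]
        if not candidates:
            unserved.append(order_id)
            continue
        q, k = min(candidates)
        lots[k] -= qty
        assignments[order_id] = k
    return assignments, unserved
```

A best-fit choice preserves large homogeneous lots for later, larger order lines; the paper's DSS additionally supports reallocating inventory that was already assigned to orders.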
Oña, ED, García, JA, Raffe, W, Jardón, A & Balaguer, C 2019, 'Assessment of Manual Dexterity in VR: Towards a Fully Automated Version of the Box and Blocks Test.', Stud Health Technol Inform, vol. 266, pp. 57-62.
View/Download from: Publisher's site
View description>>
In recent years, the possibility of using serious gaming technology for the automation of clinical procedures for the assessment of motor function has captured the interest of the research community. In this paper, a virtual version of the Box and Blocks Test (BBT) for manual dexterity assessment is presented. This game-like system combines the classical BBT mechanics with a play-centric approach to accomplish a fully automated test for assessing hand motor function, making it more accessible and easier to administer. Additionally, some variants of the traditional mechanics are proposed in order to fully exploit the advantages of the chosen technology. This ongoing research aims to provide clinical practitioners with a customisable, intuitive, and reliable tool for the assessment and rehabilitation of hand motor function.
Orth, D, Thurgood, C & Hoven, EVD 2019, 'Designing Meaningful Products in the Digital Age', ACM Transactions on Computer-Human Interaction, vol. 26, no. 5, pp. 1-28.
View/Download from: Publisher's site
View description>>
Devices such as phones, laptops and tablets have become central to the ways in which many people communicate with others, conduct business and spend their leisure time. This type of product uniquely contains both physical and digital components that affect how they are perceived and valued by users. This article investigates the nature of attachment in the context of technological possessions to better understand ways in which designers can create devices that are meaningful and kept for longer. Findings from our study of the self-reported associations and meaningfulness of technological possessions revealed that the digital contents of these possessions were often the primary source of meaning. Technological possessions were frequently perceived as systems of products rather than as singular devices. We identified several design opportunities for materialising the associations ascribed to the digital information contained within technological products to more meaningfully integrate their physical and digital components.
Paler, A, Herr, D & Devitt, SJ 2019, 'Really Small Shoe Boxes: On Realistic Quantum Resource Estimation', Computer, vol. 52, no. 6, pp. 27-37.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The reliable resource estimation and benchmarking of quantum algorithms is a critical component of the development cycle of viable quantum applications for quantum computers of all sizes. Determining resource bottlenecks in algorithms, especially when resource intensive error correction protocols are required, is crucial to reduce the cost of implementing viable algorithms on actual quantum hardware.
Patel, OP, Bharill, N, Tiwari, A, Patel, V, Gupta, O, Cao, J, Li, J & Prasad, M 2019, 'Advanced Quantum Based Neural Network Classifier and Its Application for Objectionable Web Content Filtering', IEEE Access, vol. 7, pp. 98069-98082.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In this paper, an Advanced Quantum-based Neural Network Classifier (AQNN) is proposed. The proposed AQNN is used to form an objectionable Web content filtering system (OWF). The aim is to design a neural network with a small number of hidden-layer neurons and the optimal connection weights and neuron thresholds. The proposed algorithm uses concepts from quantum computing and genetic algorithms to evolve the connection weights and the neuron thresholds. Quantum computing uses the qubit, the smallest unit of information in quantum computing, as a probabilistic representation. In this algorithm, a threshold boundary parameter is also introduced to find the optimal value of the neuron threshold. The proposed algorithm forms a neural network architecture which is used to build an objectionable Web content filtering system that detects objectionable Web requests by users. To judge the performance of the proposed AQNN, a total of 2000 (1000 objectionable + 1000 non-objectionable) websites' contents have been used. The results of AQNN are also compared with QNN-F and well-known classifiers such as backpropagation, support vector machine (SVM), multilayer perceptron, decision tree, and artificial neural network. The results show that the AQNN as a classifier performs better than existing classifiers. The performance of the proposed objectionable Web content filtering system (OWF) is also compared with well-known objectionable Web filtering software and existing models. It is found that the proposed OWF performs better than existing solutions in terms of filtering objectionable content.
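The qubit-as-probability idea behind such algorithms can be sketched on a toy problem (maximizing the number of ones in a bit string). This is a generic quantum-inspired evolutionary sketch, not the AQNN training procedure itself, and all names and parameters are illustrative:

```python
import math
import random

def qea_onemax(n_bits=8, iters=60, dtheta=0.05 * math.pi):
    """Quantum-inspired evolution: each bit is a 'qubit' angle theta,
    observed with probability sin(theta)^2, then rotated toward the
    best solution seen so far."""
    theta = [math.pi / 4] * n_bits          # equal superposition
    best, best_fit = None, -1
    for _ in range(iters):
        bits = [1 if random.random() < math.sin(t) ** 2 else 0
                for t in theta]
        fit = sum(bits)                     # fitness: count of ones
        if fit > best_fit:
            best, best_fit = bits, fit
        # Rotate each qubit toward the corresponding bit of the best
        # solution, clamped to [0, pi/2].
        theta = [min(math.pi / 2, t + dtheta) if b else max(0.0, t - dtheta)
                 for t, b in zip(theta, best)]
    return best, best_fit
```

The rotation step plays the role of the genetic update that evolves the probabilistic representation toward better solutions.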
Patel, OP, Tiwari, A, Chaudhary, R, Nuthalapati, SV, Bharill, N, Prasad, M, Hussain, FK & Hussain, OK 2019, 'Enhanced quantum-based neural network learning and its application to signature verification', Soft Computing, vol. 23, no. 9, pp. 3067-3080.
View/Download from: Publisher's site
View description>>
© 2017, Springer-Verlag GmbH Germany, part of Springer Nature. In this paper, an enhanced quantum-based neural network learning algorithm (EQNN-S), which constructs a neural network architecture using the quantum computing concept, is proposed for signature verification. The quantum computing concept is used to decide the connection weights and thresholds of neurons. A boundary threshold parameter is introduced to optimally determine the neuron threshold. This parameter uses min and max functions to decide the threshold, which assists efficient learning. A manually prepared signature dataset is used to test the performance of the proposed algorithm. To uniquely identify the signature, several novel features are selected, such as the number of loops present in the signature, the boundary calculation, the number of vertical and horizontal dense patches, and the angle measurement. A total of 45 features are extracted from each signature. The performance of the proposed algorithm is evaluated by rigorous training and testing with these signatures using partitions of 60–40 and 70–30%, and a tenfold cross-validation. To compare the results derived from the proposed quantum neural network, the same dataset is tested on a support vector machine, multilayer perceptron, backpropagation neural network, and naive Bayes. The performance of the proposed algorithm is found to be better than the above methods, and the results verify the effectiveness of the proposed algorithm.
Peng, S, Wang, G, Zhou, Y, Wan, C, Wang, C, Yu, S & Niu, J 2019, 'An Immunization Framework for Social Networks Through Big Data Based Influence Modeling', IEEE Transactions on Dependable and Secure Computing, vol. 16, no. 6, pp. 984-995.
View/Download from: Publisher's site
View description>>
© 2004-2012 IEEE. Social networks are critical in terms of information or malware propagation. However, how to contain the spreading of malware in social networks is still an open and challenging issue. In this paper, we propose a novel defending method through big data based influence modeling. We first establish a social interaction graph based on big data sets of the studied object. Based on the graph, we are able to measure the direct influence of individuals by computing each node's strength, which includes the degree of the node and the total number of messages sent by each user to her friends. Then, we design an algorithm to construct an influence spreading tree using the breadth-first search strategy, and measure the indirect influence of individuals by traversing the tree. We identify the top k influential nodes among all the nodes via the social influence strength, and propose an immunization algorithm to defend social networks against various attacks. The extensive experiments show that influence can spread easily in social networks, and the greater the influence of the initial spreading node is, the greater its impact on the malware propagation in social networks. The proposed method provides an effective solution to the prevention of malware or malicious message propagation in social networks.
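A minimal sketch of the two influence measures described above, assuming a plain adjacency-list graph and a simple distance discount for indirect influence (the discount rule and the function names are assumptions, not the paper's exact formulas):

```python
from collections import deque

def influence_scores(graph, messages):
    """Direct influence of a node = degree + messages sent; indirect
    influence is accumulated by a breadth-first traversal (the
    influence spreading tree) rooted at each node."""
    def strength(u):
        return len(graph[u]) + messages.get(u, 0)

    scores = {}
    for root in graph:
        seen, queue, total = {root}, deque([(root, 0)]), 0.0
        while queue:
            u, depth = queue.popleft()
            # Discount a node's strength by its distance from the root.
            total += strength(u) / (depth + 1)
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append((v, depth + 1))
        scores[root] = total
    return scores

def top_k(graph, messages, k):
    """The k most influential nodes, candidates for immunization."""
    s = influence_scores(graph, messages)
    return sorted(s, key=s.get, reverse=True)[:k]
```

Immunizing the nodes returned by `top_k` is the intuition behind the proposed defense: removing the most influential spreaders limits how far malware can propagate.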
Pérez-Arellano, LA, León-Castro, E, Avilés-Ochoa, E & Merigó, JM 2019, 'Prioritized induced probabilistic operator and its application in group decision making', International Journal of Machine Learning and Cybernetics, vol. 10, no. 3, pp. 451-462.
View/Download from: Publisher's site
View description>>
© 2017, Springer-Verlag GmbH Germany. A new extension of the ordered weighted average (OWA) operator is presented. This new operator includes the characteristics of three other operators: the prioritized, induced and probabilistic operators. Its name is the prioritized induced probabilistic ordered weighted average (PIPOWA) operator. This operator can be used in a group decision-making process for the selection of an alternative, taking into account three aspects: (1) not all of the decision-makers are equally important, (2) the probability of success of each alternative, and (3) an induced weighting vector. In the paper, some families of this operator are presented, such as the prioritized probabilistic ordered weighted average (PPOWA) operator and the prioritized induced ordered weighted average (PIOWA) operator. Additionally, some members of the parameterized family of aggregation operators, such as the minimum, maximum and total operators, are presented as special cases. The article also generalizes the PIPOWA operator by using quasi-arithmetic means. Finally, an example of selecting an alternative dispute resolution method in a commercial dispute is presented.
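The induced probabilistic part of the aggregation can be sketched as follows, using a Merigó-style convex combination of OWA weights and probabilities; the prioritization step (deriving weights from decision-maker priorities) is omitted, and the function name is illustrative:

```python
def ipowa(values, inducers, probs, weights, beta=0.5):
    """Induced probabilistic OWA: arguments are reordered by their
    inducing variables, then aggregated with a convex combination of
    OWA weights and the (reordered) probabilities; beta sets the mix."""
    assert len(values) == len(inducers) == len(probs) == len(weights)
    # Order arguments by decreasing inducing variable.
    order = sorted(range(len(values)), key=lambda i: inducers[i],
                   reverse=True)
    b = [values[i] for i in order]   # reordered arguments
    p = [probs[i] for i in order]    # probabilities follow their arguments
    return sum((beta * w + (1 - beta) * pj) * bj
               for w, pj, bj in zip(weights, p, b))
```

For beta = 1 the operator reduces to the induced OWA; for beta = 0 it reduces to a purely probabilistic (expected-value) aggregation.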
Pileggi, SF & Voinov, A 2019, 'PERSWADE-CORE: A Core Ontology for Communicating Socio-Environmental and Sustainability Science', IEEE Access, vol. 7, pp. 127177-127188.
View/Download from: Publisher's site
Qiao, C, Lu, L, Yang, L & Kennedy, PJ 2019, 'Identifying Brain Abnormalities with Schizophrenia Based on a Hybrid Feature Selection Technology', Applied Sciences, vol. 9, no. 10, pp. 2148-2148.
View/Download from: Publisher's site
View description>>
Many medical imaging data, especially magnetic resonance imaging (MRI) data, have a small sample size but a large number of features. How to effectively reduce the data dimension and accurately locate the biomarkers in such data is crucial for diagnosis and further precision medicine. In this paper, we propose a hybrid feature selection method based on machine learning and traditional statistical approaches and explore the brain abnormalities of schizophrenia by using functional and structural MRI data. The results show that the abnormal brain regions are mainly distributed in the supramarginal gyrus, cingulate gyrus, frontal gyrus, precuneus and caudate, and the abnormal functional connections are related to the caudate nucleus, insula and rolandic operculum. In addition, some complex network analyses based on graph theory are applied to the functional connection data, and the results demonstrate that the located abnormal functional connections in the brain can distinguish schizophrenia patients from healthy controls. The abnormalities identified by the proposed hybrid feature selection method show that there do exist abnormal brain regions and abnormal disruption of network segregation and network integration in schizophrenia, and these changes may lead to inaccurate and inefficient information processing and synthesis in the brain, which provides further evidence for the cognitive dysmetria of schizophrenia.
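The statistical half of such a hybrid pipeline, a univariate Welch t-test filter over features such as functional connections, can be sketched in plain Python; the paper combines a filter of this kind with machine-learning-based selection, and the function names here are hypothetical:

```python
import math

def t_statistic(xs, ys):
    """Welch two-sample t statistic."""
    def mean_var(a):
        m = sum(a) / len(a)
        v = sum((x - m) ** 2 for x in a) / (len(a) - 1)
        return m, v
    mx, vx = mean_var(xs)
    my, vy = mean_var(ys)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def filter_features(patients, controls, k):
    """Rank each feature column by |t| between the patient and control
    groups and keep the indices of the k most discriminative ones."""
    scores = []
    for j in range(len(patients[0])):
        xs = [row[j] for row in patients]
        ys = [row[j] for row in controls]
        scores.append((abs(t_statistic(xs, ys)), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

Features surviving the filter would then be passed to a learned model, which is where the machine-learning side of the hybrid method takes over.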
Qiao, M, Yu, J, Bian, W, Li, Q & Tao, D 2019, 'Adapting Stochastic Block Models to Power-Law Degree Distributions', IEEE Transactions on Cybernetics, vol. 49, no. 2, pp. 626-637.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Stochastic block models (SBMs) have been playing an important role in modeling clusters or community structures of network data. However, the SBM is incapable of handling several complex features ubiquitously exhibited in real-world networks, one of which is the power-law degree characteristic. To this end, we propose a new variant of SBM, termed power-law degree SBM (PLD-SBM), by introducing degree decay variables to explicitly encode the varying degree distribution over all nodes. With an exponential prior, it is proved that PLD-SBM approximately preserves the scale-free feature of real networks. In addition, from the inference of the variational E-step, PLD-SBM indeed corrects the bias inherent in SBM with the introduced degree decay factors. Furthermore, experiments conducted on both synthetic networks and two real-world datasets, including the Adolescent Health data and the political blogs network, verify the effectiveness of the proposed model in terms of cluster prediction accuracies.
Quiroz, JC, Laranjo, L, Kocaballi, AB, Berkovsky, S, Rezazadegan, D & Coiera, E 2019, 'Challenges of developing a digital scribe to reduce clinical documentation burden', npj Digital Medicine, vol. 2, no. 1, p. 114.
View/Download from: Publisher's site
View description>>
AbstractClinicians spend a large amount of time on clinical documentation of patient encounters, often impacting quality of care and clinician satisfaction, and causing physician burnout. Advances in artificial intelligence (AI) and machine learning (ML) open the possibility of automating clinical documentation with digital scribes, using speech recognition to eliminate manual documentation by clinicians or medical scribes. However, developing a digital scribe is fraught with problems due to the complex nature of clinical environments and clinical conversations. This paper identifies and discusses major challenges associated with developing automated speech-based documentation in clinical settings: recording high-quality audio, converting audio to transcripts using speech recognition, inducing topic structure from conversation data, extracting medical concepts, generating clinically meaningful summaries of conversations, and obtaining clinical data for AI and ML algorithms.
Raeisi, S, Kieferová, M & Mosca, M 2019, 'Novel Technique for Robust Optimal Algorithmic Cooling', Physical Review Letters, vol. 122, no. 22, pp. 220501-220501.
View/Download from: Publisher's site
View description>>
Heat-bath algorithmic cooling provides algorithmic ways to improve the purity of quantum states. These techniques are complex iterative processes that change from each iteration to the next and this poses a significant challenge to implementing these algorithms. Here, we introduce a new technique that on a fundamental level, shows that it is possible to do algorithmic cooling and even reach the cooling limit without any knowledge of the state and using only a single fixed operation, and on a practical level, presents a more feasible and robust alternative for implementing heat-bath algorithmic cooling. We also show that our new technique converges to the asymptotic state of heat-bath algorithmic cooling and that the cooling algorithm can be efficiently implemented; however, the saturation could require exponentially many iterations and remains impractical. This brings heat-bath algorithmic cooling to the realm of feasibility and makes it a viable option for realistic application in quantum technologies.
Raza, M, Hussain, FK, Hussain, OK, Zhao, M & Rehman, ZU 2019, 'A comparative analysis of machine learning models for quality pillar assessment of SaaS services by multi-class text classification of users’ reviews', Future Generation Computer Systems, vol. 101, pp. 341-371.
View/Download from: Publisher's site
Razzak, I, A. Hameed, I & Xu, G 2019, 'Robust Sparse Representation and Multiclass Support Matrix Machines for the Classification of Motor Imagery EEG Signals', IEEE Journal of Translational Engineering in Health and Medicine, vol. 7, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Background: EEG signals are extremely complex in comparison to other biomedical signals, and thus require efficient feature selection as well as classification approaches. Traditional feature extraction and classification methods require reshaping the data into vectors, which results in losing the structural information present in the original feature matrix. Aim: The aim of this work is to design an efficient approach for robust feature extraction and classification of EEG signals. Method: In order to extract a robust feature matrix and reduce the dimensionality of the original epileptic EEG data, in this paper we apply robust joint sparse PCA (RJSPCA) and outliers-robust PCA (ORPCA) and compare their performance with different matrix-based feature extraction methods, followed by classification through a support matrix machine. The combination of joint sparse PCA with a robust support matrix machine shows good generalization performance for classification of EEG data due to their convex optimization. Results: A comprehensive experimental study on publicly available EEG datasets is carried out to validate the robustness of the proposed approach against outliers. Conclusion: The experimental results, supported by the theoretical analysis and statistical tests, show the effectiveness of the proposed framework for the classification of EEG signals.
Razzak, I, Blumenstein, M & Xu, G 2019, 'Multiclass Support Matrix Machines by Maximizing the Inter-Class Margin for Single Trial EEG Classification', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 6, pp. 1117-1127.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Accurate classification of electroencephalogram (EEG) signals plays an important role in the diagnosis of different types of mental activity. One of the most important challenges associated with the classification of EEG signals is how to design an efficient classifier with strong generalization capability. Aiming to improve the classification performance, in this paper we propose a novel multiclass support matrix machine (M-SMM) from the perspective of maximizing the inter-class margins. The objective function is a combination of a binary hinge loss that works on C matrices and a spectral elastic net penalty as the regularization term. This regularization term is a combination of the Frobenius and nuclear norms, which promotes structural sparsity and shares similar sparsity patterns across multiple predictors. It also maximizes the inter-class margin, which helps to deal with complex, high-dimensional, noisy data. Extensive experimental results supported by theoretical analysis and statistical tests show the effectiveness of the M-SMM for solving the problem of classifying EEG signals associated with motor imagery in brain-computer interface applications.
Razzak, MI, Imran, M & Xu, G 2019, 'Efficient Brain Tumor Segmentation With Multiscale Two-Pathway-Group Conventional Neural Networks', IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 5, pp. 1911-1919.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Manual segmentation of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor segmentation are therefore crucial for diagnosis, treatment planning, and treatment outcome evaluation. Mostly, automatic brain tumor segmentation methods use hand-designed features. Similarly, traditional deep learning methods such as convolutional neural networks require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain. Here, we describe a new two-pathway-group CNN architecture for brain tumor segmentation, which exploits local features and global contextual features simultaneously. This model enforces equivariance in the two-pathway CNN model to reduce instabilities and overfitting through parameter sharing. Finally, we embed the cascade architecture into the two-pathway-group CNN, in which the output of a basic CNN is treated as an additional source and concatenated at the last layer. Validation of the model on the BRATS2013 and BRATS2015 datasets revealed that embedding a group CNN into a two-pathway architecture improved the overall performance over the currently published state-of-the-art, while computational complexity remains attractive.
Reddy, TK, Arora, V, Behera, L, Wang, Y-K & Lin, C-T 2019, 'Multiclass Fuzzy Time-Delay Common Spatio-Spectral Patterns With Fuzzy Information Theoretic Optimization for EEG-Based Regression Problems in Brain–Computer Interface (BCI)', IEEE Transactions on Fuzzy Systems, vol. 27, no. 10, pp. 1943-1951.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Electroencephalogram (EEG) signals are one of the most widely used noninvasive signals in brain-computer interfaces. Large dimensional EEG recordings suffer from poor signal-to-noise ratio. These signals are very much prone to artifacts and noise, so sufficient preprocessing is done on raw EEG signals before using them for classification or regression. Properly selected spatial filters enhance the signal quality and subsequently improve the rate and accuracy of classifiers, but their applicability to solve regression problems is quite an unexplored objective. This paper extends common spatial patterns (CSP) to EEG state space using fuzzy time delay and thereby proposes a novel approach for spatial filtering. The approach also employs a novel fuzzy information theoretic framework for filter selection. Experimental performance on EEG-based reaction time (RT) prediction from a lane-keeping task data from 12 subjects demonstrated that the proposed spatial filters can significantly increase the EEG signal quality. A comparison based on root-mean-squared error (RMSE), mean absolute percentage error (MAPE), and correlation to true responses is made for all the subjects. In comparison to the baseline fuzzy CSP regression one versus rest, the proposed Fuzzy Time-delay Common Spatio-Spectral filters reduced the RMSE on an average by 9.94%, increased the correlation to true RT on an average by 7.38%, and reduced the MAPE by 7.09%.
Reddy, TK, Arora, V, Kumar, S, Behera, L, Wang, Y-K & Lin, C-T 2019, 'Electroencephalogram Based Reaction Time Prediction With Differential Phase Synchrony Representations Using Co-Operative Multi-Task Deep Neural Networks', IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 3, no. 5, pp. 369-379.
View/Download from: Publisher's site
Rialp, A, Merigó, JM, Cancino, CA & Urbano, D 2019, 'Twenty-five years (1992–2016) of the International Business Review: A bibliometric overview', International Business Review, vol. 28, no. 6, pp. 101587-101587.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd. The International Business Review (IBR) is a leading international academic journal in the field of International Business (IB). Such leadership is reflected in the large number of publications, which grows year after year, and particularly in the large number of citations received from other journals of high academic prestige. The aim of this study is to conduct a bibliometric overview of the leading trends in the journal's publications and citations from its creation in 1992 until 2016. The work identifies the authors, universities, and countries that publish the most in IBR, mainly using the Scopus database, complemented where necessary with the Web of Science (WoS) Core Collection. It also analyzes the most cited papers and articles of the journal. In addition, the study graphically maps the bibliographic material using the visualization of similarities (VOS) viewer software. To do so, the work uses co-citation analysis, bibliographic coupling, and co-occurrence of author keywords. The results show the prominent European profile of the journal, with contributors from European universities and countries being the most productive in the journal. In particular, British and Scandinavian universities obtain the most remarkable results. However, scholars from North America, as well as from Oceania and East Asia, are increasingly and regularly publishing in the journal. In addition, IBR is very well connected to other leading journals in the field, such as the Journal of International Business Studies (JIBS) and the Journal of World Business (JWB), as well as to other top management journals, demonstrating its core position in IB research conducted worldwide.
Saberi, M, Hussain, OK & Chang, E 2019, 'Quality Management of Workers in an In-House Crowdsourcing-Based Framework for Deduplication of Organizations’ Databases', IEEE Access, vol. 7, pp. 90715-90730.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. While organizations in the current era of big data are generating massive volumes of data, they also need to ensure that its quality is maintained for it to be useful for decision-making purposes. The problem of dirty data plagues every organization. One aspect of dirty data is the presence of duplicate data records that negatively impact the organization's operations in many ways. Many existing approaches attempt to address this problem by using traditional data cleansing methods. In this paper, we address this problem by using an in-house crowdsourcing-based framework, namely, DedupCrowd. One of the main obstacles of crowdsourcing-based approaches is monitoring the performance of the crowd, by which the integrity of the whole process is maintained. In this paper, a statistical quality control-based technique is proposed to regulate the performance of the crowd. We apply our proposed framework in the context of a contact center, where the Customer Service Representatives are used as the crowd to assist in the process of duplicate detection. By using comprehensive working examples, we show how the different modules of DedupCrowd work not only to monitor the performance of the crowd but also to assist in duplicate detection.
Saberi, Z, Saberi, M, Hussain, O & Chang, E 2019, 'Stackelberg model based game theory approach for assortment and selling price planning for small scale online retailers', Future Generation Computer Systems, vol. 100, pp. 1088-1102.
View/Download from: Publisher's site
View description>>
© 2019 Assortment Planning (AP) is one of the most significant and challenging decisions for online retailers (e-tailers) to make. This decision becomes even more complex when a supplier is considered as a distinctive participant in the decision-making model. In the bricks-and-mortar mode of retailing, retailers are more powerful than suppliers in getting the required goods in the required quantity. However, this is not the case for small-scale e-tailers. Such e-tailers face situations where large-scale retailers indirectly force suppliers to refuse to supply to them. In such cases, effective AP decision-making approaches are needed for small-scale e-tailers to obtain the required goods to satisfy customer demand. While current advancements in smart cities provide a powerful platform and support for the successful operation of online retailing, this needs to be supported by appropriate modeling approaches that assist the e-tailer in obtaining their required product assortment. In this paper, a game-theoretic model is developed to support the small-scale e-tailer in AP decision making, in which the e-tailer has two strategies to choose from. The first strategy is to offer the product with supreme quality by procuring it from the main, powerful supplier; the second is to offer the product from a less popular brand. The first strategy is modeled as a non-cooperative Stackelberg supply chain in which the supplier plays the leader and the e-tailer is the follower, while the second strategy is modeled as an assortment planning problem that considers the utility degradation of providing an alternative brand to the customers. Various analyses are performed to find the best strategy in different scenarios before recommending the best strategy to be followed by the e-tailer in given situations.
Saeed, Z, Abbasi, RA, Maqbool, O, Sadaf, A, Razzak, I, Daud, A, Aljohani, NR & Xu, G 2019, 'What’s Happening Around the World? A Survey and Framework on Event Detection Techniques on Twitter', Journal of Grid Computing, vol. 17, no. 2, pp. 279-312.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature B.V. In the last few years, Twitter has become a popular platform for sharing opinions, experiences, news, and views in real time. Twitter presents an interesting opportunity for detecting events happening around the world. The content (tweets) published on Twitter is short and poses diverse challenges for detecting and interpreting event-related information. This article provides insights into ongoing research and helps in understanding recent research trends and techniques used for event detection using Twitter data. We classify techniques and methodologies according to event types, orientation of content, event detection tasks, their evaluation, and common practices. We highlight the limitations of existing techniques and accordingly propose solutions to address the shortcomings. We propose a framework called EDoT based on the research trends, common practices, and techniques used for detecting events on Twitter. EDoT can serve as a guideline for developing event detection methods, especially for researchers who are new to this area. We also describe and compare data collection techniques, the effectiveness and shortcomings of various Twitter and non-Twitter-based features, and discuss various evaluation measures and benchmarking methodologies. Finally, we discuss the trends, limitations, and future directions for detecting events on Twitter.
Saeed, Z, Abbasi, RA, Razzak, I, Maqbool, O, Sadaf, A & Xu, G 2019, 'Enhanced Heartbeat Graph for emerging event detection on Twitter using time series networks', Expert Systems with Applications, vol. 136, pp. 115-132.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd With increasing popularity of social media, Twitter has become one of the leading platforms to report events in real-time. Detecting events from Twitter stream requires complex techniques. Event-related trending topics consist of a group of words which successfully detect and identify events. Event detection techniques must be scalable and robust, so that they can deal with the huge volume and noise associated with social media. Existing event detection methods mostly rely on burstiness, mainly the frequency of words and their co-occurrences. However, burstiness sometimes dominates other relevant details in the data which could be equally significant. Besides, the topological and temporal relationships in the data are often ignored. In this work, we propose a novel graph-based approach, called the Enhanced Heartbeat Graph (EHG), which detects events efficiently. EHG suppresses dominating topics in the subsequent data stream, after their first detection. Experimental results on three real-world datasets (i.e., Football Association Challenge Cup Final, Super Tuesday, and the US Election 2012) show superior performance of the proposed approach in comparison to the state-of-the-art techniques.
Saeed, Z, Ayaz Abbasi, R, Razzak, MI & Xu, G 2019, 'Event Detection in Twitter Stream Using Weighted Dynamic Heartbeat Graph Approach [Application Notes]', IEEE Computational Intelligence Magazine, vol. 14, no. 3, pp. 29-38.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Once an event is detected, WDHG approach suppresses the bursty keywords at subsequent time intervals. This characteristic enables other related information to be more visible and helps in capturing new and emerging events.
Salamai, A, Hussain, OK, Saberi, M, Chang, E & Hussain, FK 2019, 'Highlighting the Importance of Considering the Impacts of Both External and Internal Risk Factors on Operational Parameters to Improve Supply Chain Risk Management', IEEE Access, vol. 7, pp. 49297-49315.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Operational risk management in supply chain activities is important for the successful achievement of the desired outcomes. Although it is an active area of research aiming to improve a firm's success in its operations, a drawback of existing approaches is that they analyze risk only from the perspective of events local to the supply chain. In this paper, we argue that it is also important for firms in a supply chain to consider external events, as these directly influence internal ones, and we use various real-world examples of the risks in different processes of a supply chain to justify this point. We then consider supply chain risk management not only as an operational research process, as all the relevant survey papers do, but also as a data science problem for gaining deeper real-time insights for information risk management. Finally, we suggest directions for future research that will assist supply chain risk managers in undertaking better supply chain risk management processes.
Sanders, YR, Low, GH, Scherer, A & Berry, DW 2019, 'Black-Box Quantum State Preparation without Arithmetic', Physical Review Letters, vol. 122, no. 2, p. 020502.
View/Download from: Publisher's site
View description>>
Black-box quantum state preparation is an important subroutine in many quantum algorithms. The standard approach requires the quantum computer to do arithmetic, which is a key contributor to the complexity. Here we present a new algorithm that avoids arithmetic. We thereby reduce the number of gates by a factor of 286-374 over the best prior work for realistic precision; the improvement factor increases with the precision. As quantum state preparation is a crucial subroutine in many approaches to simulating physics on a quantum computer, our new method brings useful quantum simulation closer to reality.
Sani, AS, Yuan, D, Jin, J, Gao, L, Yu, S & Dong, ZY 2019, 'Cyber security framework for Internet of Things-based Energy Internet', Future Generation Computer Systems, vol. 93, pp. 849-859.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. With the significant improvement in the deployment of the Internet of Things (IoT) into the smart grid infrastructure, the demand for cyber security is rapidly growing. The Energy Internet (EI), also known as the integrated internet-based smart grid and energy resources, inherits all the security vulnerabilities of the existing smart grid. The security structure of the smart grid has become inadequate in meeting the security needs of energy domains in the 21st century. In this paper, we propose a cyber security framework capable of providing adequate security and privacy, and supporting efficient energy management in the EI. The proposed framework uses an identity-based security mechanism (I-ICAAAN), a secure communication protocol, and an Intelligent Security System for Energy Management (ISSEM) to certify security and privacy in the EI. The Nash equilibrium solution from game theory is applied for the evaluation of our proposed ISSEM based on security event allocation. The formal verification and theoretical analysis show that our proposed framework provides security and privacy improvements for the IoT-based EI.
Saqib, M, Khan, SD, Sharma, N & Blumenstein, M 2019, 'Crowd Counting in Low-Resolution Crowded Scenes Using Region-Based Deep Convolutional Neural Networks', IEEE Access, vol. 7, pp. 35317-35329.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Crowd counting and density estimation is an important and challenging problem in the visual analysis of crowds. Most of the existing approaches use regression on density maps to obtain the crowd count from a single image. However, these methods cannot localize individual pedestrians and therefore cannot estimate the actual distribution of pedestrians in the environment. On the other hand, detection-based methods detect and localize pedestrians in the scene, but their performance degrades when applied in high-density situations. To overcome the limitations of pedestrian detectors, we propose a motion-guided filter (MGF) that exploits spatial and temporal information between consecutive frames of the video to recover missed detections. Our framework is based on a deep convolutional neural network (DCNN) for crowd counting in low-to-medium-density videos. We employ various state-of-the-art network architectures, namely Visual Geometry Group (VGG16), Zeiler and Fergus (ZF), and VGGM, in the framework of a region-based DCNN for detecting pedestrians. After pedestrian detection, the proposed motion-guided filter is employed. We evaluate the performance of our approach on three publicly available datasets. The experimental results demonstrate the effectiveness of our approach, which significantly improves the performance of state-of-the-art detectors.
Sarker, PC, Islam, MR, Guo, Y, Zhu, J & Lu, HY 2019, 'State-of-the-Art Technologies for Development of High Frequency Transformers with Advanced Magnetic Materials', IEEE Transactions on Applied Superconductivity, vol. 29, no. 2, pp. 1-11.
View/Download from: Publisher's site
View description>>
© 2002-2011 IEEE. With the development of advanced soft magnetic materials of high-saturation flux density and low specific core loss and semiconductor power devices, the high-frequency transformer (HFT) has received significant attention in recent years for its widespread emerging applications. The optimal design of high-power-density HFTs for high-performance energy conversion systems is, however, a multiphasic problem that needs special considerations on various aspects such as core material selection, minimization of parasitic components, and thermal management. This paper presents a comprehensive review on advancement of soft magnetic materials for high-power-density magnetic devices and advanced technologies for characterizations and optimal design of HFTs. The future research and development trends are also discussed.
Shen, S, Zhou, H, Feng, S, Huang, L, Liu, J, Yu, S & Cao, Q 2019, 'HSIRD: A model for characterizing dynamics of malware diffusion in heterogeneous WSNs', Journal of Network and Computer Applications, vol. 146, pp. 102420-102420.
View/Download from: Publisher's site
View description>>
© 2019 Heterogeneous wireless sensor networks (HWSNs), as building blocks of the Internet of Things, are vulnerable to malware diffusion that breaks data confidentiality and service availability, owing to their weak defense mechanisms and poor resilience. Thus, constructing a malware diffusion model and revealing the rules of malware diffusion in HWSNs are urgently needed. In this context, we propose a Heterogeneous Susceptible-Infectious-Removed-Dead (HSIRD) model based on epidemiology, in order to not only characterize the dead state, where a heterogeneous sensor node (HSN) may lose its functionality owing to physical damage or malware attacks, but also represent HSN communication connectivity, one of the heterogeneities that exist universally in HWSNs. We then analyze the dynamics of the fractions of HSNs belonging to different degrees in different states and obtain the corresponding differential equations. Using these equations, we prove the existence of equilibrium points of the HSIRD model. Subsequently, we attain the basic reproduction number governing the stability of the equilibrium points. We further prove the stability of the equilibrium points of the model and obtain the conditions indicating whether malware in HWSNs will diffuse or die out. Finally, we validate the effectiveness of the model via simulation. The results provide a theoretical foundation for suppressing malware diffusion in malware-infected HWSNs.
Shen, W, Zhang, C & Yu, S 2019, 'An Energy-Efficient Scheme for Constructing Underwater Sensor Barrier with Minimum Mobile Sensors', AD HOC & SENSOR WIRELESS NETWORKS, vol. 43, no. 1-2, pp. 57-84.
Sohaib, O, Kang, K & Miliszewska, I 2019, 'Uncertainty Avoidance and Consumer Cognitive Innovativeness in E-Commerce.', J. Glob. Inf. Manag., vol. 27, no. 2, pp. 59-77.
View/Download from: Publisher's site
View description>>
Copyright © 2019, IGI Global. This article describes how, despite the extensive academic interest in e-commerce, the investigation of consumer cognitive innovativeness toward new product purchase intention has been neglected. Based on the stimulus–organism–response (S–O–R) model, this study investigates consumer cognitive innovativeness and the moderating role of individual consumer-level uncertainty avoidance cultural values in new product purchase intention in business-to-consumer (B2C) e-commerce. Structural equation modelling, specifically partial least squares (PLS) path modelling, was used to test the model, using a sample of 255 participants in Australia who had prior online shopping experience. The findings show that the online store web atmosphere influences consumers' cognitive innovativeness to purchase new products in countries with diverse degrees of uncertainty avoidance, such as Australia. The results provide some guidance for B2C website design based on how an individual's uncertainty avoidance and cognitive innovativeness can aid the online consumer's purchasing decision-making process.
Sohaib, O, Naderpour, M, Hussain, W & Martínez-López, L 2019, 'Cloud computing model selection for e-commerce enterprises using a new 2-tuple fuzzy linguistic decision-making method.', Comput. Ind. Eng., vol. 132, pp. 47-58.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Cloud computing is truly transforming the way e-commerce firms do business. While there has been a sharp increase in the use of cloud computing in e-commerce, the benefits of cloud service models have yet to be explored, particularly for small-to-medium-sized businesses. A strong e-commerce offering depends on a reliable and secure online store, therefore it is important for decision makers to adopt the optimal cloud computing service model such as software-as-a-service (SaaS), platform-as-a-service (PaaS), or infrastructure-as-a-service (IaaS), which is a multi-criteria decision-making problem (MCDM). To address this MCDM problem, we propose a novel 2-tuple fuzzy linguistic multi-criteria group decision-making method based on the technique for order preference by similarity to ideal solution (TOPSIS) and rely upon a technology-organization-environment (TOE) framework to determine a set of appropriate criteria. The proposed methodology is applied to a small-to-medium-sized company to facilitate assessing the factors associated with cloud-based e-commerce and making the decision. The result analysis indicates that SaaS is the best choice for small and medium-sized e-commerce businesses considering criteria such as complexity, reliability, security and privacy, organization readiness and firm size, while the selection of PaaS or IaaS can be reinforced considering their compatibility and scalability.
Sohaib, O, Solanki, H, Dhaliwa, N, Hussain, W & Asif, M 2019, 'Integrating design thinking into extreme programming.', J. Ambient Intell. Humaniz. Comput., vol. 10, no. 6, pp. 2485-2492.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. The increased demand for information systems drives businesses to rethink their customer needs to a greater extent and undertake innovation to compete in the marketplace. Design thinking (DT) is a human-centered methodology that leads to creativity and innovation. Agile application development approaches such as extreme programming (XP), a rapid application development approach, tend to focus on perfecting functional requirements and technical implementation. However, this causes significant challenges in building software/applications that meet the needs of end-users. This study integrates DT practices into the XP methodology to improve the quality of the software product for end-users and to enable businesses to achieve creativity and innovation. The proposed integrated DT@XP framework adapts various DT practices (empathy, define, persona, DT user stories) into the XP exploration phase, and prototyping and usability evaluation into the XP planning phase. Our work demonstrates the applicability of DT concepts for analyzing customer/user involvement in XP projects.
Sood, K, Karmakar, KK, Varadharajan, V, Tupakula, U & Yu, S 2019, 'Analysis of Policy-Based Security Management System in Software-Defined Networks', IEEE Communications Letters, vol. 23, no. 4, pp. 612-615.
View/Download from: Publisher's site
View description>>
© 1997-2012 IEEE. In software-defined networks, policy-based security management or architecture (PbSA) is an ideal way to dynamically control the network. We observe that, on the one hand, this enables security capabilities intelligently and enhances fine-grained control over end-user behavior. On the other hand, dynamic variations in the network, rapid increases in security attacks, the geographical distribution of nodes, complex heterogeneous networks, and so on have serious effects on the performance of PbSAs. These affect flow-specific quality-of-service requirements and further degrade the performance of the security context. Hence, in this letter, PbSA performance is evaluated. Key factors including the number of rules, rule-table size, position of rules, flow arrival rate, and CPU utilization are examined and found to have a considerable impact on the performance of PbSAs.
Stender, M, Oberst, S & Hoffmann, N 2019, 'Recovery of Differential Equations from Impulse Response Time Series Data for Model Identification and Feature Extraction', Vibration, vol. 2, no. 1, pp. 25-46.
View/Download from: Publisher's site
View description>>
Time recordings of impulse-type oscillation responses are short and highly transient. These characteristics may complicate the usage of classical spectral signal processing techniques for (a) describing the dynamics and (b) deriving discriminative features from the data. However, common model identification and validation techniques mostly rely on steady-state recordings, characteristic spectral properties and non-transient behavior. In this work, a recent method, which allows reconstructing differential equations from time series data, is extended for higher degrees of automation. With special focus on short and strongly damped oscillations, an optimization procedure is proposed that fine-tunes the reconstructed dynamical models with respect to model simplicity and error reduction. This framework is analyzed with particular focus on the amount of information available to the reconstruction, noise contamination and nonlinearities contained in the time series input. Using the example of a mechanical oscillator, we illustrate how the optimized reconstruction method can be used to identify a suitable model and how to extract features from uni-variate and multivariate time series recordings in an engineering-compliant environment. Moreover, the determined minimal models allow for identifying the qualitative nature of the underlying dynamical systems as well as testing for the degree and strength of nonlinearity. The reconstructed differential equations would then be potentially available for classical numerical studies, such as bifurcation analysis. These results represent a physically interpretable enhancement of data-driven modeling approaches in structural dynamics.
Stender, M, Oberst, S, Tiedemann, M & Hoffmann, N 2019, 'Complex machine dynamics: systematic recurrence quantification analysis of disk brake vibration data', Nonlinear Dynamics, vol. 97, no. 4, pp. 2483-2497.
View/Download from: Publisher's site
View description>>
Complex machine dynamics, as caused by friction-induced vibrations and related to brake squeal, have gained significant attention in research and industry for decades. Today, remedies rely heavily on experimental testing due to the low prediction quality of numerical models. However, there is a considerable lack of in-depth studies characterizing self-excited oscillations encoded in scalar measurements. We complement previous works on phase-space reconstruction and recurrence plot analysis by applying a novel systematic approach to a larger database. This framework considers appropriate delay embedding, time series partitioning into squealing and non-squealing parts, and comparison to operational parameters of the brake system. By means of recurrence plot analysis, we illustrate that friction-excited vibrations are multi-scale in nature. Results confirm the existence of low-dimensional attractors in squealing regimes, with increasing values of determinism and periodicity at rising vibration levels. It is shown that squeal propensity can be directly linked to recurrence quantification measures. Using determinism and the clustering coefficient as metrics, we show for the first time that it is possible to predict instabilities in regions of non-squealing conditions.
Sun, L, Dong, H, Hussain, OK, Hussain, FK & Liu, AX 2019, 'A framework of cloud service selection with criteria interactions', Future Generation Computer Systems, vol. 94, pp. 749-764.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Existing cloud service selection techniques assume that service evaluation criteria are independent. In reality, there are different types of interactions between criteria. These interactions influence the performance of a service selection system in different ways. In addition, a lack of measurement indices to validate the performance of service selection methods has hindered the development of decision making techniques in the service selection area. This paper addresses these critical issues of modeling the interactions between cloud service selection criteria, and designing indices to validate service selection methods. In this paper, we propose a Cloud Service Selection with Criteria Interactions framework (CSSCI) that applies a fuzzy measure and Choquet integral to measure and aggregate non-linear relations between criteria. We employ a non-linear constraint optimization model to estimate the Shapley importance and criteria interaction indices. In addition, we design a priority-based CSSCI (PCSSCI) to solve service selection problems in the situation where there is a lack of historical information to determine criteria relations and weights. Furthermore, we discuss an approximate solution for CSSCI to reduce its computing complexity. Finally, we design three indices to validate the cloud service selection methods. The experimental results preliminarily prove the technical advantage of the proposed models in contrast to several existing models.
Sun, Y, Tian, Z, Wang, Y, Li, M, Su, S, Wang, X & Fan, D 2019, 'Lightweight Anonymous Geometric Routing for Internet of Things', IEEE Access, vol. 7, pp. 29754-29762.
View/Download from: Publisher's site
Torres-Robles, A, Wiecek, E, Cutler, R, Drake, B, Benrimoj, SI, Fernandez-Llimos, F & Garcia-Cardenas, V 2019, 'Using Dispensing Data to Evaluate Adherence Implementation Rates in Community Pharmacy', Frontiers in Pharmacology, vol. 10, no. FEB, p. 130.
View/Download from: Publisher's site
View description>>
Copyright © 2019 Torres-Robles, Wiecek, Cutler, Drake, Benrimoj, Fernandez-Llimos and Garcia-Cardenas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Background: Medication non-adherence remains a significant problem for the health care system, with clinical, humanistic and economic impact. Dispensing data is a valuable and commonly utilized measure due to its accessibility in electronic health data. The purpose of this study was to analyze the changes in adherence implementation rates before and after a community pharmacist intervention integrated into usual real-life practice, incorporating big data analysis techniques to evaluate the Proportion of Days Covered (PDC) from pharmacy dispensing data. Methods: Retrospective observational study. A de-identified database of dispensing data from 20,335 patients (n = 11,257 on rosuvastatin, n = 6,797 on irbesartan, and n = 2,281 on desvenlafaxine) was analyzed. Included patients received a pharmacist-led medication adherence intervention and had dispensing records before and after the intervention. As a measure of adherence implementation, PDC was utilized. Analysis of the database was performed using SQL and Python. Results: Three months after the pharmacist intervention there was an increase in average PDC from 50.2% (SD: 30.1) to 66.9% (SD: 29.9) for rosuvastatin, from 50.8% (SD: 30.3) to 68% (SD: 29.3) for irbesartan and from 47.3% (SD: 28.4) to 66.3% (SD: 27.3) for desvenlafaxine. These rates declined over 12 months to 62.1% (SD: 32.0) for rosuvastatin, to 62.4% (SD: 32.5) for irbesartan and to 58.1% (SD: 31.1) for desvenla...
Valenzuela Fernandez, LM, Nicolas, C, Merigó, JM & Arroyo-Cañada, F-J 2019, 'Industrial marketing research: a bibliometric analysis (1990-2015)', Journal of Business & Industrial Marketing, vol. 34, no. 3, pp. 550-560.
View/Download from: Publisher's site
View description>>
Purpose: The purpose of this paper is to determine the most influential countries and universities that have contributed to science in the field of industrial marketing research during the period from 1990 to 2015. Design/methodology/approach: Bibliometric methodology is adopted, focusing on the most productive and influential countries and universities within this discipline for the scientific community, analyzing journals listed in the Web of Science (WoS) database from 1990 to 2015, supplemented by using VOS viewer to graph the existing bibliometric networks for every variable. Findings: There is evidence that the USA and UK remain leaders in industrial marketing research. Finland stands in third place, leaving Australia and Germany behind. Among universities, Michigan State University ranks as the leader. Research limitations/implications: The process of data classification originates from WoS. Moreover, to provide a comprehensive analytical scenario, other factors could potentially have been considered, such as the editors' commitment to leading journals, to partnerships and conferences, as well as other databases. Originality/value: This paper takes into account alternative variables that have not been considered in previous studies, such as the universities and countries in which the transcendental contributions to this field have taken place, providing a closer look, which gives rise to further discussions and more detailed studies of the histor...
Valenzuela-Fernandez, L, Merigó, JM, Lichtenthal, JD & Nicolas, C 2019, 'A Bibliometric Analysis of the First 25 Years of the Journal of Business-to-Business Marketing', Journal of Business-to-Business Marketing, vol. 26, no. 1, pp. 75-94.
View/Download from: Publisher's site
View description>>
© 2019 Taylor & Francis Group, LLC. Purpose: As part of the recognition of the 25th anniversary of the Journal of Business-to-Business Marketing (JBBM), this paper presents an overview of the JBBM through a bibliometric analysis (BA) of its content from 1992 to 2016. The analysis focuses on the most cited articles and authors, the h-index, and publications per year, among other measures typically reported in BAs. Design/Methodology/Approach: This paper begins with an introduction to the JBBM, showing its characteristics and history as well as its editorial development and subsequent positioning. This is followed by an analysis based on bibliometric methodology (BM) which considers the h-index, total citations (TC), total papers (TP), the TC/TP ratio and other similar measures. To display this information, the most cited journals, articles, authors, universities and countries, i.e., those with the greatest incidence within the JBBM, were determined. Analyzed are 329 articles, reviews and notes taken from the Scopus database for the period 1992 to 2016. Findings: At the time of this work, the completion of the journal's 25th anniversary, there is a rising trend in the number of JBBM publications per year. Researchers from the United States were the most frequent contributors to the journal, while researchers from Germany, Australia, Norway and the United Kingdom were well represented. Multiple co-authors became more frequent, and topics across the general model of business-to-business (B-to-B) marketing were typically found. Special issues covered university-level education, technology in the classroom, and the effect of the Internet on B-to-B tactical marketing. Practical Implications: After observing the different perspectives of the journal's production, we gain another objective view of the evolution of the JBBM over the prior 25 years. This approach is useful for the readers of this journal in order to obtai...
Valenzuela-Fernández, LM, Merigó, JM, Nicolas, C & Kleinaltenkamp, M 2019, 'Leaders in industrial marketing research: 25 years of analysis', Journal of Business & Industrial Marketing, vol. 35, no. 3, pp. 586-601.
View/Download from: Publisher's site
View description>>
Purpose: This paper aims to present a bibliometric overview of the leading trends of the journals in industrial marketing over 25 years. The purpose is to carry out an analysis of the contributions that the industrial marketing, or business-to-business (B2B) marketing, discipline has made to scientific investigation, presenting a ranking of the 30 most influential journals and their global evolution in five-year periods from 1992 to 2016. Moreover, this study presents the number of citations, who cites whom among the top 15 of the ranking, and self-citations. Design/methodology/approach: This study analyzes 3,587 documents classified as articles, letters, notes and reviews from Clarivate Analytics' Web of Science for the period 1992-2016, using bibliometric indicators such as the h-index, total citations (TC), total papers (TP) and TC/TP. Furthermore, this paper develops a graphical visualization of the bibliographic material by using the visualization of similarities (VOS) viewer software for constructing and visualizing bibliometric networks in leading journals, publications and keywords with bibliographic coupling and co-citation analysis. Findings: Industrial Marketing Management leads the ranking, representing 34 per cent of the total manuscripts considered in this study. The most influential journals were classified by five-year periods, and the top five for the period 2012-2016 were, in ascending order: Industrial Marketing Management, Journal of Business & Industrial Marketing, Journal of Business-to-Business Marketing, Journal of Business Research
Vallaster, C, Kraus, S, Merigó Lindahl, JM & Nielsen, A 2019, 'Ethics and entrepreneurship: A bibliometric study and literature review', Journal of Business Research, vol. 99, pp. 226-237.
View/Download from: Publisher's site
View description>>
© 2019. The entrepreneurship literature pays increasing attention to the ethical aspects of the field. However, only a fragmented understanding exists of how context influences the ethical judgment of entrepreneurs. We argue that individual socio-cultural background and organizational and societal context shape entrepreneurial ethical judgment. In our article, we contribute to contemporary literature by carving out the intersections between ethics and entrepreneurship. We do this by employing a two-step research approach: 1) we use bibliometric techniques to analyze 719 contributions in business and economics research and present a comprehensive contextual picture of ethics in entrepreneurship research by analyzing the 30 most relevant foundation articles; 2) a subsequent content analysis of the 50 most relevant academic contributions was carried out with an enlarged database to augment these findings, detailing ethics and entrepreneurship research at the individual, organizational and societal levels of analysis. By comparing the two analyses, this paper concludes by outlining possible avenues for future research.
Verma, R & Merigó, JM 2019, 'On generalized similarity measures for Pythagorean fuzzy sets and their applications to multiple attribute decision‐making', International Journal of Intelligent Systems, vol. 34, no. 10, pp. 2556-2583.
View/Download from: Publisher's site
View description>>
© 2019 Wiley Periodicals, Inc. In this paper, we develop a new and flexible method for Pythagorean fuzzy decision-making using some trigonometric similarity measures. We first introduce two new generalized similarity measures between Pythagorean fuzzy sets based on cosine and cotangent functions and prove their validity. These similarity measures include some well-known Pythagorean fuzzy similarity measures as their particular and limiting cases. The measures are demonstrated to satisfy some very elegant properties which prepare the ground for applications in different areas. Further, the work defines a generalized hybrid trigonometric Pythagorean fuzzy similarity measure and discusses its properties with particular cases. Then, based on the generalized hybrid trigonometric Pythagorean fuzzy similarity measure, a method for dealing with multiple attribute decision-making problems under a Pythagorean fuzzy environment is developed. Finally, a numerical example is given to demonstrate the flexibility and effectiveness of the developed approach in solving real-life problems.
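The cosine-based construction mentioned above can be illustrated with one common cosine similarity for Pythagorean fuzzy sets. This is a hedged sketch: the paper's generalized measures add parameters beyond this basic form, and the function name `pfs_cosine_similarity` is illustrative.

```python
import math

def pfs_cosine_similarity(A, B):
    """Average, over elements, of the cosine of the angle between the
    squared membership/non-membership vectors (mu^2, nu^2) of two
    Pythagorean fuzzy sets given as lists of (mu, nu) pairs with
    mu^2 + nu^2 <= 1."""
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        num = mu_a**2 * mu_b**2 + nu_a**2 * nu_b**2
        den = math.hypot(mu_a**2, nu_a**2) * math.hypot(mu_b**2, nu_b**2)
        total += num / den
    return total / len(A)

A = [(0.8, 0.4), (0.6, 0.5)]
B = [(0.7, 0.3), (0.5, 0.6)]
sim_ab = pfs_cosine_similarity(A, B)  # lies in (0, 1]
sim_aa = pfs_cosine_similarity(A, A)  # identical sets score exactly 1
```

The cotangent-based and hybrid measures in the paper follow the same per-element pattern with different trigonometric kernels.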
Verma, R & Merigó, JM 2019, 'Variance measures with ordered weighted aggregation operators', International Journal of Intelligent Systems, vol. 34, no. 6, pp. 1184-1205.
View/Download from: Publisher's site
View description>>
© 2019 Wiley Periodicals, Inc. The variance is a statistical measure widely used in many real-life application areas. This article makes an extensive investigation of the variance measure in the case where the uncertainty is not of a probabilistic nature. It generalizes the notion of variance as well as the theory of ordered weighted aggregation operators. First, we extend the idea of the representative value/expected value of a decision maker and develop some new deviation measures based on the ordered weighted geometric (OWG) average and ordered weighted harmonic average (OWHA) operators. These measures are developed with the consideration that the decision maker can represent his/her attitudinal expected value by using any one of the ordered weighted aggregation (OWA) operators. Further, this study proposes some deviation measures by using the generalized-OWA (GOWA) and Quasi-OWA operators as the expected value of the decision maker and discusses their particular cases. Second, a number of generalized deviation measures are introduced by taking the generalized arithmetic mean and quasi-arithmetic means for aggregation of the individual dispersion. This approach gives the user the ability to consider the deviation under different realistic scenarios. These measures lead to many interesting particular and limiting cases and enrich the use of ordered weighted aggregation operators in the variance.
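For readers unfamiliar with OWA aggregation, the basic operator the article generalizes can be sketched in a few lines. The `owa_variance` helper below is an illustrative elementary instance of an OWA-based dispersion measure, not the paper's proposed measures.

```python
def owa(values, weights):
    """Ordered weighted averaging: the i-th weight attaches to the i-th
    LARGEST argument, not to a particular argument, so the weighting
    vector encodes the decision maker's attitude (optimism/pessimism)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def owa_variance(values, weights):
    """A simple OWA-based dispersion: aggregate squared deviations from the
    OWA 'attitudinal expected value' with the same weighting vector."""
    mu = owa(values, weights)
    return owa([(v - mu) ** 2 for v in values], weights)

vals = [3.0, 7.0, 5.0]
mean_like = owa(vals, [1/3, 1/3, 1/3])   # equal weights recover the plain mean
optimistic = owa(vals, [1.0, 0.0, 0.0])  # all weight on the largest value (max)
spread = owa_variance(vals, [1/3, 1/3, 1/3])
```

Swapping the inner aggregation for geometric or harmonic means yields the OWG/OWHA variants the article builds on.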
Vijayan, MK, Chitambar, E & Hsieh, M-H 2019, 'Simple Bounds for One-shot Pure-State Distillation in General Resource Theories', Phys. Rev. A, vol. 102, no. 5, p. 052403.
View/Download from: Publisher's site
View description>>
We present bounds for distilling many copies of a pure state from an arbitrary initial state in a general quantum resource theory. Our bounds apply to operations that are able to generate no more than a $\delta$ amount of resource, where $\delta \geq 0$ is a given parameter. To maximize applicability of our upper bound, we assume little structure on the set of free states under consideration besides a weak form of superadditivity of the function $G_{min}(\rho)$, which measures the overlap between $\rho$ and the set of free states. Our bounds are given in terms of this function and the robustness of resource. Known results in coherence and entanglement theory are reproduced in this more general framework.
Vijayan, MK, Lund, AP & Rohde, PP 2019, 'A robust W-state encoding for linear quantum optics', Quantum, vol. 4, p. 303.
View/Download from: Publisher's site
View description>>
Error-detection and correction are necessary prerequisites for any scalable quantum computing architecture. Given the inevitability of unwanted physical noise in quantum systems and the propensity for errors to spread as computations proceed, computational outcomes can become substantially corrupted. This observation applies regardless of the choice of physical implementation. In the context of photonic quantum information processing, there has recently been much interest in passive linear optics quantum computing, which includes boson-sampling, as this model eliminates the highly-challenging requirements for feed-forward via fast, active control. That is, these systems are passive by definition. In usual scenarios, error detection and correction techniques are inherently active, making them incompatible with this model, arousing suspicion that physical error processes may be an insurmountable obstacle. Here we explore a photonic error-detection technique, based on W-state encoding of photonic qubits, which is entirely passive, based on post-selection, and compatible with these near-term photonic architectures of interest. We show that this W-state redundant encoding technique enables the suppression of dephasing noise on photonic qubits via simple fan-out style operations, implemented by optical Fourier transform networks, which can be readily realised today. The protocol effectively maps dephasing noise into heralding failures, with zero failure probability in the ideal no-noise limit. We present our scheme in the context of a single photonic qubit passing through a noisy communication or quantum memory channel, which has not been generalised to the more general context of full quantum computation.
Wahid-Ul-Ashraf, A, Budka, M & Musial, K 2019, 'How to predict social relationships — Physics-inspired approach to link prediction', Physica A: Statistical Mechanics and its Applications, vol. 523, pp. 1110-1129.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Link prediction in social networks has a long history in the complex networks research area. The formation of links in networks has been approached by scientists from different backgrounds, ranging from physics to computer science. To predict the formation of new links, we take measures which originate from network science and use them in place of mass and distance within the formalism of Newton's law of gravitation. The attraction force calculated in this way is treated as a proxy for the likelihood of link formation. In particular, we use three different measures of vertex centrality as mass, and 13 dissimilarity measures, including shortest path and inverse Katz score, in place of distance, leading to over 50 combinations that we evaluate empirically. Combining these through the gravitational law allows us to couple popularity with similarity, two important characteristics for link prediction in social networks. The performance of our predictors is evaluated using the Area Under the Precision–Recall Curve (AUC) for seven different real-world network datasets. The experiments demonstrate that this approach tends to outperform the setting in which vertex similarity measures like Katz are used on their own. Our approach also gives us the opportunity to combine a network's global and local properties for predicting future or missing links. Our study shows that the use of a physical law which combines node importance with measures quantifying how distant the nodes are is a promising research direction in social link prediction.
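The gravitational formalism described above reduces to a short computation. The sketch below uses degree centrality as mass and BFS shortest-path length as distance, one of the mass/distance pairings the paper evaluates; names such as `gravitational_scores` are illustrative.

```python
from collections import deque
from itertools import combinations

def bfs_distance(adj, src, dst):
    """Unweighted shortest-path length from src to dst (inf if unreachable)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return float("inf")

def gravitational_scores(adj):
    """Score each non-adjacent pair by F = m_u * m_v / d(u, v)^2, with
    degree as 'mass' and shortest-path length as 'distance'. Higher force
    is read as higher likelihood that the link will form."""
    scores = {}
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            d = bfs_distance(adj, u, v)
            if d != float("inf"):
                scores[(u, v)] = len(adj[u]) * len(adj[v]) / d**2
    return scores

# Toy undirected graph: path a-b-c-d with an extra leaf e attached to b.
adj = {"a": {"b"}, "b": {"a", "c", "e"}, "c": {"b", "d"}, "d": {"c"}, "e": {"b"}}
scores = gravitational_scores(adj)
best_pair = max(scores, key=scores.get)  # the high-degree hub b attracts d most
```

Replacing `len(adj[u])` with another centrality, or `bfs_distance` with an inverse Katz score, reproduces the other combinations the paper studies.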
Wakefield, J, Tyler, J, Dyson, LE & Frawley, JK 2019, 'Implications of Student‐Generated Screencasts on Final Examination Performance', Accounting & Finance, vol. 59, no. 2, pp. 1415-1446.
View/Download from: Publisher's site
View description>>
© 2017 AFAANZ. While educational technologies can play a vital role in students' active participation in introductory accounting subjects, learning outcome implications are less clear. We believe this is the first accounting education study examining the implications of student-generated screencast assignments. We find benefits in developing the graduate attributes of communication, creativity and multimedia skills, consistent with calls by the profession. Additionally, we find improvement in final examination performance related to the assignment topic, notably in lower performing students. The screencast assignment was optional, and the findings suggest a tailored approach to assignment design related to students' developmental needs is appropriate.
Wan Mohd, WR, Abdullah, L, Yusoff, B, Taib, CMIC & Merigo, JM 2019, 'An Integrated MCDM Model based on Pythagorean Fuzzy Sets for Green Supplier Development Program', Malaysian Journal of Mathematical Sciences, vol. 13, pp. 23-37.
View description>>
Green supplier development is becoming vital for many industrial firms for effective green supply chain management. Many suppliers are willing to invest in green supplier development programs that improve their firms' performance. The evaluation and selection of an adequate green supplier development program is complex and challenging, as there are multiple criteria and alternatives to be chosen, and these criteria involve both qualitative and quantitative information. To select the best alternative among the green supplier development programs, it is necessary to address the problem using a multi-criteria decision-making (MCDM) method. This paper proposes the integration of Pythagorean fuzzy AHP and Pythagorean fuzzy VIKOR to resolve the green supplier development program selection problem. The main goal of this study is to present a useful and reliable method to identify the most important criteria and alternatives using Pythagorean fuzzy AHP and Pythagorean fuzzy VIKOR. The first innovation is finding the weight for each criterion using Pythagorean fuzzy AHP. To do so, the crisp values evaluated by the decision makers (DMs) are presented in a pair-wise comparison matrix and converted to Pythagorean fuzzy numbers. VIKOR is then used to rank the alternative green supplier development programs and suggest which program is best. The obtained results are compared with the existing VIKOR method in the same case study. The results show that supplier training is the best alternative among the green supplier development programs. It is noted that the integration of Pythagorean fuzzy AHP and Pythagorean fuzzy VIKOR is a holistic approach to the MCDM problem.
Wang, G, Zhang, G, Choi, K-S & Lu, J 2019, 'Deep Additive Least Squares Support Vector Machines for Classification With Model Transfer', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, pp. 1527-1540.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The additive kernel least squares support vector machine (AK-LS-SVM) has been widely used in classification tasks due to its inherent advantages. For example, additive kernels work extremely well for some specific tasks, such as computer vision classification, medical research, and some specialized scenarios. Moreover, the analytical solution using AK-LS-SVM can formulate leave-one-out cross-validation error estimates in a closed form for parameter tuning, which drastically reduces the computational cost and guarantees the generalization performance, especially on small and medium datasets. However, AK-LS-SVM still faces two main challenges: 1) improving the classification performance of AK-LS-SVM and 2) saving time when performing a grid search for model selection. Inspired by the stacked generalization principle and the transfer learning mechanism, a layer-by-layer combination of AK-LS-SVM classifiers embedded with transfer learning is proposed in this paper. This new classifier, called the deep transfer additive kernel least squares support vector machine (DTA-LS-SVM), overcomes these two challenges. Also, considering that imbalanced datasets are involved in many real-world scenarios, especially in medical data analysis, the deep-transfer element is extended to compensate for this imbalance, leading to the development of another new classifier, iDTA-LS-SVM. In the hierarchical structure of both DTA-LS-SVM and iDTA-LS-SVM, each layer has an AK-LS-SVM, and the predictions from the previous layer act as an additional input feature for the current layer. Importantly, transfer learning is also embedded to guarantee generalization consistency between adjacent layers. Moreover, both iDTA-LS-SVM and DTA-LS-SVM can ensure the minimal leave-one-out error by using the proposed fast leave-one-out cross-validation strategy on the training set in each layer. We compared the proposed classifiers DTA-LS-SVM and iDTA-LS-SVM with the traditional LS-...
Wang, J, Zhang, N & Lu, H 2019, 'A novel system based on neural networks with linear combination framework for wind speed forecasting', Energy Conversion and Management, vol. 181, pp. 425-442.
View/Download from: Publisher's site
View description>>
The absence of accurate and stable wind speed prediction remains a major obstacle to the rational planning, scheduling, and maintenance of wind power generation. An extensive body of methods that aim to enhance the accuracy of wind speed prediction has been proposed. However, the majority of previous studies have tended to emphasize the structural improvement of individual forecasting models without considering the validity of data preprocessing. This can result in poor forecasting accuracy due to their failure to fully capture the effective information in the wind speed data. A new approach is proposed in this paper that successfully combines a data preprocessing technique with a linear combination method. Further, a new neural network framework is employed to determine the required combination weights to ensure improved prediction performance, thereby overcoming the drawback of the low accuracy of individual prediction models. Six wind speed datasets from Penglai are used as expository cases to analyze the forecasting validity and stability of the developed model. It can be concluded from the experiments that the combined forecasting system outperforms the individual models and the traditional linear combination models with higher accuracy and stronger stability.
Wang, M, Xu, C, Chen, X, Hao, H, Zhong, L & Yu, S 2019, 'Differential Privacy Oriented Distributed Online Learning for Mobile Social Video Prefetching', IEEE Transactions on Multimedia, vol. 21, no. 3, pp. 636-651.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The ever fast-growing mobile social video traffic has motivated the urgent requirement of alleviating backbone pressure while ensuring the quality of the user experience. Mobile video prefetching caches videos that will be accessed in the future at the edge, which has become a promising solution for traffic offloading and delay reduction. However, providing high-performance prefetching remains problematic in the presence of highly dynamic mobile users' viewing behaviors and continuously generated video content. Besides, given that making prefetching decisions requires viewing history, which is sensitive, increasing privacy concerns should also be considered. In this paper, we propose a differential privacy oriented distributed online learning method for mobile social video prefetching (DPDL-SVP). Through a large-scale data analysis based on one of the most popular online social network sites, WeiBo.cn, we reveal that users' viewing behaviors have a strong relation with video preference, content popularity, and social interactions. We then formulate the prefetching problem as an online convex optimization based on these three factors. Furthermore, the problem is divided into two subproblems, and we implement a distributed algorithm to solve each of them with differential privacy. The performance bound of the proposed online algorithms is also theoretically proved. We conduct a series of simulations based on real viewing traces to evaluate the performance of DPDL-SVP. Evaluation results show that our proposed algorithms achieve superior performance in terms of prediction accuracy, delay reduction, and scalability.
Wang, W, Zhang, G & Lu, J 2019, 'Hierarchy Visualization for Group Recommender Systems', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 6, pp. 1152-1163.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Most recommender systems (RSs), especially group RSs, focus on methods and accuracy but lack explanations; hence users find them difficult to trust. We present a hierarchy visualization method for group recommender (HVGR) systems to provide visual presentation and intuitive explanation. We first use a hierarchy graph to organize all the entities using nodes (e.g., neighbor nodes and recommendation nodes) and illustrate the overall recommendation process using edges. Second, a pie chart is attached to every entity node in which each slice represents a single member, which makes it easy to track the influence of each member on a specific entity. HVGR can be extended to adapt to different pseudouser modeling methods by resizing group member nodes and pseudouser nodes. It can also be easily extended to individual RSs through the use of a single-member group. An implementation has been developed and its feasibility tested using a real data set.
Wang, X, Liu, Y, Lu, J, Xiong, F & Zhang, G 2019, 'TruGRC: Trust-Aware Group Recommendation with Virtual Coordinators', Future Generation Computer Systems, vol. 94, pp. 224-236.
View/Download from: Publisher's site
Wang, X, Xu, X, Sheng, QZ, Wang, Z & Yao, L 2019, 'Novel Artificial Bee Colony Algorithms for QoS-Aware Service Selection', IEEE Transactions on Services Computing, vol. 12, no. 2, pp. 247-261.
View/Download from: Publisher's site
Wang, Y, Feng, C, Chen, L, Yin, H, Guo, C & Chu, Y 2019, 'User identity linkage across social networks via linked heterogeneous network embedding', World Wide Web, vol. 22, no. 6, pp. 2611-2632.
View/Download from: Publisher's site
View description>>
© 2018 Springer Science+Business Media, LLC, part of Springer Nature. User identity linkage has important implications in many cross-network applications, such as user profile modeling, recommendation and link prediction across social networks. To discover accurate cross-network user correspondences, it is a critical prerequisite to find effective user representations. While structural and content information describe users from different perspectives, there is a correlation between the two aspects of information. For example, a user who follows a celebrity tends to post content about the celebrity as well. Therefore, the projections of structural and content information of a user should be as close to each other as possible, which inspires us to fuse the two aspects of information in a unified space. However, owing to the information heterogeneity, most existing methods extract features from content and structural information separately, instead of describing them in a unified way. In this paper, we propose a Linked Heterogeneous Network Embedding model (LHNE) to learn comprehensive representations of users by collectively leveraging structural and content information in a unified framework. We first model the topics of user interests from content information to filter out noise. Next, cross-network structural and content information are embedded into a unified space by jointly capturing the friend-based and interest-based user co-occurrence in intra-network and inter-network settings, respectively. Meanwhile, LHNE learns user transfer and topic transfer for enhancing information exchange across networks. Empirical results show LHNE outperforms the state-of-the-art methods on both real social network and synthetic datasets and can work well even with little or no structural information.
Wang, Y, Sun, Y, Su, S, Tian, Z, Li, M, Qiu, J & Wang, X 2019, 'Location Privacy in Device-Dependent Location-Based Services: Challenges and Solution', Computers, Materials & Continua, vol. 59, no. 3, pp. 983-993.
View/Download from: Publisher's site
View description>>
Copyright © 2019 Tech Science Press. With the evolution of location-based services (LBS), a new type of LBS has gained a lot of attention and implementation; we name this kind of LBS Device-Dependent LBS (DLBS). In DLBS, the service provider (SP) not only sends information according to the user's location; more significantly, it also provides a service device which is carried by the user. DLBS has been successfully practised in some large cities around the world, for example, the shared bicycles in Beijing and London. In this paper, we, for the first time, blow the whistle on the new location privacy challenges caused by DLBS, since the service device is able to perform localization without the permission of the user. To counter these threats, we design a service architecture along with a credit system between the DLBS provider and the user. The credit system ties the usability of the DLBS device to curious behaviour regarding the user's location privacy: the DLBS provider has to sacrifice revenue in order to gain extra location information from its device. We simulate our proposed scheme and the results confirm its effectiveness.
Wu, L, Sun, Q, Wang, X, Wang, J, Yu, S, Zou, Y, Liu, B & Zhu, Z 2019, 'An Efficient Privacy-Preserving Mutual Authentication Scheme for Secure V2V Communication in Vehicular Ad Hoc Network', IEEE Access, vol. 7, pp. 55050-55063.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Recent years have witnessed the booming of the new mobility Intelligent Transportation System, especially the development of Vehicular Ad Hoc Networks (VANETs), which bring convenience and a good experience for drivers. Unfortunately, VANETs suffer from potential security and privacy issues due to their inherent openness. In the past few years, to address security and privacy-preserving problems, many identity-based privacy-preserving authentication schemes have been proposed. However, we found that these schemes fail to meet the requirements of user privacy protection and are vulnerable to attacks or have high computational complexity. Hence, we focus on enhancing privacy preservation via authentication while achieving better performance. In this paper, we first describe the vulnerabilities of a previous scheme. Furthermore, to enhance privacy protection and achieve better performance, we propose an efficient privacy-preserving mutual authentication protocol for secure V2V communication in VANETs. Through security analysis and comparison, we formally demonstrate that our scheme can accomplish its security goals under a dynamic topographical scenario compared with the previous scheme. Finally, the efficiency of the scheme is shown by performance evaluation. The results show that our proposed scheme is computationally efficient compared with the previously proposed privacy-preserving authentication scheme.
Wu, P, Li, H, Merigo, JM & Zhou, L 2019, 'Integer Programming Modeling on Group Decision Making With Incomplete Hesitant Fuzzy Linguistic Preference Relations', IEEE Access, vol. 7, pp. 136867-136881.
View/Download from: Publisher's site
Wu, W, Li, B, Chen, L, Zhang, C & Yu, PS 2019, 'Improved Consistent Weighted Sampling Revisited', IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 12, pp. 2332-2345.
View/Download from: Publisher's site
View description>>
Min-Hash is a popular technique for efficiently estimating the Jaccard similarity of binary sets. Consistent Weighted Sampling (CWS) generalizes the Min-Hash scheme to sketch weighted sets and has drawn increasing interest from the community. Due to its constant-time complexity, independent of the values of the weights, Improved CWS (ICWS) is considered the state-of-the-art CWS algorithm. In this paper, we revisit ICWS and analyze its underlying mechanism to show that there actually exists dependence between the two components of the hash-code produced by ICWS, which violates the condition of independence. To remedy the problem, we propose an Improved ICWS (I2CWS) algorithm which not only shares the same theoretical computational complexity as ICWS but also abides by the required conditions of the CWS scheme. The experimental results on a number of synthetic datasets and real-world text datasets demonstrate that our I2CWS algorithm can estimate the Jaccard similarity more accurately, and also competes with or outperforms the compared methods, including ICWS, in classification and top-K retrieval, after relieving the underlying dependence.
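The Min-Hash baseline that CWS generalizes can be sketched directly. In this sketch, salted tuple hashing stands in for random permutations of the universe; it illustrates the Jaccard estimator itself, not the ICWS/I2CWS algorithms.

```python
import random

def minhash_signature(items, num_hashes=256, seed=7):
    """Min-Hash signature of a binary set: for each of num_hashes salted
    hash functions (a stand-in for random permutations), record the
    minimum hash value over the set's elements."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimate_jaccard(sig_a, sig_b):
    """The fraction of colliding signature slots estimates |A∩B| / |A∪B|,
    since the minimum under a random permutation lands in the intersection
    with exactly that probability."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = set(range(0, 80))      # |A∩B| = 40, |A∪B| = 120, true Jaccard = 1/3
B = set(range(40, 120))
est = estimate_jaccard(minhash_signature(A), minhash_signature(B))
```

CWS replaces "minimum over elements" with a weight-aware sampling so that the collision probability becomes the generalized (weighted) Jaccard similarity.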
Xia, L-Y, Wang, Q-Y, Cao, Z & Liang, Y 2019, 'Descriptor Selection Improvements for Quantitative Structure-Activity Relationships', International Journal of Neural Systems, vol. 29, no. 09, pp. 1950016-1950016.
View/Download from: Publisher's site
View description>>
Molecular descriptor selection is an essential procedure for improving a predictive quantitative structure–activity relationship (QSAR) model. However, within the QSAR model, there are a number of redundant, noisy and irrelevant descriptors. In this study, we propose a novel descriptor selection framework using self-paced learning (SPL) via sparse logistic regression (LR) with a Logsum penalty (SPL-Logsum), which can simultaneously and adaptively identify simple and complex samples and avoid over-fitting. SPL is inspired by the learning process of humans and animals, which gradually learn from simple to complex samples when training models, and the Logsum-penalized LR helps to select a small subset of significant molecular descriptors for improving QSAR models. Experimental results on some simulations and three public QSAR datasets show that our proposed SPL-Logsum framework outperforms other existing sparse methods regarding the area under the curve, sensitivity, specificity, accuracy, and [Formula: see text]-values.
Xia, Q, Xu, Z, Liang, W, Yu, S, Guo, S & Zomaya, AY 2019, 'Efficient Data Placement and Replication for QoS-Aware Approximate Query Evaluation of Big Data Analytics', IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 12, pp. 2677-2691.
View/Download from: Publisher's site
View description>>
Enterprise users at different geographic locations generate large volumes of data that are stored at different geographic datacenters. These users may also perform big data analytics on the stored data to identify valuable information for making strategic decisions. However, it is well known that performing big data analytics on data in geographically distributed datacenters is usually time-consuming and costly. In some delay-sensitive applications, the query result may become useless if answering a query takes too long. Instead, users may sometimes be interested only in timely approximate rather than exact query results. In such approximate query evaluation, applications must either sacrifice timeliness to get more accurate evaluation results or tolerate an evaluation result with a guaranteed error bound, obtained from analyzing samples of the data, in order to meet their stringent timelines. In this paper, we study quality-of-service (QoS)-aware data replication and placement for approximate query evaluation of big data analytics in a distributed cloud, where the original (source) data of a query is distributed across different geo-distributed datacenters. We focus on the problems of placing data samples of the source data at strategic datacenters to meet the stringent query delay requirements of users, by exploring a non-trivial trade-off between the cost of query evaluation and the error bound of the evaluation result. We first propose an approximation algorithm with a provable approximation ratio for a single approximate query. We then develop an efficient heuristic algorithm for evaluating a set of approximate queries with the aim of minimizing the evaluation cost while meeting the delay requirements of these queries. We finally demonstrate the effectiveness and efficiency of the proposed algorithms through both experimental simulations and implementations in a real test-bed employing real datasets. Experimental results show that the ...
Xiao, R, Ren, W, Zhu, T & Choo, K-KR 2019, 'A Mixing Scheme Using a Decentralized Signature Protocol for Privacy Protection in Bitcoin Blockchain', IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1.
View/Download from: Publisher's site
View description>>
Bitcoin transactions are not truly anonymous, as an attacker can attempt to reveal a user's private information by tracing related transactions. Existing approaches to protecting privacy (e.g., mixcoin, shuffle, and blinded token) suffer from a number of limitations. For example, some approaches assume the existence of a trusted third party, rely on exchanges among various currencies, or broadcast sensitive details before mixing. Therefore, there is a real risk of a privacy breach or of losing tokens. Thus, in this paper, we design a mixing scheme with a decentralized signature protocol, which does not rely on a third party or require a transaction fee. Specifically, our scheme uses a negotiation process, monitored by the participants, to guarantee the transaction details. Furthermore, the scheme includes a signature protocol based on the ElGamal signature protocol and secret sharing. The proposed scheme is then proven secure.
Xiong, P, Zhu, D, Zhang, L, Ren, W & Zhu, T 2019, 'Optimizing rewards allocation for privacy-preserving spatial crowdsourcing', Computer Communications, vol. 146, pp. 85-94.
View/Download from: Publisher's site
View description>>
Rewards allocation is one of the key issues for ensuring a high task acceptance rate in spatial crowdsourcing applications. Generally, workers who participate in a crowdsourcing project are required to disclose their locations, which may lead to serious privacy threats. Unfortunately, providing a rigid privacy guarantee is incompatible with ensuring a high task acceptance rate in most existing crowdsourcing solutions. Hence, this paper proposes a crowdsourcing framework based on optimized reward allocation strategies. The key idea is to tune the reward for performing each task to the workers' preferences so as to attain a high acceptance rate. The first step in the framework is to interrogate the workers' preferences using a cryptographic protocol that fully preserves the location privacy of the workers. Based on those preferences, two different approaches to reward assignment are proposed to ensure the rewards are distributed optimally. A theoretical analysis of the privacy protection inherent in the framework demonstrates that the proposed framework guarantees the workers' location privacy against adversaries, including the requester and the crowdsourcing server. Further, experiments based on real-world datasets show that the proposed strategies outperform existing solutions in terms of task acceptance rates.
Xu, C, Han, Z, Wang, Q, Zhao, G & Yu, S 2019, 'Modelling the impact of interference on the energy efficiency of WLANs', Concurrency and Computation: Practice and Experience, vol. 31, no. 17, pp. e5217-e5217.
View/Download from: Publisher's site
View description>>
The high-bandwidth demands from a variety of applications drive dense wireless local area network (WLAN) deployments, which result in complicated wireless network scenes with serious co-channel interference and energy waste. In this paper, to reveal the interactions between interference and energy efficiency, we propose an interference-energy efficiency (IFEE) model to quantify the impact of interference on the energy efficiency of 802.11 access point (AP) devices. Firstly, we introduce the channel separation and the difference of received signal strength indication (D-RSSI) as two indicators to extend the classical signal to interference plus noise ratio (SINR) notion and the rate-adaptation mechanism. Then, these two parameters are integrated into the energy consumption model to establish the IFEE model. Lastly, we conduct extensive measurements with five typical WiFi interference scenes in a real network to validate the effectiveness of our model. The comparisons between simulation results and real data demonstrate that the proposed IFEE model can quantify interference and energy efficiency with high accuracy, and can be used for wireless network optimization and protocol design.
Xu, C, Xiong, Z, Zhao, G & Yu, S 2019, 'An Energy-Efficient Region Source Routing Protocol for Lifetime Maximization in WSN', IEEE Access, vol. 7, pp. 135277-135289.
View/Download from: Publisher's site
View description>>
As the sensor layer of the Internet of Things (IoT), enormous numbers of sensor nodes are densely deployed in hostile environments to monitor and sense changes in the physical space. Since sensor nodes are driven by limited-power batteries, it is very difficult and expensive for wireless sensor networks (WSNs) to extend the network lifetime. To achieve reliable data transmission in WSNs, an energy-efficient routing protocol is a crucial issue in extending the network lifetime. However, traditional routing protocols usually propagate throughout the whole network to discover a reliable route, or employ some cluster heads to undertake data transmission for other nodes, both of which require a large amount of energy. In this paper, to maximize the network lifetime of the WSN, we propose a novel energy-efficient region source routing protocol (referred to as ER-SR). In ER-SR, a distributed energy region algorithm is proposed to dynamically select the nodes with high residual energy in the network as source routing nodes. The source routing nodes then calculate the optimal source routing path for each common node, which enables only some of the nodes to participate in the routing process and balances the energy consumption of sensor nodes. Furthermore, to minimize the energy consumption of data transmission, we propose an effective distance-based ant colony optimization algorithm to search for the globally optimal transmission path for each node. Simulation results demonstrate that ER-SR exhibits higher energy efficiency, and achieves moderate performance improvements in network lifetime, packet delivery ratio, and delivery delay, compared with other routing protocols in WSNs.
Xue, J-X, Sun, C-Y, Cheng, J-J, Xu, M-L, Li, Y-F & Yu, S 2019, 'Wheat ear growth modeling based on a polygon', Frontiers of Information Technology & Electronic Engineering, vol. 20, no. 9, pp. 1175-1184.
View/Download from: Publisher's site
View description>>
Visual inspection of wheat growth has been a useful tool for understanding and implementing agricultural techniques, and a way for economists and policy makers to accurately predict the growth status of wheat yields. In this paper, we present a polygonal approach for modeling the growth process of wheat ears. The grain, lemma, and palea of wheat ears are represented as editable polygonal models, which can be re-polygonized to detect collisions during the growth process. We then rotate and move the colliding grain to resolve the collision problem. A linear interpolation and a spherical interpolation are developed to simulate the growth of wheat grain during the heading and grain-growth stages. Experimental results show that the method has a good modeling effect and can realize the modeling of wheat ears at different growth stages.
Xue, S, Lu, J & Zhang, G 2019, 'Cross-domain network representations', Pattern Recognition, vol. 94, pp. 135-148.
View/Download from: Publisher's site
Yamasaki, H, Vijayan, MK & Hsieh, M-H 2019, 'Hierarchy of quantum operations in manipulating coherence and entanglement', Quantum, vol. 5, p. 480.
View description>>
Quantum resource theory under different classes of quantum operations advances multiperspective understandings of inherent quantum-mechanical properties, such as quantum coherence and quantum entanglement. We establish hierarchies of different operations for manipulating coherence and entanglement in distributed settings, where at least one of the two spatially separated parties is restricted from generating coherence. In these settings, we introduce new classes of operations and also characterize the maximal ones, i.e., the resource-non-generating operations, progressing beyond existing studies on incoherent versions of local operations and classical communication and those of separable operations. The maximal operations admit a semidefinite-programming formulation useful for numerical algorithms, whereas the existing operations do not. To establish the hierarchies, we prove a sequence of inclusion relations among the operations by clarifying tasks where separation of the operations appears. We also demonstrate an asymptotically non-surviving separation of the operations in the hierarchy in terms of the performance of the task of assisted coherence distillation, where a separation in a one-shot scenario vanishes in the asymptotic limit. Our results serve as fundamental analytical and numerical tools to investigate the interplay between coherence and entanglement under different operations in resource theory.
Yang, L, Zhi, Y, Wei, T, Yu, S & Ma, J 2019, 'Inference attack in Android Activity based on program fingerprint', Journal of Network and Computer Applications, vol. 127, pp. 92-106.
View/Download from: Publisher's site
View description>>
Privacy breaches have always been an important threat to mobile security. Recent studies show that an attacker can infer users' private information through side channels, such as runtime memory use and network usage. For side-channel attacks, malicious applications generally run in parallel in the background with a foreground application and stealthily collect side-channel information. In this paper, we analyze the relationship between memory changes and Activity transitions, then use side-channel information to label an Activity and build an Activity signature database. We show how to use runtime memory exposure to infer the Activity transitions of the current application and use other side channels to infer its Activity interface. We demonstrate the effectiveness of the attacks with 5 popular applications that contain sensitive user information, and successfully inferred most of the Activity transitions and Activity interface processes. Moreover, we propose a protection scheme which can effectively resist side-channel attacks.
Yang, M, Zhu, T, Liang, K, Zhou, W & Deng, RH 2019, 'A blockchain-based location privacy-preserving crowdsensing system', Future Generation Computer Systems, vol. 94, pp. 408-418.
View/Download from: Publisher's site
View description>>
With the support of portable electronic devices and crowdsensing, a new class of mobile applications based on the Internet of Things (IoT) is emerging. Crowdsensing enables workers with mobile devices to travel to specified locations and collect data, then send it back to the requester for rewards. However, the majority of existing crowdsensing systems are based on centralized servers, which are prone to attack, intrusion, and manipulation. Further, during the process of transmitting information to and from the service server, the worker's location is usually exposed. This raises the potential risk of a privacy infringement. In this paper, we first identify three ways locations can be disclosed in traditional crowdsensing systems. Then, we propose a novel solution, dubbed a blockchain privacy-preserving crowdsensing system, to address these privacy problems. The proposed system not only protects the privacy of worker locations but also increases the success rate of completing the assigned task. Specifically, the system entails a rewards-based task assignment process that, essentially, markets the given assignment and uses the anonymized characteristics of blockchain technology to hide the identity information of users. To prevent attacks through re-identification, we have introduced a private blockchain to distribute the worker's transaction records.
Yang, W, Wang, J, Lu, H, Niu, T & Du, P 2019, 'Hybrid wind energy forecasting and analysis system based on divide and conquer scheme: A case study in China', Journal of Cleaner Production, vol. 222, pp. 942-959.
View/Download from: Publisher's site
View description>>
Wind energy, acknowledged as a promising form of renewable energy and the fastest-growing clean method for electricity generation, has attracted considerable attention from many scientists and researchers in recent decades. However, wind energy forecasting is still a challenging task owing to its inherent non-linearity and randomness. Therefore, this study develops a hybrid wind energy forecasting and analysis system, including a deterministic forecasting module and an uncertainty analysis module, to mitigate the challenges in existing studies. In particular, these challenges are as follows: (1) it is difficult to guarantee that the data characteristics underlying the time series are effectively extracted; (2) in the modeling of each subseries, i.e., when the original data are decomposed into several time series, forecasting accuracy and stability are not simultaneously considered, and thus the subseries are not properly modeled; and (3) the best function to perform deterministic forecasting and uncertainty analysis based on the forecaster of each subseries is unknown. The developed hybrid system consists of three steps: first, data preprocessing is conducted to capture and mine the main features of the wind energy time series and weaken the negative effects of noise; second, multi-objective optimization is proposed to achieve the forecasting of each subseries with improvements in accuracy and stability; finally, a search for the best function, which obtains the deterministic forecasting and uncertainty analysis results using an optimized extreme learning machine based on different modeling objectives, is conducted. Experimental simulations are performed using data from three sites in a real wind farm, and indicate that the developed system performs better in engineering applications than other methods. Furthermore, this system could not only be used as an effective tool for wind energy deterministic forecasting and uncertainty ...
Yang, Z, Li, X, Cao, Z & Li, J 2019, 'Q-rung Orthopair Normal Fuzzy Aggregation Operators and Their Application in Multi-Attribute Decision-Making', Mathematics, vol. 7, no. 12, pp. 1142-1142.
View/Download from: Publisher's site
View description>>
The q-rung orthopair fuzzy set (q-ROFS) is a powerful tool for describing uncertain information in subjective decision-making, but it cannot express the many objective phenomena that obey a normal distribution. For this situation, by combining the q-ROFS with the normal fuzzy number, we proposed a new concept: the q-rung orthopair normal fuzzy (q-RONF) set. Firstly, we defined the concept, operational laws, score function, and accuracy function of the q-RONF set. Secondly, we presented some new aggregation operators to aggregate q-RONF information, including the q-RONF weighted operators, the q-RONF ordered weighted operators, the q-RONF hybrid operator, and the generalized forms of these operators. Furthermore, we discussed some desirable properties of the above operators, such as monotonicity, commutativity, and idempotency. Meanwhile, we applied the proposed operators to the multi-attribute decision-making (MADM) problem and established a novel MADM method. Finally, the proposed MADM method was applied to a numerical example on enterprise partner selection. The numerical results showed that the proposed method can effectively handle objective phenomena that obey a normal distribution together with complicated fuzzy information, and has high practicality. The results of the comparative and sensitivity analyses indicated that our method, based on q-RONF aggregation operators, has a stronger information aggregation ability than existing methods and is more suitable and flexible for MADM problems.
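For context on the q-ROFS formalism that this entry extends: a q-rung orthopair fuzzy number is a pair (μ, ν) of membership and non-membership degrees constrained by μ^q + ν^q ≤ 1, and a commonly used score function for ranking such numbers is s = μ^q − ν^q. The sketch below illustrates only these standard q-ROFS definitions from the literature, not the paper's new q-RONF operators.

```python
def is_valid_qrofn(mu, nu, q):
    # A q-rung orthopair fuzzy number requires mu**q + nu**q <= 1,
    # so raising q admits pairs that narrower rungs reject.
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**q + nu**q <= 1.0

def score(mu, nu, q):
    # A widely used score function for ranking q-ROFNs.
    return mu**q - nu**q

# (0.8, 0.6) is inadmissible for q = 1 (intuitionistic case: 0.8 + 0.6 > 1)
# but admissible for q = 3, since 0.8**3 + 0.6**3 = 0.728 <= 1.
```

This widening of the admissible region as q grows is exactly what makes q-ROFS more expressive than intuitionistic or Pythagorean fuzzy sets.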
Yang, Z, Xiong, G, Cao, Z, Li, Y & Huang, L 2019, 'A Decision Method for Online Purchases Considering Dynamic Information Preference Based on Sentiment Orientation Classification and Discrete DIFWA Operators', IEEE Access, vol. 7, pp. 77008-77026.
View/Download from: Publisher's site
Yao, L, Wang, X, Sheng, QZ, Dustdar, S & Zhang, S 2019, 'Recommendations on the Internet of Things: Requirements, Challenges, and Directions', IEEE Internet Computing, vol. 23, no. 3, pp. 46-54.
View/Download from: Publisher's site
View description>>
The Internet of Things (IoT) is accelerating the growth of data available on the Internet, which makes traditional search paradigms incapable of unearthing the information that people need from massive and deep resources. Furthermore, given the dynamic nature of the organizations, social structures, and devices involved in IoT environments, intelligent and automated approaches become critical to support decision makers with the knowledge derived from the vast amount of information available through IoT networks. Indeed, IoT calls for an effective and efficient paradigm of proactive discovery rather than reactive searching. This paper discusses some of the important requirements and key challenges in enabling effective and efficient thing-of-interest recommendation and provides an array of new perspectives on IoT recommendation.
Yao, S, Chen, J, He, K, Du, R, Zhu, T & Chen, X 2019, 'PBCert: Privacy-Preserving Blockchain-Based Certificate Status Validation Toward Mass Storage Management', IEEE Access, vol. 7, pp. 6117-6128.
View/Download from: Publisher's site
View description>>
In recent years, the vulnerabilities of conventional public key infrastructure have been exposed by real-world attacks, such as a certificate authority's single point of failure or the leakage of clients' private information. To address the first issue, one type of approach introduces multiple entities to assist with certificate operations, including registration, update, and revocation. However, it is computationally inefficient. Another type makes the certificate information publicly visible by bringing in log servers. Nevertheless, the data synchronization among log servers may lead to network latency. Based on the second approach, blockchain-based public key infrastructure schemes have been proposed. In these schemes, all certificate operations are stored in the blockchain for public audit. However, the storage of revoked certificates' status warrants attention, especially in settings with massive numbers of certificates. In addition, the target web server that a client wants to access is exposed during certificate status validation. In this paper, we propose a privacy-preserving blockchain-based certificate status validation scheme called PBCert to solve these two issues. First, we separate the control and storage planes for revoked certificates. Only the minimal control information (namely, certificate hashes and the related operation block height) is stored in the blockchain, with external data stores holding the detailed information about all revoked certificates. Second, we design an obscured response to clients' certificate status queries for the purpose of privacy preservation. Through security analysis and experimental evaluation, we show that our scheme is practical.
Yazdani, M, Babagolzadeh, M, Kazemitash, N & Saberi, M 2019, 'Reliability estimation using an integrated support vector regression – variable neighborhood search model', Journal of Industrial Information Integration, vol. 15, pp. 103-110.
View/Download from: Publisher's site
View description>>
As failure and reliability predictions play a significant role in production systems, they have caught the attention of researchers. In this study, Support Vector Regression (SVR), a powerful machine-learning method, is developed as a way of forecasting reliability. Generally, SVR is applied in many research settings, and the results illustrate that SVR is a successful method for solving non-linear regression problems. However, tuning the SVR parameters is a vital task for performing an accurate reliability estimation. We propose a variable neighborhood search (VNS) for continuous spaces, including some simple but efficient shaking and local search operators, to tune the SVR parameters, and create a novel SVR-VNS hybrid system to improve the reliability estimation accuracy. The proposed method is validated with a benchmark from the literature and compared with conventional techniques, namely RBF (Gaussian), AR (autoregressive), MLP (logistic), MLP (Gaussian), and SVMG (SVM with a genetic algorithm). The experimental results indicate that the proposed model achieves superior performance in reliability prediction compared with the other techniques.
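The shake-then-local-search loop at the heart of VNS is compact enough to sketch. The toy below minimizes a one-dimensional quadratic standing in for the SVR validation error over a single hyperparameter; the neighborhood radii, iteration counts, and objective are arbitrary choices for illustration, and the actual SVR coupling from the paper is omitted.

```python
import random

def vns_minimize(f, x0, radii=(1.0, 0.5, 0.1), iters=100, seed=7):
    """Minimal VNS sketch: shake within the k-th neighborhood, refine with
    a greedy local search, return to the smallest neighborhood on success."""
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    k = 0
    for _ in range(iters):
        # Shaking: jump to a random point in the k-th neighborhood.
        x = best_x + rng.uniform(-radii[k], radii[k])
        # Local search: greedy steps at the neighborhood's own scale.
        step = radii[k] / 10
        for _ in range(20):
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
        if f(x) < best_f:
            best_x, best_f = x, f(x)
            k = 0                       # improvement: restart from smallest
        else:
            k = (k + 1) % len(radii)    # no improvement: widen the search
    return best_x, best_f

# Toy objective standing in for SVR validation error over one hyperparameter.
x_opt, f_opt = vns_minimize(lambda x: (x - 3.0) ** 2, x0=0.0)
```

Cycling through neighborhood sizes is what lets VNS escape the basins that a single fixed-step local search would get stuck in.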
Ye, D, He, Q, Wang, Y & Yang, Y 2019, 'An Agent-Based Integrated Self-Evolving Service Composition Approach in Networked Environments', IEEE Transactions on Services Computing, vol. 12, no. 6, pp. 880-895.
View/Download from: Publisher's site
View description>>
Service composition is an important research problem in service computing systems; it combines simple and individual services into composite services to fulfill users' complex requirements. Service composition usually consists of four stages, i.e., service discovery, candidate selection, service negotiation, and task execution. In self-organising systems, there is a fifth stage of service composition: self-evolution. Most existing works study only some of the five stages. However, these five stages should be studied systematically so as to develop an integrated and efficient service composition approach. Against this background, this paper proposes an agent-based integrated self-evolving service composition approach. This approach systematically takes the five stages of service composition into consideration. It is also decentralised and self-evolvable. Experimental results demonstrate that the proposed approach can achieve almost the same success rates while using much less communication overhead and time consumption in comparison with three existing representative approaches.
Yin, K, Laranjo, L, Tong, HL, Lau, AYS, Kocaballi, AB, Martin, P, Vagholkar, S & Coiera, E 2019, 'Context-Aware Systems for Chronic Disease Patients: Scoping Review', Journal of Medical Internet Research, vol. 21, no. 6, pp. e10896-e10896.
View/Download from: Publisher's site
Yin, R, Li, K, Zhang, G & Lu, J 2019, 'A deeper graph neural network for recommender systems', Knowledge-Based Systems, vol. 185, pp. 105020-105020.
View/Download from: Publisher's site
View description>>
Interaction data in recommender systems are usually represented by a bipartite user–item graph whose edges represent interaction behavior between users and items. The data sparsity problem, which is common in recommender systems, is the result of insufficient interaction data for link prediction on graphs. The data sparsity problem can be alleviated by extracting more interaction behavior from the bipartite graph; however, stacking multiple layers leads to over-smoothing, in which case all nodes converge to the same value. To address this issue, we propose in this paper a deeper graph neural network that can predict links on a bipartite user–item graph using information propagation. An attention mechanism is introduced into our method to address the problem of variable-size inputs for each node on a bipartite graph. Our experimental results demonstrate that our proposed method outperforms five baselines, suggesting that the extracted interactions help to alleviate the data sparsity problem and improve recommendation accuracy.
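The over-smoothing effect this abstract refers to is easy to reproduce: repeatedly averaging each node's features with its neighbors (the skeleton of a GNN layer without learnable weights) drives all node values toward a common constant. The toy graph and feature values below are arbitrary, illustrative choices, not the paper's model.

```python
def propagate(values, neighbors):
    # One mean-aggregation step over the graph (self-loop included),
    # mimicking a GNN layer with no learnable transformation.
    return [sum([values[i]] + [values[j] for j in neighbors[i]])
            / (1 + len(neighbors[i]))
            for i in range(len(values))]

# A 4-node path graph: 0 - 1 - 2 - 3, with one scalar feature per node.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [1.0, 0.0, 0.0, 2.0]

spread0 = max(x) - min(x)      # initial spread of node features
for _ in range(50):            # "stacking" 50 layers
    x = propagate(x, neighbors)
spread50 = max(x) - min(x)     # features have nearly collapsed together
```

After 50 rounds the spread shrinks from 2.0 to a tiny residual, which is why deeper GNNs need mechanisms (such as the attention used in this paper) to keep node representations distinguishable.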
Ying, H, Wu, J, Xu, G, Liu, Y, Liang, T, Zhang, X & Xiong, H 2019, 'Time-aware metric embedding with asymmetric projection for successive POI recommendation', World Wide Web, vol. 22, no. 5, pp. 2209-2224.
View/Download from: Publisher's site
View description>>
Successive Point-of-Interest (POI) recommendation aims to recommend the next POIs for a given user based on the user's current location. Indeed, with the rapid growth of Location-Based Social Networks (LBSNs), successive POI recommendation has become an important and challenging task, since it can help meet users' dynamic interests based on their recent check-in behaviors. While some efforts have been made on this task, most of them do not capture the following properties: 1) the transition between consecutive POIs in user check-in sequences is asymmetric, yet existing approaches usually assume that the forward and backward transition probabilities between a POI pair are symmetric; and 2) users usually prefer different successive POIs at different times, but most existing studies do not consider this dynamic factor. To this end, in this paper, we propose a time-aware metric embedding approach with asymmetric projection (referred to as MEAP-T) for successive POI recommendation, which takes the above two properties into consideration. In addition, we exploit three latent Euclidean spaces to project the POI-POI, POI-user, and POI-time relationships. Finally, the experimental results on two real-world datasets show that MEAP-T outperforms the state-of-the-art methods in terms of both precision and recall.
Yu, L, Zeng, S, Merigó, JM & Zhang, C 2019, 'A new distance measure based on the weighted induced method and its application to Pythagorean fuzzy multiple attribute group decision making', International Journal of Intelligent Systems, vol. 34, no. 7, pp. 1440-1454.
View/Download from: Publisher's site
View description>>
This paper investigates a novel induced ordered weighted averaging (IOWA) distance operator and its application in Pythagorean fuzzy (PF) multi-attribute group decision making (MAGDM). First, a new induced aggregated distance operator, named the weighted IOWA distance (WIOWAD) operator, is developed, which differs from existing methods in that it considers the dual roles of the order-inducing variables at the same time. In other words, in addition to inducing the order of the arguments, the order-inducing variables of the WIOWAD operator also play an important role in moderating the associated weight vector. Some useful properties and different families of the WIOWAD are also discussed. Then, an extension of the WIOWAD to the PF situation is presented, thus obtaining the PFWIOWAD operator. Furthermore, a MAGDM method based on the PFWIOWAD is introduced. Finally, the practicality and effectiveness of the proposed approach are illustrated in a research and development project selection problem.
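The induced ordering that underpins the WIOWAD can be seen in the classical IOWA distance from the literature: the component distances are reordered by the order-inducing variables before the weight vector is applied. This is a plain sketch of that baseline operator with made-up numbers, not the paper's PFWIOWAD.

```python
def iowa_distance(x, y, inducers, weights):
    """Induced OWA distance: reorder the component distances |x_i - y_i|
    by decreasing order-inducing variable, then apply the weight vector."""
    dists = [abs(a - b) for a, b in zip(x, y)]
    # Positions sorted by their inducing variable, largest first.
    order = sorted(range(len(dists)), key=lambda i: inducers[i], reverse=True)
    return sum(w * dists[i] for w, i in zip(weights, order))

d = iowa_distance(x=[0.7, 0.4, 0.9],
                  y=[0.5, 0.8, 0.6],
                  inducers=[3, 1, 2],    # induced order: components 0, 2, 1
                  weights=[0.5, 0.3, 0.2])
# d = 0.5*0.2 + 0.3*0.3 + 0.2*0.4 = 0.27
```

Note that the weights attach to induced positions, not to fixed components: changing the inducing variables changes the result even though the distances themselves do not move.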
Yuan, B, Zou, D, Yu, S, Jin, H, Qiang, W & Shen, J 2019, 'Defending Against Flow Table Overloading Attack in Software-Defined Networks', IEEE Transactions on Services Computing, vol. 12, no. 2, pp. 231-246.
View/Download from: Publisher's site
View description>>
The Software-Defined Network (SDN) is a new and promising network architecture. At the same time, SDN will surely become a new target of cyber attackers. In this paper, we point out one critical vulnerability in SDNs, the size of the flow table, which is most likely to be attacked. Due to the expensive and power-hungry nature of Ternary Content Addressable Memory (TCAM), a flow table usually has a limited size, which can easily be disabled by a flow table overloading attack (a transformed DDoS attack). To provide a security service in SDN, we propose a QoS-aware mitigation strategy, namely the peer support strategy, which integrates the available idle flow table resources of the whole SDN system to mitigate such an attack on a single switch of the system. We establish a practical mathematical model to represent the studied system, and conduct a thorough analysis of the system in various circumstances. Based on our analysis, we find that the proposed strategy can effectively defeat flow table overloading attacks. Extensive simulations and testbed-based experiments solidly support our claims. Moreover, our work also sheds light on the implementation of SDN networks against possible brute-force attacks.
Zha, Q, Liang, H, Kou, G, Dong, Y & Yu, S 2019, 'A Feedback Mechanism With Bounded Confidence- Based Optimization Approach for Consensus Reaching in Multiple Attribute Large-Scale Group Decision-Making', IEEE Transactions on Computational Social Systems, vol. 6, no. 5, pp. 994-1006.
View/Download from: Publisher's site
View description>>
Different feedback mechanisms have been developed in large-scale group decision-making (GDM) to provide decision-makers with advice for preference adjustment, with the aim of improving the group consensus level. However, the willingness of decision-makers to accept this advice is rarely considered in extant feedback mechanisms. In the field of opinion dynamics, this issue is studied by the bounded confidence model, which shows that decision-makers only consider preferences that differ from their own by no more than a certain confidence level. Following this idea, this article proposes a large-scale consensus model with a bounded confidence-based feedback mechanism to promote the consensus level among decision-makers with bounded confidences. Specifically, this feedback mechanism classifies the decision-makers into different clusters and provides the corresponding clusters with more acceptable advice based on a bounded confidence-based optimization approach. Finally, through a numerical example and simulation analysis, the use of the model is introduced and its effectiveness is justified.
Zhan, K, Chang, X, Guan, J, Chen, L, Ma, Z & Yang, Y 2019, 'Adaptive Structure Discovery for Multimedia Analysis Using Multiple Features', IEEE Transactions on Cybernetics, vol. 49, no. 5, pp. 1826-1834.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Multifeature learning has been a fundamental research problem in multimedia analysis. Most existing multifeature learning methods exploit a graph, which must be computed beforehand, as input to uncover the data distribution. These methods face two major problems. First, graph construction requires calculating the similarity of nearby data pairs with a fixed function, e.g., the RBF kernel, but the intrinsic correlation among different data pairs varies constantly. Therefore, feature learning based on such predefined graphs may degrade, especially when there is dramatic correlation variation between nearby data pairs. Second, in most existing algorithms, each single-feature graph is computed independently and the graphs are then combined for learning, which ignores the correlation between multiple features. In this paper, a new unsupervised multifeature learning method is proposed to make the best use of the correlation among different features by jointly optimizing data correlation from multiple features in an adaptive way. As opposed to computing the affinity weight of data pairs with a fixed function, the weight of the affinity graph is learned through a well-designed optimization problem. Additionally, the affinity graphs of data pairs from different features are optimized at a global level to better leverage the correlation among different channels. In this way, the adaptive approach correlates all the features for a better learning process. Experimental results on real-world datasets demonstrate that our approach outperforms state-of-the-art algorithms in leveraging multiple features for multimedia analysis.
Zhan, M, Liang, H, Kou, G, Dong, Y & Yu, S 2019, 'Impact of Social Network Structures on Uncertain Opinion Formation', IEEE Transactions on Computational Social Systems, vol. 6, no. 4, pp. 670-679.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. When people express their opinions about a certain issue, they often give uncertain opinions rather than exact opinions. In particular, these uncertain opinions evolve in social networks. Therefore, in this paper, we focus on investigating uncertain opinion formation in social networks under bounded confidence. Specifically, we define uncertain opinions as numerical interval opinions, whose ranges lie between zero and one, where a larger interval width indicates greater uncertainty of the opinion. Meanwhile, we describe social network structures by ER random graphs with different agent scales and network connection probabilities. Then, we present detailed simulation experiments to reveal the strong impact of social network structures on uncertain opinion formation. Simulation results show that: 1) larger agent scales yield smaller ratios of agents expressing uncertain opinions and larger average widths of uncertain opinions; 2) the average stable time first increases and then decreases as the network connection probability increases; and 3) larger network connection probabilities yield fewer opinion clusters and smaller ratios of extremely small clusters among all clusters. The obtained results are helpful for the government and public opinion management departments to understand and manage uncertain public opinion evolution effectively.
Zhang, C, Wu, X, Zheng, X & Yu, S 2019, 'Driver Drowsiness Detection Using Multi-Channel Second Order Blind Identifications', IEEE Access, vol. 7, pp. 11829-11843.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. It is well known that blinks, yawns, and heart rate changes give clues about a human's mental state, such as drowsiness and fatigue. In this paper, image sequences, as the raw data, are captured from smartphones which serve as non-contact optical sensors. Video streams containing the subject's facial region are analyzed to identify the physiological sources that are mixed in each image. We then propose a method to extract blood volume pulse, eye blink and yawn signals as multiple independent sources simultaneously by multi-channel second-order blind identification (SOBI) without any other sophisticated processing, such as eye and mouth localization. An overall decision is made by analyzing the separated source signals in parallel to determine the driver's driving state. The robustness of the proposed method is tested under various illumination contexts and a variety of head motion modes. Experiments on 15 subjects show that multi-channel SOBI presents a promising framework to accurately detect drowsiness by merging multi-physiological information in a less complex way.
Zhang, H, Dong, Y, Chiclana, F & Yu, S 2019, 'Consensus efficiency in group decision making: A comprehensive comparative study and its optimal design', European Journal of Operational Research, vol. 275, no. 2, pp. 580-598.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Consensus reaching processes (CRPs) aim to help decision-makers achieve agreement regarding the solution to a common decision problem, and consequently play an increasingly important role in the resolution of group decision making (GDM) problems. To date, a large number of CRPs have been reported. However, there is a lack of a general framework and criteria to evaluate the efficiency of the different CRPs. This paper aims to fill this gap in the research literature on CRPs. To achieve this goal, firstly, a comprehensive review of the different approaches to CRPs is reported, and a series of CRPs are presented as the comparison objects. Secondly, the following comparison criteria for measuring the efficiency of CRPs are proposed: the number of adjusted decision-makers, the number of adjusted alternatives, the number of adjusted preference values, the distance between the original and the adjusted preference information (adjustment cost), and the number of negotiation rounds required to reach consensus. Following this, a detailed simulation experiment is designed to analyze the efficiency of different CRPs under the mentioned comparison criteria. Furthermore, new multi-stage optimization-based CRPs are also developed, which the simulation experiment shows to have better comprehensive consensus efficiency in different GDM settings.
Zhang, H-W, Kok, VC, Chuang, S-C, Tseng, C-H, Lin, C-T, Li, T-C, Sung, F-C, Wen, CP, Hsiung, CA & Hsu, CY 2019, 'Long-term ambient hydrocarbons exposure and incidence of ischemic stroke', PLOS ONE, vol. 14, no. 12, pp. e0225363-e0225363.
View/Download from: Publisher's site
View description>>
Exposure to air pollutants is known to have adverse effects on human health; however, little is known about the association between hydrocarbons in air and ischemic stroke (IS) events. We investigated whether long-term exposure to airborne hydrocarbons, including volatile organic compounds, increased IS risk. This retrospective cohort study included 283,666 people aged 40 years or older in Taiwan. Cox proportional hazards regression analysis was used to fit single- and multiple-pollutant models for two targeted pollutants, total hydrocarbons (THC) and nonmethane hydrocarbons (NMHC), and to estimate the risk of IS. Before controlling for multiple pollutants, hazard ratios (HRs) of IS with 95% confidence intervals for the overall population were 2.69 (2.64-2.74) per 0.16-ppm increase in THC and 1.62 (1.59-1.66) per 0.11-ppm increase in NMHC. For the multiple-pollutant models controlling for PM2.5, the adjusted HR was 3.64 (3.56-3.72) for THC and 2.21 (2.16-2.26) for NMHC. Our findings suggest that long-term exposure to THC and NMHC may be a risk factor for IS development.
Zhang, K, Qu, Z, Dong, Y, Lu, H, Leng, W, Wang, J & Zhang, W 2019, 'Research on a combined model based on linear and nonlinear features - A case study of wind speed forecasting', Renewable Energy, vol. 130, pp. 814-830.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd As one of the most promising sustainable energy sources, wind energy is attracting increasing attention from researchers. Because of the volatility and instability of wind speed series, wind power integration faces a severe challenge; thus, accurate wind energy forecasting plays a key role in smart grid planning and management. However, many traditional forecasting models do not consider the necessity and importance of data preprocessing and neglect the limitations of using a single forecasting model, which leads to poor forecasting accuracy. To solve these problems, a novel combined model based on two linear and four nonlinear forecasting algorithms is proposed to capture both the linear and nonlinear characteristics of the wind energy time series. In addition, a modified Artificial Fish Swarm Algorithm and Ant Colony Optimization (AFSA-ACO) algorithm is proposed and employed to determine the optimal weight coefficients of the combined model. To verify the forecasting performance of the developed combined model, several experiments were implemented using 10-min interval wind speed data from Shandong, China. Then, one-step (10-min), three-step (30-min) and five-step (50-min) predictions were conducted. The experimental results indicate that the developed combined model is remarkably superior to all benchmark models in the precision and stability of wind-speed predictions.
Zhang, L, Xiong, P, Ren, W & Zhu, T 2019, 'A differentially private method for crowdsourcing data submission', Concurrency and Computation: Practice and Experience, vol. 31, no. 19.
View/Download from: Publisher's site
View description>>
In recent years, the ubiquity of mobile devices has made spatial crowdsourcing a successful business platform for conducting spatiotemporal projects. In spatial crowdsourcing, workers contribute to a project by performing a task at a specific location. However, these platforms present serious threats to people's location privacy because sensitive information may be leaked from submitted spatiotemporal data. As a result, people may be hesitant to join spatial crowdsourcing projects, which hampers further applications of this business model. In this paper, we propose a private spatial crowdsourcing data submission algorithm, called PS‑Sub. This is a differentially private method that preserves people's location privacy and provides acceptable data utility. Rigorous privacy analyses theoretically demonstrate the privacy guarantees inherent in the proposed model. Experiments based on real‑world datasets were conducted using practical evaluation metrics. The results show that our method is able to achieve location privacy preservation efficiently, at an acceptable cost for spatial crowdsourcing applications.
Zhang, Q, Lu, J, Wu, D & Zhang, G 2019, 'A Cross-Domain Recommender System With Kernel-Induced Knowledge Transfer for Overlapping Entities', IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 7, pp. 1998-2012.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. The aim of recommender systems is to automatically identify user preferences within collected data, then use those preferences to make recommendations that help with decisions. However, recommender systems suffer from the data sparsity problem, which is particularly prevalent in newly launched systems that have not yet had enough time to amass sufficient data. As a solution, cross-domain recommender systems transfer knowledge from a source domain with relatively rich data to assist recommendations in the target domain. These systems usually assume that the entities either fully overlap or do not overlap at all. In practice, it is more common for the entities in the two domains to partially overlap. Moreover, overlapping entities may have different expressions in each domain. Neglecting these two issues reduces the prediction accuracy of cross-domain recommender systems in the target domain. To fully exploit partially overlapping entities and improve the accuracy of predictions, this paper presents a cross-domain recommender system based on kernel-induced knowledge transfer, called KerKT. Domain adaptation is used to adjust the feature spaces of overlapping entities, while diffusion kernel completion is used to correlate the non-overlapping entities between the two domains. With this approach, knowledge is effectively transferred through the overlapping entities, thus alleviating data sparsity issues. Experiments conducted on four data sets, each with three sparsity ratios, show that KerKT has 1.13%-20% better prediction accuracy compared with six benchmarks. In addition, the results indicate that transferring knowledge from the source domain to the target domain is both possible and beneficial with even small overlaps.
Zhang, X, Yu, S, Zhang, J & Xu, Z 2019, 'Forwarding Rule Multiplexing for Scalable SDN-Based Internet of Things', IEEE Internet of Things Journal, vol. 6, no. 2, pp. 3373-3385.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Internet of Things (IoT) provides a vast number of devices with heterogeneous characteristics connected to the Internet. As a promising networking paradigm that decouples the control plane from the data plane, software-defined networking (SDN) is an appropriate architecture for IoT. The SDN paradigm supports deploying traffic flows dynamically by a centralized controller to SDN switches. In particular, the controller configures forwarding rules of SDN switches to steer traffic. However, forwarding rules are usually stored in expensive and power-hungry ternary content addressable memory (TCAM), which is very limited in quantity for SDN switches. Thus, the shortage of TCAM becomes a fatal bottleneck for scalable flow management for SDN-based IoT. To this end, we propose a method of forwarding rule multiplexing (FRM) to minimize the total number of forwarding rules in SDN-based IoT. We multiplex different traffic flows traversing the same path into an aggregated flow with a VLAN ID label. As a result, multiple forwarding rules can be merged into one multiplexed rule. We also extend the method to SDN protection against link failure, reducing backup path forwarding rules. We formulate the FRM problem as an integer linear programming model. Since the problem is NP-hard, we design a polynomial algorithm using the Markov approximation technique. Theoretical analysis indicates that the polynomial algorithm generates a near-optimal solution. The extensive emulation results show that the proposed Markov approximation-based algorithm reduces the number of forwarding rules by 15.73% on average compared with the benchmark algorithms.
Zhang, Y, Huang, Y, Porter, AL, Zhang, G & Lu, J 2019, 'Discovering and forecasting interactions in big data research: A learning-enhanced bibliometric study', Technological Forecasting and Social Change, vol. 146, pp. 795-807.
View/Download from: Publisher's site
View description>>
© 2018 As one of the most impactful emerging technologies, big data analytics and its related applications are powering the development of information technologies and are significantly shaping thinking and behavior in today's interconnected world. Exploring the technological evolution of big data research is an effective way to enhance technology management and create value for research and development strategies for both government and industry. This paper uses a learning-enhanced bibliometric study to discover interactions in big data research by detecting and visualizing its evolutionary pathways. Concentrating on a set of 5840 articles derived from Web of Science covering the period between 2000 and 2015, text mining and bibliometric techniques are combined to profile the hotspots in big data research and its core constituents. A learning process is used to enhance the ability to identify the interactive relationships between topics in sequential time slices, revealing technological evolution and death. The outputs include a landscape of interactions within big data research from 2000 to 2015 with a detailed map of the evolutionary pathways of specific technologies. Empirical insights for related studies in science policy, innovation management, and entrepreneurship are also provided.
Zhang, Y, Lu, X & Li, J 2019, 'Single-sample face recognition under varying lighting conditions based on logarithmic total variation', Signal, Image and Video Processing, vol. 13, no. 4, pp. 657-665.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag London Ltd., part of Springer Nature. The logarithmic total variation (LTV) algorithm is a classical algorithm proposed to address illumination interference in face recognition. Some state-of-the-art techniques based on LTV assume that the illumination component mainly lies in the low-frequency features of face images. However, these techniques adopt unsuitable methods to process low-frequency features, resulting in unsatisfactory final recognition rates. In this paper, we propose an improved illumination normalization method based on the LTV method, called the RETINA&TH-LTV algorithm. In this algorithm, the retina model is utilized to eliminate most of the illumination component in low-frequency features. Then, an advanced contrast-limited adaptive histogram equalization technique is proposed to remove the residual lighting component. At the same time, through threshold-value filtering on high-frequency features, the enhancement of facial features is achieved. Finally, the processed frequency features are combined to form a robust holistic feature image, which is then utilized for recognition. Insufficient training images in face recognition are also taken into consideration in this research. Comparative experiments for single-sample face recognition are conducted on the YALE B, CMU PIE and our self-built driver databases. The nearest neighbor classifier and extended sparse representation classifier are employed as classification methods. The results indicate that the RETINA&TH-LTV algorithm has promising performance, especially under serious illumination and insufficient training sample conditions.
Zhang, Y, Lv, P, Lu, X & Li, J 2019, 'Face detection and alignment method for driver on highroad based on improved multi-task cascaded convolutional networks', Multimedia Tools and Applications, vol. 78, no. 18, pp. 26661-26679.
View/Download from: Publisher's site
View description>>
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. Driver face detection and alignment techniques in Intelligent Transportation Systems (ITS) under unconstrained environments are challenging issues, which are conducive to supervising traffic order and maintaining public safety. This paper proposes improved Multi-task Cascaded Convolutional Networks (ITS-MTCNN) to realize accurate face region detection and feature alignment of a driver's face on the highway, predicting face and feature locations via a coarse-to-fine pattern. Moreover, an improved regularization method and an effective online hard sample mining technique are proposed in the ITS-MTCNN method. Then, the training model and contrast experiments are conducted on our self-built traffic driver face database. Finally, the effectiveness of the ITS-MTCNN method is validated by comparative experiments and verified under various complex highway conditions. At the same time, the average alignment errors of the proposed technique on the left eye, right eye, nose, left mouth and right mouth are reported. Experimental results show that the ITS-MTCNN model achieves satisfactory performance compared to other state-of-the-art techniques used in driver face detection and alignment, remaining robust to occlusion, varying pose and extreme illumination on the highway.
Zhang, Y, Ren, W, Zhu, T & Faith, E 2019, 'MoSa: A Modeling and Sentiment Analysis System for Mobile Application Big Data', Symmetry, vol. 11, no. 1, pp. 115-115.
View/Download from: Publisher's site
View description>>
The development of the mobile internet has led to a massive amount of data being generated from mobile devices daily, which has become a source for analyzing human behavior and trends in public sentiment. In this paper, we build a system called MoSa (Mobile Sentiment analysis) to analyze this data. In this system, sentiment analysis is used to analyze news comments on the THAAD (Terminal High Altitude Area Defense) event from Toutiao, employing algorithms to calculate the sentiment value of each comment. This paper is based on HowNet; after a comparison of different sentiment dictionaries, we find that the method proposed in this paper, which uses a mixed sentiment dictionary, achieves a higher accuracy rate in its analysis of comment sentiment tendency. We then statistically analyze the relevant attributes of the comments and their sentiment values and discover that the standard deviation of the comments’ sentiment values can quickly reflect sentiment changes among the public. Besides that, we also derive some special models from the data that reflect some specific characteristics. We find that the intrinsic characteristics of situational awareness have implicit symmetry. By using our system, people can obtain practical results to guide interaction design in applications including the mobile Internet, social networks, and blockchain-based crowdsourcing.
Zhang, Y, Wang, J & Lu, H 2019, 'Research and Application of a Novel Combined Model Based on Multiobjective Optimization for Multistep-Ahead Electric Load Forecasting', Energies, vol. 12, no. 10, pp. 1931-1931.
View/Download from: Publisher's site
View description>>
Accurate forecasting of electric loads has a great impact on actual power generation, power distribution, and tariff pricing. Therefore, in recent years, scholars all over the world have been proposing more forecasting models aimed at improving forecasting performance; however, many of them are conventional forecasting models which do not take the limitations of individual predicting models or data preprocessing into account, leading to poor forecasting accuracy. In this study, to overcome these drawbacks, a novel model combining a data preprocessing technique, forecasting algorithms and an advanced optimization algorithm is developed. Thirty-minute electrical load data from power stations in New South Wales and Queensland, Australia, are used as the testing data to estimate our proposed model’s effectiveness. From experimental results, our proposed combined model shows absolute superiority in both forecasting accuracy and forecasting stability compared with other conventional forecasting models.
Zhang, Y, Wang, M, Gottwalt, F, Saberi, M & Chang, E 2019, 'Ranking scientific articles based on bibliometric networks with a weighting scheme', Journal of Informetrics, vol. 13, no. 2, pp. 616-634.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd. All rights reserved. As the volume of scientific articles has grown rapidly over the last decades, evaluating their impact becomes critical for tracing valuable and significant research output. Many studies have proposed various ranking methods to estimate the prestige of academic papers using bibliometric methods. However, the weight of the links in bibliometric networks has been rarely considered for article ranking in existing literature. Such incomplete investigation in bibliometric methods could lead to biased ranking results. Therefore, a novel scientific article ranking algorithm, W-Rank, is introduced in this study proposing a weighting scheme. The scheme assigns weight to the links of citation network and authorship network by measuring citation relevance and author contribution. Combining the weighted bibliometric networks and a propagation algorithm, W-Rank is able to obtain article ranking results that are more reasonable than existing PageRank-based methods. Experiments are conducted on both arXiv hep-th and Microsoft Academic Graph datasets to verify the W-Rank and compare it with three renowned article ranking algorithms. Experimental results prove that the proposed weighting scheme assists the W-Rank in obtaining ranking results of higher accuracy and, in certain perspectives, outperforming the other algorithms.
Zhang, Y, Wang, M, Saberi, M & Chang, E 2019, 'From Big Scholarly Data to Solution-Oriented Knowledge Repository', Frontiers in Big Data, vol. 2.
View/Download from: Publisher's site
View description>>
The volume of scientific articles grows rapidly, producing a scientific basis for understanding and identifying research problems and state-of-the-art solutions. Despite the considerable significance of problem-solving information, existing scholarly recommender systems lack the ability to retrieve this information from scientific articles to generate knowledge repositories and provide problem-solving recommendations. To address this issue, this paper proposes a novel framework to build solution-oriented knowledge repositories and provide recommendations to solve given research problems. The framework consists of three modules: a semantics-based information extraction module mining research problems and solutions from massive academic papers; a knowledge assessment module based on a heterogeneous bibliometric graph and a ranking algorithm; and a knowledge repository generation module to produce solution-oriented maps with recommendations. Based on the framework, a prototype scholarly solution support system is implemented. A case study is carried out in the research field of intrusion detection, and the results demonstrate the effectiveness and efficiency of the proposed method.
Zhang, Z, Oberst, S & Lai, JCS 2019, 'A non-linear friction work formulation for the analysis of self-excited vibrations', Journal of Sound and Vibration, vol. 443, pp. 328-340.
View/Download from: Publisher's site
View description>>
Even though much research has been devoted to understanding friction-induced vibrations, their root cause is not yet fully understood. Reliable prediction of friction-induced unstable vibrations, such as in brake squeal or hip squeak, remains a challenge because of the nonlinearities involved and because the complex eigenvalue analysis (CEA) widely used in industry is linear. The energy fed back into the system by friction has been shown to be useful for the analysis of measurements and numerical simulations. In numerical simulations, the linearised method of feed-in energy, calculated purely from friction work, has provided some insights into the physical mechanism for instabilities. However, the dynamics due to friction-induced instabilities are highly nonlinear, and damping may offset some or all of the excess friction energy provided to the system. Using a nonlinear 2-DOF dry friction oscillator, a nonlinear friction work formulation is proposed to demonstrate that, in combination with viscous damping, the energy budget provides an improved analysis capability over linearised friction work. The results highlight the potential of nonlinear friction work as a reliable tool to study friction-induced instabilities, to gain deeper physical insights into squeal triggering mechanisms, and to better understand the over- and under-predictive character inherent to linear methods.
Zhao, G, Li, Y, Xu, C, Han, Z, Xing, Y & Yu, S 2019, 'Joint Power Control and Channel Allocation for Interference Mitigation Based on Reinforcement Learning', IEEE Access, vol. 7, pp. 177254-177265.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In dense Wireless Local Area Networks (WLANs), high-density Access Points (APs) bring severe interference that seriously affects the experience of users, resulting in lower throughput and poor connection quality. Due to the heavy computation workload raised by sizable networking systems and the difficulty of estimating instantaneous Channel State Information (CSI), existing works struggle to solve the interference problem. In this paper, we propose a Joint Power control and Channel allocation based on Reinforcement Learning (JPCRL) algorithm combined with statistical CSI to reduce interference adaptively. Firstly, we analyze the correlation between transmit power and channel, and formulate the interference optimization as a Mixed Integer Nonlinear Programming (MINLP) problem. Secondly, we use the statistical CSI method, taking the power and channel state as the state and action space and the overall throughput increment as the reward function of Q-learning, and obtain the optimal joint optimization strategy through off-line training. Moreover, since the periodic reinforcement learning process leads to resource consumption, we design an event-driven mechanism for Q-learning, which triggers online learning to refresh the optimal policy under an event-driven condition, so that the consumption of computing resources can be reduced. The evaluation results show that the proposed algorithm can effectively improve throughput compared with the existing scheme.
Zhao, Y, Chen, J, Wu, D, Teng, J, Sharma, N, Sajjanhar, A & Blumenstein, M 2019, 'Network Anomaly Detection by Using a Time-Decay Closed Frequent Pattern', Information, vol. 10, no. 8, pp. 262-262.
View/Download from: Publisher's site
View description>>
Anomaly detection of network traffic flows is a non-trivial problem in the field of network security due to the complexity of network traffic. However, most machine learning-based detection methods focus on network anomaly detection but ignore user anomaly behavior detection. In real scenarios, anomalous network behavior may harm user interests. In this paper, we propose an anomaly detection model based on time-decay closed frequent patterns to address this problem. The model mines closed frequent patterns from the network traffic of each user and uses a time-decay factor to distinguish the weight of current and historical network traffic. Because of the dynamic nature of user network behavior, a detection model update strategy is provided in the anomaly detection framework. Additionally, the closed frequent patterns can provide interpretable explanations for anomalies. Experimental results show that the proposed method can detect user behavior anomalies, and the network anomaly detection performance achieved by the proposed method is similar to that of the state-of-the-art methods and significantly better than the baseline methods.
Zheng, R, Jiang, J, Hao, X, Ren, W, Xiong, F & Zhu, T 2019, 'CaACBIM: A Context-aware Access Control Model for BIM', Information, vol. 10, no. 2, pp. 47-47.
View/Download from: Publisher's site
View description>>
A building information model (BIM) is of utmost importance across the full life cycle in the architecture, engineering and construction industry. Smart construction relies on BIM to manipulate information flow, data flow, and management flow. Currently, BIM has been explored mainly for information construction and utilization, but there exist few works concerning information security, e.g., audits of critical models and exposure of sensitive models. Moreover, few BIM systems have been proposed that make use of new computing paradigms, such as mobile cloud computing, blockchain and the Internet of Things. In this paper, we propose a Context-aware Access Control (CaAC) model for BIM systems on mobile cloud architectures. BIM data can be confidentially accessed according to contexts in a fine-grained manner. We describe the functions of CaAC formally by illustrating location-aware access control and time-aware access control. The CaAC model can outperform role-based access control in preventing BIM data leakage by distinguishing contexts. In addition, grouping algorithms are presented for flexibility, in which a basic model (user grouping based on user role permissions) and an advanced model (user grouping based on user requests) are differentiated. Compared with the traditional role-based access control model, the security and feasibility of CaAC are remarkably improved by distinguishing an identical role under multiple contexts. The average efficiency is improved by 2n/(2n − p − q), and the time complexity is O(n).
Zhu, C, Mesiar, R, Yager, RR, Merigo, J, Qin, J, Feng, X & Jin, L 2019, 'Two-layer preference models with methodologies using induced aggregation in management administration and decision making', Journal of Intelligent & Fuzzy Systems, vol. 37, no. 1, pp. 1213-1221.
View/Download from: Publisher's site
View description>>
In this work, we propose some two-layer preference models that can be appropriately applied to management problems such as group decision making about predicting the future market share of a certain product. By introducing the convex IOWA operator paradigm and some related properties and definitions, we list some detailed preference and inducing-preference models to demonstrate and exemplify the proposed conceptual frame of the two-layer preference model. The convex IOWA operator paradigm facilitates the modeling process and, from a mathematical view, makes it more rigorous. When the relevant inducing information and aggregation selection change, the proposed models can be easily adapted to accommodate further applications in decision making and evaluation.
Zhu, Y, Chambua, J, Lu, H, Shi, K & Niu, Z 2019, 'An opinion based cross‐regional meteorological event detection model', Weather, vol. 74, no. 2, pp. 51-55.
View/Download from: Publisher's site
Zijlema, A, van den Hoven, E & Eggen, B 2019, 'A qualitative exploration of memory cuing by personal items in the home', Memory Studies, vol. 12, no. 4, pp. 377-397.
View/Download from: Publisher's site
View description>>
We are surrounded by personal items that can trigger memories, such as photos, souvenirs and heirlooms. During holidays, too, we collect items to remind us of the events, but not all bring back memories to the same extent. We therefore explored people's responses to personal items related to a holiday, using the home tour interviewing method. In total, 63 accounts of cuing responses from nine home tours were analysed using thematic analysis. This resulted in four types of cuing responses: (a) ‘no-memory’ responses, (b) ‘know’ responses, (c) ‘memory evoked think or feel’ responses and (d) ‘remember’ responses. For each of these cuing response categories, we looked into the types of items and their characteristics. Furthermore, we found that some items can evoke multiple memories. The majority of the memories’ content refers to events close to the moment of acquiring the item.
Zuo, H, Lu, J, Zhang, G & Liu, F 2019, 'Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning', IEEE Transactions on Fuzzy Systems, vol. 27, no. 2, pp. 291-303.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Transfer learning is gaining considerable attention due to its ability to leverage previously acquired knowledge to assist in completing a prediction task in a related domain. Fuzzy transfer learning, which is based on fuzzy systems (especially fuzzy rule-based models), has been developed because of its capability to deal with the uncertainty in transfer learning. However, two issues with fuzzy transfer learning have not yet been resolved: choosing an appropriate source domain and efficiently selecting labeled data for the target domain. This paper proposes an innovative method based on fuzzy rules that combines an infinite Gaussian mixture model (IGMM) with active learning to enhance the performance and generalizability of the constructed model. An IGMM is used to identify the data structures in the source and target domains, providing a promising solution to the domain selection dilemma. Further, we exploit the interactive query strategy in active learning to correct imbalances in the knowledge to improve the generalizability of fuzzy learning models. Through experiments on synthetic datasets, we demonstrate the rationality of employing an IGMM and the effectiveness of applying an active learning technique. Additional experiments on real-world datasets further support the capabilities of the proposed method in practical situations.
Zuo, H, Lu, J, Zhang, G & Pedrycz, W 2019, 'Fuzzy Rule-Based Domain Adaptation in Homogeneous and Heterogeneous Spaces', IEEE Transactions on Fuzzy Systems, vol. 27, no. 2, pp. 348-361.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Domain adaptation aims to leverage knowledge acquired from a related domain (called a source domain) to improve the efficiency of completing a prediction task (classification or regression) in the current domain (called the target domain), which has a different probability distribution from the source domain. Although domain adaptation has been widely studied, most existing research has focused on homogeneous domain adaptation, where both domains have identical feature spaces. Recently, a new challenge proposed in this area is heterogeneous domain adaptation where both the probability distributions and the feature spaces are different. Moreover, in both homogeneous and heterogeneous domain adaptation, the greatest efforts and major achievements have been made with classification tasks, while successful solutions for tackling regression problems are limited. This paper proposes two innovative fuzzy rule-based methods to deal with regression problems. The first method, called fuzzy homogeneous domain adaptation, handles homogeneous spaces while the second method, called fuzzy heterogeneous domain adaptation, handles heterogeneous spaces. Fuzzy rules are first generated from the source domain through a learning process; these rules, also known as knowledge, are then transferred to the target domain by establishing a latent feature space to minimize the gap between the feature spaces of the two domains. Through experiments on synthetic datasets, we demonstrate the effectiveness of both methods and discuss the impact of some of the significant parameters that affect performance. Experiments on real-world datasets also show that the proposed methods improve the performance of the target model over an existing source model or a model built using a small amount of target data.
Abad, ZSH, Bano, M & Zowghi, D 2019, 'How much authenticity can be achieved in software engineering project based courses?', ICSE (SEET), IEEE / ACM, pp. 208-219.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Software engineering (SE) students need not only sufficient technical knowledge and problem-solving ability but also social and interpersonal skills in order to be industry ready. To prepare students for the 'real world', SE educators frequently use 'Authentic Assessment' and 'Project Based Learning (PBL)' approaches in their curricula. However, the level of 'authenticity' should vary within PBL courses offered in different years of a degree program. In this paper, we present and discuss the results of the data collected and analyzed from the first SE course offered to students. The aim of our research is to explore how much authenticity can be achieved in the first SE course. Our study was conducted at the University of Calgary with 64 software development project teams, totaling 229 undergraduate students. The data were collected over three semesters (2016-2018) in order to assess and monitor students' performance. The course design used seven authentic assessments that focused on students' skills while covering a complete software development lifecycle. The results of the data analysis show that students made progress in some areas of problem-solving skills; however, they struggled with their social skills (e.g. people-handling, negotiation and organizational skills), understanding software quality, and adaptability.
Abdo, P, Huynh, BP, Braytee, A & Taghipour, R 2019, 'Effect of Phase Change Material on Temperature in a Room Fitted With a Windcatcher', Volume 7: Fluids Engineering, ASME 2019 International Mechanical Engineering Congress and Exposition, American Society of Mechanical Engineers.
View/Download from: Publisher's site
View description>>
Global warming and climate change have been considered major challenges over the past few decades. Sustainable and renewable energy sources are nowadays needed to overcome the undesirable consequences of rapid development in the world. Phase change materials (PCMs) are substances with a high latent heat storage capacity which absorb or release heat from or to the surrounding environment as they change from solid to liquid and vice versa. PCMs can be used as a passive cooling method which enhances energy efficiency in buildings. This study investigates integrating PCM with natural ventilation by exploring the effect of phase change material on the temperature in a room fitted with a windcatcher. A chamber made of acrylic sheets fitted with a windcatcher is used to monitor the temperature variations. The dimensions of the chamber are 1250 × 1000 × 750 mm. Phase change material is integrated, in turn, into the walls, floor and ceiling of the room and within the windcatcher's inlet channel. Temperature is measured at different locations inside the chamber. Wind is blown through the room using a fan with heating elements.
Abedin, B, Erfani, S, Milne, D, Beattie, A & Fenerty, K 2019, 'Unpacking support types in online health communities: An application of attraction-selection-attrition theory', Proceedings of the 23rd Pacific Asia Conference on Information Systems: Secure ICT Platform for the 4th Industrial Revolution, PACIS 2019, Pacific Asia Conference on Information Systems, AISEL, China 2, pp. 1-8.
View description>>
Online communities are increasingly becoming part of the healthcare ecosystem, as they allow patients, family members and carers to connect and support each other at any time and from any location. This support can take many forms, including information, advice, esteem support and solidarity. Prior research has identified Attraction-Selection-Attrition Theory as a promising framework for modelling and explaining how participants join, participate in, and leave organizations in general (and online communities specifically), and how the actions of individuals affect the organization as a whole. However, it has not previously been applied specifically to online health communities (i.e. those that focus on physical and/or mental health). We propose to gather empirical evidence from a large online community that provides support for Australians affected by cancer. In doing so, we hope to develop evidence-based policies and procedures for growing, maintaining and moderating these communities.
Abou Maroun, E, Daniel, J, Zowghi, D & Talaei-Khoei, A 2019, 'Blockchain in Supply Chain Management: Australian Manufacturer Case Study', Lecture Notes in Business Information Processing, Australasian Symposium on Service Research and Innovation, Springer International Publishing, Sydney, Australia, pp. 93-107.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. The recent explosion of interest around Blockchain and the capabilities of this technology to track all types of transactions more transparently and securely motivates us to explore the possibilities Blockchain offers across the supply chain. This paper examines whether Blockchain is a good fit for use in an Australian manufacturer's supply chain. To address this, the research uses the Technology Acceptance Model (TAM) as a framework from the literature. Blockchain allows us to have permissioned or permissionless distributed ledgers where stakeholders can interact with each other. The paper details how Blockchain works and the mechanism of hash algorithms, which allows for greater security of information. It also focuses on supply chain management and looks at the intricacies of a manufacturer's supply chain. We present a review of the processes in place at an electrical manufacturer and the problems faced in the supply chain. A model using public and private Blockchains is proposed to overcome these issues. The proposed solution has the potential to bring greater transparency and validity across the supply chain, and to improve communication between the stakeholders involved. We also point out some potential issues that should be considered when adopting Blockchain.
Abou Maroun, E, Zowghi, D & Agarwal, R 2019, 'Challenges in forecasting uncertain product demand in supply chain: A systematic literature review', Managing the many faces of sustainable work, Annual Australian and New Zealand Academy of Management, ANZAM, Auckland, New Zealand.
View description>>
Forecasting uncertain product demand in supply chains is challenging, and statistical models alone cannot overcome the challenges faced. Our overall objective is to explore the challenges faced in forecasting uncertain product demand and examine the extant literature by synthesizing the results of studies that have empirically investigated this complex phenomenon. We performed a Systematic Literature Review (SLR) following the well-known guidelines of the evidence-based paradigm, which resulted in selecting 66 empirical studies. Our results are presented in two categories of internal and external challenges: 24 of the 66 studies express internal challenges, 13 studies report external challenges, and 8 studies cover both internal and external challenges. We also present significant gaps identified in the research literature.
Adak, C, Chaudhuri, BB, Lin, C-T & Blumenstein, M 2019, 'Detecting Named Entities in Unstructured Bengali Manuscript Images', 2019 International Conference on Document Analysis and Recognition (ICDAR), 2019 International Conference on Document Analysis and Recognition (ICDAR), IEEE, Sydney, Australia, pp. 196-201.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In this paper, we undertake a task to find named entities directly from unstructured handwritten document images without any intermediate text/character recognition. Here, we do not receive any assistance from natural language processing; therefore, it becomes more challenging to detect the named entities. We work on Bengali script, which brings some additional hurdles due to its own unique script characteristics. We propose a new deep neural network-based architecture to extract the latent features from a text image. The embedding is then fed to a BLSTM (Bidirectional Long Short-Term Memory) layer. An attention mechanism is then adopted for named entity detection. We perform experiments on two publicly available offline handwriting repositories containing 420 Bengali handwritten pages in total. The experimental outcome of our system is quite impressive, as it attains 95.43% balanced accuracy on overall named entity detection.
Ahadi, A & Mathieson, L 2019, 'A Comparison of Three Popular Source code Similarity Tools for Detecting Student Plagiarism', Proceedings of the Twenty-First Australasian Computing Education Conference, ACE'19: Twenty-First Australasian Computing Education Conference, ACM, Australia, pp. 112-117.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. This paper investigates automated code plagiarism detection in the context of an undergraduate level data structures and algorithms module. We compare three software tools which aim to detect plagiarism in the students' programming source code. We evaluate the performance of these tools on an individual basis and the degree of agreement between them. Based on this evaluation we show that the degree of agreement between these tools is relatively low. We also report the challenges faced during utilization of these methods and suggest possible future improvements for tools of this kind. The discrepancies in the results obtained by these detection techniques were used to devise guidelines for effectively detecting code plagiarism.
Ahadi, A, Lister, R & Mathieson, L 2019, 'ArAl', Proceedings of the Twenty-First Australasian Computing Education Conference, ACE'19: Twenty-First Australasian Computing Education Conference, ACM, Australia, pp. 118-125.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Several systems that collect data from students' problem-solving processes exist. Within computing education research, such data has been used for multiple purposes, ranging from assessing students' problem-solving strategies to detecting struggling students. To date, however, the majority of the analysis has been conducted by individual researchers or research groups using case-by-case methodologies. Our belief is that, with increasing possibilities for data collection from students' learning processes, researchers and instructors will benefit from ready-made analysis tools. In this study, we present ArAl, an online machine-learning-based platform for analyzing programming source code snapshot data. The benefit of ArAl is two-fold. Computing education researchers can use ArAl to analyze the source code snapshot data collected from their own institutes. Also, the website provides a collection of well-documented machine learning and statistics based tools to investigate possible correlations between different variables. The presented web portal is available at online-analysisdemo.herokuapp.com. This tool could be applied in many different subject areas given appropriate performance data.
Aldini, S, Akella, A, Singh, AK, Wang, Y-K, Carmichael, M, Liu, D & Lin, C-T 2019, 'Effect of Mechanical Resistance on Cognitive Conflict in Physical Human-Robot Collaboration', 2019 International Conference on Robotics and Automation (ICRA), 2019 International Conference on Robotics and Automation (ICRA), IEEE, Canada, pp. 6137-6143.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Physical Human-Robot Collaboration (pHRC) is the interaction between one or more human operator(s) and one or more robot(s) in direct contact, voluntarily exchanging forces to accomplish a common task. In any pHRC, the intuitiveness of the interaction has always been a priority, so that the operator can comfortably and safely interact with the robot. So far, intuitiveness has always been described in a qualitative way. In this paper, we suggest an objective way to evaluate intuitiveness, known as prediction error negativity (PEN), using electroencephalogram (EEG). PEN is defined as a negative deflection in the event-related potential (ERP) due to cognitive conflict, as a consequence of a mismatch between perception and reality. Experimental results showed that the forces exchanged between robot and human during pHRC modulate the amplitude of PEN, representing different levels of cognitive conflict. We also found that PEN amplitude significantly decreases (p < 0.05) when a mechanical resistance is applied smoothly and further in advance of an invisible obstacle, compared to a scenario in which the resistance is applied abruptly before the obstacle. These results indicate that an earlier and smoother resistance reduces the conflict level. Consequently, this suggests that smoother changes in resistance make the interaction more intuitive.
Al-Doghman, F, Chaczko, Z, Brooke, W & Gordon, LC 2019, 'Social Consensus-inspired Aggregation Algorithms for Edge Computing', 2019 3rd Cyber Security in Networking Conference (CSNet), 2019 3rd Cyber Security in Networking Conference (CSNet), IEEE, pp. 138-141.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The current interest in the Internet of Things (IoT) has led to the establishment of countless services that generate huge, active, and varied information sets. Within the IoT, an enormous mass of heterogeneous data is generated and interchanged by billions of devices, which can produce serious information traffic jams and affect network efficiency. To overcome this issue, there is a need for an effective, smart, distributed, in-network technique that uses a cooperative effort to aggregate data along the pathway from the network edge to its sink. We propose an information organization blueprint that systematizes data aggregation and transmission within the bounds of the edge domain, from the front end to the cloud. A social consensus technique obtained through statistical analysis is employed within the blueprint to derive and update a policy on how to aggregate and transmit data according to the order of information consumption inside the network. The proposed technique, consensus aggregation, uses statistical machine learning to consolidate the approach and appraise its performance. In normal operation, data aggregation is performed using the data distribution. Notable information delivery efficiency was obtained with a nominal loss in precision when the blueprint was tested in a particular environment as a case study. The evaluation of the strategy showed that the consensus approach outperforms the individual ones in several respects.
Al-Najjar, HAH, Kalantar, B, Pradhan, B & Saeidi, V 2019, 'Conditioning factor determination for mapping and prediction of landslide susceptibility using machine learning algorithms', Earth Resources and Environmental Remote Sensing/GIS Applications X, Earth Resources and Environmental Remote Sensing/GIS Applications X, SPIE, Strasbourg, FRANCE.
View/Download from: Publisher's site
Altulyan, MS, Huang, C, Yao, L, Wang, X, Kanhere, S & Cao, Y 2019, 'Reminder Care System: An Activity-Aware Cross-Device Recommendation System', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Advanced Data Mining and Applications, Springer International Publishing, Dalian, China, pp. 207-220.
View/Download from: Publisher's site
View description>>
Alzheimer’s disease (AD) affects large numbers of elderly people worldwide and represents a significant social and economic burden on society, particularly in relation to the need for long term care facilities. These costs can be reduced by enabling people with AD to live independently at home for a longer time. The use of recommendation systems for the Internet of Things (IoT) in the context of smart homes can contribute to this goal. In this paper, we present the Reminder Care System (RCS), a research prototype of a recommendation system for the IoT for elderly people with cognitive disabilities. RCS exploits daily activities that are captured and learned from IoT devices to provide personalised recommendations. The experimental results indicate that RCS can inform the development of real-world IoT applications.
Anjum, M, Voinov, A, Castilla Rho, J & Pileggi, SF 2019, 'Understanding mental models through a moderated framework for serious discussion', 23rd International Congress on Modelling and Simulation, Canberra.
Anwar, M, Gill, A & Beydoun, G 2019, 'Using Adaptive Enterprise Architecture Framework for Defining the Adaptable Identity Ecosystem Architecture', https://aisel.aisnet.org/acis2019/, Australasian Conference on Information Systems, AIS, Perth, pp. 1-11.
View description>>
Digital identity management is often used to handle fraud detection and hence reduce identity theft. However, using digital identity management presents additional challenges in terms of the privacy of the identity owner while managing the security of verification. In this paper, drawing on adaptive enterprise architecture (EA) with an ecosystem approach to digital identity, we describe an identity ecosystem (IdE) architecture to handle identity management (IdM) while safeguarding security and privacy. This study is part of a larger action design research project with our industry partner DZ. We have used Adaptive EA as a baseline to define a privacy-aware adaptive IdE to make ID operations more efficient and improve the delivery of services in the public and private sectors. The value of the anticipated architecture is in its generic yet comprehensive structure, component orientation and layered approach, which aim to enable contemporary IdM.
Anwar, MJ & Gill, AQ 2019, 'A Review of the Seven Modelling Approaches for Digital Ecosystem Architecture.', CBI (1), IEEE Conference on Business Informatics, IEEE, Moscow, Russia, pp. 94-103.
View/Download from: Publisher's site
View description>>
A dynamic digital ecosystem is an interrelated network of organisations, people and/or entities that interact and collaborate for value co-creation. The challenge is how to effectively model digital ecosystems operating in a highly complex and dynamic environment. There are several modelling approaches to choose from, and a need to evaluate how well the existing approaches support the modelling of digital ecosystems. This paper evaluates the scope and coverage of seven selected modelling approaches (Adaptive Enterprise Architecture, ArchiMate, TOGAF, FAML, ISO/IEC/IEEE 42010, SABSA, and ITIL) for modelling digital ecosystems. Adaptive enterprise architecture is taken as a reference architecture for this review due to its higher relevance to digital ecosystem layers. The results of this review indicate that every modelling methodology differs in scope and coverage and demands the integration and tailoring of context-specific modelling approaches to provide the type of support needed for digital ecosystems.
Anwar, MJ, Gill, AQ & Beydoun, G 2019, 'Using Adaptive Enterprise Architecture Framework for Defining the Adaptable Identity Ecosystem Architecture', ACIS 2019 Proceedings - 30th Australasian Conference on Information Systems, pp. 890-900.
View description>>
Digital identity management is often used to handle fraud detection and hence reduce identity theft. However, using digital identity management presents additional challenges in terms of the privacy of the identity owner while managing the security of verification. In this paper, drawing on adaptive enterprise architecture (EA) with an ecosystem approach to digital identity, we describe an identity ecosystem (IdE) architecture to handle identity management (IdM) while safeguarding security and privacy. This study is part of a larger action design research project with our industry partner DZ. We have used adaptive EA as a theoretical lens to define a privacy-aware adaptive IdM with a view to improving Id operations and the delivery of services in the public and private sectors. The value of the anticipated architecture is in its generic yet comprehensive structure, component orientation and layered approach, which aim to enable contemporary IdM.
Aung, TWW, Huo, H & Sui, Y 2019, 'Interactive Traceability Links Visualization using Hierarchical Trace Map', 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, Cleveland, Ohio.
View/Download from: Publisher's site
View description>>
Traceability links between the various software artifacts of a system aid software engineers in system comprehension, verification and change impact analysis. Establishing trace links between software artifacts manually is an error-prone and costly task. Recently, studies in the automated traceability link recovery area have received broad attention in the software maintenance community, aiming to overcome the challenges of the manual trace link maintenance process. In these studies, the trace link results generated by an automated trace recovery approach are presented either in a plain textual matrix format (e.g., tabular format) or in two-dimensional graphical formats (e.g., tree view, hierarchical leaf nodes). It is therefore challenging for software engineers to explore the inter-relationships between various artifacts at once (e.g., which test cases and source code files/methods are related to a particular requirement). In this position paper, we propose a hierarchical trace map visualization technique to explore the inter-relationships between various artifacts at once, naturally and intuitively.
Awan, Z, Kahlke, T, Ralph, P & Kennedy, P 2019, 'Chemical Named Entity Recognition with Deep Contextualized Neural Embeddings', Proceedings of the 11th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, 11th International Conference on Knowledge Discovery and Information Retrieval, SCITEPRESS - Science and Technology Publications, Austria, pp. 135-144.
View/Download from: Publisher's site
View description>>
Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. Chemical named entity recognition (ChemNER) is a preliminary step in chemical information extraction pipelines. ChemNER has been approached using rule-based, dictionary-based, and feature-engineering-based machine learning methods, and more recently also deep learning based methods. Traditional word embeddings, like word2vec and GloVe, are inherently problematic because they ignore the context in which an entity appears. Contextualized embeddings from language models (ELMo) have recently been introduced to represent the contextual information of a word in its embedding space. In this work, we quantify the impact of contextualized embeddings for ChemNER by using Bi-LSTM-CRF (bidirectional long short-term memory networks - conditional random fields) networks. We benchmarked our approach using four well-known corpora for chemical named entity recognition. Our results show that incorporation of ELMo results in statistically significant improvements in F1 score on all of the tested datasets.
Bai, L, Yao, L, Kanhere, SS, Wang, X & Sheng, QZ 2019, 'STG2Seq: Spatial-Temporal Graph to Sequence Model for Multi-step Passenger Demand Forecasting', Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}, International Joint Conferences on Artificial Intelligence Organization, pp. 1981-1987.
View/Download from: Publisher's site
View description>>
Multi-step passenger demand forecasting is a crucial task in on-demand vehicle sharing services. However, predicting passenger demand is generally challenging due to the nonlinear and dynamic spatial-temporal dependencies. In this work, we propose to model multi-step citywide passenger demand prediction based on a graph and use a hierarchical graph convolutional structure to capture both spatial and temporal correlations simultaneously. Our model consists of three parts: 1) a long-term encoder to encode historical passenger demands; 2) a short-term encoder to derive the next-step prediction for generating multi-step prediction; 3) an attention-based output module to model the dynamic temporal and channel-wise information. Experiments on three real-world datasets show that our model consistently outperforms many baseline methods and state-of-the-art models.
Bai, L, Yao, L, Kanhere, SS, Wang, X, Liu, W & Yang, Z 2019, 'Spatio-Temporal Graph Convolutional and Recurrent Networks for Citywide Passenger Demand Prediction', Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19: The 28th ACM International Conference on Information and Knowledge Management, ACM, Beijing, China, pp. 2293-2296.
View/Download from: Publisher's site
View description>>
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM. Online ride-sharing platforms have become a critical part of the urban transportation system. Accurately recommending hotspots to drivers in such platforms is essential to help drivers find passengers and improve users' experience, which calls for efficient passenger demand prediction strategy. However, predicting multi-step passenger demand is challenging due to its high dynamicity, complex dependencies along spatial and temporal dimensions, and sensitivity to external factors (meteorological data and time meta). We propose an end-to-end deep learning framework to address the above problems. Our model comprises three components in pipeline: 1) a cascade graph convolutional recurrent neural network to accurately extract the spatial-temporal correlations within citywide historical passenger demand data; 2) two multi-layer LSTM networks to represent the external meteorological data and time meta, respectively; 3) an encoder-decoder module to fuse the above two parts and decode the representation to predict over multi-steps into the future. The experimental results on three real-world datasets demonstrate that our model can achieve accurate prediction and outperform the most discriminative state-of-the-art methods.
Bai, L, Yao, L, Kanhere, SS, Yang, Z, Chu, J & Wang, X 2019, 'Passenger Demand Forecasting with Multi-Task Convolutional Recurrent Neural Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Macau, China, pp. 29-42.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. Accurate prediction of passenger demands for taxis is vital for reducing the waiting time of passengers and drivers in large cities as we move towards smart transportation systems. However, existing works are limited in fully utilizing multi-modal features. First, these models either include excessive data from weakly correlated regions or neglect the correlations with similar but spatially distant regions. Second, they incorporate the influence of external factors (e.g., weather, holidays) in a simplistic manner by directly mapping external features to demands through fully-connected layers and thus result in substantial bias as the influence of external factors is not unified. To tackle these problems, we propose an end-to-end multi-task deep learning model for passenger demand prediction. First, we select similar regions for each target region based on their Point-of-Interest (PoI) information or historical demand and utilize Convolutional Neural Networks (CNN) to extract their spatial correlations. Second, we map external factors to future demand levels as part of the multi-task learning framework to further boost prediction accuracy. We conduct experiments on a large-scale real-world dataset collected from a city in China with a population of 1.5 million. The results demonstrate that our model significantly outperforms the state-of-the-art and a set of baseline methods.
Bakhanova, E, Voinov, A, Raffe, W & Garcia, J 2019, 'Gamification of participatory modeling in the context of sustainable development: existing and new solutions', 23rd International Congress on Modelling and Simulation, Canberra.
View description>>
Serious games and gamification tools have gradually expanded their application in participatory settings, while already being widely used in the context of sustainable development in general. Their popularity is explained by their ability to create an engaging and experimental environment, which evokes critical thought, meaningful interaction between the participants and experience-based learning. Although game design principles and tools are, to a large extent, universal, their application differs from one field to another. The simulation modelling field has a long history of using game elements to make complicated models more user-friendly and understandable for wider audiences. Management flight simulators, microworlds, policy exercises and strategic simulations are among the most common examples. Meanwhile, the urban planning field often makes use of interactive 3D maps, including the most recent advancements in applying XR technologies to make the interaction with the system more tactile and collaborative in a multi-user setting. Serious games are used in participatory projects as a supplementary approach to provoking discussion among the stakeholders and stimulating critical thinking. Gamification in the participatory modeling field is commonly used at the initial and final stages of the process or by incorporating a role-playing component into the process (e.g. in companion modeling and social simulations). Based on the existing research, we have two main observations: (1) in each of the above-mentioned fields there are traditional ways of using gamification and visualization instruments and there is a lack of ‘cross-pollination’ between various application fields in terms of choosing gamification tools, (2) gamification tools are commonly used at one or two stages of the participatory modeling process but rarely over the entire process of participatory modeling. We suggest that by introducing more gamification elements throughout the whole PM proce...
Bano, M & Zowghi, D 2019, 'Gender disparity in the governance of software engineering conferences.', GE@ICSE, International Workshop on Gender Equality in Software Engineering (GE), IEEE / ACM, Montreal, Canada, pp. 21-24.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In this paper, we discuss gender disparity in software engineering (SE) conferences. We have examined the roles of General Chair, Program Chair, and main track Program Committee members in six highly ranked conferences in SE for a period of ten years in order to understand the pattern of gender disparity in visible roles. We also present the opinions elicited from ten participants on this topic, who have served at some of these SE conferences in leadership roles. Our aim is to reflect on the current state and initiate the debate on gender equality in SE conferences.
Binh, HTT, Toulouse, M, Yu, S, Bui, M, Ha, LM, Hu, Z & Thang, HQ 1970, 'Foreword', ACM International Conference Proceeding Series, pp. VII-VIII.
Blanco-Mesa, F, Leon-Castro, E & Merigo, JM 2019, 'The IOWAWA operator with Bonferroni means', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE.
View/Download from: Publisher's site
View description>>
The induced ordered weighted average is an averaging aggregation operator that provides a parameterized family of aggregation operators between the minimum and the maximum. This paper presents a new operator that combines in the same formulation the IOWA operator and the Bonferroni means. This new operator is called the Bonferroni Induced Ordered Weighted Averaging-Weighted Average (BON-IOWAWA) operator. The main advantage of this approach is the possibility of reordering the results according to complex ranking processes based on order-inducing variables.
Boroon, L, Abedin, B & Erfani, S 2019, 'Addiction to Social Network Site Use: An Information Technology Identity Perspective', ACIS 2019 Proceedings, Australasian Conference on Information Systems, AIS Electronic Library (AISeL)., Perth, pp. 1-8.
View description>>
As the popularity of social network sites (SNSs) has grown substantially over the past years, several negative effects of using SNSs have been experienced by users and reported by Information Systems (IS) researchers. Addiction to SNSs is one such negative experience, which has widely been considered from a psychopathology perspective. While there are increasingly more studies in IS on this phenomenon, it is still unclear what characterises addiction to SNSs and what may influence it. This in-progress study adopts an information technology (IT) identity perspective and applies Dual Systems Theory as well as Protection Motivation Theory to provide an initial understanding of what impacts SNS addiction and how to combat it from an IT/SNS identity perspective. To achieve these objectives, we reviewed the literature and proposed a preliminary framework of addiction to SNS use. We then discuss research implications and propose ideas for future studies.
Brunker, A, Catchpoole, D, Kennedy, P, Simoff, S & Nguyen, QV 2019, 'Two-Dimensional Immersive Cohort Analysis Supporting Personalised Medical Treatment', 2019 23rd International Conference in Information Visualization – Part II, 2019 23rd International Conference in Information Visualization – Part II, IEEE, Adelaide, Australia, pp. 34-41.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Genomic data are large and complex, which makes them challenging to visualize effectively on ordinary screens due to the limited display space. Large, high-resolution displays can show more information at once for better comprehension of the visualization. This paper presents a two-dimensional interactive visualization system and a supporting algorithm for the analysis of large multi-dimensional genomic data that can be used on both ordinary displays and in immersive environments. We provide both a view of the entire patient cohort in the similarity space and the genomic details for comparison among the patients. Through the similarity space and the selected genes of interest, we are able to perceive the genetic similarity throughout the cohort. In the linked heat map visualisation of the selected genes, we apply hierarchical clustering on both the horizontal and vertical axes to group together the genetically similar patients. We demonstrate the effectiveness of the visualization with two case studies on pediatric cancer patients suffering from Acute Lymphoblastic Leukemia (ALL) and from Rhabdomyosarcoma (RMS).
Cancino, CA, Amirbagheri, K, Merigó, JM & Dessouky, Y 1970, 'Evolution of the academic research on supply chain and global warming', Proceedings of International Conference on Computers and Industrial Engineering, CIE.
View description>>
The aim of this work is to study supply chain publications with a focus on global warming effects using a bibliometric approach. The study uses the Web of Science Core Collection database to analyze the bibliometric data from 1994 to 2018. The main objective is to identify the leading trends in this area by analyzing the most significant journals, papers, institutions and supra-regions. This work also develops a graphical mapping of the bibliographic material by using visualization of similarities (VOS) viewer software. With this software, the study analyses co-citations of journals and co-occurrence of author keywords. The results show the growth of the development of supply chain models that consider global warming factors between 2014-2018, which is consistent with the general public awareness of climate change. The researchers from Imperial College London and Hong Kong Polytechnic University have the greatest number of publications in this area. In terms of supra-regions, more than 25% of the publications come from Asian universities, followed by American and British universities with 20%. Given the growing global concern about the effects of supply chains on global warming, it is expected that the number of publications from different parts of the world, as well as the number of citations, will strongly increase.
Cao, Z, Chang, Y-C, Prasad, M, Tanveer, M & Lin, C-T 2019, 'Tensor Decomposition for EEG Signals Retrieval', 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Bari, Italy, pp. 2423-2427.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Prior studies have proposed methods to recover multi-channel electroencephalography (EEG) signal ensembles from their partially sampled entries. These methods depend on spatial scenarios, yet few approaches aim at a temporal reconstruction with lower loss. The goal of this study is to retrieve the temporal EEG signals independently, which has been overlooked in data pre-processing. We modeled the EEG signals with a tensor-based approach, namely nonlinear Canonical Polyadic Decomposition (CPD). In this study, we collected EEG signals during a resting-state task. We defined the source signals as the original EEG signals, and the generated tensor is perturbed by Gaussian noise with a signal-to-noise ratio of 0 dB. The sources are separated using a basic nonnegative CPD and the relative errors on the estimates of the factor matrices. Comparing the similarities between the source signals and their recovered versions, the results showed a significantly high correlation of over 95%. Our findings reveal the possibility of recoverable temporal signals in EEG applications.
Cetindamar, D, Kocaoglu, D, Lammers, T & Merigo, JM 2019, 'A Bibliometric Analysis of Technology Management Research at PICMET for 2009–2018', 2019 Portland International Conference on Management of Engineering and Technology (PICMET), 2019 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, Oregon, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2019 PICMET. The Portland International Centre for Management of Engineering and Technology (PICMET) was established in 1989. It has since become one of the leading organizations in the field of management of engineering and technology in the world. PICMET provides a strong platform for academicians, industry professionals and government representatives to exchange new knowledge derived from both research and implementation of technology management. To celebrate its 30-year journey, and to show the trends in technology management research and implementation over the past ten years (2009-2018), this paper presents a bibliometric analysis of the more than 3000 papers accepted for inclusion in PICMET conferences. The study highlights the topics, authors, journals and countries where significant research on technology management is conducted.
Chang, Y-C, Dostovalova, A, Lin, C-T & Kim, J 2019, 'Intelligent Multi-agent Coordination and Learning', 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Bari, Italy.
View/Download from: Publisher's site
View description>>
We present a hierarchical neural-fuzzy system for precision coordination of multiple mobile agents for simultaneous arrival at their destination positions in a cluttered urban environment. We assume that each agent is equipped with a 2D scanning LiDAR to make movement decisions based on local distance and bearing information. Two solution approaches are considered and compared. Both of them are structured around a hierarchical arrangement of controller modules to enable synchronisation of the agents' arrival times while avoiding collisions with obstacles. The first approach is based on cascading SONFIN (Self-Organizing Neural Fuzzy Inference Network) controllers, and the second approach considers the use of an LSTM (Long Short-Term Memory) recurrent neural network module alongside SONFIN modules. Parameters of all the controllers are optimised using the Particle Swarm Optimization algorithm. A physics-based simulator, Webots, is used as a training and testing environment for the two learning models to facilitate the deployment of code to hardware, which will follow in the next phase of our research.
Chen, F, Pan, S, Jiang, J, Huo, H & Long, G 2019, 'DAGCN: Dual Attention Graph Convolutional Networks', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary.
View/Download from: Publisher's site
View description>>
Graph convolutional networks (GCNs) have recently become one of the most powerful tools for graph analytics tasks in numerous applications, ranging from social networks and natural language processing to bioinformatics and chemoinformatics, thanks to their ability to capture the complex relationships between concepts. At present, the vast majority of GCNs use a neighborhood aggregation framework to learn a continuous and compact vector, then perform a pooling operation to generalize graph embedding for the classification task. These approaches have two disadvantages in the graph classification task: (1) when only the largest sub-graph structure (k-hop neighbor) is used for neighborhood aggregation, a large amount of early-stage information is lost during the graph convolution step; (2) simple average/sum pooling or max pooling is utilized, which loses the characteristics of each node and the topology between nodes. In this paper, we propose a novel framework called dual attention graph convolutional networks (DAGCN) to address these problems. DAGCN automatically learns the importance of neighbors at different hops using a novel attention graph convolution layer, and then employs a second attention component, a self-attention pooling layer, to generalize the graph representation from the various aspects of a matrix graph embedding. The dual attention network is trained in an end-to-end manner for the graph classification task. We compare our model with state-of-the-art graph kernels and other deep learning methods. The experimental results show that our framework not only outperforms other baselines but also achieves a better rate of convergence.
Chen, X, Huang, C, Zhang, X, Wang, X, Liu, W & Yao, L 2019, 'Expert2Vec: Distributed Expert Representation Learning in Question Answering Community', Advanced Data Mining and Applications (LNAI), International Conference on Advanced Data Mining and Applications, Springer International Publishing, Dalian, China, pp. 288-301.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. Community question answering (CQA) has attracted increasing attention recently due to its potential as a de facto knowledge base. Expert finding in CQA websites also has considerably broad applications. Stack Overflow is one of the most popular question answering platforms, which is often utilized by recent studies on the recommendation of the domain expert. Despite the substantial progress seen recently, there is still a lack of research on the direct representation of expert users. Hence we propose Expert2Vec, a distributed expert representation learning approach for the question answering community, to boost the recommendation of the domain expert. Word2Vec is used to preprocess the Stack Overflow dataset, which helps to generate representations of domain topics. Weight rankings are then extracted based on domains, and a variational autoencoder (VAE) is utilized to generate representations of user-topic information. This finally adopts the reinforcement learning framework with the user-topic matrix to improve it internally. Experiments show the adequate performance of our proposed approaches in the recommendation system.
Cheng, Y, Yang, L, Yu, S & Ma, J 2019, 'Achieving Efficient and Verifiable Assured Deletion for Outsourced Data Based on Access Right Revocation', Cryptology and Network Security, International Conference on Cryptology and Network Security, Springer International Publishing, Fuzhou, China, pp. 392-411.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. With the growing use of cloud storage facilities, outsourced data security becomes a major concern. However, assured deletion for outsourced data, an important issue for users, has received less attention in academia and industry. Most traditional deletion solutions require specific data organization forms or storage media and are not applicable to outsourced data. Moreover, existing access control schemes for the cloud that use ciphertext-policy attribute-based encryption (CP-ABE) focus on fine-grained access control and completely ignore data deletion. In this paper, we aim to design an effective data deletion scheme that can be applied to any CP-ABE built on a linear secret-sharing scheme. However, the challenge is how to maintain the traits of traditional CP-ABE while implementing a universal deletion method. To address this challenge, we propose a policy graph to describe relationships among users, policies, attributes, and files, and introduce a new deletion concept for CP-ABE: when all users are unauthorized for a file, we say that the file is deleted. We then build an efficient and verifiable deletion scheme on a CP-ABE. Specifically, we give an effective method to select key attributes and update the relevant parts of the ciphertext so that all users become unauthorized. Furthermore, we verify the ciphertext update performed by the third-party server through Merkle trees. We also demonstrate its universality and prove its security under the q-BDHE assumption. Finally, the performance evaluation and simulation results reveal that our solution achieves better performance compared with other schemes.
Chou, K-P, Lin, C-T & Lin, W-C 2019, 'A self-adaptive artificial bee colony algorithm with local search for TSK-type neuro-fuzzy system training', 2019 IEEE Congress on Evolutionary Computation (CEC), 2019 IEEE Congress on Evolutionary Computation (CEC), IEEE, Wellington, NEW ZEALAND, pp. 1502-1509.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In this paper, we introduce a self-adaptive artificial bee colony (ABC) algorithm for learning the parameters of a Takagi-Sugeno-Kang-type (TSK-type) neuro-fuzzy system (NFS). The proposed NFS learns fuzzy rules for the premise part of the fuzzy system using an adaptive clustering method according to the input-output data at hand for establishing the network structure. All the free parameters in the NFS, including the premise and the TSK-type consequent parameters, are optimized by the modified ABC (MABC) algorithm. Experiments involve two parts: numerical optimization problems and dynamic system identification problems. In the first part of the investigations, the proposed MABC is compared to the standard ABC on mathematical optimization problems. In the remaining experiments, the performance of the proposed method is verified against other metaheuristic methods, including differential evolution (DE), genetic algorithm (GA), particle swarm optimization (PSO) and standard ABC, to evaluate the effectiveness and feasibility of the system. The simulation results show that the proposed method provides better approximation results than those obtained by competitor methods.
Chow, D, Liu, A, Zhang, G & Lu, J 2019, 'Knowledge graph-based entity importance learning for multi-stream regression on Australian fuel price forecasting', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. A knowledge graph (KG) represents a collection of interlinked descriptions of entities. It has become a key focus for organising and utilising this type of data for applications. Many graph embedding techniques have been proposed to simplify the manipulation while preserving the inherent structure of the KG. However, scant attention has been given to the investigation of the importance of the entities (the nodes of KGs). In this paper, we propose a novel entity importance learning framework that investigates how to weight the entities and use them as prior knowledge for solving multi-stream regression problems. The framework consists of KG feature extraction, multi-stream correlation analysis, and entity importance learning. To evaluate the proposed method, we implemented the framework based on Wikidata and applied it to Australian retail fuel price forecasting. The experimental results indicate that the proposed method reduces prediction error, which supports weighted knowledge graph information as a means for improving machine learning model accuracy.
Coluccia, A, Fascista, A, Schumann, A, Sommer, L, Ghenescu, M, Piatrik, T, De Cubber, G, Nalamati, M, Kapoor, A, Saqib, M, Sharma, N, Blumenstein, M, Magoulianitis, V, Ataloglou, D, Dimou, A, Zarpalas, D, Daras, P, Craye, C, Ardjoune, S, De la Iglesia, D, Mendez, M, Dosil, R & Gonzalez, I 2019, 'Drone-vs-Bird Detection Challenge at IEEE AVSS2019', 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Taipei, Taiwan.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. This paper presents the second edition of the 'drone-vs-bird' detection challenge, launched within the activities of the 16th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS). The challenge's goal is to detect one or more drones appearing at some point in video sequences where birds may also be present, together with motion in the background or foreground. Submitted algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds, nor being confused by the rest of the scene. This paper reports on the challenge results on the 2019 dataset, which extends the first edition dataset provided by the SafeShore project with additional footage under different conditions.
Cortes, CAT, Chen, H-T & Lin, C-T 2019, 'Analysis of VR Sickness and Gait Parameters During Non-Isometric Virtual Walking with Large Translational Gain', 25th ACM Symposium on Virtual Reality Software and Technology, VRST '19: 25th ACM Symposium on Virtual Reality Software and Technology, ACM, Western Sydney Univ, Sydney, AUSTRALIA.
View/Download from: Publisher's site
Dasgupta, A, Gill, A & Hussain, F 2019, 'A Conceptual Framework for Data Governance in IoT-enabled Digital IS Ecosystems', Proceedings of the 8th International Conference on Data Science, Technology and Applications, 8th International Conference on Data Science, Technology and Applications, SCITEPRESS - Science and Technology Publications, Prague, Czech Republic, pp. 209-216.
View/Download from: Publisher's site
View description>>
Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. There is a growing interest in the use of the Internet of Things (IoT) in information systems (IS). Data or information governance is a critical component of an IoT-enabled digital IS ecosystem. There is insufficient guidance available on how to effectively establish data governance for an IoT-enabled digital IS ecosystem. The introduction of new regulations related to privacy, such as the General Data Protection Regulation (GDPR), as well as existing regulations such as the Health Insurance Portability and Accountability Act (HIPAA), has added complexity to this issue of data governance. This could possibly hinder effective IoT adoption in the healthcare digital IS ecosystem. This paper enhances the 4I framework, which is iteratively developed and updated using the design science research (DSR) method, to address the pressing need for organizations to have a robust governance model providing coverage across the entire data lifecycle in an IoT-enabled digital IS ecosystem. The 4I framework has four major phases: Identify, Insulate, Inspect and Improve. The application of this framework is demonstrated with the help of a healthcare case study. It is anticipated that the proposed framework can help practitioners to identify, insulate, inspect and improve governance of data in an IoT-enabled digital IS ecosystem.
Dasgupta, A, Gill, AQ & Hussain, FK 2019, 'A Review of General Data Protection Regulation for Supply Chain Ecosystem.', IMIS, International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Springer, Sydney, Australia, pp. 456-465.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. The data-intensive digital supply chain management (SCM) ecosystems seem to be impacted by recent changes in regulations and advancements in technologies such as Artificial Intelligence, Big Data, Analytics, Networking and IoT, including the proliferation of less expensive hardware devices. There is limited guidance available on how to govern the logistics sector, particularly from a regulatory compliance perspective. Through this paper, we investigate the impact of the General Data Protection Regulation (GDPR) on digitized SCM. The key questions are: What are the GDPR-specific legal obligations? What is the best approach to manage data access, quality, privacy, security and ownership effectively in SCM? This research paper aims to assist researchers and practitioners to understand the impact of GDPR on SCM, and provides the 4I (Identify, Insulate, Inspect, Improve) Framework and its applicability to streamline GDPR compliance activities.
Do, T, Lin, C, Cortes, C, Singh, A, Liu, J, Chen, H & Gramann, K 2019, 'Human brain dynamics during navigation with natural walking under different workload conditions in virtual reality by using the mobile brain/body imaging approach', Society for Neuroscience, Society for Neuroscience, Chicago.
Dong, M, Yao, L, Wang, X, Benatallah, B & Huang, C 2019, 'Similarity-Aware Deep Attentive Model for Clickbait Detection', Advances in Knowledge Discovery and Data Mining (LNAI), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Macau, China, pp. 56-69.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. Clickbait is a type of web content advertisement designed to entice readers into clicking accompanying links. Usually, such links lead to articles that are either misleading or non-informative, making the detection of clickbait essential for our daily lives. Automated clickbait detection is a relatively new research topic. Most recent work handles the clickbait detection problem with deep learning approaches to extract features from the meta-data of content. However, little attention has been paid to the relationship between the misleading titles and the target content, which we found to be an important clue for enhancing clickbait detection. In this work, we propose a deep similarity-aware attentive model to capture and represent such similarities with better expressiveness. In particular, we present ways of either using similarity only or integrating it with other available quality features for clickbait detection. We evaluate our model on two benchmark datasets, and the experimental results demonstrate the effectiveness of our approach by outperforming a series of competitive state-of-the-art and baseline methods.
Dong, M, Yao, L, Wang, X, Benatallah, B, Zhang, X & Sheng, QZ 2019, 'Dual-stream Self-Attentive Random Forest for False Information Detection', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, HUNGARY.
View/Download from: Publisher's site
Dutta, LK, Xiong, J, Gui, L, Liu, B & Shi, Z 2019, 'On Hit Rate Improving and Energy Consumption Minimizing in Cache-Based Convergent Overlay Network on High-speed Train', 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Jeju, Korea (South).
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Content caching and minimizing energy consumption to protract user device lifetime while bolstering the hit rate when mobility exceeds 300 km/h is challenging. In this paper, we consider a cache-based convergent overlay network comprising cellular base stations and terrestrial broadcast networks as an effective means to deliver services on a high-speed train (HST). The most popular contents, with high Zipf rank, are pushed and cached via the broadcast network in user devices and the relay system on the HST to offload network traffic and improve user experience while conserving energy. With a cache in the relay system (RS), users can get the services without delay. However, if users' requests for contents are served by the cellular network from the cloud, they face a throughput bottleneck with delay and increased transmission time. This eventually leads to an increase in user device energy consumption. A popular content caching scheme with constraints of location, limited storage size and limited user device power is modeled as a closed-form expression to maximize users' local hit rate with minimal power consumption. We propose an algorithm to cache the contents in user devices to minimize the total energy consumption of user devices. Moreover, the user's content activity behavior is retained to minimize user device energy via a dynamic cache space algorithm. Simulation results justify that the proposed scheme can effectively improve the cache hit rate and reduce the power consumption of user devices.
Erfani, S, Erfani, SM & Ramin, K 2019, 'A Smartphone Health Application To Facilitate Falls Prevention Practices For Older Adults', Proceedings of the 27th European Conference on Information Systems (ECIS), European Conference on Information Systems, AISEL, Stockholm & Uppsala, Sweden, pp. 1-11.
View description>>
Falls pose a serious threat to older adults’ health and their quality of life. Web-based technologies such as smartphones have emerged as vital tools for health-related behavioural interventions, but little is known about the potential benefits of a smartphone health application (app) in applying falls prevention practices for older adults. The research presented in this paper sought to answer the question: what are the key features needed in a smartphone health app intended to support falls prevention practices for older adults, increase their autonomy and improve their quality of life? A comprehensive literature review of studies conducted in public health, aged care, mobile health and mobile app design disciplines was undertaken and a conceptual framework for a smartphone app was proposed. The framework depicts the features of a smartphone app that can facilitate the implementation of falls prevention practices, including exercise programs; establishing a healthy diet and falls prevention education. Translation of the conceptual framework into a practical app will reduce falls in older adults, improve their sense of belongingness, and consequently enable better autonomy and quality of life.
Erfani, SS, Erfani, SM & Ramin, K 2019, 'Facebook support groups for ovarian cancer carers: A qualitative evaluation', 25th Americas Conference on Information Systems, AMCIS 2019, Americas Conference on Information Systems, Curran, Cancun, Mexico, pp. 1560-1564.
View description>>
A cancer diagnosis takes a great toll on the health of both patients and their carers. Online cancer support groups, including cancer support Facebook groups, have evolved as new sources of support for cancer patients and their carers. However, little is known about how cancer carers make use of such online resources; most research attention has been paid to Facebook support groups for cancer patients. This research is designed to determine the content of communication in ovarian cancer Facebook groups, and the impact of those communications on carers of ovarian cancer patients. The study will contribute to knowledge about how cancer patients' carers use Facebook cancer support groups and the impact of this use on their health and quality of life.
Fan, X, Li, B, Sisson, SA, Li, C & Chen, L 2019, 'Scalable deep generative relational models with high-order node dependence', Advances in Neural Information Processing Systems, Vancouver, Canada.
View description>>
We propose a probabilistic framework for modelling and exploring the latent structure of relational data. Given feature information for the nodes in a network, the scalable deep generative relational model (SDREM) builds a deep network architecture that can approximate potential nonlinear mappings between nodes' feature information and the nodes' latent representations. Our contribution is two-fold: (1) We incorporate high-order neighbourhood structure information to generate the latent representations at each node, which vary smoothly over the network. (2) Due to the Dirichlet random variable structure of the latent representations, we introduce a novel data augmentation trick which permits efficient Gibbs sampling. The SDREM can be used for large sparse networks as its computational cost scales with the number of positive links. We demonstrate its competitive performance through improved link prediction performance on a range of real-world datasets.
Fang, Z, Lu, J, Liu, F & Zhang, G 2019, 'Unsupervised Domain Adaptation with Sphere Retracting Transformation', 2019 International Joint Conference on Neural Networks (IJCNN), IEEE.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Unsupervised domain adaptation aims to leverage the knowledge in training data (source domain) to improve the performance of tasks in the remaining unlabeled data (target domain) by mitigating the effect of the distribution discrepancy. Existing approaches resolve this problem mainly by 1) mapping data into a latent space where the distribution discrepancy between two domains is reduced; or 2) reducing the domain shift by weighting the source domain. However, most of these approaches share a common issue that they neglect inter-class margins while matching distributions, which has a significant impact on classification performance. In this paper, we analyze the issue from the theoretical aspect and propose a novel unsupervised domain adaptation approach: Sphere Retracting Transformation (SRT), which reduces the distribution discrepancy and increases inter-class margins. We implement SRT, according to our theoretical analysis by (1) assigning class-specific weights for data in the source domain, and (2) minimizing the intra-class variations. Experiments confirm that the SRT approach outperforms several competitive approaches for standard domain adaptation benchmarks.
Faro, B, Abedin, B & Kozanoglu, DC 2019, 'Continuous Transformation of Public Sector Organisations in the Digital Era', AMCIS, Americas Conference on Information Systems, Association for Information Systems, Cancun, Mexico, pp. 1-5.
View description>>
© 2019 Association for Information Systems. All rights reserved. Public-sector organisations need to continuously transform to retain their legitimacy by meeting their obligations to citizens, central governments, and laws. The digital era brings new challenges for public-sector organisations, which historically are slow to adopt change. This is significant, as policymakers are concerned that unexpected disruptions could take away their governance power. This research in progress aims to clarify how public-sector organisations respond to digital transformation drivers. The literature review and expert interviews highlight that organisations require both existing and novel organisational capabilities to utilise digital technologies in response to transformation drivers. This research highlights the gap related to organisational capabilities for existing and novel organisational forms.
Fei, X, Li, K, Yu, S & Li, K 2019, 'An Economical and High-Quality Encryption Scheme for Cloud Servers with GPUs', 2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), IEEE, Gold Coast, Australia.
View/Download from: Publisher's site
View description>>
Motivated by cloud servers performing heavy encryption of outsourced data for diverse devices, an Economical and High-Quality Encryption Scheme is proposed to reduce the servers' energy consumption while maintaining high-quality service. The objective of the scheme is to minimize a cost that combines economy and service quality. To achieve this objective, a two-phase scoring mechanism is proposed, and an algorithm implementing the scheme on top of this scoring mechanism is designed. To evaluate the scheme, experiments were performed on a heterogeneous platform. The experimental results show that, compared with the original algorithm, the encryption algorithm saves 47.8% of energy consumption on average while only slightly increasing the delay rate (by 0.93/10000).
Ferrari, A, Spoletini, P, Bano, M & Zowghi, D 2019, 'Learning Requirements Elicitation Interviews with Role-Playing, Self-Assessment and Peer-Review', RE, International Requirements Engineering Conference, IEEE, Jeju Island, Korea (South), pp. 28-39.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Interviews are largely used in the practice of requirements elicitation. Nevertheless, performing an effective interview often depends on soft skills and on knowledge acquired through experience. When it comes to requirements engineering education and training (REET), limited resources and few well-founded pedagogical approaches are available to allow students to acquire and improve their skills as interviewers. This paper presents a novel pedagogical approach that combines role-playing, peer-review and self-assessment to enable students to reflect on their mistakes and improve their interview skills. We evaluate the approach through a controlled quasi-experiment. The study shows that the approach significantly reduces the number of mistakes made by the students. Feedback from the participants confirms the usefulness and ease of the proposed training. This work contributes to the body of knowledge of REET with an empirically evaluated method for teaching interviews. Furthermore, we share the pedagogical material used, to enable other educators to apply and possibly tailor the approach.
Frawley, JK, Wakefield, J, Dyson, LE & Tyler, J 2015, 'Building graduate attributes using student-generated screencasts', ASCILITE 2015 - Australasian Society for Computers in Learning and Tertiary Education, Conference Proceedings, Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, ASCILITE, Perth, Australia, pp. 100-111.
View description>>
There has been an increasing emphasis in recent years on developing the “soft” skills, or graduate attributes, that students need once they finish their university studies in addition to the specific domain knowledge of their discipline. This paper describes an innovative approach to developing graduate attributes through the introduction of an optional assignment in which first-year accounting students designed and developed screencasts explaining key concepts to their peers. Screencasts have been used in recent years for teaching but the approach of students, rather than teachers, making screencasts is far less common. Quantitative and qualitative analysis of student surveys showed that, in addition to improving their accounting knowledge and providing a fun and different way of learning accounting, the assignment contributed to the development and expression of a number of graduate attributes. These included the students' ability to communicate ideas to others and skills in multimedia, creativity, teamwork and self-directed learning.
Froissard, JC, Liu, D, Richards, D & Atif, A 2017, 'A learning analytics pilot in Moodle and its impact on developing organisational capacity in a university', ASCILITE 2017 - Conference Proceedings - 34th International Conference of Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education, ASCILITE, Toowoomba, QLD, pp. 73-77.
View description>>
Moodle is used as a learning management system around the world. However, integrated learning analytics solutions for Moodle that provide actionable information and allow teachers to efficiently use it to connect with their students are lacking. The enhanced Moodle Engagement Analytics Plugin (MEAP), presented at ASCILITE2015, enabled teachers to identify and contact students at risk of not completing their units. Here, we discuss a pilot using MEAP in 36 units at Macquarie University, a metropolitan Australian university. We use existing models for developing organisational capacity in learning analytics, and for embedding learning analytics into the practice of teaching and learning, to discuss a range of issues arising from the pilot. We outline the interaction and interdependency of five stages during the pilot: technology infrastructure, analytics tools and applications; policies, processes, practices and workflows; values and skills; culture and behaviour; and leadership. We conclude that one of the most significant stages is to develop a culture and behaviour around learning analytics.
Gaisbauer, W, Raffe, WL, Garcia, JA & Hlavacs, H 2019, 'Procedural Generation of Video Game Cities for Specific Video Game Genres Using WaveFunctionCollapse (WFC)', Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts, CHI PLAY '19: The Annual Symposium on Computer-Human Interaction in Play, ACM, Barcelona, Spain, pp. 397-404.
View/Download from: Publisher's site
View description>>
Copyright held by the owner/author(s). Virtual cities as background scenarios can be used for many 3D video game genres like action. However, the procedural generation of virtual cities for specific video game genres is an on-going research problem. In this paper, we seek to establish a grounding for future work into city generation for specific game genres by exploring how game designers approach existing generation tool-sets. Firstly, we look at the video game city Skara Brae from the party-based role-playing game The Bard’s Tale and try to replicate it using the Wave Function Collapse (WFC) approach to procedural generation. We show in two experimental conditions which parameters for WFC are suitable for replicating the city. Secondly, a pilot user study with eight users shows how they approach creating different video game cities after they preselect a video game genre. The users’ video game level ideas are then discussed, and different output levels are generated using WFC.
Gao, Y, Niu, J, Liu, X, Mao, K & Yu, S 2019, 'MemNetAR: Memory Network with Adversative Relation for Target-Level Sentiment Classification', GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
Garcia, JA, Sundara, N, Tabor, G, Gay, VC & Leong, TW 2019, 'Solitaire Fitness: Design of an asynchronous exergame for the elderly to enhance cognitive and physical ability', 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The use of exergames has shown positive results in encouraging the elderly to increase their motivation towards physical activity and rehabilitation. These games usually offer playful routines that require players to perform full-body movements in order to interact with the game. While this is often well received by elderly users, the approach has some limitations that can lead to negative effects in the aged cohort. The main one is that gameplay and exercise must happen concurrently. This, unfortunately, places limitations on elderly users and restricts the range of exercises that can be delivered. Prior studies have also revealed that while the aged cohort often finds this approach enjoyable, they are more inclined to exercise in more traditional ways. This paper describes the design and development of an asynchronous game, called Solitaire Fitness, in which physical exercise and cognitive gameplay do not occur at the same time. The game is designed to enhance both cognitive and physical abilities. It seamlessly links to a well-established card game, solitaire, lets the elderly choose a form of exercise they are familiar with, and lets them exercise at their own pace, allowing them to fully immerse themselves in gameplay and ultimately increasing their motivation towards a healthy, active lifestyle.
Gehrke, L, Akman, S, Lopes, P, Chen, A, Singh, AK, Chen, H-T, Lin, C-T & Gramann, K 2019, 'Detecting Visuo-Haptic Mismatches in Virtual Reality using the Prediction Error Negativity of Event-Related Brain Potentials', Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19: CHI Conference on Human Factors in Computing Systems, ACM, Glasgow, Scotland.
View/Download from: Publisher's site
View description>>
© 2019 Copyright held by the owner/author(s). Designing immersion is the key challenge in virtual reality; this challenge has driven advancements in displays, rendering and recently, haptics. To increase our sense of physical immersion, for instance, vibrotactile gloves render the sense of touching, while electrical muscle stimulation (EMS) renders forces. Unfortunately, the established metric to assess the effectiveness of haptic devices relies on the user’s subjective interpretation of unspecific, yet standardized, questions. Here, we explore a new approach to detect a conflict in visuo-haptic integration (e.g., inadequate haptic feedback based on poorly configured collision detection) using electroencephalography (EEG). We propose analyzing event-related potentials (ERPs) during interaction with virtual objects. In our study, participants touched virtual objects in three conditions and received either no haptic feedback, vibration, or vibration and EMS feedback. To provoke a brain response in unrealistic VR interaction, we also presented the feedback prematurely in 25% of the trials. We found that the early negativity component of the ERP (so called prediction error) was more pronounced in the mismatch trials, indicating we successfully detected haptic conflicts using our technique. Our results are a first step towards using ERPs to automatically detect visuo-haptic mismatches in VR, such as those that can cause a loss of the user’s immersion.
Giabbanelli, PJ, Voinov, AA, Castellani, B & Tornberg, P 2019, 'Ideal, Best, and Emerging Practices in Creating Artificial Societies', 2019 Spring Simulation Conference (SpringSim), IEEE, Tucson, AZ, USA.
View/Download from: Publisher's site
View description>>
© 2019 Society for Modeling & Simulation International (SCS). Artificial societies used to guide and evaluate policies should be built by following “best practices”. However, this goal may be challenged by the complexity of artificial societies and the interdependence of their sub-systems (e.g., built environment, social norms). We created a list of seven practices based on simulation methods, specific aspects of quantitative individual models, and data-driven modeling. By evaluating published models for public health with respect to these ideal practices, we noted significant gaps between current and ideal practices on key items such as replicability and uncertainty. We outlined opportunities to address such gaps, such as integrative models and advances in the computational machinery used to build simulations.
Gong, C, Shi, K & Niu, Z 2019, 'Hierarchical Text-Label Integrated Attention Network for Document Classification', Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference, HPCCT 2019: 2019 The 3rd High Performance Computing and Cluster Technologies Conference, ACM.
View/Download from: Publisher's site
Gupta, D, Sarma, HJ, Mishra, K & Prasad, M 2019, 'Regularized Universum twin support vector machine for classification of EEG Signal', 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Bari, Italy, pp. 2298-2304.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The electroencephalogram (EEG) signal is used for the detection of neurological disorders such as epilepsy, sleep disorders and many more. EEG signals carry hidden information about the distribution of the data, which may comprise a large volume of poor and noisy signal. To reduce outlier effects and noise, incorporating prior knowledge into the model through universum data can enhance its generalization ability. This paper proposes a regularized universum twin support vector machine (RUTWSVM) for classification of healthy and seizure EEG signals. Here, the universum data points are selected in two ways: (i) universum data generated from the healthy and seizure EEG signals themselves, and (ii) interictal EEG signals used as universum data, which may help to handle outlier effects. Further, various feature selection techniques are applied to extract the important noise-free features from the EEG signals. We perform a comparative analysis of the proposed RUTWSVM against USVM and UTWSVM for classifying the EEG signals as well as benchmark real-world datasets. The experimental results clearly exhibit the applicability and usability of the proposed RUTWSVM with interictal EEG signals as universum data points, on both the EEG signals and the benchmark real-world datasets.
Halkon, B, Rauter, A, Oberst, S & Marburg, S 2019, 'Research and development of an air-puff excitation system for lightweight structures', 8th IOMAC - International Operational Modal Analysis Conference, Proceedings, International Operational Modal Analysis Conference, Curran Associates, Copenhagen, Denmark, pp. 627-634.
View description>>
Lightweight, thin-walled structures appear in numerous engineering and natural structures. Due to their sensitivity, vibration excitation by, now traditional, contacting techniques, such as modally-tuned impact hammers or electrodynamic shakers, to investigate their dynamics is challenging since it typically adds substantial mass and/or stiffness at the excitation location. The research presented in this article, therefore, is intended to yield a system for the non-contact excitation of thin-walled structures through small, controlled blasts of air. An air-puff system, consisting of two fast-acting solenoid-controlled valves, a small air outlet nozzle and bespoke control software with a programmable valve control sequence, is researched and developed. The excitation impulse characteristics are investigated experimentally and described in detail for varying input control parameters. Ultimately, suitability of the system for the excitation of thin-walled structures is explored, for both a 3D-printed micro-satellite panel and a natural bee honeycomb, with promising results when compared to that of an impact hammer.
Haque, MN, Mathieson, L & Moscato, P 2019, 'A memetic algorithm approach to network alignment', Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '19: Genetic and Evolutionary Computation Conference, ACM, Prague, Czech Republic, pp. 258-265.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Given two graphs modelling related, but possibly distinct, networks, the alignment of the networks can help identify significant structures and substructures which may relate to the functional purpose of the network components. The Network Alignment Problem is the NP-hard computational formalisation of this goal and is a useful technique in a variety of data mining and knowledge discovery domains. In this paper we develop a memetic algorithm to solve the Network Alignment Problem and demonstrate the effectiveness of the approach on a series of biological networks against the existing state-of-the-art alignment tools. We also demonstrate the use of network alignment as a clustering and classification tool on two mental health disorder diagnostic databases.
Hiroto, M, Keiichi, S, Devitt, S, Rui, W, Yukito, N & Jaw-Shen, T 2019, 'Packaging Large-scale Superconducting Quantum Computer with Airbridge', Bulletin of the American Physical Society, APS March Meeting 2019, Boston, Massachusetts.
Inibhunu, C, Jalali, R, Doyle, I, Gates, A, Madill, J & McGregor, C 2019, 'Adaptive API for Real-Time Streaming Analytics as a Service', 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Berlin, Germany, pp. 3472-3477.
View/Download from: Publisher's site
View description>>
A significant amount of physiological data is generated from bedside monitors and sensors in neonatal intensive care units (NICUs) every second; however, facilitating the ingestion of such data into multiple analytical processes in a real-time streaming architecture remains a central challenge for systems that seek effective scaling of real-time data streams. In this paper we demonstrate an adaptive streaming application program interface (API) that provides real-time streams of data for consumption by multiple analytics services, enabling real-time exploration and knowledge discovery from live data streams. We have designed, developed and evaluated an adaptive API that ingests data streamed from bedside monitors, passes it to a middleware for standardization and structuring, and finally distributes it as a service for multiple analytical services to consume and process further. This approach allows: (a) multiple applications to process the same data streams using multiple algorithms; (b) easy scalability to manage diverse data streams; (c) processing of analytics for each patient monitored at the NICU; (d) the ability to integrate analytics that evaluate multiple patients at the same point in time; and (e) a robust automated process with no manual interruptions that effectively adapts to changing data volumes when the number of bedside monitors increases or the amount of data emitted by a monitor changes. The proposed architecture has been instantiated within the Artemis Platform, which provides a framework for real-time, high-speed physiological data collection from multiple and diverse bedside monitors and sensors in NICUs across multiple hospitals. Results indicate this is a robust approach that can scale effectively as data volumes increase or data sources change.
Islam, MR, Helen Lu, H, Hossain, MJ & Li, L 2019, 'Improving Power Quality of Distributed PV-EV Distribution Grid by Mitigating Unbalance', 2019 IEEE International Conference on Industrial Technology (ICIT), IEEE, Melbourne, Australia, pp. 643-648.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The increasing price of fossil fuels and growing public awareness have encouraged many countries to use clean technologies in the transport and electricity-generation sectors. Smart meters can identify unbalance in a PV-EV distribution grid, which is a great concern for Distribution Service Operators (DSOs). Several researchers have assessed the degree of unbalance and its impact on distribution grids considering either distributed PV or EVs; however, little work has so far been done on mitigating unbalance. This paper measures the unbalance due to the unequal distribution of loads and sources among the three phases and assesses its impact on the power quality of the PV-EV distribution system under different PV and EV penetration levels, using DIgSILENT PowerFactory simulation software. An improved method is proposed to mitigate unbalance using a Genetic Algorithm to optimize the load distribution among phases. Finally, the efficacy of the proposed method is evaluated for unequally distributed residential and EV load scenarios, and it is found that the proposed method can reduce a significant amount of unbalance at all buses of the distribution grid.
Islam, MR, Lu, H, Fang, G, Li, L & Hossain, MJ 2019, 'Optimal Dispatch of Electrical Vehicle and PV Power to Improve the Power Quality of an Unbalanced Distribution Grid', 2019 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), IEEE, Shenzhen, China, pp. 258-263.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In the smart grid, distributed generation plays an important role in managing the distribution grid. Renewable energy sources such as PV solar and wind, together with electric vehicles' energy storage, are prominent distributed generation sources. Distributed generation (DG) reduces power loss and improves the voltage profile and reliability of a low-voltage (LV) distribution grid. However, the optimal placement and sizing of DGs needs to be planned properly. Several researchers have planned the placement of single or multiple DGs at optimum nodes with an optimal amount of power dispatch, assuming a balanced distribution grid. But DGs are connected at all nodes/buses, each requiring an optimum amount of power dispatch, and distribution grids are seldom balanced. Moreover, little research has been conducted on optimizing DG dispatch in an unbalanced distribution grid. This paper proposes a method to improve the voltage profile and reduce the total power loss by optimizing the PV and EV power dispatch in an unbalanced distribution grid. The optimization problem is solved using the Differential Evolution (DE) algorithm, and its performance is compared with the Genetic Algorithm (GA). Finally, the efficacy of the proposed method is evaluated by applying it to an Australian distribution grid. The proposed method reduces the real power loss of the network by 55.72%. It is also found that the proposed method improves the bus voltage by up to 7.65% and raises the bus voltage above 0.95 p.u. at all nodes.
Islam, MR, Lu, H, Hossain, MJ & Li, L 2019, 'Multi-objective Dynamic Phase re-configuration Technique to Mitigate the Unbalance Due to Penetration of Electric Vehicles', 2019 9th International Conference on Power and Energy Systems (ICPES), IEEE, Perth, Australia.
View/Download from: Publisher's site
Islam, MR, Lu, H, Hossain, MJ & Li, L 2019, 'Reducing Neutral Current of a higher EV Penetrated Unbalanced Distribution Grid', 2019 9th International Conference on Power and Energy Systems (ICPES), IEEE, Perth, Australia.
View/Download from: Publisher's site
Islam, MR, Lu, HH, Hossain, MJ & Li, L 2019, 'A Comparison of Performance of GA, PSO and Differential Evolution Algorithms for Dynamic Phase Reconfiguration Technology of a Smart Grid', 2019 IEEE Congress on Evolutionary Computation (CEC), IEEE, Wellington, New Zealand, pp. 858-865.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The increasing penetration of distributed generation (photovoltaic (PV) solar energy, wind energy and battery energy storage) and plug-in electric vehicles (PEVs) into the smart grid induces network imbalance, which reduces power quality. The uncertainty of demand and generation requires balancing to mitigate network imbalance. Several researchers have used various optimization methods for mitigating unbalance, but few comparative studies of these methods have been conducted so far. This paper proposes a method to mitigate unbalance and reduce the total power loss by optimizing the load distribution among phases, and compares the performance of the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms on the phase-balancing application. Finally, the efficacy of these algorithms is evaluated for the proposed unbalance-mitigation technique, and it is found that the technique using the DE algorithm can reduce a significant amount of unbalance at all buses of the distribution grid with less computational effort.
Islam, MR, Lu, HH, Hossain, MJ & Li, L 2019, 'Compensating Neutral Current, Voltage Unbalance and Improving Voltage of an Unbalanced Distribution Grid Connected with EV and Renewable Energy Sources', 2019 22nd International Conference on Electrical Machines and Systems (ICEMS), 2019 22nd International Conference on Electrical Machines and Systems (ICEMS), IEEE, Harbin, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Coordinating electric vehicle (EV) charging offers several possible levers, e.g., the charging or discharging rate and the schedule time, to improve the performance of the distribution network. However, EV charging or discharging schedules can be disrupted by the lack of punctuality of EV users and by equipment failures. The growing penetration of EVs is expected to affect distribution network performance (voltage unbalance, neutral current, and voltage) as well as generation scheduling due to EV uncertainties. Most of the proposed EV charging control strategies improve network performance while ignoring user comfort (changes to the charging or discharging rate) and the lack of punctuality of EV users. This paper investigates the impact of EV uncertainty on network imbalance in a highly penetrated distribution grid. A centralized control algorithm is proposed to coordinate the service point of connection (SPOC) of EVs and DESs among phases to mitigate network imbalance and improve the voltage. Using the proposed control approach, the number of candidate DESs required to participate is reduced, whereas EV users are not required to participate at all. Results obtained using the proposed control approach show that the neutral current is reduced by 82.98%, voltage unbalance by up to 99.08%, and the voltage is improved by up to 17.08%.
Jaschek, C, Beckmann, T, Garcia, JA & Raffe, WL 2019, 'Mysterious Murder - MCTS-driven Murder Mystery Generation', 2019 IEEE Conference on Games (CoG), 2019 IEEE Conference on Games (CoG), IEEE, London, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. We present an approach to procedurally generate the narrative of a simple murder mystery. As a basis for the simulation, we use a rule evaluation system inspired by Ceptre, which employs linear logic to resolve valid actions during each step of the simulation. We extend Ceptre's system with a concept of believable agents to make consecutive actions appear to have a causal connection so that players can comprehend the flow of events. The parts of the generated narratives are then presented to a player whose task it is to figure out who the murderer in this story could have been. Rather than aiming to replace highly authored narratives, this project generates puzzles, which may contain emerging arcs of a story as perceived by the player. While we found that even a simple rule set can create stories that are interesting to reason about, we expect that this type of system is flexible enough to create considerably more engaging stories if enough time is invested in authoring more complex rule sets.
Ji, S, Long, G, Pan, S, Zhu, T, Jiang, J & Wang, S 2019, 'Detecting Suicidal Ideation with Data Protection in Online Communities', Database Systems for Advanced Applications (LNCS), International Conference on Database Systems for Advanced Applications, Springer International Publishing, Chiang Mai, Thailand, pp. 225-229.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. Recent advances in Artificial Intelligence empower proactive social services that use virtual intelligent agents to automatically detect people's suicidal ideation. Conventional machine learning methods require a large amount of individual data, collected from users' Internet activities, smartphones and wearable healthcare devices, to be amassed in a central location. This centralized setting raises significant privacy and data misuse concerns, especially where vulnerable people are involved. To address this problem, we propose a novel data-protecting solution for learning a model. Instead of asking users to share all their personal data, our solution trains a local data-preserving model for each user, which shares only its parameters with the server rather than the user's personal information. To optimize the model's learning capability, we have developed a novel updating algorithm, called average difference descent, to aggregate parameters from different client models. An experimental study using real-world online social community datasets mimics the scenario of private communities for suicide discussion. The experimental results demonstrate the effectiveness of our solution and pave the way for mental health service providers to apply this technology in real applications.
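The abstract names the aggregation rule ("average difference descent") without giving its update equation, so the following is only a guessed sketch under the assumption that the server averages client-minus-global parameter differences and steps along that mean; the function name and `lr` step size are illustrative, not from the paper:

```python
import numpy as np

def average_difference_descent(global_params, client_params, lr=1.0):
    """Hypothetical aggregation step (assumption): average each
    client's difference from the global parameters and move the
    global model along that mean difference, scaled by lr."""
    diffs = [cp - global_params for cp in client_params]
    mean_diff = np.mean(diffs, axis=0)
    return global_params + lr * mean_diff

# Toy round: three clients sharing only parameters, not raw data.
g = np.array([0.0, 0.0])
clients = [np.array([1.0, 0.0]),
           np.array([0.0, 1.0]),
           np.array([2.0, 2.0])]
g = average_difference_descent(g, clients)
print(g)  # -> [1. 1.]
```

With `lr=1.0` this reduces to federated parameter averaging; the privacy point in the abstract is that only these parameter vectors, never the underlying user data, reach the server.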
Ji, Z, Qiao, Y, Song, F & Yun, A 2019, 'General Linear Group Action on Tensors: A Candidate for Post-quantum Cryptography', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Theory of Cryptography, Springer International Publishing, Nuremberg, pp. 251-281.
View/Download from: Publisher's site
View description>>
© 2019, International Association for Cryptologic Research. Starting from the one-way group action framework of Brassard and Yung (Crypto'90), we revisit building cryptography based on group actions. Several previous candidates for one-way group actions no longer stand, due to progress both on classical algorithms (e.g., graph isomorphism) and quantum algorithms (e.g., discrete logarithm). We propose the general linear group action on tensors as a new candidate to build cryptography based on group actions. Recent works (Futorny–Grochow–Sergeichuk, Lin. Alg. Appl., 2019) suggest that the underlying algorithmic problem, the tensor isomorphism problem, is the hardest among several isomorphism testing problems arising from areas including coding theory, computational group theory, and multivariate cryptography. We present evidence to justify the viability of this proposal from a comprehensive study of the state-of-the-art heuristic algorithms, theoretical algorithms, hardness results, as well as quantum algorithms. We then introduce a new notion called pseudorandom group actions to further develop group-action based cryptography. Briefly speaking, given a group G acting on a set S, we assume that it is hard to distinguish two distributions of (s, t): one uniformly chosen from S × S, and one where s is randomly chosen from S and t is the result of applying a random group action of g ∈ G on s. This subsumes the classical Decisional Diffie-Hellman assumption when specialized to a particular group action. We carefully analyze various attack strategies that support instantiating this assumption by the general linear group action on tensors. Finally, we construct several cryptographic primitives such as digital signatures and pseudorandom functions. We give quantum security proofs based on the one-way group action assumption and the pseudorandom group action assumption.
Jiang, S, Cao, J & Prasad, M 2019, 'The Metrics to Evaluate the Health Status of OSS Projects Based on Factor Analysis', Communications in Computer and Information Science, CCF Conference on Computer Supported Cooperative Work and Social Computing, Springer Singapore, Kunming, China, pp. 723-737.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Singapore Pte Ltd. As open-source software (OSS) development is becoming a trend, an increasing number of businesses and developers are joining OSS projects. For project managers, developers and users, understanding the current health status of a project is very important for managing a development process, selecting open-source projects for development, or adopting the software packages developed by projects. Therefore, an efficient approach to evaluating the health status of open-source projects is needed. Unfortunately, although many approaches including metrics have been proposed, they are designed in arbitrary ways. In this paper, a mathematical tool, i.e., factor analysis, is used to build a health evaluation model for OSS projects. As far as we know, this is the first time that factor analysis has been applied to evaluate OSS projects. This model is based on GitHub data and uses, as input, basic indexes that are closely related to the health status of projects. Six new synthetic metrics, namely community activity, project popularity, development activity, completeness, responsiveness and persistence, are then obtained through factor analysis and can be used to calculate the overall health score of a project. Moreover, to verify the effectiveness of this model, it is applied to some real projects, and the results show that the overall scores achieved by this model can reflect the health status of the projects.
John, BM & Jayan Chirayath Kurian, J 1970, 'Making the world a better place with Mixed Reality in Education', Perth.
John, BM & Jayan Chirayath Kurian, J 1970, 'Mixed Reality in the Information Systems pedagogy: An Authentic Learning Experience', Munich.
Kalantar, B, Ueda, N, Al-Najjar, HAH, Gibril, MBA, Lay, US & Motevalli, A 2019, 'AN EVALUATION OF LANDSLIDE SUSCEPTIBILITY MAPPING USING REMOTE SENSING DATA AND MACHINE LEARNING ALGORITHMS IN IRAN', ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Geospatial Week, Copernicus GmbH, The Netherlands, pp. 503-511.
View/Download from: Publisher's site
View description>>
Abstract. Landslides are considered one of the most prevalent and devastating forms of mass movement affecting people and their environment. The specific objective of this research paper is to investigate the application and performance of selected machine learning algorithms (MLA) in landslide susceptibility mapping in the Dodangeh watershed, Iran. A total of 112 sample points of past landslide occurrences (inventory data) were generated from existing records and field observations. In addition, fourteen landslide-conditioning parameters were derived from a DEM and other topographic databases for the modelling process. These conditioning parameters include total curvature, profile curvature, plan curvature, slope, aspect, altitude, topographic wetness index (TWI), topographic roughness index (TRI), stream transport index (STI), stream power index (SPI), lithology, land use, distance to stream, and distance to fault. Meanwhile, factor analysis was employed to optimize the landslide-conditioning parameters and the inventory data, by assessing multi-collinearity effects and detecting outliers, respectively. The inventory data were divided into a 70% (78) training dataset and a 30% (34) test dataset for model validation. The receiver operating characteristic (ROC) curve, or area under the curve (AUC) value, was used for assessing the models' performance. The findings reveal that TRI has a collinearity effect of 0.89 based on the variance inflation factor (VIF), and that based on Gini factor optimization total curvature is not significant in the model development; the two parameters are therefore excluded from the modelling. All the selected MLAs (RF, BRT, and DT) showed promising performance in landslide susceptibility mapping in the Dodangeh watershed, Iran. The ROC curves for training and validation of RF show an 86% success rate and an 83% prediction rate, implying the best model performance compared to BRT and DT, with prediction rates of 72% and 70%, respectively. In conclusion, ...
Kalantar, B, Ueda, N, Al-Najjar, HAH, Halin, AA, Ahmadi, P & Gibril, MBA 2019, 'On the effects of different groundwater inventory scenarios for spring potential mapping in Haraz, northern Iran', Earth Resources and Environmental Remote Sensing/GIS Applications X, Earth Resources and Environmental Remote Sensing/GIS Applications X, SPIE.
View/Download from: Publisher's site
View description>>
© 2019 SPIE. This study investigates the effectiveness of using groundwater inventory data for groundwater spring potential mapping in the Haraz watershed located in Northern Iran. From a total of 917 groundwater inventory points, six random inventory scenarios of 917, 690, 450, 230, 92, and 46 points were generated. We trained two learning classifiers, namely the Support Vector Machine (SVM) and Random Forest (RF), on each scenario to determine which one(s) would be more suitable for spring potential mapping. In each scenario, 70% of the dataset was used for training and 30% for testing. The end results (classified maps) for each classifier and their respective datasets were quantitatively assessed based on the Area Under Curve (AUC) metric. The prediction accuracies for the spring potential maps produced for each scenario ranged from 0.693 to 0.736 using the SVM, and 0.608 to 0.895 for RF. Our findings indicate that 46 random points of inventory data did not produce a desirable outcome. On the contrary, more points yield better results, i.e. 450 random points produced the highest ROC when using SVM (0.736), followed by 917 and 690 random points using RF (0.895 and 0.877, respectively).
Kalantar, B, Ueda, N, Al-Najjar, HAH, Moayedi, H, Halin, AA & Mansor, S 2019, 'UAV AND LIDAR IMAGE REGISTRATION: A SURF-BASED APPROACH FOR GROUND CONTROL POINTS SELECTION', The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Geospatial Week, Copernicus GmbH, The Netherlands, pp. 413-418.
View/Download from: Publisher's site
View description>>
Abstract. Multisource remote sensing image data provides synthesized information to support many applications including land cover mapping, urban planning, water resource management, and GIS modelling. Effectively utilizing such images however requires proper image registration, which in turn highly relies on accurate ground control points (GCP) selection. This study evaluates the performance of the interest point descriptor SURF (Speeded-Up Robust Features) for GCPs selection from UAV and LiDAR images. The main motivation for using SURF is due to it being invariant to scaling, blur and illumination, and partially invariant to rotation and view point changes. We also consider features generated by the Sobel and Canny edge detectors as complements to potentially increase the accuracy of feature matching between the UAV and LiDAR images. From our experiments, the red channel (Band-3) produces the most accurate and practical results in terms of registration, while adding the edge features seems to produce lacklustre results.
Kalantar, B, Ueda, N, Lay, US, Al-Najjar, HAH & Halin, AA 2019, 'Conditioning Factors Determination for Landslide Susceptibility Mapping Using Support Vector Machine Learning', IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, IEEE, Yokohama, Japan.
View/Download from: Publisher's site
View description>>
This study investigates the effectiveness of two sets of landslide conditioning variables. Fourteen landslide conditioning variables were considered and duly divided into two sets, G1 and G2. Two Support Vector Machine (SVM) classifiers were constructed on each dataset (SVM-G1 and SVM-G2) to determine which set would be more suitable for landslide susceptibility prediction. In total, 160 landslide inventory datasets of the study area were used, of which 70% was used for SVM training and 30% for testing. The intra-relationships between parameters were explored based on variance inflation factors (VIF), Pearson's correlation and Cohen's kappa analysis. The area under the curve (AUC) was used as a further evaluation metric.
Katic, M, Cetindamar, D, Agarwal, R & Sick, N 2019, 'Operationalising Ambidexterity: The Role of 'Better' Management Practices in High-Variety, Low-Volume Manufacturing', 2019 Portland International Conference on Management of Engineering and Technology (PICMET), 2019 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, Oregon, pp. 1-10.
View/Download from: Publisher's site
Kridalukmana, R, Lu, H & Naderpour, M 2019, 'Component-Based Transparency to Comprehend Intelligent Agent Behaviour for Human-Autonomy Teaming', 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE.
View/Download from: Publisher's site
Laccone, F, Malomo, L, Froli, M, Cignoni, P & Pietroni, N 2019, 'Concept and cable-tensioning optimization of post-tensioned shells made of structural glass', IASS Symposium 2019 - 60th Anniversary Symposium of the International Association for Shell and Spatial Structures; Structural Membranes 2019 - 9th International Conference on Textile Composites and Inflatable Structures, FORM and FORCE, pp. 2188-2195.
View description>>
Shells made of structural glass are charming objects from both the aesthetic and the engineering points of view. However, they pose two significant challenges: the first is to assure adequate safety and redundancy against possible global collapse; the second is to guarantee economical replacement of collapsed components. To address both requirements, this research explores a novel concept in which triangular panels of structural glass are both post-tensioned and reinforced to create 3D free-form systems. The filigree steel truss, made of edge reinforcements, is sized in a performance-based perspective to bear at least the weight of all panels in the occurrence of simultaneous cracks (worst-case scenario). The panels are post-tensioned using a set of edge-aligned cables that add beneficial compressive stress on the surface. The cable placement and pre-loads are optimized to minimize the tensile stress acting on the shell and to match the manufacturing constraints. These shells optimize material usage by providing not only a transparent and fascinating building separation but also load-bearing capabilities. Visual and structural lightness are improved over grid-shell competitors.
Leong, TW, Lawrence, C & Wadley, G 2019, 'Designing for diversity in Aboriginal Australia', Proceedings of the 31st Australian Conference on Human-Computer-Interaction, OZCHI'19: 31ST AUSTRALIAN CONFERENCE ON HUMAN-COMPUTER-INTERACTION, ACM, Fremantle, Australia, pp. 418-422.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Aboriginal Australians have been colonized for over 230 years. As a result, many have been disconnected from their communities and identity. This paper reports on a national-scale HCI project that aims to design technology that allows Aboriginal Australians to reconnect with their communities and to reaffirm their Aboriginal identity. Our project faces significant challenges, some due to the effects of colonization and some due to the great (and underrecognized) diversity of Aboriginal Australia. In this paper, we report the design phase of our project, and discuss some of these challenges we faced. Through this, we offer insights for HCI designers and researchers undertaking similar work.
Li, Q, Zhong, J, Li, Q, Cao, Z & Wang, C 2019, 'Enhancing Network Embedding with Implicit Clustering', Database Systems for Advanced Applications (LNCS), International Conference on Database Systems for Advanced Applications, Springer International Publishing, Chiang Mai, Thailand, pp. 452-467.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. Network embedding aims at learning low-dimensional representations of nodes. These representations can be widely used for network mining tasks, such as link prediction, anomaly detection, and classification. Recently, a great deal of meaningful research work has been carried out on this emerging network analysis paradigm. Real-world networks contain clusters of different sizes because of edges with different relationship types. These clusters also reflect some features of nodes, which can contribute to the optimization of the feature representation of nodes. However, existing network embedding methods do not distinguish these relationship types. In this paper, we propose an unsupervised network representation learning model that can encode edge relationship information. Firstly, an objective function is defined, which can learn the edge vectors by implicit clustering. Then, a biased random walk is designed to generate a series of node sequences, which are put into Skip-Gram to learn the low-dimensional node representations. Extensive experiments are conducted on several network datasets. Compared with the state-of-the-art baselines, the proposed method is able to achieve favorable and stable results in multi-label classification and link prediction tasks.
Li, X, Zhou, L, Fu, A, Yu, S, Su, M & Yang, W 2019, 'Privacy Preserving Fog-Enabled Dynamic Data Aggregation in Mobile Phone Sensing', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
With the development of science and technology, mobile phones have gained unprecedented popularity, and applications based on mobile phone sensing have become widespread. Mobile sensing encourages many users to participate in data collection tasks through their mobile phones, but this process raises privacy issues. Previous studies have either used additive homomorphism to protect data privacy or aggregation methods to provide identity privacy protection. However, little research has been done on data dynamics and on how to improve aggregation efficiency in multi-user scenarios. To solve this problem, we propose a privacy-preserving fog-enabled dynamic data aggregation protocol for mobile phone sensing, named FDDA. In the proposal, we first add fog nodes to our framework, allowing fog nodes to aggregate a set of user data at the same geographical location without identifying the data sources. Then, we design a data dynamics strategy to support users joining and revoking effectively. Finally, we show that our protocol can be well applied in actual scenarios through security analysis and simulation experiments.
Li, Y, Long, G, Shen, T, Zhou, T, Yao, L, Huo, H & Jiang, J 1970, 'Self-Attention Enhanced Selective Gate with Entity-Aware Embedding for Distantly Supervised Relation Extraction'.
View description>>
Distantly supervised relation extraction intrinsically suffers from noisy labels due to the strong assumption of distant supervision. Most prior works adopt a selective attention mechanism over sentences in a bag to denoise from wrongly labeled data, which however could be incompetent when there is only one sentence in a bag. In this paper, we propose a brand-new light-weight neural framework to address the distantly supervised relation extraction problem and alleviate the defects in the previous selective attention framework. Specifically, in the proposed framework, 1) we use an entity-aware word embedding method to integrate both relative position information and head/tail entity embeddings, aiming to highlight the essence of entities for this task; 2) we develop a self-attention mechanism to capture the rich contextual dependencies as a complement for local dependencies captured by piecewise CNN; and 3) instead of using selective attention, we design a pooling-equipped gate, which is based on rich contextual representations, as an aggregator to generate bag-level representation for final relation classification. Compared to selective attention, one major advantage of the proposed gating mechanism is that it performs stably and promisingly even if only one sentence appears in a bag and thus keeps the consistency across all training examples. The experiments on the NYT dataset demonstrate that our approach achieves a new state-of-the-art performance in terms of both AUC and top-n precision metrics.
Li, Z, Liu, W, Chang, X, Yao, L, Prakash, M & Zhang, H 2019, 'Domain-Aware Unsupervised Cross-dataset Person Re-identification', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 15th International Conference on Advanced Data Mining and Applications, Springer International Publishing, Dalian, China, pp. 406-420.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. We focus on the person re-identification (re-id) problem of matching people across non-overlapping camera views. While most existing works rely on an abundance of labeled exemplars, we consider a more difficult unsupervised scenario in which no labeled exemplar is provided. One solution for unsupervised re-id that has attracted much attention in recent research is cross-dataset transfer learning. It utilizes knowledge from multiple source datasets from different domains to enhance unsupervised learning performance on the target domain. Previous works devote much effort to extracting generic and robust common appearance representations across domains. However, we observe that there are also appearances particular to each domain. Simply ignoring these domain-unique appearances will mislead the matching scheme in re-id applications. Few unsupervised cross-dataset algorithms have been proposed to learn the common appearances across multiple domains, and even fewer of them consider the domain-unique representations. In this paper, we propose a novel domain-aware representation learning algorithm for the unsupervised cross-dataset person re-id problem. The proposed algorithm not only learns common appearances across datasets but also captures the domain-unique appearances on the target dataset via minimization of the overlapped signal supports across different domains. Extensive experimental studies on benchmark datasets show the superior performance of our algorithm over state-of-the-art algorithms. Sample analysis on selected samples also verifies the diversity-learning ability of our algorithm.
Li, Z, Yao, L, Zhang, X, Wang, X, Kanhere, S & Zhang, H 2019, 'Zero-Shot Object Detection with Textual Descriptions', Proceedings of the AAAI Conference on Artificial Intelligence, Conference on Artificial Intelligence / Innovative Applications of Artificial Intelligence Conference / AAAI Symposium on Educational Advances in Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), Honolulu, HI, pp. 8690-8697.
View/Download from: Publisher's site
View description>>
Object detection is important in real-world applications. Existing methods mainly focus on object detection with sufficient labelled training data or zero-shot object detection with only concept names. In this paper, we address the challenging problem of zero-shot object detection with natural language description, which aims to simultaneously detect and recognize novel concept instances with textual descriptions. We propose a novel deep learning framework to jointly learn visual units, visual-unit attention and word-level attention, which are combined to achieve word-proposal affinity by an element-wise multiplication. To the best of our knowledge, this is the first work on zero-shot object detection with textual descriptions. Since there is no directly related work in the literature, we investigate plausible solutions based on existing zero-shot object detection for a fair comparison. We conduct extensive experiments on three challenging benchmark datasets. The extensive experimental results confirm the superiority of the proposed model.
Lin, A, Lu, J, Xuan, J, Zhu, F & Zhang, G 2019, 'One-Stage Deep Instrumental Variable Method for Causal Inference from Observational Data', 2019 IEEE International Conference on Data Mining (ICDM), 2019 IEEE International Conference on Data Mining (ICDM), IEEE, Beijing, China, pp. 419-428.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Causal inference from observational data aims to estimate causal effects when controlled experimentation is not feasible, but it faces challenges when unobserved confounders exist. The instrumental variable method resolves this problem by introducing a variable that is correlated with the treatment and affects the outcome only through the treatment. However, existing instrumental variable methods require two stages to separately estimate the conditional treatment distribution and the outcome-generating function, which is not sufficiently effective. This paper presents a one-stage approach to jointly estimate the treatment distribution and the outcome-generating function through a cleverly designed deep neural network structure. This study is the first to merge the two stages, so that the outcome informs the estimation of the treatment distribution. Further, the new deep neural network architecture is designed with two strategies (i.e., shared and separate) for learning a confounder representation to account for different observational data. Such a network architecture can unveil complex relationships between confounders, treatments, and outcomes. Experimental results show that our proposed method outperforms the state-of-the-art methods. It has a wide range of applications, from medical treatment design to policy making, population regulation and beyond.
Lin, J, Sun, G, Shen, J, Cui, T, Yu, P, Xu, D, Li, L & Beydoun, G 2019, 'Towards the Readiness of Learning Analytics Data for Micro Learning.', SCC, International Conference on Services Computing, Springer, San Diego, CA, pp. 66-76.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. With the development of data mining and machine learning techniques, data-driven technology-enhanced learning (TEL) has drawn wider attention. Researchers aim to use established or novel computational methods to solve educational problems in the ‘big data’ era. However, the readiness of data appears to be the bottleneck of TEL development, and very little research focuses on investigating data scarcity and inappropriateness in TEL research. This paper investigates an emerging research topic in the TEL domain, namely micro learning. Micro learning consists of various technical themes that have been widely studied in the TEL research field. In this paper, we first propose a micro learning system, which includes recommendation, segmentation, annotation, and several learning-related prediction and analysis modules. For each module of the system, this paper reviews representative literature and discusses the data sources used in these studies to pinpoint their current problems and shortcomings, which might be obstacles to more effective research outcomes. Accordingly, the data requirements and challenges for learning analytics in micro learning are also investigated. From a research contribution perspective, this paper serves as a basis to depict and understand the current status of the readiness of data sources for micro learning research.
Liu, B, Xiong, J, Wu, Y, Ding, M & Wu, CM 2019, 'Protecting Multimedia Privacy from Both Humans and AI', 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Jeju, Korea (South).
View/Download from: Publisher's site
View description>>
© 2019 IEEE. With the development of artificial intelligence (AI), multimedia privacy issues have become more challenging than ever. AI-assisted malicious entities can steal private information from multimedia data more easily than humans can. Traditional multimedia privacy protection only considers the situation in which humans are the adversaries, and is therefore ineffective against AI-assisted attackers. In this paper, we develop a new framework and new algorithms that can protect image privacy from both humans and AI. We combine the idea of adversarial image perturbation, which is effective against AI, with obfuscation techniques for human adversaries. Experiments show that our proposed methods work well for all types of attackers.
Liu, B, Zhu, T, Zhou, W, Wang, K, Zhou, H & Ding, M 2019, 'Protecting Privacy-Sensitive Locations in Trajectories with Correlated Positions', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, Waikoloa, HI, USA.
View/Download from: Publisher's site
View description>>
The location privacy issue has become a critical research topic recently. Existing solutions do not address one problem that is typical in practice: people may only want to protect certain privacy-sensitive locations among a group of temporally and spatially correlated points in a trajectory. As an effort to address this issue, we analyze the impact of the space-time relationship on location privacy preservation. In addition, we propose new privacy definitions to better evaluate the privacy level, and prove that the target location's privacy can be enhanced by randomizing its time- and space-related points. Moreover, under the constraint of the total noise power, the problem of obfuscating a location in a temporally and spatially correlated trajectory is formulated as finding the noise allocation vector that achieves the highest privacy level. This problem is solved by our proposed location privacy preserving method, which applies a differential privacy scheme to a series of points with noise budget allocation. Lastly, the performance of the proposed scheme is evaluated by simulations.
Liu, DYT, Atif, A, Froissard, JC & Richards, D 2015, 'An enhanced learning analytics plugin for Moodle: Student engagement and personalised intervention', ASCILITE 2015 - Australasian Society for Computers in Learning and Tertiary Education, Conference Proceedings, ASCILITE, ASCILITE, Perth, Australia, pp. 180-189.
View description>>
Moodle, an open source Learning Management System (LMS), collects a large amount of data on student interactions within it, including content, assessments, and communication. Some of these data can be used as proxy indicators of student engagement, as well as predictors for performance. However, these data are difficult to interrogate and even more difficult to action from within Moodle. We therefore describe a design-based research narrative to develop an enhanced version of an open source Moodle Engagement Analytics Plugin (MEAP). Working with the needs of unit convenors and student support staff, we sought to improve the available information, the way it is represented, and create affordances for action based on this. The enhanced MEAP (MEAP+) allows analyses of gradebook data, assessment submissions, login metrics, and forum interactions, as well as direct action through personalised emails to students based on these analyses.
Liu, F, Zhang, G & Lu, J 2019, 'A Novel Fuzzy Neural Network for Unsupervised Domain Adaptation in Heterogeneous Scenarios', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, New Orleans, LA, USA.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. How to leverage knowledge from a labelled domain (source) to help classify an unlabelled domain (target) is a key problem in the machine learning field. Unsupervised domain adaptation (UDA) provides a solution to this problem and has been well developed for two homogeneous domains. However, when the target domain is unlabelled and heterogeneous with respect to the source domain, current UDA models cannot accurately transfer knowledge from the source domain to the target domain. Benefiting from the development of neural networks, this paper presents a new neural network, the shared fuzzy equivalence relations neural network (SFER-NN), to address the heterogeneous UDA (HeUDA) problem. SFER-NN transfers knowledge across two domains according to shared fuzzy equivalence relations that can simultaneously cluster the features of both domains into several categories. Based on the clustered categories, SFER-NN is constructed to minimize the discrepancy between the two domains. Compared to previous works, SFER-NN is more capable of minimizing this discrepancy and, as a result, delivers better performance than previous studies on two public datasets.
Liu, Y, Yan, Y, Chen, L, Han, Y & Yang, Y 2019, 'Adaptive sparse confidence-weighted learning for online feature selection', 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, 33rd AAAI Conference on Artificial Intelligence / 31st Innovative Applications of Artificial Intelligence Conference / 9th AAAI Symposium on Educational Advances in Artificial Intelligence, ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, Honolulu, HI, pp. 4408-4415.
View description>>
In this paper, we propose a new online feature selection algorithm for streaming data. We focus on the following two problems, which remain unaddressed in the literature. First, most existing online feature selection algorithms merely utilize the first-order information of the data streams, despite the fact that second-order information explores the correlations between features and significantly improves performance. Second, most online feature selection algorithms are based on the balanced-data presumption, which does not hold in many real-world applications. For example, in fraud detection, the number of positive examples is much smaller than that of negative examples because most cases are not fraud. The balanced assumption will make the selected features biased towards the majority class and fail to detect the fraud cases. We propose an Adaptive Sparse Confidence-Weighted (ASCW) algorithm to solve the aforementioned two problems. We first introduce an ℓ0-norm constraint into second-order confidence-weighted (CW) learning for feature selection. Then the original loss is substituted with a cost-sensitive loss function to address the imbalanced-data issue. Furthermore, our algorithm maintains multiple sparse CW learners with corresponding cost vectors to dynamically select an optimal cost. We enhance the theory of sparse CW learning and analyze the performance behavior in terms of F-measure. Empirical studies show superior performance over the state-of-the-art online learning methods in the online-batch setting.
Lu, S, Oberst, S, Zhang, G & Luo, Z 2019, 'Period adding bifurcations in dynamic pricing processes', 2019 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), 2019 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr), IEEE, Shenzhen, China, pp. 71-76.
View/Download from: Publisher's site
View description>>
Price information enables consumers to anticipate prices and to make purchasing decisions based on their price expectations, which are critical for agents with pricing decisions or price regulations. A company with pricing decisions can aim to optimise short-term or long-term revenue, each of which leads to different pricing strategies and thereby different price expectations. Two key ingredients play important roles in choosing between short-term and long-term optimisation objectives: the maximal revenue and the robustness of the chosen pricing strategy against market volatility. However, the robustness is rarely identified in a volatile market. Here, we investigate the robustness of optimal pricing strategies with short-term or long-term optimisation objectives through the analysis of the nonlinear dynamics of price expectations. Bifurcation diagrams and period diagrams are introduced to compare the change in dynamics of the optimal pricing strategies. Our results highlight that period adding bifurcations occur during the dynamic pricing processes studied. These bifurcations would challenge the robustness of an optimal pricing strategy. The consideration of long-term revenue allows a company to charge a higher price, which in turn increases revenue. However, the consideration of short-term revenue can reduce the occurrence of period adding bifurcations, contributing to a robust pricing strategy. For a company, this strategy is a robust guarantee of optimal revenue in a volatile market; for consumers, it avoids rapid changes in price and reduces their dissatisfaction with price variations.
Madhisetty, S & Williams, M-A 2019, 'Managing Privacy Through Key Performance Indicators When Photos and Videos Are Shared via Social Media', Advances in Intelligent Systems and Computing, Science and Information Conference, Springer International Publishing, London, United Kingdom, pp. 1103-1117.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. There are many definitions of privacy, and what is considered sensitive varies from individual to individual. When a document is shared it may reveal certain information, and the exchange of information is grounded in a specific context. This contextual grounding may not be afforded when photos and videos are shared, because they may contain rich semantic and syntactic information coded as tacit knowledge. Identifying sensitive information in a photo or a video is a major problem; therefore, rather than making assumptions about what is sensitive in a photo or a video, this research asked a group of study participants why they share content and what their concerns are (if any). This enabled inferences to be made about categories of sensitivity in accordance with the participants’ responses. Interview data was gathered and Grounded Theory was applied. The following themes emerged from the data: a major theme in which no privacy concerns were developed, three sub-themes in which varying levels of privacy concerns were developed, and key performance indicators that manage levels of privacy. This paper focuses on the main themes’ key performance indicators and how they can manage privacy when photos and videos are shared over social media.
Madhisetty, S & Williams, M-A 2019, 'The Role of Trust and Control in Managing Privacy When Photos and Videos Are Stored or Shared', Advances in Intelligent Systems and Computing, Future Technologies Conference, Springer International Publishing, Vancouver, BC (Canada), pp. 127-140.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. A photo or a video could contain sensitive information coded as tacit information, which makes it difficult to gauge the loss of privacy if such a photo or video were shared. Social media applications like Facebook, Twitter, WhatsApp and many others are becoming popular. The instant sharing of information via photos and videos is making it more difficult to manage issues which arise out of loss of privacy. Many users of social media trust that their content will not be misused for purposes other than those originally intended. This paper discusses not only how much of that trust is real and how much of it is forced, but also demonstrates the reasoning behind forced trust. These inferences were made after data collection via interviews and data analysis using Grounded Theory.
Madhisetty, S, Williams, M-A, Massy-Greene, J, Franco, L & El Khoury, M 2019, 'How to Manage Privacy in Photos after Publication', Proceedings of the 21st International Conference on Enterprise Information Systems, 21st International Conference on Enterprise Information Systems, SCITEPRESS - Science and Technology Publications, Greece, pp. 162-168.
View/Download from: Publisher's site
View description>>
Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. Photos and videos, once published, may stay available for people to view unless they are deleted by the publisher of the photograph. If the content is downloaded and re-uploaded by others, it loses all the privacy settings once afforded by the publisher of the photograph or video via social media settings. This means that the content could be modified or, in some cases, misused by others. Photos also contain tacit information, which cannot be completely interpreted at the time of their publication. Sensitive information may be revealed to others because the information is coded as tacit information. Tacit information allows different interpretations and creates difficulty in understanding loss of privacy. The free flow and availability of tacit information embedded in a photograph could cause serious privacy problems. Our solution discussed in this paper illuminates the difficulty of managing privacy due to the tacit information embedded in a photo. It also provides an offline solution for the photograph such that it cannot be modified or altered and gets automatically deleted over a period of time: we extend the Exif data of a photograph to incorporate an in-built automatic deletion feature, and control access to the image by scrambling it via an added hash value. Only a customized application can unscramble the image, thereby making it available. This is intended to provide a novel offline solution for managing the availability of an image after publication.
Mahdavi, F, Hayati, H, Kennedy, P & Eager, D 1970, 'Ageing and resulting injuries – effects on racing greyhounds', European Society of Biomechanics, European Society of Biomechanics, Vienna, Austria.
Mahdavi, F, Hayati, H, Kennedy, P & Eager, D 1970, 'Effects of the number of starts on greyhound racing dynamics', International Society of Biomechanics Conference, International Society of Biomechanics Conference, Calgary, Canada.
Mao, K, Niu, J, Liu, X, Yu, S & Zhao, L 2019, 'Word2Cluster: A New Multi-Label Text Clustering Algorithm with an Adaptive Clusters Number', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
Text clustering has been widely used in many Natural Language Processing (NLP) applications such as text summarization and news recommendation. However, most current algorithms need a predefined number of clusters, which is difficult to obtain. Moreover, multi-label clustering is useful for multiple clustering tasks in many applications, but related works are rarely available. Although several studies have attempted to solve the above two problems, there is a need for methods that can solve both issues simultaneously. Therefore, we propose a new text clustering algorithm called Word2Cluster. Word2Cluster can automatically generate an adaptive number of clusters and support multi-label clustering. To test the performance of Word2Cluster, we build a Chinese text dataset, Hotline, according to real-world applications. To evaluate the clustering results better, we propose an improved evaluation method based on basic accuracy, precision and recall for multi-label text clustering. Experimental results on a Chinese text dataset (Hotline) and a public English text dataset (Reuters) demonstrate that our algorithm achieves a better F1-measure and runs faster than the state-of-the-art baselines.
McGregor, C & Majola, PX 2019, 'Opportunities for a Cloud Based Health Analytics as a Service for Eastern Cape Initiation Schools in South Africa', 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Cordoba, Spain, pp. 531-534.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Traditional male circumcision in contemporary South Africa has become the focus of the government and media due to the large number of initiates severely injured or dying during the initiation period, which happens twice a year. Deaths and penile amputations are a feature of every circumcision season, as a result of sepsis, gangrene and dehydration amongst other conditions. This paper proposes a cloud-based Health Analytics as a Service for Eastern Cape initiation schools in South Africa to assist in saving lives and preserving the customs. The proposed Artemis platform will assist in acquiring physiological data of initiates before and during initiation to provide early insights into many conditions that can develop during initiation. A big data analytics clinical decision support system such as Artemis provides real-time online analytics with a knowledge extraction component that supports data mining and enables clinical research on various conditions. Conversely, Artemis presents challenges for lower-resource settings, which will be explored in this paper.
Meena, MS, Singh, P, Rana, A, Mery, D & Prasad, M 2019, 'A Robust Face Recognition System for One Sample Problem', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Rim Symposium on Image and Video Technology, Springer International Publishing, Sydney, NSW, Australia, pp. 13-26.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. Most practical applications, such as passports, driving licences and photo IDs, have a limited number of image samples of individuals for face verification and recognition. The use of a computer system therefore becomes a challenging task when the image samples available per person for training and testing are limited. We propose a robust face recognition system based on Tetrolet, Local Directional Pattern (LDP) and Cat Swarm Optimization (CSO) to solve this problem. Initially, the input image is pre-processed to extract the region of interest using a filtering method. This image is then given to the proposed descriptor, namely Tetrolet-LDP, to extract the features of the image. The features are subjected to classification using the proposed classification module, called Cat Swarm Optimization based 2-Dimensional Hidden Markov Model (CSO-based 2D-HMM), in which the CSO trains the 2D-HMM. The performance is analyzed using metrics such as accuracy, False Rejection Rate (FRR) and False Acceptance Rate (FAR). The system achieves a high accuracy of 99.65%, with low FRR and FAR of 0.0033 and 0.003 under training percentage variation, and 99.65%, 0.0035 and 0.004 under k-fold validation.
Melnikov, A, Chiang, YK, Oberst, S, Quan, L, Alu, A, Marburg, S & Powell, D 2019, 'Experimental validation of maximal Willis coupling in an acoustic meta-atom', 13th International Congress on Artificial Materials for Novel Wave Phenomena – Metamaterials 2019, International Congress on Artificial Materials for Novel Wave Phenomena, Rome, pp. 1-2.
View description>>
Willis coupling is the acoustic analog of bianisotropy, representing coupling between the monopolar and dipolar degrees of freedom. It has recently been theoretically demonstrated that there is an upper bound on the strength of this coupling, imposed by the conservation of energy. Here we present a scalable meta-atom design, and experimentally demonstrate that it approaches the theoretical limit for Willis coupling.
Meng, L, Lin, C-T, Jung, T-P & Wu, D 2019, 'Neural Information Processing', Neural Information Processing 26th International Conference, ICONIP 2019 Sydney, NSW, Australia, December 12–15, 2019 Proceedings, Part I, International Conference on Neural Information Processing of the Asia-Pacific Neural Network Society, Springer International Publishing, Australia, pp. 476-490.
View/Download from: Publisher's site
View description>>
Machine learning has achieved great success in many applications, including electroencephalogram (EEG) based brain-computer interfaces (BCIs). Unfortunately, many machine learning models are vulnerable to adversarial examples, which are crafted by adding deliberately designed perturbations to the original inputs. Many adversarial attack approaches for classification problems have been proposed, but few have considered target adversarial attacks for regression problems. This paper proposes two such approaches. More specifically, we consider white-box target attacks for regression problems, where we know all information about the regression model to be attacked, and want to design small perturbations to change the regression output by a pre-determined amount. Experiments on two BCI regression problems verified that both approaches are effective. Moreover, adversarial examples generated from both approaches are also transferable, which means that we can use adversarial examples generated from one known regression model to attack an unknown regression model, i.e., to perform black-box attacks. To our knowledge, this is the first study on adversarial attacks for EEG-based BCI regression problems, which calls for more attention on the security of BCI systems.
Mirtalaie, MA, Hussain, OK, Chang, E & Hussain, FK 2019, 'A Fine-Grained Ontology-Based Sentiment Aggregation Approach', Advances in Intelligent Systems and Computing, International Conference on Complex, Intelligent, and Software Intensive Systems, Springer International Publishing, Japan, pp. 252-262.
View/Download from: Publisher's site
View description>>
© 2019, Springer International Publishing AG, part of Springer Nature. Sentiment analysis techniques are widely used to capture the voice of customers about different products/services. Aspect- or feature-based sentiment detection tools, one type of sentiment analysis, are developed to find customers’ opinions about various features of a product. However, as a product may contain many features, presenting the final results to users is a challenge. Even though this issue is addressed in the literature by developing different sentiment aggregation methods, their results are mostly presented at the basic-level features of a product. This may cause customers’ opinions about minor sub-features to be lost. Since the performance of a basic feature depends on that of its different sub-features, we propose an approach which aggregates the extracted results at the fine-grained feature level using a product ontology tree. We interpret the polarity of each feature as a satisfaction score, which can help managers investigate the weaknesses of their products, even at minor levels, in a more informed way.
Naji, M, Al-Ani, A, Braytee, A, Anaissi, A & Kennedy, P 2019, 'Queue Formation Augmented with Particle Swarm Optimisation to Improve Waiting Time in Airport Security Screening', Advances in Intelligent Systems and Computing, Workshops of the 33rd International Conference on Advanced Information Networking and Applications, Springer International Publishing, Japan, pp. 923-935.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. Airport security screening processes are essential to ensure the safety of both passengers and the aviation industry. Security at airports has improved noticeably in recent years through the utilisation of state-of-the-art technologies and highly trained security officers. However, maintaining a high level of security can be costly to operate and implement. It may also lead to delays for passengers and airlines. This paper proposes a novel queue formation method based on a queueing theory model augmented with a particle swarm optimisation method, known as QQT-PSO, to improve the average waiting time in airport security areas. Extensive experiments were conducted using real-world datasets collected from Sydney airport. Compared to the existing one-queue formation, our method significantly reduces the average waiting time and lowers the operating cost by 11.89%.
Nalamati, M, Kapoor, A, Saqib, M, Sharma, N & Blumenstein, M 2019, 'Drone Detection in Long-Range Surveillance Videos', 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Taipei, Taiwan.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The usage of small drones/UAVs has increased significantly in recent years. Consequently, there is a rising potential for small drones to be misused for illegal activities such as terrorism and the smuggling of drugs, posing high security risks. Hence, tracking and surveillance of drones are essential to prevent security breaches. The similarity in appearance between small drones and birds in complex backgrounds makes it challenging to detect drones in surveillance videos. This paper addresses the challenge of detecting small drones in surveillance videos using popular and advanced deep learning-based object detection methods. Different CNN-based architectures, such as ResNet-101 and Inception with Faster R-CNN, as well as the Single Shot Detector (SSD) model, were used for the experiments. Due to the sparse data available, pre-trained models were used while training the CNNs via transfer learning. The best results were obtained from experiments using Faster R-CNN with the base architecture of ResNet-101. An experimental analysis of the different CNN architectures is presented in the paper, along with a visual analysis of the test dataset.
Naseem, U & Musial, K 2019, 'DICE: Deep Intelligent Contextual Embedding for Twitter Sentiment Analysis', 2019 International Conference on Document Analysis and Recognition (ICDAR), 2019 International Conference on Document Analysis and Recognition (ICDAR), IEEE, Sydney, Australia, pp. 953-958.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Sentiment analysis of social media-based short text (e.g., Twitter messages) is very valuable for many good reasons, and is explored increasingly in different communities such as text analysis, social media analysis, and recommendation. However, it is challenging, as tweet-like social media text is often short, informal and noisy, and involves language ambiguity such as polysemy. Existing sentiment analysis approaches are mainly designed for documents and clean textual data. Accordingly, we propose a Deep Intelligent Contextual Embedding (DICE), which enhances tweet quality by handling noise within contexts, and then integrates four embeddings to capture polysemy in context, semantics, syntax, and sentiment knowledge of words in a tweet. DICE is then fed to a Bi-directional Long Short Term Memory (BiLSTM) network with attention to determine the sentiment of a tweet. The experimental results show that our model outperforms several baselines of both classic classifiers and combinations of various word embedding models in the sentiment analysis of airline-related tweets.
Nosouhi, MR, Yu, S, Grobler, M, Zhu, Q & Xiang, Y 2019, 'Blockchain–Based Location Proof Generation and Verification', IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE.
View/Download from: Publisher's site
View description>>
In location-sensitive applications, service providers need to verify the location of users in order to provide them with access to a service or benefit. This gives dishonest users an incentive to cheat on their location by submitting fake location claims. To address this issue, a number of location proof mechanisms have been proposed in the literature to date. However, they face different security and privacy challenges. In this paper, we utilize the unique features of blockchain technology to design a decentralized architecture in which mobile users act as witnesses and generate location proofs for other users. In the proposed scheme, a location proof is issued as part of a transaction that is broadcast into a peer-to-peer network, where it can be picked up by verifiers for further verification. Once a transaction is successfully verified, it is stored in a public ledger. Our security and privacy analysis shows that the proposed scheme preserves users' privacy and achieves reliable performance against Prover-Prover and Prover-Witness collusions. Moreover, our prototype implementation on the Android platform shows that the location proof generation process in the proposed scheme is faster than current decentralized schemes and requires low computational resources.
Nosouhi, MR, Yu, S, Sood, K & Grobler, M 2019, 'HSDC–Net: Secure Anonymous Messaging in Online Social Networks', 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), IEEE, Rotorua, New Zealand, pp. 350-357.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Hiding the contents of users' messages has been successfully addressed before, while the anonymization of message senders remains a challenge, since users do not usually trust ISPs and messaging application providers. To resolve this challenge, several solutions have been proposed so far. Among them, the Dining Cryptographers network protocol (DC-net) provides the strongest anonymity guarantees. However, DC-net suffers from two critical issues that make it impractical, i.e., (1) collision possibility and (2) vulnerability against disruptions. Apart from that, we noticed a third critical issue during our investigation: (3) DC-net users can be deanonymized after they publish at least three messages. We name this problem the short stability issue and prove that anonymity is provided only for a few cycles of message publishing. As far as we know, this problem has not been identified in previous research. In this paper, we propose Harmonized and Stable DC-net (HSDC-net), a self-organizing protocol for anonymous communications. In our protocol design, we first resolve the short stability issue and obtain SDC-net, a stable extension of DC-net. Then, we integrate the Slot Reservation and Disruption Management sub-protocols into SDC-net to overcome the collision and security issues, respectively. The resulting HSDC-net protocol can also be integrated into blockchain-based cryptocurrencies (e.g. Bitcoin) to mix multiple transactions (belonging to different users) into a single transaction in such a way that the source of each payment is unknown. This preserves the privacy of blockchain users. Our prototype implementation shows that HSDC-net achieves low latencies, which makes it a practical protocol.
Ona, ED, Cuesta-Gomez, A, Garcia, JA, Raffe, W, Sanchez-Herrera, P, Cano-de-la-Cuerda, R & Jardon, A 2019, 'Evaluating A VR-based Box and Blocks Test for Automatic Assessment of Manual Dexterity: A Preliminary Study in Parkinson’s Disease', 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH), 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, Kyoto, Japan.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Opportunities for using Virtual Reality (VR) technology to automate clinical procedures in general, and to assess motor function in particular, have not been fully explored in Parkinson's disease (PD). For that purpose, a game-like version of the Box and Blocks Test (BBT) for automatic assessment of hand motor function in VR was built. The system uses the Leap Motion Controller (LMC) for hand tracking and the Oculus Rift for a fully immersive experience. In this paper, we focus on evaluating the capability of our VR-BBT to reliably measure manual dexterity in a sample of PD patients. For this study, a group of nine individuals in the mild to moderate stage of PD was recruited. Participants were asked to perform both the physical BBT and the VR-BBT. A correlation analysis of the collected data was carried out, comparing the BBT and VR-BBT assessments. The test-retest reliability was also explored for the scores gathered with the virtual tool. Statistical analysis showed that the performance data collected by the game-like system correlated with the validated measures of the physical BBT, with strong test-retest reliability. This suggests that the virtual version of the BBT could be used as a valid and reliable indicator of health improvements.
Park, M, Yang, W, Cao, Z, Kang, B, Connor, D & Lea, M-A 2019, 'Marine Vertebrate Predator Detection and Recognition in Underwater Videos by Region Convolutional Neural Network', Knowledge Management and Acquisition for Intelligent Systems, Pacific Rim Knowledge Acquisition Workshop, Springer International Publishing, Cuvu, Fiji, pp. 66-80.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. In this paper, we present R-CNN, Fast R-CNN and Faster R-CNN methods to automatically detect and recognise predators in underwater videos. We compare the results of these methods on real data and discuss their strengths and weaknesses. We build a dataset using footage captured in a representative wild environment and devise a data model with three classes (seal, dolphin, background). Following this, we train R-CNN, Fast R-CNN and Faster R-CNN, then evaluate them on a test dataset composed of challenging objects that had not been seen during training. We perform evaluation on a GPU, acquiring the AP and IOU for each model and network based on various proposal numbers, as well as runtime speeds. Based on the results, we found that the best-performing visual deep learning model for predator detection is Faster R-CNN with 2000 proposals.
Pelchen, T & Lister, R 2019, 'On the Frequency of Words Used in Answers to Explain in Plain English Questions by Novice Programmers', Proceedings of the Twenty-First Australasian Computing Education Conference, ACE'19: Twenty-First Australasian Computing Education Conference, ACM, Sydney NSW Australia, pp. 11-20.
View/Download from: Publisher's site
View description>>
Most previous research studies using Explain in Plain English questions have focussed on categorising the answers of novice programmers according to the SOLO taxonomy, and/or the relationship between explaining code and writing code. In this paper, we study the words used in the explanations of novice programmers. Our data is from twelve Explain in Plain English questions presented to over three hundred students in an exam at the end of the students' first semester of programming. For each question, we compare the frequency of certain words used in correct answers between students who scored a perfect twelve on all the Explain in Plain English questions and students with lower scores. We report a number of statistically significant differences in word frequency between the students who answered all questions correctly and students who did not. The students who answered all twelve questions correctly tended to be more precise, more comprehensive, and more likely to choose words not explicitly in the code, but instead words that are an abstraction beyond the code.
Pfeiffer, S, Ebrahimian, D, Herse, S, Le, TN, Leong, S, Lu, B, Powell, K, Raza, SA, Sang, T, Sawant, I, Tonkin, M, Vinaviles, C, Vu, TD, Yang, Q, Billingsley, R, Clark, J, Johnston, B, Madhisetty, S, McLaren, N, Peppas, P, Vitale, J & Williams, M-A 2019, 'UTS Unleashed! RoboCup@Home SSPL Champions 2019', RoboCup 2019: Robot World Cup XXIII, Robot World Cup, Springer International Publishing, Sydney, NSW, Australia, pp. 603-615.
View/Download from: Publisher's site
View description>>
This paper summarizes the approaches employed by Team UTS Unleashed! to take first place in the 2019 RoboCup@Home Social Standard Platform League. First, our system architecture is introduced. Next, we present our approach to the basic skills needed for a strong performance in the competition, and describe several implementations for participation in the tests. Finally, our software development methodology is discussed.
Pileggi, SF, Peña, FC, Villamil, M-D-P & Beydoun, G 2019, 'Analysing the Trade-Off Between Computational Performance and Representation Richness in Ontology-Based Systems.', ICCS (5), International Conference on Computational Science, Springer, Faro, Portugal, pp. 237-250.
View/Download from: Publisher's site
View description>>
As the result of the intense research activity of the past decade, Semantic Web technology has achieved a notable popularity and maturity. This technology is leading the evolution of the Web via interoperability by providing structured metadata. Because of the adoption of rich data models on a large scale to support the representation of complex relationships among concepts and automatic reasoning, the computational performance of ontology-based systems can significantly vary. In the evaluation of such a performance, a number of critical factors should be considered. Within this paper, we provide an empirical framework that yields an extensive analysis of the computational performance of ontology-based systems. The analysis can be seen as a decision tool in managing the constraints of representational requirements versus reasoning performance. Our approach adopts synthetic ontologies characterised by an increasing level of complexity up to OWL 2 DL. The benefits and the limitations of this approach are discussed in the paper.
Pokhrel, SR, Sood, K, Yu, S & Nosouhi, MR 2019, 'Policy-based Bigdata Security and QoS Framework for SDN/IoT: An Analytic Approach', IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, Paris, France, pp. 73-78.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. With the explosive growth of the Internet of Things (IoT) using WiFi networks, along with their huge data flows (especially Bigdata using TCP connections), the significant challenges are application performance and network security. Bigdata comes in the form of varying volume, velocity, etc. and is very challenging to manage with traditional networks. Therefore, we advocate the Software-Defined Networking (SDN) paradigm in this paper. Using SDN, firstly, from a security perspective, we are able to diagnose Bigdata TCP streams that may come from either attack or non-attack sources. Secondly, when the Bigdata TCP streams come from legitimate sources, SDN can help in maintaining Quality of Service (QoS) for a particular flow or application. In this paper, we propose a policy-based framework that maintains both security and flow-specific QoS requirements in an SDN-enabled IoT network. In our network settings, we propose an algorithm at the WiFi Access Point (AP) or at the network edge router to learn the incoming traffic from different Things and then take appropriate actions based on the policies in place. A mathematical model is developed considering TCP CUBIC streams over WiFi networks to understand and evaluate our idea. Our extensive simulation results demonstrate how we jointly enhance security and effectively maintain the desired QoS of the streams in real time.
Prior, J, Laudari, S & Leaney, J 2019, 'What is the Effect of a Software Studio Experience on a Student's Employability?', Proceedings of the Twenty-First Australasian Computing Education Conference, ACE'19: Twenty-First Australasian Computing Education Conference, ACM, Australia, pp. 28-36.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Our software studio demonstrably increases students' employability, according to the empirical findings of this study and an evaluation of those findings against the CareerEDGE Employability Development Profile. We provide a studio environment in which students work in mixed teams on real software projects for clients, under the guidance of industry and academic mentors. This study used open-ended interviews and ethnographic observations in the studio sessions to understand employability success. Skills found important for employability include: collaboration and communication, project management, supporting each other to resolve technical issues, seeking help from industry mentors and academics, social aspects of work (working with clients and mentors), reflection skills and technical skills. These skills were compared with the CareerEDGE Employability Development Profile and found to give good coverage of employability skills. The contributions made by this study to computing education are: • a deep empirical understanding of students' perspectives and what they value about their employability as a result of participating in the software studio; • an evaluation of our findings against the CareerEDGE employability framework, in a technical learning environment; • findings from an investigation that is complementary to students' perspectives collected in accordance with the CareerEDGE approach, where the data is collected via a questionnaire with 5-point Likert-scale responses; our interviews were open-ended and accompanied by ethnographic observations.
Prysyazhnyuk, A, McGregor, C, Chernikova, A & Rusanov, V 1970, 'A sliding window real-time processing approach for analysis of heart rate variability during spaceflight', Proceedings of the International Astronautical Congress, IAC.
View description>>
The paradigm of technological disruption continues to pave the way for innovative technology that has the capacity to acquire comprehensive real-time physiological and environmental data, presenting endless opportunities to study physiological processes and mechanisms, aid clinical discovery and advance the field of preventative and corrective medicine both on Earth and during spaceflight. Missions of increased distance and duration, as well as ad-hoc emergency situations that require the space crew to remain in space for long periods of time with a reduced number of team members, necessitate the deployment of comprehensive clinical decision support systems aboard the space station to preserve and maintain the well-being of the crew, and to ensure successful execution of mission objectives and a safe return to Earth. In prior work, we presented the use of Artemis, a big-data analytics platform, for real-time analysis of adaptation to the conditions of spaceflight, to assess the levels of stress imposed on the human body and identify the state of well-being and any deviation from the norm that becomes apparent prior to the onset of clinical symptoms. Conventional methods of adaptation assessment were limited to 5-minute windows of data, which were historically averaged to a single hourly and daily value. The capability of Artemis to support analysis of high-frequency, high-volume and high-velocity data presents new opportunities for the analysis of heart rate variability during spaceflight. As such, we propose the use of a 5-minute sliding-window-based analysis of heart rate variability for the assessment of adaptation during spaceflight. This method would support investigation of stressor-induced responses (i.e. physical load, task activity, environmental) to help identify the exact onset of the highest strain on regulatory mechanisms and assess the activity of various components of the autonomic nervous system. In addition, 5-minute sliding window analysis would provide more insight into recov...
Pugalia, S & Cetindamar Kozanoglu, D 1970, 'Tearing down the double glass ceiling for the women immigrant entrepreneurs in high-tech industry', Cairns, Australia, pp. 1-14.
View description>>
Although the number of women-owned firms is growing, a gap remains in the technology sector. The purpose of the present study is to explore the barriers faced by women entrepreneurs due to their immigrant and ethnicity status. The paper presents a literature review in order to shed light on the possible causes of the lower number of women immigrant entrepreneurs, particularly in high-tech sectors. Given the human, financial and network disadvantages faced by women vis-a-vis men, immigrant status escalates the barriers further and creates an additional layer of 'glass ceiling' to pass for women who want to start a technology-based venture. In other words, immigrant women face a set of invisible barriers to advancement in their entrepreneurial careers in high-technology sectors. The paper points out the existence of these barriers and calls for researchers to find ways to tear down these glass ceilings in order to empower women and support their contribution to society, as the UN 2030 Agenda for Sustainable Development argues.
Qu, Y, Yu, S, Zhang, J, Binh, HTT, Gao, L & Zhou, W 2019, 'GAN-DP: Generative Adversarial Net Driven Differentially Privacy-Preserving Big Data Publishing', ICC 2019 - 2019 IEEE International Conference on Communications (ICC), ICC 2019 - 2019 IEEE International Conference on Communications (ICC), IEEE, Shanghai, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Massive volumes of data are generated every single second in this big data era. With big data from multiple sources, adversaries continuously mine private information for potential benefits. Motivated by this, we propose a generative adversarial net (GAN) driven noise generation method under the framework of differential privacy. We add one more perceptron, which is a specifically devised differential privacy identifier. After the generator produces the noise, the discriminator and the proposed identifier game with each other to derive the Nash Equilibrium. Extensive experimental results demonstrate that the proposed model meets differential privacy constraints while simultaneously improving data utility.
Rahman, MA, Singh, P, Muniyandi, RC, Mery, D & Prasad, M 2019, 'Prostate Cancer Classification Based on Best First Search and Taguchi Feature Selection Method', Image and Video Technology, Pacific-Rim Symposium on Image and Video Technology, Springer International Publishing, Sydney, NSW, Australia, pp. 325-336.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. Prostate cancer is the second most common cancer occurring in men worldwide; about 1 in 41 men will die of prostate cancer, and death rates increase with age. Even though it is a serious condition, only about 1 man in 9 will be diagnosed with prostate cancer during his lifetime. Accurate and early diagnosis can help clinicians treat the cancer better and save lives. This paper proposes a two-phase feature selection method to enhance early diagnosis of prostate cancer based on an artificial neural network. In the first phase, the Best First Search method is used to extract the relevant features from the original dataset. In the second phase, the Taguchi method is used to select the most important features from those already extracted by Best First Search. A publicly available prostate cancer benchmark dataset is used for the experiment, which contains two classes of data: normal and abnormal. The proposed method outperforms other existing methods on the prostate cancer benchmark dataset with a classification accuracy of 98.6%. The proposed approach can help clinicians reach a more accurate and earlier diagnosis of different stages of prostate cancer, so that they can make the most suitable treatment decisions to save the lives of patients and prevent death due to prostate cancer.
Salamai, A, Hussain, O & Saberi, M 2019, 'Decision Support System for Risk Assessment Using Fuzzy Inference in Supply Chain Big Data', 2019 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), 2019 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), IEEE, Shenzhen, China, pp. 248-253.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Currently, organisations find it difficult to design a Decision Support System (DSS) that can predict various operational risks, such as financial and quality issues, with operational risks responsible for significant economic losses and damage to an organisation's reputation in the market. This paper proposes a new DSS for risk assessment, called the Fuzzy Inference DSS (FIDSS) mechanism, which uses fuzzy inference methods based on an organisation's big data collection. It includes the Emerging Association Patterns (EAP) technique that identifies the important features of each risk event. Then, the Mamdani fuzzy inference technique and several membership functions are evaluated using the firm's data sources. The FIDSS mechanism can enhance an organisation's decision-making processes by quantifying the severity of a risk as low, medium or high. When it automatically predicts a medium or high level, it assists organisations in taking further actions that reduce this severity level.
Sawhney, R, Shah, RR, Bhatia, V, Lin, C-T, Aggarwal, S & Prasad, M 2019, 'Exploring the Impact of Evolutionary Computing based Feature Selection in Suicidal Ideation Detection', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, New Orleans, LA, USA.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The ubiquitous availability of smartphones and the increasing popularity of social media provide a platform for users to express their feelings, including suicidal ideation. Suicide prevention by suicidal ideation detection on social media lights the path to controlling the rapidly increasing suicide rates amongst youth. This paper proposes a diverse set of features and investigates feature selection using the Firefly algorithm to build an efficient and robust supervised approach to classifying tweets with suicidal ideation. The development of a suicidal language to create three diverse, manually annotated datasets leads to the validation of the proposed model. An in-depth result and error analysis leads to an accurate system for monitoring suicidal ideation on social media, along with the discovery of optimal feature subsets and selection methods using a penalty-based Firefly algorithm.
Shaham, S, Ding, M, Liu, B, Lin, Z & Li, J 2019, 'Machine Learning Aided Anonymization of Spatiotemporal Trajectory Datasets', IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The big data era requires a growing number of companies to publish their data publicly. Preserving the privacy of users while publishing these data has become a critical problem. One of the most sensitive sources of data is spatiotemporal trajectory datasets. Such datasets are extremely sensitive as users' personal information such as home address, workplace and shopping habits can be inferred from them. In this paper, we propose an approach for anonymization of spatiotemporal trajectory datasets. The proposed approach is based on generalization entailing alignment and clustering of trajectories. We propose to apply k'-means algorithm for clustering trajectories by developing a technique that makes it possible. We also significantly reduce the information loss during the alignment by incorporating multiple sequence alignment instead of pairwise sequence alignment used in the literature. We analyze the performance of our proposed approach by applying it to Geolife dataset, which includes GPS logs of over 180 users in Beijing, China. Our experiments indicate the robustness of our framework compared to prior works.
Shaham, S, Ding, M, Liu, B, Lin, Z & Li, J 2019, 'Transition-Entropy: A Novel Metric for Privacy Preservation in Location-Based Services', IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The advent of location-based services has created the need for preserving the location privacy of users. An adversary such as an untrusted location-based server can monitor the queried locations by a user to infer sensitive information such as the user's home address, health conditions, shopping habits, etc. To address this issue, dummy-based algorithms have been developed to increase the anonymity of users, and thus, protecting their privacy. Unfortunately, the existing algorithms only consider a limited amount of side information known by the adversary whereas they may face more serious challenges in practice. In this paper, we consider a new type of side information based on consecutive location changes of users, and propose a new metric called transition-entropy to investigate the location privacy preservation. Furthermore, we develop a greedy algorithm to significantly improve the transition-entropy performance for a given dummy generation algorithm. Via experiments conducted on a real-life dataset, we evaluate the performance of the proposed metric and algorithm.
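The entropy-style anonymity measure underlying dummy-based approaches can be made concrete. The sketch below computes plain Shannon entropy over the adversary's probability assignment to the k reported locations (the real one plus k−1 dummies); the paper's transition-entropy additionally accounts for consecutive location changes, which this minimal version does not model.

```python
import math

def location_entropy(probs):
    """Shannon entropy (bits) of the adversary's belief over the k
    reported locations; higher entropy means stronger anonymity."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return -sum(p * math.log2(p) for p in probs if p > 0)

# k equally plausible locations give the maximum, log2(k); side
# information that skews the adversary's belief lowers the entropy.
uniform = [0.25, 0.25, 0.25, 0.25]   # 2.0 bits
skewed = [0.85, 0.05, 0.05, 0.05]    # well under 2 bits
```

A dummy generation algorithm that leaks side information (such as implausible consecutive transitions) effectively moves the adversary's belief from `uniform` toward `skewed`, which is the degradation the transition-entropy metric is designed to expose.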
Shakeri Hossein Abad, Z, Gervasi, V, Zowghi, D & H. Far, B 2019, 'Supporting Analysts by Dynamic Extraction and Classification of Requirements-Related Knowledge', 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), IEEE, Montreal, QC, Canada, pp. 442-453.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In many software development projects, analysts are required to deal with systems’ requirements from unfamiliar domains. Familiarity with the domain is necessary in order to get full leverage from interaction with stakeholders and for extracting relevant information from the existing project documents. Accurate and timely extraction and classification of requirements knowledge support analysts in this challenging scenario. Our approach is to mine real-time interaction records and project documents for the relevant phrasal units about the requirements-related topics being discussed during elicitation. We propose to use both generative and discriminative methods. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modelling of natural language processing tasks. We used an extended version of Support Vector Machines (SVMs) with variable-sized feature vectors to efficiently and dynamically extract and classify requirements-related knowledge from the existing documents. To evaluate the performance of our approach intuitively and quantitatively, we used edit distance and precision/recall metrics. We show in three case studies that the snippets extracted by our method are intuitively relevant and reasonably accurate. Furthermore, we found that statistical and linguistic parameters such as smoothing methods, and words contiguity and order features can impact the performance of both extraction and classification tasks.
Shdifat, B, Cetindamar, D & Erfani, S 2019, 'A Literature Review on Big Data Analytics Capabilities', 2019 Portland International Conference on Management of Engineering and Technology (PICMET), 2019 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, Oregon, pp. 1-6.
View/Download from: Publisher's site
View description>>
Many researchers and practitioners are interested in big data due to its transformational potential for achieving competitive advantage. Recent studies indicate that businesses achieve competitive advantage not only through investments in technology infrastructure but also by creating technological and organizational capabilities. In the light of Resource-based View theory, this paper aims to find out "what capabilities have been required to build big data analytics?" by conducting an in-depth literature review. We adopted a systematic literature review approach and studied academic articles published between 2010 and 2018. We used the Scopus and Web of Science (WoS) databases to find published studies related to big data analytics capabilities, twenty-five (25) of which met the selection criteria. Results showed that the capabilities of big data analytics fall into two major categories: human and infrastructure capabilities.
Shu, Y & Xu, G 2019, 'Emotion Recognition from Music Enhanced by Domain Knowledge', PRICAI 2019: Trends in Artificial Intelligence, PACIFIC RIM INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, Springer International Publishing, Yanuca Island, Cuvu, Fiji, pp. 121-134.
View/Download from: Publisher's site
View description>>
Music elements have been widely used to influence the audience's emotional experience through music grammar. However, this domain knowledge has not been thoroughly explored as music grammar for music emotion analysis in previous work. In this paper, we propose a novel method to analyze music emotion by utilizing the domain knowledge of music elements. Specifically, we first summarize the domain knowledge of music elements and infer probabilistic dependencies between the main musical elements and emotions from the summarized music theory. Then, we transfer the domain knowledge into constraints and formulate affective music analysis as a constrained optimization problem. Experimental results on the Music in 2015 database and the AMG1608 database demonstrate that the proposed music content analysis method outperforms state-of-the-art prediction methods.
Sick, N, Katic, M, Agarwal, R & Cetindamar Kozanoglu, D 2019, 'Operationalising ambidexterity: The role of better management practices in high-variety, low-volume manufacturing', PICMET ’19 Conference: Technology Management in the World of Intelligent Systems, Portland, Oregon, USA.
Singh, A 2019, 'Investigating the impact of landmarks on spatial learning during active navigation', Society for Neuroscience, Society for Neuroscience, Chicago.
Song, Y, Zhang, G, Lu, H & Lu, J 2019, 'A Noise-tolerant Fuzzy c-Means based Drift Adaptation Method for Data Stream Regression', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, New Orleans, LA, USA.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Concept drift, referring to changes in data distributions, is a critical challenge typically associated with mining data streams. Current drift detection and adaptation methods focus on how to immediately detect distribution changes once concept drift occurs and swiftly update the model to be applicable to newly arrived data instances. Most of those methods assume the data has no noise, or that the noise is too weak to affect the modeling procedure. However, real-world data are normally contaminated, and denoising techniques are highly preferred as a necessary preprocessing step. This issue is more complex for a data stream with concept drift because the noise is very likely to be confused with drift. Motivated by that, this paper proposes a Noise-tolerant Fuzzy c-means based drift Adaptation method (NFA) which can adapt to changing distributions and is suitable for noisy data streams. The concept drift problem is solved by using a fuzzy c-means based regression model to continuously include the data instances most relevant to the latest pattern in the training set. In addition, a denoising technique is designed in NFA to remove noise, and its ability to update incrementally enables it to be embedded in the incremental drift adaptation process, so NFA can solve the concept drift and noise problems at the same time. Experimental evaluation results also show the good performance of our method in handling data streams with concept drift and noise.
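The fuzzy c-means building block that NFA relies on assigns every instance a graded membership in each cluster rather than a hard label, which is what lets noisy or drifting instances be down-weighted smoothly. A minimal 1-D membership update (standard fuzzy c-means, not the paper's full NFA regression model) looks like:

```python
def fcm_memberships(points, centers, m=2.0):
    """Standard fuzzy c-means membership update (1-D sketch).

    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), so each point gets a
    degree of membership in every cluster, inversely related to its
    distance from that cluster's center.
    """
    out = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            # A point sitting exactly on a center belongs fully to it.
            out.append([1.0 if dj == 0.0 else 0.0 for dj in d])
            continue
        exp = 2.0 / (m - 1.0)
        out.append([1.0 / sum((dj / dk) ** exp for dk in d) for dj in d])
    return out
```

Each membership row sums to 1; with fuzzifier m = 2, a point midway between two centers belongs equally to both, which is exactly the ambiguity a noise-versus-drift decision has to resolve.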
Sood, K, Pokhrel, SR, Karmakar, K, Vardharajan, V & Yu, S 2019, 'SDN-Capable IoT Last-Miles: Design Challenges', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, Waikoloa, HI, USA.
View/Download from: Publisher's site
View description>>
We propose to redesign SDN control in IoT last-miles so as to extend the capability from edge routers to devices (end-node things enabled with SDN capabilities). Our approach puts forward existing and new challenges that cannot be resolved directly using seminal approaches. The main challenges we identify are: the scalability of sensor nodes/things, maintaining the security of the system, and fulfilling the Quality of Service (QoS) requirements of all IoT applications. Firstly, we elaborate on and discuss the aforementioned critical and fundamental challenges that require immediate investigation. Secondly, we propose a policy-driven framework for secure routing and conduct performance modeling and analysis. Further, in the QoS context, we propose an intent-based flow offloading scheme to meet flow-specific QoS requirements. More importantly, we develop an analysis by modeling TCP-based flows over WiFi, thus forming the required SDN-IoT network, using mathematics as a tool for reasoning about our challenges. With new insights from our analysis, the feasibility of the proposed approach is validated using factors such as path set-up time in SDN-IoT networks, SDN controller/device throughputs, packet losses and the response time of the controller.
Soomro, AM, Paryani, S, Rehman, J, Echeverria, RA, Biloria, N & Prasad, M 2019, 'Influencing Human Behaviour to Optimise Energy in Commercial Buildings', ACIS 2019 Proceedings - 30th Australasian Conference on Information Systems, Australian Conference on Information Systems, ACIS 2019, Perth, Australia, pp. 901-907.
View description>>
This paper discusses the impact of user energy choices on building energy demand, and how energy choices could be influenced to minimise building energy consumption using information systems. Accordingly, a socio-technical framework is designed and presented, which draws upon the use of energy interventions. A novel Social-Economic-Environmental (SEE) model is presented within the socio-technical framework, aimed at nudging inhabitants to conserve energy in university buildings, thereby making the world a more sustainable place to live. The framework takes into account the Agent-based Modelling (ABM) approach to model user energy choices and their willingness to conserve energy in buildings. This research intends to test the socio-technical framework in the next stage of this study. Finally, this paper highlights gaps and the significance of understanding how user behaviour and energy consumption can be influenced to optimise energy use in university buildings, thereby reducing global greenhouse emissions.
Sreevallabh Chivukula, A, Yang, X & Liu, W 2019, 'Adversarial Deep Learning with Stackelberg Games', Communications in Computer and Information Science, Springer International Publishing, pp. 3-12.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2019. Deep networks are vulnerable to adversarial attacks from malicious adversaries. Currently, many adversarial learning algorithms are designed to exploit such vulnerabilities in deep networks. These methods focus on attacking and retraining deep networks with adversarial examples to do either feature manipulation or label manipulation or both. In this paper, we propose a new adversarial learning algorithm for finding adversarial manipulations to deep networks. We formulate adversaries who optimize game-theoretic payoff functions on deep networks performing multi-label classification. We model the interactions between a classifier and an adversary from a game-theoretic perspective and formulate their strategies into a Stackelberg game associated with a two-player problem. Then we design algorithms to solve for the Nash equilibrium, which is a pair of strategies from which there is no incentive for either the classifier or the adversary to deviate. In designing attack scenarios, the adversary’s objective is to deliberately make small changes to test data such that attacked samples are undetected. Our results illustrate that game-theoretic modelling is significantly effective in securing deep learning models against performance vulnerabilities exploited by intelligent adversaries.
Sun, K, Qian, T, Yin, H, Chen, T, Chen, Y & Chen, L 2019, 'What Can History Tell Us?', Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19: The 28th ACM International Conference on Information and Knowledge Management, ACM, pp. 1593-1602.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Recommendation systems have been widely applied to many e-commerce and online social media platforms. Recently, sequential item recommendation, especially session-based recommendation, has aroused wide research interest. However, existing sequential recommendation approaches either ignore the historical sessions or consider all historical sessions without distinguishing whether they are relevant to the current session, which motivates us to distinguish the effect of each historical session and identify relevant historical sessions for recommendation. In light of this, we propose a novel deep learning based sequential recommender framework for session-based recommendation, which takes a Nonlocal Neural Network and a Recurrent Neural Network as the main building blocks. Specifically, we design a two-layer nonlocal architecture to identify historical sessions that are relevant to the current session and learn the long-term user preferences mostly from these relevant sessions. Besides, we design a gated recurrent unit (GRU) enhanced by the nonlocal structure to learn the short-term user preferences from the current session. Finally, we propose a novel approach to integrate both long-term and short-term user preferences in a unified way, facilitating training of the whole recommender model in an end-to-end manner. We conduct extensive experiments on two widely used real-world datasets, and the experimental results show that our model achieves significant improvements over the state-of-the-art methods.
Taghikhah, F, Raffe, WL, Mitri, G, Du Toit, S, Voinov, A & Garcia, JA 2019, 'Last Island', Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2019: Australasian Computer Science Week 2019, ACM, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. A serious game was designed and developed with the goal of exploring potential sustainable futures and the transitions towards them. This computer-assisted board game, Last Island, which incorporates a system dynamics model into a board game's core mechanics, attempts to impart to a non-expert community of players knowledge and understanding of sustainability and of how an isolated society may transition to various futures. To this end, this collaborative-competitive game utilizes the Miniworld model, which simulates three variables important for the sustainability of a society: human population, economic production, and the state of the environment. The resulting player interaction offers possibilities to collectively discover and validate potential scenarios for transitioning to a sustainable future, encouraging players to work together to balance the model output while also competing on individual objectives to be the individual winner of the game.
Tan, Z, Xiong, J, Liu, B & Gui, L 2019, 'A Novel Random Access Mechanism based on Real-time Access Intensity Detection', 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Jeju, Korea (South).
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The Random Access (RA) procedure in 4G LTE provides uplink synchronization between the UE and the eNB and allocates channel resources for data transmission. The existing 4G LTE RA Channel (RACH) cannot adjust to real-time RA traffic and suffers from preamble collisions and system congestion caused by RA request flooding, and thus will not meet the requirement of ubiquitous and massive connectivity in 5G. To improve the system throughput and the RA success probability of the RACH in dense device networks, we propose a novel congestion-aware RA mechanism via a two-phase process in which concurrent devices carry out Real-time Access Intensity Detection (RAID) prior to the RA preamble message. In this paper, we develop an analytical model based on stochastic geometry and derive the RA success probability of our proposed model for a typical device in a single RA slot. The analytical results demonstrate the improvement of our proposed RA mechanism in terms of RA success probability under heavy RA traffic. Furthermore, a large number of simulations under homogeneous and clustered Poisson Point Processes (PPP) are carried out to analyze the performance of the proposed RA mechanism in terms of system throughput, RA success probability, number of retransmissions, and RA delay.
Tirado Cortes, CA, Chen, H-T & Lin, C-T 2019, 'Analysis of VR Sickness and Gait Parameters During Non-Isometric Virtual Walking with Large Translational Gain', Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, ACM, Brisbane, Australia, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. The combination of room-scale virtual reality and non-isometric virtual walking techniques is promising: the former provides a comfortable and natural VR experience, while the latter relaxes the constraint of the physical space surrounding the user. In the last few decades, many non-isometric virtual walking techniques have been proposed to enable unconstrained walking without disrupting the sense of presence in the VR environment. Nevertheless, many works reported the occurrence of VR sickness near the detection threshold or after prolonged use. There exists a knowledge gap on the level of VR sickness and gait performance for amplified non-isometric virtual walking well beyond the detection threshold. This paper presents an experiment with 17 participants that investigated VR sickness and gait parameters during non-isometric virtual walking at large and detectable translational gain levels. The results showed that the translational gain level had a significant effect on the reported sickness score, gait parameters, and center-of-mass displacements. Surprisingly, participants who did not experience motion sickness symptoms at the end of the experiment adapted well to the non-isometric virtual walking and even showed improved performance at a large gain level of 10x.
Tonkin, M, Vitale, J, Herse, S, Raza, SA, Madhisetty, S, Kang, L, Vu, TD, Johnston, B & Williams, M-A 2019, 'Privacy First: Designing Responsible and Inclusive Social Robot Applications for in the Wild Studies', 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, New Delhi, India.
View/Download from: Publisher's site
View description>>
Deploying social robot applications in public spaces for conducting in-the-wild studies is a significant challenge but critical to the advancement of social robotics. Real-world environments are complex, dynamic, and uncertain. Human-robot interactions can be unstructured and unanticipated. In addition, when the robot is intended to be a shared public resource, management issues such as user access and user privacy arise, leading to design choices that can impact users' trust and the adoption of the designed system. In this paper we propose a user registration and login system for a social robot and report on people's preferences when registering their personal details with the robot to access services. This study is the first iteration of a larger body of work investigating potential use cases for the Pepper social robot at a government-managed centre for startups and innovation. We prototyped and deployed a system for user registration with the robot, which gives users control over registering for and accessing services with either face recognition technology or a QR code. The QR code played a critical role in increasing the number of users adopting the technology. We discuss the need to develop social robot applications that responsibly adhere to privacy principles, are inclusive, and cater for a broad spectrum of people.
Tsai, T-Y, Lin, C-T & Prasad, M 2019, 'An Intelligent Customer Churn Prediction and Response Framework', 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, pp. 928-935.
View/Download from: Publisher's site
View description>>
Customer retention is one of the most important issues for companies. Companies always seek to reduce customer churn in order to increase customer lifetime value and reduce the cost of acquiring new customers. By focusing on customer churn prediction and identification, companies can predict in advance which customers are going to churn and therefore decrease the customer churn rate through related personalized actions. The key issue here is how to predict customer churn at an early stage. This paper identifies related issues in customer churn prediction and provides new definitions and classifications of customer churn identification and strategies. This paper also establishes a customer churn prediction and response framework consisting of three main stages: customer churn prediction, customer churn understanding, and customer churn response. The framework presents the characteristics and challenges of the related stages of customer churn as well. These outcomes can be used for customized or personalized product and service development, to improve customer service efficiency, to make related decision-making more effective, and in particular to enable strategic promotion campaigns targeting customers with high churn risk.
van den Hoven, E 2019, 'Materialising Memories', Proceedings of the 5th International ACM In-Cooperation HCI and UX Conference, CHIuXiD'19: The 5th International HCI and UX Conference, ACM, Indonesia, pp. 188-189.
View/Download from: Publisher's site
Verma, R & Merigo, JM 2019, 'On Generalized Intuitionistic Fuzzy Interaction Partitioned Bonferroni Mean Operators', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE.
View/Download from: Publisher's site
Wan, Y, Shu, J, Sui, Y, Xu, G, Zhao, Z, Wu, J & Yu, P 2019, 'Multi-modal Attention Network Learning for Semantic Source Code Retrieval', 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), IEEE, San Diego, CA, USA, pp. 13-25.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Code retrieval techniques and tools have been playing a key role in helping software developers retrieve existing code fragments from available open-source repositories given a user query (e.g., a short natural language text describing the functionality for retrieving a particular code snippet). Despite the existing efforts in improving the effectiveness of code retrieval, two main issues still hinder such techniques from accurately retrieving satisfiable code fragments from large-scale repositories when answering complicated queries. First, the existing approaches only consider shallow features of source code such as method names and code tokens, while ignoring structured features such as abstract syntax trees (ASTs) and control-flow graphs (CFGs), which contain rich and well-defined semantics of source code. Second, although deep learning-based approaches perform well on the representation of source code, they lack explainability, making it hard to interpret the retrieval results and almost impossible to understand which features of the source code contribute more to the final results. To tackle these two issues, this paper proposes MMAN, a novel Multi-Modal Attention Network for semantic source code retrieval. A comprehensive multi-modal representation is developed for representing the unstructured and structured features of source code, with one LSTM for the sequential tokens of code, a Tree-LSTM for the AST of code, and a GGNN (Gated Graph Neural Network) for the CFG of code. Furthermore, a multi-modal attention fusion layer is applied to assign weights to different parts of each modality of source code and then integrate them into a single hybrid representation. Comprehensive experiments and analysis on a large-scale real-world dataset show that our proposed model can accurately retrieve code snippets and outperforms the state-of-the-art methods.
Wang, B, Lu, J, Yan, Z, Luo, H, Li, T, Zheng, Y & Zhang, G 2019, 'Deep Uncertainty Quantification', Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, Anchorage, USA, pp. 2087-2095.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Weather forecasting is usually solved through numerical weather prediction (NWP), which can sometimes lead to unsatisfactory performance due to inappropriate setting of the initial states. In this paper, we design a data-driven method augmented by an effective information fusion mechanism to learn from historical data that incorporates prior knowledge from NWP. We cast the weather forecasting problem as an end-to-end deep learning problem and solve it by proposing a novel negative log-likelihood error (NLE) loss function. A notable advantage of our proposed method is that it simultaneously implements single-value forecasting and uncertainty quantification, which we refer to as deep uncertainty quantification (DUQ). Efficient deep ensemble strategies are also explored to further improve performance. This new approach was evaluated on a public dataset collected from weather stations in Beijing, China. Experimental results demonstrate that the proposed NLE loss significantly improves generalization compared to the mean squared error (MSE) and mean absolute error (MAE) losses. Compared with NWP, this approach significantly improves accuracy by 47.76%, which is a state-of-the-art result on this benchmark dataset. A preliminary version of the proposed method won 2nd place in an online competition for daily weather forecasting.
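The abstract's exact NLE formulation is not reproduced here, but a standard Gaussian negative log-likelihood, in which the model predicts both a mean and a log-variance, illustrates how one loss can train point forecasts and uncertainty estimates jointly; function names and the log-variance parameterisation are assumptions for this sketch.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Mean negative log-likelihood of y under N(mu, exp(log_var)).

    Predicting the log-variance keeps the variance positive; minimising
    this loss trains the forecast mean and its uncertainty jointly,
    whereas MSE/MAE fit only the mean.
    """
    var = np.exp(log_var)
    return np.mean(0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var))

y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])  # small residuals
loss_tight = gaussian_nll(y, mu, np.log(np.full(3, 0.05)))  # calibrated, confident
loss_loose = gaussian_nll(y, mu, np.log(np.full(3, 10.0)))  # needlessly uncertain
```

A well-calibrated, confident variance yields a lower loss than an inflated one, which is what pushes the model toward honest uncertainty estimates.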
Wang, D, Zhang, W, Yu, S & He, H 2019, 'RLS-VNE: Repeatable Large-Scale Virtual Network Embedding over Substrate Nodes', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, Waikoloa, HI, USA.
View/Download from: Publisher's site
View description>>
Embedding multiple virtual networks (VNs) on a shared substrate network (SN), known as virtual network embedding (VNE), is a challenging problem in cloud platforms. VNE methods provide strategies to deploy VNs onto SN resources. However, as the scale of VNs greatly increases, traditional VNE methods become time-consuming and waste link resources. Meanwhile, traditional VNE methods assign each virtual node of the same VN to a different substrate node, making it hard to provide a large enough SN to provision the VN. In order to efficiently embed large-scale VNs, multiple virtual nodes from the same VN need to share the same substrate node. We therefore model a repeatable large-scale virtual network embedding (RLS-VNE) problem in this study, provisioning large-scale VNs, and propose a heuristic method (Rlsvne) to handle RLS-VNE. Rlsvne pre-processes the VN topology before embedding. In the pre-processing stage, the VN topology is processed through graph coarsening, partitioning, and uncoarsening. After the pre-processing, Rlsvne accomplishes an embedding stage with a topology-aware repeatable embedding solution. VNE experiments at the 1,000 and 10,000 scales are conducted to evaluate Rlsvne. The evaluation results demonstrate that Rlsvne outperforms three modified heuristics, showing improved performance in reducing substrate cost and fully utilizing substrate resources, and achieving high acceptance ratios and revenue values.
Wang, X, Jin, D, Liu, M, He, D, Musial, K & Dang, J 2019, 'Emotional Contagion-Based Social Sentiment Mining in Social Networks by Introducing Network Communities', Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19: The 28th ACM International Conference on Information and Knowledge Management, ACM, Beijing, China, pp. 1763-1772.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. The rapid development of social media services has facilitated the communication of opinions through online news, blogs, microblogs, instant messages, and so on. This article concentrates on mining readers' social sentiments evoked by social media materials. Existing methods are only applicable to a minority of social media, such as news portals with emotional voting information, while ignoring the emotional contagion between writers and readers. However, incorporating such factors is challenging since the learned hidden variables would be very fuzzy (because of the short and noisy text in social networks). In this paper, we try to solve this problem by introducing a high-order network structure, i.e. communities. We first propose a new generative model called Community-Enhanced Social Sentiment Mining (CESSM), which 1) considers the emotional contagion between writers and readers to capture precise social sentiment, and 2) incorporates network communities to capture coherent topics. We then derive an inference algorithm based on Gibbs sampling. Empirical results show that CESSM achieves significantly superior performance against the state-of-the-art techniques for text sentiment classification and interestingness in social sentiment mining.
Wang, Y, Jin, D, Musial, K & Dang, J 2019, 'Community Detection in Social Networks Considering Topic Correlations', Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), Hawaii, USA, pp. 321-328.
View/Download from: Publisher's site
View description>>
Network contents, including node contents and edge contents, can be utilized for community detection in social networks. Thus, the topic of each community can be extracted as its semantic information. A plethora of models integrating topic models and network topologies have been proposed. However, a key problem has not been resolved: the semantic division of a community. Since the definition of community is based on topology, a community might involve several topics. To ach
Wang, Y, Zhang, X, Fan, L, Yu, S & Lin, R 2019, 'Segment Routing Optimization for VNF Chaining', ICC 2019 - 2019 IEEE International Conference on Communications (ICC), ICC 2019 - 2019 IEEE International Conference on Communications (ICC), IEEE, Shanghai, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Segment Routing (SR) is an emerging source-routing-based tunneling technique which allows a source router to steer traffic by encoding a segment list in the packet header. Due to its fine-grained control of the routing path, SR can be leveraged to facilitate the deployment of Service Function Chains (SFCs). Using SR, multiple segments compose a specific path delivering traffic along a set of ordered Virtual Network Function (VNF) instances. However, when introducing SR into VNF chaining, the segment list depth of SR may face a scalability problem since traffic flows must be steered to traverse a series of ordered VNFs. To address this problem, we study segment routing optimization for VNF chaining. Our objective is to minimize the packet overhead of SR for all SFC demands, which indicates the scalability performance of SR. We first formulate the problem as an Integer Linear Programming (ILP) model. Since the ILP model is NP-hard, we then propose a heuristic algorithm named Segment Routing for SFC Steering (SR-SFCS), which is based on backtracking and dynamic programming. Extensive simulation results show that, compared with the benchmark algorithms, SR-SFCS can reduce the packet overhead by 23.77% on average.
Wang, Z, Li, Q, Li, G & Xu, G 2019, 'Polynomial Representation for Persistence Diagram', 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Long Beach, CA, USA, pp. 6116-6125.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Persistence diagram (PD) has been considered as a compact descriptor for topological data analysis (TDA). Unfortunately, PD cannot be directly used in machine learning methods since it is a multiset of points. Recent efforts have been devoted to transforming PDs into vectors to accommodate machine learning methods. However, they share one common shortcoming: the mapping of PDs to a feature representation depends on a pre-defined polynomial. To address this limitation, this paper proposes an algebraic representation for PDs, i.e., polynomial representation. In this work, we discover a set of general polynomials that vanish on vectorized PDs and extract the task-adapted feature representation from these polynomials. We also prove two attractive properties of the proposed polynomial representation, i.e., stability and linear separability. Experiments also show that our method compares favorably with state-of-the-art TDA methods.
Wu, D, Chen, J, Sharma, N, Pan, S, Long, G & Blumenstein, M 2019, 'Adversarial Action Data Augmentation for Similar Gesture Action Recognition', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary.
View/Download from: Publisher's site
View description>>
Human gestures are unique for recognizing and describing human actions, and video-based human action recognition techniques are effective solutions for various real-world applications, such as surveillance, video indexing, and human-computer interaction. Most existing video human action recognition approaches either use handcrafted features from the frames or deep learning models such as convolutional neural networks (CNN) and recurrent neural networks (RNN); however, they have mostly overlooked the similar gestures shared between different actions when feeding the frames into the models. The classifiers suffer from similar features extracted from similar gestures and are unable to distinguish such actions in the video streams. In this paper, we propose a novel framework with generative adversarial networks (GAN) to generate data augmentation for similar-gesture action recognition. The contribution of our work is threefold: 1) we propose a novel action data augmentation framework (ADAF) to enlarge the differences between actions with very similar gestures; 2) the framework can boost classification performance either on similar-gesture action pairs or on the whole dataset; 3) experiments conducted on both the KTH and UCF101 datasets show that our data augmentation framework boosts the performance on similar-gesture actions as well as on the whole dataset compared with baseline methods such as 2DCNN and 3DCNN.
Wu, D, Hu, R, Zheng, Y, Jiang, J, Sharma, N & Blumenstein, M 2019, 'Feature-Dependent Graph Convolutional Autoencoders with Adversarial Training Methods', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Graphs are ubiquitous for describing and modeling complicated data structures, and graph embedding is an effective solution to learn a mapping from a graph to a low-dimensional vector space while preserving relevant graph characteristics. Most existing graph embedding approaches either embed the topological information and node features separately or learn one regularized embedding with both sources of information; however, they mostly overlook the interdependency between structural characteristics and node features when processing the graph data into the models. Moreover, existing methods only reconstruct the structural characteristics, and are thus unable to fully leverage the interaction between the topology and the features associated with its nodes during the encoding-decoding procedure. To address this problem, we propose a framework using an autoencoder for graph embedding (GED) and its variational version (VEGD). The contribution of our work is two-fold: 1) the proposed frameworks exploit a feature-dependent graph matrix (FGM) to naturally merge the structural characteristics and node features according to their interdependency; and 2) the Graph Convolutional Network (GCN) decoder of the proposed framework reconstructs both structural characteristics and node features, which naturally exploits the interaction between these two sources of information while learning the embedding. We conducted experiments on three real-world graph datasets, Cora, Citeseer, and PubMed, to evaluate our framework and algorithms, and the results outperform baseline methods on both link prediction and graph clustering tasks.
Wu, J, Xie, R, Song, L & Liu, B 2019, 'Deep Feature Guided Image Retargeting', 2019 IEEE Visual Communications and Image Processing (VCIP), 2019 IEEE Visual Communications and Image Processing (VCIP), IEEE, Sydney, Australia.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Image retargeting is the technique of displaying images on devices with various aspect ratios and sizes. Traditional content-aware retargeting methods rely on low-level features to predict pixel-wise importance and can hardly preserve both the structure lines and the salient regions of the source image. To address this problem, we propose a novel adaptive image warping approach which integrates a deep convolutional neural network. In the proposed method, a visual importance map and a foreground mask map are generated by a pre-trained network. The two maps and other constraints guide the warping process to yield retargeted results with fewer distortions. Extensive experiments in terms of visual quality and a user study are carried out on the widely used RetargetMe dataset. Experimental results show that our method outperforms current state-of-the-art image retargeting methods.
Wu, S & Bai, Q 2019, 'Incentivizing Long-Term Engagement Under Limited Budget', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 662-674.
View/Download from: Publisher's site
View description>>
In recent years, more and more systems have been designed to affect users’ decisions for realizing certain system goals. However, most of these systems only focus on affecting users’ short-term or one-off behaviors, while ignoring the maintenance of users’ long-term engagement. In this light, we intend to design a novel approach which focuses on incentivizing users’ long-term engagement. In this paper, inspired by the use of Markov Decision Process (MDP), we first formally model the process of a user’s decision-making under long-term incentives. Subsequently, we propose the MDP-based Incentive Estimation (MDP-IE) approach for determining the value of an incentive and the requirement of obtaining that incentive. Experimental results demonstrate that the proposed approach can effectively sustain users’ long-term engagement. Furthermore, the experiments also demonstrate that incentivizing users’ long-term engagement is more beneficial than one-off or short-term approaches.
Wu, S, Bai, Q & Kang, BH 2019, 'Adaptive Incentive Allocation for Influence-Aware Proactive Recommendation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 649-661.
View/Download from: Publisher's site
View description>>
Most recommendation systems are designed to discover users' demands and preferences, but are unable to affect users' decisions so as to realize a system-level objective. In this light, we propose a generic concept named 'proactive recommendation', which focuses not only on maintaining users' satisfaction but also on realizing system-level objectives. In this paper, we argue that proactive recommendation is crucial for scenarios where system-level objectives must be realized. To realize proactive recommendation, we aim to affect users' decision-making by providing incentives and utilizing the social influence between users. We design an approach for discovering the influential users in an unknown network, and a dynamic game-based mechanism that allocates incentives to users dynamically. Preliminary experimental results show the effectiveness of the proposed approach.
Xiao, K, Zhao, J, He, Y & Yu, S 2019, 'Trajectory Prediction of UAV in Smart City using Recurrent Neural Networks', ICC 2019 - 2019 IEEE International Conference on Communications (ICC), ICC 2019 - 2019 IEEE International Conference on Communications (ICC), IEEE, Shanghai, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The 5th generation (5G) wireless network with unmanned aerial vehicles (UAVs) is considered one of the most effective solutions for improving communication coverage. However, a UAV is easily affected by the wind and is subject to a certain time delay during air communication. Thus inaccurate beamforming will be performed by the base station (BS), resulting in unnecessary capacity loss. To address this issue, we propose a novel Recurrent Neural Network (RNN)-based arrival-angle predictor to predict the communication location of the UAV under 5G Internet of Things (IoT) networks. Specifically, a grid-based coordinate system is applied during data preprocessing to make the training process easier and more effective. Moreover, the RNN model with the highest accuracy is saved during training to ensure real-time prediction. Simulation results reveal that the proposed RNN-based predictor achieves high prediction accuracy, 98% on average. Therefore, more precise beamforming can be performed by the BS to reduce unnecessary capacity loss, resulting in a more effective and reliable communication system.
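As a rough illustration of the grid-based preprocessing the abstract mentions, continuous positions can be snapped to cell indices before being fed to an RNN; the bounds, cell size, and grid width below are invented for the example, not taken from the paper.

```python
def to_grid(x, y, x_min=0.0, y_min=0.0, cell=10.0, cols=100):
    """Map a continuous position to a single grid-cell index.

    Snapping (x, y) onto a grid turns trajectory prediction into
    next-cell classification, which is easier for an RNN to learn than
    regressing raw coordinates. Bounds, cell size, and grid width here
    are illustrative assumptions.
    """
    col = int((x - x_min) // cell)
    row = int((y - y_min) // cell)
    return row * cols + col

trajectory = [(12.0, 5.0), (23.5, 14.9), (37.0, 26.1)]
cells = [to_grid(x, y) for x, y in trajectory]  # discrete RNN input sequence
```

The resulting sequence of cell indices is what a recurrent model would consume, one token per time step.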
Xiao, Y, Xiao, L, Zhang, H, Yu, S & Poor, HV 2019, 'Privacy Aware Recommendation: Reinforcement Learning Based User Profile Perturbation', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, Waikoloa, HI, USA.
View/Download from: Publisher's site
View description>>
User profile release in recommendation systems can apply the user profile perturbation technique to protect user privacy, in which each user sends a perturbed user profile, such as a list of clicked items, to receive a recommendation service from a server. The perturbation policy, such as the privacy budget, determines the recommendation quality and the privacy level, while its optimization usually depends on a known attack model, which is rarely available to users. In this paper, we propose a reinforcement learning based user profile perturbation scheme that applies differential privacy to protect user privacy in recommendation systems. Under reinforcement learning, the privacy budget used to perturb the released user profile depends on the features of the actual and released user profiles and the estimated user privacy level. This scheme enables a user to optimize his or her perturbation policy in terms of both the privacy level and the received recommendation quality without being aware of the attack model. We evaluate the computational complexity of this scheme and analyze a case study, a privacy-aware movie recommendation system. Simulation results show that this scheme improves user privacy protection for a given level of recommendation quality compared with a benchmark profile perturbation scheme.
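One generic way to perturb a released click profile under a privacy budget is randomised response, sketched below. This is not the paper's scheme: the reinforcement-learning agent there would choose the budget adaptively, and all names and values here are illustrative.

```python
import math
import random

def perturb_profile(clicks, eps, seed=0):
    """Randomised-response perturbation of a binary click vector.

    Each bit is reported truthfully with probability e^eps / (1 + e^eps)
    and flipped otherwise, which satisfies eps-local differential
    privacy per item. A generic perturbation sketch, not the paper's
    learned policy.
    """
    rng = random.Random(seed)
    p_keep = math.exp(eps) / (1.0 + math.exp(eps))
    return [c if rng.random() < p_keep else 1 - c for c in clicks]

profile = [1, 0, 0, 1, 1, 0, 0, 0]                   # 1 = item clicked
noisy_small_eps = perturb_profile(profile, eps=0.1)  # strong privacy, noisy
noisy_large_eps = perturb_profile(profile, eps=8.0)  # weak privacy, near-exact
```

A small budget yields a near-random profile (high privacy, low recommendation quality); a large budget leaves the profile almost intact, which is exactly the trade-off the learned policy navigates.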
Yan, W, Fu, A, Mu, Y, Zhe, X, Yu, S & Kuang, B 2019, 'EAPA', Proceedings of the 2nd International ACM Workshop on Security and Privacy for the Internet-of-Things, CCS '19: 2019 ACM SIGSAC Conference on Computer and Communications Security, ACM, United Kingdom.
View/Download from: Publisher's site
View description>>
The wide deployment of devices in the Internet of Things (IoT) not only brings many benefits but also incurs security challenges. Remote attestation has become an attractive method to guarantee the security of IoT devices. Unfortunately, most current attestation schemes focus only on software attacks and cannot detect physical attacks. The few remote attestation schemes resilient to physical attacks still have drawbacks in energy consumption, runtime, and security. In this paper, we propose an Efficient Attestation scheme resilient to Physical Attacks (EAPA) for IoT devices. We exploit a distributed attestation mode to execute the protocol in parallel, which reduces the total runtime to $O(1)$. Besides, we introduce an accusation mechanism to report compromised devices and design a new key update method, ensuring the efficiency and security of our scheme. Furthermore, we present the security analysis and the performance evaluation of EAPA. The results indicate that EAPA has the lowest energy and runtime consumption compared with related works. In particular, it shows a constant value in terms of runtime consumption.
Yang, H, Pan, S, Chen, L, Zhou, C & Zhang, P 2019, 'Low-Bit Quantization for Attributed Network Representation Learning', Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}, International Joint Conferences on Artificial Intelligence Organization, Macao, pp. 4047-4053.
View/Download from: Publisher's site
View description>>
Attributed network embedding plays an important role in transferring network data into compact vectors for effective network analysis. Existing attributed network embedding models are designed either in continuous Euclidean spaces which introduce data redundancy or in binary coding spaces which incur significant loss of representation accuracy. To this end, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low bitwidth values while preserving high representation accuracy. Specifically, we formulate a new representation learning function based on matrix factorization that can jointly learn the low-bit node representations and the layer aggregation weights under the low-bit quantization constraint. Because the new learning function falls into the category of mixed integer optimization, we propose an efficient mixed-integer based alternating direction method of multipliers (ADMM) algorithm as the solution. Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model.
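As a point of reference for what "low bitwidth" means here, a plain uniform symmetric quantizer might look as follows. LQANR itself learns the codes jointly with the layer aggregation weights via ADMM, so this is only an illustrative stand-in with our own naming:

```python
def quantize_lowbit(vec, bits=2):
    """Uniform symmetric quantization of an embedding to 2^bits - 1 levels.

    For bits=2 the integer codes fall in {-1, 0, +1}; a value is
    reconstructed as code / half * scale.
    """
    levels = 2 ** bits - 1          # symmetric grid around zero
    half = (levels - 1) // 2        # 2 bits -> codes in {-1, 0, +1}
    scale = max(abs(v) for v in vec) or 1.0
    codes = [round(v / scale * half) for v in vec]
    return codes, scale

# A real-valued node embedding collapses to low-bit codes plus one scale.
print(quantize_lowbit([0.9, -0.1, 0.4]))
```

Storing integer codes plus a single float per vector is what yields the compactness the abstract refers to.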
Yao, L, Jia, Y, Zhang, H, Long, K, Pan, M & Yu, S 2019, 'A Decentralized Private Data Transaction Pricing and Quality Control Method', ICC 2019 - 2019 IEEE International Conference on Communications (ICC), ICC 2019 - 2019 IEEE International Conference on Communications (ICC), IEEE, Shanghai, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. In the past few years, it has become increasingly popular to analyze information obtained by conducting decentralized surveys of private data for specific populations in order to develop services. The privacy and security requirements of data providers force operators to implement reasonable privacy protections, but increasing the investment in privacy protection also reduces operator revenue. Operators therefore need to satisfy users' privacy and security requirements while keeping customized services sustainable. To this end, we study the relationship between collected data quality and operator strategy, quantify the price of private data, and build a model to maximize operator profitability. Specifically, closed-form solutions for the best private data prices and subscription fees are derived to maximize the gross profit of service providers. Data quality factors are also included so that the user-perceived quality of service can be guaranteed to a certain extent. Finally, we explore the relationship between spending, subscription fees, and the maximum gross profit of carriers during the data collection phase, based on the distribution of privacy attitudes across different user groups. In particular, we also explore the relationship between adding extra noise and collected data utility in a decentralized privacy protection scenario. Simulation results show that, compared with existing methods, the algorithm maximizes collected data quality while satisfying providers' privacy and security requirements. In addition, we demonstrate the benefits of our dynamic pricing approach and its applicability to other private data pricing algorithms.
Yeung, J & McGregor, C 2019, 'Analyzing countermeasure effectiveness utilizing big data analytics for space medicine decision support: A case study', Proceedings of the International Astronautical Congress, IAC, International Astronautical Congress, Washington D.C.
View description>>
The physiological health and wellbeing of every individual crew member is critical to the success of any long-duration space mission. In the most predominant space travel of recent years, the expeditions aboard the International Space Station (ISS), astronauts' physiological data, psychological survey data, and the spacecraft's habitable environmental data are periodically monitored by Mission Control, while also providing a range of data for retrospective research studies. This has led to the optimization of onboard environmental control and life support systems, countermeasure exercise programs, and preventive measures, extending human space travel capacities from 90 minutes in 1961 to 180 days or even a year for current ISS expeditions. Although current methodologies help minimize health impacts for the astronauts pre-flight, during flight, and post-flight, these impacts are not detected in real time, and much remains unknown for longer missions lasting 2-3 years, such as one to Mars. Physiological data acquired from existing onboard equipment is still monitored retrospectively, and issues such as intracranial pressure resulting in vision changes for astronauts during spaceflight and post-flight still prevail. Behavioural health and psychological effects due to the isolation, confinement, and social impacts with other astronauts in the spacecraft for periods longer than current expeditions also remain. Such health and wellbeing implications are critical for the astronauts themselves to comprehend given the autonomous nature of every mission into space; therefore the development of Autonomous Medical Care is critical. In recent research, advanced prognostic health management enabled by the online analytics platform Artemis has demonstrated its potential in determining health states of astronauts using heart rate variability (HRV) data. However, this environment exists independent of the countermeasure exercise p...
Yin, R, Li, K, Lu, J & Zhang, G 2019, 'Enhancing Fashion Recommendation with Visual Compatibility Relationship', The World Wide Web Conference, WWW '19: The Web Conference, ACM, San Francisco CA USA, pp. 3434-3440.
View/Download from: Publisher's site
View description>>
© 2019 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC-BY 4.0 License. With the growth of online shopping services, fashion recommendation plays an important role in daily online shopping scenarios. Many recommender systems have been developed using visual information. However, few works take compatibility relationships into account when generating recommendations. The challenge is that the concept of fashion is often subtle and subjective for different customers. In this paper, we propose a fashion compatibility knowledge learning method that incorporates visual compatibility relationships as well as style information. We also propose a fashion recommendation method with a domain adaptation strategy to alleviate the distribution gap between the items in the target domain and the items of external compatible outfits. Our results indicate that the proposed method is capable of learning visual compatibility knowledge and outperforms all the baselines.
Yin, R, Li, K, Lu, J & Zhang, G 2019, 'RsyGAN: Generative Adversarial Network for Recommender Systems', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Many recommender systems rely on the information of user-item interactions to generate recommendations. In real applications, the interaction matrix is usually very sparse, as a result, the model cannot be optimised stably with different initial parameters and the recommendation performance is unsatisfactory. Many works attempted to solve this problem, however, the parameters in their models may not be trained effectively due to the sparse nature of the dataset which results in a lower quality local optimum. In this paper, we propose a generative network for making user recommendations and a discriminative network to guide the training process. An adversarial training strategy is also applied to train the model. Under the guidance of a discriminative network, the generative network converges to an optimal solution and achieves better recommendation performance on a sparse dataset. We also show that the proposed method significantly improves the precision of the recommendation performance on several datasets.
Yu, H, Lu, J, Xu, J & Zhang, G 2019, 'A Hybrid Incremental Regression Neural Network for Uncertain Data Streams', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The design of classical regression algorithms was based on the assumption that all the required data is obtained at one time. With the emergence of big data, however, data is increasingly displayed in sequence form, such as in data streams, and can be read only once in a specific order. Many incremental regression algorithms which process data in a sequential manner have been proposed, but the accuracy of these algorithms deteriorates when the value of the data is uncertain. This paper proposes a hybrid incremental regression neural network based on self-organizing incremental neural network and incremental fuzzy support vector regression. In our proposed network, the neurons of the regression neural network are obtained by an improved self-organized incremental neural network (SOINN). This enables the regression neural network structure to self-organize as the number of neurons increases. An incremental fuzzy support vector regression (IFSVR) algorithm is then used to modify the parameters of the regression neural network. By combining the improved SOINN and IFSVR algorithms, our proposed hybrid incremental regression neural network is able to learn an accurate regression model from large uncertain data. Experiments on both artificial and real-world datasets indicate that our proposed hybrid incremental regression neural network achieves superior performance compared to other incremental regression algorithms.
Zakeri, A, Saberi, M, Aboutalebi, S, Hussain, OK & Chang, E 2019, 'Smart Farm', Proceedings of the Workshop on Interactive Data Mining, WSDM '19: The Twelfth ACM International Conference on Web Search and Data Mining, ACM, Melbourne, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. In the recent past, there has been a growing trend of interest among the downstream stakeholders of a dairy chain in receiving milk (and dairy products) of high quality. Moreover, the rejection of milk and other dairy products by customers due to poor quality has severe negative impacts on the dairy chain's upstream stakeholders. “Smart Farm” is a system for the proactive management of raw milk quality in dairy farms. It aims to empower the dairy chain's stakeholders, milk farmers and logistics service providers, with next-generation interactive artificial intelligence-based automated systems so that they can be active participants in proactively managing raw milk quality and, consequently, maximize their expected benefits by maintaining higher quality for the milk they supply to the processor.
Zhang, B, Lu, J & Zhang, G 2019, 'Drift Adaptation via Joint Distribution Alignment', 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, pp. 498-504.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Machine learning in evolving environments faces challenges due to concept drift. Most concept drift adaptation methods focus on modifying the model. In this paper, we propose Drift Adaptation via Joint Distribution Alignment (DAJDA). Instead of modifying the model, DAJDA applies a linear transformation to drifted instances, mapping them into a common feature space and reducing the discrepancy between the distributions before and after the drift. Experimental studies show that DAJDA improves the performance of the learning model under concept drift.
Zhang, D, Zhang, Q, Zhang, G & Lu, J 2019, 'FreshGraph: A Spam-Aware Recommender System for Cold Start Problem', 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, pp. 1211-1218.
View/Download from: Publisher's site
View description>>
Recommender systems provide personalized recommendations to help users cope with information overload. Collaborative filtering-based recommendation methods play a dominant role in industry because of their versatility and simplicity. However, their performance suffers from sparse data, and they are less effective in cold-start settings. In real-world scenarios, when items are recommended to users, it is very easy to overwhelm the target users with impersonal information, which drives away a valuable audience. In this paper, we propose a two-step spam-aware recommendation framework to effectively recommend new items to target users. Utilizing a heterogeneous information graph structure, we first use item-user meta-path similarity for user candidate selection. We then use an entropy encoding measure to identify false positives in the candidate list and prevent possible spam. The proposed method leverages the semantic information that persists in the graph structure, considering not only item content features but also user activeness for more effective audience targeting. The proposed method produces an explainable top-K user list for each new item, where K is determined individually for each item. Meanwhile, the method is adaptive to data changes over time and capable of processing requests in real time.
Zhang, J, Chen, B, Yu, S & Deng, H 2019, 'PEFL: A Privacy-Enhanced Federated Learning Scheme for Big Data Analytics', 2019 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2019 - 2019 IEEE Global Communications Conference, IEEE, Waikoloa, HI, USA.
View/Download from: Publisher's site
View description>>
Federated learning has emerged as a promising solution for big data analytics, which jointly trains a global model across multiple mobile devices. However, participants' sensitive data information may be leaked to an untrusted server through uploaded gradient vectors. To address this problem, we propose a privacy-enhanced federated learning (PEFL) scheme to protect the gradients over an untrusted server. This is mainly enabled by encrypting participants' local gradients with Paillier homomorphic cryptosystem. In order to reduce the computation costs of the cryptosystem, we utilize the distributed selective stochastic gradient descent (DSSGD) method in the local training phase to achieve the distributed encryption. Moreover, the encrypted gradients can be further used for secure sum aggregation at the server side. In this way, the untrusted server can only learn the aggregated statistics for all the participants' updates, while each individual's private information will be well-protected. For the security analysis, we theoretically prove that our scheme is secure under several cryptographic hard problems. Exhaustive experimental results demonstrate that PEFL has low computation costs while reaching high accuracy in the settings of federated learning.
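The additive homomorphism that enables the secure sum aggregation described above can be shown with a toy Paillier instance. The primes here are tiny and purely illustrative; real use requires moduli of at least 2048 bits and a vetted cryptography library:

```python
import math
import random

# Toy Paillier cryptosystem (illustration only).
P, Q = 293, 433
N, N2 = P * Q, (P * Q) ** 2
LAM = math.lcm(P - 1, Q - 1)
G = N + 1
MU = pow(LAM, -1, N)        # since g = n + 1, L(g^lam mod n^2) = lam
_rng = random.Random(42)

def encrypt(m):
    r = _rng.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = _rng.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    u = pow(c, LAM, N2)
    return ((u - 1) // N * MU) % N

# Additive homomorphism: the server can sum encrypted gradient updates
# without learning any individual participant's update.
a, b = 17, 25
print(decrypt((encrypt(a) * encrypt(b)) % N2))  # decrypts to a + b = 42
```

Multiplying ciphertexts corresponds to adding plaintexts, which is exactly the aggregation step PEFL performs over participants' encrypted gradients.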
Zhang, J, Chen, J, Wu, D, Chen, B & Yu, S 2019, 'Poisoning Attack in Federated Learning using Generative Adversarial Nets', 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), IEEE, Rotorua, New Zealand, pp. 374-380.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which an attacker acting as a benign participant uploads poisoned updates to the server and can thereby easily degrade the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GANs). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. These generated samples, fully controlled by the attacker, are then used to craft poisoning updates, and the global model is compromised when the attacker uploads the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and the global model achieves more than 80% accuracy on both the poisoning tasks and the main tasks.
Zhang, J, Yang, L, Yu, S & Ma, J 2019, 'A DNS Tunneling Detection Method Based on Deep Learning Models to Prevent Data Exfiltration', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Network and System Security, Springer International Publishing, Sapporo, Japan, pp. 520-535.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. DNS tunneling is a typical DNS attack that has been used to steal information for many years. The stolen data is encoded and encapsulated into DNS requests to evade intrusion detection. Popular machine learning detection methods use features such as network traffic and DNS behavior. However, most of these features, such as time-frequency related features, can only be extracted after data exfiltration has occurred. The key to preventing data exfiltration based on DNS tunneling is to detect the malicious query from a single DNS request. Since we do not use network traffic features or DNS behavior features, our method can detect DNS tunneling before data exfiltration. In this paper, we propose a detection method based on deep learning models that uses the DNS query payloads as predictive variables. As DNS tunneling data is a kind of text, our approach uses word embedding, a feature extraction method from natural language processing (NLP), as part of fitting the neural networks. To achieve high performance, the detection decision is made by common deep learning models, including a dense neural network (DNN), a one-dimensional convolutional neural network (1D-CNN), and a recurrent neural network (RNN). We implement the DNS tunneling detection system in a real network environment. The results show that our approach achieves 99.90% accuracy and is more secure than existing methods.
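As a rough illustration of why a single query payload carries enough signal, a character-entropy heuristic over the subdomain labels already separates data-carrying queries from ordinary hostnames. This hand-rolled heuristic is a stand-in for intuition only, not the paper's DNN/1D-CNN/RNN models:

```python
import math
from collections import Counter

def label_entropy(qname):
    """Shannon entropy (bits/char) of a DNS query's subdomain labels.

    Tunneled queries encode exfiltrated data in the leftmost labels, so
    their character distribution is far more uniform (higher entropy)
    than that of ordinary hostnames.
    """
    # Naively strip the last two labels as the registered domain.
    sub = ".".join(qname.lower().split(".")[:-2]).replace(".", "")
    counts = Counter(sub)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(label_entropy("www.example.com"))                      # low
print(label_entropy("a9f3kq0zx7encoded4data1.example.com"))  # high
```

A learned model generalizes far beyond this single statistic, but the contrast shows why payload-only detection is feasible before any traffic-level features exist.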
Zhang, Q, Hao, P, Lu, J & Zhang, G 2019, 'Cross-domain Recommendation with Semantic Correlation in Tagging Systems', 2019 International Joint Conference on Neural Networks (IJCNN), 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, Budapest, Hungary, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The tagging system provides users with a platform to express their preferences as they annotate terms or keywords to items. Tag information is a bridge between two domains for transferring knowledge and helping to alleviate the data sparsity problem, which is a crucial and challenging problem in most recommender systems. Existing methods incorporate correlations extracted from overlapping tags at a lexical level in cross-domain recommendation, but they neglect semantic relationships between different tags, which impairs prediction accuracy in the target domain. To solve this challenging problem, we propose a cross-domain recommendation method with semantic correlation in tagging systems. This method automatically captures the semantic relationships between non-identical tags and applies them to the recommendation. The word2vec technique is used to learn the latent representations of tags. Semantically equivalent tags are then grouped to form a joint embedding space comprised of tag clusters. This embedding space serves as the bridge between domains. By mapping users and items from both the source and target domains into the same embedding space, similar users or items across domains can be identified. Thus, the recommendation in a sparse target domain is improved by transferring knowledge through correlated users and items. Experimental results with three datasets on six cross-domain recommendation tasks demonstrate that the proposed method exploits the semantic links from tags in two domains and outperforms five benchmarks in prediction accuracy. The results indicate that transferring knowledge through tag semantics is feasible and effective.
Zhang, Q, Zhang, D, Lu, J, Zhang, G, Qu, W & Cohen, M 2019, 'A Recommender System for Cold-start Items: A Case Study in the Real Estate Industry', 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, pp. 1185-1192.
View/Download from: Publisher's site
View description>>
Recommender systems provide users with what they prefer and filter out unnecessary information. In a fierce marketing environment, it is crucial to recommend items to users at an early stage to keep users' interest and loyalty. With fast product renewal, classical recommendation methods such as collaborative filtering cannot handle the cold-start item problem. In many real-world applications, content information about items or users is available and can be used to assist recommendation. Besides, users may interact with items through different behaviors such as viewing, clicking or subscribing. How to use this complex content information and multiple user behaviors are real problems that are not well solved in applications. In this paper, we propose a content-based recommender system to deal with these practical problems. A boosting tree model is also added to the system to avoid potential spam. We applied the developed method in a real estate application to recommend properties that have just landed on the market to users. Experimental results with three data subsets and three recommendation scenarios demonstrate that the proposed method outperforms the baseline in recommendation accuracy. The results indicate that our method can effectively reduce potential spam to users, so that the user experience is improved.
Zhang, X, Yao, L, Wang, X, Zhang, W, Zhang, S & Liu, Y 2019, 'Know Your Mind: Adaptive Cognitive Activity Recognition with Reinforced CNN', 2019 IEEE International Conference on Data Mining (ICDM), 2019 IEEE International Conference on Data Mining (ICDM), IEEE, Beijing, China, pp. 896-905.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Electroencephalography (EEG) signals reflect and measure activities in certain brain areas. Their zero clinical risk and ease of use make them a good choice for providing insights into the cognitive process. However, effective analysis of time-varying EEG signals remains challenging. First, EEG signal processing and feature engineering are time-consuming and rely heavily on expert knowledge, and most existing studies focus on domain-specific classification algorithms, which may not apply to other domains. Second, EEG signals usually have low signal-to-noise ratios and are more chaotic than other sensor signals. In this regard, we propose a generic EEG-based cognitive activity recognition framework that can adaptively support a wide range of cognitive applications to address the above issues. The framework uses a reinforced selective attention model to choose the characteristic information among raw EEG signals automatically. It employs a convolutional mapping operation to dynamically transform the selected information into a feature space to uncover the implicit spatial dependency of EEG sample distribution. We demonstrate the effectiveness of the framework under three representative scenarios: intention recognition with motor imagery EEG, person identification, and neurological diagnosis, and further evaluate it on three widely used public datasets. The experimental results show our framework outperforms multiple state-of-the-art baselines and achieves competitive accuracy on all the datasets while achieving low latency and high resilience in handling complex EEG signals across various domains. The results confirm the suitability of the proposed generic approach for a range of problems in the realm of brain-computer interface applications.
Zhang, Y, Saberi, M, Wang, M & Chang, E 2019, 'K3S: Knowledge-driven solution support system', 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Thirty-Third AAAI Conference on Artificial Intelligence, USA, pp. 9873-9874.
View description>>
As the volume of scientific papers grows rapidly, knowledge management for scientific publications is greatly needed. Information extraction and knowledge fusion techniques have been proposed to obtain information from scholarly publications and build knowledge repositories. However, retrieving knowledge about problems and solutions from academic papers to support users in solving specific research problems is rarely seen in the state of the art. Therefore, to remedy this gap, a knowledge-driven solution support system (K3S) is proposed in this paper to extract information about research problems and proposed solutions from academic papers and integrate it into knowledge maps. With the bibliometric information of the papers, K3S is capable of providing recommended solutions for any extracted problem. The subject of intrusion detection is chosen for demonstration, in which the required information is extracted with high accuracy, a knowledge map is constructed properly, and solutions to address intrusion problems are recommended.
Zhang, Y, Zhao, X, Li, X, Zhong, M, Curtis, C & Chen, C 2019, 'Enabling Privacy-Preserving Sharing of Genomic Data for GWASs in Decentralized Networks', Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19: The Twelfth ACM International Conference on Web Search and Data Mining, ACM.
View/Download from: Publisher's site
Zhang, Y, Zhu, Y, Huang, L, Zhang, G & Lu, J 2019, 'Characterizing the potential of being emerging generic technologies: A Bi-Layer Network Analytics-based Prediction Method', 17th International Conference on Scientometrics and Informetrics, ISSI 2019 - Proceedings, International Conference on Scientometrics & Informetrics, Edizioni Efesto, Rome, Italy, pp. 1436-1447.
View description>>
Despite the tremendous involvement of bibliometrics in profiling technological landscapes and identifying emerging topics, how to predict potential technological change is still unclear. This paper proposes a bi-layer network analytics-based prediction method to characterize the potential of technologies to become emerging generic technologies. Initially, based on the innovation literature, three technological characteristics are defined and quantified by topological indicators in network analytics. A link prediction approach is then applied to reconstruct the network with weighted missing links; this reconstruction also changes the related technological characteristics. Comparing the two resulting ranking lists of terms helps identify potential emerging generic technologies. A case study on predicting emerging generic technologies in information science demonstrates the feasibility and reliability of the proposed method.
Zhao, M, Shu, Y, Liu, S & Xu, G 2019, 'Electricity Price Forecast using Meteorology data: A study in Australian Energy Market', 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC), 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC), IEEE, Beijing, China.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Electricity price is a fundamental cost for each family and an essential element of the electricity market. Adjustments to the electricity price reflect changes in the relationship between electricity supply and demand. For electricity supply companies, an appropriately defined electricity price ultimately determines the level of profit. On the other hand, an accurate prediction can help to seize opportunities in the electricity market. In this paper, we aim to predict the electricity price with greater accuracy by leveraging data mining techniques. Our experiment on 12 months of electricity prices, as well as climate data from New South Wales, achieved a promising prediction result.
Zhao, Y, Chen, J, Wu, D, Teng, J & Yu, S 2019, 'Multi-Task Network Anomaly Detection using Federated Learning', Proceedings of the Tenth International Symposium on Information and Communication Technology - SoICT 2019, the Tenth International Symposium, ACM Press, Hanoi Ha Long Bay, Vietnam, pp. 273-279.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Because of the complexity of network traffic, the network anomaly detection field faces various significant challenges. One of the major challenges is the lack of labeled training data. In this paper, we use federated learning to tackle the data scarcity problem and to preserve data privacy, with multiple participants collaboratively training a global model. Unlike the centralized training architecture, in federated learning participants do not need to share their training data with the server, which prevents the training data from being exploited by attackers. Moreover, most previous works focus on one specific anomaly detection task, which restricts the application areas and cannot provide more valuable information to network administrators. Therefore, we propose a multi-task deep neural network in federated learning (MT-DNN-FL) to perform the network anomaly detection task, the VPN (Tor) traffic recognition task, and the traffic classification task simultaneously. Compared with multiple single-task models, the multi-task method reduces training time overhead. Experiments conducted on the well-known CICIDS2017, ISCXVPN2016, and ISCXTor2016 datasets show that the detection and classification performance achieved by the proposed method is better than that of the baseline methods in a centralized training architecture.
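The server-side aggregation step that federated learning relies on can be sketched as a FedAvg-style weighted average. This is a generic sketch under our own naming, not the exact MT-DNN-FL aggregation rule:

```python
def federated_average(client_weights, client_sizes):
    """Server-side weighted average of per-client parameter vectors.

    Participants upload only parameters, never raw data; weighting each
    client by its local dataset size is the usual FedAvg rule.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients, the second with three times as much local data.
print(federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3]))
```

Each round the server redistributes the averaged parameters, so the global model improves without any client's traffic captures leaving its premises.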
Zheng, C, Cai, Y, Xu, J, Leung, H-F & Xu, G 2019, 'A Boundary-aware Neural Model for Nested Named Entity Recognition', Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, pp. 357-366.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computational Linguistics. In natural language processing, it is common for entities to contain other entities inside them. Most existing works on named entity recognition (NER) deal only with flat entities and ignore nested ones. We propose a boundary-aware neural model for nested NER that leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which decreases computation cost and relieves the error propagation problem of layered sequence labeling models. We introduce multitask learning to capture the dependencies between entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct experiments on nested NER datasets, and the experimental results demonstrate that our model outperforms other state-of-the-art methods.
Zhi, Y, Yang, L, Yu, S & Ma, J 2019, 'BQSV: Protecting SDN Controller Cluster’s Network Topology View Based on Byzantine Quorum System with Verification Function', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Symposium on Cyberspace Safety and Security, Springer International Publishing, Guangzhou, China, pp. 73-88.
View/Download from: Publisher's site
View description>>
© 2019, Springer Nature Switzerland AG. In a software-defined network (SDN), SDN applications and administrators rely on the logically centralized view of the network topology to make management decisions. The correctness of the SDN controller cluster's network topology view is therefore critical. However, the lack of a security mechanism in the SDN controller cluster leaves the network topology view vulnerable to tampering. In this paper, we argue that malicious controllers in a cluster can easily damage the cluster's network view through the east-west bound interfaces. We present a scheme based on a Byzantine Quorum System with a verification function (BQSV) to prevent malicious controllers from manipulating the cluster's network view through the east-west bound interface and from providing wrong topology information to SDN applications and administrators. Moreover, we implement a prototype of our scheme and conduct extensive experiments to show that the proposed scheme can prevent malicious controllers from damaging the topology information of the cluster with trivial overheads.
Zhou, Z, Liu, S, Xu, G & Zhang, W 2019, 'On Completing Sparse Knowledge Base with Transitive Relation Embedding', Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), Honolulu, Hawaii USA, pp. 3125-3132.
View/Download from: Publisher's site
View description>>
Multi-relation embedding is a popular approach to knowledge base completion that learns embedding representations of entities and relations to compute the plausibility of missing triplets. The effectiveness of the embedding approach depends on the sparsity of the knowledge base (KB) and degrades for infrequent entities that appear only a few times. This paper addresses this issue by proposing a new model that exploits entity-independent transitive relation patterns, namely Transitive Relation Embedding (TRE). The TRE model alleviates the sparsity problem when predicting on infrequent entities while enjoying the generalisation power of embedding. Experiments on three public datasets against seven baselines show the merits of TRE in terms of knowledge base completion accuracy as well as computational complexity.
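The entity-independent patterns TRE relies on can be illustrated with a simple count-based sketch: if (h, r1, m) and (m, r2, t) hold, the relation pair (r1, r2) may imply a relation r3 between h and t regardless of which entities are involved. The sketch below only counts such co-occurrences; the actual TRE model learns embeddings over these patterns, and all identifiers here are illustrative assumptions.

```python
# Count how often a two-hop relation path (r1, r2) co-occurs with a direct
# relation r3 between the same endpoints -- the entity-independent transitive
# pattern that TRE generalises with embeddings.
from collections import defaultdict

def transitive_pattern_scores(triples):
    """Map (r1, r2) -> {r3: support count} from two-hop paths in the KB."""
    out = defaultdict(list)    # head -> [(relation, tail)]
    direct = defaultdict(set)  # (head, tail) -> {relations}
    for h, r, t in triples:
        out[h].append((r, t))
        direct[(h, t)].add(r)
    counts = defaultdict(lambda: defaultdict(int))
    for h, edges in out.items():
        for r1, m in edges:
            for r2, t in out.get(m, []):
                for r3 in direct.get((h, t), ()):
                    counts[(r1, r2)][r3] += 1
    return counts

kb = [("a", "born_in", "b"), ("b", "city_of", "c"), ("a", "nationality", "c")]
scores = transitive_pattern_scores(kb)
# The path (born_in, city_of) supports the direct relation nationality,
# even for an entity "a" seen only once -- which is why such patterns help
# with infrequent entities.
```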
Zhu, T & Yu, PS 2019, 'Applying Differential Privacy Mechanism in Artificial Intelligence', 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), IEEE, Dallas, TX, USA, pp. 1601-1609.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, several new problems have been emerging, such as privacy violations, security issues, and questions of effectiveness. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver the initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities to improve AI's performance.
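The basic primitive behind the mechanisms this paper surveys is the Laplace mechanism: add noise scaled to sensitivity/epsilon to a numeric query answer. The sketch below is a textbook illustration, not the paper's construction; the query, sensitivity, and epsilon values are assumptions for the example.

```python
# Laplace mechanism: an epsilon-differentially-private answer to a numeric
# query is the true answer plus Laplace(0, sensitivity / epsilon) noise.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return an epsilon-DP answer to a query with the given L1 sensitivity."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in (-0.5, 0.5)
    # Inverse-transform sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Counting query over a dataset: one record changes the count by at most 1,
# so sensitivity is 1; a smaller epsilon means more noise and more privacy.
rng = random.Random(42)
noisy_count = laplace_mechanism(true_value=100.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The composition and stability properties mentioned in the abstract then follow from how such noisy answers combine across repeated queries or training steps.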
Zuo, H, Zhang, G, Pedrycz, W & Lu, J 2019, 'Domain Selection of Transfer Learning in Fuzzy Prediction Models', 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. Transfer learning has emerged as a solution for cases where little or no labeled data are available in the training process. It leverages previously acquired knowledge (a source domain with a large amount of labeled data) to facilitate solving the current tasks (a target domain with little labeled data). Many transfer learning methods have been proposed; in particular, fuzzy transfer learning, which is based on fuzzy systems, has been developed because of its capability to deal with the uncertainty in transfer learning. However, one issue with fuzzy transfer learning has not yet been resolved: the domain selection problem, which depends heavily on the knowledge transfer method and the applied prediction model. In this work, we explore the domain selection problem in the Takagi-Sugeno fuzzy model when multiple source domains are accessible, and define a similarity between the source and target domains to provide guidance for domain selection. Experiments on synthetic datasets are designed to simulate situations with multiple sources in transfer learning and demonstrate the rationality of the proposed similarity in selecting the source domain for the target domain. Further, real-world datasets are used to validate the proposed domain adaptation method and verify its capability in solving practical situations.
Zürn, X, Broekhuijsen, M, van Gennip, D, Bakker, S, Zijlema, A & van den Hoven, E 2019, 'Stimulating Photo Curation on Smartphones', Proceedings of the 2019 Conference on Human Information Interaction and Retrieval, CHIIR '19: Conference on Human Information Interaction and Retrieval, ACM, Glasgow, Scotland, pp. 255-259.
View/Download from: Publisher's site
View description>>
© 2019 Association for Computing Machinery. Personal photo collections have grown with digital photography and the introduction of smartphones, and they have become harder to manage. Deleting photos appears to be difficult, and the task of curation is often perceived as not enjoyable. The lack of curation can make it harder to retrieve photos when people need them for various reasons, such as individual reminiscing, shared remembering, or self-presentation. In this study we investigate how we can stimulate people to organise their photo collections on their smartphones. Ten participants evaluated and qualitatively compared four applications with different characteristics regarding voting on and deleting photos. We found that voting on photos is easier and more enjoyable than deleting photos, that participants reminisced while organising, that deleting can be frustrating, that participants have different preferences for sorting and viewing photos, and that voting could make deleting and retrieving easier.