Abdo, P, Huynh, BP, Braytee, A & Taghipour, R 2020, 'An experimental investigation of the thermal effect due to discharging of phase change material in a room fitted with a windcatcher', Sustainable Cities and Society, vol. 61, pp. 102277-102277.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd. This paper investigates experimentally the effect of the Phase Change Material (PCM) discharging process as a passive cooling technique on the performance of a two-sided windcatcher fitted on an acrylic chamber with dimensions 1250 × 1000 × 750 mm. Four models with different locations of PCM are studied, and the results are compared with each other and with a fifth model with no PCM. PCM is integrated respectively at the walls of the chamber, at its floor and ceiling, and within the windcatcher's inlet channel. Humidity, temperature and air velocity are monitored for each of the models studied. In all the models containing PCM, the average humidity inside the chamber changed only slightly compared to the model with no PCM; the difference ranged between 0 and 3.88 %, indicating that the humidity variations are not significant. The model with PCM located on the floor, ceiling and walls as well as in the windcatcher's inlet channel showed the best performance, with a minimum reduction of the average chamber temperature of about 2.75 °C (approximately 9.33 %) compared with the model with no PCM.
Abdulkareem, SA, Augustijn, E-W, Filatova, T, Musial, K & Mustafa, YT 2020, 'Risk perception and behavioral change during epidemics: Comparing models of individual and collective learning', PLOS ONE, vol. 15, no. 1, pp. e0226483-e0226483.
View/Download from: Publisher's site
View description>>
Modern societies are exposed to a myriad of risks ranging from disease to natural hazards and technological disruptions. Exploring how the awareness of risk spreads and how it triggers a diffusion of coping strategies is prominent in the research agenda of various domains. It requires a deep understanding of how individuals perceive risks and communicate about the effectiveness of protective measures, highlighting learning and social interaction as the core mechanisms driving such processes. Methodological approaches that range from purely physics-based diffusion models to data-driven environmental methods rely on agent-based modeling to accommodate context-dependent learning and social interactions in a diffusion process. Mixing agent-based modeling with data-driven machine learning has gained popularity. However, little attention has been paid to the role of intelligent learning in risk appraisal and protective decisions, whether used in an individual or a collective process. The differences between collective learning and individual learning have not been sufficiently explored in diffusion modeling in general and in agent-based models of socio-environmental systems in particular. To address this research gap, we explored the implications of intelligent learning on the gradient from individual to collective learning, using an agent-based model enhanced by machine learning. Our simulation experiments showed that, in intelligent judgement about risks and the selection of coping strategies, groups deciding by majority vote were outperformed by leader-based groups and even by individuals deciding alone. Social interactions appeared essential for both individual learning and group learning. The choice of how to represent social learning in an agent-based model could be driven by existing cultural and social norms prevalent in a modeled society.
Abedin, B, Milne, D & Erfani, E 2020, 'Attraction, selection, and attrition in online health communities: Initial conversations and their association with subsequent activity levels', International Journal of Medical Informatics, vol. 141, pp. 104216-104216.
View/Download from: Publisher's site
View description>>
BACKGROUND:The effectiveness of online health communities (OHCs) for improving outcomes for health care consumers, health professionals, and health services has already been well investigated. However, research on determinants of OHC users' activity levels, what is associated with attrition or attraction to these communities, and the impacts of initial posts is limited. OBJECTIVES:We sought to explore topic exchanges in OHCs and determine how users' initial posts and community reactions to them are associated with their subsequent activity levels. We also aimed to extend the theory of Attraction-Selection-Attrition for Online Communities (OCASA) to this area. METHODS:We examined exchanges in a major Australian OHC for cancer patients, analyzing about 2500 messages posted over 2009-18. We developed a novel annotation scheme to examine new members' initial posts and the community's reactions to them. RESULTS:The annotation scheme includes five themes: informational support provision, emotional support provision, requests for help, self-reflection & disclosures, and conversational cues. Initial conversations were associated with future activity levels in terms of active posting versus non-active engagement in the community. We found that most OHC members disclosed personal reflections to bond with the community, and many actively posted to the community solely to provide informational and emotional support to others. CONCLUSION:Our work extends OCASA theory to bond-based contexts, presents a new annotation scheme for OHC support topics, and makes an important contribution to knowledge about the relationship between users' activity levels and their initial posts. The findings help managers and owners understand how members use OHCs and how to encourage active participation. They also suggest how to attract new members and minimize attrition among existing members.
Abu ul Fazal, M, Ferguson, S & Johnston, A 2020, 'Investigating efficient speech-based information communication: a comparison between the high-rate and the concurrent playback designs', Multimedia Systems, vol. 26, no. 5, pp. 621-630.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag GmbH Germany, part of Springer Nature. This research aims to help users seek information efficiently while interacting with speech-based information, particularly in multimedia delivery, and reports on an experiment that tested two designs for communicating multiple speech-based information streams efficiently: a high-rate playback design and a concurrent playback design. In the high-rate playback design, two speech-based information streams were communicated at double the normal playback rate; in the concurrent playback design, the two streams were played concurrently. Comprehension of content in both designs was also compared with the benchmark set by a regular baseline condition. The results showed that users' comprehension of the main information dropped significantly in the high-rate playback and concurrent playback designs compared to the baseline condition. However, for questions drawn from the detailed information, comprehension was not significantly different across the three designs. Such efficient communication methods may increase productivity by providing information efficiently while interacting with an interactive multimedia system.
Akbari, F, Saberi, M & Hussain, OK 2020, 'Social network structure-based framework for innovation evaluation and propagation for new product development', Service Oriented Computing and Applications, vol. 14, no. 3, pp. 189-201.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag London Ltd., part of Springer Nature. Evaluating the innovation of a new idea before its implementation is a complicated but important task, as it plays a critical role in the success of a product. The literature widely uses sentiment analysis as a technique for product designers to ascertain users' opinions toward an idea before its implementation. However, that technique focuses only on determining the opinions of the users studied; it does not provide designers with insights into what needs to be done to propagate the popularity of the idea further to ensure its success. One way this can be done is by considering a social network structure and representing users as nodes of that network. In this paper, we investigate how a social network structure can be used to influence users' opinions within society. Our proposed framework consists of four main components, namely data collection, sentiment extraction, budget approximation and presentation. After gathering customers' comments in the data collection phase, the opinions of users who have expressed them are analyzed in the sentiment analysis phase. The budget approximation component then determines the cost of spreading positive opinion among the network of users, including those who have not expressed one. For that, influence maximization is used to compare the cost of converging the general opinion of society in the direction of the innovation. In the presentation component, the comparative information is used by product designers to help them determine the viability of selecting an idea for implementation. The simulation results show that the network structure and the individuals' positions are important factors in the acceptance of an innovation by society. This framework can be used to compare different innovative ideas and provide decision makers in organizations with informative reports as decision support materials.
Alfaro-García, VG, Merigó, JM, Alfaro Calderón, GG, Plata-Pérez, L, Gil-Lafuente, AM & Herrera-Viedma, E 2020, 'A citation analysis of fuzzy research by universities and countries', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5355-5367.
View/Download from: Publisher's site
Alfaro-García, VG, Merigó, JM, Pedrycz, W & Gómez Monge, R 2020, 'Citation Analysis of Fuzzy Set Theory Journals: Bibliometric Insights About Authors and Research Areas', International Journal of Fuzzy Systems, vol. 22, no. 8, pp. 2414-2448.
View/Download from: Publisher's site
Al-Hadhrami, Y & Hussain, FK 2020, 'Real time dataset generation framework for intrusion detection systems in IoT', Future Generation Computer Systems, vol. 108, pp. 414-423.
View/Download from: Publisher's site
View description>>
© 2020. The Internet of Things (IoT) has evolved in the last few years to become one of the hottest topics in computer science research. This drastic increase in IoT applications across different disciplines, such as health care and smart industries, comes with considerable security risk. This is not limited to attacks on privacy; it can also extend to attacks on network availability and performance. Therefore, an intrusion detection system (IDS) is essential to act as the first line of defense for the network. IDSs and their algorithms depend heavily on the quality of the dataset provided. Sadly, there has been a lack of work in evaluating and collecting intrusion detection datasets designed specifically for an IoT ecosystem. Most published studies focus on outdated and non-compatible datasets such as the KDD98 dataset. Therefore, in this paper, we investigate the existing datasets and their applications for IoT environments. We then introduce a real-time data collection framework for building a dataset for intrusion detection system evaluation and testing. The main advantage of the proposed dataset is that it contains features explicitly designed for the 6LoWPAN/RPL network, the most widely used protocols in the IoT environment.
Almasoud, AS, Hussain, FK & Hussain, OK 2020, 'Smart contracts for blockchain-based reputation systems: A systematic literature review', Journal of Network and Computer Applications, vol. 170, pp. 102814-102814.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd. Reputation systems offer a medium where users can quantify the trustworthiness or reliability of individuals providing online services or products. In the past, researchers have used blockchain technology for reputation systems. Smart contracts are computer protocols whose primary objective is to supervise, implement, or validate the performance or negotiation of contracts. Through a systematic literature review, we find that the existing literature has not proposed a framework that facilitates the interchangeable use of smart contracts for blockchain-based reputation systems. We conducted a systematic literature review of 30 relevant studies and extracted data from them before identifying the research gaps. As a solution to these gaps, we propose FarMed, an intelligent framework that executes Ethereum smart contract-based reputation systems and develops reliable blockchain-based protocols for transferring reputation values from one provider to another. We briefly explain our proposed framework before concluding with our future work.
Altulyan, M, Yao, L, Kanhere, SS, Wang, X & Huang, C 2020, 'A unified framework for data integrity protection in people-centric smart cities', Multimedia Tools and Applications, vol. 79, no. 7-8, pp. 4989-5002.
View/Download from: Publisher's site
View description>>
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. With the rapid increase in urbanisation, the concept of smart cities has attracted considerable attention. By leveraging emerging technologies such as the Internet of Things (IoT), artificial intelligence and cloud computing, smart cities have the potential to improve various indicators of residents' quality of life. However, threats to data integrity may affect the delivery of such benefits, especially in the IoT environment, where most devices are inherently dynamic and have limited resources. Prior work has focused on ensuring integrity of data in a piecemeal manner, covering only some parts of the smart city ecosystem. In this paper, we address integrity of data from an end-to-end perspective, i.e., from the data source to the data consumer. We propose a holistic framework for ensuring integrity of data in smart cities that covers the entire data lifecycle. Our framework is founded on three fundamental concepts, namely, secret sharing, fog computing and blockchain. We provide a detailed description of the various components of the framework and also utilize smart healthcare as a use case.
Alzoubi, YI & Gill, AQ 2020, 'An Empirical Investigation of Geographically Distributed Agile Development: The Agile Enterprise Architecture is a Communication Enabler.', IEEE Access, vol. 8, pp. 80269-80289.
View/Download from: Publisher's site
Amirbagheri, K, Merigó, JM, Guitart-Tarrés, L & Nuñez-Carballosa, A 2020, 'OWA operators in the calculation of the average green-house gases emissions', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5427-5439.
View/Download from: Publisher's site
Ang, L, Hellmann, A, Kanbaty, M & Sood, S 2020, 'Emotional and attentional influences of photographs on impression management and financial decision making', Journal of Behavioral and Experimental Finance, vol. 27, pp. 100348-100348.
View/Download from: Publisher's site
View description>>
The use of photographs has become a key feature of corporate reporting in recent decades. As a form of impression management, photographs may be designed to influence investors' judgments. Given the paucity of research on the use of photographs in corporate reports, this short communication discusses two important photographic features that can frame judgments: the ability to attract attention and to convey emotions with simplicity. Usually, the non-creative content of a photograph, such as its size, is responsible for capturing attention, while its creative content influences cognition and judgments through the elicitation of specific emotional responses. The use of photographs is now the norm, and this letter will help open new avenues of behavioral research and methodologies.
Arodudu, O, Holmatov, B & Voinov, A 2020, 'Ecological impacts and limits of biomass use: a critical review', Clean Technologies and Environmental Policy, vol. 22, no. 8, pp. 1591-1611.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag GmbH Germany, part of Springer Nature. Conventional biomass sources have been widely exploited for several end uses (mostly food, feed, fuel and chemicals). More unconventional sources are continually being sought to meet the growing planetary demands for biomass materials. Biofuels are already commercially produced in many countries and are becoming mainstream. The role of biorefineries in the production of chemicals is also on the rise. Plant biomass is the primary source of food for all multicellular living organisms, and primary production remains a key link in the chain of life support on planet Earth. Is there enough for all? What new strategies or technologies are available or promising for providing plant biomass in a safe and sustainable way? What are the potential impacts (footprints and efficiencies) of such strategies? What can be the limiting factors: land, water, energy or nutrients? What might be the limits for specific regions (OECD vs. non-OECD, advanced vs. developing, dry and warm vs. wet and cool, etc.)? In this paper, we provide answers to these questions by critically reviewing the pros and cons associated with current and future production and use pathways for biomass. We conclude that in many cases the jury is still out, and we cannot come to a solid verdict about the future of biomass production and use.
Asadabadi, MR, Chang, E, Zwikael, O, Saberi, M & Sharpe, K 2020, 'Hidden fuzzy information: Requirement specification and measurement of project provider performance using the best worst method', Fuzzy Sets and Systems, vol. 383, pp. 127-145.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. The requirement specification process is an important part of a project and has the potential to prevent problems that may last for years after a project is delivered. Previous studies on the requirement specification process have focused on clarifying stated fuzzy terms in software requirement engineering. However, in many projects there is information that is not stated, but it is implied and can be inferred. This hidden information is usually ignored due to the assumption that ‘the provider understands what they mean/need’. This assumption is not always true. Such information, if extracted, may include fuzzy terms, namely hidden fuzzy terms (HFTs), which need specification. Therefore, these fuzzy terms have to be identified and then specified to avoid potential future consequences. This study proposes an algorithm to extract the hidden fuzzy terms, utilises a fuzzy inference system (FIS) to specify them, and applies the best worst multi-criteria decision making method (BWM) to evaluate the delivered product and measure the performance of the provider. The model is then used to examine a case from Defence Housing Australia. Such evaluation and measurement enable the project owner/manager to have a transparent basis to support decisions later in different phases of the project, and to ultimately reduce the likelihood of conflict and the receipt of an unsatisfactory product.
Asadabadi, MR, Saberi, M, Zwikael, O & Chang, E 2020, 'Ambiguous requirements: A semi-automated approach to identify and clarify ambiguity in large-scale projects', Computers & Industrial Engineering, vol. 149, pp. 106828-106828.
View/Download from: Publisher's site
Atif, A, Richards, D, Liu, D & Bilgin, AA 2020, 'Perceived benefits and barriers of a prototype early alert system to detect engagement and support ‘at-risk’ students: The teacher perspective', Computers & Education, vol. 156, pp. 103954-103954.
View/Download from: Publisher's site
Atov, I, Chen, K-C, Kamal, A & Yu, S 2020, 'Data Science and Artificial Intelligence for Communications', IEEE Communications Magazine, vol. 58, no. 1, pp. 10-11.
View/Download from: Publisher's site
Azadi, M, Izadikhah, M, Ramezani, F & Hussain, FK 2020, 'A mixed ideal and anti-ideal DEA model: an application to evaluate cloud service providers', IMA Journal of Management Mathematics, vol. 31, no. 2, pp. 233-256.
View/Download from: Publisher's site
View description>>
Abstract The rapid development of cloud computing and the sharp increase in the number of cloud service providers (CSPs) have resulted in many challenges in the suitability and selection of the best CSPs according to quality of service requirements. The main objective of this study is to propose three novel models based on the enhanced Russell model to increase the discrimination power in the evaluation and selection of CSPs. The proposed models are designed based on the distances to two special decision-making units (DMUs), namely the ideal DMU and the anti-ideal DMU. There are two advantages to the proposed ranking methods. First, they consider both pessimistic and optimistic scenarios of data envelopment analysis, so they are more equitable than methods that are based on only one of these scenarios. The second strength of this approach is its discrimination power, enabling it to provide a complete ranking for all CSPs. The proposed method can help customers to choose the most appropriate CSP while at the same time, it helps software developers to identify inefficient CSPs in order to improve their performance in the marketplace.
Bakhanova, E, Garcia, JA, Raffe, WL & Voinov, A 2020, 'Targeting social learning and engagement: What serious games and gamification can offer to participatory modeling', Environmental Modelling & Software, vol. 134, pp. 104846-104846.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Serious games and gamification are useful tools for learning and sustaining long-term engagement in the activities that are not meant to be entertaining. However, the application of game design in the participatory modeling context remains fragmented and mostly limited to user-friendly interfaces, storytelling, and visualization for better representation of the simulation models. This paper suggests possible extensions of game design use for each stage of the participatory modeling process, aiming at better learning, communication among stakeholders, and overall engagement. The proposed extensions are based on the effects that different types of game-like applications bring to the aspects of social learning and the contribution of gamification to engagement, motivation, and enjoyment of some activities. We conclude that serious games and gamification have a high potential for improving the quality of the participatory modeling process, while also highlighting additional research that is needed for designing particular practical gamified applications in this context.
Bashir, MR, Gill, AQ, Beydoun, G & Mccusker, B 2020, 'Big Data Management and Analytics Metamodel for IoT-Enabled Smart Buildings.', IEEE Access, vol. 8, pp. 169740-169758.
View/Download from: Publisher's site
View description>>
Big data management and analytics, in the context of IoT (Internet of Things)-enabled smart buildings, is a challenging task. It is a diffuse and complex area of knowledge due to the diversity of IoT devices and the nature of the data they generate. Many international bodies have developed metamodels for IoT-enabled ecosystems to allow knowledge sharing. However, these are often narrow in focus and deal only with the IoT aspects, without taking into account the management and analytics of the big data generated by IoT devices. Hence, in this article we propose a metamodel for the Integrated Big Data Management and Analytics (IBDMA) framework for IoT-enabled smart buildings. The IBDMA Metamodel can be used to facilitate interoperability between existing big data management and analytics ecosystems deployed in smart buildings or other smart environments. We import the metamodel into a knowledge-graph management tool and validate it against a case study using this tool. The evaluation results demonstrate that the IBDMA Metamodel is indeed suitable for its intended purpose.
Bednarik, R, Busjahn, T, Gibaldi, A, Ahadi, A, Bielikova, M, Crosby, M, Essig, K, Fagerholm, F, Jbara, A, Lister, R, Orlov, P, Paterson, J, Sharif, B, Sirkiä, T, Stelovsky, J, Tvarozek, J, Vrzakova, H & van der Linde, I 2020, 'EMIP: The eye movements in programming dataset', Science of Computer Programming, vol. 198, pp. 102520-102520.
View/Download from: Publisher's site
View description>>
© 2020. A large dataset containing the eye movements of N=216 programmers of different experience levels, captured during two code comprehension tasks, is presented. Data are grouped in terms of programming expertise (from none to high) and other demographic descriptors. Data were collected through an international collaborative effort that involved eleven research teams across eight countries on four continents. The same eye-tracking apparatus and software were used for the data collection. The Eye Movements in Programming (EMIP) dataset is freely available for download. The varied metadata in the EMIP dataset provides fertile ground for the analysis of gaze behavior and may be used to gain novel insights into code comprehension.
Beydoun, G, Hoffmann, A, Garcia, RV, Shen, J & Gill, A 2020, 'Towards an assessment framework of reuse: a knowledge-level analysis approach', Complex & Intelligent Systems, vol. 6, no. 1, pp. 87-95.
View/Download from: Publisher's site
Bharill, N, Tiwari, A, Malviya, A, Patel, OP, Gupta, A, Puthal, D, Saxena, A & Prasad, M 2020, 'Fuzzy knowledge based performance analysis on big data', Neurocomputing, vol. 389, pp. 218-228.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Due to various emerging technologies, an enormous amount of data, termed Big Data, is collected every day and can be of great use in various domains. Clustering algorithms that store the entire dataset in memory for analysis become unfeasible when the dataset is too large. Many clustering algorithms in the literature deal with the analysis of huge amounts of data. This paper discusses a new clustering approach called the Incremental Random Sampling with Iterative Optimization Fuzzy c-Means (IRSIO-FCM) algorithm. It is implemented on Apache Spark, a framework for Big Data processing. Spark works well for iterative algorithms by supporting in-memory computation, scalability, etc. IRSIO-FCM not only facilitates effective clustering of Big Data but also optimizes storage space during clustering. To establish a fair comparison for IRSIO-FCM, we propose an incremental version of the Literal Fuzzy c-Means (LFCM), called ILFCM, implemented in the Apache Spark framework. The experimental results are analyzed in terms of time and space complexity, NMI, ARI, speedup, sizeup, and scaleup measures. The reported results show that IRSIO-FCM achieves a significant reduction in run time compared with ILFCM.
Binh, NTM, Binh, HTT, Van Linh, N & Yu, S 2020, 'Efficient meta-heuristic approaches in solving minimal exposure path problem for heterogeneous wireless multimedia sensor networks in internet of things', Applied Intelligence, vol. 50, no. 6, pp. 1889-1907.
View/Download from: Publisher's site
View description>>
© 2020, Springer Science+Business Media, LLC, part of Springer Nature. Heterogeneous wireless multimedia sensor networks (HWMSNs) in the Internet of Things have drawn the attention of the research community because this type of network offers great advantages in both coverage and performance. One of the most fundamental issues in HWMSNs is the barrier coverage problem, which evaluates the surveillance capability of network systems, especially those designed for security purposes. Among the multiple approaches to this issue, finding the minimal exposure path (MEP), which corresponds to the worst-case coverage of the network, is the most popular and efficient. However, the MEP problem in HWMSNs (hereinafter the heterogeneous multimedia MEP, or HM-MEP) is especially complex and challenging given the unique features of HWMSNs. The problem is converted into a numerical functional-extremum problem that is high-dimensional, non-differentiable and non-linear. To address these features, two efficient meta-heuristic algorithms, the Hybrid Evolutionary Algorithm (HEA) and Gravitation Particle Swarm Optimization (GPSO), are proposed for solving the problem. The HEA is a hybrid evolutionary algorithm combined with local search, while GPSO is a novel particle swarm optimization based on gravity force theory. Experimental results on extensive instances indicate that the proposed algorithms are suitable for the HM-MEP problem and perform well in terms of both solution accuracy and computation time compared to existing approaches.
Blanco-Mesa, F & Merigó, JM 2020, 'Bonferroni Distances and Their Application in Group Decision Making', Cybernetics and Systems, vol. 51, no. 1, pp. 27-58.
View/Download from: Publisher's site
View description>>
© 2019 Taylor & Francis Group, LLC. The aim of the paper is to develop new aggregation operators using Bonferroni means, ordered weighted averaging (OWA) operators and some distance measures. We introduce the Bonferroni-Hamming weighted distance (BON-HWD), the Bonferroni OWA distance (BON-OWAD), the Bonferroni OWA adequacy coefficient (BON-OWAAC) and Bonferroni distances with OWA operators and weighted averages (BON-IWOWAD). The main advantage of these operators is that they allow different aggregation contexts and multiple comparisons between each argument and distance measure to be considered in the same formulation. An application is developed using these new algorithms in combination with the Pichat algorithm to solve a group decision-making problem, taking creative personality as an example for forming creative groups. The results show fuzzy dissimilarity relations that establish the maximum similarity subrelations and find groups according to each individual's creative-personality similarities.
Blanco-Mesa, F, León-Castro, E & Merigó, JM 2020, 'Covariances with OWA operators and Bonferroni means', Soft Computing, vol. 24, no. 19, pp. 14999-15014.
View/Download from: Publisher's site
Bommes, D, Pietroni, N & Hu, R 2020, 'Foreword to the Special Section on Shape Modeling International 2020.', Comput. Graph., vol. 90, pp. 4-4.
View/Download from: Publisher's site
Brodka, P, Musial, K & Jankowski, J 2020, 'Interacting Spreading Processes in Multilayer Networks: A Systematic Review', IEEE Access, vol. 8, pp. 10316-10341.
View/Download from: Publisher's site
Cao, Z, Ding, W, Wang, Y-K, Hussain, FK, Al-Jumaily, A & Lin, C-T 2020, 'Effects of repetitive SSVEPs on EEG complexity using multiscale inherent fuzzy entropy', Neurocomputing, vol. 389, pp. 198-206.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Multiscale inherent fuzzy entropy is an objective measurement of electroencephalography (EEG) complexity, reflecting the habituation of brain systems. Entropy dynamics are generally believed to reflect the ability of the brain to adapt to a visual stimulus environment. In this study, we explored repetitive steady-state visual evoked potential (SSVEP)-based EEG complexity by assessing multiscale inherent fuzzy entropy with relative measurements. We used a wearable EEG device with Oz and Fpz electrodes to collect EEG signals from 40 participants under the following three conditions: a resting state (closed eyes (CE) and open eyes (OE)), stimulation with five 15-Hz CE SSVEPs, and stimulation with five 20-Hz OE SSVEPs. We noted monotonic enhancement of occipital EEG relative complexity with increasing stimulus times in both the CE and OE conditions. The occipital EEG relative complexity was significantly higher for the fifth SSVEP than for the first SSVEP (FDR-adjusted p < 0.05). Similarly, the prefrontal EEG relative complexity tended to be significantly higher in the OE condition than in the CE condition (FDR-adjusted p < 0.05). The results also indicate that multiscale inherent fuzzy entropy is superior to other competing multiscale-based entropy methods. In conclusion, EEG relative complexity increases with stimulus times, a finding that reflects the strong habituation of brain systems. These results suggest that multiscale inherent fuzzy entropy is an EEG measure with which brain complexity can be assessed using repetitive SSVEP stimuli.
Cao, Z, Lin, C-T, Lai, K-L, Ko, L-W, King, J-T, Liao, K-K, Fuh, J-L & Wang, S-J 2020, 'Extraction of SSVEPs-Based Inherent Fuzzy Entropy Using a Wearable Headband EEG in Migraine Patients', IEEE Transactions on Fuzzy Systems, vol. 28, no. 1, pp. 14-27.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Inherent fuzzy entropy is an objective measurement of electroencephalography (EEG) complexity reflecting the robustness of brain systems. In this study, we present a novel application of multiscale relative inherent fuzzy entropy using repetitive steady-state visual evoked potentials (SSVEPs) to investigate EEG complexity change between two migraine phases, i.e., interictal (baseline) and preictal (before migraine attacks) phases. We used a wearable headband EEG device with O1, Oz, O2, and Fpz electrodes to collect EEG signals from 80 participants [40 migraine patients and 40 healthy controls (HCs)] under the following two conditions: during the resting state and during SSVEPs with five 15-Hz photic stimuli. We found a significant enhancement in occipital EEG entropy with increasing stimulus times in both HCs and patients in the interictal phase, but a reverse trend in patients in the preictal phase. In the 1st SSVEP, occipital EEG entropy of the HCs was significantly lower than that of patients in the preictal phase (FDR-adjusted p < 0.05). Regarding the transitional variance of EEG entropy between the 1st and 5th SSVEPs, patients in the preictal phase exhibited significantly lower values than patients in the interictal phase (FDR-adjusted p < 0.05). Furthermore, in the classification model, the AdaBoost ensemble learning showed an accuracy of 81 ± 6% and an area under the curve of 0.87 for classifying interictal and preictal phases. In contrast, there were no differences in EEG entropy among groups or sessions by using other competing entropy models, including approximate entropy, sample entropy, and fuzzy entropy on the same dataset. In conclusion, inherent fuzzy entropy offers novel applications in visual stimulus environments and may have the potential to provide a preictal alert to migraine patients.
Cao, Z, Xu, P, Zhang, Z, Wang, G, Taulu, S & Beltrachini, L 2020, 'IEEE Access Special Section Editorial: Neural Engineering Informatics', IEEE Access, vol. 8, pp. 201696-201699.
View/Download from: Publisher's site
Casanovas, M, Torres-Martínez, A & Merigó, JM 2020, 'Multi-person and multi-criteria decision making with the induced probabilistic ordered weighted average distance', Soft Computing, vol. 24, no. 2, pp. 1435-1446.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag GmbH Germany, part of Springer Nature. This paper presents a new approach for selecting suppliers of products or services, specifically with respect to complex decisions that require evaluating different business characteristics to ensure their suitability and to meet the conditions defined in the recruitment process. To address this type of problem, this study presents the multi-person multi-criteria induced ordered weighted average distance (MP-MC-IOWAD) operator, which is an extension of the OWA operators that includes the notion of distances to multiple criteria and expert valuations. Thus, this work introduces new distance measures that can aggregate the information with probabilistic information and consider the attitudinal character of the decision maker. Further extensions are developed using probabilities to form the induced probabilistic ordered weighted average distance (IPOWAD) operator. An example in the management of insurance policies is presented, where the selection of insurance companies is very complex and requires the consideration of subjective criteria by experts in decision making.
Cetindamar, D, Lammers, T & Zhang, Y 2020, 'Exploring the knowledge spillovers of a technology in an entrepreneurial ecosystem—The case of artificial intelligence in Sydney', Thunderbird International Business Review, vol. 62, no. 5, pp. 457-474.
View/Download from: Publisher's site
View description>>
Abstract: New knowledge presents opportunities for commercial value and can hence be a critical asset for entrepreneurial ecosystems (EEs). In particular, general purpose technologies are major drivers of entrepreneurship. Thus, a nuanced understanding of technological knowledge and its spillovers among actors within an EE is warranted. Using knowledge‐spillover‐based strategic entrepreneurship theory, we propose to observe knowledge spillovers through the assessment of the knowledge bases of a technology in an EE. To do so, this article proposes to use three key sources of knowledge: publications reflecting the emerging knowledge base, patents representing the realized knowledge base, and startups showing the experimental knowledge base. This article uses secondary data sources such as Web of Science and applies the method of bibliometrics to illustrate how an assessment is carried out in practice by evaluating the artificial intelligence (AI) knowledge bases in Sydney from 2000 to 2018. The findings are summarized with an illustration of the evolution of the key actors and their activities over time in order to indicate the key strengths and weaknesses in Sydney's AI knowledge among the different bases. Contrary to expectations from the high potential of knowledge spillovers from a general purpose digital technology such as AI, the article shows that apparent knowledge spillovers are still highly limited in Sydney. Even though Sydney has a strong emerging knowledge base, the realized knowledge base seems weak and the experimental knowledge base is slowly improving. That observation itself verifies the need to take strategic actions to facilitate knowledge spillovers within EEs. After the implications for theory and policy makers are discussed, suggestions for further studies are proposed.
Chalmers, T, Maharaj, S, Lees, T, Lin, CT, Newton, P, Clifton-Bligh, R, McLachlan, CS, Gustin, SM & Lal, S 2020, 'Impact of acute stress on cortical electrical activity and cardiac autonomic coupling', Journal of Integrative Neuroscience, vol. 19, no. 2, pp. 239-239.
View/Download from: Publisher's site
View description>>
Assessment of heart rate variability (reflective of the cardiac autonomic nervous system) has shown some predictive power for stress. Further, the predictive power of the distinct patterns of cortical brain activity and cardiac autonomic interactions is yet to be explored in the context of acute stress, as assessed by an electrocardiogram and electroencephalogram. The present study identified distinct patterns of neural-cardiac autonomic coupling during both resting and acute stress states. In particular, during the stress task, frontal delta wave activity was positively associated with low-frequency heart rate variability and negatively associated with high-frequency heart rate variability. Low high-frequency power is associated with stress and anxiety and reduced vagal control. A positive association between resting high-frequency heart rate variability and frontocentral gamma activity was found, with a direct inverse relationship of low-frequency heart rate variability and gamma wave coupling at rest. During the stress task, low-frequency heart rate variability was positively associated with frontal delta activity. That is, the parasympathetic nervous system is reduced during a stress task, whereas frontal delta wave activity is increased. Our findings suggest an association between cardiac parasympathetic nervous system activity and frontocentral gamma and delta activity at rest and during acute stress. This suggests that parasympathetic activity is decreased during acute stress, and this is coupled with neuronal cortical prefrontal activity. The distinct patterns of neural-cardiac coupling identified in this study provide a unique insight into the dynamic associations between brain and heart function during both resting and acute stress states.
Chang, LC, Pare, S, Meena, MS, Jain, D, Li, DL, Saxena, A, Prasad, M & Lin, CT 2020, 'An Intelligent Automatic Human Detection and Tracking System Based on Weighted Resampling Particle Filtering', Big Data and Cognitive Computing, vol. 4, no. 4, pp. 27-27.
View/Download from: Publisher's site
View description>>
At present, traditional visual-based surveillance systems are becoming impractical, inefficient, and time-consuming. Automation-based surveillance systems have emerged to overcome these limitations. However, the automatic systems have some challenges such as occlusion and retaining images smoothly and continuously. This research proposes a weighted resampling particle filter approach for human tracking to handle these challenges. The primary functions of the proposed system are human detection, human monitoring, and camera control. We used the codebook matching algorithm to define the human region as a target and track it, and we used the particle filter algorithm to follow and extract the target information. Consequently, the obtained information was used to configure the camera control. The experiments were tested in various environments to prove the stability and performance of the proposed system based on the active camera.
Chang, Y-C, Dostovalova, A, Lin, C-T & Kim, J 2020, 'Intelligent Multirobot Navigation and Arrival-Time Control Using a Scalable PSO-Optimized Hierarchical Controller', Frontiers in Artificial Intelligence, vol. 3, p. 50.
View/Download from: Publisher's site
Chebil, W, Wedyan, MO, Lu, H & Elshaweesh, OG 2020, 'Context-Aware Personalized Web Search Using Navigation History', International Journal on Semantic Web and Information Systems, vol. 16, no. 2, pp. 91-107.
View/Download from: Publisher's site
View description>>
It is highly desirable that web search engines know users well and provide just what the user needs. Although great effort has been devoted to achieving this dream, the commonly used web search engines still provide "one-fit-all" results. One of the barriers is the lack of an accurate representation of user search context that supports personalised web search. This article presents a method to represent user search context and incorporate this representation to produce personalised web search results based on Google search results. The key contributions are twofold: a method to build contextual user profiles using their browsing behaviour and the semantic knowledge represented in a domain ontology; and an algorithm to re-rank the original search results using these contextual user profiles. The effectiveness of the proposed new techniques was evaluated through comparisons of cases with and without these techniques respectively, and a promising precision improvement of 35% was achieved.
Chen, C-Y, Quan, W, Cheng, N, Yu, S, Lee, J-H, Perez, GM, Zhang, H & Shieh, S 2020, 'IEEE Access Special Section Editorial: Artificial Intelligence in Cybersecurity', IEEE Access, vol. 8, pp. 163329-163333.
View/Download from: Publisher's site
Chen, F, Xiao, Z, Cui, L, Lin, Q, Li, J & Yu, S 2020, 'Blockchain for Internet of things applications: A review and open issues', Journal of Network and Computer Applications, vol. 172, pp. 102839-102839.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Blockchain and Internet of things (IoT) systems are attracting more and more research efforts from both academia and industry. Blockchain is rapidly evolving to be a new infrastructure for building robust distributed applications. Similarly, the Internet of things is being increasingly deployed in contexts such as the smart city, smart home, and smart healthcare. On the intersection of the two emerging areas, researchers are proposing to use blockchain to build more dependable IoT systems. This paper reviews the most recent research advances in this direction during the past four years. Specifically, we review, summarize, and categorize existing research works. We divide the research works into four groups according to the roles that the blockchain plays in IoT systems, i.e., access control platform, data security platform, trusted third party, and automatic payment platform. For each group, we also discuss future research challenges. From the review, we further summarize the usage paradigms and open issues on using blockchain to build dependable IoT systems. We hope this work serves as a reference on existing models for both researchers and engineers who are interested in leveraging blockchain to build future IoT systems.
Chen, K, Yao, L, Zhang, D, Wang, X, Chang, X & Nie, F 2020, 'A Semisupervised Recurrent Convolutional Attention Model for Human Activity Recognition', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 5, pp. 1747-1756.
View/Download from: Publisher's site
View description>>
Recent years have witnessed the success of deep learning methods in human activity recognition (HAR). The longstanding shortage of labeled activity data inherently calls for a plethora of semisupervised learning methods, and one of the most challenging and common issues with semisupervised learning is the imbalanced distribution of labeled data over classes. Although the problem has long existed in broad real-world HAR applications, it is rarely explored in the literature. In this paper, we propose a semisupervised deep model for imbalanced activity recognition from multimodal wearable sensory data. We aim to address not only the challenges of multimodal sensor data (e.g., interperson variability and interclass similarity) but also the limited labeled data and class-imbalance issues simultaneously. In particular, we propose a pattern-balanced semisupervised framework to extract and preserve diverse latent patterns of activities. Furthermore, we exploit the independence of multi-modalities of sensory data and attentively identify salient regions that are indicative of human activities from inputs by our recurrent convolutional attention networks. Our experimental results demonstrate that the proposed model achieves a competitive performance compared to a multitude of state-of-the-art methods, both semisupervised and supervised ones, with 10% labeled training data. The results also show the robustness of our method over imbalanced, small training data sets.
Chen, L, Zhang, N, Sun, H-M, Chang, C-C, Yu, S & Choo, K-KR 2020, 'Secure search for encrypted personal health records from big data NoSQL databases in cloud', Computing, vol. 102, no. 6, pp. 1521-1545.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag GmbH Austria, part of Springer Nature. As the healthcare industry adopts the use of cloud to store personal health records (PHRs), there is a need to ensure that we maintain the ability to perform efficient search on encrypted data (stored in the cloud). In this paper, we propose a secure searchable encryption scheme, which is designed to search on encrypted personal health records from a NoSQL database in semi-trusted cloud servers. The proposed scheme supports almost all query operations available in plaintext database environments, especially multi-dimensional, multi-keyword searches with range query. Specifically, in the proposed scheme, an Adelson-Velsky Landis (AVL) tree is utilized to construct the index, and an order-revealing encryption (ORE) algorithm is used to encrypt the AVL tree and realize range query. As document-based databases are probably the most popular type of NoSQL database, due to their flexibility, high efficiency, and ease of use, MongoDB, a document-based NoSQL database, is chosen to store the encrypted PHR data in our scheme. Experimental results show that the scheme can achieve secure and practical searchable encryption for PHRs. A comparison of the range query demonstrates that the time overhead of our ORE-based scheme is 25.5% shorter than that of the mOPE-based Arx (an encrypted database system) scheme.
Chen, M, Voinov, A, Ames, DP, Kettner, AJ, Goodall, JL, Jakeman, AJ, Barton, MC, Harpham, Q, Cuddy, SM, DeLuca, C, Yue, S, Wang, J, Zhang, F, Wen, Y & Lü, G 2020, 'Position paper: Open web-distributed integrated geographic modelling and simulation to enable broader participation and applications', Earth-Science Reviews, vol. 207, pp. 103223-103223.
View/Download from: Publisher's site
View description>>
© 2020 The Authors Integrated geographic modelling and simulation is a computational means to improve understanding of the environment. With the development of Service Oriented Architecture (SOA) and web technologies, it is possible to conduct open, extensible integrated geographic modelling across a network in which resources can be accessed and integrated, and further distributed geographic simulations can be performed. This open web-distributed modelling and simulation approach is likely to enhance the use of existing resources and can attract diverse participants. With this approach, participants from different physical locations or domains of expertise can perform comprehensive modelling and simulation tasks collaboratively. This paper reviews past integrated modelling and simulation systems, highlighting the associated development challenges when moving to an open web-distributed system. A conceptual framework is proposed to introduce a roadmap from a system design perspective, with potential use cases provided. The four components of this conceptual framework - a set of standards, a resource sharing environment, a collaborative integrated modelling environment, and a distributed simulation environment - are also discussed in detail with the goal of advancing this emerging field.
Chen, S, Fu, A, Shen, J, Yu, S, Wang, H & Sun, H 2020, 'RNN-DP: A new differential privacy scheme base on Recurrent Neural Network for Dynamic trajectory privacy protection', Journal of Network and Computer Applications, vol. 168, pp. 102736-102736.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Mobile devices furnish users with various services while on the move, but also raise public concerns about trajectory privacy. Unfortunately, traditional privacy protection methods, such as anonymity and generalization, are not secure because they cannot resist attackers with background knowledge. The emergence of differential privacy provides an effective solution to this problem. Still, the existing schemes are almost all designed based on collected aggregate historical data (so-called static trajectory privacy protection), which is not suitable for real-time dynamic trajectory privacy protection of mobile users. Furthermore, due to the complexity and redundancy of the full trajectory data, the efficiency and accuracy of the privacy protection model are significantly limited in the existing schemes. In this paper, we propose a new differential privacy scheme based on the Recurrent Neural Network for Dynamic trajectory privacy Protection (RNN-DP). We firstly introduce a recurrent neural network model to handle the real-time data effectively instead of the full data. Secondly, we leverage the dynamic velocity attribute in a novel way to form a quaternion that indicates the status of the users. Moreover, we design a prejudgment mechanism to increase the availability of differential privacy technology. Compared with the current state-of-the-art mechanisms, the experimental results demonstrate that RNN-DP displays excellent performance in privacy protection and data availability for dynamic trajectory data.
Chen, Y, Mao, Y, Liang, H, Yu, S, Wei, Y & Leng, S 2020, 'Data Poison Detection Schemes for Distributed Machine Learning', IEEE Access, vol. 8, pp. 7442-7454.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Distributed machine learning (DML) can realize massive dataset training when no single node can work out the accurate results within an acceptable time. However, this will inevitably expose more potential targets to attackers compared with the non-distributed environment. In this paper, we classify DML into basic-DML and semi-DML. In basic-DML, the center server dispatches learning tasks to distributed machines and aggregates their learning results. In semi-DML, the center server further devotes resources to dataset learning in addition to its duty in basic-DML. We firstly put forward a novel data poison detection scheme for basic-DML, which utilizes a cross-learning mechanism to find out the poisoned data. We prove that the proposed cross-learning mechanism would generate training loops, based on which a mathematical model is established to find the optimal number of training loops. Then, for semi-DML, we present an improved data poison detection scheme to provide better learning protection with the aid of the central resource. To efficiently utilize the system resources, an optimal resource allocation approach is developed. Simulation results show that the proposed scheme can significantly improve the accuracy of the final model by up to 20% for support vector machine and 60% for logistic regression in the basic-DML scenario. Moreover, in the semi-DML scenario, the improved data poison detection scheme with optimal resource allocation can decrease the wasted resources by 20-100%.
Cheng, C, Cao, Z & Xiao, F 2020, 'A generalized belief interval-valued soft set with applications in decision making', Soft Computing, vol. 24, no. 13, pp. 9339-9350.
View/Download from: Publisher's site
Cheng, EJ, Prasad, M, Yang, J, Khanna, P, Chen, B-H, Tao, X, Young, K-Y & Lin, C-T 2020, 'A fast fused part-based model with new deep feature for pedestrian detection and security monitoring', Measurement, vol. 151, pp. 107081-107081.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd In recent years, pedestrian detection based on computer vision has been widely used in intelligent transportation, security monitoring, assistance driving and other related applications. However, one of the remaining open challenges is that pedestrians are partially obscured and their posture changes. To address this problem, the deformable part model (DPM) uses a mixture of part filters to capture variation in viewpoint and appearance and achieves success on challenging datasets. Nevertheless, the expensive computation cost of DPM limits its ability in real-time applications. This study proposes a fast fused part-based model (FFPM) to detect pedestrians efficiently and accurately in crowded environments. The first step of the proposed method trains six Adaboost classifiers with Haar-like features for different body parts (e.g., head, shoulders, and knees) to build the response feature maps. These six response feature maps are combined with a full-body model to produce spatial deep features. The second step of the proposed method uses the deep features as an input to a support vector machine (SVM) to detect pedestrians. A variety of strategies are introduced in the proposed model, including a part-based to full-body method, spatial filtering, and multi-ratio combination. Experimental results show that the proposed FFPM method improves the computation speed of DPM and maintains its detection performance.
Chiang, YK, Oberst, S, Melnikov, A, Quan, L, Marburg, S, Alù, A & Powell, DA 2020, 'Reconfigurable Acoustic Metagrating for High-Efficiency Anomalous Reflection', Physical Review Applied, vol. 13, no. 6, pp. 064067-064067.
View/Download from: Publisher's site
View description>>
A recent study revealed that the scattering behaviors of bianisotropic scatterers can be controlled by an additional degree of freedom, represented as Willis coupling, which can be endowed with asymmetric wave scattering to form an acoustic metagrating for wavefront manipulation. Here, we introduce a flexible acoustic metagrating, formed by periodic arrays of properly designed Willis scatterers, for anomalous reflection with nearly unitary efficiency and significantly less need for fine discretization. Numerical approaches to predict the wave steering efficiency of the proposed acoustic metagratings with infinite and finite length are developed, which are utilized to demonstrate the strength and flexible features of the metagratings. Results reveal that the proposed acoustic metagrating can reroute an incident wave into a desired direction at a large angle with nearly unitary efficiency in reflection. The numerical predictions also show that the proposed designs offer a highly efficient tunable platform for controlling the steering angles and operating frequencies. To practically realize the extreme angle steering and tunable characteristics of the metagratings, designed structures are fabricated and examined experimentally. The acoustic wave is successfully rerouted to the targeted reflection angles by the finite metagrating. The flexibility of the proposed metagratings regarding different steering angles and operating frequencies is also demonstrated experimentally.
Cui, L, Qu, Y, Gao, L, Xie, G & Yu, S 2020, 'Detecting false data attacks using machine learning techniques in smart grid: A survey', Journal of Network and Computer Applications, vol. 170, pp. 102808-102808.
View/Download from: Publisher's site
View description>>
© 2020 The big data sources in smart grid (SG) enable utilities to monitor, control, and manage the energy system effectively, which is also promising to advance the efficiency, reliability, and sustainability of energy usage. However, false data attacks, as a major threat with wide targets and severe impacts, have exposed SG systems to a large variety of security issues. To detect this threat effectively, several machine learning (ML)-based methods have been developed in the past few years. In this paper, we provide a comprehensive survey of these advances. The paper starts by providing a brief overview of SG architecture and its data sources. Moreover, the categories of false data attacks followed by data security requirements are introduced. Then, the recent ML-based detection techniques are summarized by grouping them into three major detection scenarios: non-technical losses, state estimation, and load forecasting. Finally, we investigate potential research directions, considering the deficiencies of current ML-based mechanisms. Specifically, we discuss intrusion detection against adversarial attacks, collaborative and decentralized detection frameworks, detection with privacy preservation, and some potential advanced ML techniques.
Cui, L, Wu, J, Pi, D, Zhang, P & Kennedy, P 2020, 'Dual Implicit Mining-Based Latent Friend Recommendation', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 5, pp. 1663-1678.
View/Download from: Publisher's site
View description>>
The latent friend recommendation in online social media is interesting, yet challenging, because the user-item ratings and the user-user relationships are both sparse. In this paper, we propose a new dual implicit mining-based latent friend recommendation model that simultaneously considers the implicit interest topics of users and the implicit link relationships between the users in the local topic cliques. Specifically, we first propose an algorithm called all reviews from a user and all tags from their corresponding items to learn the implicit interest topics of the users and their corresponding topic weights, then compute the user interest topic similarity using a symmetric Jensen-Shannon divergence. After that, we adopt the proposed weighted local random walk with restart algorithm to analyze the implicit link relationships between the users in the local topic cliques and calculate the weighted link relationship similarity between the users. Combining the user interest topic similarity with the weighted link relationship similarity in a unified way, we get the final latent friend recommendation list. The experiments on real-world datasets demonstrate that the proposed method outperforms the state-of-the-art latent friend recommendation methods under four different types of evaluation metrics.
Cui, L, Xie, G, Yu, S, Zhai, X & Gao, L 2020, 'An Inherent Property-Based Rumor Dissemination Model in Online Social Networks', IEEE Networking Letters, vol. 2, no. 1, pp. 43-46.
View/Download from: Publisher's site
Curiskis, SA, Drake, B, Osborn, TR & Kennedy, PJ 2020, 'An evaluation of document clustering and topic modelling in two online social networks: Twitter and Reddit', Information Processing & Management, vol. 57, no. 2, pp. 102034-102034.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Methods for document clustering and topic modelling in online social networks (OSNs) offer a means of categorising, annotating and making sense of large volumes of user generated content. Many techniques have been developed over the years, ranging from text mining and clustering methods to latent topic models and neural embedding approaches. However, many of these methods deliver poor results when applied to OSN data as such text is notoriously short and noisy, and often results are not comparable across studies. In this study we evaluate several techniques for document clustering and topic modelling on three datasets from Twitter and Reddit. We benchmark four different feature representations derived from term-frequency inverse-document-frequency (tf-idf) matrices and word embedding models combined with four clustering methods, and we include a Latent Dirichlet Allocation topic model for comparison. Several different evaluation measures are used in the literature, so we provide a discussion and recommendation for the most appropriate extrinsic measures for this task. We also demonstrate the performance of the methods over data sets with different document lengths. Our results show that clustering techniques applied to neural embedding feature representations delivered the best performance over all data sets using appropriate extrinsic evaluation measures. We also demonstrate a method for interpreting the clusters with a top-words based approach using tf-idf weights combined with embedding distance measures.
Dibaei, M, Zheng, X, Jiang, K, Abbas, R, Liu, S, Zhang, Y, Xiang, Y & Yu, S 2020, 'Attacks and defences on intelligent connected vehicles: a survey', Digital Communications and Networks, vol. 6, no. 4, pp. 399-421.
View/Download from: Publisher's site
View description>>
© 2020 Chongqing University of Posts and Telecommunications Intelligent vehicles are advancing at a fast speed with the improvement of automation and connectivity, which opens up new possibilities for different cyber-attacks, including in-vehicle attacks (e.g., hijacking attacks) and vehicle-to-everything communication attacks (e.g., data theft). These problems are becoming increasingly serious with the development of 4G LTE and 5G communication technologies. Although many efforts have been made to improve the resilience to cyber attacks, there are still many unsolved challenges. This paper first identifies some major security attacks on intelligent connected vehicles. Then, we investigate and summarize the available defences against these attacks and classify them into four categories: cryptography, network security, software vulnerability detection, and malware detection. Remaining challenges and future directions for preventing attacks on intelligent vehicle systems are discussed as well.
Dimuro, GP, Lucca, G, Bedregal, B, Mesiar, R, Sanz, JA, Lin, C-T & Bustince, H 2020, 'Generalized CF1F2-integrals: From Choquet-like aggregation to ordered directionally monotone functions', Fuzzy Sets and Systems, vol. 378, pp. 44-67.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. This paper introduces the theoretical framework for a generalization of CF1F2-integrals, a family of Choquet-like integrals used successfully in the aggregation process of the fuzzy reasoning mechanisms of fuzzy rule based classification systems. The proposed generalization, called gCF1F2-integrals, is based on the so-called pseudo pre-aggregation function pairs (F1,F2), which are pairs of fusion functions satisfying a minimal set of requirements in order to guarantee that the gCF1F2-integrals are either an aggregation function or just an ordered directionally increasing function satisfying the appropriate boundary conditions. We propose a dimension reduction of the input space, in order to deal with repeated elements in the input, avoiding ambiguities in the definition of gCF1F2-integrals. We study several properties of gCF1F2-integrals, considering different constraints for the functions F1 and F2, and state under which conditions gCF1F2-integrals present averaging behaviors or not. Several examples of gCF1F2-integrals are presented, considering different pseudo pre-aggregation function pairs, defined on, e.g., t-norms, overlap functions, copulas that are neither t-norms nor overlap functions, and other functions that are not even pre-aggregation functions.
Ding, W, Lin, C-T & Pedrycz, W 2020, 'Multiple Relevant Feature Ensemble Selection Based on Multilayer Co-Evolutionary Consensus MapReduce', IEEE Transactions on Cybernetics, vol. 50, no. 2, pp. 425-439.
View/Download from: Publisher's site
View description>>
IEEE Although feature selection for large data has been intensively investigated in data mining, machine learning, and pattern recognition, the challenges are not just to invent new algorithms to handle noisy and uncertain large data in applications, but rather to link the multiple relevant feature sources, structured, or unstructured, to develop an effective feature reduction method. In this paper, we propose a multiple relevant feature ensemble selection (MRFES) algorithm based on multilayer co-evolutionary consensus MapReduce (MCCM). We construct an effective MCCM model to handle feature ensemble selection of large-scale datasets with multiple relevant feature sources, and explore the unified consistency aggregation between the local solutions and global dominance solutions achieved by the co-evolutionary memeplexes, which participate in the cooperative feature ensemble selection process. This model attempts to reach a mutual decision agreement among co-evolutionary memeplexes, which calls for the need for mechanisms to detect some noncooperative co-evolutionary behaviors and achieve better Nash equilibrium resolutions. Extensive experimental comparative studies substantiate the effectiveness of MRFES to solve large-scale dataset problems with the complex noise and multiple relevant feature sources on some well-known benchmark datasets. The algorithm can greatly facilitate the selection of relevant feature subsets coming from the original feature space with better accuracy, efficiency, and interpretability. Moreover, we apply MRFES to human cerebral cortex-based classification prediction. Such successful applications are expected to significantly scale up classification prediction for large-scale and complex brain data in terms of efficiency and feasibility.
Ding, W, Lin, C-T, Liew, AW-C, Triguero, I & Luo, W 2020, 'Current trends of granular data mining for biomedical data analysis', Information Sciences, vol. 510, pp. 341-343.
View/Download from: Publisher's site
Ding, W, Pedrycz, W & Lin, C-T 2020, 'Guest Editorial for the Special Issue on Fuzzy Rough Sets for Big Data', IEEE Transactions on Fuzzy Systems, vol. 28, no. 5, pp. 803-805.
View/Download from: Publisher's site
Ding, W, Yen, GG, Cai, X & Cao, Z 2020, 'Foreword: Evolutionary data mining for big data', Swarm and Evolutionary Computation, vol. 57, pp. 100738-100738.
View/Download from: Publisher's site
Dong, M, Yao, L, Wang, X, Benatallah, B, Huang, C & Ning, X 2020, 'Opinion fraud detection via neural autoencoder decision forest', Pattern Recognition Letters, vol. 132, no. 10, pp. 21-29.
View/Download from: Publisher's site
View description>>
Online reviews play an important role in influencing buyers’ daily purchase decisions. However, fake and meaningless reviews, which cannot reflect users’ genuine purchase experience and opinions, widely exist on the Web and pose great challenges for users to make the right choices. Therefore, it is desirable to build a fair model that evaluates the quality of products by distinguishing spamming reviews. We present an end-to-end trainable unified model to leverage the appealing properties of the autoencoder and the random forest. A stochastic decision tree model is implemented to guide the global parameter learning process. Extensive experiments were conducted on a large Amazon review dataset. The proposed model consistently outperforms a series of compared methods.
Dong, X, Liu, L, Musial, K & Gabrys, B 2020, 'NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size', IEEE transactions on pattern analysis and machine intelligence, vol. PP, pp. 1-1.
View/Download from: Publisher's site
View description>>
Neural architecture search (NAS) has attracted a lot of attention and has been illustrated to bring tangible benefits in a large number of applications in the past few years. Architecture topology and architecture size have been regarded as two of the most important aspects for the performance of deep learning models, and the community has spawned lots of searching algorithms for both aspects of the neural architectures. However, the performance gain from these searching algorithms is achieved under different search spaces and training setups. This makes the overall performance of the algorithms to some extent incomparable and the improvement from a sub-module of the searching model unclear. In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm. NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets. We analyze the validity of our benchmark in terms of various criteria and performance comparison of all candidates in the search space. We also show the versatility of NATS-Bench by benchmarking 13 recent state-of-the-art NAS algorithms on it. All logs and diagnostic information trained using the same setup for each candidate are provided. This facilitates a much larger community of researchers to focus on developing better NAS algorithms in a more comparable and computationally cost friendly environment. All codes are publicly available at: https://xuanyidong.com/assets/projects/NATS-Bench.
Downie, AS, Hancock, M, Abdel Shaheed, C, McLachlan, AJ, Kocaballi, AB, Williams, CM, Michaleff, ZA & Maher, CG 2020, 'An Electronic Clinical Decision Support System for the Management of Low Back Pain in Community Pharmacy: Development and Mixed Methods Feasibility Study', JMIR Medical Informatics, vol. 8, no. 5, pp. e17203-e17203.
View/Download from: Publisher's site
View description>>
Background People with low back pain (LBP) in the community often do not receive evidence-based advice and management. Community pharmacists can play an important role in supporting people with LBP as pharmacists are easily accessible to provide first-line care. However, previous research suggests that pharmacists may not consistently deliver advice that is concordant with guideline recommendations and may demonstrate difficulty determining which patients require prompt medical review. A clinical decision support system (CDSS) may enhance first-line care of LBP, but none exists to support the community pharmacist–client consultation. Objective This study aimed to develop a CDSS to guide first-line care of LBP in the community pharmacy setting and to evaluate the pharmacist-reported usability and acceptance of the prototype system. Methods A cross-platform Web app for the Apple iPad was developed in conjunction with academic and clinical experts using an iterative user-centered design process during interface design, clinical reasoning, program development, and evaluation. The CDSS was evaluated via one-to-one user-testing with 5 community pharmacists (5 case vignettes each). Data were collected via video recording, screen capture, survey instrument (system usability scale), and direct observation. Results Pharmacists’ agreement with CDSS-generated self-care recommendations was 90% (18/20), with medicines recommendations was 100% (25/25), and with referral advice was 88% (22/25; total 70 recommendations). Pharmacists expressed uncertainty when screening for serious p...
Du, X, Yin, H, Chen, L, Wang, Y, Yang, Y & Zhou, X 2020, 'Personalized Video Recommendation Using Rich Contents from Videos', IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 3, pp. 492-505.
View/Download from: Publisher's site
View description>>
IEEE Video recommendation has become an essential way of helping people explore massive video collections and discover the ones that may be of interest to them. In the existing video recommender systems, the models make recommendations based on the user-video interactions and single specific content features. When the specific content features are unavailable, the performance of the existing models seriously deteriorates. Inspired by the fact that rich contents (e.g., text, audio, motion, and so on) exist in videos, in this paper, we explore how to use these rich contents to overcome the limitations caused by the unavailability of the specific ones. Specifically, we propose a novel general framework that incorporates an arbitrary single content feature with user-video interactions, named the collaborative embedding regression (CER) model, to make effective video recommendation in both in-matrix and out-of-matrix scenarios. Our extensive experiments on two real-world large-scale datasets show that CER beats the existing recommender models with any single content feature and is more time efficient. In addition, we propose a priority-based late fusion (PRI) method to gain the benefit brought by integrating multiple content features. The corresponding experiment shows that PRI brings real performance improvement to the baseline and outperforms the existing fusion methods.
Espinoza-Audelo, LF, León-Castro, E, Olazabal-Lugo, M, Merigó, JM & Gil-Lafuente, AM 2020, 'Using Ordered Weighted Average for Weighted Averages Inflation', International Journal of Information Technology & Decision Making, vol. 19, no. 02, pp. 601-628.
View/Download from: Publisher's site
View description>>
This paper presents the ordered weighted average weighted average inflation (OWAWAI) and some extensions using induced and heavy aggregation operators and presents the generalized operators and some of their families. The main advantage of these new formulations is that they can use two different sets of weighting vectors and generate new scenarios based on the reordering of the arguments with the weights. With this idea, it is possible to generate new approaches that under- or overestimate the results according to the knowledge and expertise of the decision-maker. The work presents an application of these new approaches in the analysis of the inflation in Chile, Colombia, and Argentina during 2017.
Fachrunnisa, O & Hussain, FK 2020, 'A methodology for creating sustainable communities based on dynamic factors in virtual environments', International Journal of Electronic Business, vol. 15, no. 2, pp. 133-133.
View/Download from: Publisher's site
View description>>
Copyright © 2020 Inderscience Enterprises Ltd. A virtual community is one of the communities that exist in an internet economy; however, little research has been conducted on how to make it sustainable. We propose a methodology for creating sustainable virtual communities which depends on the community’s response to the dynamic factors in its environment, such as the number of members, shared contents and interaction rules. The methodology proposes the use of iterative negotiation and a panel of expert agents to assess the quality of service (QoS) delivered. This QoS assessment is based on an interaction agreement between the community members and an expert agent as the administrator’s representative. The administrators use this QoS assessment to determine whether an individual’s membership will be renewed or terminated after a certain period of time. We present a metric to measure the sustainability index and demonstrate the validity of the methodology by engineering a prototype setup and running simulations under various operational conditions.
Fachrunnisa, O & Hussain, FK 2020, 'Blockchain-based human resource management practices for mitigating skills and competencies gap in workforce', International Journal of Engineering Business Management, vol. 12, pp. 184797902096640-184797902096640.
View/Download from: Publisher's site
View description>>
The skills gap between company needs and the competencies of the workforce can be a source of inefficiency. The purpose of this research is to develop a blockchain-based human resource (HR) framework to match the needs of the company with workforce competencies. This framework will help the Corporate Training Centre to standardize the competencies, which are then used by the HR Department to develop training material. In order to obtain valid information regarding the skills needed by the company, we develop a prototype based on blockchain. Hence, blockchain-based HRM is built to improve the quality of workforce competency in an organization. Current organizations are struggling to fulfil the needs of the workforce in accordance with industry quality standards; this framework will therefore help all parties to create a consensus between the needs of industry and the labour market. The Corporate Training Centre, through the competent institution, will be the mediator or intermediary that unites information from companies, training institutions, and Professional Certification Institutions. As a result, in the long term, the needs of the workforce and the qualifications required by companies in such industries will always fit the current situation. Blockchain helps to process the information and data needed by each party so that the connections between parties are supported efficiently and effectively.
Fahmideh, M & Zowghi, D 2020, 'An exploration of IoT platform development', Information Systems, vol. 87, pp. 101409-101409.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd IoT (Internet of Things) platforms are key enablers for smart city initiatives, targeting the improvement of citizens’ quality of life and economic growth. As IoT platforms are dynamic, proactive, and heterogeneous socio-technical artefacts, systematic approaches are required for their development. Limited surveys have exclusively explored how IoT platforms are developed and maintained from the perspective of information system development process lifecycle. In this paper, we present a detailed analysis of 63 approaches. This is accomplished by proposing an evaluation framework as a cornerstone to highlight the characteristics, strengths, and weaknesses of these approaches. The survey results not only provide insights of empirical findings, recommendations, and mechanisms for the development of quality aware IoT platforms, but also identify important issues and gaps that need to be addressed.
Fang, L, Li, Y, Yun, X, Wen, Z, Ji, S, Meng, W, Cao, Z & Tanveer, M 2020, 'THP: A Novel Authentication Scheme to Prevent Multiple Attacks in SDN-Based IoT Network', IEEE Internet of Things Journal, vol. 7, no. 7, pp. 5745-5759.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. SDN has provided significant convenience for network providers and operators in cloud computing. Such a great advantage is extending to the Internet of Things network. However, it also increases the risk if the security of an SDN network is compromised. For example, if the network operator's permission is illegally obtained by a hacker, he/she can control the entry point of the SDN network. Therefore, an effective authentication scheme is needed to fit various application scenarios with high-security requirements. In this article, we design, implement, and evaluate a new authentication scheme called the hidden pattern (THP), which combines graphical passwords and digital challenge values to prevent multiple types of authentication attacks at the same time. We examined THP from the perspectives of both security and usability, with a total of 694 participants over 63 days. Our evaluation shows that THP provides better performance than the existing schemes in terms of security and usability.
Fang, L, Yin, C, Zhu, J, Ge, C, Tanveer, M, Jolfaei, A & Cao, Z 2020, 'Privacy Protection for Medical Data Sharing in Smart Healthcare', ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 3s, pp. 1-18.
View/Download from: Publisher's site
View description>>
Owing to advances in smart networks and the cloud computing paradigm, smart healthcare is being transformed. However, there are still challenges, such as storing sensitive data in untrusted and controlled infrastructure and ensuring the secure transmission of medical data, among others. The rapid development of watermarking provides opportunities for smart healthcare. In this article, we propose a new data-sharing framework and a data access control mechanism. Applications are submitted by the doctors, and the data is processed in the medical data center of the hospital and stored in semi-trusted servers to support the selective sharing of electronic medical records from different medical institutions between different doctors. Our approach ensures that privacy concerns are taken into account when processing requests for access to patients’ medical information. For accountability, after data is modified or leaked, both patients and doctors must add digital watermarks associated with their identification when uploading data. Extensive analytical and experimental results are presented that show the security and efficiency of our proposed scheme.
Fang, L, Zhang, X, Sood, K, Wang, Y & Yu, S 2020, 'Reliability-aware virtual network function placement in carrier networks', Journal of Network and Computer Applications, vol. 154, pp. 102536-102536.
View/Download from: Publisher's site
View description>>
© 2020 Network Function Virtualization (NFV) is a promising technology that implements Virtual Network Function (VNF) with software on general servers. Traffic needs to go through a set of ordered VNFs, which is called a Service Function Chain (SFC). Rational deployment of VNFs can reduce costs and increase profits for network operators. However, during the deployment of the VNFs, how to guarantee the reliability of SFC requirements while optimizing network resource cost is still an open problem. To this end, we study the problem of reliability-aware VNF placement in carrier networks. In this paper, we firstly redefine the reliability of SFC, which is the product of the reliability of all nodes and physical links in SFC. On this basis, we propose two reliability protection mechanisms: the All-Nodes Protection Mechanism (ANPM) and the Single-Node Protection Mechanism (SNPM). Following this, for each protection mechanism, we formulate the problem as an Integer Linear Programming (ILP) model. Due to the problem complexity, we propose a heuristic algorithm based on Dynamic Programming and Lagrangian Relaxation for each protection mechanism. With extensive simulations using real world topologies, our results show that compared with the benchmark algorithm and ANPM, SNPM can save up to 33.34% and 26.76% network resource cost on average respectively while guaranteeing the reliability requirement of SFC requests, indicating that SNPM performs better than ANPM and has better application potential in carrier networks.
Fang, L, Zhu, H, Lv, B, Liu, Z, Meng, W, Yu, Y, Ji, S & Cao, Z 2020, 'HandiText', ACM/IMS Transactions on Data Science, vol. 1, no. 4, pp. 1-18.
View/Download from: Publisher's site
View description>>
The Internet of Things (IoT) is a new manifestation of data science. To ensure the credibility of data about IoT devices, authentication has gradually become an important research topic in the IoT ecosystem. However, traditional graphical passwords and text passwords can impose serious memory burdens on users. Therefore, a convenient method for determining user identity is needed. In this article, we propose a handwriting recognition authentication scheme named HandiText based on behavioral and biometric features. When people write a word by hand, HandiText captures their static biological features and dynamic behavior features during the writing process (writing speed, pressure, etc.). These features are related to habits, which makes it difficult for attackers to imitate them. We also carry out algorithm comparisons and experimental evaluations to prove the reliability of our scheme. The experimental results show that the Long Short-Term Memory model has the best classification accuracy, reaching 99% while keeping relatively low false-positive and false-negative rates. We also test other datasets; the average accuracy of HandiText reaches 98%, with strong generalization ability. In addition, the 324 users we surveyed indicated that they are willing to use this scheme on IoT devices.
Fang, XS, Sheng, QZ, Wang, X, Zhang, WE, Ngu, AHH & Yang, J 2020, 'From Appearance to Essence', ACM Transactions on Intelligent Systems and Technology, vol. 11, no. 6, pp. 1-24.
View/Download from: Publisher's site
View description>>
Truth discovery has been widely studied in recent years as a fundamental means for resolving the conflicts in multi-source data. Although many truth discovery methods have been proposed based on different considerations and intuitions, investigations show that no single method consistently outperforms the others. To select the right truth discovery method for a specific application scenario, it becomes essential to evaluate and compare the performance of different methods. A drawback of current research efforts is that they commonly assume the availability of certain ground truth for the evaluation of methods. However, the ground truth may be very limited or even impossible to obtain, rendering the evaluation biased. In this article, we present CompTruthHyp , a generic approach for comparing the performance of truth discovery methods without using ground truth. In particular, our approach calculates the probability of observations in a dataset based on the output of different methods. The probability is then ranked to reflect the performance of these methods. We review and compare 12 representative truth discovery methods and consider both single-valued and multi-valued objects. The empirical studies on both real-world and synthetic datasets demonstrate the effectiveness of our approach for comparing truth discovery methods.
Fazal, MAU, Ferguson, S & Johnston, A 2020, 'Evaluation of Information Comprehension in Concurrent Speech-based Designs', ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 4, pp. 1-19.
View/Download from: Publisher's site
View description>>
In human-computer interaction, particularly in multimedia delivery, information is communicated to users sequentially, whereas users are capable of receiving information from multiple sources concurrently. This mismatch indicates that a sequential mode of communication does not utilise human perception capabilities as efficiently as possible. This article reports an experiment that investigated various speech-based (audio) concurrent designs and evaluated the comprehension depth of information by comparing comprehension performance across several different formats of questions (main/detailed, implied/stated). The results showed that users, besides answering the main questions, were also successful in answering the implied questions, as well as the questions that required detailed information, and that the pattern of comprehension depth remained similar to that seen to a baseline condition, where only one speech source was presented. However, the participants answered more questions correctly that were drawn from the main information, and performance remained low where the questions were drawn from detailed information. The results are encouraging to explore the concurrent methods further for communicating multiple information streams efficiently in human-computer interaction, including multimedia.
Feng, B, Cui, Z, Huang, Y, Zhou, H & Yu, S 2020, 'Elastic Resilience for Software-Defined Satellite Networking: Challenges, Solutions, and Open Issues', IT Professional, vol. 22, no. 6, pp. 39-45.
View/Download from: Publisher's site
View description>>
© 1999-2012 IEEE. Satellite networks have long been regarded as a key enabler for ubiquitous Internet access and global data distribution. However, since they are highly dynamic and much more vulnerable to various failures, how to detour traffic around faulty satellites and interrupted links becomes an important but challenging issue. Thanks to emerging software-defined networking, great controllability can be introduced to satellite networks for agile management and automation. Hence, in this article, we focus on elastic resilience for software-defined satellite networking, and propose a preliminary solution to cope with the related fundamental challenges in guaranteeing controller reachability, collecting network status, and detecting and recovering from failures. We also discuss several key open issues to be urgently addressed, hoping to shed some light on this promising area.
Ferrari, A, Spoletini, P, Bano, M & Zowghi, D 2020, 'SaPeer and ReverseSaPeer: teaching requirements elicitation interviews with role-playing and role reversal.', Requir. Eng., vol. 25, no. 4, pp. 417-438.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag London Ltd., part of Springer Nature. Among the variety of available requirements elicitation techniques, interviews are the most commonly used. Performing effective interviews is challenging, especially for students and novice analysts, since interviews’ success depends largely on soft skills and experience. Despite their diffusion and their challenging nature, when it comes to requirements engineering education and training (REET), limited resources and few well-founded pedagogical approaches are available to allow students to acquire and improve their skills as interviewers. To overcome this limitation, this paper presents two pedagogical approaches, namely SaPeer and ReverseSaPeer. SaPeer uses role-playing, peer review and self-assessment to enable students to experience first-hand the difficulties related to the interviewing process, reflect on their mistakes, and improve their interview skills by practice and analysis. ReverseSaPeer builds on the first approach and includes a role reversal activity in which participants play the role of a customer interviewed by a competent interviewer. We evaluate the effectiveness of SaPeer through a controlled quasi-experiment, which shows that the proposed approach significantly reduces the number of mistakes made by the participants and that it is perceived as useful and easy by the participants. ReverseSaPeer and the impact of role reversal are analyzed through a thematic analysis of the participants’ reflections. The analysis shows not only that the students perceive the approach as beneficial, but also that they are emotionally involved in learning. This work contributes to the body of knowledge of REET with two methods, evaluated quantitatively and qualitatively, respectively. Furthermore, we share the pedagogical material used, to enable other educators to apply and possibly tailor the approach.
Flores-Sosa, M, Avilés-Ochoa, E & Merigó, JM 2020, 'Induced OWA operators in linear regression', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5509-5520.
View/Download from: Publisher's site
Gai, K, Guo, J, Zhu, L & Yu, S 2020, 'Blockchain Meets Cloud Computing: A Survey', IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2009-2030.
View/Download from: Publisher's site
View description>>
© 1998-2012 IEEE. Blockchain technology has been deemed to be an ideal choice for strengthening existing computing systems in varied manners. As one of the network-enabled technologies, cloud computing has been broadly adopted in the industry through numerous cloud service models. Fusing blockchain technology with existing cloud systems has great potential for both functionality/performance enhancement and security/privacy improvement. The question remains of how blockchain technology inserts into currently deployed cloud solutions and enables the reengineering of the cloud datacenter. This survey addresses this issue and investigates recent efforts in the technical fusion of blockchain and clouds. Three technical dimensions are roughly covered in this work. First, we consider the service model and review an emerging cloud-relevant blockchain service model, Blockchain-as-a-Service (BaaS); second, security is considered a key technical dimension in this work, and both access control and searchable encryption schemes are assessed; finally, we examine the performance of the cloud datacenter with the support/participation of blockchain from hardware and software perspectives. The main findings of this survey will provide theoretical support for future work on blockchain-enabled reengineering of cloud datacenters.
Gao, H, Yin, Y & Hussain, W 2020, 'Editorial: The ubiquitous internet of things in electricity (IOTE): Computational-intelligence-based optimization, security control, and fault diagnosis', IAENG International Journal of Computer Science, vol. 47, no. 3, pp. 565-566.
Gong, S, Oberst, S & Wang, X 2020, 'An experimentally validated rubber shear spring model for vibrating flip-flow screens', Mechanical Systems and Signal Processing, vol. 139, pp. 106619-106619.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Vibrating flip-flow screens (VFFS) provide an effective solution for screening highly moist and fine-grained minerals, and the dynamic response of the main and the floating screen frames largely accounts for a VFFS's screening performance and its processing capacity. An accurate dynamic model of the rubber shear springs inserted between the frames of the VFFS is critical for its dynamic analysis but has rarely been studied in detail. In this paper, a variance-based global sensitivity analysis is applied to illustrate that the rubber shear spring is the most important component for the dynamics of the VFFS. A nonlinear rubber shear spring model is then proposed to predict its amplitude and frequency dependency, which are described by a friction model and a fractional derivative viscoelastic model, respectively, while the elasticity is predicted by a nonlinear spring. The reasonability of the proposed model is verified by experimental cyclic tests of the rubber shear spring. Comparisons between the newly proposed model and other classic models, including the Generalized Maxwell model, adopted for the dynamic analysis of the VFFS are carried out, and experimental tests of an industrial VFFS's dynamic response show that the dynamics of the VFFS can be better described using the proposed model than the existing models. Furthermore, the method of global sensitivity analysis is also applied to the new VFFS dynamic model to calculate the sensitivities of the model outputs with respect to the input parameters. The results reveal that the dynamic response of an operating VFFS is most sensitive to changes in the stiffness of the rubber shear spring, followed by the mass of the floating screen frames.
Gu, B, Gao, L, Wang, X, Qu, Y, Jin, J & Yu, S 2020, 'Privacy on the Edge: Customizable Privacy-Preserving Context Sharing in Hierarchical Edge Computing', IEEE Transactions on Network Science and Engineering, vol. 7, no. 4, pp. 2298-2309.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The booming of edge computing enables and reshapes this big data era. However, privacy issues arise because an increasing volume of data is published per second while the edge devices can only provide limited computing and storage resources. In addition, this has been aggravated by new emerging features of edge computing, such as decentralized and hierarchical infrastructure, mobility, and content-aware applications. Although some existing privacy-preserving methods have been extended to this domain, the privacy issues of data dissemination between multiple edge nodes and end users are barely studied. Motivated by this, we propose a dynamic customizable privacy-preserving model based on a Markov decision process to obtain the optimized trade-off between customizable privacy protection and data utility. We start by establishing a game model between users and adversaries based on a QoS-based payoff function. A modified reinforcement learning algorithm is deployed to derive the exclusive Nash Equilibrium. Furthermore, the model can achieve fast convergence by the reduction of cardinality from n to 2. Extensive experimental results confirm the significance of the proposed model compared to existing work in terms of both effectiveness and feasibility.
Gu, F, Niu, J, Jin, X & Yu, S 2020, 'FDFA: A fog computing assisted distributed analytics and detecting system for family activities', Peer-to-Peer Networking and Applications, vol. 13, no. 1, pp. 38-52.
View/Download from: Publisher's site
View description>>
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. Research has shown that taking part in family activities can help establish good relationships with family members. Fine-grained detection of family activities is proven effective for increasing self-awareness and motivating people to modify their lifestyles for improved well-being. Mobile health provides the possibility to solve this problem. However, with the increase of such applications, the requirements for computation, communication, and storage capability are becoming ever higher. Fog computing, a new computing paradigm, utilizes a collaborative multitude of end-user clients or near-user edge devices to conduct a substantial amount of computing, communication, and storage. In this paper, we propose FDFA, the first fog computing assisted distributed analytics and detecting system for family activities using smartphones and smartwatches. Specifically, FDFA first uses the built-in sensors to obtain sensing data, such as the striding frequency and heart rate of the users and the sound of the environment. Then, a fog computing assisted resolution framework is proposed to efficiently detect family activities in an unobtrusive manner based on the sensed data. Finally, considering the characteristics of different people, FDFA sets a personal exercise plan for family members so that they can make continuous progress while communicating with one another. We have fully implemented FDFA on the Android platform, and extensive experimental results demonstrate that FDFA is easy to use, accurate, and appropriate for family activities, with an accuracy of 79.1% and a user satisfaction degree of 82.4%. Moreover, the system achieves more than 90% bandwidth efficiency and offers low-latency real-time response with fog computing.
Guan, L, Abbasi, A & Ryan, MJ 2020, 'Analyzing green building project risk interdependencies using Interpretive Structural Modeling', Journal of Cleaner Production, vol. 256, pp. 120372-120372.
View/Download from: Publisher's site
Guan, L, Liu, Q, Abbasi, A & Ryan, MJ 2020, 'DEVELOPING A COMPREHENSIVE RISK ASSESSMENT MODEL BASED ON FUZZY BAYESIAN BELIEF NETWORK (FBBN)', JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT, vol. 26, no. 7, pp. 614-634.
View/Download from: Publisher's site
View description>>
Reliable and efficient risk assessments are essential to deal effectively with potential risks in international construction projects. However, most conventional risk modeling methods are based on the hypothesis that risk factors are independent, which does not account adequately for the causal relationships among risk factors. In this study, a risk assessment model for international construction projects was developed to improve the efficacy of risk management by integrating fault tree analysis and fuzzy set theory with a Bayesian belief network. The risk rating of each risk factor, expressed as the product of risk occurrence probability and impact, was incorporated into the risk assessment model to evaluate degrees of risk. Therefore, risk factors were categorized into different risk levels taking into account their inherent causal relationships, which allowed the identification of critical risk factors. The applicability of the fuzzy Bayesian belief network-based risk assessment model was verified using a case study through a comparative analysis with the results from a fuzzy synthetic evaluation method. The comparison shows that the proposed risk assessment model is able to provide guidelines for an effective risk management process and ultimately to increase project performance in a complex environment such as international construction projects.
Guo, Z, Xiao, F, Sheng, B, Fei, H & Yu, S 2020, 'WiReader: Adaptive Air Handwriting Recognition Based on Commercial WiFi Signal', IEEE Internet of Things Journal, vol. 7, no. 10, pp. 10483-10494.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. In recent years, with the rapid development of Internet-of-Things (IoT) technologies, many intelligent sensing applications have emerged that realize contactless sensing and human-computer interaction (HCI). Handwriting recognition serves as a communication link between humans and computers. Previous handwriting recognition applications are usually based on images and sensors, which incur significant device overhead and are device dependent. Recently, the revolution in wireless signal sensing technology has laid the foundation for device-free intelligent handwriting recognition. In this article, we propose WiReader, an adaptive air handwriting recognition system based on wireless signals. WiReader utilizes ubiquitous commercial WiFi devices to process the collected channel state information (CSI), segments the data in combination with activity factors, and then transforms the original signal using the CSI-Ratio model. To address the feature extraction problem posed by handwriting, we apply cumulative principal components and a multilayer wavelet transform to the transformed signal. Finally, the energy feature matrix is generated and combined with a long short-term memory (LSTM) network to recognize different handwriting actions. Extensive real-world experiments show that WiReader achieves an average recognition accuracy of 90.64%, outperforming other applications across three scenarios, and is strongly robust to user location, user diversity, and different scenarios.
Gupta, A, Agrawal, RK, Kirar, JS, Kaur, B, Ding, W, Lin, C-T, Andreu-Perez, J & Prasad, M 2020, 'A hierarchical meta-model for multi-class mental task based brain-computer interfaces', Neurocomputing, vol. 389, pp. 207-217.
View/Download from: Publisher's site
View description>>
© 2019 In the last few years, many research works have been proposed on Brain-Computer Interfaces (BCI), which assist severely physically disabled persons to communicate directly with the help of electroencephalogram (EEG) signals generated by the thought process of the brain. Thought generation inside the brain is a dynamic process, and many thoughts occur within a small time window. Thus, there is a need for a BCI device that can distinguish these various thoughts simultaneously. In this research work, our previous binary-class mental task classification is extended to the multi-class mental task problem. The present work proposes a novel feature construction scheme for multi-class mental task classification. In the proposed method, features are extracted in two phases. In the first phase, the wavelet transform is used to decompose the EEG signal. In the second phase, each feature component obtained is represented compactly using eight parameters (statistical and uncertainty measures). After that, a set of relevant and non-redundant features is selected using linear regression, a multivariate feature selection approach. Finally, an optimal decision tree based support vector machine (ODT-SVM) classifier is used for multi-class mental task classification. The performance of the proposed method is evaluated on publicly available datasets for 3-class, 4-class, and 5-class mental task classification. Experimental results are compared with existing methods, and it is observed that the proposed scheme provides better classification accuracy than the existing methods for 3-class, 4-class, and 5-class mental task classification. The efficacy of the proposed method suggests that it may be helpful in developing BCI devices for multi-class classification.
Gupta, AK, Seal, A, Prasad, M & Khanna, P 2020, 'Salient Object Detection Techniques in Computer Vision—A Survey', Entropy, vol. 22, no. 10, pp. 1174-1174.
View/Download from: Publisher's site
View description>>
Detection and localization of regions of images that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability of automatic identification and segmentation of such salient image regions has immediate consequences for applications in the fields of computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly categorized into two categories based on their feature engineering mechanism: conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD, from both the conventional and the deep learning-based categories, are reviewed in detail. Relevant saliency modeling trends, with key issues, core techniques, and the scope for future research work, are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics considered for assessing the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.
Hämäläinen, RP, Miliszewska, I & Voinov, A 2020, 'Leadership in participatory modelling – Is there a need for it?', Environmental Modelling & Software, vol. 133, pp. 104834-104834.
View/Download from: Publisher's site
Han, Y, Deng, Y, Cao, Z & Lin, C-T 2020, 'An interval-valued Pythagorean prioritized operator-based game theoretical framework with its applications in multicriteria group decision making', Neural Computing and Applications, vol. 32, no. 12, pp. 7641-7659.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag London Ltd., part of Springer Nature. A multicriteria decision-making process explicitly evaluates multiple conflicting criteria in decision making. Conventional decision-making approaches assume that each agent is independent, but in reality each agent aims to maximize its own benefit, which negatively influences other agents' behaviors in a real-world competitive environment. In our study, we propose an interval-valued Pythagorean prioritized operator-based game theoretical framework to mitigate this cross-influence problem. The proposed framework considers both the prioritized levels among various criteria and the decision makers within five stages. Notably, interval-valued Pythagorean fuzzy sets are used to express the uncertainty of experts, and game theory is applied to optimize the combination of strategies in interactive situations. Additionally, we provide illustrative examples to demonstrate the application of our proposed framework. In summary, we provide a human-inspired framework to represent the behavior of group decision making in an interactive environment, which has the potential to simulate the process of realistic human thinking.
Hellmann, A, Ang, L & Sood, S 2020, 'Towards a conceptual framework for analysing impression management during face-to-face communication', Journal of Behavioral and Experimental Finance, vol. 25, pp. 100265-100265.
View/Download from: Publisher's site
Hesam-Shariati, N, Chang, W-J, McAuley, JH, Booth, A, Trost, Z, Lin, C-T, Newton-John, T & Gustin, SM 2020, 'The Analgesic Effect of Electroencephalographic Neurofeedback for People With Chronic Pain: Protocol for a Systematic Review and Meta-analysis', JMIR Research Protocols, vol. 9, no. 10, pp. e22821-e22821.
View/Download from: Publisher's site
View description>>
Background Chronic pain is a global health problem, affecting around 1 in 5 individuals in the general population. The understanding of the key role of functional brain alterations in the generation of chronic pain has led researchers to focus on pain treatments that target brain activity. Electroencephalographic neurofeedback attempts to modulate maladaptive electroencephalography frequency powers to decrease chronic pain. Although several studies have provided promising evidence, the effect of electroencephalographic neurofeedback on chronic pain is uncertain. Objective This systematic review aims to synthesize the evidence from randomized controlled trials to evaluate the analgesic effect of electroencephalographic neurofeedback. In addition, we will synthesize the findings of nonrandomized studies in a narrative review. Methods We will apply the search strategy in 5 electronic databases (Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, PsycInfo, and CINAHL) for published studies and in clinical trial registries for completed unpublished studies. We will include studies that used electroencephalographic neurofeedback as an intervention for people with chronic pain. Risk-of-bias tools will be used to assess the methodological quality of the included studies. We will include randomized controlled trials if they have compared electroencephalographic neurofeedback with any other intervention or placebo control. The data from randomized controlled trials will be aggregated to perform a meta-analysis for quantitative synthesis. The primary outcome measure is pain intensity assessed by self-report scales. Secondary outcome measures include depressive s...
Hesam-Shariati, N, Newton-John, T, Singh, AK, Tirado Cortes, CA, Do, T-TN, Craig, A, Middleton, JW, Jensen, MP, Trost, Z, Lin, C-T & Gustin, SM 2020, 'Evaluation of the Effectiveness of a Novel Brain-Computer Interface Neuromodulative Intervention to Relieve Neuropathic Pain Following Spinal Cord Injury: Protocol for a Single-Case Experimental Design With Multiple Baselines', JMIR Research Protocols, vol. 9, no. 9, pp. e20979-e20979.
View/Download from: Publisher's site
View description>>
Background Neuropathic pain is a debilitating secondary condition for many individuals with spinal cord injury. Spinal cord injury neuropathic pain often is poorly responsive to existing pharmacological and nonpharmacological treatments. A growing body of evidence supports the potential for brain-computer interface systems to reduce spinal cord injury neuropathic pain via electroencephalographic neurofeedback. However, further studies are needed to provide more definitive evidence regarding the effectiveness of this intervention. Objective The primary objective of this study is to evaluate the effectiveness of a multiday course of a brain-computer interface neuromodulative intervention in a gaming environment to provide pain relief for individuals with neuropathic pain following spinal cord injury. Methods We have developed a novel brain-computer interface-based neuromodulative intervention for spinal cord injury neuropathic pain. Our brain-computer interface neuromodulative treatment includes an interactive gaming interface, and a neuromodulation protocol targeted to suppress theta (4-8 Hz) and high beta (20-30 Hz) frequency powers, and enhance alpha (9-12 Hz) power. We will use a single-case experimental design with multiple baselines to examine the effectiveness of our self-developed brain-computer interface neuromodulative intervention for the treatment of spinal cord injury neuropathic pain. We will recruit 3 participants with spinal cord injury neuropathic pain. Each participant will be randomly allocated to a different baseline phase (ie, 7, 10, or 14 days), which will then be followed by 20 sessions of a 30-minute brain-computer interface neuromodulative interventi...
Hsu, TW, Pare, S, Meena, MS, Jain, DK, Li, DL, Saxena, A, Prasad, M & Lin, CT 2020, 'An Early Flame Detection System Based on Image Block Threshold Selection Using Knowledge of Local and Global Feature Analysis', Sustainability, vol. 12, no. 21, pp. 8899-8899.
View/Download from: Publisher's site
View description>>
Fire is a hazard that damages properties and destroys forests. Many researchers are working on early warning systems, which considerably minimize the consequences of fire damage. However, many existing image-based fire detection systems perform well only in a particular setting. A general framework that works under realistic conditions is proposed in this paper. This approach filters out image blocks based on thresholds of different temporal and spatial features: it starts by dividing the image into blocks and extracting flame blocks from the image foreground and background, and candidate blocks are then analyzed to identify local features of color, source immobility, and flame flickering. Each local feature filter resolves different false-positive fire cases. Filtered blocks are further analyzed by a global analysis that extracts flame texture and flame reflection in surrounding blocks. Sequences of successful detections are buffered by a decision alarm system to reduce errors due to external camera influences. The proposed algorithms have low computation time. Through a sequence of experiments, the result is consistent with the empirical evidence and shows that the detection rate of the proposed system exceeds that of previous studies while reducing false alarm rates under various environments.
Huang, C, Yao, L, Wang, X, Benatallah, B & Zhang, X 2020, 'Software expert discovery via knowledge domain embeddings in a collaborative network', Pattern Recognition Letters, vol. 130, pp. 46-53.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Community Question Answering (CQA) websites can be regarded as the major venues for knowledge sharing and the most effective way of exchanging knowledge at present. Considering that a massive number of users participate online and generate huge amounts of data, managing this knowledge systematically can be challenging. Expert recommendation is one of the major challenges, as it highlights users in CQA with potential expertise, which may help match unresolved questions with existing high-quality answers and may also help external services, such as human resource systems, evaluate their candidates. In this work, we propose to explore experts in CQA websites. We take advantage of recent distributed word representation technology to summarize text chunks and, from a semantic view, exploit the relationships between natural language phrases to extract latent knowledge domains. Within these domains, users' expertise is determined based on their historical performance, and a ranking can be computed to give recommendations accordingly. In particular, Stack Overflow is chosen as our dataset to test and evaluate our work, where comprehensive experiments show the competence of our approach.
Huang, L, Zhang, G & Yu, S 2020, 'A Data Storage and Sharing Scheme for Cyber-Physical-Social Systems', IEEE Access, vol. 8, pp. 31471-31480.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. A Cyber-Physical-Social System (CPSS) provides users with secure and high-quality mobile service applications to share and exchange data in the cyberspace and the physical world. With the explosive growth of data, it is necessary to introduce cloud storage services, which allow devices to frequently resort to the cloud for data storage and sharing, into CPSS. In this paper, we propose a data storage and sharing scheme for CPSS with the help of cloud storage services. Since data integrity assurance is an inevitable problem in cloud storage, we first design a secure and efficient data storage scheme based on public auditing and bilinear maps, which also ensures the security of the verification. To meet the real-time and reliability requirements of the CPSS, the rewards of timeliness incentive and effectiveness incentive are considered in the scheme. Secondly, based on the proposed storage scheme and ElGamal encryption, we propose a lightweight access model for users to access the final data processed by the cloud server. We formally prove the security of the proposed scheme and conduct a performance evaluation to validate its high efficiency. The experimental results show that the proposed scheme has lower communication and access overheads compared to the technique CDS.
Huang, L, Zhou, J, Zhang, G, Sun, J, Wei, T, Yu, S & Hu, S 2020, 'IPANM: Incentive Public Auditing Scheme for Non-Manager Groups in Clouds', IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 2, pp. 1-1.
View/Download from: Publisher's site
View description>>
Cloud storage services give users great facility in data management, such as data collection, storage, and sharing, but also bring some potential security hazards. Of utmost importance is how to ensure the integrity of data files stored in the cloud, particularly for user groups without trusted managers. The existing literature focuses on integrity checking for groups with managers who hold many permissions. To overcome the lack of public auditing for non-manager user groups in clouds, we develop a novel framework, IPANM, that integrates (t, n) threshold technology, blinding technology, and an incentive mechanism to realize an incentive privacy-preserving public auditing scheme. In IPANM, data integrity is guaranteed by our (t, n) threshold signature based public auditing, and data privacy during public auditing is protected by the blinding technology. The generation of signatures can be accelerated by our blockchain-aided incentive mechanism, which mobilizes the initiative of signers by rewarding those who contribute to signature generation. We formally prove the security of IPANM and conduct numerical analysis and an evaluation study to validate its high efficiency. The experimental results demonstrate that IPANM has lower storage, communication, and computation overheads compared to the state-of-the-art techniques IAID-PDP and NPP.
Huang, Z, Huang, L, Wang, C, Zhu, S, Qi, X, Chen, Y, Zhang, Y, Cowley, MA, Veldhuis, JD & Chen, C 2020, 'Dapagliflozin restores insulin and growth hormone secretion in obese mice', Journal of Endocrinology, vol. 245, no. 1, pp. 1-12.
View/Download from: Publisher's site
View description>>
The well-documented hormonal disturbance in the general obese population is characterised by an increase in insulin secretion and a decrease in growth hormone (GH) secretion. Such hormonal disturbance promotes an increase in fat mass, which exacerbates obesity and accelerates the development of insulin resistance and type 2 diabetes. While the pathological consequence is alarming, pharmaceutical approaches attempting to correct such hormonal disturbance remain limited. By applying an emerging anti-diabetic drug, the sodium-glucose cotransporter 2 inhibitor dapagliflozin (1 mg/kg/day for 10 weeks), to a hyperphagic obese mouse model, we observed a significant improvement in insulin and GH secretion as early as 4 weeks after the initiation of the treatment. Restoration of the pathological disturbance of insulin and GH secretion reduced fat accumulation and preserved lean body mass in the obese animal model. This phenotypic improvement was accompanied by concurrent improvements in glucose and lipid metabolism, insulin sensitivity, and the expression of metabolic genes regulated by insulin and GH. In conclusion, 10 weeks of treatment with dapagliflozin effectively reduces hyperinsulinemia and restores pulsatile GH secretion in hyperphagic obese mice, with considerable improvement in lipid and glucose metabolism. The promising outcomes of this study may provide insights into drug interventions that correct hormonal disturbance in obesity to delay diabetes progression.
Hussain, T, Muhammad, K, Ullah, A, Cao, Z, Baik, SW & de Albuquerque, VHC 2020, 'Cloud-Assisted Multiview Video Summarization Using CNN and Bidirectional LSTM', IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 77-86.
View/Download from: Publisher's site
Hussain, W, Sohaib, O, Naderpour, M & Gao, H 2020, 'Cloud Marginal Resource Allocation: A Decision Support Model.', Mob. Networks Appl., vol. 25, no. 4, pp. 1418-1433.
View/Download from: Publisher's site
View description>>
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. One of the significant challenges for cloud providers is how to manage resources wisely and how to form a viable service level agreement (SLA) with consumers to avoid any violations or penalties. Some consumers make an agreement for a fixed amount of resources, these being the resources required to execute their business. Consumers may need additional resources on top of these fixed resources, known as marginal resources, that are only consumed and paid for in case of an increase in business demand. In such contracts, both parties agree on a pricing model in which a consumer pays upfront only for the fixed resources and pays for the marginal resources when they are used. Marginal resource allocation is a challenge for service providers, particularly small- to medium-sized ones, as it can affect the usage of their resources and consequently their profits. This paper proposes a novel marginal resource allocation decision support model to assist cloud providers in managing cloud SLAs before their execution, covering all possible scenarios, including whether a consumer is new or not and whether the consumer requests the same or different marginal resources. The model relies on the capabilities of the user-based collaborative filtering method with an enhanced top-k nearest neighbor algorithm and a fuzzy logic system to make a decision. The proposed framework assists cloud providers in managing their resources in an optimal way and avoiding violations or penalties. Finally, the performance of the proposed model is demonstrated through a cloud scenario, which shows that our approach can assist cloud providers in managing their resources wisely to avoid violations.
Huynh, P, Phan, KT, Liu, B & Ross, R 2020, 'Throughput Analysis of Buffer-Aided Decode-and-Forward Wireless Relaying with RF Energy Harvesting', Sensors, vol. 20, no. 4, pp. 1222-1222.
View/Download from: Publisher's site
View description>>
In this paper, we investigated a buffer-aided decode-and-forward (DF) wireless relaying system over fading channels, where the source and relay harvest radio-frequency (RF) energy from a power station for data transmissions. We derived exact expressions for end-to-end throughput considering half-duplex (HD) and full-duplex (FD) relaying schemes. The numerical results illustrate the throughput and energy efficiencies of the relaying schemes under different self-interference (SI) cancellation levels and relay deployment locations. It was demonstrated that throughput-optimal relaying is not necessarily energy efficiency-optimal. The results provide guidance on optimal relaying network deployment and operation under different performance criteria.
Islam, MR, Liu, S, Wang, X & Xu, G 2020, 'Deep learning for misinformation detection on online social networks: a survey and new perspectives', Social Network Analysis and Mining, vol. 10, no. 1.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag GmbH Austria, part of Springer Nature. Recently, the use of social networks such as Facebook, Twitter, and Sina Weibo has become an inseparable part of our daily lives. They are considered a convenient platform for users to share personal messages, pictures, and videos. However, while people enjoy social networks, many deceptive activities such as fake news or rumors can mislead users into believing misinformation. Moreover, the spread of massive amounts of misinformation in social networks has become a global risk. Therefore, misinformation detection (MID) in social networks has gained a great deal of attention and is considered an emerging area of research interest. We find that several studies related to MID have introduced new research problems and techniques. While important, the automated detection of misinformation is difficult to accomplish, as it requires an advanced model to understand how related or unrelated the reported information is when compared to real information. Existing studies have mainly focused on three broad categories of misinformation: false information, fake news, and rumor detection. Accordingly, we present a comprehensive survey of automated misinformation detection covering (i) false information, (ii) rumors, (iii) spam, (iv) fake news, and (v) disinformation. We provide a state-of-the-art review of MID where deep learning (DL) is used to automatically process data and create patterns to make decisions, not only extracting global features but also achieving better results. We further show that DL is an effective and scalable technique for state-of-the-art MID. Finally, we suggest several open issues that currently limit real-world implementation and point to future directions along this dimension.
Islam, MR, Lu, H, Hossain, J, Islam, MR & Li, L 2020, 'Multiobjective Optimization Technique for Mitigating Unbalance and Improving Voltage Considering Higher Penetration of Electric Vehicles and Distributed Generation', IEEE Systems Journal, vol. 14, no. 3, pp. 3676-3686.
View/Download from: Publisher's site
View description>>
© 2007-2012 IEEE. The increasing penetration of distributed generations (DGs) and electric vehicles (EVs) offers several opportunities but also introduces many challenges for distribution system operators (DSOs) regarding power quality. This article investigates network performance under uncoordinated DG and EV distribution. It considers power quality-related performance measures, such as neutral current, energy loss, voltage imbalance, and bus voltage, as a multiobjective optimization problem. The differential evolution optimization algorithm is employed to solve this multiobjective problem and coordinate EVs and DGs in a distribution grid. The proposed method allows DSOs to jointly optimize the phase sequence and the optimal dispatch of DGs to improve the network's performance. If the network requires further improvement, the EV charging or discharging rate is coordinated for a particular location. The efficacy of the proposed method is tested in an Australian low-voltage distribution grid considering the amount of imbalance due to higher penetration of DG and EV. It is observed that the proposed method reduces the voltage unbalance factor by up to 98.24%, the neutral current by up to 94%, and energy loss by 59.45%, and improves bus voltage by 10.42%.
Islam, MR, Lu, H, Islam, MR, Hossain, J & Li, L 2020, 'An IoT- Based Decision Support Tool for Improving the Performance of Smart Grids Connected with Distributed Energy Sources and Electric Vehicles', IEEE Transactions on Industry Applications, vol. 56, no. 4, pp. 1-1.
View/Download from: Publisher's site
Jafarzadeh, M, Wu, Y-D, Sanders, YR & Sanders, BC 2020, 'Randomized benchmarking for qudit Clifford gates', New Journal of Physics, vol. 22, no. 6, pp. 063014-063014.
View/Download from: Publisher's site
View description>>
Abstract We introduce unitary-gate randomized benchmarking (URB) for qudit gates by extending single- and multi-qubit URB to single- and multi-qudit gates. Specifically, we develop a qudit URB procedure that exploits unitary 2-designs. Furthermore, we show that our URB procedure is not simply extracted from the multi-qubit case by equating qudit URB to URB of the symmetric multi-qubit subspace. Our qudit URB is elucidated by using pseudocode, which facilitates incorporating into benchmarking applications.
Jahangoshai Rezaee, M, Yousefi, S, Eshkevari, M, Valipour, M & Saberi, M 2020, 'Risk analysis of health, safety and environment in chemical industry integrating linguistic FMEA, fuzzy inference system and fuzzy DEA', Stochastic Environmental Research and Risk Assessment, vol. 34, no. 1, pp. 201-218.
View/Download from: Publisher's site
Jiang, P, Li, R, Lu, H & Zhang, X 2020, 'Modeling of electricity demand forecast for power system', Neural Computing and Applications, vol. 32, no. 11, pp. 6857-6875.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag London Ltd., part of Springer Nature. The emerging complex circumstances caused by economy, technology, and government policy, together with the requirement of low-carbon development of the power grid, lead to many challenges in power system coordination and operation. Real-time scheduling of electricity generation requires accurate electricity demand forecasts for a range of lead times. In order to better capture the nonlinear and non-stationary characteristics and the seasonal cycles of future electricity demand data, a new integrated model is developed and successfully applied to electricity demand forecasting in this paper. The proposed model incorporates the adaptive Fourier decomposition method, a new signal preprocessing technique, to extract the useful elements from the original electricity demand series by filtering out noise. The seasonal term present in the decomposed series is then eliminated through a seasonal adjustment method, in which seasonal indexes are calculated and later multiplied back into the forecasts to restore the final forecast. In addition, the recently proposed moth-flame optimization algorithm is used to select suitable parameters for the least squares support vector machine that generates the forecasts. Finally, case studies of Australia demonstrate the efficacy and feasibility of the proposed integrated model, which can also provide a better modeling concept for electricity demand prediction over different forecasting horizons.
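The seasonal-adjustment step this abstract describes (seasonal indexes are divided out before forecasting and multiplied back in afterwards) can be sketched as follows; the toy quarterly series and the ratio-to-mean index calculation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def seasonal_indexes(series, period):
    """Average ratio of each season's observations to the overall mean."""
    series = np.asarray(series, dtype=float)
    n = len(series) - len(series) % period          # trim to whole cycles
    cycles = series[:n].reshape(-1, period)
    return cycles.mean(axis=0) / series[:n].mean()  # one index per season

# Toy quarterly demand with a repeating seasonal pattern (two full years).
demand = np.array([10, 20, 30, 40, 12, 22, 32, 42], dtype=float)
idx = seasonal_indexes(demand, period=4)

deseasonalized = demand / np.tile(idx, 2)   # remove the seasonal term before forecasting
restored = deseasonalized * np.tile(idx, 2) # multiply the indexes back to restore scale
assert np.allclose(restored, demand)
```

Dividing out and multiplying back the same indexes is an identity on the training data; the point is that the forecasting model sees the deseasonalized series, and only its forecasts are rescaled.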
Jiang, Y, Gu, X, Wu, D, Hang, W, Xue, J, Qiu, S & Chin-Teng, L 2020, 'A Novel Negative-Transfer-Resistant Fuzzy Clustering Model with a Shared Cross-Domain Transfer Latent Space and its Application to Brain CT Image Segmentation', IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 18, no. 1, pp. 1-1.
View/Download from: Publisher's site
View description>>
Traditional clustering algorithms for medical image segmentation can only achieve satisfactory clustering performance under relatively ideal conditions, in which there is adequate data from the same distribution, and the data is rarely disturbed by noise or outliers. However, a sufficient amount of medical images with representative manual labels are often not available, because medical images are frequently acquired with different scanners (or different scan protocols) or polluted by various noises. Transfer learning improves learning in the target domain by leveraging knowledge from related domains. Given some target data, the performance of transfer learning is determined by the degree of relevance between the source and target domains. To achieve positive transfer and avoid negative transfer, a negative-transfer-resistant mechanism is proposed that computes the weight of transferred knowledge. A negative-transfer-resistant fuzzy clustering model with a shared cross-domain transfer latent space (called NTR-FC-SCT) is then proposed by integrating the negative-transfer-resistant mechanism and maximum mean discrepancy (MMD) into the framework of fuzzy c-means clustering. Experimental results show that the proposed NTR-FC-SCT model outperforms several traditional non-transfer and related transfer clustering algorithms.
Jin, D, Zhang, B, Song, Y, He, D, Feng, Z, Chen, S, Li, W & Musial, K 2020, 'ModMRF: A modularity-based Markov Random Field method for community detection', Neurocomputing, vol. 405, pp. 218-228.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. Complex networks are widely used in research in the social and biological fields. Analyzing the real community structure in networks is key to the study of complex networks. Modularity optimization is one of the most popular techniques in community detection. However, due to its greedy characteristic, it produces a large number of incorrect partitions and more communities than exist in reality. Existing methods address this problem by using modularity as a Hamiltonian at finite temperature. Nevertheless, modularity is not formalized as a statistical model in these methods, so many statistical inference techniques cannot be applied. Moreover, these methods use the sum-product version of belief propagation (BP), whose performance is not as good as the max-sum version, since it calculates per-variable marginal probabilities rather than the joint probability. To address these issues, we propose a novel Markov Random Field (MRF) method that formalizes modularity as an energy function, using the rich structures of MRFs to represent the properties and constraints of this problem, and uses max-sum BP to infer model parameters. To analyze our method and compare it with existing ones, we conducted experiments on both real-world and synthetic networks with ground-truth communities, showing that the new method outperforms the state-of-the-art methods.
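The modularity that this abstract formalizes as an MRF energy function is, in its classical Newman form, computable directly from the adjacency matrix; a minimal sketch follows, where the two-triangle toy graph is an invented example rather than one of the paper's networks.

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)."""
    adj = np.asarray(adj, dtype=float)
    degrees = adj.sum(axis=1)
    two_m = adj.sum()                               # twice the number of edges
    same = np.equal.outer(communities, communities) # delta(c_i, c_j)
    return ((adj - np.outer(degrees, degrees) / two_m) * same).sum() / two_m

# Two triangles joined by a single edge: a clear two-community structure.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1

good = modularity(adj, np.array([0, 0, 0, 1, 1, 1]))  # the true split
bad = modularity(adj, np.array([0, 1, 0, 1, 0, 1]))   # an arbitrary split
assert good > bad
```

The true split scores Q = 5/14 ≈ 0.357 here, while the arbitrary split goes negative, which is the quantity a modularity-based energy function rewards.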
Kalantar, B, Ueda, N, Al-Najjar, HAH & Halin, AA 2020, 'Assessment of Convolutional Neural Network Architectures for Earthquake-Induced Building Damage Detection based on Pre- and Post-Event Orthophoto Images', Remote Sensing, vol. 12, no. 21, pp. 3529-3529.
View/Download from: Publisher's site
View description>>
In recent years, remote-sensing (RS) technologies have been used together with image processing and traditional techniques in various disaster-related works. Among these is detecting building damage inflicted by earthquakes from orthophoto imagery. Automatic and visual techniques are the typical methods for producing building damage maps from RS images. The visual technique, however, is time-consuming due to manual sampling. The automatic method is able to detect damaged buildings by extracting defect features. However, various design methods and widely changing real-world conditions, such as shadow and light changes, pose challenges to the widespread adoption of automatic methods. As a potential solution to such challenges, this research proposes the adoption of deep learning (DL), specifically convolutional neural networks (CNNs), which have a high ability to learn features automatically, to identify damaged buildings from pre- and post-event RS imagery. Since RS data revolves around imagery, CNNs can arguably be most effective at automatically discovering relevant features, avoiding the need for feature engineering based on expert knowledge. In this work, we focus on orthophoto imagery for damaged-building detection, specifically for (i) background, (ii) no damage, (iii) minor damage, and (iv) debris classifications. The gist is to uncover the CNN architecture that will work best for this purpose. To this end, three CNN models, namely the twin model, fusion model, and composite model, are applied to the pre- and post-event orthophoto imagery collected from the 2016 Kumamoto earthquake, Japan. The robustness of the models was evaluated using four evaluation metrics, namely overall accuracy (OA), producer accuracy (PA), user accuracy (UA), and F1 score. According to the obtained results, the twin model achieved higher accuracy (OA = 76.86%; F1 score = 0.761) compared to the fusion model (OA = 72.27%; F1...
Khalilpour, KR, Pace, R & Karimi, F 2020, 'Retrospective and prospective of the hydrogen supply chain: A longitudinal techno-historical analysis', International Journal of Hydrogen Energy, vol. 45, no. 59, pp. 34294-34315.
View/Download from: Publisher's site
View description>>
© 2020 Hydrogen Energy Publications LLC The objective of this study was to investigate the evolution of hydrogen research and its international scientific collaboration network. From the Scopus database, 58,006 relevant articles, published from 1935 until mid-2018, were retrieved. To review this massive volume of publication records, we took a scientometric network analysis approach and investigated the social network of the publication contents based on keyword co-occurrence as well as international collaboration ties. An interesting observation is that although publications on hydrogen have appeared since 1935, the growth of this research field ignited with the Kyoto Protocol of 1997. The publication profile reveals that more than 93% of the existing records have been published over the last two decades. More recently, the accelerated growth of renewables has further motivated hydrogen research, with almost 36,000 academic records having been indexed from 2010 until mid-2018. This accounts for ~62% of the total historical publications on hydrogen. The conventional hydrogen production pathway is fossil fuel-based, involving fossil fuel reforming for synthesis gas generation. The keyword analysis, however, shows a paradigm shift in hydrogen generation toward renewables. While all components of hydrogen supply chain research are now growing, the topic areas of biohydrogen and photocatalysis seem to be growing the fastest. Analysis of the international collaboration network also reveals a strong correlation between the growth of collaboration ties in hydrogen research and publication output. Until the 1970s, only 25 countries had collaborated, whereas this had reached 108 countries as of 2018, with over 17,500 collaboration ties. The collaborations have also evolved into a substantially more integrated network, with a few strong clusters involving China, the United States, Germany, and Japan. The longitudinal network evolution maps also reveal a shift, over the last two decades, from ...
Khan, HU, ARUYA, JOYA & Gill, AQ 2020, 'Web 2.0 Technologies Adoption Barriers for External Contacts and Participation: A Case Study of Federal Establishment of Africa', International Journal of Business Information Systems, vol. 1, no. 1, pp. 1-1.
View/Download from: Publisher's site
Khan, NU, Wan, W & Yu, S 2020, 'Location-Based Social Network’s Data Analysis and Spatio-Temporal Modeling for the Mega City of Shanghai, China', ISPRS International Journal of Geo-Information, vol. 9, no. 2, pp. 76-76.
View/Download from: Publisher's site
View description>>
The aim of the current study is to analyze and extract useful patterns from Location-Based Social Network (LBSN) data in Shanghai, China, using different temporal and spatial analysis techniques, along with specific check-in venue categories. This article explores the applications of LBSN data by examining the association between time, frequency of check-ins, and venue classes, based on users’ check-in behavior and the city’s characteristics. The venue classes are created and categorized using the nature of the physical locations. We acquired the geo-location information from Sina-Weibo (Weibo), one of the most popular Chinese microblogging platforms. The extracted data are translated into the Geographical Information Systems (GIS) format, and after analysis the results are presented in the form of statistical graphs, tables, and spatial heatmaps. SPSS is used for the temporal analysis, and Kernel Density Estimation (KDE) is applied to users’ check-ins with the help of ArcMap and OpenStreetMap for the spatial analysis. The findings show various patterns, including more frequent use of LBSN while visiting entertainment and shopping locations, a substantial number of check-ins from educational institutions, and density extending to suburban areas mainly because of educational institutions and residential areas. Through the analytical results, usage patterns based on hours of the day, days of the week, and an entire six months, including by gender, venue category, and frequency distribution of the classes, as well as check-in density all over Shanghai city, are thoroughly demonstrated.
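The Kernel Density Estimation step applied to check-in coordinates can be sketched with a plain Gaussian kernel; the synthetic check-in cluster, the grid points, and the bandwidth below are assumptions for illustration, not the study's data or its ArcMap workflow.

```python
import numpy as np

def kernel_density(points, grid, bandwidth=0.5):
    """Gaussian kernel density estimate over 2-D locations (e.g. lon/lat pairs)."""
    points = np.asarray(points, dtype=float)   # shape (n, 2): check-in coordinates
    grid = np.asarray(grid, dtype=float)       # shape (g, 2): evaluation locations
    diff = grid[:, None, :] - points[None, :, :]
    sq = (diff ** 2).sum(axis=-1) / (2 * bandwidth ** 2)
    return np.exp(-sq).sum(axis=1) / (len(points) * 2 * np.pi * bandwidth ** 2)

# Synthetic check-ins clustered near a "campus" at (0, 0), plus one remote outlier.
rng = np.random.default_rng(0)
checkins = np.concatenate([rng.normal(0, 0.3, size=(50, 2)), [[5.0, 5.0]]])
grid = np.array([[0.0, 0.0], [5.0, 5.0], [2.5, 2.5]])
density = kernel_density(checkins, grid)
assert density[0] > density[1] > density[2]   # densest at the cluster centre
```

Evaluating the same estimator over a regular lattice of grid points and colouring the cells by density is exactly what a KDE heatmap visualizes.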
Khan, NU, Wan, W, Yu, S, Muzahid, AAM, Khan, S & Hou, L 2020, 'A Study of User Activity Patterns and the Effect of Venue Types on City Dynamics Using Location-Based Social Network Data', ISPRS International Journal of Geo-Information, vol. 9, no. 12, pp. 733-733.
View/Download from: Publisher's site
View description>>
The main purpose of this research is to study the effect of various types of venues on the density distribution of residents and to model check-in data from a Location-Based Social Network for the city of Shanghai, China, using a combination of temporal, spatial, and visualization techniques and classifying users’ check-ins into different venue categories. This article investigates the use of Weibo for big data analysis, and its efficiency across categories compared with manually collected datasets, by exploring the relation between the time, frequency, place, and category of check-ins based on location characteristics and their contributions. The data used in this research were acquired from a popular Chinese microblogging platform called Weibo, preprocessed to retain the most significant and relevant attributes for the current study, transformed into Geographical Information Systems format, analyzed and, finally, presented with the help of graphs, tables, and heat maps. Kernel Density Estimation was used for the spatial analysis. The venue categorization was based on the nature of the physical locations within the city, comparing the venue name extracted from the Weibo dataset with its function, such as education for schools or shopping for malls. The results thoroughly demonstrate usage patterns from hours to days, venue categories and the frequency distribution of check-ins into these categories, the density of check-ins within Shanghai, and the contribution of each venue category to its diversity, uncovering interesting spatio-temporal patterns, including the frequency and density of users from different venues at different time intervals, and the significance of using geo-data from Weibo to study human behavior in a variety of studies such as education, tourism, and city dynamics based on location-based social networks. Our findings uncover various aspects of activity patterns in human behavior, the significance of venue classes and their effects in Shanghai...
Khatibi, R & Saberi, M 2020, 'Bio-climatic classification of Iran by multivariate statistical methods', SN Applied Sciences, vol. 2, no. 10.
View/Download from: Publisher's site
Khlaifat, N, Altaee, A, Zhou, J, Huang, Y & Braytee, A 2020, 'Optimization of a Small Wind Turbine for a Rural Area: A Case Study of Deniliquin, New South Wales, Australia', Energies, vol. 13, no. 9, pp. 2292-2292.
View/Download from: Publisher's site
View description>>
The performance of a wind turbine is affected by wind conditions and blade shape. This study aimed to optimize the performance of a 20 kW horizontal-axis wind turbine (HAWT) under local wind conditions at Deniliquin, New South Wales, Australia. Ansys Fluent (version 18.2, Canonsburg, PA, USA) was used to investigate the aerodynamic performance of the HAWT. The effects of four Reynolds-averaged Navier–Stokes turbulence models on predicting flows under separation conditions were examined. The transition SST model had the best agreement with the NREL CER data. Then, the aerodynamic shape of the rotor was optimized to maximize the annual energy production (AEP) in the Deniliquin region. Statistical wind analysis was applied to define the Weibull shape and scale parameters, which were 2.096 and 5.042 m/s, respectively. HARP_Opt (National Renewable Energy Laboratory, Golden, CO, USA) was enhanced with design variables concerning the shape of the blade, rated rotational speed, and pitch angle. The pitch angle remained at 0° while rising wind speed increased the rotor speed to 148.4482 rpm at the rated speed. This optimization improved the AEP by 9.068% compared to the original NREL design.
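Using the reported Weibull parameters (shape 2.096, scale 5.042 m/s), a rough AEP estimate can be sketched by integrating a turbine power curve against the wind-speed density. The piecewise power curve below (cut-in, rated, and cut-out speeds) is a hypothetical placeholder, not the paper's CFD-derived curve.

```python
import numpy as np

SHAPE, SCALE = 2.096, 5.042   # Weibull parameters reported for Deniliquin

def weibull_pdf(v, k=SHAPE, c=SCALE):
    """Wind-speed probability density under a Weibull distribution (v in m/s)."""
    v = np.asarray(v, dtype=float)
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def power_kw(v, rated_kw=20.0, cut_in=3.0, rated_v=10.0, cut_out=25.0):
    """Hypothetical piecewise power curve for a 20 kW HAWT (placeholder values)."""
    v = np.asarray(v, dtype=float)
    ramp = rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3  # cubic ramp-up
    p = np.where((v >= cut_in) & (v < rated_v), ramp, 0.0)
    return np.where((v >= rated_v) & (v <= cut_out), rated_kw, p)

# AEP in kWh: expected power (kW) under the wind distribution x 8760 hours/year.
v = np.linspace(0.0, 30.0, 3001)
dv = v[1] - v[0]
aep_kwh = float((power_kw(v) * weibull_pdf(v)).sum() * dv * 8760)
```

With a scale parameter near 5 m/s, most of the probability mass sits below the rated speed, which is why blade-shape optimization for the local wind regime matters for AEP.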
King, J-T, Prasad, M, Tsai, T, Ming, Y-R & Lin, C-T 2020, 'Influence of Time Pressure on Inhibitory Brain Control During Emergency Driving', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 11, pp. 4408-4414.
View/Download from: Publisher's site
View description>>
IEEE It is believed that failures to react to emergencies during driving are closely related to the inhibitory mechanism of the brain's operations. To investigate the role of this function in emergency driving, two virtual realistic driving conditions based on the stop signal task were designed, and the time limit was manipulated to increase stress in one condition. Behavioral and electroencephalography recordings from sixteen subjects were collected and analyzed. By comparing successful and unsuccessful stop trials with event-related spectral perturbation analysis, δ and θ band power increases in frontal and central areas were found to correlate with the brain's inhibitory control of driving. Moreover, β and γ band power in frontal and central areas showed greater increases under the stress condition. Time pressure during driving may thus adjust the operation of the brain's inhibitory control, benefiting people's ability to react to emergencies.
Kocaballi, AB, Ijaz, K, Laranjo, L, Quiroz, JC, Rezazadegan, D, Tong, HL, Willcock, S, Berkovsky, S & Coiera, E 2020, 'Envisioning an artificial intelligence documentation assistant for future primary care consultations: A co-design study with general practitioners', Journal of the American Medical Informatics Association, vol. 27, no. 11, pp. 1695-1704.
View/Download from: Publisher's site
View description>>
Abstract Objective The study sought to understand the potential roles of a future artificial intelligence (AI) documentation assistant in primary care consultations and to identify implications for doctors, patients, healthcare system, and technology design from the perspective of general practitioners. Materials and Methods Co-design workshops with general practitioners were conducted. The workshops focused on (1) understanding the current consultation context and identifying existing problems, (2) ideating future solutions to these problems, and (3) discussing future roles for AI in primary care. The workshop activities included affinity diagramming, brainwriting, and video prototyping methods. The workshops were audio-recorded and transcribed verbatim. Inductive thematic analysis of the transcripts of conversations was performed. Results Two researchers facilitated 3 co-design workshops with 16 general practitioners. Three main themes emerged: professional autonomy, human-AI collaboration, and new models of care. Major implications identified within these themes included (1) concerns with medico-legal aspects arising from constant recording and accessibility of full consultation records, (2) future consultations taking place out of the exam rooms in a distributed system involving empowered patients, (3) human conversation and empathy remaining the core tasks of doctors in any future AI-enabled consultations, and (4) questioning the current focus of AI initiatives on improved efficiency as opposed to patient care. ...
Kocaballi, AB, Quiroz, JC, Rezazadegan, D, Berkovsky, S, Magrabi, F, Coiera, E & Laranjo, L 2020, 'Responses of Conversational Agents to Health and Lifestyle Prompts: Investigation of Appropriateness and Presentation Structures', Journal of Medical Internet Research, vol. 22, no. 2, pp. e15823-e15823.
View/Download from: Publisher's site
View description>>
Background Conversational agents (CAs) are systems that mimic human conversations using text or spoken language. Their widely used examples include voice-activated systems such as Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. The use of CAs in health care has been on the rise, but concerns about their potential safety risks often remain understudied. Objective This study aimed to analyze how commonly available, general-purpose CAs on smartphones and smart speakers respond to health and lifestyle prompts (questions and open-ended statements) by examining their responses in terms of content and structure alike. Methods We followed a piloted script to present health- and lifestyle-related prompts to 8 CAs. The CAs’ responses were assessed for their appropriateness on the basis of the prompt type: responses to safety-critical prompts were deemed appropriate if they included a referral to a health professional or service, whereas responses to lifestyle prompts were deemed appropriate if they provided relevant information to address the problem prompted. The response structure was also examined according to information sources (Web search–based or precoded), response content style (informative and/or directive), confirmation of prompt recognition, and empathy. Results The 8 studied CAs provided in total 240 responses to 30 prompts. They collectively responded appropriately to 41% (46/112) of the safety-critical and 39% (37/96) of the lifestyle prompts. The ratio of appropriate responses deteriorated when safety-critical prompts were rephrased or when ...
Kong, L, Qu, W, Yu, J, Zuo, H, Chen, G, Xiong, F, Pan, S, Lin, S & Qiu, M 2020, 'Distributed Feature Selection for Big Data Using Fuzzy Rough Sets', IEEE Transactions on Fuzzy Systems, vol. 28, no. 5, pp. 846-857.
View/Download from: Publisher's site
Kridalukmana, R, Lu, HY & Naderpour, M 2020, 'A supportive situation awareness model for human-autonomy teaming in collaborative driving', Theoretical Issues in Ergonomics Science, vol. 21, no. 6, pp. 658-683.
View/Download from: Publisher's site
View description>>
Driving has become a collaborative activity and a form of human-autonomy teaming (HAT) with the addition of autonomy to the advanced driver assistance system (ADAS), which makes situational decisions and sensible actions (e.g., autopilot and collision avoidance). However, it has been identified that in many fatal road accidents involving collaborative driving, over-reliance on the ADAS becomes the primary factor. To overcome this issue, the underlying situation awareness (SA) concept is investigated to identify an appropriate SA model for collaborative driving that could impact the intelligent agent’s design in an HAT context. The formalization of existing SA model characteristics is defined and compared with those in collaborative driving. As a result, existing SA models are inadequate for explaining collaborative driving. Therefore, a new supportive SA (SSA) model is proposed. Based on the nature of this new model, applying transparency during SA development of the ADAS is suggested as a mechanism to comprehend ADAS behaviours. The proposed SA model is a significant expansion of multiple-agent SA models, and a transparent-based system can be a future direction of ADAS development to calibrate drivers’ trust.
La Paz, A, Merigó, JM, Powell, P, Ramaprasad, A & Syn, T 2020, 'Twenty‐five years of the Information Systems Journal: A bibliometric and ontological overview', Information Systems Journal, vol. 30, no. 3, pp. 431-457.
View/Download from: Publisher's site
View description>>
Abstract The Information Systems Journal (ISJ) published its first issue in 1991, and in 2015 the journal celebrated its 25th anniversary. This study presents an overview of the leading research trends in the papers the journal published during its first quarter-century via a bibliometric and ontological analysis. From a bibliometric perspective, the analysis considers the publication and citation structure of the journal. The study then develops a graphical analysis of the bibliographic material using visualization-of-similarities software that employs bibliographic coupling and cocitation analysis. The work produces an ontological framework of impact and analyses the journal's papers to qualitatively assess ISJ's impact. The results indicate that the journal has grown significantly over time and is now recognized as one of the leading journals in information systems. Yet challenges remain if the journal is to meet its aims of impacting and setting the agenda for the development of the Information Systems field.
Laccone, F, Malomo, L, Pérez, J, Pietroni, N, Ponchio, F, Bickel, B & Cignoni, P 2020, 'A bending-active twisted-arch plywood structure: computational design and fabrication of the FlexMaps Pavilion', SN Applied Sciences, vol. 2, no. 9, pp. 1505-9.
View/Download from: Publisher's site
View description>>
Bending-active structures are able to efficiently produce complex curved shapes from flat panels. The desired deformation of the panels derives from the proper selection of their elastic properties. Optimized panels, called FlexMaps, are designed such that, once they are bent and assembled, the resulting static equilibrium configuration matches a desired input 3D shape. The FlexMaps elastic properties are controlled by locally varying spiraling geometric mesostructures, which are optimized in size and shape to match specific bending requests, namely the global curvature of the target shape. The design pipeline starts from a quad mesh representing the input 3D shape, which defines the edge size and the total number of spirals: every quad will embed one spiral. Then, an optimization algorithm tunes the geometry of the spirals by using a simplified pre-computed rod model. This rod model is derived from a non-linear regression algorithm that approximates the non-linear behavior of solid FEM spiral models subject to hundreds of load combinations. This innovative pipeline has been applied to the project of a lightweight plywood pavilion named FlexMaps Pavilion, which is a single-layer piecewise twisted arch that fits a bounding box of 3.90 × 3.96 × 3.25 m. This case study serves to test the applicability of this methodology at the architectural scale. The structure is validated via FE analyses and the fabrication of the full-scale prototype.
Laengle, S, Merigó, JM, Modak, NM & Yang, J-B 2020, 'Bibliometrics in operations research and management science: a university analysis', Annals of Operations Research, vol. 294, no. 1-2, pp. 769-813.
View/Download from: Publisher's site
View description>>
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Many universities around the world have made important contributions in the field of operations research and management science. This article presents the most productive and influential universities between 1991 and 2015. To do so, we use the Web of Science database to search for the information usually regarded as the most relevant for scientific research. The results show that the leading universities come mainly from North America and Asia, especially from the USA and China. The Centre National de la Recherche Scientifique (CNRS) of France is the most productive institution, while the Massachusetts Institute of Technology (MIT) of the USA is the most influential one. The temporal evolution shows that the USA's dominance is waning while China is progressing quickly. The evaluation also reveals that Asian universities have outperformed North American universities during the last 5 years.
León-Castro, E, Espinoza-Audelo, LF, Merigó, JM, Gil-Lafuente, AM & Yager, RR 2020, 'The ordered weighted average inflation', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 2, pp. 1901-1913.
View/Download from: Publisher's site
Li, G, Feng, B, Zhou, H, Zhang, Y, Sood, K & Yu, S 2020, 'Adaptive service function chaining mappings in 5G using deep Q-learning', Computer Communications, vol. 152, pp. 305-315.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. With the introduction of Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) technologies, mobile network operators are able to provide on-demand Service Function Chaining (SFC) to meet various user needs. However, it is challenging to map multiple SFCs to substrate networks efficiently, particularly in a number of key scenarios of the forthcoming 5G, where user requests have different priorities and various resource demands. To this end, we first formulate the mapping of multiple SFCs with priorities as a multi-step Integer Linear Programming (ILP) problem, in which the mapping strategy (i.e., the objective function) in each step is configurable to improve the overall CPU and bandwidth resource utilization rates. Secondly, to solve the strategy selection problem in each step and alleviate the complexity of the ILP, we propose an adaptive deep Q-learning based SFC mapping approach (ADAP), where an agent learns to make decisions between two low-complexity heuristic SFC mapping algorithms. Finally, we conduct extensive simulations using multiple SFC requests with randomly generated CPU and bandwidth demands in a real-world substrate network topology. The results demonstrate that, compared with a single strategy or random selection of strategies under the ILP-based approach or the proposed heuristic algorithms, our ADAP approach can improve whole-system resource efficiency by scheduling these two simply designed heuristic algorithms properly after limited training episodes.
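The core idea of an agent learning which of two heuristic mapping algorithms to apply at each step can be illustrated with tabular Q-learning; the toy states, rewards, and transitions below are invented stand-ins for the paper's deep Q-learning environment over CPU and bandwidth utilization.

```python
import random

random.seed(1)
N_STATES, N_ACTIONS = 4, 2   # toy "network load" levels x {heuristic A, heuristic B}
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.2

def toy_reward(state, action):
    """Invented reward: heuristic A suits light load (states 0-1), B heavy load."""
    return 1.0 if (action == 0) == (state < 2) else -1.0

for _ in range(5000):
    s = random.randrange(N_STATES)
    if random.random() < EPS:                       # epsilon-greedy exploration
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
    r = toy_reward(s, a)
    s2 = random.randrange(N_STATES)                 # toy state transition
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])  # Q-learning update

# Greedy policy: which heuristic the agent would schedule in each load state.
policy = [max(range(N_ACTIONS), key=lambda x: Q[s][x]) for s in range(N_STATES)]
```

After training, the greedy policy applies heuristic A in the light-load states and heuristic B in the heavy-load ones, which is the scheduling behavior ADAP learns over its two heuristics.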
Li, G, Guo, G, Peng, S, Wang, C, Yu, S, Niu, J & Mo, J 2020, 'Matrix Completion via Schatten Capped p Norm', IEEE Transactions on Knowledge and Data Engineering, vol. PP, no. 99, pp. 1-1.
View/Download from: Publisher's site
View description>>
The low-rank matrix completion problem is fundamental in both machine learning and computer vision fields with many important applications, such as recommendation system, motion capture, face recognition, and image inpainting. In order to avoid solving the rank minimization problem which is NP-hard, several surrogate functions of the rank have been proposed in the literature. However, the matrix restored from the optimization problem based on the existing surrogate functions seriously deviates from the original one. In this paper, we first design a new non-convex Schatten capped p norm which generalizes several existing non-convex matrix norms and balances between the rank and the nuclear norm of the matrix. Then, a matrix completion method based on the Schatten capped p norm is proposed by exploiting the framework of the alternating direction method of multipliers. Meanwhile, the Schatten capped p norm regularized least squares subproblem is analyzed in detail and is solved explicitly. Finally, we evaluate the performance of the proposed matrix completion method based on extensive experiments in the field of image inpainting. All the experimental results demonstrate that the proposed method can indeed improve the accuracy of matrix completion compared with the existing methods.
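The capped singular-value surrogate this abstract describes can be illustrated directly on the singular values; this sketch uses one common form of the family, the sum of min(sigma_i, theta)^p, and the paper's exact definition may differ.

```python
import numpy as np

def schatten_capped_p(X, p=1.0, theta=np.inf):
    """Sum of min(sigma_i, theta)**p over the singular values of X.
    One common form of the capped Schatten norm family (illustrative only)."""
    s = np.linalg.svd(np.asarray(X, dtype=float), compute_uv=False)
    return float(np.sum(np.minimum(s, theta) ** p))

X = np.diag([3.0, 1.0, 0.5])                 # singular values are 3, 1, 0.5
nuclear = schatten_capped_p(X)               # p = 1, theta = inf: the nuclear norm
capped = schatten_capped_p(X, theta=1.0)     # capping limits large singular values
assert np.isclose(nuclear, 4.5)
assert np.isclose(capped, 2.5)
```

Capping at theta stops large singular values from dominating the penalty, so the surrogate behaves more like the rank for well-separated spectra while retaining the nuclear norm's shrinkage on small singular values.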
Li, H, Yang, Y, Dai, Y, Yu, S & Xiang, Y 2020, 'Achieving Secure and Efficient Dynamic Searchable Symmetric Encryption over Medical Cloud Data', IEEE Transactions on Cloud Computing, vol. 8, no. 2, pp. 484-494.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In medical cloud computing, a patient can remotely outsource her medical data to the cloud server. In this case, only authorized doctors are allowed to access the data since the medical data is highly sensitive. Before outsourcing, the data is commonly encrypted, where the corresponding secret key is sent to authorized doctors. However, performing searches on encrypted medical data is difficult without decryption. In this paper, we propose two Secure and Efficient Dynamic Searchable Symmetric Encryption (SEDSSE) schemes over medical cloud data. First, we utilize the secure k-Nearest Neighbor (kNN) and Attribute-Based Encryption (ABE) techniques to construct a dynamic searchable symmetric encryption scheme, which can achieve forward privacy and backward privacy simultaneously. These two security properties are vital and very challenging in the area of dynamic searchable symmetric encryption. Then, we propose an enhanced scheme to solve the key sharing problem which widely exists in kNN-based searchable encryption schemes. Compared with existing proposals, our schemes are better in terms of storage, search and updating complexity. Extensive experiments demonstrate the efficiency of our schemes on storage overhead, index building, trapdoor generating and query.
Li, K, Liu, AX & Yu, S 2020, 'Special issue on natural computation, fuzzy systems and knowledge discovery from the ICNC&FSKD 2017', Neurocomputing, vol. 393, pp. 112-114.
View/Download from: Publisher's site
Li, P, Guo, S, Yu, S & Zhuang, W 2020, 'Cross-Cloud MapReduce for Big Data', IEEE Transactions on Cloud Computing, vol. 8, no. 2, pp. 375-386.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geo-distributed cloud architecture that provides MapReduce services based on big data collected from end users all over the world. Existing work handles MapReduce jobs via a traditional computation-centric approach, in which all input data distributed across multiple clouds are aggregated to a virtual cluster that resides in a single cloud. The poor efficiency and high cost of this approach for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network-coding-based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission costs for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms existing works.
Li, Q, Cao, Z, Ding, W & Li, Q 2020, 'A multi-objective adaptive evolutionary algorithm to extract communities in networks', Swarm and Evolutionary Computation, vol. 52, pp. 100629-100629.
View/Download from: Publisher's site
Li, Q, Cao, Z, Tanveer, M, Pandey, HM & Wang, C 2020, 'A Semantic Collaboration Method Based on Uniform Knowledge Graph', IEEE Internet of Things Journal, vol. 7, no. 5, pp. 4473-4484.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The Semantic Internet of Things (SIoT) is the extension of the Internet of Things (IoT) with the Semantic Web, which aims to build an interoperable collaborative system to solve the heterogeneity problems in the IoT. However, the SIoT has the characteristics of both the IoT and the Semantic Web environment, and the corresponding semantic data present many new features. In this article, we analyze the characteristics of semantic data and propose the concept of a uniform knowledge graph (UKG), which is better suited to the SIoT environment. We then design a semantic collaboration method based on the UKG. It takes the UKG as the form of knowledge organization and representation, and provides a useful data basis for semantic collaboration by constructing semantic links that complete the semantic relations between different data sets, thereby achieving semantic collaboration in the SIoT. Our experiments show that the proposed method can better analyze and understand the semantics of user requirements and provide more satisfactory outcomes.
Li, Q, Cao, Z, Tanveer, M, Pandey, HM & Wang, C 2020, 'An Effective Reliability Evaluation Method for Power Communication Network Based on Community Structure', IEEE Transactions on Industry Applications, vol. 56, no. 4, pp. 1-1.
View/Download from: Publisher's site
View description>>
The reliability evaluation of the power communication network is beneficial for improving the stable operation of the power system and the robustness of the power grid. However, the existing reliability evaluation models of the power communication network cannot meet current timeliness requirements, due to the rapidly increasing scale and complexity of information across varying services. In this study, we used complex network theory to analyze the structure of the power communication network. We then constructed an evaluation index of node (link) reliability for the power communication network based on community reliability. Compared with traditional reliability indexes, our index not only considers the influence of a node's (link's) environment on the structure of the power communication network, but also improves the speed of node (link) reliability evaluation, which offers opportunities for improving the performance of reliability evaluation in wide-area power communication networks. To verify the rationality of the index, we launched random, low-reliability, and high-betweenness deliberate attacks on designated nodes (links), and compared the network efficiency before and after the attacks. The simulation results verify the rationality and superiority of our proposed evaluation index.
Li, Q, Zhong, J, Cao, Z & Li, X 2020, 'Optimizing streaming graph partitioning via a heuristic greedy method and caching strategy', Optimization Methods and Software, vol. 35, no. 6, pp. 1144-1159.
View/Download from: Publisher's site
View description>>
© 2019, © 2019 Informa UK Limited, trading as Taylor & Francis Group. Graph partitioning is an important method for accelerating large distributed graph computation. Streaming graph partitioning is more efficient than offline partitioning, and its application has developed continuously in recent years. In this work, we first introduce a heuristic greedy streaming partitioning method and show that it outperforms the state-of-the-art streaming partitioning methods, achieving exact balance and fewer cut edges. Second, we propose a cache structure for streaming partitioning, called an adjacent edge structure, which can improve partitioning efficiency several times over on a single commodity computer without affecting partition quality. Regardless of whether the memory capacity is limited (local cache) or not (global cache), our strategy can also improve partition quality through restreaming partitioning. Taking the linear weight greedy streaming algorithm as an example, the experimental results on 19 real-world graphs show that the average partitioning time of the new method is 4.9 times faster than that of the original method, which demonstrates the effectiveness and superiority of the cache structure presented in this paper.
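For orientation, the linear weighted greedy rule this line of work builds on can be sketched as follows. This is a generic sketch of the classic streaming heuristic, not the paper's implementation; `linear_greedy_partition` and its load-penalty form are illustrative assumptions.

```python
def linear_greedy_partition(stream, k, capacity):
    """Linear weighted greedy streaming partitioning: each arriving node
    goes to the partition that already holds the most of its neighbours,
    with the neighbour count discounted by a linear load penalty so that
    nearly full partitions become less attractive. `stream` maps each node
    to its neighbour set, in arrival order (Python dicts preserve it)."""
    parts = [set() for _ in range(k)]
    assign = {}
    for v, nbrs in stream.items():
        best, best_score = 0, float("-inf")
        for i, part in enumerate(parts):
            neighbours_here = sum(1 for u in nbrs if assign.get(u) == i)
            score = neighbours_here * (1.0 - len(part) / capacity)
            if score > best_score:
                best, best_score = i, score
        parts[best].add(v)
        assign[v] = best
    return assign

# Nodes of a triangle arrive in order; b follows a into the same
# partition because a is its only already-placed neighbour.
assign = linear_greedy_partition(
    {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}, k=2, capacity=2)
```

The adjacent-edge cache the abstract proposes would sit inside the inner loop here, avoiding the repeated neighbour lookups that dominate this rule's cost.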
Li, Y & Qiao, Y 2020, 'Group-theoretic generalisations of vertex and edge connectivities', Proceedings of the American Mathematical Society, vol. 148, no. 11, pp. 4679-4693.
View/Download from: Publisher's site
View description>>
Let $p$ be an odd prime. Let $P$ be a finite $p$-group of class 2 and exponent $p$, whose commutator quotient $P/[P,P]$ …
ACM Transactions on Intelligent Systems and Technology, vol. 11, no. 3, pp. 1-29.
View/Download from: Publisher's site
View description>>
Estimating causal effects by making causal inferences from observational data is common practice in scientific studies, business decision-making, and daily life. In today’s data-driven world, causal inference has become a key part of the evaluation process for many purposes, such as examining the effects of medicine or the impact of an economic policy on society. However, although the literature contains some excellent models, there is room to improve their representation power and their ability to capture complex relationships. For these reasons, we propose a novel prior called Causal DP and a model called CDP. The prior captures the complex relationships between covariates, treatments, and outcomes in observational data using a rational probabilistic dependency structure. The model is Bayesian, nonparametric, and generative and is not based on the assumption of any parametric distribution. CDP is designed to estimate various kinds of causal effects—average, conditional average, average treated, quantile, and so on. It performs well with missing covariates and does not suffer from overfitting. Comparative experiments on synthetic datasets against several state-of-the-art methods demonstrate that CDP has a superior ability to capture complex relationships. Further, a simple evaluation to infer the effect of a job training program on trainee earnings from real-world data shows that CDP is both effective and useful for causal inference.
Lin, C-T, King, J-T, Chuang, C-H, Ding, W, Chuang, W-Y, Liao, L-D & Wang, Y-K 2020, 'Exploring the Brain Responses to Driving Fatigue Through Simultaneous EEG and fNIRS Measurements', International Journal of Neural Systems, vol. 30, no. 01, pp. 1950018-1950018.
View/Download from: Publisher's site
View description>>
Fatigue is a problem in driving, as it can lead to difficulty sustaining attention, behavioral lapses, and a tendency to ignore vital information or operations. In this research, we explore multimodal physiological phenomena in response to driving fatigue through simultaneous functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) recordings, with the aim of investigating the relationships between hemodynamic and electrical features and driving performance. Sixteen subjects participated in an event-related lane-deviation driving task while their brain dynamics were measured through fNIRS and EEG. Three performance groups, classified as Optimal, Suboptimal, and Poor, were defined for comparison. From our analysis, we find that tonic variations occur before a deviation, and phasic variations occur afterward. The tonic results show an increased concentration of oxygenated hemoglobin (HbO2) and power changes in the EEG theta, alpha, and beta bands. Both dynamics are significantly correlated with deteriorated driving performance. The phasic EEG results demonstrate event-related desynchronization associated with the onset of steering the vehicle in all power bands. The concentration of phasic HbO2 decreased as performance worsened. Further, the negative correlations between tonic EEG delta and alpha power and HbO2 oscillations suggest that activations in HbO2 are related to mental fatigue. In summary, combined hemodynamic and electrodynamic activities can provide complete knowledge of the brain's responses as evidence of state changes during fatigued driving.
Lin, C-T, Yu, Y-H, King, J-T, Liu, C-H & Liao, L-D 2020, 'Augmented Wire-Embedded Silicon-Based Dry-Contact Sensors for Electroencephalography Signal Measurements', IEEE Sensors Journal, vol. 20, no. 7, pp. 3831-3837.
View/Download from: Publisher's site
View description>>
© 2001-2012 IEEE. The aim of this study was to develop a novel dry electroencephalography (EEG) sensor with a soft, pliable pad that conforms to the contours of the skin and skull, providing a suitable surface contact area for collecting electrical potential signals and ensuring a reliable connection. In this study, based on our experience in developing flexible silver/silicon-based dry-contact sensors (SBDSs) for biosignal measurements, we proposed a new, augmented wire-embedded silicon-based dry-contact sensor (WSBDS) with a long lifespan and better performance in EEG measurements. The following two augmentation concepts were proposed in this design and implemented in fabrication: 1) the addition of a metal stud and 2) the embedding of copper wires into the fingers of an acicular SBDS. The forehead sensor is suitable for forehead EEG measurements, and the acicular sensor is designed for application to hair-covered sites, where it can overcome hair interference to achieve satisfactory scalp contact while maintaining low impedance at the skin-electrode interface. Finally, this augmented WSBDS performed well in human EEG recording in a designed brain-computer interface (BCI) experiment and is feasible for practical applications.
Lipinska, V, Thinh, LP, Ribeiro, J & Wehner, S 2020, 'Certification of a functionality in a quantum network stage', Quantum Science and Technology, vol. 5, no. 3, pp. 035008-035008.
View/Download from: Publisher's site
View description>>
Abstract We consider testing the ability of quantum network nodes to execute multi-round quantum protocols. Specifically, we examine protocols in which the nodes are capable of performing quantum gates, storing qubits and exchanging said qubits over the network a certain number of times. We propose a simple ping-pong test, which provides a certificate for the capability of the nodes to run certain multi-round protocols. We first show that in the noise-free regime the only way the nodes can pass the test is if they do indeed possess the desired capabilities. We then proceed to consider the case where operations are noisy, and provide an initial analysis showing how our test can be used to estimate parameters that allow us to draw conclusions about the actual performance of such protocols on the tested nodes. Finally, we investigate the tightness of this analysis using example cases in a numerical simulation.
Liu, C, Nitschke, P, Williams, SP & Zowghi, D 2020, 'Data quality and the Internet of Things', Computing, vol. 102, no. 2, pp. 573-599.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag GmbH Austria, part of Springer Nature. The Internet of Things (IoT) is driving technological change and the development of new products and services that rely heavily on the quality of the data collected by IoT devices. There is a large body of research on data quality management and improvement in IoT; however, to date, a systematic review of data quality measurement in IoT has not been available. This paper presents a systematic literature review (SLR) of data quality in IoT from the emergence of the term IoT in 1999 to 2018. We reviewed and analyzed 45 empirical studies to identify research themes on data quality in IoT. Based on this analysis, we have established the links between data quality dimensions, manifestations of data quality problems, and methods utilized to measure data quality. The findings of this SLR suggest new research areas for further investigation and identify implications for practitioners in defining and measuring data quality in IoT.
Liu, C, Zowghi, D & Talaei-Khoei, A 2020, 'An empirical study of the antecedents of data completeness in electronic medical records', International Journal of Information Management, vol. 50, pp. 155-170.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd There is a body of research that highlights the role of data management in improving the quality of data, which in turn improves organizational performance. The data management literature has identified five theoretical constructs used to understand the factors influencing data quality: top management support, capability in regulation and process management, business-IT alignment, staff participation, and integration of information systems. However, it is unclear how these theoretical constructs can be utilized to understand the antecedents of data completeness as a dimension of data quality. Following that stream of research, the current paper examines the factors influencing data completeness in electronic medical records (EMR). The scope of this study is limited to surveying medical professionals at healthcare settings in northern Nevada. The empirical results reveal that resources should be added as one of the antecedents of data completeness in EMR.
Liu, C, Zowghi, D, Talaei-Khoei, A & Jin, Z 2020, 'Empirical study of Data Completeness in Electronic Health Records in China', Pacific Asia Journal of the Association for Information Systems, vol. 12, no. 2, pp. 104-130.
View/Download from: Publisher's site
View description>>
Abstract Background: As a dimension of data quality in electronic health records (EHR), data completeness plays an important role in improving quality of care. Although many studies of data management focus on constructing the factors that influence data quality for the purpose of quality improvement, the constructs developed for interpreting factors influencing data completeness in the EHR context have received limited attention. Methods: Based on related studies, we constructed the factors influencing EHR data completeness in a conceptual model. We then examined the proposed model by surveying clinical practitioners in China. Results: Our results show that the data quality management literature can serve as a starting point for deriving a conceptual model of factors influencing data completeness in the EHR context. This study also demonstrates that "resources" should be added as a factor that influences data completeness in EHR. Conclusion: Our resulting conceptual model explains a substantial portion of the data completeness in EHR assessed in this study. Although the proposed relationships between the included factors were previously supported in the literature, our work provides initial empirical evidence that some relationships may not always be significantly supported. Possible explanations for these differences are discussed in the present research. This study thus benefits decision makers and EHR program managers in implementing EHR, as well as EHR vendors in EHR integration, by addressing data completeness issues.
Liu, G, Xiao, F, Lin, C-T & Cao, Z 2020, 'A Fuzzy Interval Time-Series Energy and Financial Forecasting Model Using Network-Based Multiple Time-Frequency Spaces and the Induced-Ordered Weighted Averaging Aggregation Operation', IEEE Transactions on Fuzzy Systems, vol. 28, no. 11, pp. 2677-2690.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Forecasting time series is an emerging topic in operational research. Existing time-series models have limited prediction accuracy when faced with the nonlinearity and nonstationarity characteristic of complex situations related to energy and finance. To enhance overall prediction capabilities and improve forecasting accuracy, in this article we propose a fuzzy interval time-series forecasting model on the basis of network-based multiple time-frequency spaces and the induced-ordered weighted averaging aggregation (IOWA) operation. Specifically, a time-series signal is decomposed into ensemble empirical modes and then reconstructed as various time-frequency spaces, which are transformed into visibility graphs. Then, forecasting intervals in the different spaces can be collected after the local random walker link prediction model is adopted. Furthermore, a rule-based representation value function inspired by Yager's golden rule approach is defined, and an appropriate representation value is calculated. Finally, after IOWA is used to aggregate the forecasting outcomes in different time-frequency spaces, the final forecast value can be obtained from the fuzzy forecasting interval. Considering the widespread interest in energy issues across both the natural world and the social economy, two cases, based on a hydrological time series from the Biliuhe River in China and two well-known sets of financial time-series data, the Taiwan Stock Exchange Capitalization Weighted Stock Index and the Hang Seng Index, are studied to test the performance of the proposed approach in comparison with existing models. Our results show that the proposed approach can achieve better performance than well-developed models.
Liu, J, Hou, J, Huang, X, Xiang, Y & Zhu, T 2020, 'Secure and efficient sharing of authenticated energy usage data with privacy preservation', Computers & Security, vol. 92, pp. 101756-101756.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd As a technological innovation, the smart grid improves electricity services in terms of sustainability, economics, efficiency, and reliability. This is owing to its bi-directional communication, which not only makes fine-grained energy usage data available to different entities but also facilitates automated grid management. However, sharing such energy usage data with other parties could potentially cause the leakage of customers' sensitive information. In addition, the data are vulnerable to tampering by any internal or external attacker, such that the value of the data could be destroyed. Therefore, it is crucial to preserve customers' privacy and provide authenticity and verifiability while sharing energy usage data with other parties. In this paper, we propose a bilinear-map accumulator based redactable signature scheme (RSS-BMA) which allows customers to safeguard their privacy while guaranteeing the verifiability of shared data. Furthermore, the batch-data-block verification property of our design enhances not only efficiency but also security, by prohibiting additional redaction to a batch of data blocks. We analyze the efficiency and security of our proposed scheme extensively, and the results indicate our construction is more practical than others.
Liu, W, Chang, X, Chen, L, Phung, D, Zhang, X, Yang, Y & Hauptmann, AG 2020, 'Pair-based Uncertainty and Diversity Promoting Early Active Learning for Person Re-identification', ACM Transactions on Intelligent Systems and Technology, vol. 11, no. 2, pp. 1-15.
View/Download from: Publisher's site
View description>>
The effective training of supervised Person Re-identification (Re-ID) models requires sufficient pairwise labeled data. However, when there is limited annotation resource, it is difficult to collect pairwise labeled data. We consider a challenging and practical problem called Early Active Learning, which is applied to the early stage of experiments when there is no pre-labeled sample available as references for human annotating. Previous early active learning methods suffer from two limitations for Re-ID. First, these instance-based algorithms select instances rather than pairs, which can result in missing optimal pairs for Re-ID. Second, most of these methods only consider the representativeness of instances, which can result in selecting less diverse and less informative pairs. To overcome these limitations, we propose a novel pair-based active learning for Re-ID. Our algorithm selects pairs instead of instances from the entire dataset for annotation. Besides representativeness, we further take into account the uncertainty and the diversity in terms of pairwise relations. Therefore, our algorithm can produce the most representative, informative, and diverse pairs for Re-ID data annotation. Extensive experimental results on five benchmark Re-ID datasets have demonstrated the superiority of the proposed pair-based early active learning algorithm.
Liu, X, Song, W, Musial, K, Zhao, X, Zuo, W & Yang, B 2020, 'Semi-supervised stochastic blockmodel for structure analysis of signed networks', Knowledge-Based Systems, vol. 195, pp. 105714-105714.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. Finding hidden structural patterns is a critical problem for all types of networks, including signed networks. Among the methods for structural analysis of complex networks, the stochastic blockmodel (SBM) is an important research tool because it is flexible and can generate networks with many different types of structures. However, most existing SBM learning methods for signed networks are unsupervised, leading to poor performance in finding hidden structural patterns, especially when handling noisy and sparse networks. Learning an SBM in a semi-supervised way is a promising avenue for overcoming this difficulty. In this type of model, a small number of labelled nodes and a large number of unlabelled nodes, coupled with their network structures, are simultaneously used to train the SBM. We propose a novel semi-supervised signed stochastic blockmodel and its learning algorithm based on variational Bayesian inference, with the goal of discovering both assortative (nodes connect more densely within clusters than between clusters) and disassortative (nodes link more sparsely within clusters than between clusters) structures from signed networks. The proposed model is validated through a number of experiments in which it is compared with state-of-the-art methods using both synthetic and real-world data. The carefully designed tests, which account for different scenarios, show that our method outperforms the other approaches in this space. It is especially relevant for noisy and sparse networks, as they constitute the majority of real-world networks.
Liu, Z, Cao, J, Tan, Y, Xiao, Q & Prasad, M 2020, 'Planning Above the API Clouds Before Flying Above the Clouds: A Real-Time Personalized Air Travel Planning Approach', International Journal of Parallel Programming, vol. 48, no. 1, pp. 137-156.
View/Download from: Publisher's site
View description>>
© 2019, Springer Science+Business Media, LLC, part of Springer Nature. The rapid growth of the airline industry has resulted in the availability of a large number of flights; however, this can also create a paralyzing problem. Flight information for all airlines across the world can be obtained via the Internet, and today's passengers tend to be interested in personalized service. How to effectively find a passenger's most preferred air travel plan, which might include multiple transfers, from millions of possible choices under certain constraints, such as time and price, is a critical challenge. This paper presents an efficient air travel planning approach, which can find a number of air travel plans by invoking the APIs offered by airline companies. These plans also best match the customer's preferences, based on an analysis of historical orders. An algorithm to extract user preference features is introduced, and heuristic rules to speed up the K-path search process under constraints are presented. The experimental results show that the proposed model finds optimal air travel plans efficiently on a real-world dataset.
Livesu, M, Pietroni, N, Puppo, E, Sheffer, A & Cignoni, P 2020, 'LoopyCuts: practical feature-preserving block decomposition for strongly hex-dominant meshing.', ACM Trans. Graph., vol. 39, no. 4, pp. 121-121.
View/Download from: Publisher's site
View description>>
© 2020 ACM. We present a new fully automatic block-decomposition algorithm for feature-preserving, strongly hex-dominant meshing, that yields results with a drastically larger percentage of hex elements than prior art. Our method is guided by a surface field that conforms to both surface curvature and feature lines, and exploits an ordered set of cutting loops that evenly cover the input surface, defining an arrangement of loops suitable for hex-element generation. We decompose the solid into coarse blocks by iteratively cutting it with surfaces bounded by these loops. The vast majority of the obtained blocks can be turned into hexahedral cells via simple midpoint subdivision. Our method produces pure hexahedral meshes in approximately 80% of the cases, and hex-dominant meshes with less than 2% non-hexahedral cells in the remaining cases. We demonstrate the robustness of our method on 70+ models, including CAD objects with features of various complexity, organic and synthetic shapes, and provide extensive comparisons to prior art, demonstrating its superiority.
Lo, S-Y, King, J-T & Lin, C-T 2020, 'How Does Gender Stereotype Affect the Memory of Advertisements? A Behavioral and Electroencephalography Study', Frontiers in Psychology, vol. 11, p. 1580.
View/Download from: Publisher's site
View description>>
Previous studies have shown equivocal results about whether atypical or unusual events, compared with typical ones, facilitate or inhibit memory. We suspect that the indefinite findings could be partly due to the recall task used in these studies, as the participants might have used inference instead of recall in their responses. In the present study, we tested the recognition memory for real (Experiment 1) and fabricated (Experiment 2) advertisements, which could be congruent or incongruent with gender stereotypes. In congruent advertisements, a female endorser presented a traditionally considered feminine product or a male endorser presented a traditionally considered masculine product, whereas the gender-product type matching reversed in incongruent advertisements. The results of both behavioral experiments revealed that the participants' memory performance for stereotype-incongruent advertisements was higher than for congruent ones. In the event-related potential (ERP) recordings in Experiment 3, larger positive amplitudes were found for stereotype-incongruent advertisements than for congruent advertisements on the left parietal sites, suggesting a deeper encoding process for stereotype-incongruent information than for stereotype-congruent information.
Lu, J, Liu, A, Song, Y & Zhang, G 2020, 'Data-driven decision support under concept drift in streamed big data', Complex & Intelligent Systems, vol. 6, no. 1, pp. 157-163.
View/Download from: Publisher's site
View description>>
Abstract Data-driven decision-making ($\mathrm{D^3M}$) is often confronted by the problem of uncertainty or unknown dynamics in streaming data. To provide real-time accurate decision solutions, the systems have to promptly address changes in data distribution in streaming data, a phenomenon known as concept drift. Past data patterns may not be relevant to new data when a data stream experiences significant drift, so continuing to use models based on past data will lead to poor prediction and poor decision outcomes. This position paper discusses the basic framework and prevailing techniques in streaming big data and concept drift for $\mathrm{D^3M}$. The study first establishes a technical framework for real-time $\mathrm{D^3M}$ under concept drift and details the characteristics of high-volume streaming data. The main methodologies and approaches for detecting concept drift and supporting $\mathrm{D^3M}$ are highlighted and presented. Lastly, further research...
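The concept drift the position paper discusses, a shift in the stream's data distribution that degrades a previously trained model, can be illustrated with a minimal sliding-window check. This is a generic sketch of the idea, not an algorithm from the paper; `WindowDriftDetector` and its default thresholds are illustrative.

```python
from collections import deque

class WindowDriftDetector:
    """Minimal sliding-window drift check: flag drift when the mean of
    recent model errors deviates from a fixed reference window of early
    errors by more than a threshold (a sketch of the distribution-change
    detection idea, not any specific published detector)."""

    def __init__(self, window=50, threshold=0.2):
        self.ref = deque(maxlen=window)   # first `window` errors = reference
        self.cur = deque(maxlen=window)   # rolling window of recent errors
        self.threshold = threshold

    def add(self, error):
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(error)
        else:
            self.cur.append(error)
        return self.drifted()

    def drifted(self):
        if len(self.cur) < self.cur.maxlen:
            return False
        mean = lambda d: sum(d) / len(d)
        return abs(mean(self.cur) - mean(self.ref)) > self.threshold
```

Feeding fifty low-error observations followed by fifty high-error ones makes `drifted()` flip to True, the signal that a model fitted to past data no longer matches the stream and should be retrained.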
Lu, J, Zheng, X, Sheng, M, Jin, J & Yu, S 2020, 'Efficient Human Activity Recognition Using a Single Wearable Sensor', IEEE Internet of Things Journal, vol. 7, no. 11, pp. 11137-11146.
View/Download from: Publisher's site
Lu, J, Zuo, H & Zhang, G 2020, 'Fuzzy Multiple-Source Transfer Learning', IEEE Transactions on Fuzzy Systems, vol. 28, no. 12, pp. 3418-3431.
View/Download from: Publisher's site
View description>>
Transfer learning is gaining increasing attention due to its ability to leverage previously acquired knowledge to assist in completing a prediction task in a related domain. Fuzzy transfer learning, which is based on fuzzy systems and particularly fuzzy rule-based models, was developed for its capacity to deal with uncertainty. However, one issue in fuzzy transfer learning, and indeed in transfer learning in general, remains unresolved: how to combine and then use knowledge when multiple source domains are available. This study presents new methods for merging fuzzy rules from multiple domains for regression tasks. Two different settings are separately explored: homogeneous and heterogeneous space. In homogeneous situations, knowledge from the source domains is merged in the form of fuzzy rules. In heterogeneous situations, knowledge is merged in the form of both data and fuzzy rules. Experiments on both synthetic and real-world datasets provide insights into the scope of applications suitable for the proposed methods and validate their effectiveness through comparisons with other state-of-the-art transfer learning methods. An analysis of parameter sensitivity is also included.
Majid, ESA, Garcia, JA, Nordin, AI & Raffe, WL 2020, 'Staying Motivated During Difficult Times: A Snapshot of Serious Games for Paediatric Cancer Patients', IEEE Transactions on Games, vol. 12, no. 4, pp. 367-375.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Research on the use of digital games for cancer patients suggests a positive impact in the form of reduced depressive symptoms, anxiety, and feelings of nausea after chemotherapy treatment. This can take the child's focus off their condition and their treatment process and direct it toward other aspects of their childhood. In this article, a comprehensive review of the current literature was conducted to assess how serious games could positively impact paediatric cancer patients. Inclusion criteria were used during data extraction to find the most relevant literature, including the need for a game prototype to have been developed and for the game to specifically target children with cancer as its audience. Data were extracted on age ranges, treatment and procedure plans, time context, users, purpose, and technology. The resulting serious games were grouped by purpose and classified into three main categories: motivation, education, and distraction. This review demonstrates the positive use of serious games as an intervention for paediatric cancer patients who undergo treatment in hospital. The results suggest that the design of these serious games should consider the purpose of the game within the treatment plan of the target audience; the accessibility and suitability of the technology used for the game; and social connection during play.
Maldonado, S, Merigo, J & Miranda, J 2020, 'IOWA-SVM: A Density-Based Weighting Strategy for SVM Classification via OWA Operators', IEEE Transactions on Fuzzy Systems, vol. 28, no. 9, pp. 2143-2150.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. A weighting strategy for handling outliers in binary classification using support vector machines (SVMs) is proposed in this article. The traditional SVM model is modified by introducing an induced ordered weighted averaging (IOWA) operator, in which the hinge loss function becomes an ordered weighted sum of the SVM slack variables. These weights are defined using IOWA quantifiers, while the order is induced via fuzzy density-based methods for outlier detection. The proposal is developed for both linear and kernel-based classification using duality theory and the kernel trick. Our experimental results on well-known benchmark datasets demonstrate the virtues of the proposed IOWA-SVM, which achieved the best average performance compared with other machine learning approaches of similar complexity.
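The weighting step described in this abstract can be illustrated on its own: weights are generated from a regular increasing monotone (RIM) quantifier and assigned to instances in the order induced by an outlier score, so that likely outliers contribute less to the loss. The sketch below is a hypothetical illustration of that ordering-and-weighting idea only, not the paper's formulation; the quantifier `Q(r) = r**alpha` and all function names are assumptions.

```python
def rim_weights(n, alpha=0.5):
    """OWA weights from an assumed RIM quantifier Q(r) = r**alpha.

    With alpha < 1 the quantifier is concave, so the weights decrease
    along the ordering; they telescope to sum to exactly 1.
    """
    def q(r):
        return r ** alpha
    return [q(i / n) - q((i - 1) / n) for i in range(1, n + 1)]

def iowa_weights(outlier_scores, alpha=0.5):
    """Induced OWA: order instances by an outlier score (the inducing
    variable), then assign the positional OWA weights back to them.
    Likely outliers end up with the smallest weights."""
    n = len(outlier_scores)
    positional = rim_weights(n, alpha)
    # rank instances from least to most outlying
    order = sorted(range(n), key=lambda i: outlier_scores[i])
    weights = [0.0] * n
    for pos, idx in enumerate(order):
        weights[idx] = positional[pos]
    return weights
```

In a weighted SVM these per-instance weights could scale the slack variables in the hinge loss, which is the role the IOWA operator plays in the model described above.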
Mann, RL, Mathieson, L & Greenhill, C 2020, 'On the Parameterised Complexity of Induced Multipartite Graph Parameters'.
View description>>
We introduce a family of graph parameters, called induced multipartite graph
parameters, and study their computational complexity. First, we consider the
following decision problem: an instance consists of an induced multipartite graph
parameter $p$, a graph $G$, and natural numbers $k\geq2$ and
$\ell$, and we must decide whether the maximum value of $p$ over all induced
$k$-partite subgraphs of $G$ is at most $\ell$. We prove that this problem is
W[1]-hard. Next, we consider a variant of this problem, where we must decide
whether the given graph $G$ contains a sufficiently large induced $k$-partite
subgraph $H$ such that $p(H)\leq\ell$. We show that for certain parameters this
problem is para-NP-hard, while for others it is fixed-parameter tractable.
Martínez-López, FJ, Merigó, JM, Gázquez-Abad, JC & Ruiz-Real, JL 2020, 'Industrial marketing management: Bibliometric overview since its foundation', Industrial Marketing Management, vol. 84, pp. 19-38.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Inc. Industrial Marketing Management (IMM) is an outstanding journal in the field of business-to-business marketing. This paper focuses on this journal, with an extensive bibliometric analysis of IMM from its foundation in 1971 to 2017, the last year analyzed in this study. It identifies, among other things, the annual evolution of publications, the most influential countries, the most relevant authors, the most prominent institutions supporting research, and the citations of IMM papers in major marketing journals as well as in other business and management journals. To do so, this research uses the Web of Science Core Collection and Scopus databases, and analyzes a wide range of bibliometric indicators, including the total number of publications and citations, citations per paper, the h-index, m-value and citation thresholds; it also develops a graphical analysis of the bibliographical material using the visualization of similarities (VOS) viewer software. Finally, by applying a cluster analysis with fractional counting, this research identifies trends and proposes future topics and research lines, such as trust, innovation, performance, relationship marketing, the future role of new technologies in industrial marketing research, online marketing and corporate image.
Melnikov, A, Maeder, M, Friedrich, N, Pozhanka, Y, Wollmann, A, Scheffler, M, Oberst, S, Powell, D & Marburg, S 2020, 'Acoustic metamaterial capsule for reduction of stage machinery noise', The Journal of the Acoustical Society of America, vol. 147, no. 3, pp. 1491-1503.
View/Download from: Publisher's site
View description>>
Noise mitigation of stage machinery can be quite demanding and requires innovative solutions. In this work, an acoustic metamaterial capsule is proposed to reduce the noise emission of several stage machinery drive trains, while still allowing the ventilation required for cooling. The metamaterial capsule consists of c-shape meta-atoms, which have a simple structure that facilitates manufacturing. Two different metamaterial capsules are designed, simulated, manufactured, and experimentally validated that utilize an ultra-sparse and air-permeable reflective meta-grating. Both designs demonstrate transmission loss peaks that effectively suppress gear mesh noise or other narrow band noise sources. The ventilation by natural convection was numerically verified, and was shown to give adequate cooling, whereas a conventional sound capsule would lead to overheating. The noise spectra of three common stage machinery drive trains are numerically modelled, enabling one to design meta-gratings and determine their noise suppression performance. The results fulfill the stringent stage machinery noise limits, highlighting the benefit of using metamaterial capsules of simple c-shape structure.
Merigó, JM, Linares-Mustaros, S & Ferrer-Comalat, JC 2020, 'Fuzzy systems in management and information science', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5319-5322.
View/Download from: Publisher's site
Merigó, JM, Mulet-Forteza, C, Martorell, O & Merigó-Lindahl, C 2020, 'Scientific research in the tourism, leisure and hospitality field: a bibliometric analysis', Anatolia, vol. 31, no. 3, pp. 494-508.
View/Download from: Publisher's site
Mery, D, Saavedra, D & Prasad, M 2020, 'X-Ray Baggage Inspection With Computer Vision: A Survey', IEEE Access, vol. 8, pp. 145620-145633.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In recent decades, baggage inspection based on X-ray imaging has been established to protect environments in which access control is of vital significance. At many public entrances, such as airports, government buildings, stadiums and large event venues, security checks are carried out on all baggage to detect suspicious objects (e.g., handguns and explosives). Although improvements in X-ray technology and computer vision have made many previously unfeasible X-ray detection tasks a reality, the progress made in automated baggage inspection is very limited compared to what is needed. For this reason, X-ray screening systems are usually operated by human inspectors. Research and development experts who focus on X-ray testing are moving towards new approaches that can be used to aid human operators. This paper reports the state of the art in baggage inspection, identifying three research fields that have been used to deal with this problem: i) X-ray energies, because there is enough research evidence to show that multi-energy X-ray testing must be used when material characterization is required; ii) X-ray multi-views, because they can be an effective option for examining complex objects where the uncertainty of a single view can lead to misinterpretation; and iii) X-ray computer vision algorithms, because there is a plethora of computer vision approaches that can address many 3D object recognition problems. In addition, this paper presents useful public datasets that can be used for training and testing, and summarizes the reported experimental results in this field. Finally, it discusses general limitations and shows new avenues for future research.
Miner, AS, Laranjo, L & Kocaballi, AB 2020, 'Chatbots in the fight against the COVID-19 pandemic', npj Digital Medicine, vol. 3, no. 1.
View/Download from: Publisher's site
Ming, Y, Pelusi, D, Fang, C-N, Prasad, M, Wang, Y-K, Wu, D & Lin, C-T 2020, 'EEG data analysis with stacked differentiable neural computers', Neural Computing and Applications, vol. 32, no. 12, pp. 7611-7621.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag London Ltd., part of Springer Nature. The differentiable neural computer (DNC) has demonstrated remarkable capabilities in solving complex problems. In this paper, we propose stacking enhanced differentiable neural computers to extend their learning capabilities. Firstly, we give an intuitive interpretation of the DNC to explain its architectural essence and demonstrate the feasibility of stacking by contrasting it with the conventional recurrent neural network. Secondly, the architecture of stacked DNCs is proposed and modified for electroencephalogram (EEG) data analysis: we substitute the original Long Short-Term Memory network controller with a recurrent convolutional network controller and adjust the memory accessing structures for processing EEG topographic data. Thirdly, the practicability of our proposed model is verified on an open-source EEG dataset, on which the highest average accuracy is achieved; after fine-tuning the parameters, we also show the minimal mean error obtained on a proprietary EEG dataset. Finally, by analyzing the behavioral characteristics of the trained stacked DNC model, we highlight the suitability and potential of stacked DNCs in EEG signal processing.
Modak, NM, Lobos, V, Merigó, JM, Gabrys, B & Lee, JH 2020, 'Forty years of computers & chemical engineering: A bibliometric analysis', Computers & Chemical Engineering, vol. 141, pp. 106978-106978.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Computers & Chemical Engineering (CCE) is one of the premier international journals in the field of chemical engineering. CCE published its first issue in 1977 and completed forty years in 2016. More than four decades of a continuous and successful journey led us to celebrate its contribution through a comprehensive bibliometric study. Using the Web of Science Core Collection database, we depict trends of the journal in terms of papers, topics, authors, institutions, and countries. Network visualizations of journal and author co-citations, bibliographic coupling of institutions and countries, and co-occurrence of author keywords are prepared using the visualization of similarities (VOS) viewer software. The present analysis explores the publication and citation patterns of the journal. Professor Ignacio E. Grossmann, Carnegie Mellon University, and the USA appear as the most productive and influential author, institution, and country, respectively, in CCE publications. Optimization-based research topics received the most attention in CCE publications.
Modak, NM, Sinha, S, Raj, A, Panda, S, Merigó, JM & Lopes de Sousa Jabbour, AB 2020, 'Corporate social responsibility and supply chain management: Framing and pushing forward the debate', Journal of Cleaner Production, vol. 273, pp. 122981-122981.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd Corporate social responsibility (CSR) in supply chain management (SCM) is one of the burgeoning fields of the last decade. Significant interest in this area has led to a large number of publications in recent times. For this reason, this study has been carried out to provide a comprehensive framework and future research directions for this topic. This work presents a bibliometric analysis of relevant publications dealing with CSR in SCM up to April 2019. As well as the presentation of an overview of publications and citation structures, it also explores journals and countries based on a bibliometric study. To collect the relevant data for this study, we have utilized the reliable SCOPUS database. Our results highlight the significant contributions of journals, authors, universities, and countries on this topic. With the help of “Visualization of similarities (VOS)” viewer software, this study investigates bibliographic coupling of sources and countries. It also presents co-occurrence of keywords and graphic representations of the bibliographic materials. Finally, it provides an overview of all relevant review papers in this field and a comprehensive view of related research fields.
Mukai, H, Sakata, K, Devitt, SJ, Wang, R, Zhou, Y, Nakajima, Y & Tsai, J-S 2020, 'Pseudo-2D superconducting quantum computing circuit for the surface code: proposal and preliminary tests', New Journal of Physics, vol. 22, no. 4, pp. 043013-043013.
View/Download from: Publisher's site
View description>>
Abstract Among the major hardware platforms for large-scale quantum computing, one of the leading candidates is superconducting quantum circuits. Current proposed architectures for quantum error-correction with the promising surface code require a two-dimensional layout of superconducting qubits with nearest-neighbor interactions. A major hurdle for the scalability in such an architecture using superconducting systems is the so-called wiring problem, where qubits internal to a chipset become difficult to access by the external control/readout lines. In contrast to the existing approaches which address the problem through intricate three-dimensional wiring and packaging technology, leading to a significant engineering challenge, here we address this problem by presenting a modified microarchitecture in which all the wiring can be realized through a newly introduced pseudo two-dimensional resonator network which provides the inter-qubit connections via airbridges. Our proposal is completely compatible with current standard planar circuit technology. We carried out experiments to examine the feasibility of the new airbridge component. The measured quality factor of the airbridged resonator is below the simulated surface-code threshold required for a coupling resonator, and it should not limit simulated gate fidelity. The measured crosstalk between crossed resonators is at most −49 dB in resonance. Further spatial and frequency separation between the resonators should result in relatively limited crosstalk between them, which would not increase as the size of the chipset increases. This architecture and the preliminary tests indicate the possibility that a large-scale, fully error-corrected quantum computer could be constructed by monolithic integration technologies without additional overhead or special packaging know-how.
Naji, M, Braytee, A, Al-Ani, A, Anaissi, A, Goyal, M & Kennedy, PJ 2020, 'Design of airport security screening using queueing theory augmented with particle swarm optimisation', Service Oriented Computing and Applications, vol. 14, no. 2, pp. 119-133.
View/Download from: Publisher's site
View description>>
© 2020, Springer-Verlag London Ltd., part of Springer Nature. Designing an efficient and reliable airport security screening system is a critical and challenging task. It is an essential element of airline and passenger safety which aims to provide the expected level of confidence and to ensure the safety of passengers and the aviation industry. In recent years, security at airports has gone through noticeable improvements with the utilisation of advanced technology and highly trained security officers. However, for many airports, it is important to find the best compromise between the capacity of the security area, the number of passengers, and the number of screening machines and officers, to maintain a high level of security while keeping costs and waiting times for passengers and airlines at acceptable levels. This paper proposes a novel method based on queueing theory augmented with particle swarm optimisation (QT-PSO) to predict passenger waiting times in a security screening context. The model consists of multiple servers operating in parallel and takes into consideration the complete scenario, including normal, slow and express lanes. Such an approach has the potential to be a reliable model that can assimilate the effects of variations in the number of passengers, security officers and security machines on the service time. To evaluate our proposed method, we collected real-world security screening data from an Australian airport from December to March for the two consecutive years of 2016 and 2017. The results show that our proposed QT-PSO method is superior to the state of the art in predicting the average waiting time of passengers.
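The multi-server queueing building block underlying a model like this can be grounded in standard M/M/c results: with arrival rate lam, per-lane service rate mu and c parallel lanes, the Erlang C formula gives the probability an arriving passenger must queue, and from it the mean waiting time. The sketch below is a generic textbook formula, not the paper's calibrated model; all parameter names are illustrative.

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability that an arrival must wait in an M/M/c queue.

    lam: arrival rate, mu: service rate per lane, c: number of lanes.
    """
    a = lam / mu          # offered load in Erlangs
    rho = a / c           # utilisation per lane; must be < 1 for stability
    assert rho < 1, "queue is unstable (rho >= 1)"
    top = (a ** c / factorial(c)) / (1 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(lam, mu, c):
    """Expected time in queue Wq before screening begins."""
    return erlang_c(lam, mu, c) / (c * mu - lam)
```

A PSO layer, as in the paper, would then tune unknown rates or lane configurations so that the predicted waiting times match observed screening data.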
Naseem, U, Razzak, I, Musial, K & Imran, M 2020, 'Transformer based Deep Intelligent Contextual Embedding for Twitter sentiment analysis', Future Generation Computer Systems, vol. 113, pp. 58-69.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. Along with the emergence of the Internet, the rapid development of handheld devices has democratized content creation due to the extensive use of social media and has resulted in an explosion of short informal texts. Although a sentiment analysis of these texts is valuable for many reasons, this task is often perceived as a challenge given that these texts are often short, informal, noisy, and rich in language ambiguities, such as polysemy. Moreover, most of the existing sentiment analysis methods are based on clean data. In this paper, we present DICET, a transformer-based method for sentiment analysis that encodes representation from a transformer and applies deep intelligent contextual embedding to enhance the quality of tweets by removing noise while taking word sentiments, polysemy, syntax, and semantic knowledge into account. We also use a bidirectional long short-term memory network to determine the sentiment of a tweet. To validate the performance of the proposed framework, we perform extensive experiments on three benchmark datasets, and the results show that DICET considerably outperforms the state of the art in sentiment classification.
Naseer, A, Rani, M, Naz, S, Razzak, MI, Imran, M & Xu, G 2020, 'Refining Parkinson’s neurological disorder identification through deep transfer learning', Neural Computing and Applications, vol. 32, no. 3, pp. 839-854.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag London Ltd., part of Springer Nature. Parkinson’s disease (PD), a multi-system neurodegenerative disorder which affects the brain slowly, is characterized by symptoms such as muscle stiffness, tremor in the limbs and impaired balance, all of which tend to worsen with the passage of time. Available treatments target its symptoms, aiming to improve the quality of life. However, automatic diagnosis at early stages remains a challenging medical task, since a patient may behave identically to a healthy individual at the very early stage of the disease. Detecting PD through handwriting data is a significant classification problem for identifying the disease in its infancy. In this paper, PD identification is realized with the help of handwriting images, which serve as one of the earliest indicators of PD. For this purpose, we propose a deep convolutional neural network classifier with transfer learning and data augmentation techniques to improve identification. Two transfer learning approaches, freezing and fine-tuning, are investigated, using the ImageNet and MNIST datasets independently as source tasks. A trained network achieved 98.28% accuracy with the fine-tuning-based approach using ImageNet and the PaHaW dataset. Experimental results on the benchmark dataset reveal that the proposed approach provides better detection of Parkinson’s disease compared to state-of-the-art work.
Nguyen, H, Moayedi, H, Foong, LK, Al Najjar, HAH, Jusoh, WAW, Rashid, ASA & Jamali, J 2020, 'Optimizing ANN models with PSO for predicting short building seismic response', Engineering with Computers, vol. 36, no. 3, pp. 823-837.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag London Ltd., part of Springer Nature. The present study aimed to optimize the artificial neural network (ANN) with one of the well-established optimization algorithms, particle swarm optimization (PSO), for the problem of ground response approximation in short structures. Various studies have shown that ANN-based solutions are a reliable method for complex engineering problems. Predicting the ground surface response to seismic loading is one engineering problem that has not yet received an ANN solution. Therefore, this paper assesses the application of hybrid PSO-based ANN models to the calculation of the horizontal deflection of columns in short buildings after being subjected to significant seismic loading (e.g., the Chi-Chi earthquake, used as one of the input databases). To prepare both the training and testing datasets for the ANN and PSO-ANN network models, a series of finite element (FE) simulations was performed. The FE simulation database consists of 8324 training samples and 2081 testing samples, equal to 80% and 20% of the whole database, respectively. The inputs include Chi-Chi earthquake dynamic time (s), friction angle (φ), dilation angle (ψ), unit weight (γ), soil elastic modulus (E), Poisson’s ratio (v), structure axial stiffness (EA), and bending stiffness (EI), while the output is the horizontal deflection of the columns at their highest level (Ux). The results indicate the higher reliability of the PSO-ANN model in estimating the ground response and horizontal deflection of structural columns in short structures subjected to earthquake loading.
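The PSO component used in hybrid PSO-ANN models can be sketched generically: a swarm of candidate solutions iteratively moves toward each particle's personal best and the swarm's global best. The minimal, self-contained sketch below optimises a toy objective; the constants and names are illustrative defaults, not the paper's configuration (where the objective would be the ANN's training error over its weights).

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimiser with a global-best topology.

    f: objective to minimise, taking a list of dim floats.
    Returns (best_position, best_value).
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:       # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:      # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a PSO-ANN hybrid, each particle would encode a full weight vector of the network and f would return the validation error, replacing or supplementing gradient-based training.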
Niamir, L, Ivanova, O, Filatova, T, Voinov, A & Bressers, H 2020, 'Demand-side solutions for climate mitigation: Bottom-up drivers of household energy behavior change in the Netherlands and Spain', Energy Research & Social Science, vol. 62, pp. 101356-101356.
View/Download from: Publisher's site
Niamir, L, Kiesewetter, G, Wagner, F, Schöpp, W, Filatova, T, Voinov, A & Bressers, H 2020, 'Assessing the macroeconomic impacts of individual behavioral changes on carbon emissions', Climatic Change, vol. 158, no. 2, pp. 141-160.
View/Download from: Publisher's site
View description>>
Abstract In the last decade, instigated by the Paris Agreement and United Nations Climate Change Conferences (COP22 and COP23), the efforts to limit temperature increase to 1.5 °C above pre-industrial levels are expanding. The required reductions in greenhouse gas emissions imply a massive decarbonization worldwide with much involvement of regions, cities, businesses, and individuals in addition to the commitments at the national levels. Improving end-use efficiency is emphasized in previous IPCC reports (IPCC 2014). Serving as the primary ‘agents of change’ in the transformative process towards green economies, households have a key role in global emission reduction. Individual actions, especially when amplified through social dynamics, shape green energy demand and affect investments in new energy technologies that collectively can curb regional and national emissions. However, most energy-economics models—usually based on equilibrium and optimization assumptions—have a very limited representation of household heterogeneity and treat households as purely rational economic actors. This paper illustrates how computational social science models can complement traditional models by addressing this limitation. We demonstrate the usefulness of behaviorally rich agent-based computational models by simulating various behavioral and climate scenarios for residential electricity demand and compare them with the business as usual (SSP2) scenario. Our results show that residential energy demand is strongly linked to personal and social norms. Empirical evidence from surveys reveals that social norms have an essential role in shaping personal norms. When assessing the cumulative impacts of these behavioral processes, we quantify individual and combined effects of social dynamics and of carbon pricing on individual energy efficiency and on the aggregated regional energy demand and emissions. The intensity of social interactions and ...
Nicolas, C, Valenzuela-Fernández, L & Merigó, JM 2020, 'Research Trends of Marketing: A Bibliometric Study 1990–2017', Journal of Promotion Management, vol. 26, no. 5, pp. 674-703.
View/Download from: Publisher's site
View description>>
© 2020, © 2020 Taylor & Francis Group, LLC. Interest in the role of marketing has grown in recent decades due to its impact on brand value, value creation for customers, profitability of the customer base, and organizational results. The paper presents an overall view of marketing research to explore the development of research trends, showing the high-frequency keywords in different time periods. Using bibliometric methods, the research analyzes publications between 1990 and 2017 found in the Web of Science and Scopus databases. The paper shows the evolution of keywords to reveal emerging topics, as demonstrated in the connections network, which includes “advertising,” “consumer behavior,” “trust,” “innovation,” and “customer satisfaction.”
Ning, X, Yao, L, Wang, X, Benatallah, B, Dong, M & Zhang, S 2020, 'Rating prediction via generative convolutional neural networks based regression', Pattern Recognition Letters, vol. 132, pp. 12-20.
View/Download from: Publisher's site
View description>>
Ratings are an essential criterion for evaluating the quality of movies and a critical indicator of whether a customer would watch a movie. Therefore, an important related research challenge is to predict the rating of a movie before it is released in cinemas, or even before it is produced. Many existing approaches fail to address this challenge because they predict movie ratings based on post-production factors such as review comments from social media. Consequently, they are generally inapplicable until a movie has been released for a certain period of time, when a sufficient number of review comments have become available. In this paper, we propose a regression model based on generative convolutional neural networks for movie rating prediction. Instead of the post-production factors widely used by previous work, this model learns from a movie’s intrinsic pillars such as genre, budget, cast, director and plot information, which are obtainable before production. In particular, the model explores the correlations between the rating of a movie and its intrinsic attributes to predict its rating. The results can serve as a reference for investors and movie studios in determining an optimal portfolio for movie production, and as guidance for interested users in choosing movies to watch. Extensive experiments on a real dataset are benchmarked against a set of baselines and state-of-the-art approaches. The results demonstrate the effectiveness of our approach. The proposed model is also general and can be extended to other prediction tasks.
Niu, T, Wang, J, Lu, H, Yang, W & Du, P 2020, 'Developing a deep learning framework with two-stage feature selection for multivariate financial time series forecasting', Expert Systems with Applications, vol. 148, pp. 113237-113237.
View/Download from: Publisher's site
Nosouhi, MR, Sood, K, Yu, S, Grobler, M & Zhang, J 2020, 'PASPORT: A Secure and Private Location Proof Generation and Verification Framework', IEEE Transactions on Computational Social Systems, vol. 7, no. 2, pp. 293-307.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Recently, there has been a rapid growth in location-based systems and applications in which users submit their location information to service providers in order to gain access to a service, resource, or reward. We have seen that in these applications, dishonest users have an incentive to cheat on their location. Unfortunately, no effective protection mechanism has been adopted by service providers against these fake location submissions. This is a critical issue that causes severe consequences for these applications. Motivated by this, we propose the Privacy-Aware and Secure Proof Of pRoximiTy (PASPORT) scheme in this article to address the problem. Using PASPORT, users submit a location proof (LP) to service providers to prove that their submitted location is true. PASPORT has a decentralized architecture designed for ad hoc scenarios in which mobile users can act as witnesses and generate LPs for each other. It provides user privacy protection as well as security properties, such as unforgeability and nontransferability of LPs. Furthermore, the PASPORT scheme is resilient to prover-prover collusions and significantly reduces the success probability of Prover-Witness collusion attacks. To further make the proximity checking process private, we propose P-TREAD, a privacy-aware distance bounding protocol and integrate it into PASPORT. To validate our model, we implement a prototype of the proposed scheme on the Android platform. Extensive experiments indicate that the proposed method can efficiently protect location-based applications against fake submissions.
Nosouhi, MR, Yu, S, Zhou, W, Grobler, M & Keshtiar, H 2020, 'Blockchain for secure location verification', Journal of Parallel and Distributed Computing, vol. 136, pp. 40-51.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Inc. In location-sensitive applications, dishonest users may submit fake location claims to illegally access a service or obtain benefit. To address this issue, a number of location proof mechanisms have been proposed in literature. However, they confront various security and privacy challenges, including Prover–Prover collusions (Terrorist Frauds), Prover–Witness collusions, and location privacy threats. In this paper, we utilize the unique features of the blockchain technology to design a decentralized scheme for location proof generation and verification. In the proposed scheme, a user who needs a location proof (called a prover) broadcasts a request to the neighbor devices through a short-range communication interface, e.g. Bluetooth. Those neighbor devices that decide to respond (called witnesses) start to authenticate the requesting user. We integrate an incentive mechanism into the proposed scheme to reward such witnesses. Upon successful authentication, a transaction is generated as a location proof and is broadcast onto a peer-to-peer network where it can be picked up by verifiers for final verification. Our security analysis shows that the proposed scheme achieves a reliable performance against Prover–Prover and Prover–Witness collusions. Moreover, our prototype implementation on the Android platform shows that the proposed scheme outperforms other currently deployed location proof schemes.
Oberst, S, Halkon, B, Ji, J & Brown, T 2020, 'Preface', Vibration Engineering for a Sustainable Future: Active and Passive Noise and Vibration Control, Vol. 1, vol. 1, pp. v-vi.
Oberst, S, Lai, JCS, Martin, R, Halkon, BJ, Saadatfar, M & Evans, TA 2020, 'Revisiting stigmergy in light of multi-functional, biogenic, termite structures as communication channel', Computational and Structural Biotechnology Journal, vol. 18, pp. 2522-2534.
View/Download from: Publisher's site
View description>>
Termite mounds are fascinating because of their intriguing composition of numerous geometric shapes and materials. However, little is known about these structures, or of their functionalities. Most research has been on the basic composition of mounds compared with surrounding soils. There has been some targeted research on the thermoregulation and ventilation of the mounds of a few species of fungi-growing termites, which has generated considerable interest from human architecture. Otherwise, research on termite mounds has been scattered, with little work on their explicit properties. This review is focused on how termites design and build functional structures as nest, nursery and food storage; for thermoregulation and climatisation; as defence, shelter and refuge; as a foraging tool or building material; and for colony communication, either as indirect communication (stigmergy) or as an information channel essential for direct communication through vibrations (biotremology). Our analysis shows that systematic research is required to study the properties of these structures such as porosity and material composition. High-resolution computer tomography in combination with nonlinear dynamics and methods from computational intelligence may provide breakthroughs in unveiling the secrets of termite behaviour and their mounds. In particular, the examination of dynamic and wave propagation properties of termite-built structures, in combination with a detailed signal analysis of termite activities, is required to better understand the interplay between termites and their nest as a superorganism. How termite structures serve as defence in the form of disguising acoustic and vibration signals from detection by predators, and what role local and global vibration synchronisation plays in building, are open questions that need to be addressed to provide insights into how termites utilise materials to thrive in a world of predators and competitors.
Pan, Y, Tsang, IW, Singh, AK, Lin, C-T & Sugiyama, M 2020, 'Stochastic Multichannel Ranking with Brain Dynamics Preferences', Neural Computation, vol. 32, no. 8, pp. 1499-1530.
View/Download from: Publisher's site
View description>>
A driver's cognitive state of mental fatigue significantly affects his or her driving performance and, more importantly, public safety. Previous studies have leveraged reaction time (RT) as the metric for mental fatigue and aimed at estimating the exact value of RT using electroencephalogram (EEG) signals within a regression model. However, due to the easily corrupted and nonsmooth properties of RTs during data collection, methods focusing on predicting the exact value of a noisy measurement such as RT generally suffer from poor generalization performance. Considering that human RT is the reflection of brain dynamics preference (BDP) rather than a single regression output of EEG signals, we propose a novel channel-reliability-aware ranking (CArank) model for the multichannel ranking problem. CArank learns from BDPs using EEG data robustly and aims at preserving the ordering corresponding to RTs. In particular, we introduce a transition matrix to characterize the reliability of each channel in the EEG data, which helps in learning BDPs only from informative EEG channels. To handle large-scale EEG signals, we propose a stochastic generalized expectation maximization (SGEM) algorithm to update CArank in an online fashion. Comprehensive empirical analysis of EEG signals from 40 participants shows that CArank achieves substantial improvements in reliability while simultaneously detecting noisy or less informative EEG channels.
Pileggi, SF & Lamia, SA 2020, 'Climate Change TimeLine: An Ontology to Tell the Story so Far', IEEE Access, vol. 8, pp. 65294-65312.
View/Download from: Publisher's site
Pineda-Escobar, MA & Merigó, JM 2020, 'A bibliometric analysis of the Base/Bottom of the Pyramid research', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5537-5551.
View/Download from: Publisher's site
Prabhu, CSR, Jan, T, Prasad, M & Varadarajan, V 2020, 'FOG ANALYTICS - A SURVEY', Malaysian Journal of Computer Science, vol. 2020, no. Special Issue 1, pp. 140-151.
View/Download from: Publisher's site
View description>>
Fog computing has emerged as an essential alternative to the cloud because it sits near the edge, where the IoT devices and sensors are actually located. A fog server or fog node is placed near the IoT devices and connects to them directly (wired or wireless), giving it fast access to the data they produce. A cloud server, by contrast, may sit in a data centre near core network centres, far from the edge, which causes long transmission delays and high latency, especially when the data arrive as large-volume streams ('big data') from IoT devices and sensors such as cameras. Real-time response after performing the necessary analytics on the data generated by IoT devices and sensors is critical for applications such as health care and transportation. What, then, are the relevant techniques for fog analytics? In this paper we provide a brief survey of fog analytics techniques in stream data analytics, machine learning, deep learning, and game-theoretical adversarial learning.
Pradhan, B, Al-Najjar, HAH, Sameen, MI, Mezaal, MR & Alamri, AM 2020, 'Landslide Detection Using a Saliency Feature Enhancement Technique From LiDAR-Derived DEM and Orthophotos', IEEE Access, vol. 8, pp. 121942-121954.
View/Download from: Publisher's site
Pradhan, B, Al-Najjar, HAH, Sameen, MI, Tsang, I & Alamri, AM 2020, 'Unseen Land Cover Classification from High-Resolution Orthophotos Using Integration of Zero-Shot Learning and Convolutional Neural Networks', Remote Sensing, vol. 12, no. 10, pp. 1676-1676.
View/Download from: Publisher's site
View description>>
Zero-shot learning (ZSL) is an approach to classify objects unseen during the training phase, and it has been shown to be useful for real-world applications, especially when there is a lack of sufficient training data. Only a limited amount of work has been carried out on ZSL, especially in the field of remote sensing. This research investigates the use of a convolutional neural network (CNN) as a feature extraction and classification method for land cover mapping using high-resolution orthophotos. In the feature extraction phase, we used a CNN model with a single convolutional layer to extract discriminative features. In the second phase, we used class attributes learned from the Word2Vec model (pre-trained by Google News) to train a second CNN model that performed class signature prediction by using both the features extracted by the first CNN and class attributes during training and only the features during prediction. We trained and tested our models on datasets collected over two subareas in the Cameron Highlands (training dataset, first test dataset) and Ipoh (second test dataset) in Malaysia. Several experiments were conducted on the feature extraction and classification models regarding the main parameters, such as the network's layers and depth, number of filters, and the impact of Gaussian noise. The best models were selected using various accuracy metrics such as top-k categorical accuracy for k = [1,2,3], Recall, Precision, and F1-score. The best model for feature extraction achieved 0.953 F1-score, 0.941 precision, 0.882 recall for the training dataset; 0.904 F1-score, 0.869 precision, 0.949 recall for the first test dataset; and 0.898 F1-score, 0.870 precision, 0.838 recall for the second test dataset. The best model for classification achieved an average of 0.778 top-one, 0.890 top-two and 0.942 top-three accuracy, 0.798 F1-score, 0.766 recall and 0.838 precision for the first test dataset and 0.737 top-one, 0.906 top-two...
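The core zero-shot idea in the abstract, mapping image features into a class-attribute space and classifying an unseen class by its nearest attribute vector, can be sketched minimally. This is an illustration only: synthetic vectors stand in for the CNN features and Word2Vec attributes, and a least-squares linear map replaces the paper's second CNN; the class names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_attr = 16, 5
classes = ["forest", "urban", "crop", "road", "grass", "shrub", "bare", "water"]
seen = classes[:-1]                              # "water" is the unseen class
attrs = {c: rng.normal(size=n_attr) for c in classes}   # class attribute vectors
M = rng.normal(size=(n_attr, n_feat))            # hidden attribute->feature map

def features(cls, n):
    # Synthetic stand-in for CNN feature vectors of class `cls`.
    return attrs[cls] @ M + 0.1 * rng.normal(size=(n, n_feat))

# Fit a linear map W from feature space to attribute space on SEEN classes only.
X = np.vstack([features(c, 50) for c in seen])
Y = np.vstack([np.tile(attrs[c], (50, 1)) for c in seen])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict(x):
    # Classify by cosine similarity between the predicted signature x @ W
    # and every class attribute vector, including unseen ones.
    z = x @ W
    score = lambda c: z @ attrs[c] / (np.linalg.norm(z) * np.linalg.norm(attrs[c]))
    return max(classes, key=score)

acc = np.mean([predict(x) == "water" for x in features("water", 40)])
print(f"unseen-class accuracy: {acc:.2f}")
```

Because the attribute vectors of the seen classes span the attribute space, the linear map generalizes to the held-out class even though no "water" sample was used in fitting, which is the essence of the ZSL setting described above.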
Prakash, S, Joshi, S, Bhatia, T, Sharma, S, Samadhiya, D, Shah, RR, Kaiwartya, O & Prasad, M 2020, 'Characteristic of enterprise collaboration system and its implementation issues in business management', International Journal of Business Intelligence and Data Mining, vol. 16, no. 1, pp. 49-49.
View/Download from: Publisher's site
View description>>
© 2020 Inderscience Enterprises Ltd. Collaboration is an extremely useful area for most enterprise systems, particularly within Web 2.0 and Enterprise 2.0. Collaboration helps an enterprise collaboration system (ECS) achieve the desired goal by unifying the completed tasks of employees or people working on similar or the same tasks. Thus, collaboration systems have attracted significant attention. An ECS provides consistent, off-the-shelf support to processes and management within organisations. Management techniques for ECSs may be useful to the community that manages such systems for collaboration. In this context, this paper focuses on enterprise collaboration systems and answers critical questions related to ECSs, including: 1) what does collaboration really mean for an enterprise system; 2) how can collaboration help to improve internal processes and management of the system; 3) how is it helpful in improving interactions with customers and partners?
Pugalia, S, Prakash Sai, L & Cetindamar, DK 2020, 'Personal Networks’ Influence on Student Entrepreneurs: A Qualitative Study', International Journal of Innovation and Technology Management, vol. 17, no. 05, pp. 2050037-2050037.
View/Download from: Publisher's site
View description>>
This study focuses on students who have conceptualized the business idea during their academic studies and created the business venture during or within two years after graduation. The extant literature identifies social networks as a key factor not only for opportunity recognition but also for start-up survival. This study expands the knowledge about the roles of personal networks within the context of student entrepreneurs. By conducting a focus group, interviews, and a survey at a top-ranked technological institute of higher learning in India, this study analyzed the role played by personal networks in facilitating and enabling the creation of a venture by student entrepreneurs. Our findings indicate that (1) student entrepreneurs expect ten potential roles from their personal networks, (2) the hierarchy of these roles indicates the triggering impact of business networking with a final outcome of motivational support, and (3) business networking, venture financing and founding-team formation are the most important roles in the actual start-up phase.
Qiao, Y, Sun, X & Yu, N 2020, 'Local Equivalence of Multipartite Entanglement', IEEE Journal on Selected Areas in Communications, vol. 38, no. 3, pp. 568-574.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Let R be an invariant polynomial ring of a reductive group acting on a vector space, and let d be the minimum integer such that R is generated by those polynomials in R of degree no more than d. Upper bounding such d has been a long-standing open problem since the very initial study of invariant theory in the 19th century. Motivated by its significant role in characterizing multipartite entanglement, we study the invariant polynomial rings of local unitary groups-the direct product of unitary groups acting on the tensor product of Hilbert spaces, and local general linear groups-the direct product of general linear groups acting on the tensor product of Hilbert spaces. For these two group actions, we prove explicit upper bounds on the degrees needed to generate the corresponding invariant polynomial rings. On the other hand, systematic methods are provided to construct all homogeneous polynomials that are invariant under these two groups for any fixed degree. Thus, our results can be regarded as a complete characterization of the invariant polynomial rings. As an interesting application, we show that multipartite entanglement is additive in the sense that two multipartite states are local unitary equivalent if and only if r-copies of them are local unitary equivalent for some r.
Qu, X, Yu, Y, Zhou, M, Lin, C-T & Wang, X 2020, 'Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: A reinforcement learning based approach', Applied Energy, vol. 257, pp. 114030-114030.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd It has been well recognized that human drivers' limits, heterogeneity, and selfishness substantially compromise the performance of our urban transport systems. In recent years, in order to address these deficiencies, urban transport systems have been transforming with the blossoming of key vehicle technology innovations, most notably connected and automated vehicles. In this paper, we develop a car-following model for electric, connected and automated vehicles based on reinforcement learning, with the aim of dampening traffic oscillations (stop-and-go traffic waves) caused by human drivers and improving electric energy consumption. Compared to classical modelling approaches, the proposed reinforcement learning based model significantly reduces the modelling constraints and has the capability of self-learning and self-correction. Experiment results demonstrate that the proposed model improves travel efficiency by reducing the negative impact of traffic oscillations, and it can also reduce the average electric energy consumption.
Qu, Y, Gao, L, Luan, TH, Xiang, Y, Yu, S, Li, B & Zheng, G 2020, 'Decentralized Privacy Using Blockchain-Enabled Federated Learning in Fog Computing', IEEE Internet of Things Journal, vol. 7, no. 6, pp. 5171-5183.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. As the extension of cloud computing and a foundation of IoT, fog computing is experiencing fast prosperity because of its potential to mitigate some troublesome issues, such as network congestion, latency, and local autonomy. However, privacy issues and the subsequent inefficiency are dragging down the performance of fog computing. The majority of existing works hardly consider a reasonable balance between them while suffering from poisoning attacks. To address these issues, we propose a novel blockchain-enabled federated learning (FL-Block) scheme to close the gap. FL-Block allows the local learning updates of end devices to be exchanged with a blockchain-based global learning model, which is verified by miners. Built upon this, FL-Block enables autonomous machine learning without any centralized authority to maintain the global model, coordinating instead via the Proof-of-Work consensus mechanism of the blockchain. Furthermore, we analyze the latency performance of FL-Block and derive the optimal block generation rate by taking communication and consensus delays and computation cost into consideration. Extensive evaluation results show the superior performance of FL-Block in terms of privacy protection, efficiency, and resistance to poisoning attacks.
Qu, Y, Yu, S, Zhou, W & Tian, Y 2020, 'GAN-Driven Personalized Spatial-Temporal Private Data Sharing in Cyber-Physical Social Systems', IEEE Transactions on Network Science and Engineering, vol. 7, no. 4, pp. 2576-2586.
View/Download from: Publisher's site
Qu, Y, Zhang, J, Li, R, Zhang, X, Zhai, X & Yu, S 2020, 'Generative adversarial networks enhanced location privacy in 5G networks', Science China Information Sciences, vol. 63, no. 12, p. 220303.
View/Download from: Publisher's site
View description>>
5G networks, as the most up-to-date communication platforms, are booming. Meanwhile, increasing volumes of sensitive data, especially location information, are ceaselessly being generated and shared over 5G networks for various purposes. Location and trajectory information in published data has always courted, and will keep courting, risks and attacks by malicious adversaries. Therefore, privacy leakage threats remain when simply sharing the original data, especially data with location information, due to the short coverage range of 5G signal towers. To better address these issues, we propose a generative adversarial network (GAN) enhanced location privacy protection model to cloak location and even trajectory information. We use posterior sampling to generate a subset of the data, which is proven to comply with differential privacy requirements, on the end-device side. After that, a data augmentation algorithm modified from the classic GAN is devised to generate a series of privacy-preserving full-sized synthetic data on the central-server side. With synthetic data generated from a real-world dataset, we demonstrate the superiority of the proposed model in terms of location privacy protection, data utility, and prediction accuracy.
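The GAN-based mechanism itself is beyond a short sketch, but the kind of location cloaking the abstract targets can be illustrated with the classic planar-Laplace ("geo-indistinguishability") mechanism, used here purely as a simple stand-in, not as the authors' method; the coordinates and epsilon value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def cloak(xy, eps=0.05):
    # Planar Laplace noise: radius ~ Gamma(shape=2, scale=1/eps), angle
    # uniform on [0, 2*pi). `xy` is a location in planar coordinates
    # (e.g. metres in a local grid); expected displacement is 2/eps.
    r = rng.gamma(shape=2.0, scale=1.0 / eps)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return xy + r * np.array([np.cos(theta), np.sin(theta)])

true_loc = np.array([120.0, 80.0])
noisy = [cloak(true_loc) for _ in range(2000)]
mean_shift = np.mean([np.linalg.norm(p - true_loc) for p in noisy])
print(f"expected displacement ~ 2/eps = 40; observed: {mean_shift:.1f}")
```

Smaller eps yields larger displacement and stronger indistinguishability between nearby locations, which is the same privacy/utility trade-off the GAN-based model negotiates with learned synthetic data.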
Quiroz, JC, Laranjo, L, Kocaballi, AB, Briatore, A, Berkovsky, S, Rezazadegan, D & Coiera, E 2020, 'Identifying relevant information in medical conversations to summarize a clinician-patient encounter', Health Informatics Journal, vol. 26, no. 4, pp. 2906-2914.
View/Download from: Publisher's site
View description>>
To inform the development of automated summarization of clinical conversations, this study sought to estimate the proportion of doctor-patient communication in general practice (GP) consultations used for generating a consultation summary. Two researchers with a medical degree read the transcripts of 44 GP consultations and highlighted the phrases to be used for generating a summary of the consultation. For all consultations, less than 20% of all words in the transcripts were needed for inclusion in the summary. On average, 9.1% of all words in the transcripts, 26.6% of all medical terms, and 27.3% of all speaker turns were highlighted. The results indicate that communication content used for generating a consultation summary makes up a small portion of GP consultations, and automated summarization solutions—such as digital scribes—must focus on identifying the 20% relevant information for automatically generating consultation summaries.
Rafique, W, Zhao, X, Yu, S, Yaqoob, I, Imran, M & Dou, W 2020, 'An Application Development Framework for Internet-of-Things Service Orchestration', IEEE Internet of Things Journal, vol. 7, no. 5, pp. 4543-4556.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Application development for the Internet of Things (IoT) poses immense challenges due to the lack of standard development frameworks, tools, and techniques to assist end users in dealing with the complexity of IoT systems during application development. These challenges motivate the use of model-driven development (MDD) along with the representational state transfer (REST) architecture to develop IoT applications, supporting model generation at different abstraction levels while generating software implementation artifacts for heterogeneous platforms and ensuring loose coupling in complex IoT systems. This article proposes an IoT application development framework, named IADev, which uses attribute-driven design and MDD to address the above-mentioned challenges. This framework is composed of two major steps: iterative architecture development using attribute-driven design, and generating models to guide the transformation using MDD. IADev uses attribute-driven design to transform the requirements into a solution architecture by considering the concerns of all involved stakeholders, and then MDD metamodels are generated to hierarchically transform the design components into the software artifacts. We evaluate IADev for a smart vehicle scenario in an intelligent transportation system to generate an executable implementation code for a real-world system. The case-study experiments show that IADev achieves higher participant satisfaction for IoT application development and service orchestration compared to conventional approaches. Finally, we propose an architecture that uses IADev with the Siemens IoT cloud platform for service orchestration in industrial IoT.
Razzak, I, Saris, RA, Blumenstein, M & Xu, G 2020, 'Integrating joint feature selection into subspace learning: A formulation of 2DPCA for outliers robust feature selection', Neural Networks, vol. 121, pp. 441-451.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Since principal component analysis and its variants are sensitive to outliers, which affect their performance and applicability in the real world, several variants have been proposed to improve robustness. However, most existing methods are still sensitive to outliers and are unable to select useful features. To overcome the sensitivity of PCA to outliers, in this paper we introduce two-dimensional outliers-robust principal component analysis (ORPCA) by imposing joint constraints on the objective function. ORPCA relaxes the orthogonal constraints and penalizes the regression coefficient; thus, it selects important features and ignores the same features that exist in other principal components. The squared Frobenius norm is known to be sensitive to outliers, so we devise an alternative way to derive the objective function. Experimental results on four publicly available benchmark datasets show the effectiveness of joint feature selection and better performance compared to state-of-the-art dimensionality-reduction methods.
Razzak, I, Zafar, K, Imran, M & Xu, G 2020, 'Randomized nonlinear one-class support vector machines with bounded loss function to detect of outliers for large scale IoT data', Future Generation Computer Systems, vol. 112, pp. 715-723.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. Exponential growth of large-scale data in the industrial Internet of Things is evident due to the enormous deployment of IoT data-acquisition devices. Detecting unusual patterns in large-scale IoT data is an important though challenging task. Recently, one-class support vector machines have been used extensively for anomaly detection. They try to find an optimal hyperplane in high-dimensional data that best separates the data from anomalies with maximum margin. However, the hinge loss of traditional one-class support vector machines is unbounded, so outliers can cause a large loss that degrades anomaly-detection performance. Furthermore, existing methods are computationally complex for larger data. In this paper, we present a novel anomaly detection method for large-scale data that uses randomized nonlinear features in support vector machines with a bounded loss function, rather than finding optimized support vectors with an unbounded loss function. Extensive experimental evaluation on ten benchmark datasets shows the robustness of the proposed approach against outliers, with accuracies of 0.8239, 0.7921, 0.7501, 0.6711, 0.6692, 0.4789, 0.6462, 0.6812, 0.7271 and 0.7873 for the Gas Sensor Array, Human Activity Recognition, Parkinson's, Hepatitis, Breast Cancer, Blood Transfusion, Heart, ILPD and Wholesale Customers datasets, respectively. In addition, the introduction of randomized nonlinear features considerably decreases the computational complexity from O(N^3) to O(Bkn) and the space complexity from O(N^2) to O(Bkn), which makes the method very attractive for larger datasets.
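The "randomized nonlinear features" referred to above are in the spirit of the well-known random Fourier features that approximate an RBF kernel. As a simplified stand-in (the actual method trains a one-class SVM with a bounded loss; here anomalies are scored by distance to the centre of the normal data in the randomized feature space, and all data and parameters are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

def rff(X, D=200, gamma=0.1, seed=0):
    # Random Fourier features: z(x) = sqrt(2/D) * cos(x @ W + b), with
    # W ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi), so that
    # z(x) . z(y) ~ exp(-gamma * |x - y|^2).
    r = np.random.default_rng(seed)
    W = r.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = r.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

normal = rng.normal(size=(500, 3))               # inlier training data
outliers = rng.normal(loc=6.0, size=(20, 3))     # far-away anomalies

Z = rff(normal)
center = Z.mean(axis=0)
radius = np.quantile(np.linalg.norm(Z - center, axis=1), 0.95)

def is_outlier(X):
    # Same seed => the same random projection is reused at test time.
    return np.linalg.norm(rff(X) - center, axis=1) > radius

detected = is_outlier(outliers).mean()
false_alarms = is_outlier(normal).mean()
print(f"detected: {detected:.2f}, false alarms: {false_alarms:.2f}")
```

The point of the randomization is the complexity gain claimed in the abstract: the explicit D-dimensional map replaces kernel evaluations over all N training points, so training and scoring scale with the feature dimension rather than with N.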
Razzak, MI, Imran, M & Xu, G 2020, 'Big data analytics for preventive medicine', Neural Computing and Applications, vol. 32, no. 9, pp. 4417-4451.
View/Download from: Publisher's site
View description>>
© 2019, Springer-Verlag London Ltd., part of Springer Nature. Medical data is one of the most rewarding and yet most complicated data to analyze. How can healthcare providers use modern data analytics tools and technologies to analyze and create value from complex data? Data analytics promises to efficiently discover valuable patterns by analyzing large amounts of unstructured, heterogeneous, non-standard and incomplete healthcare data. It not only forecasts but also helps in decision making, and is increasingly seen as a breakthrough in the ongoing advancement whose goal is to improve the quality of patient care and reduce healthcare costs. The aim of this study is to provide a comprehensive and structured overview of the extensive research on the advancement of data analytics methods for disease prevention. This review first introduces disease prevention and its challenges, followed by traditional prevention methodologies. We summarize state-of-the-art data analytics algorithms used for the classification of disease, clustering (unusually high incidence of a particular disease), anomaly detection (detection of disease) and association, as well as their respective advantages, drawbacks and guidelines for the selection of a specific model, followed by a discussion of recent developments and successful applications of disease prevention methods. The article concludes with open research challenges and recommendations.
Ren, W, Hu, J, Zhu, T, Ren, Y & Choo, K-KR 2020, 'A flexible method to defend against computationally resourceful miners in blockchain proof of work', Information Sciences, vol. 507, pp. 161-171.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Inc. Blockchain is well known as a decentralized and distributed public digital ledger, and is currently used by most cryptocurrencies to record transactions. One of the fundamental differences between blockchain and traditional distributed systems is that blockchain's decentralization relies on consensus protocols, such as proof of work (PoW). However, computation systems, such as application-specific integrated circuit (ASIC) machines, have recently emerged that are specifically designed for PoW computation and may compromise a decentralized system within a short amount of time. These computationally resourceful miners challenge the very nature of blockchain, with potentially serious consequences. Therefore, in this paper, we propose a general and flexible PoW method that enforces memory usage. Specifically, the proposed method blocks computationally resourceful miners and retains the previous design logic without requiring one to replace the original hash function. We also propose the notion of a memory-intensive function (MIF) with a memory usage parameter k (kMIF). Our scheme comprises three algorithms that construct a kMIF hash by invoking any available (non-kMIF) hash function to protect against ASICs, and then thwart the pre-computation of hash results over a nonce. We then design experiments to evaluate memory changes in these three algorithms, and the findings demonstrate that enforcing memory usage in a blockchain can be an effective defense against computationally resourceful miners.
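The general shape of such a scheme, though not the paper's actual kMIF algorithms, can be sketched as follows: a buffer of k cells is filled by chaining an ordinary hash (SHA-256 is kept, not replaced), then read back in a data-dependent order, so a miner who stores fewer than k cells must recompute them; the parameter values here are toy choices.

```python
import hashlib

def kmif_hash(header: bytes, nonce: int, k: int = 256) -> bytes:
    # Phase 1: fill k memory cells, each depending on the previous one.
    cells = [hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()]
    for _ in range(k - 1):
        cells.append(hashlib.sha256(cells[-1]).digest())
    # Phase 2: data-dependent reads; dropping cells forces recomputation.
    acc = cells[-1]
    for _ in range(k):
        idx = int.from_bytes(acc[:4], "big") % k
        acc = hashlib.sha256(acc + cells[idx]).digest()
    return acc

def mine(header: bytes, difficulty_bits: int = 8) -> int:
    # Standard PoW loop: find a nonce whose kMIF-style hash is below target.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(kmif_hash(header, nonce), "big") >= target:
        nonce += 1
    return nonce

nonce = mine(b"example-block-header")
print("nonce found:", nonce)
```

Raising k scales the memory footprint per hash evaluation without touching the underlying hash function, which matches the paper's stated design goal of retaining the original hash while pricing out memory-starved ASIC pipelines.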
Sanders, YR, Berry, DW, Costa, PCS, Tessler, LW, Wiebe, N, Gidney, C, Neven, H & Babbush, R 2020, 'Compilation of Fault-Tolerant Quantum Heuristics for Combinatorial Optimization', PRX Quantum, vol. 1, no. 2.
View/Download from: Publisher's site
View description>>
Here we explore which heuristic quantum algorithms for combinatorial optimization might be most practical to try out on a small fault-tolerant quantum computer. We compile circuits for several variants of quantum-accelerated simulated annealing including those using qubitization or Szegedy walks to quantize classical Markov chains and those simulating spectral-gap-amplified Hamiltonians encoding a Gibbs state. We also optimize fault-tolerant realizations of the adiabatic algorithm, quantum-enhanced population transfer, the quantum approximate optimization algorithm, and other approaches. Many of these methods are bottlenecked by calls to the same subroutines; thus, optimized circuits for those primitives should be of interest regardless of which heuristic is most effective in practice. We compile these bottlenecks for several families of optimization problems and report for how long and for what size systems one can perform these heuristics in the surface code given a range of resource budgets. Our results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup achieves an advantage over classical algorithms on modest superconducting qubit surface code processors without significant improvements in the implementation of the surface code. For instance, under quantum-favorable assumptions (e.g., that the quantum algorithm requires exactly quadratically fewer steps), our analysis suggests that quantum-accelerated simulated annealing requires roughly a day and a million physical qubits to optimize spin glasses that could be solved by classical simulated annealing in about 4 CPU-minutes.
Sarin, S, Haon, C, Belkhouja, M, Mas-Tur, A, Roig-Tierno, N, Sego, T, Porter, A, Merigó, JM & Carley, S 2020, 'Uncovering the knowledge flows and intellectual structures of research in Technological Forecasting and Social Change: A journey through history', Technological Forecasting and Social Change, vol. 160, pp. 120210-120210.
View/Download from: Publisher's site
Sarker, PC, Guo, Y, Lu, HY & Zhu, JG 2020, 'A generalized inverse Preisach dynamic hysteresis model of Fe-based amorphous magnetic materials', Journal of Magnetism and Magnetic Materials, vol. 514, pp. 167290-167290.
View/Download from: Publisher's site
View description>>
Fe-based amorphous magnetic materials are attracting more and more attention in the application of low- and medium-frequency transformers due to their favorable properties of low core loss and high saturation magnetic flux density. Accurate modelling of their static and dynamic characteristics is required for the analysis and design optimization of low- and medium-frequency transformers. In particular, for numerical analysis using the vectorial magnetic potential, an inverse magnetic hysteresis model is needed to predict the magnetic field strength from the magnetic flux density. When the excitation varies with time, the magnetic hysteresis model must be able to predict the dynamic hysteresis characteristics. This paper presents a generalized inverse Preisach dynamic hysteresis model for the dynamic characterization of Fe-based magnetic materials. The model incorporates the reversible magnetization and magnetization-dependent hysteresis, as well as all core loss components, including the hysteresis, eddy current, and excess losses. The proposed model can accurately predict the magnetic field strength from the magnetic flux density and hence accurate core losses. The predicted results are verified by experimental measurements.
Saxena, A, Pare, S, Meena, MS, Gupta, D, Gupta, A, Razzak, I, Lin, C-T & Prasad, M 2020, 'A Two-Phase Approach for Semi-Supervised Feature Selection', Algorithms, vol. 13, no. 9, pp. 215-215.
View/Download from: Publisher's site
View description>>
This paper proposes a novel approach for selecting a subset of features in semi-supervised datasets where only some of the patterns are labeled. The whole process is completed in two phases. In the first phase, i.e., Phase-I, the whole dataset is divided into two parts: the first part, which contains labeled patterns, and the second part, which contains unlabeled patterns. In the first part, a small number of features are identified using well-known maximum relevance (computed on the first part) and minimum redundancy (computed on the whole dataset) feature selection approaches based on the correlation coefficient. The subset of these features that produces high classification accuracy with any supervised classifier trained on the labeled patterns is selected for later processing. In the second phase, i.e., Phase-II, the patterns belonging to the first and second parts are clustered separately into the available number of classes of the dataset. For each cluster of the first part, the majority class of its patterns, which is already known, is taken as the class of that cluster. Pairs of cluster centroids from the first and second parts are then formed: each centroid of the second part is paired with its nearest centroid in the first part. As the class of the first-part centroid is known, the same class can be assigned to the paired centroid of the second part, whose class is unknown. If the actual classes of the patterns in the second part are known, they can be used to test the classification accuracy on that part. The proposed two-phase approach performs well in terms of classification accuracy and the number of features selected on the given benchmark datasets.
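The two phases described above can be sketched on toy data. This is a simplified illustration, not the authors' algorithm: Phase-I keeps only the maximum-relevance step (the minimum-redundancy criterion and the supervised-classifier check are omitted), and the data, class count, and feature dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-class data: features 0 and 1 are informative, 2-5 are noise.
def make(n, cls):
    x = rng.normal(size=(n, 6))
    x[:, :2] += 3.0 * cls
    return x

Xl = np.vstack([make(40, 0), make(40, 1)]); yl = np.repeat([0, 1], 40)  # labeled part
Xu = np.vstack([make(60, 0), make(60, 1)]); yu = np.repeat([0, 1], 60)  # unlabeled part (yu = held-out truth)

# Phase-I: rank features by |correlation with the label| on the labeled part.
rel = np.abs([np.corrcoef(Xl[:, j], yl)[0, 1] for j in range(Xl.shape[1])])
keep = np.argsort(rel)[-2:]                      # top-2 relevant features

def kmeans(X, k=2, iters=25):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return C, lab

# Phase-II: cluster each part, name labeled clusters by majority vote,
# then transfer each name to the nearest centroid of the unlabeled part.
Cl, ll = kmeans(Xl[:, keep])
Cu, lu = kmeans(Xu[:, keep])
names = [np.bincount(yl[ll == j]).argmax() for j in range(2)]
pair = [names[np.argmin(((Cu[j] - Cl) ** 2).sum(-1))] for j in range(2)]
pred = np.array([pair[j] for j in lu])
print("accuracy on the unlabeled part:", (pred == yu).mean())
```

The centroid-pairing step is what lets the known cluster labels of the first part propagate to the unlabeled second part, exactly the mechanism the abstract describes.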
Shi, K, Gong, C, Lu, H, Zhu, Y & Niu, Z 2020, 'Wide-grained capsule network with sentence-level feature to detect meteorological event in social network', Future Generation Computer Systems, vol. 102, pp. 323-332.
View/Download from: Publisher's site
Shi, K, Lu, H, Zhu, Y & Niu, Z 2020, 'Automatic generation of meteorological briefing by event knowledge guided summarization model', Knowledge-Based Systems, vol. 192, pp. 105379-105379.
View/Download from: Publisher's site
Shukla, N, Merigó, JM, Lammers, T & Miranda, L 2020, 'Half a century of computer methods and programs in biomedicine: A bibliometric analysis from 1970 to 2017', Computer Methods and Programs in Biomedicine, vol. 183, pp. 105075-105075.
View/Download from: Publisher's site
View description>>
© 2019 Background and Objective: Computer Methods and Programs in Biomedicine (CMPB) is a leading international journal that presents developments in computing methods and their application in biomedical research. The journal published its first issue in 1970 and celebrates its 50th anniversary in 2020. Motivated by this event, this article presents a bibliometric analysis of the publications of the journal during this period (1970–2017). Methods: The objective is to identify the leading trends occurring in the journal by analysing the most cited papers, keywords, authors, institutions and countries. To do so, the study uses the Web of Science Core Collection database. Additionally, the work presents a graphical mapping of the bibliographic information by using the visualization of similarities (VOS) viewer software. This is done to analyze bibliographic coupling, co-citation and co-occurrence of keywords. Results: CMPB is identified as a leading and core journal for biomedical researchers. The journal is strongly connected to IEEE Transactions on Biomedical Engineering and IEEE Transactions on Medical Imaging. The paper by Wang, Jacques and Zheng (published in 1995) is its most cited document. The top author in this journal is James Geoffrey Chase and the top contributing institution is Uppsala University (Sweden). Most of the papers in CMPB are from the USA, followed by the UK and Italy. China and Taiwan are the only Asian countries to appear in the top 10 publishing in CMPB. A keyword co-occurrence analysis revealed strong co-occurrences for classification, picture archiving and communication system (PACS), heart rate variability, survival analysis and simulation. Keyword analysis for the last decade revealed that machine learning for a variety of healthcare problems (including image processing and analysis) dominated other research fields in CMPB. Conclusions: It can be concluded that CMPB is a world-renowned publication outlet for biomedical re...
Si, Y, Li, F, Duan, K, Tao, Q, Li, C, Cao, Z, Zhang, Y, Biswal, B, Li, P, Yao, D & Xu, P 2020, 'Predicting individual decision-making responses based on single-trial EEG', NeuroImage, vol. 206, pp. 116333-116333.
View/Download from: Publisher's site
View description>>
Decision-making plays an essential role in the interpersonal interactions and cognitive processing of individuals. There has been increasing interest in being able to predict an individual's decision-making response (i.e., acceptance or rejection). We proposed an electroencephalogram (EEG)-based computational intelligence framework to predict individual responses. Specifically, the discriminative spatial network pattern (DSNP), a supervised learning approach, was applied to single-trial EEG data to extract the DSNP feature from the single-trial brain network. A linear discriminant analysis (LDA) trained on the DSNP features was then used to predict the individual response trial-by-trial. To verify the performance of the proposed DSNP, we recruited two independent subject groups and recorded the EEGs using two types of EEG systems. The trial-by-trial predictors achieved an accuracy of 0.88 ± 0.09 for the first dataset and 0.90 ± 0.10 for the second dataset. These prediction performances suggest that individual responses can be predicted trial-by-trial using the specific pattern of single-trial EEG networks, and our proposed method has the potential to establish a biologically inspired artificial intelligence decision system.
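The pipeline described above (discriminative features fed to a linear discriminant for trial-by-trial prediction) can be illustrated with a minimal Fisher-LDA sketch. The features below are random stand-ins for DSNP features, not real EEG data; all names and values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for DSNP features extracted from single-trial
# EEG networks: two Gaussian clouds for "accept" vs "reject" trials.
X_accept = rng.normal(loc=1.0, scale=0.8, size=(60, 5))
X_reject = rng.normal(loc=-1.0, scale=0.8, size=(60, 5))
X = np.vstack([X_accept, X_reject])
y = np.array([1] * 60 + [0] * 60)

# Fisher linear discriminant: w = Sw^-1 (mu1 - mu0),
# with the decision threshold at the midpoint projection.
mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
Sw = np.cov(X[y == 1], rowvar=False) + np.cov(X[y == 0], rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu1 + mu0) / 2

# Trial-by-trial prediction: project each trial and compare to threshold.
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(round(accuracy, 2))
```

On well-separated synthetic clouds like these, the accuracy is near 1.0; the paper's reported 0.88–0.90 reflects the much harder real EEG setting.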
Sikandar, A, Agrawal, R, Tyagi, MK, Rao, ALN, Prasad, M & Binsawad, M 2020, 'Toward green computing in wireless sensor networks: prediction-oriented distributed clustering for non-uniform node distribution', EURASIP Journal on Wireless Communications and Networking, vol. 2020, no. 1.
View/Download from: Publisher's site
View description>>
Recently, researchers and practitioners in wireless sensor networks (WSNs) have been focusing on energy-oriented communication and computing for next-generation smaller and tiny wireless devices. These tiny sensor-enabled devices are used for sensing, computing, and wireless communication. Hundreds or thousands of WSN sensors monitor specific activities and report events via wireless communication. Because the devices are powered by small batteries and work independently in distributed environments, the maximum lifetime of the network they constitute is limited. Given the non-uniform distribution of sensor-enabled devices in next-generation mobility-centric WSN environments, energy consumption is imbalanced among the different sensors across the network. Toward this end, this paper proposes a cluster-oriented routing protocol termed prediction-oriented distributed clustering (PODC) for WSNs with non-uniform sensor distribution. A network model is presented, and the PODC mechanism is divided into two activities: cluster set-up and the steady-state activity. The cluster set-up activity is further described in four subcategories. The proposed protocol is compared with individual sensor energy awareness and distributed networking mode of clustering (EADC) and scheduled sensor activity-based individual sensor energy awareness and distributed networking mode of clustering (SA-ADC). The experimental evaluation considers metrics including the overall network lifetime and the individual energy consumption of nodes in realistic next-generation WSN environments. The results attest to the reduced energy consumption of the proposed PODC framework compared to the literature. Therefore, the framework will be more applicable for ...
Singh, AK, Chen, H-T, Gramann, K & Lin, C-T 2020, 'Intraindividual Completion Time Modulates the Prediction Error Negativity in a Virtual 3-D Object Selection Task', IEEE Transactions on Cognitive and Developmental Systems, vol. 12, no. 2, pp. 354-360.
View/Download from: Publisher's site
View description>>
IEEE A prediction error negativity (PEN) can be observed in the human electroencephalogram when there is a mismatch between the predicted and the perceived changes in the environment. Our previous study using a virtual object selection task demonstrated an impact of the level of avatar realism on the PEN, reflecting a mismatch between visual and proprioceptive feedback about the object selection. To investigate the role of temporal integration of different sensory information on the PEN, this study examined the impact of task completion times on the PEN amplitude, using the same virtual object selection task. Trials from each participant were divided into slow trials and fast trials based on the task completion time, and their associated PEN amplitudes were separately aggregated and analyzed. The results show that PEN amplitudes are significantly more pronounced in slow trials than in fast trials. This finding suggests that task completion times modulate the PEN amplitude: a long task completion time allowed for a better integration of information from both visual and proprioceptive systems as the basis to detect a mismatch between the expected hand trajectory during a reaching motion and the perceived visual feedback in the virtual environment.
Singh, AK, Wang, Y-K, King, J-T & Lin, C-T 2020, 'Extended Interaction With a BCI Video Game Changes Resting-State Brain Activity', IEEE Transactions on Cognitive and Developmental Systems, vol. 12, no. 4, pp. 809-823.
View/Download from: Publisher's site
Sohaib, O, Hussain, W, Asif, M, Ahmad, M & Mazzara, M 2020, 'A PLS-SEM Neural Network Approach for Understanding Cryptocurrency Adoption.', IEEE Access, vol. 8, no. 1, pp. 13138-13150.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The majority of previous research on new technology acceptance has been conducted with single-step Structural Equation Modeling (SEM) based methods. The primary purpose of this study is to enhance new technology acceptance research with the Artificial Neural Network (ANN) method to enable more precise and in-depth results than the single-step SEM method. This study measures the relation between the technology readiness dimensions (optimism, innovativeness, discomfort, insecurity), technology acceptance (perceived ease of use and perceived usefulness), and the intention to use cryptocurrency, such as Bitcoin. The contributions of this study include the use of a multi-analytical approach combining Partial Least Squares–Structural Equation Modeling (PLS-SEM) and Artificial Neural Network (ANN) analysis. First, PLS-SEM was applied to assess which factors have a significant influence on the intention to use cryptocurrency. Second, an ANN was employed to rank the relative influence of the significant predictor variables obtained from the PLS-SEM. The findings of the two-step PLS-SEM and ANN approach confirm that the ANN further verifies the results obtained by the PLS-SEM analysis. Moreover, an ANN is capable of modelling complex linear and non-linear relationships with higher predictive accuracy than SEM methods. Finally, an Importance-Performance Map Analysis (IPMA) of the PLS-SEM results provides a more specific understanding of each factor's importance and performance.
Song, Y, Lu, J, Lu, H & Zhang, G 2020, 'Fuzzy Clustering-Based Adaptive Regression for Drifting Data Streams', IEEE Transactions on Fuzzy Systems, vol. 28, no. 3, pp. 544-557.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Current models and algorithms are increasingly required to learn in nonstationary environments because the phenomenon of concept drift (or pattern shift) may occur; that is, the assumption that data are identically distributed may be invalid in data streams. Once the data pattern changes, a well-trained model built on the previous, now obsolete data cannot provide an accurate prediction for future data. To obtain reliable predictions, it is important to understand the existing patterns in the data stream and to know which pattern the current examples belong to during the modeling process. However, it is ambiguous to assign an example to a certain pattern in many real-world cases. In this paper, we propose a novel adaptive regression approach, called FUZZ-CARE, to dynamically recognize, train, and store patterns, and to assign the membership degrees of upcoming examples to these patterns. Membership degrees are represented by the membership matrix obtained from kernel fuzzy c-means clustering, which is trained and adapted synchronously with the regression parameters. Rather than designing a complicated procedure to continuously chase the newest pattern, which is a common approach in the literature, FUZZ-CARE abstracts useful past information to help predict newly arrived examples. It thus effectively avoids the risk of insufficient training due to the lack of new data and improves prediction accuracy. Experiments on six synthetic datasets and 21 real-world datasets validate the high accuracy and robustness of our approach.
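The membership matrix at the heart of FUZZ-CARE comes from fuzzy c-means clustering. A minimal sketch of the standard membership computation is below; note it uses the plain Euclidean distance, whereas the paper uses a kernelised variant, and the stream values and pattern centres are invented toy data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D stream values drawn from two "patterns" (concepts).
x = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(5.0, 0.3, 50)])

centers = np.array([0.5, 4.5])   # hypothetical pattern centres
m = 2.0                          # fuzzifier

# Fuzzy c-means membership degrees:
#   u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
# (Euclidean variant; the paper replaces d with a kernel-induced distance.)
d = np.abs(x[:, None] - centers[None, :]) + 1e-12          # shape (n, c)
u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)

assert np.allclose(u.sum(axis=1), 1.0)   # memberships sum to one per example
print(u[0].round(3), u[-1].round(3))
```

Examples near a centre receive a membership degree close to 1 for that pattern, which is exactly the soft assignment FUZZ-CARE feeds into its per-pattern regressors.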
Sood, K, Karmakar, KK, Yu, S, Varadharajan, V, Pokhrel, SR & Xiang, Y 2020, 'Alleviating Heterogeneity in SDN-IoT Networks to Maintain QoS and Enhance Security', IEEE Internet of Things Journal, vol. 7, no. 7, pp. 5964-5975.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Software-defined networks (SDNs) offer unique and attractive solutions to challenging management issues in Internet of Things (IoT)-based large-scale multitechnology networks. SDN-IoT network collaboration is innovative and attractive but is expected to be extremely heterogeneous in future-generation IoT systems. For example, multitechnology networks, network externality, and node heterogeneity in SDN-IoT may seriously affect flow- or application-specific quality-of-service (QoS) requirements. Furthermore, heterogeneity highly influences security adoption in a network of interconnected IoT nodes. We observe that QoS and security are interdependent and nonnegligible factors; thus we emphasize that, in order to alleviate heterogeneity, it is essential to study both factors hand in hand. With this aim, we first discuss significant and reasonable cases to encourage researchers to study QoS and security integrally in order to alleviate heterogeneity at the SDN-IoT control plane. Second, we propose a framework which transforms m heterogeneous controllers into n homogeneous controller groups. The key metric of our observation and analysis is the SDN controller's response time. To validate our approach, we use a mathematical model and demonstrate a proof of concept (PoC) in a virtual SDN ecosystem. From the performance evaluation, we observe that the proposed framework significantly alleviates heterogeneity, which helps to maintain QoS and enhance security. This fundamental analysis will enable network security practitioners to deal with the heterogeneity, QoS, and security of SDN-IoT in more successful and promising ways.
Sood, S & Pattinson, H 2020, 'Globotics Driven Digital Transformation: A Bright Future for Internships, Digital Marketing and E-Commerce Education', INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE AND BUSINESS ADMINISTRATION, vol. 6, no. 3, pp. 20-28.
View/Download from: Publisher's site
View description>>
This paper introduces a new approach, globotics (Baldwin 2019), with the main focus directed towards the lack of skills in digital marketing and e-commerce. Globotics is assumed to provide insights for the adoption of a pedagogy of experiential learning. Furthermore, the adoption of globotics may potentially lead towards a brighter future for tertiary marketing education, as well as fulfil the diverse needs of Asia and Oceania regarding the acquisition of digital marketing talent. The authors conducted in-depth interviews with academics and practitioners in order to gain insight into the overall context of marketing practice. Upon reviewing the data, informants, recognizing its value, highlighted the differences between digital marketing and its counterpoint, traditional marketing. We assumed that tracking online search trends can help consolidate and feed back information on past search demand for digital marketing, social media marketing, e-commerce marketing and social commerce. An online service using globotics (Baldwin 2019) provides a promising approach to solving the problems of both the digital marketing curriculum and scarce talent by linking marketing educators and students with practitioners. Importantly, with globotics, marketing students as interns have an opportunity to take on tasks well beyond the previous undergraduate and postgraduate entry-level roles of the last century.
Srinivas, S, Gill, AQ & Roach, T 2020, 'Analytics-Enabled Adaptive Business Architecture Modeling.', Complex Syst. Informatics Model. Q., vol. 23, no. 23, pp. 23-43.
View/Download from: Publisher's site
Taghikhah, F, Voinov, A, Shukla, N & Filatova, T 2020, 'Exploring consumer behavior and policy options in organic food adoption: Insights from the Australian wine sector', Environmental Science & Policy, vol. 109, pp. 116-124.
View/Download from: Publisher's site
Tam, NT, Dung, DA, Hung, TH, Binh, HTT & Yu, S 2020, 'Exploiting relay nodes for maximizing wireless underground sensor network lifetime', Applied Intelligence, vol. 50, no. 12, pp. 4568-4585.
View/Download from: Publisher's site
View description>>
© 2020, Springer Science+Business Media, LLC, part of Springer Nature. A major challenge in wireless underground sensor networks is the signal attenuation originating from multi-environment transmission between underground sensor nodes and the above-ground base station. To overcome this issue, an efficient approach is to deploy a set of relay nodes above ground, thereby reducing transmission loss by shortening the transmission distance. However, this introduces several new challenges, including load balancing and transmission loss minimization. This paper tackles the problem of deploying relay nodes to reduce transmission loss under a load balancing constraint by proposing two approximation algorithms. The first algorithm is inspired by beam search, combined with a new selection scheme based on the Boltzmann distribution. The second algorithm aims to further improve the solutions obtained by the first by reducing the transmission loss. We observe that an optimal assignment between sensor nodes and the set of chosen relays can be found in polynomial time by reformulating part of the problem as a minimum-cost bipartite matching problem. Experimental results indicate that the proposed methods perform better than existing ones in most of our test instances while also reducing execution time.
Tanveer, M, Khanna, P, Prasad, M & Lin, CT 2020, 'Introduction to the Special Issue on Computational Intelligence for Biomedical Data and Imaging', ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 1s, pp. 1-4.
View/Download from: Publisher's site
Tanveer, M, Richhariya, B, Khan, RU, Rashid, AH, Khanna, P, Prasad, M & Lin, CT 2020, 'Machine Learning Techniques for the Diagnosis of Alzheimer’s Disease', ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 1s, pp. 1-35.
View/Download from: Publisher's site
View description>>
Alzheimer’s disease is an incurable neurodegenerative disease primarily affecting the elderly population. Efficient automated techniques are needed for its early diagnosis. Many novel approaches have been proposed by researchers for the classification of Alzheimer’s disease. However, to develop more efficient learning techniques, a better understanding of the work done on Alzheimer’s is needed. Here, we provide a review of 165 papers from 2005 to 2019 that use various feature extraction and machine learning techniques. The machine learning techniques are surveyed under three main categories: support vector machine (SVM); artificial neural network (ANN); and deep learning (DL) and ensemble methods. We present a detailed review of these three approaches for Alzheimer’s, with possible future directions.
Ubaid, A, Hussain, F & Charles, J 2020, 'Modeling Shipment Spot Pricing in the Australian Container Shipping Industry: Case of ASIA-OCEANIA trade lane', Knowledge-Based Systems, vol. 210, pp. 106483-106483.
View/Download from: Publisher's site
Ureta, FG, Pietroni, N & Zorin, D 2020, 'Reinforcement of General Shell Structures.', ACM Trans. Graph., vol. 39, pp. 153:1-153:1.
View/Download from: Publisher's site
Verhoeven, D, Musial, K, Palmer, S, Taylor, S, Abidi, S, Zemaityte, V & Simpson, L 2020, 'Controlling for openness in the male-dominated collaborative networks of the global film industry', PLOS ONE, vol. 15, no. 6, pp. e0234460-e0234460.
View/Download from: Publisher's site
View description>>
Studies of gender inequality in film industries have noted the persistence of male domination in creative roles (usually defined as director, producer, writer) and the slow pace of reform. Typical policy remedies are premised on aggregate counts of women as a proportion of overall industry participation. Network science offers an alternative way of identifying and proposing change mechanisms, as it puts emphasis on relationships instead of individuals. Preliminary work on applying network analysis to understand inequality in the film industry has been undertaken. However, in this study we offer a comprehensive approach that enables us not only to understand what inequality in the film industry looks like through the lens of network science but also how we can attempt to address it. We offer a data-driven simulation framework that investigates various what-if scenarios of network evolution. We then assess each of these scenarios with respect to its potential to address gender inequality in the film industry. As suggested by previous studies, inequality is exacerbated when industry networks are most closed. We review evidence from three different national film industries on network relationships in creative teams and identify a high proportion of men who only work with other men. In response to this observation, we test several mechanisms through which industry structures may generate higher levels of openness. Our results reveal that the most critical factor for improving network openness is not simply a statistical increase in the number of women in a network, nor the removal of men who do not work with women. The most likely behavioural changes to a network will involve the production of connections between women and powerful men.
Verma, R & Merigó, JM 2020, 'A New Decision Making Method Using Interval-Valued Intuitionistic Fuzzy Cosine Similarity Measure Based on the Weighted Reduced Intuitionistic Fuzzy Sets', Informatica, vol. 31, no. 2, pp. 399-433.
View/Download from: Publisher's site
View description>>
In this paper, we develop a new flexible method for interval-valued intuitionistic fuzzy decision-making problems with cosine similarity measure. We first introduce the interval-valued intuitionistic fuzzy cosine similarity measure based on the notion of the weighted reduced intuitionistic fuzzy sets. With this cosine similarity measure, we are able to accommodate the attitudinal character of decision-makers in the similarity measuring process. We study some of its essential properties and propose the weighted interval-valued intuitionistic fuzzy cosine similarity measure.
Further, the work uses the idea of GOWA operator to develop the ordered weighted interval-valued intuitionistic fuzzy cosine similarity (OWIVIFCS) measure based on the weighted reduced intuitionistic fuzzy sets. The main advantage of the OWIVIFCS measure is that it provides a parameterized family of cosine similarity measures for interval-valued intuitionistic fuzzy sets and considers different scenarios depending on the attitude of the decision-makers. The measure is demonstrated to satisfy some essential properties, which prepare the ground for applications in different areas. In addition, we define the quasi-ordered weighted interval-valued intuitionistic fuzzy cosine similarity (quasi-OWIVIFCS) measure. It includes a wide range of particular cases such as OWIVIFCS measure, trigonometric-OWIVIFCS measure, exponential-OWIVIFCS measure, radical-OWIVIFCS measure. Finally, the study uses the OWIVIFCS measure to develop a new decision-making method to solve real-world decision problems with interval-valued intuitionistic fuzzy information. A real-life numerical example of contractor selection is also given to demonstrate the effectiveness of the developed approach in solving real-life problems.
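For intuition, the classic cosine similarity for plain intuitionistic fuzzy sets, the building block that the OWIVIFCS family generalizes to interval-valued sets with ordered weighting, can be sketched as follows. The two sets here are invented toy data, and this is the unweighted, non-interval base case, not the paper's full measure.

```python
import numpy as np

# Two toy intuitionistic fuzzy sets over a 3-element universe,
# given as (membership mu, non-membership nu) pairs.
A = np.array([(0.7, 0.2), (0.4, 0.5), (0.9, 0.1)])
B = np.array([(0.6, 0.3), (0.5, 0.4), (0.8, 0.1)])

def ifs_cosine(A, B):
    """Classic cosine similarity for intuitionistic fuzzy sets:
    the per-element cosine of the (mu, nu) vectors, averaged."""
    num = (A * B).sum(axis=1)
    den = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    return (num / den).mean()

sim = ifs_cosine(A, B)
print(round(sim, 4))
```

The OWIVIFCS measure replaces the plain average with a GOWA-style ordered weighted aggregation over interval-valued elements, which is how the decision-maker's attitude enters the similarity score.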
Verma, R & Merigó, JM 2020, 'Multiple attribute group decision making based on 2-dimension linguistic intuitionistic fuzzy aggregation operators', Soft Computing, vol. 24, no. 22, pp. 17377-17400.
View/Download from: Publisher's site
Waheed, W, Deng, G & Liu, B 2020, 'Discrete Laplacian Operator and Its Applications in Signal Processing', IEEE Access, vol. 8, pp. 89692-89707.
View/Download from: Publisher's site
Wang, B, Li, T, Yan, Z, Zhang, G & Lu, J 2020, 'DeepPIPE: A distribution-free uncertainty quantification approach for time series forecasting', Neurocomputing, vol. 397, pp. 11-19.
View/Download from: Publisher's site
Wang, G, Lu, J, Choi, K-S & Zhang, G 2020, 'A Transfer-Based Additive LS-SVM Classifier for Handling Missing Data', IEEE Transactions on Cybernetics, vol. 50, no. 2, pp. 739-752.
View/Download from: Publisher's site
View description>>
IEEE The performance of a classifier might greatly deteriorate due to missing data. Many different techniques to handle this problem have been developed. In this paper, we solve the problem of missing data using a novel transfer learning perspective and show that when an additive least squares support vector machine (LS-SVM) is adopted, model transfer learning can be used to enhance the classification performance on incomplete training datasets. A novel transfer-based additive LS-SVM classifier is accordingly proposed. This method also simultaneously determines the influence of classification errors caused by each incomplete sample using a fast leave-one-out cross validation strategy, as an alternative way to clean the training data to further improve the data quality. The proposed method has been applied to seven public datasets. The experimental results indicate that the proposed method achieves at least comparable, if not better, performance than case deletion, mean imputation, and k-nearest neighbor imputation methods, followed by the standard LS-SVM and support vector machine classifiers. Moreover, a case study on a community healthcare dataset using the proposed method is presented in detail, which particularly highlights the contributions and benefits of the proposed method to this real-world application.
Wang, G, Wang, D, Du, C, Li, K, Zhang, J, Liu, Z, Tao, Y, Wang, M, Cao, Z & Yan, X 2020, 'Seizure Prediction Using Directed Transfer Function and Convolution Neural Network on Intracranial EEG', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 12, pp. 2711-2720.
View/Download from: Publisher's site
View description>>
Automatic seizure prediction promotes the development of closed-loop treatment systems for intractable epilepsy. In this study, by considering the specific information exchange between EEG channels from the perspective of whole-brain activity, the convolutional neural network (CNN) and the directed transfer function (DTF) were merged into a novel method for patient-specific seizure prediction. First, the intracranial electroencephalogram (iEEG) signals were segmented and their information flow features were calculated using the DTF algorithm. Then, these features were reconstructed as channel-frequency maps according to channel pairs and the frequency of information flow. Finally, these maps were fed into the CNN model and the outputs were post-processed by a moving-average approach to predict epileptic seizures. Evaluated by cross-validation, the proposed algorithm achieved an average sensitivity of 90.8% and an average false prediction rate of 0.08 per hour. Compared to a random predictor and other existing algorithms tested on the Freiburg EEG dataset, our proposed method achieved better seizure prediction performance in all patients. These results demonstrate that the proposed algorithm can provide a robust seizure prediction solution by using deep learning to capture the brain network changes in iEEG signals from epileptic patients.
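The moving-average post-processing step, which smooths the CNN's segment-wise outputs before raising an alarm, is simple to sketch. The probabilities, window length, and threshold below are illustrative values, not taken from the paper.

```python
import numpy as np

# Hypothetical per-segment seizure probabilities from the CNN.
p = np.array([0.1, 0.2, 0.9, 0.85, 0.95, 0.3, 0.2])

def moving_average(x, k=3):
    """Smooth segment-wise outputs before thresholding; suppresses
    isolated spikes so alarms fire only on sustained high output."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="valid")

smoothed = moving_average(p)
alarm = smoothed > 0.6   # raise a prediction alarm on sustained high output
print(smoothed.round(3))
```

The single spike at index 2 alone would not trigger an alarm here; only the run of consecutive high probabilities does, which is the point of this post-processing step.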
Wang, G, Zhang, G, Choi, K-S, Lam, K-M & Lu, J 2020, 'Output based transfer learning with least squares support vector machine and its application in bladder cancer prognosis', Neurocomputing, vol. 387, pp. 279-292.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Two dilemmas frequently occur in many real-world clinical prognoses. First, the on-hand data cannot be put entirely into the existing prediction model, since the features from new data do not perfectly match those of the model. As a result, some unique features collected from the patients in the current domain of interest might be wasted. Second, the on-hand data is not sufficient enough to learn a new prediction model. To overcome these challenges, we propose an output-based transfer learning approach with least squares support vector machine (LS-SVM) to make the maximum use of the small dataset and guarantee an enhanced generalization capability. The proposed approach can learn a current domain of interest with limited samples effectively by leveraging the knowledge from the predicted outputs of the existing model in the source domain. Also, the extent of output knowledge transfer from the source domain to the current one can be automatically and rapidly determined using a proposed fast leave-one-out cross validation strategy. The proposed approach is applied to a real-world clinical dataset to predict 5-year overall and cancer-specific mortality of bladder cancer patients after radical cystectomy. The experimental results indicate that the proposed approach achieves better classification performances than the other comparative methods and has the potential to be implemented into the real-world context to deal with small data problems in cancer prediction and prognosis.
Wang, H, Yu, S, Zeadally, S, Rawat, DB & Gao, Y 2020, 'Introduction to the Special Section on Network Science for Internet of Things (IoT)', IEEE Transactions on Network Science and Engineering, vol. 7, no. 1, pp. 237-238.
View/Download from: Publisher's site
Wang, J, Li, H, Lu, H, Yang, H & Wang, C 2020, 'Integrating offline logistics and online system to recycle e-bicycle battery in China', Journal of Cleaner Production, vol. 247, pp. 119095-119095.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd E-bicycles are powered by batteries, including lithium-ion, lead–acid, and others. The reuse of waste batteries shows promise for grid-scale storage. The New National Standard for e-bicycles is to be introduced in China, which might result in the country becoming the largest source of battery waste in the world. If waste batteries are not recycled appropriately, they will cause significant heavy metal pollution, which will, in turn, pose a serious threat to the ecological environment and human health. This paper discusses the current status of e-bicycle battery recycling in China and reviews current recycling approaches. We developed a waste e-bicycle battery recycling system based on “Internet+” to solve the dilemma of recycling end-of-life batteries; this system has three subsystems: an offline reverse logistics recovery system, an online network recycling system, and a traceability management system. In particular, the participation of consumers and government, a reward–penalty mechanism, “Internet+” development, and other strategies are considered to improve recycling throughout the product life cycle. The proposed recycling system can increase the waste battery recycling rate by 2.59% under the reward–penalty mechanism and reduce carbon dioxide emissions by 58%, which is conducive to promoting sustainable development.
Wang, J, Niu, T, Lu, H, Yang, W & Du, P 2020, 'A Novel Framework of Reservoir Computing for Deterministic and Probabilistic Wind Power Forecasting', IEEE Transactions on Sustainable Energy, vol. 11, no. 1, pp. 337-349.
View/Download from: Publisher's site
Wang, L, Niu, J & Yu, S 2020, 'SentiDiff: Combining Textual Information and Sentiment Diffusion Patterns for Twitter Sentiment Analysis', IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 10, pp. 2026-2039.
View/Download from: Publisher's site
View description>>
Twitter sentiment analysis has become a hot research topic in recent years. Most existing solutions to Twitter sentiment analysis consider only the textual information of Twitter messages, and struggle to perform well when facing short and ambiguous Twitter messages. Recent studies show that sentiment diffusion patterns on Twitter have close relationships with the sentiment polarities of Twitter messages. Therefore, in this paper, we focus on how to fuse the textual information of Twitter messages and sentiment diffusion patterns to obtain better performance of sentiment analysis on Twitter data. To this end, we first analyze sentiment diffusion by investigating a phenomenon called sentiment reversal, and find some interesting properties of sentiment reversals. Then, we consider the inter-relationships between the textual information of Twitter messages and sentiment diffusion patterns, and propose an iterative algorithm called SentiDiff to predict sentiment polarities expressed in Twitter messages. To the best of our knowledge, this work is the first to utilize sentiment diffusion patterns to help improve Twitter sentiment analysis. Extensive experiments on a real-world dataset demonstrate that compared with state-of-the-art textual-information-based sentiment analysis algorithms, our proposed algorithm yields PR-AUC improvements between 5.09 and 8.38 percent on Twitter sentiment classification tasks.
Wang, M, Zhu, T, Zhang, T, Zhang, J, Yu, S & Zhou, W 2020, 'Security and privacy in 6G networks: New areas and new challenges', Digital Communications and Networks, vol. 6, no. 3, pp. 281-291.
View/Download from: Publisher's site
View description>>
© 2020 Chongqing University of Posts and Telecommunications With the deployment of more and more 5G networks, their limitations have become apparent, which undoubtedly promotes exploratory research on 6G networks as the next-generation solution. These investigations include the fundamental security and privacy problems associated with 6G technologies. Therefore, in order to consolidate this foundational research as a basis for future investigations, we have prepared a survey on the status quo of 6G security and privacy. The survey begins with a historical review of previous networking technologies and how they have informed the current trends in 6G networking. We then discuss four key aspects of 6G networks – real-time intelligent edge computing, distributed artificial intelligence, intelligent radio, and 3D intercoms – and some promising emerging technologies in each area, along with the relevant security and privacy issues. The survey concludes with a report on the potential uses of 6G. Some of the references used in this paper, along with further details of several points raised, can be found at: security-privacyin5g-6g.github.io.
Wang, Q, Zhou, Y, Ding, W, Zhang, Z, Muhammad, K & Cao, Z 2020, 'Random Forest with Self-Paced Bootstrap Learning in Lung Cancer Prognosis', ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 1s, pp. 1-12.
View/Download from: Publisher's site
View description>>
Training gene expression data with supervised learning approaches can provide an early alarm for the treatment of lung cancer and thereby decrease death rates. However, samples of gene features involve considerable noise in realistic environments. In this study, we present a random forest with self-paced learning bootstrap for improving lung cancer classification and prognosis based on gene expression data. Specifically, we propose an ensemble learning approach with random forests to improve model classification performance by selecting multiple classifiers. Then, we investigate a sampling strategy that gradually embeds samples from high to low quality via self-paced learning. Experimental results on five public lung cancer datasets show that our proposed method selects significant genes accurately, improving classification performance compared with existing approaches. We believe that our proposed method has the potential to assist doctors in gene selection and lung cancer prognosis.
Wang, X, Gu, B, Ren, Y, Ye, W, Yu, S, Xiang, Y & Gao, L 2020, 'A Fog-Based Recommender System', IEEE Internet of Things Journal, vol. 7, no. 2, pp. 1048-1060.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. Fog computing is an emerging computing paradigm that extends the cloud paradigm. With the explosive growth of smart devices and mobile users, cloud computing no longer matches the requirements of the Internet of Things (IoT) era. Fog computing is a promising solution for satisfying these new requirements, such as low latency, uninterrupted service, and location awareness. As a typical new computing paradigm and network architecture, fog computing raises new challenges, such as privacy, data management, data analytics, information overload, and participatory sensing. In this article, we present a fog-based hybrid recommender system to address the issue of information overload in fog computing. Our proposed system not only abstracts useful information from the fog environment but can also be considered an optimization tool, owing to its ability to provide suggestions that improve system performance. In particular, we demonstrate that the proposed system provides personalized and localized recommendations to users, and also advises the system itself to precache content to optimize the storage capacity of the fog server.
Wang, X, Yang, Y, Liu, H, Ren, J, Xu, S, Wang, S & Yu, S 2020, 'Efficient measurement of round-trip link delays in software-defined networks', Journal of Network and Computer Applications, vol. 150, pp. 102468-102468.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier Ltd Round-trip link delay is an important indicator for network performance optimization and troubleshooting. The Software-Defined Networking (SDN) paradigm, which provides flexible and centralized control capability, paves the way for efficient round-trip link delay measurement. In this paper, we study the round-trip link delay measurement problem in SDN networks. We propose an efficient measurement scheme that infers round-trip link delays from the end-to-end delays of measurement paths implemented with only a few flow rules in each SDN switch. Furthermore, to reduce measurement cost and meet measurement constraints, we address the Monitor Placement and Link Assignment (MPLA) problem involved in the measurement scheme. Specifically, we formulate the MPLA problem as a Mixed Integer Linear Programming (MILP) problem, prove that it is NP-hard, and propose an efficient algorithm, the MPLA Algorithm based on Bidding Strategy (MPLAA-BS), to solve it. Extensive simulation results on real network topologies reveal that the proposed scheme can efficiently and accurately measure round-trip link delays in SDN networks, and that MPLAA-BS finds feasible and resource-efficient solutions to the MPLA problem.
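The measurement scheme's key step, inferring per-link delays from end-to-end path delays, can be sketched on a toy "triangular" path set where each measurement path introduces exactly one new link; this illustrates only the inference idea, not the MPLA formulation or the MPLAA-BS algorithm (topology and delay values are hypothetical):

```python
# Toy illustration: round-trip link delays are recovered from the
# end-to-end delays of a few measurement paths. Each path is a list of
# link ids, and a path's delay is the sum of its link delays.

def infer_link_delays(paths, path_delays):
    """Solve for per-link delays when the path set is 'triangular',
    i.e. each new path adds exactly one previously unseen link."""
    link_delay = {}
    for links, total in zip(paths, path_delays):
        known = sum(link_delay.get(l, 0.0) for l in links)
        unknown = [l for l in links if l not in link_delay]
        assert len(unknown) == 1, "each path must add exactly one new link"
        link_delay[unknown[0]] = total - known
    return link_delay

# Measurement paths over links A, B, C and their measured RTTs (ms).
paths = [["A"], ["A", "B"], ["B", "C"]]
delays = [2.0, 5.0, 7.0]
print(infer_link_delays(paths, delays))  # {'A': 2.0, 'B': 3.0, 'C': 4.0}
```

In the general case the path/link relationship forms a linear system, which is exactly what makes monitor placement and link assignment an optimization problem.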
Wang, Y, Zhang, C, Wang, S, Yu, PS, Bai, L, Cui, L & Xu, G 2020, 'Generative temporal link prediction via self-tokenized sequence modeling', World Wide Web, vol. 23, no. 4, pp. 2471-2488.
View/Download from: Publisher's site
Wu, L, Falque, R, Perez-Puchalt, V, Liu, L, Pietroni, N & Vidal-Calleja, TA 2020, 'Skeleton-Based Conditionally Independent Gaussian Process Implicit Surfaces for Fusion in Sparse to Dense 3D Reconstruction', IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1532-1539.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. 3D object reconstructions obtained from 2D or 3D cameras are typically noisy. Probabilistic algorithms are suitable for information fusion and can deal with noise robustly; consequently, they are useful for accurate surface reconstruction. This paper presents an approach to estimate a probabilistic representation of the implicit surface of 3D objects. One contribution of the paper is a pipeline for generating an accurate reconstruction, given a set of sparse points close to the surface and a dense noisy point cloud. A novel submapping method following the topology of the object is proposed to generate conditionally independent Gaussian Process Implicit Surfaces. This allows inference and fusion mechanisms to be performed in parallel, followed by information propagation through the submaps. Large datasets can be processed efficiently by the proposed pipeline, producing not only a surface but also the uncertainty information of the reconstruction. We evaluate the performance of our algorithm using simulated and real datasets.
Wu, P, Xiao, F, Huang, H, Sha, C & Yu, S 2020, 'Adaptive and Extensible Energy Supply Mechanism for UAVs-Aided Wireless-Powered Internet of Things', IEEE Internet of Things Journal, vol. 7, no. 9, pp. 9201-9213.
View/Download from: Publisher's site
View description>>
This article studies multiple unmanned aerial vehicle (multi-UAV)-enabled wireless-powered Internet of Things (IoT), where a group of UAVs is dispatched as mobile power sources to charge a set of ground IoT devices. Unlike conventional radio-frequency (RF) wireless power transfer (WPT) systems, magnetic resonance-coupled (MRC) WPT systems can guarantee high power transfer efficiency without requiring complete alignment, which is remarkable. In this article, we extend the charging range via a wired connection between the energy-receiving systems and the IoT devices. Because of the limited energy the UAVs can carry, designing the shortest possible trajectory for each UAV is necessary. We formulate this as a multi-depot multi-UAV trajectory optimization problem, jointly constrained by the UAVs' energy capacity and the area of the target region, to maximize the resource utilization of the UAVs. To tackle this nonconvex problem, we decompose it into two subproblems: hovering location selection and multi-UAV trajectory optimization. For the first subproblem, we propose two approximation algorithms to obtain near-optimal solutions in sparse networks. Then, we adopt a heuristic algorithm, a memetic algorithm-based variable neighborhood search (MAVNS), to achieve a quasi-optimal trajectory rapidly. Finally, extensive numerical results are provided to evaluate the performance of the proposed algorithms. New insights are provided on estimating whether given UAVs, under the energy capacity constraint, can fully charge the ground IoT devices within open areas.
Wu, W, Li, B, Chen, L, Gao, J & Zhang, C 2020, 'A Review for Weighted MinHash Algorithms', IEEE Transactions on Knowledge and Data Engineering, pp. 1-1.
View/Download from: Publisher's site
View description>>
Data similarity (or distance) computation is a fundamental research topic that underpins many high-level applications based on similarity measures in machine learning and data mining. However, in large-scale real-world scenarios, exact similarity computation has become daunting due to the "3V" nature (volume, velocity and variety) of big data. In this case, hashing techniques have been verified, in both theory and practice, to conduct similarity estimation efficiently. Currently, MinHash is a popular technique for efficiently estimating the Jaccard similarity of binary sets, and weighted MinHash generalizes it to estimate the generalized Jaccard similarity of weighted sets. This review focuses on categorizing and discussing the existing work on weighted MinHash algorithms. We mainly categorize the weighted MinHash algorithms into quantization-based approaches, "active index"-based ones and others, and show the evolution and inherent connections of the weighted MinHash algorithms, from the integer weighted MinHash ones to the real-valued weighted MinHash ones. We have also developed a Python toolbox for the algorithms and released it on our GitHub. We experimentally conduct a comprehensive study of the standard MinHash algorithm and the weighted MinHash ones in terms of similarity estimation error and an information retrieval task.
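The Jaccard estimation that MinHash performs can be sketched in a few lines: each salted hash simulates a random permutation of the element universe, and the fraction of signature slots where two sets' minimum hashes collide estimates their Jaccard similarity. A minimal illustration of the standard (unweighted) MinHash only, not the weighted variants the review surveys:

```python
import random

def minhash_signature(items, num_hashes=128, seed=7):
    """Simulate num_hashes random permutations with salted hashes and
    keep the minimum hash value of the set under each one."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimate_jaccard(sig_a, sig_b):
    # P[the two min-hashes collide] equals the Jaccard similarity,
    # so the collision fraction across slots is an unbiased estimate.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A, B = set(range(0, 60)), set(range(30, 90))   # true Jaccard = 30/90 = 1/3
sa, sb = minhash_signature(A), minhash_signature(B)
print(round(estimate_jaccard(sa, sb), 2))      # close to 0.33
```

The estimation error shrinks as the number of hash functions grows, which is the accuracy/cost trade-off the review's experiments measure.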
Wu, Z, Wang, R, Li, Q, Lian, X, Xu, G, Chen, E & Liu, X 2020, 'A Location Privacy-Preserving System Based on Query Range Cover-Up for Location-Based Services', IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5244-5254.
View/Download from: Publisher's site
View description>>
© 1967-2012 IEEE. Location-based services (LBSs) have been widely used in various fields of industry and have become a vital part of people's daily life. However, while providing great convenience for users, LBS poses a serious threat to users' location privacy, due to the increasingly untrusted server side. In this article, we propose a location privacy-preserving system for LBS that constructs 'cover-up ranges' to protect the query ranges associated with a location query sequence. Firstly, we present a client-based system framework for location privacy protection in LBS, which requires no compromise to the accuracy and usability of LBS. Secondly, based on the framework, we introduce a location privacy model to formulate the constraints that ideal cover-up ranges should satisfy, so as to improve the efficiency of location services and the security of location privacy. Finally, we describe an implementation algorithm that meets the location privacy model well. Both theoretical analysis and experimental evaluation demonstrate the effectiveness of our system, which improves the security of users' location privacy against the untrusted server side without compromising the accuracy and usability of LBS.
Xiao, Y, Pei, Q, Yao, L & Wang, X 2020, 'RecRisk: An enhanced recommendation model with multi-facet risk control', Expert Systems with Applications, vol. 158, pp. 113561-113561.
View/Download from: Publisher's site
Xiao, Y, Pei, Q, Yao, L, Yu, S, Bai, L & Wang, X 2020, 'An enhanced probabilistic fairness-aware group recommendation by incorporating social activeness', Journal of Network and Computer Applications, vol. 156, pp. 102579-102579.
View/Download from: Publisher's site
Xiao, Y, Yao, L, Pei, Q, Wang, X, Yang, J & Sheng, QZ 2020, 'MGNN: Mutualistic Graph Neural Network for Joint Friend and Item Recommendation', IEEE Intelligent Systems, vol. 35, no. 5, pp. 7-17.
View/Download from: Publisher's site
View description>>
IEEE Many social studies and practical cases suggest that people's consumption behaviors and social behaviors are not isolated but interrelated in social network services. However, most existing research either predicts users' consumption preferences or recommends friends to users, without handling the two simultaneously. We propose a holistic approach that jointly predicts users' preferences for friends and items and thereby makes better recommendations. To this end, we design a graph neural network that incorporates a mutualistic mechanism to model the mutually reinforcing relationship between users' consumption behaviors and social behaviors. Our experiments on two real-world datasets demonstrate the effectiveness of our approach in both social recommendation and link prediction.
Xiong, P, Zhang, L, Zhu, T, Li, G & Zhou, W 2020, 'Private collaborative filtering under untrusted recommender server', Future Generation Computer Systems, vol. 109, pp. 511-520.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Recommender systems play an increasingly vital role in modern E-commerce. However, exploiting users' preferences with recommender algorithms leads to serious privacy risks, especially when recommender service providers are unreliable. To deal with this problem, this paper proposes a Client/Server framework to create a private recommender system (PrivateRS). The system assumes that the Server side is untrustworthy. On the Client side, each user first rates the items and randomizes the ratings with a differential privacy mechanism. The ratings are further substituted by private symbols, autonomously defined by each user, to hide the ordinal meaning of the ratings. Using those symbols, the Server applies a private collaborative filtering algorithm to predict the ratings of items for the user. During this process, new similarity metrics are provided to search for the nearest neighbours of users or items without knowing the real meanings of those symbols. Experimental results demonstrate that even though the ordinal meaning of the ratings is significantly obfuscated, the proposed algorithms can still generate accurate recommendations with acceptable loss.
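The client-side rating randomization step can be illustrated with a plain Laplace mechanism; this is a generic differential-privacy sketch, not PrivateRS's exact randomizer or its symbol-substitution step, and the rating range, epsilon value and clamping rule are assumptions:

```python
import math
import random

def randomize_rating(rating, epsilon, lo=1.0, hi=5.0, rng=random):
    """Client-side Laplace mechanism: noise scaled to the rating range
    (sensitivity = hi - lo) gives epsilon-differential privacy for a
    single rating; clamping back into range is harmless post-processing.
    A generic sketch, not PrivateRS's exact randomizer."""
    scale = (hi - lo) / epsilon
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    # Inverse-CDF sample of a Laplace(0, scale) variate.
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return min(hi, max(lo, rating + noise))

rng = random.Random(0)
true_ratings = [4, 2, 5, 3, 1]
private = [randomize_rating(v, epsilon=1.0, rng=rng) for v in true_ratings]
print([round(p, 2) for p in private])  # noisy but range-clamped ratings
```

With smaller epsilon the noise grows, trading recommendation accuracy for stronger privacy, which is the loss/accuracy balance the experiments report.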
Xu, C, Luo, L, Ding, Y, Zhao, G & Yu, S 2020, 'Personalized Location Privacy Protection for Location-Based Services in Vehicular Networks', IEEE Wireless Communications Letters, vol. 9, no. 10, pp. 1633-1637.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. With the development of vehicular networks, location-based services (LBSs) provide increasingly diversified services for drivers and passengers. While users enjoy these services, their locations need to be constantly updated to service providers, which allows the location information to be inferred and attacked by adversaries. However, existing schemes do not provide differentiated protection for users' different locations, which may lead to the leakage of location information. Therefore, we propose a location privacy protection method to satisfy users' personalized privacy needs while reasonably protecting their privacy. Firstly, we define a normalized decision matrix to describe the efficiency and privacy effects of a route, and establish a multi-attribute utility function to quantify the utility of different routes for route selection. Then, according to users' personalized privacy protection needs, we allocate a privacy budget to each query location on the selected route based on its distance to the user's nearest sensitive location. Experimental results demonstrate that, compared with existing methods, our scheme can meet users' service requirements and achieve better service quality while reasonably protecting their privacy.
Xu, C, Ren, W, Yu, L, Zhu, T & Choo, K-KR 2020, 'A Hierarchical Encryption and Key Management Scheme for Layered Access Control on H.264/SVC Bitstream in the Internet of Things', IEEE Internet of Things Journal, vol. 7, no. 9, pp. 8932-8942.
View/Download from: Publisher's site
View description>>
Terminals with diverse technological specifications, heterogeneous network environments, and personalized user requirements raise new challenges for streaming media services. Solutions such as the newly standardized H.264/SVC (scalable video coding, designed to compress an original video bitstream into a multilayer video stream according to requirements) have been proposed. With the pervasive application of SVC in applications such as video on demand, video conferencing, and video surveillance in the Internet of Things (IoT), there has been increased scrutiny of the security of H.264/SVC. In this article, we propose a bitstream-oriented layered encryption scheme for the SVC bitstream. Following the multilayer bit-code structure of SVC, the bitstream is separated and each part encrypted by rearranging the network abstraction layer (NAL) units of the SVC bitstream. This provides hierarchical protection for the multilayer characteristic of SVC. To provide sufficient security while achieving improved computational efficiency, we use different cryptographic algorithms for the base layer and the enhancement layers according to their requirements. The base layer adopts an off-the-shelf high-security encryption algorithm, such as a block cipher, to ensure security. Each enhancement layer is encrypted with a different key through a stream cipher with low computational complexity, providing layered control of the video. Furthermore, we propose a hierarchical key management scheme to implement layered access control according to the principle of the hierarchical deterministic wallet (HD wallet). Our scheme can be applied to user-level distinction in video-on-demand and video surveillance systems in IoT. The analysis and experiments indicate that the proposed scheme achieves a high security level, yet incurs reasonably low compression cost and computational complexity.
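The layered access control idea behind HD-wallet-style key management can be sketched with chained HMAC derivation; the downward derivation direction, labels and key sizes here are my assumptions for illustration, not the paper's exact construction:

```python
import hashlib
import hmac

def layer_keys(master: bytes, num_layers: int):
    """Chained key derivation for layered access control (HD-wallet
    style, simplified). Keys are derived downward: the top-quality
    layer's key comes straight from the master secret, and each lower
    layer's key is an HMAC of the key one layer above. A user holding
    key[i] can thus derive keys for layers i, i-1, ..., 0 (the base
    layer), but cannot reach any higher enhancement layer."""
    keys = [b""] * num_layers
    keys[-1] = hmac.new(master, b"top-layer", hashlib.sha256).digest()
    for i in range(num_layers - 2, -1, -1):
        keys[i] = hmac.new(keys[i + 1], i.to_bytes(4, "big"), hashlib.sha256).digest()
    return keys

ks = layer_keys(b"master-secret", 4)
print([k.hex()[:8] for k in ks])  # one key per SVC layer
```

Distributing a single layer key therefore grants exactly one quality level and everything below it, which is the user-level distinction the scheme targets.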
Xu, C, Xiong, Z, Han, Z, Zhao, G & Yu, S 2020, 'Link Reliability-Based Adaptive Routing for Multilevel Vehicular Networks', IEEE Transactions on Vehicular Technology, vol. 69, no. 10, pp. 11771-11785.
View/Download from: Publisher's site
View description>>
© 1967-2012 IEEE. In multilevel vehicular ad-hoc network (VANET) scenarios, dynamic vehicles, complex node distribution and poor wireless channel environments deteriorate the reliability of routing protocols. However, for the key issue of relay selection, existing algorithms analyze wireless link performance without considering the influence of dynamics and shadow fading on GPS-derived locations, or the channel condition and buffer queue, which leads to inaccurate link characterization and poor adaptation to network variation. In this paper, we establish a dynamic link reliability model to portray the link complexity of multilevel VANET scenarios, and propose a link reliability-based adaptive routing algorithm (LRAR) to improve transmission efficiency. Firstly, we propose a Kalman filter-based estimation approach to amend raw GPS data for precise vehicle localization. Then, we define link reliability to quantify wireless link performance, and establish a multilevel dynamic link model (MDLM) to evaluate it. Moreover, to accurately describe the complexity of wireless links, we integrate the corrected GPS data and the characteristics of multilevel VANETs, including vehicle dynamics, distribution hierarchy and shadow fading, into the modeling of link reliability. Considering the difference in link state among diverse vehicles, a maximum deviation algorithm is introduced to adaptively calculate the weight of each parameter in the model. Finally, we formulate the routing decision as a multi-attribute decision problem, and select the link with the highest reliability as the transmission path. Simulation results demonstrate that LRAR outperforms existing routing algorithms in terms of average end-to-end delay and packet delivery ratio.
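The GPS-amendment step can be illustrated with a minimal one-dimensional Kalman filter; LRAR's actual estimator also models vehicle dynamics and shadow fading, and the noise variances below are assumed values:

```python
def kalman_1d(measurements, q=0.01, r=4.0):
    """Minimal 1-D Kalman filter smoothing noisy position readings.
    q is the assumed process-noise variance, r the assumed
    measurement-noise variance; both are illustrative, not LRAR's."""
    x, p = measurements[0], 1.0       # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                        # predict: covariance grows by process noise
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with the measurement residual
        p *= (1 - k)
        out.append(x)
    return out

noisy = [0.0, 1.3, 1.8, 3.4, 3.9, 5.2]   # jittery GPS positions along a road
print([round(v, 2) for v in kalman_1d(noisy)])
```

The gain k automatically balances trust between the motion prediction and each new (possibly shadow-fading-corrupted) GPS reading.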
Xu, C, Xiong, Z, Kong, X, Zhao, G & Yu, S 2020, 'A Packet Reception Probability-Based Reliable Routing Protocol for 3D VANET', IEEE Wireless Communications Letters, vol. 9, no. 4, pp. 495-498.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. In three-dimensional (3D) vehicular ad-hoc network (VANET) scenarios, dynamic vehicles, complex node distribution and severe path loss significantly increase the probability of link interruption, which sharply deteriorates the packet reception probability. Reliability is thus as crucial an issue as efficiency in improving the performance of routing protocols in 3D VANETs. In this letter, for the dynamic and multi-level shadowing scenario, we propose a packet reception probability-based reliable routing (PPR) protocol to improve transmission link reliability. In particular, we introduce a packet reception probability model to characterize the link reliability of the 3D network, integrating the unique characteristics of 3D VANETs into the model. Then, we formulate the routing decision issue as a constrained multi-objective optimization problem, which attempts to find the link with the highest packet reception probability as the relay link. Simulation results demonstrate that PPR outperforms existing protocols in terms of packet delivery ratio, end-to-end delay and throughput.
Xu, G, Duong, TD, Li, Q, Liu, S & Wang, X 2020, 'Causality Learning: A New Perspective for Interpretable Machine Learning', IEEE Intelligent Informatics Bulletin, vol. 20, no. 1, pp. 27-33.
View description>>
Recent years have witnessed the rapid growth of machine learning in a wide range of fields such as image recognition, text classification, credit scoring prediction and recommendation systems. In spite of their great performance in different sectors, researchers remain concerned about the mechanisms underlying machine learning (ML) techniques, which are inherently black-box and are becoming more complex in the pursuit of higher accuracy. Therefore, interpreting machine learning models is currently a mainstream topic in the research community. However, traditional interpretable machine learning focuses on association rather than causality. This paper provides an overview of causal analysis with the fundamental background and key concepts, and then summarizes the most recent causal approaches for interpretable machine learning. The evaluation techniques for assessing method quality, and open problems in causal interpretability, are also discussed in this paper.
Xu, Q, Su, Z, Dai, M & Yu, S 2020, 'APIS: Privacy-Preserving Incentive for Sensing Task Allocation in Cloud and Edge-Cooperation Mobile Internet of Things With SDN', IEEE Internet of Things Journal, vol. 7, no. 7, pp. 5892-5905.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The popularization of mobile devices connected to the network promotes the rise and development of the emerging mobile Internet of Things (MIoT). Crowdsensing is a promising mode for perceiving data in MIoT, where the collection of sensing data is outsourced to the public crowd carrying mobile devices. However, this crowdsensing mode inevitably compromises privacy, owing to the workers' sensitive information in the sensing data. As such, how to incentivize workers' participation while preserving privacy becomes a challenge. To tackle this problem, in this article we propose an auction-based privacy-preserving incentive scheme (APIS) for sensing task allocation in MIoT. Specifically, integrating the idea of software-defined networking (SDN), we first present a cloud and edge cooperation-based crowdsensing framework, where the cloud is designed as the controller that collects sensing results from distributed edge nodes, and each edge node outsources sensing tasks to participating workers. To motivate workers' participation, we devise a differential privacy-based auction mechanism, whereby each worker can utilize her privacy budget to control how much privacy may be leaked and decide the sensing precision by the sensing time. Moreover, to maximize the utility of the sensing platform, we design a greedy algorithm to select the winning workers and determine payments to the winners. Finally, we conduct extensive simulations to verify the effectiveness of APIS and demonstrate its superiority.
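The winner-selection step can be sketched with a generic greedy rule for a reverse auction, ranking workers by price per unit of sensing value; this is an assumed stand-in for illustration, not APIS's actual utility maximizer or its differentially private payment rule:

```python
def select_winners(bids, budget):
    """Greedy winner selection sketch: sort workers by bid price per
    unit of sensing value and accept them while the budget lasts.
    A hypothetical rule, not APIS's exact mechanism."""
    ranked = sorted(bids, key=lambda w: w["price"] / w["value"])
    winners, spent = [], 0.0
    for w in ranked:
        if spent + w["price"] <= budget:
            winners.append(w["id"])
            spent += w["price"]
    return winners, spent

# Hypothetical worker bids: price asked vs. sensing value offered.
bids = [{"id": "w1", "price": 4, "value": 10},
        {"id": "w2", "price": 6, "value": 6},
        {"id": "w3", "price": 2, "value": 8}]
print(select_winners(bids, budget=8))  # (['w3', 'w1'], 6.0)
```

A real incentive mechanism would additionally have to keep payments truthful and respect each worker's declared privacy budget.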
Xu, X, Zhang, X, Khan, M, Dou, W, Xue, S & Yu, S 2020, 'A balanced virtual machine scheduling method for energy-performance trade-offs in cyber-physical cloud systems', Future Generation Computer Systems, vol. 105, pp. 789-799.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. The cloud computing scheme promises many salient features, such as on-demand resource provisioning to users, and has therefore drawn significant attention from cyber-physical systems (CPS). An increasing number of CPS have been deployed on cloud platforms, and to accommodate numerous CPS applications, cloud datacenters often consist of a huge and still growing number of physical computation and storage nodes. As a result, the electricity consumption of cloud datacenters is considerable, currently accounting for about 1.3% of worldwide electricity use. Reducing the energy consumption of datacenters is an economically beneficial but challenging problem. Optimizing virtual machine (VM) scheduling in datacenters by live VM migration is an appealing method for saving energy. However, it remains a challenge to conduct VM scheduling in an energy-efficient and performance-guaranteed manner, since VM migration can suffer from severe performance degradation while saving energy. In this paper, we propose a balanced VM scheduling method to achieve trade-offs between energy and performance in cyber-physical cloud systems. Specifically, the problem is formulated via a joint optimization model, and a balanced VM scheduling method is proposed accordingly to determine which VMs should be migrated and where, aiming both to reduce energy consumption and to mitigate performance degradation. Both analytical and simulation results demonstrate the effectiveness and efficiency of our method.
Xuan, J, Lu, J & Zhang, G 2020, 'A Survey on Bayesian Nonparametric Learning', ACM Computing Surveys, vol. 52, no. 1, pp. 1-36.
View/Download from: Publisher's site
View description>>
Bayesian (machine) learning has been playing a significant role in machine learning for a long time due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning’s great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probabilistic distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this. The books and surveys on BNL written by statisticians are overcomplicated and filled with tedious theories and proofs. Each is certainly meaningful but may scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken, yet comprehensive, theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure—from selecting the appropriate stochastic processes through manipulation to executing the model inference algorithms. At each step, past efforts have been thoroughly summarised and discussed. In addition, we have reviewed the common methods for implementing BNL in various machine-learning tasks along with its diverse applications i...
Xuan, J, Luo, X, Lu, J & Zhang, G 2020, 'Web event evolution trend prediction based on its computational social context', World Wide Web, vol. 23, no. 3, pp. 1861-1886.
View/Download from: Publisher's site
View description>>
© 2020, Springer Science+Business Media, LLC, part of Springer Nature. Predicting the future trends of Web events can help significantly improve the quality of Web services, e.g., improving the user satisfaction of news websites. Existing approaches in this regard are based mainly on temporal patterns mined under the assumption that enough temporal data is available. However, most Web events do not have a long lifecycle but rather a burst property, which drastically reduces the performance of temporal pattern mining. Furthermore, these approaches overlook the influence of the social context surrounding Web events. In this paper, we propose a novel method to predict future trends of Web events based on their social contexts rather than temporal patterns. More specifically, in the proposed method, a computational model of the social context is first built as a two-layer Association Linked Network, taking into account its properties such as the associative network property and the small-world property. Then, the interaction between a Web event and the social context is simulated based on anchoring theory. Finally, an external force is defined and evaluated to quantify the influence of the social context on the evolution of Web events, which is used to predict their future trends. Experiments show that the performance of the proposed method is better than that of traditional time-series-based approaches.
Yadav, R, Zhang, W, Kaiwartya, O, Song, H & Yu, S 2020, 'Energy-Latency Tradeoff for Dynamic Computation Offloading in Vehicular Fog Computing', IEEE Transactions on Vehicular Technology, vol. 69, no. 12, pp. 14198-14211.
View/Download from: Publisher's site
Yang, S, Bai, L, Cui, L, Ming, Z, Wu, Y, Yu, S, Shen, H & Pan, Y 2020, 'An efficient pipeline processing scheme for programming Protocol-independent Packet Processors', Journal of Network and Computer Applications, vol. 171, pp. 102806-102806.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier Ltd OpenFlow is unable to provide customized flow tables, resulting in memory explosions and high switch retirement rates; this is a bottleneck for the development of SDN. Recently, P4 (Programming Protocol-independent Packet Processors) has attracted much attention from both academia and industry. It provides customized networking services by offering flow-level control. P4 can "produce" various forwarding tables according to packets, and it increases the speed of custom ASICs. However, with the prevalence of P4, the multiple forwarding tables can explode when used in large-scale networks. This explosion problem slows down lookup speed, causing congestion and packet losses. In addition, the pipelined structure of forwarding tables brings additional processing delay. In this study, we improve lookup performance by optimizing the forwarding tables of P4. Intuitively, we install the rules according to their popularity, i.e., popular rules appear earlier than others, so packets can hit a matched rule sooner. In this paper, we formalize the optimization problem and prove that it is NP-hard. To solve it, we propose a heuristic algorithm called EPSP (Efficient Pipeline Processing Scheme for P4), which largely reduces lookup time while keeping the forwarding actions the same. Because running the optimization algorithm frequently brings additional processing burdens, we design an incremental update algorithm to alleviate this problem. To evaluate the proposed algorithms, we set up simulation environments based on ns-3. The simulation results show that the algorithm greatly reduces both the lookup time and the number of memory accesses. The incremental algorithm largely reduces processing burdens while the lookup time remains almost the same as with the non-incremental algorithm. We also implemented a prototype using Floodlight and Mininet. The results show that our algorithm b...
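The intuition that popularity-ordered rules reduce lookup cost can be checked with simple arithmetic on a linear-scan model of one table stage; the hit counts are hypothetical, and reordering by popularity assumes the rules do not overlap (EPSP itself must keep forwarding actions unchanged):

```python
def expected_lookups(rule_order, hit_counts):
    """Average number of rules inspected before a match, given each
    rule's hit frequency, under a linear-scan model of one table."""
    total = sum(hit_counts.values())
    return sum((pos + 1) * hit_counts[r]
               for pos, r in enumerate(rule_order)) / total

hits = {"r1": 5, "r2": 80, "r3": 15}          # assumed per-rule match counts
arbitrary = ["r1", "r2", "r3"]
by_popularity = sorted(hits, key=hits.get, reverse=True)  # r2, r3, r1
print(expected_lookups(arbitrary, hits))       # 2.1
print(expected_lookups(by_popularity, hits))   # 1.25
```

Minimizing this expectation over valid orderings, subject to rule dependencies, is the (NP-hard) problem EPSP approximates heuristically.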
Yang, S, Jiang, J, Pal, A, Yu, K, Chen, F & Yu, S 2020, 'Analysis and Insights for Myths Circulating on Twitter During the COVID-19 Pandemic', IEEE Open Journal of the Computer Society, vol. 1, pp. 209-219.
View/Download from: Publisher's site
View description>>
The current COVID-19 pandemic and its uncertainty have given rise to various myths and rumours. These myths spread incredibly fast through social media and have caused massive panic in society. In this paper, we comprehensively examine the prevailing myths related to COVID-19 in terms of their diffusion, people's engagement with them and people's subjective emotions toward them. First, we classified the myths into five categories: spread of infection, preventive measures, detection measures, treatment and miscellaneous. We collected tweets about each category of myths from 1 January to 7 July 2020. We found that the vast majority of the myth tweets were about the spread of the infection. Next, we fitted the spreading of myths with the SIR epidemic model and calculated the basic reproduction number R0 for each category of myths. We observed that myths about the spread of infection and preventive measures propagated faster than other categories, and that more miscellaneous myths arose and spread quickly from late June 2020. We further analyzed the emotions evoked by each category of myths and found that fear was the strongest emotion in all categories, with around 64% of the collected tweets expressing fear. This study provides insights for authorities and governments to understand the myths circulating during the eruption of the pandemic, and hence enables targeted and feasible measures to demystify the myths of greatest concern in due time.
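The SIR fit mentioned above hinges on the basic reproduction number R0 = beta/gamma. A minimal discrete-time sketch with assumed parameters (not the paper's fitted values) shows why a myth with R0 > 1 takes off while one with R0 < 1 dies out:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, steps=200):
    """Discrete-time SIR: returns the peak 'infected' (believing) fraction."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i   # new believers this step
        new_rec = gamma * i      # believers who stop spreading the myth
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Assumed illustrative rates: R0 = beta / gamma.
fast = simulate_sir(beta=0.5, gamma=0.1)   # R0 = 5: the myth takes off
slow = simulate_sir(beta=0.05, gamma=0.1)  # R0 = 0.5: the myth dies out
print(fast, slow)
```

With R0 below one, each believer "infects" fewer than one other person on average, so the believing fraction never rises above its initial value.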
Ye, D, Zhu, T, Shen, S, Zhou, W & Yu, P 2020, 'Differentially Private Multi-Agent Planning for Logistic-like Problems', IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, pp. 1-1.
View/Download from: Publisher's site
Ye, D, Zhu, T, Zhou, W & Yu, PS 2020, 'Differentially Private Malicious Agent Avoidance in Multiagent Advising Learning', IEEE Transactions on Cybernetics, vol. 50, no. 10, pp. 4214-4227.
View/Download from: Publisher's site
View description>>
Agent advising is one of the key approaches to improving agent learning performance by enabling agents to ask each other for advice. Existing agent advising approaches have two limitations. The first is that all the agents in a system are assumed to be friendly and cooperative; in the real world, however, malicious agents may exist and provide false advice to hinder the learning performance of other agents. The second is that the analysis of communication overhead in these approaches is either overlooked or simplified, yet in communication-constrained environments communication overhead has to be carefully considered. To overcome these two limitations, this paper proposes a novel differentially private agent advising approach. Our approach employs the Laplace mechanism to add noise to the rewards used by student agents to select teacher agents. By using the differential privacy technique, the proposed approach can reduce the impact of malicious agents without identifying them. Also, by adopting the privacy budget concept, the proposed approach can naturally control communication overhead. The experimental results demonstrate the effectiveness of the proposed approach.
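The Laplace mechanism the abstract describes perturbs each reward with noise of scale sensitivity/epsilon. A minimal sketch under assumed parameter values (the paper's actual reward ranges and privacy budgets are not reproduced here):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_reward(reward, sensitivity, epsilon, rng):
    # Laplace mechanism: noise scale = sensitivity / epsilon gives
    # epsilon-differential privacy for this single reward release.
    return reward + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
noisy = [private_reward(1.0, sensitivity=1.0, epsilon=0.5, rng=rng)
         for _ in range(20000)]
avg = sum(noisy) / len(noisy)
print(avg)  # noise is zero-mean, so the average stays close to 1.0
```

A smaller epsilon (tighter privacy budget) means larger noise, which is also what lets the approach trade off advice quality against communication and privacy cost.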
Yu, H, Lu, J & Zhang, G 2020, 'An Online Robust Support Vector Regression for Data Streams', IEEE Transactions on Knowledge and Data Engineering, vol. PP, no. 99, pp. 1-1.
View/Download from: Publisher's site
Yu, H, Lu, J & Zhang, G 2020, 'Online Topology Learning by a Gaussian Membership-Based Self-Organizing Incremental Neural Network', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 10, pp. 3947-3961.
View/Download from: Publisher's site
Yuan, B, Zhao, H, Lin, C, Zou, D, Yang, LT, Jin, H, He, L & Yu, S 2020, 'Minimizing Financial Cost of DDoS Attack Defense in Clouds With Fine-Grained Resource Management', IEEE Transactions on Network Science and Engineering, vol. 7, no. 4, pp. 2541-2554.
View/Download from: Publisher's site
View description>>
As cloud systems gain in popularity, they increasingly suffer from cyber attacks. One of the most notorious is the Distributed Denial of Service (DDoS) attack, which aims to drain system resources so that the system becomes unresponsive to genuine users. DDoS attack and defense essentially revolve around resource competition, and many efforts have been made from the perspective of resource investment and management. However, these defending schemes assume that the resources available to defend against attacks are unlimited, without taking financial cost into account. Such coarse-grained defense strategies can cause resource overprovisioning, which incurs unwanted extra costs for the defender. To tackle this issue, we systematically investigate the problem and propose a birth-death-based fine-grained resource management mechanism, which can both scale in/out and scale down/up. That is, the proposed mechanism adaptively selects the optimal resource leasing mode for cloud service customers so that they can defeat a DDoS attack at minimal financial cost. Extensive analyses and empirical data-based experiments are conducted. The results show both the effectiveness and efficiency of the proposed approach. Compared to existing work, our proposal saves on average 53.58% (up to 93.75%) of the cost of attack defense.
Yuan, B, Zou, D, Jin, H, Yu, S & Yang, LT 2020, 'HostWatcher: Protecting hosts in cloud data centers through software-defined networking', Future Generation Computer Systems, vol. 105, pp. 964-972.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Cloud has become a dominant computing platform, and cloud data centers have been widely deployed all over the world. Naturally, cloud data centers become targets of cyber attacks due to their public nature. In addition, the price of renting cloud resources keeps falling, so attackers can rent hosts from cloud data centers to initiate attacks at rather low cost. As a result, hosts in a cloud data center can be either victims or attackers. However, most existing research only treats hosts as the targets or the sources of attacks, either protecting hosts from being attacked or identifying malicious hosts, which is insufficient to protect cloud data centers comprehensively. In this paper, we employ SDN techniques to protect cloud data centers in both directions. Aiming to mitigate DDoS attacks, we propose HostWatcher, a system that watches and protects every host in a cloud data center. HostWatcher leverages the advantages of SDN techniques and distributed processing, and introduces a caching and round-robin resending scheme. Our goal is to protect hosts comprehensively with a QoS guarantee. Extensive experiments show that HostWatcher can effectively mitigate DDoS attacks that target hosts, and can also significantly limit the packet rate of hosts that are controlled by attackers. Comprehensive evaluations show that the overheads of our system are trivial, and that it is practical to implement and deploy in cloud data centers.
Zhang, H-W, Kok, VC, Chuang, S-C, Tseng, C-H, Lin, C-T, Li, T-C, Sung, F-C, Wen, CP, Hsiung, CA & Hsu, CY 2020, 'Long-Term Exposure to Ambient Hydrocarbons Increases Dementia Risk in People Aged 50 Years and above in Taiwan', Current Alzheimer Research, vol. 16, no. 14, pp. 1276-1289.
View/Download from: Publisher's site
View description>>
Background: Alzheimer’s disease, the most common cause of dementia among the elderly, is a progressive and irreversible neurodegenerative disease. Exposure to air pollutants is known to have adverse effects on human health; however, little is known about hydrocarbons in the air that can trigger a dementia event. Objective: We aimed to investigate whether long-term exposure to airborne hydrocarbons increases the risk of developing dementia. Method: The present cohort study included 178,085 people aged 50 years and older in Taiwan. Cox proportional hazards regression analysis was used to fit multiple-pollutant models for two targeted pollutants, total hydrocarbons and non-methane hydrocarbons, and to estimate the risk of dementia. Results: Before controlling for multiple pollutants, hazard ratios with 95% confidence intervals for the overall population were 7.63 (7.28-7.99, p < 0.001) per 0.51-ppm increase in total hydrocarbons, and 2.94 (2.82-3.05, p < 0.001) per 0.32-ppm increase in non-methane hydrocarbons. The highest adjusted hazard ratios for the different multiple-pollutant models of each targeted pollutant were statistically significant (p < 0.001) for all patients: 11.52 (10.86-12.24) for total hydrocarbons and 9.73 (9.18-10.32) for non-methane hydrocarbons. Conclusion: Our findings suggest that total hydrocarbons and non-methane hydrocarbons may be contributing to dementia development.
Zhang, T, Zhu, T, Xiong, P, Huo, H, Tari, Z & Zhou, W 2020, 'Correlated Differential Privacy: Feature Selection in Machine Learning', IEEE Transactions on Industrial Informatics, vol. 16, no. 3, pp. 2115-2124.
View/Download from: Publisher's site
View description>>
© 2005-2012 IEEE. Privacy preservation in machine learning is a crucial issue in industrial informatics, since the data used for training in industry usually contain sensitive information. Existing differentially private machine learning algorithms have not considered the impact of data correlation, which may lead to more privacy leakage than expected in industrial applications. For example, data collected for traffic monitoring may contain correlated records due to temporal correlation or user correlation. To fill this gap, in this article we propose a correlation reduction scheme with differentially private feature selection that addresses the privacy loss incurred when data are correlated in machine learning tasks. The proposed scheme involves five steps with the goals of managing the extent of data correlation, preserving privacy, and supporting accuracy in the prediction results. In this way, the impact of data correlation is relieved by the proposed feature selection scheme, and the privacy risk posed by data correlation in learning is controlled. The proposed method can be widely used in machine learning algorithms that provide services in industrial areas. Experiments show that the proposed scheme produces better prediction results in machine learning tasks and fewer mean square errors for data queries compared to existing schemes.
Zhang, W, Xiong, J, Gui, L, Liu, B, Qiu, M & Shi, Z 2020, 'Distributed Caching Mechanism for Popular Services Distribution in Converged Overlay Networks', IEEE Transactions on Broadcasting, vol. 66, no. 1, pp. 66-77.
View/Download from: Publisher's site
View description>>
With the proliferation of portable devices, the exponential growth of global mobile traffic brings great challenges to traditional communication networks and wireless communication technologies. In this context, converged networks and cache-based data offloading have drawn more and more attention based on the strong correlation of services. This paper proposes a novel popular-services pushing and caching scheme using converged overlay networks. The most popular services are pushed by terrestrial broadcasting networks and cached in n router-nodes with limited cache sizes. Each router-node interconnects only with its neighbor nodes, and users are served through the router's WiFi link. If a requested service is cached in a router, the user can be served immediately; otherwise, the request is served through the link from cellular stations to the router. In the proposed scheme, the cache size of the router, the maximum number of requests each router can serve, and the overall time delay are limited. Three node-selecting and dynamic programming algorithms are adopted to maximize the equivalent throughput. Analytical and numerical results demonstrate that the proposed scheme is very effective.
Zhang, Y, Wang, M, Saberi, M & Chang, E 2020, 'Knowledge fusion through academic articles: a survey of definitions, techniques, applications and challenges', Scientometrics, vol. 125, no. 3, pp. 2637-2666.
View/Download from: Publisher's site
View description>>
© 2020, Akadémiai Kiadó, Budapest, Hungary. The ever-growing volume of academic articles stresses the need for a new generation of knowledge management methods to intelligently reuse academic knowledge and facilitate the development of scientific research. Knowledge fusion (KF) serves as a key element of such methods, and breakthrough progress has taken place in the field. This brings a great opportunity for the academic community to expedite the literature review process and automatically retrieve required knowledge from academic publications. A survey reviewing KF studies in terms of the related technologies and applications for reusing academic knowledge is therefore needed, but missing from the state-of-the-art literature. Motivated to bridge this gap, this paper conducts a systematic survey of the existing studies on KF, while discussing the opportunities and challenges of applying KF to academic articles. To this end, we revisit the definitions of knowledge and KF in the context of academic articles, and summarise the fusion patterns and their usage in existing applications. Furthermore, we review the techniques and applications of KF, especially those with academic articles as sources of knowledge. Finally, we discuss the challenges and future directions in order to bring new insights to researchers and practitioners, deepen their understanding of knowledge fusion, and help develop versatile functions.
Zhang, Y, Wu, M, Lin, H, Tipper, S, Grosser, M, Zhang, G & Lu, J 2020, 'Framework of Computational Intelligence-Enhanced Knowledge Base Construction: Methodology and A Case of Gene-Related Cardiovascular Disease', International Journal of Computational Intelligence Systems, vol. 13, no. 1, pp. 1109-1109.
View/Download from: Publisher's site
View description>>
Knowledge base construction (KBC) aims to populate knowledge bases with high-quality information from unstructured data, but how to effectively conduct KBC from scientific documents with limited pre-knowledge is still elusive. This paper proposes a KBC framework that applies computational intelligence techniques through the integration of intelligent bibliometrics: co-occurrence analysis is used for profiling research topics/domains, identifying key players, and recommending potential collaborators by incorporating a link prediction approach; an approach of scientific evolutionary pathways is exploited to trace the evolution of research topics; and a search engine incorporating fuzzy logic, word embedding, and a genetic algorithm is developed for knowledge searching and ranking. To examine and demonstrate the reliability of the proposed framework, a case of gene-related cardiovascular diseases is selected and a knowledge base is constructed, with validation by domain experts.
Zhao, J, Wang, W, Sun, Q, Huo, H, Sun, G, Gao, X & Zhu, C 2020, 'CSELM‐QE: A Composite Semi‐supervised Extreme Learning Machine with Unlabeled RSS Quality Estimation for Radio Map Construction', Chinese Journal of Electronics, vol. 29, no. 6, pp. 1016-1024.
View/Download from: Publisher's site
View description>>
Wireless local area network (WLAN) fingerprint-based localization has become the most attractive and popular approach for indoor localization. However, the primary concern for its practical implementation is the laborious manual effort of calibrating sufficient location-labeled fingerprints. The semi-supervised extreme learning machine (SELM) performs well in reducing calibration effort, but traditional SELM methods only use received signal strength (RSS) information to construct the neighbor graph and ignore location information, which helps provide prior information for manifold alignment. We propose the Composite SELM (CSELM) method, which uses both RSS signals and location information to construct a composite graph. Moreover, the issue of unlabeled RSS data quality has not previously been addressed. We propose a novel approach called Composite semi-supervised extreme learning machine with unlabeled RSS Quality estimation (CSELM-QE) that takes into account the quality of unlabeled RSS data and combines it with the composite neighbor graph, which incorporates location information, in the semi-supervised extreme learning machine. Experimental results show that CSELM-QE constructs a precise localization model, reduces the calibration effort for radio map construction and improves localization accuracy. Our quality estimation method can be applied to other methods that need to retain high-quality unlabeled received signal strength data to improve model accuracy.
Zhao, J, Wang, W, Zhang, Z, Sun, Q, Huo, H, Qu, L & Zheng, S 2020, 'TrustTF: A tensor factorization model using user trust and implicit feedback for context-aware recommender systems', Knowledge-Based Systems, vol. 209, pp. 106434-106434.
View/Download from: Publisher's site
View description>>
© 2020 Elsevier B.V. In recent years, context information has been widely used in recommender systems. Tensor factorization is an effective method for processing high-dimensional information. However, data sparsity is more serious in tensor factorization, and it is difficult to build an accurate recommender system based only on user–item–context interaction information. Making full use of users' social information and implicit feedback can alleviate this problem. In this paper, we propose a new tensor factorization model named TrustTF, which works as follows: (1) it uses users' social trust information and implicit feedback to extend bias tensor factorization (BiasTF), effectively alleviating the data sparsity problem and improving recommendation accuracy; (2) it divides users' trust relationships into unilateral trust and mutual trust, making better use of users' social information. To our knowledge, this is the first work to consider the effects of both user trust and implicit feedback on the basis of the BiasTF model. Experimental results on two real-world data sets demonstrate that the proposed TrustTF achieves higher accuracy than BiasTF and other social recommendation methods.
Zhao, X, Guo, J, Nie, F, Chen, L, Li, Z & Zhang, H 2020, 'Joint Principal Component and Discriminant Analysis for Dimensionality Reduction', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 433-444.
View/Download from: Publisher's site
View description>>
Linear discriminant analysis (LDA) is the most widely used supervised dimensionality reduction approach. After removing the null space of the total scatter matrix St via principal component analysis (PCA), the LDA algorithm can avoid the small sample size problem. Most existing supervised dimensionality reduction methods extract the principal components of the data first and then conduct LDA on them. However, the directions of greatest variance kept by PCA are often, but not always, the most important ones for discrimination, so this two-step strategy may fail to retain the most discriminant information for classification tasks. Different from traditional approaches that conduct PCA and LDA in sequence, we propose a novel method referred to as joint principal component and discriminant analysis (JPCDA) for dimensionality reduction. Using this method, we are able not only to avoid the small sample size problem but also to extract discriminant information for classification tasks. An iterative optimization algorithm is proposed to solve the method. To validate the efficacy of the proposed method, we perform extensive experiments on several benchmark data sets in comparison with some state-of-the-art dimensionality reduction methods. A large number of experimental results illustrate that the proposed method has quite promising classification performance.
Zhou, C, Fu, A, Yu, S, Yang, W, Wang, H & Zhang, Y 2020, 'Privacy-Preserving Federated Learning in Fog Computing', IEEE Internet of Things Journal, vol. 7, no. 11, pp. 10782-10793.
View/Download from: Publisher's site
View description>>
Federated learning can combine a large number of scattered user groups and train models collaboratively without uploading data sets, so that the server avoids collecting user-sensitive data. However, the model of federated learning can expose the training set information of users, and the uneven amount of data owned by users in multi-user scenarios leads to inefficient training. In this article, we propose a privacy-preserving federated learning scheme in fog computing. Acting as a participant, each fog node is enabled to collect Internet-of-Things (IoT) device data and complete the learning task in our scheme. Such a design effectively improves the low training efficiency and model accuracy caused by the uneven distribution of data and the large gap in computing power. We enable IoT device data to satisfy ε-differential privacy to resist data attacks and leverage the combination of blinding and Paillier homomorphic encryption against model attacks, which realizes the secure aggregation of model parameters. In addition, we formally verified that our scheme can not only guarantee both data security and model security but also completely resist collusion attacks launched by multiple malicious entities. Our experiments based on the Fashion-MNIST data set prove that our scheme is highly efficient in practice.
Zhou, J, Zogan, H, Yang, S, Jameel, S, Xu, G & Chen, F 2020, 'Detecting Community Depression Dynamics Due to COVID-19 Pandemic in Australia', IEEE Transactions on Computational Social Systems, pp. 1-10.
View/Download from: Publisher's site
View description>>
The recent COVID-19 pandemic has caused unprecedented impact across the globe. We have also witnessed millions of people with increased mental health issues, such as depression, stress, worry, fear, disgust, sadness, and anxiety, which have become one of the major public health concerns during this severe health crisis. For instance, depression is one of the most common mental health issues according to the findings made by the World Health Organisation (WHO). Depression can cause serious emotional, behavioural and physical health problems with significant consequences, including both personal and social costs. This paper studies community depression dynamics due to the COVID-19 pandemic through user-generated content on Twitter. A new approach based on multi-modal features from tweets and Term Frequency-Inverse Document Frequency (TF-IDF) is proposed to build depression classification models. Multi-modal features capture depression cues from emotion, topic and domain-specific perspectives. We study the problem using recently scraped tweets from Twitter users in the state of New South Wales in Australia. Our novel classification model is capable of extracting depression polarities which may be affected by COVID-19 and related events during the COVID-19 period. The results found that people became more depressed after the outbreak of COVID-19. The measures implemented by the government, such as the state lockdown, also increased depression levels. Further analysis at the Local Government Area (LGA) level found that community depression levels differed across LGAs. Such granular-level analysis of depression dynamics not only can help authorities such as governmental departments take corresponding actions more objectively in specific regions if necessary, but also allows users to perceive the dynamics of depression over time.
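The TF-IDF weighting mentioned above can be sketched in a few lines (a toy illustration with invented tweets, not the paper's feature pipeline):

```python
import math
from collections import Counter

# Hypothetical tweets standing in for the scraped corpus.
docs = [
    "feeling sad and alone during lockdown",
    "lockdown extended again today",
    "so anxious and sad lately",
]

def tf_idf(docs):
    """Return one {term: weight} vector per document."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

vecs = tf_idf(docs)
print(vecs[0]["sad"])           # appears in 2 of 3 docs: modest weight
print(vecs[1].get("sad", 0.0))  # absent term: weight 0
```

Terms concentrated in few documents (like "alone") get higher weights than terms spread across the corpus, which is what lets a downstream classifier pick up depression-specific vocabulary.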
Zhou, Y, Yu, S, Zhao, J, Feng, X, Zhang, M & Zhao, Z 2020, 'Effectiveness and Safety of Botulinum Toxin Type A in the Treatment of Androgenetic Alopecia', BioMed Research International, vol. 2020, pp. 1-7.
View/Download from: Publisher's site
View description>>
Background. Androgenetic alopecia (AGA) represents the most frequent clinical complaint encountered by dermatologists and is characterized by progressive miniaturization of the hair follicle. However, the efficacy and safety of current medical treatments remain limited, and more personalized therapeutic approaches for AGA are needed. Therefore, the present study aimed to investigate the efficacy and safety of botulinum toxin type A (BTA) in patients with AGA. Methods. 63 patients with AGA meeting the inclusion criteria were included in this study and treated with BTA injection or BTA injection combined with oral finasteride (FNS). In the scalp, 30 sites were injected with 100 U of BTA, and patients received BTA every 3 months for a total of 4 sessions. Hair counts, head photographs, evaluation scores, and self-assessments were obtained from patients with AGA. Results. Hair counts in both groups at all time points were significantly higher than those before treatment. After 4 sessions of treatment, hair counts in the BTA+FNS group were higher than those in the BTA group. Hair growth and density were significantly augmented, and the area of hair loss was reduced after each treatment, as revealed by head photographs. The effective rates of the BTA and BTA+FNS groups were 73.3% and 84.8%, respectively, following 4 treatment sessions. Conclusion. BTA is a safe and effective therapeutic strategy for the treatment of AGA without adverse effects, and BTA combined with FNS exhibited a superior therapeutic effect to BTA alone.
Zhu, F, Lu, J, Lin, A & Zhang, G 2020, 'A Pareto-smoothing method for causal inference using generalized Pareto distribution', Neurocomputing, vol. 378, pp. 142-152.
View/Download from: Publisher's site
View description>>
© 2019 Elsevier B.V. Causal inference aims to estimate the treatment effect of an intervention on the target outcome variable and has received great attention across fields ranging from economics and statistics to machine learning. Observational causal inference is challenging because the pre-treatment variables may influence both the treatment and the outcome, resulting in confounding bias. The classic inverse propensity weighting (IPW) estimator is theoretically able to eliminate the confounding bias. However, in observational studies, the propensity scores used in the IPW estimator must be estimated from finite observational data and may be subject to extreme values, leading to the problem of highly variable importance weights, which consequently makes the estimated causal effect unstable or even misleading. In this paper, by reframing the IPW estimator in the importance sampling framework, we propose a Pareto-smoothing method to tackle this problem. The generalized Pareto distribution (GPD) from extreme value theory is used to fit the upper tail of the estimated importance weights and to replace them using the order statistics of the fitted GPD. To validate the performance of the new method, we conducted extensive experiments on simulated and semi-simulated datasets. Compared with two existing methods for importance weight stabilization, i.e., weight truncation and self-normalization, the proposed method generally achieves better performance in settings with a small sample size and high-dimensional covariates. Its application on a real-world health dataset indicates its utility in estimating causal effects for program evaluation.
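For context, the IPW estimator with self-normalization, one of the stabilization baselines the paper compares against, can be sketched on toy data (synthetic outcomes and assumed propensity scores; this is not the paper's Pareto-smoothing method):

```python
def snipw(outcomes, treatments, propensities):
    """Self-normalized IPW estimate of the mean outcome under treatment.
    Dividing by the weight sum, rather than the sample size, tames the
    variance caused by extreme importance weights."""
    weights = [t / p for t, p in zip(treatments, propensities)]
    num = sum(w * y for w, y in zip(weights, outcomes))
    return num / sum(weights)

# Toy data: observed outcome, treatment indicator, P(treated | covariates).
outcomes     = [3.0, 2.5, 4.0, 1.0, 0.5]
treatments   = [1,   1,   1,   0,   0  ]
propensities = [0.8, 0.5, 0.9, 0.3, 0.2]

est = snipw(outcomes, treatments, propensities)
print(est)
```

The paper's contribution replaces the largest of these importance weights with order statistics of a generalized Pareto distribution fitted to the weight tail, rather than normalizing or truncating them.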
Zhu, T, Xiong, P, Li, G, Zhou, W & Yu, PS 2020, 'Differentially private model publishing in cyber physical systems', Future Generation Computer Systems, vol. 108, pp. 1297-1306.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. With the development of Cyber Physical Systems, privacy issues have become an important topic in the past few years. It is worthwhile to apply differential privacy, one of the most influential privacy definitions, in cyber physical systems. However, as the essential idea of differential privacy is to release query results rather than entire datasets, a large volume of noise has to be introduced. To provide high-quality services, we need to decrease the correlation between large sets of queries while also predicting results for newly entered queries. This paper transforms the data publishing problem in cyber physical systems into a machine learning problem, in which a prediction model is shared with clients. The prediction model is used to answer currently submitted queries and to predict results for newly entered queries from the public.
Zhu, Y, Lu, H, Qiu, P, Shi, K, Chambua, J & Niu, Z 2020, 'Heterogeneous teaching evaluation network based offline course recommendation with graph learning and tensor factorization', Neurocomputing, vol. 415, pp. 84-95.
View/Download from: Publisher's site
Zhu, Y, Zhang, S, Li, Y, Lu, H, Shi, K & Niu, Z 2020, 'Social weather: A review of crowdsourcing‐assisted meteorological knowledge services through social cyberspace', Geoscience Data Journal, vol. 7, no. 1, pp. 61-79.
View/Download from: Publisher's site
View description>>
Crowdsourcing has significantly motivated the development of meteorological services. Starting from the beginning of the 2010s and gaining strong momentum after 2014, crowdsourcing-driven meteorological services have evolved from the simple collection and observation of data to the systematic acquisition, analysis and application of these data. In this review, focusing on papers and databases that have combined crowdsourcing methods to promote or implement meteorological knowledge services, we analysed the relevant literature in three dimensions: data collection, information analysis and meteorological knowledge applications. First, we selected the potential data sources for crowdsourcing and discussed the characteristics of the collected data in four dimensions: consciousness, objectiveness, mobility and multidisciplinarity. Second, based on the purpose of these studies and the extent to which they utilize data and knowledge, we categorize crowdsourcing-based meteorological analysis into three levels: relationship discovery, knowledge generalization and systemized service. Third, according to the application scenario, we discussed the applications that have already been put into use, and we suggest current challenges and future research directions. These previous studies show that the use of crowdsourcing in social space can expand the coverage and enhance the performance of meteorological services. It was also evident that current research is contributing towards a systemic and intelligent knowledge service that establishes a better bridge among the academic, industrial and individual communities.
Zurita, G, Merigó, JM, Lobos-Ossandón, V & Mulet-Forteza, C 2020, 'Bibliometrics in computer science: An institution ranking', Journal of Intelligent & Fuzzy Systems, vol. 38, no. 5, pp. 5441-5453.
View/Download from: Publisher's site
Zurita, G, Shukla, AK, Pino, JA, Merigó, JM, Lobos-Ossandón, V & Muhuri, PK 2020, 'A bibliometric overview of the Journal of Network and Computer Applications between 1997 and 2019', Journal of Network and Computer Applications, vol. 165, pp. 102695-102695.
View/Download from: Publisher's site
Adak, C, Chaudhuri, BB, Lin, C-T & Blumenstein, M 2020, 'Why Not? Tell us the Reason for Writer Dissimilarity', 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Writer verification has drawn significant attention over the past few decades due to its extensive applications in forensics and biometrics. In traditional writer verification, handwriting similarity/dissimilarity analysis is mostly performed by extracting two feature vectors from two respective handwritten samples and then comparing them for similarity. In state-of-the-art writer verification approaches, a distance metric is usually employed to measure the similarity between two handwritten samples. If the distance between two handwritten samples is greater than a given threshold, the samples are assumed to be written by two different writers; otherwise, they are considered to be written by the same writer. In this paper, for the very first time, we propose a model that generates English sentences explaining the reasons for writer dissimilarity/similarity. First, our proposed model obtains features from handwritten images by employing a convolutional neural network, verifies the writer using a Siamese architecture, and generates English words using a recurrent neural network. Finally, these two networks are merged using an affine transformation to produce an explanatory sentence in support of writer similarity/dissimilarity. We evaluated our model on a handwritten numeral database of 100 writers and obtained promising results.
Agarwal, A, Chivukula, AS, Bhuyan, MH, Jan, T, Narayan, B & Prasad, M 2020, 'Identification and Classification of Cyberbullying Posts: A Recurrent Neural Network Approach Using Under-Sampling and Class Weighting', Communications in Computer and Information Science, International Conference on Neural Information Processing, Springer International Publishing, Online, pp. 113-120.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. With the number of users of social media and web platforms increasing day by day in recent years, cyberbullying has become a ubiquitous problem on the internet. Manually controlling and moderating these social media platforms for online abuse and cyberbullying has become a very challenging task. This paper proposes a Recurrent Neural Network (RNN) based approach for the identification and classification of cyberbullying posts. For the highly imbalanced input data, a Tomek Links approach performs under-sampling to reduce the data imbalance and remove ambiguities in class labelling. Further, the proposed classification model uses Max-Pooling in combination with a Bi-directional Long Short-Term Memory (LSTM) network and attention layers. The proposed model is evaluated on Wikipedia datasets to establish its effectiveness in identifying and classifying cyberbullying posts. The extensive experimental results show that our approach performs well in comparison to competing approaches, achieving precision, recall and F1 score of 0.89, 0.86 and 0.88, respectively.
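The Tomek Links under-sampling step described in this abstract can be illustrated with a minimal, dependency-free sketch. The function name and toy data below are hypothetical, and the paper's experiments rely on full library implementations rather than this O(n²) nearest-neighbour scan:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def tomek_links_undersample(X, y, majority_label):
    """Drop majority-class points that form Tomek links.

    A Tomek link is a pair of opposite-class points that are each
    other's nearest neighbour; such majority-class points sit on the
    class boundary or are label noise, so removing them reduces both
    imbalance and labelling ambiguity.
    """
    n = len(X)

    def nearest(i):
        # index of the point closest to X[i], excluding i itself
        return min((j for j in range(n) if j != i),
                   key=lambda j: dist(X[i], X[j]))

    to_drop = set()
    for i in range(n):
        j = nearest(i)
        # mutual nearest neighbours with different labels -> Tomek link
        if nearest(j) == i and y[i] != y[j]:
            for k in (i, j):
                if y[k] == majority_label:
                    to_drop.add(k)
    keep = [k for k in range(n) if k not in to_drop]
    return [X[k] for k in keep], [y[k] for k in keep]

# toy imbalanced data: three majority-class points, one minority point
X = [(0.0, 0.0), (10.0, 0.0), (10.0, 1.0), (0.5, 0.0)]
y = [0, 0, 0, 1]
X_res, y_res = tomek_links_undersample(X, y, majority_label=0)
print(y_res)  # [0, 0, 1]
```

Here the lone majority-class point sitting next to the minority point is dropped, yielding a cleaner class boundary before a classifier such as the paper's Bi-LSTM is trained.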
Ahmed, SB, Naz, S, Razzak, I & Prasad, M 2020, 'Unconstrained Arabic Scene Text Analysis using Concurrent Invariant Points', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Text in natural scene images portrays rich semantic information that plays an important role in content analysis. However, compared with Arabic text in documents, text in natural scene images exhibits much higher diversity and variability, especially in uncontrolled circumstances. In this paper, a hybrid feature extraction approach is presented to detect extremal regions of Arabic scene text. The binary image and an image mask are considered as variants of the input image, and concurrent extremal regions are sought in both. After determining the conjoined extremal points, a scale-invariant technique is applied to retain those invariant points that are common to both images based on their coordinate positions. To evaluate the performance, a multidimensional long short-term memory (LSTM) network is adopted, obtaining 94.21% accuracy for word recognition on an unconstrained Arabic scene text recognition (ASTR) dataset.
Alam, SL & Gill, AQ 2020, 'A social engagement framework for the government ecosystem: Insights from Australian government Facebook pages', International Conference on Information Systems, ICIS 2020 - Making Digital Inclusive: Blending the Local and the Global, International Conference on Information Systems, AISEL, India.
View description>>
Government agencies are using social media in an ad-hoc manner for bi-directional broadcast-style communication, rather than for systematic and deep engagement through open participation to co-create value. However, the capabilities and practices of participation for value creation are less well understood for an increasingly networked government ecosystem. This calls for a structured social engagement framework for government agencies. Thus, based on an empirical analysis of over 68 federal government Facebook pages, this paper presents insights on online engagement and levels of maturity among Australian federal government Facebook pages. Informed by engagement research and a social architecture lens, we propose an empirically bounded government Facebook engagement framework (GFEF) that has implications and recommendations for agency benchmarking and social engagement capability building.
Al-Hadhrami, Y & Hussain, FK 2020, 'A Machine Learning Architecture Towards Detecting Denial of Service Attack in IoT', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 417-429.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. The Internet of Things is part of our everyday life nowadays, with millions of devices connected to the internet to collect and share data. Although IoT devices are evolving quickly in the consumer market, where smart devices and sensors are becoming one of the main components of many households, IoT sensors and actuators have also been heavily used in industry, where thousands of devices are used to collect and share data for different purposes. With the rapid development of the Internet of Things in different areas, IoT faces difficulty in securing the overall availability of the network due to its heterogeneous nature. There are many types of vulnerability in IoT that can be mitigated with further research; however, in this paper, we concentrate on Distributed Denial of Service (DDoS) attacks on IoT. We propose a machine learning architecture to detect DDoS attacks in IoT networks. The architecture collects IoT network traffic and analyzes it by passing it to a machine learning model for attack detection. We propose the use of a real-time data collection tool to dynamically monitor the network.
Al-Hadhrami, Y, Al-Hadhrami, N & Hussain, FK 2020, 'Data Exportation Framework for IoT Simulation Based Devices', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 212-222.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2020. The Internet of Things (IoT) is part of everyday life nowadays. Millions of devices are connected to the internet to collect and share data. Although IoT devices are evolving quickly in the consumer market, where smart devices and sensors are becoming one of the main components of many households, IoT sensors and actuators are also heavily used in industry, where thousands of devices are used to collect and share data for different purposes. An IoT simulation tool is necessary for development and testing before deployment. One of the tools widely used among IoT researchers is the open-source Cooja simulator. Cooja has limitations; one is the lack of a way to export collected data as a data set for further processing. Therefore, this study introduces an extension tool to present and export the data in different forms.
Almansor, EH & Hussain, FK 2020, 'Modeling the Chatbot Quality of Services (CQoS) Using Word Embedding to Intelligently Detect Inappropriate Responses', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 60-70.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. The rapid growth of intelligent chatbots as conversational agents with the assistance of artificial intelligence has recently attracted much research attention. The major role of a chatbot is to generate appropriate responses to the user; however, sometimes the chatbot fails to understand the user's meaning. Therefore, detecting inappropriate responses from a chatbot is a critical issue. Several studies based on annotated datasets have investigated the problem of inappropriate responses from a chatbot's perspective without considering the user's perspective. Understanding the context of the conversation is an important point in determining whether a response is appropriate or inappropriate. Sentiment analysis is a natural language processing task that supports the mining of user behavior. Therefore, we propose an intelligent framework that combines automated sentiment scoring and a word embedding model to detect the quality of chatbot responses from the end-user's point of view. We find our model achieves higher-quality results than logistic regression.
Almansor, EH & Hussain, FK 1970, 'Survey on Intelligent Chatbots: State-of-the-Art and Future Research Directions', Complex, Intelligent, and Software Intensive Systems, International Conference on Complex, Intelligent, and Software Intensive Systems, Springer International Publishing, Sydney, pp. 534-543.
View/Download from: Publisher's site
View description>>
Human-computer interaction (HCI) is an area of interest that plays a major role in understanding the interaction between humans and machines. Dialogue systems or conversational systems, including chatbots, voice control interfaces and personal assistants, are examples of HCI applications that have been developed to interact with users using natural language. Chatbots can help customers find useful information for their needs, so numerous organizations are using chatbots to automate their customer service, and the need for artificial intelligence has been increasing with the demand for automated services. However, developing smart bots that can respond at the human level is challenging. In this paper, we survey state-of-the-art chatbot approaches from the perspective of their ability to generate appropriate responses. After summarizing the review from this perspective, we identify the research issues and challenges in chatbots. The findings of this research will highlight directions for future work.
Alnefaie, A, Gupta, D, Bhuyan, MH, Razzak, I, Gupta, P & Prasad, M 2020, 'End-to-End Analysis for Text Detection and Recognition in Natural Scene Images', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Right from the very beginning, text has had vital importance in human life. Compared to vision-based applications, preference is always given to the precise and productive information embodied in text. Given this importance, the recognition and detection of text are equally important. This paper presents a deep analysis of recent developments in scene text detection and recognition, compares their performance, and brings to light real modern applications. Future potential directions of scene text detection and recognition are also discussed.
Alqahtani, A, Hawryszkiewycz, I & Erfani, E 2020, 'Analysing Citizens' Inputs in Public Online Open Innovation Platforms', 26th Americas Conference on Information Systems, AMCIS 2020, Americas Conference on Information Systems, AISEL, USA.
View description>>
Online open innovation platforms are being used widely in the public sector of many countries. Citizens are the main users of these types of platforms which allow citizens to share and post their ideas online. The citizens' values that can be derived from the content of public open innovation platforms are not clear in the literature as previous studies were limited to studying open innovation platforms in the private sector. This study will explore the content of two public online open innovation platforms, specifically citizens' interests which are called “values”. The ideas of around 2580 citizens from open innovation platforms in Saudi Arabia and Australia will be analysed. By using thematic analysis and a non-linear coding process, themes will be generated. These themes are categories of citizens' values. Finally, citizens' values will be represented as a framework of the content of citizen inputs in public online open innovation platforms.
Alrashed, BA & Hussain, W 2020, 'Managing SLA Violation in the cloud using Fuzzy re-SchdNeg Decision Model', 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), IEEE.
View/Download from: Publisher's site
Amirbagheri, K, Merigó, JM & Yang, J-B 1970, 'A Bibliometric Analysis of Leading Countries in Supply Chain Management Research', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 182-192.
View/Download from: Publisher's site
View description>>
Supply chain management, as a newcomer discipline, has attracted much scholarly attention owing to its prominent importance for the economy and its influence on the management of organizations. The key point is thus to understand the trends among countries over time in order to gain a powerful insight into this issue. To this end, this work conducts a comprehensive analysis from 1990 to 2017. The purpose of this study is to analyze the leading countries and thoroughly understand their trends over time. The work is divided into three sections. In the first, the countries are studied globally to give academics a comprehensive overview. Next, the performance of the countries is studied over three periods to better understand the changes in each over time. Finally, some individual journals and groups of journals are also investigated. The results show that the USA is the leader among the countries, while China has experienced enormous growth and, with this trend, is predicted to reach the top of the list.
Anjum, M, Voinov, A, Pileggi, SF & Castilla Rho, J 1970, 'Eliciting, Formalising, And Debiasing Mental Models Through An Online Tool For Serious Discussions', International Environmental Modelling & Software Society, Online.
Antani, S, Carey, C, Hudson, D, Kun, L, Lee, C, McGregor, C & Kojima, K 2020, 'Introduction to the Technical Program of IEEE LifeTech 2020', 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech), 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech), IEEE, p. 6.
View/Download from: Publisher's site
Anwar, MJ & Gill, AQ 2020, 'Developing an Integrated ISO 27701 and GDPR based Information Privacy Compliance Requirements Model', ACIS 2020 Proceedings - 31st Australasian Conference on Information Systems, Australasian Conference on Information Systems 2020, Wellington, New Zealand.
View description>>
The protection of information assets requires an interdisciplinary approach and cross-functional capabilities. In recent times, information security and privacy compliance continue to be a complicated task due to increasing regulatory restrictions, changing legislation and public awareness. The newly published information security and privacy standard ISO/IEC 27701:2019 provides support for organisations looking to put in place systems that support compliance with global data privacy requirements. However, little is known about how this standard maps to other regulatory requirements in different jurisdictions, specifically the globally relevant General Data Protection Regulation (GDPR). Hence, this research aims to answer an important research question: whether and how the ISO/IEC 27701:2019 framework represents an opportunity for GDPR compliance. This research provides a review and mapping of ISO/IEC 27701:2019 and the GDPR by using an integrated requirements engineering model as a kernel theory. The results of this research will assist organisations seeking to meet their compliance needs. It will also help academics and practitioners interested in integrating ISO/IEC 27701:2019 and the GDPR to develop relevant compliance frameworks and tools.
Aung, TWW, Huo, H & Sui, Y 2020, 'A Literature Review of Automatic Traceability Links Recovery for Software Change Impact Analysis', Proceedings of the 28th International Conference on Program Comprehension, ICPC '20: 28th International Conference on Program Comprehension, ACM, pp. 14-24.
View/Download from: Publisher's site
View description>>
In large-scale software development projects, change impact analysis (CIA) plays an important role in controlling software design evolution. Identifying and assessing the effects of software changes using traceability links between various software artifacts is a common practice during the software development cycle. Recently, research in automated traceability-link recovery has received broad attention in the software maintenance community to reduce the manual maintenance cost of trace links by developers. In this study, we conducted a systematic literature review of automatic traceability link recovery approaches with a focus on CIA. We identified 33 relevant studies and investigated the following aspects of CIA: traceability approaches, CIA sets, degrees of evaluation, trace direction and methods for recovering traceability links between artifacts of different types. Our review indicated that few traceability studies focused on designing and testing impact analysis sets, presumably due to the scarcity of datasets. Based on the findings, we urge further industrial case studies. Finally, we suggest developing traceability tools to support fully automatic traceability approaches, such as machine learning and deep learning.
Bai, L, Yao, L, Li, C, Wang, X & Wang, C 2020, 'Adaptive graph convolutional recurrent network for traffic forecasting', Advances in Neural Information Processing Systems, 34th Conference on Neural Information Processing Systems, online.
View description>>
Modeling complex spatial and temporal correlations in correlated time series data is indispensable for understanding traffic dynamics and predicting the future status of an evolving traffic system. Recent works focus on designing complicated graph neural network architectures to capture shared patterns with the help of pre-defined graphs. In this paper, we argue that learning node-specific patterns is essential for traffic forecasting, while the pre-defined graph is avoidable. To this end, we propose two adaptive modules for enhancing Graph Convolutional Networks (GCN) with new capabilities: 1) a Node Adaptive Parameter Learning (NAPL) module to capture node-specific patterns; 2) a Data Adaptive Graph Generation (DAGG) module to infer the inter-dependencies among different traffic series automatically. We further propose an Adaptive Graph Convolutional Recurrent Network (AGCRN) to capture fine-grained spatial and temporal correlations in traffic series automatically, based on the two modules and recurrent networks. Our experiments on two real-world traffic datasets show AGCRN outperforms the state-of-the-art by a significant margin without pre-defined graphs of spatial connections.
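The DAGG idea of inferring inter-dependencies from node embeddings is commonly realised as a row-wise softmax over ReLU-rectified embedding similarities; a minimal sketch, assuming fixed rather than learned embeddings (in AGCRN the embeddings are trained end-to-end with the forecasting loss):

```python
from math import exp

def adaptive_adjacency(E):
    """Infer normalized edge weights from node embeddings, in the
    spirit of DAGG: A = softmax(ReLU(E @ E.T)), computed row by row."""
    n = len(E)
    # inner-product similarity between node embeddings, rectified
    scores = [[max(0.0, sum(a * b for a, b in zip(E[i], E[j])))
               for j in range(n)] for i in range(n)]
    # row-wise softmax normalizes each node's outgoing weights to 1
    adj = []
    for row in scores:
        exps = [exp(s) for s in row]
        total = sum(exps)
        adj.append([v / total for v in exps])
    return adj

# three nodes with hand-picked 2-d embeddings
E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
A = adaptive_adjacency(E)
print(all(abs(sum(row) - 1.0) < 1e-9 for row in A))  # True
```

Each row of the resulting matrix is a probability distribution over neighbours, so no pre-defined graph of spatial connections is needed.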
Bai, L, Yao, L, Wang, X, Kanhere, SS & Xiao, Y 2020, 'Prototype Similarity Learning for Activity Recognition', Lecture Notes in Computer Science, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, online, pp. 649-661.
View/Download from: Publisher's site
View description>>
Human Activity Recognition (HAR) plays an irreplaceable role in various applications such as security, gaming, and assisted living. Recent studies introduce deep learning to mitigate manual feature extraction (i.e., data representation) efforts and achieve high accuracy. However, there are still challenges in learning accurate representations for sensory data due to the weakness of representation modules and subject variances. We propose a scheme called Distance-based HAR from Ensembled spatial-temporal Representations (DHARER) to address the above challenges. The idea behind DHARER is straightforward: the same activities should have similar representations. We first learn representations of the input sensory segments and latent prototype representations of each class using a Convolutional Neural Network (CNN)-based dual-stream representation module; the learned representations are then projected to activity types by measuring their similarity to the learned prototypes. We have conducted extensive experiments under a strict subject-independent setting on three large-scale datasets to evaluate the proposed scheme, and our experimental results demonstrate the superior performance of DHARER over several state-of-the-art methods.
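The distance-to-prototype classification this abstract describes can be sketched in a few lines. Here prototypes are simple class means over fixed feature vectors, whereas the paper learns them jointly with a CNN encoder; the names and toy data are illustrative:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def prototypes(reps, labels):
    """Mean feature vector per class: the latent 'prototype'.
    A sketch only; the paper learns prototypes jointly with a
    CNN-based dual-stream encoder rather than averaging."""
    by_cls = {}
    for r, y in zip(reps, labels):
        by_cls.setdefault(y, []).append(r)
    return {y: tuple(sum(dim) / len(rs) for dim in zip(*rs))
            for y, rs in by_cls.items()}

def classify(rep, protos):
    """Predict the activity whose prototype is closest to `rep`."""
    return min(protos, key=lambda y: dist(rep, protos[y]))

# toy 2-d 'representations' for two activities
reps = [(0.0, 0.0), (0.0, 2.0), (5.0, 5.0), (5.0, 7.0)]
labels = ['walk', 'walk', 'run', 'run']
p = prototypes(reps, labels)
print(classify((1.0, 1.0), p))  # walk
```

Classifying by similarity to class prototypes, instead of through a fixed softmax head, is what lets representations of the same activity cluster together across subjects.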
Banerjee, S, Misra, R, Prasad, M, Elmroth, E & Bhuyan, MH 2020, 'Multi-diseases Classification from Chest-X-ray: A Federated Deep Learning Approach', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 3-15.
View/Download from: Publisher's site
View description>>
Data plays a vital role in deep learning model training. In large-scale medical image analysis, data privacy and ownership make data gathering challenging in a centralized location. Hence, federated learning has been shown to alleviate both problems over the last few years. In this work, we propose multi-disease classification from chest X-rays using Federated Deep Learning (FDL). The FDL approach detects pneumonia from chest X-rays and also distinguishes viral from bacterial pneumonia. Without submitting the chest X-ray images to a central server, clients train local models with limited private data at the edge server and send them to the central server for global aggregation. We have used four pre-trained models, namely ResNet18, ResNet50, DenseNet121, and MobileNetV2, and applied transfer learning to them at each edge server. The models learned in the federated setting have been compared with centrally trained deep learning models. It has been observed that models trained using ResNet18 in the federated environment produce accuracy up to 98.3% for pneumonia detection and up to 87.3% for viral versus bacterial pneumonia detection. We have compared the performance of adaptive learning rate based optimizers such as Adam and Adamax with Momentum based Stochastic Gradient Descent (SGD) and found that Momentum SGD yields better results than the others. Lastly, for visualization, we have used Class Activation Mapping (CAM) approaches such as Grad-CAM, Grad-CAM++, and Score-CAM to identify pneumonia-affected regions in a chest X-ray.
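The global aggregation step can be illustrated with a standard FedAvg-style weighted average; this is a sketch of the general technique, not the authors' exact implementation, and parameters are flat lists for brevity:

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client models by a size-weighted parameter average
    (standard FedAvg): the server never sees the raw images, only
    the locally trained weights. Real networks would average each
    layer's tensor the same way."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
            for k in range(n_params)]

# two edge servers with equal amounts of private data
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 1])
print(global_model)  # [2.0, 3.0]
```

Weighting by client data size keeps the global model from being dominated by edge servers that hold only a handful of chest X-rays.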
Bano, M, Zowghi, D, Ferrari, A & Spoletini, P 2020, 'Inspectors Academy: Pedagogical Design for Requirements Inspection Training', RE, IEEE International Requirements Engineering Conference, IEEE, Zurich, pp. 215-226.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. The core aim of requirements inspection is to ensure the high quality of already elicited requirements in the Software Requirements Specification. Teaching requirements inspection to novices is challenging, as inspecting requirements needs several skills as well as knowledge of the product and process that is hard to achieve in a classroom environment. Published studies about pedagogical design specifically for teaching requirements inspection are scarce. Our objective is to present the design and evaluation of a postgraduate course for requirements inspection training. We conducted an empirical study with 138 postgraduate students, teamed up in 34 groups to conduct requirements inspection. We performed qualitative analysis on the data collected from students' reflection reports to assess the effects of the pedagogical design in terms of benefits and challenges. We also quantitatively analyzed the correlation between the students' performance in conducting inspections and their ability to write specifications. From the analysis of students' reflections, several themes emerged, such as their difficulty in working with limited information, but the analysis also revealed the benefits of learning teamwork and writing good requirements. This qualitative analysis also provides recommendations for improving the related activities. The results revealed a moderate positive correlation between performance in writing specifications and in inspection.
Bastwadkar, M, McGregor, C & Balaji, S 2020, 'A Cloud Based Big Data Health-Analytics-as-a-Service Framework to Support Low Resource Setting Neonatal Intensive Care Unit', Proceedings of the 4th International Conference on Medical and Health Informatics, ICMHI 2020: 2020 4th International Conference on Medical and Health Informatics, ACM, Kamakura City, Japan, pp. 30-36.
View/Download from: Publisher's site
View description>>
© 2020 ACM. Critical care patients are monitored by a range of medical devices collecting high frequency data. New computing frameworks and platforms are being proposed to review and analyze the data in detail. The application of these approaches in a low resource setting is challenged by the approaches used for data acquisition. Software as a Service (SaaS) is a form of cloud computing where a cloud-based software application enables the storage, analysis and visualization of data within the cloud. A subset of SaaS is Health Analytics as a Service (HAaaS), which provides software to support health analytics in the cloud. The objective of this study is to design, implement, and demonstrate an extendable big-data compatible HAaaS framework that offers both real-time and retrospective analysis where data acquisition is not tightly coupled. A data warehousing framework is presented to facilitate analysis within a low resource setting. The framework has been instantiated in the Artemis platform within the context of the Belgaum Children Hospital (BCH) case study. Initial end-to-end testing with the Nellcor monitor (bedside monitor at BCH), which was not connected to any human, was completed. This testing confirms the functionality of the new Artemis cloud instance to receive data from test device using an alternate data acquisition approach.
Bei, X, Chen, S, Guan, J, Qiao, Y & Sun, X 2020, 'From independent sets and vertex colorings to isotropic spaces and isotropic decompositions: Another bridge between graphs and alternating matrix spaces', Leibniz International Proceedings in Informatics, LIPIcs.
View/Download from: Publisher's site
View description>>
In the 1970s, Lovász built a bridge between graphs and alternating matrix spaces, in the context of perfect matchings (FCT 1979). A similar connection between bipartite graphs and matrix spaces plays a key role in the recent resolutions of the non-commutative rank problem (Garg-Gurvits-Oliveira-Wigderson, FOCS 2016; Ivanyos-Qiao-Subrahmanyam, ITCS 2017). In this paper, we lay the foundation for another bridge between graphs and alternating matrix spaces, in the context of independent sets and vertex colorings. The corresponding structures in alternating matrix spaces are isotropic spaces and isotropic decompositions, both useful structures in group theory and manifold theory. We first show that the maximum independent set problem and the vertex c-coloring problem reduce to the maximum isotropic space problem and the isotropic c-decomposition problem, respectively. Next, we show that several topics and results about independent sets and vertex colorings have natural correspondences for isotropic spaces and decompositions. These include algorithmic problems, such as the maximum independent set problem for bipartite graphs, and exact exponential-time algorithms for the chromatic number, as well as mathematical questions, such as the number of maximal independent sets, and the relation between the maximum degree and the chromatic number. These connections lead to new interactions between graph theory and algebra. Some results have concrete applications to group theory and manifold theory, and we initiate a variant of these structures in the context of quantum information theory. Finally, we propose several open questions for further exploration.
Biddle, R, Joshi, A, Liu, S, Paris, C & Xu, G 2020, 'Leveraging Sentiment Distributions to Distinguish Figurative From Literal Health Reports on Twitter', Proceedings of The Web Conference 2020, WWW '20: The Web Conference 2020, ACM, pp. 1217-1227.
View/Download from: Publisher's site
View description>>
© 2020 ACM. Harnessing data from social media to monitor health events is a promising avenue for public health surveillance. A key step is the detection of reports of a disease (referred to as 'health mention classification') amongst tweets that mention disease words. Prior work shows that figurative usage of disease words may prove to be challenging for health mention classification. Since the experience of a disease is associated with a negative sentiment, we present a method that utilises sentiment information to improve health mention classification. Specifically, our classifier for health mention classification combines pre-trained contextual word representations with sentiment distributions of words in the tweet. For our experiments, we extend a benchmark dataset of tweets for health mention classification, adding over 14k manually annotated tweets across diseases. We additionally annotate each tweet with a label that indicates if the disease words are used in a figurative sense. Our classifier outperforms current SOTA approaches in detecting both health-related and figurative tweets that mention disease words. We also show that tweets containing disease words are mentioned figuratively more often than in a health-related context, proving to be challenging for classifiers targeting health-related tweets.
Bossalini, C, Raffe, W & Andres Garcia, J 2020, 'Generative Audio and Real-Time Soundtrack Synthesis in Gaming Environments', 32nd Australian Conference on Human-Computer Interaction, OzCHI '20: 32nd Australian Conference on Human-Computer-Interaction, ACM, Australia, pp. 281-292.
View/Download from: Publisher's site
View description>>
An important yet oft-overlooked front in the scope of interactive media, audio technologies have remained relatively stagnant compared to groundbreaking advancements made in fields such as visual fidelity and virtual reality. This paper explores the use of generative audio within a gaming environment, examining how dynamically-rendered audio can modify the creative pipeline, offer greater flexibility for audio designers, and improve the overall immersion of games and interactive media. A prototype generative audio engine is created, allowing for various musical parameters like tempo and pitch to be changed at runtime. Additionally, bidirectional linking between gameplay and music is explored, allowing player inputs to influence the soundtrack and the soundtrack to trigger or quantize player inputs. The final result, while somewhat limited in scope, demonstrates the potential of partially generative soundtracks to provide greater variety and freedom for audio engineers.
Brooksbank, PA, Li, Y, Qiao, Y & Wilson, JB 2020, 'Improved algorithms for alternating matrix space isometry: From theory to practice', Leibniz International Proceedings in Informatics, LIPIcs.
View/Download from: Publisher's site
View description>>
Motivated by testing isomorphism of p-groups, we study the alternating matrix space isometry problem (AltMatSpIso), which asks to decide whether two m-dimensional subspaces of n × n alternating (skew-symmetric if the field is not of characteristic 2) matrices are the same up to a change of basis. Over a finite field F_p with some prime p ≠ 2, solving AltMatSpIso in time p^O(n+m) is equivalent to testing isomorphism of p-groups of class 2 and exponent p in time polynomial in the group order. The latter problem has long been considered a bottleneck case for the group isomorphism problem. Recently, Li and Qiao presented an average-case algorithm for AltMatSpIso in time p^O(n) when n and m are linearly related (FOCS'17). In this paper, we present an average-case algorithm for AltMatSpIso in time p^O(n+m). Besides removing the restriction on the relation between n and m, our algorithm is considerably simpler, and the average-case analysis is stronger. We then implement our algorithm, with suitable modifications, in Magma. Our experiments indicate that it improves significantly over default (brute-force) algorithms for this problem.
Buchan, J, Zowghi, D & Bano, M 2020, 'Applying Distributed Cognition Theory to Agile Requirements Engineering', REFSQ, International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer, Pisa, Italy, pp. 186-202.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. [Context & Motivation] Agile Requirements Engineering (ARE) is a collaborative, team-based process based on frequent elicitation, elaboration, estimation and prioritization of the user requirements, typically represented as user stories. While it is claimed that this Agile approach and the associated RE activities are effective, there is sparse empirical evidence and limited theoretical foundation to explain this efficacy. [Question/problem] We aim to understand and explain aspects of the ARE process by focusing on a cognitive perspective. We appropriate ideas and techniques from Distributed Cognition (DC) theory to analyze the cognitive roles of people, artefacts and the physical work environment in a successful collaborative ARE activity, namely requirement prioritization. [Principal idea/results] This paper presents a field study of two early requirements related meetings in an Agile product development project. Observation data, field notes and transcripts were collected and qualitatively analyzed. We have used DiCoT, a framework for systematically applying DC, as a methodological contribution to analyze the ARE process and explain its efficacy from a cognitive perspective. The analysis identified three main areas of cognitive effort in the ARE process as well as the significant information flows and artefacts. Analysis of these has identified that the use of physical user story cards, specific facilitator skills, and the development of a shared understanding of the user stories were all key to the effectiveness of the ARE activity observed. [Contribution] The deeper understanding of cognition involved in ARE provides an empirically evidenced explanation, based on DC theory, of why this way of collaboratively prioritizing requirements was effective. Our result provides a basis for designing other ARE activities.
Buruk, OO, Özcan, O, Baykal, GE, Göksun, T, Acar, S, Akduman, G, Baytaş, MA, Beşevli, C, Best, J, Coşkun, A, Genç, HU, Kocaballi, AB, Laato, S, Mota, C, Papangelis, K, Raftopoulos, M, Ramchurn, R, Sádaba, J, Thibault, M, Wolff, A & Yildiz, M 2020, 'Children in 2077: Designing Children's Technologies in the Age of Transhumanism', Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20: CHI Conference on Human Factors in Computing Systems, ACM.
View/Download from: Publisher's site
Cai, L, Lin, D, Zhang, J & Yu, S 2020, 'Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing', ICC 2020 - 2020 IEEE International Conference on Communications (ICC), ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, Dublin, Ireland, pp. 1-6.
View/Download from: Publisher's site
View description>>
Federated learning is a state-of-the-art technology used in fog computing, which allows distributed learning to train on cross-device data while achieving efficient performance. Many current works have optimized the federated learning algorithm in homogeneous networks. However, in actual application scenarios of distributed learning, data is independently generated by each device, and this non-homologous data has different distribution characteristics. Therefore, the data used by each device for local learning is unbalanced and non-IID, and this heterogeneity of data degrades the performance of federated learning and slows down convergence. In this paper, we present a dynamic sample selection optimization algorithm, FedSS, to tackle heterogeneous data in federated learning. FedSS dynamically selects the training sample size during the gradient iteration based on the locally available data size, to avoid expensive evaluations of the local objective function on a massive dataset. We theoretically analyze the convergence and present complexity estimates for our framework when learning large data from unbalanced distributions. Our experimental results show that the use of dynamic sampling methods can effectively improve the convergence speed with heterogeneous data, and keep computational costs low while achieving the desired accuracy.
Cao, Y, Chen, X, Yao, L, Wang, X & Zhang, WE 2020, 'Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems', Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20: The 43rd International ACM SIGIR conference on research and development in Information Retrieval, ACM, pp. 1669-1672.
View/Download from: Publisher's site
View description>>
Adversarial attacks on reinforcement learning-based interactive recommender systems pose significant challenges, particularly for detecting them at an early stage. We propose attack-agnostic detection for such systems. We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data. Finally, we study the attack strength and frequency of adversarial examples and evaluate our model on standard datasets with multiple crafting methods. Our extensive experiments show that most adversarial attacks are effective, and that both attack strength and attack frequency impact the attack performance. The strategically-timed attack achieves comparable attack performance with only 1/3 to 1/2 the attack frequency. Besides, our black-box detector trained with one crafting method generalizes over several crafting methods.
Cao, Z, Wong, K, Bai, Q & Lin, CT 2020, 'Hierarchical and non-hierarchical multi-agent interactions based on unity reinforcement learning', Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 2095-2097.
View description>>
The open-source Unity platform, where agents can be trained using hierarchical or non-hierarchical reinforcement learning, supports the use of games and simulations as environments for multiple-agent interactions. In this demonstration, we present hierarchical and non-hierarchical multi-agent interactions based on Unity reinforcement learning, specifically, hierarchical reinforcement learning that sets different levels of agents' observations to achieve the goal. We created four multi-agent scenarios in the Unity environment, namely, Crawler, Tennis, Banana Collector, and Soccer, to test the interaction performances of hierarchical and non-hierarchical reinforcement learning. The simulation-interaction performances show that hierarchical reinforcement learning can be applied to multi-agent environments and can compete with agents trained via non-hierarchical reinforcement learning.
Cetindamar, D, Shdifat, B & Erfani, S 1970, 'Assessing Big Data Analytics Capability and Sustainability in Supply Chains', Proceedings of the Annual Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hawaii, USA, pp. 208-217.
View/Download from: Publisher's site
Chen, H, Yin, H, Sun, X, Chen, T, Gabrys, B & Musial, K 2020, 'Multi-level Graph Convolutional Networks for Cross-platform Anchor Link Prediction', Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, pp. 1503-1511.
View/Download from: Publisher's site
View description>>
Cross-platform account matching plays a significant role in social network analytics, and is beneficial for a wide range of applications. However, existing methods either heavily rely on high-quality user generated content (including user profiles) or suffer from the data insufficiency problem if only focusing on network topology, which brings researchers into an insoluble dilemma of model selection. In this paper, to address this problem, we propose a novel framework that considers multi-level graph convolutions on both local network structure and hypergraph structure in a unified manner. The proposed method overcomes the data insufficiency problem of existing work and does not necessarily rely on user demographic information. Moreover, to adapt the proposed method to be capable of handling large-scale social networks, we propose a two-phase space reconciliation mechanism to align the embedding spaces in both network partitioning based parallel training and account matching across different social networks. Extensive experiments have been conducted on two large-scale real-life social networks. The experimental results demonstrate that the proposed method outperforms the state-of-the-art models by a big margin.
Chen, S-K, Chen, C-S, Wang, Y-K & Lin, C-T 2020, 'An SSVEP Stimuli Design using Real-time Camera View with Object Recognition', 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Canberra, Australia, pp. 562-567.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Most SSVEP-based BCI stimuli are pre-defined white blocks, a scenario that offers little flexibility in real life. To present flickers matching the location, types and configurations of objects in the real world, this paper proposes an SSVEP-based BCI that uses a real-time camera view with an object recognition algorithm to provide an intuitive BCI for users. A deep learning-based object recognition algorithm calculates the locations of objects in the online camera view from a depth camera. After the bounding boxes of the objects are estimated, the SSVEP flickers are positioned to overlap the object locations. An overlapping FFT and an SVM are used to classify the EEG signals into the corresponding classes. In the experimental results, the classification rate for the camera-view scenario is more than 94.1%. The results show that the proposed SSVEP stimulus design can create an intuitive and reliable human-machine interaction, and can help users with motor disabilities interact with assistive devices such as robotic arms and wheelchairs.
Chen, X, Huang, C, Yao, L, Wang, X, Liu, W & Zhang, W 2020, 'Knowledge-guided Deep Reinforcement Learning for Interactive Recommendation', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK.
View/Download from: Publisher's site
Chen, Y, Dong, G, Hao, Y, Zhang, Z, Peng, H & Yu, S 1970, 'An Open Identity Authentication Scheme Based on Blockchain', Algorithms and Architectures for Parallel Processing, International Conference on Algorithms and Architectures for Parallel Processing, Springer International Publishing, Australia, pp. 421-438.
View/Download from: Publisher's site
View description>>
With the development of Public Key Infrastructure (PKI), many identity management systems have been implemented in enterprises, hospitals, government departments, etc. These PKI-based systems are typically centralized. Each has its own certificate authority (CA) as a trust anchor and is designed according to its own requirements, forming many trust domains isolated from each other, with no unified business standard for the delivery of trust from one identity system to another. This causes considerable inconvenience to users with cross-domain requirements: for example, they must repeatedly register the same physical identity in different domains, and it is hard to prove in one domain the validity of an attestation issued by another. Present PKI systems adopt solutions such as trust lists, bridge CAs or cross-certification of CAs to break this trust isolation, but practice shows that they all have obvious defects under the existing PKI structure. We propose an open identity authentication structure based on blockchain and design three protocols: a physical identity registration protocol, a virtual identity binding protocol and an attribution attestation protocol. The tests and security analysis show that the scheme has better practical value compared to traditional ones.
Darwish, A, Halkon, B, Oberst, S, Fitch, R & Rothberg, S 2020, 'Correction of laser Doppler vibrometer measurements affected by sensor head vibration using time domain techniques', XI International Conference on Structural Dynamics, XI International Conference on Structural Dynamics, EASD, Athens, pp. 4842-4850.
View/Download from: Publisher's site
View description>>
Despite widespread use in a variety of areas, in-field applications of laser Doppler vibrometers (LDVs) are still somewhat limited due to their inherent sensitivity to vibration of the instrument sensor head itself. Earlier work, briefly reviewed herein, has shown it to be possible to subtract the instrument vibration via a number of means; however, it has been difficult up to now to truly compare the performance of these. This is compounded by the constraint that a frequency domain based approach only holds for stationary vibration signals while, particularly for in-field applications, an approach that is also applicable to transient signals is necessary. This paper therefore describes the development of a novel time domain post-processing based approach for vibrating LDV measurement correction and compares it with its frequency domain counterpart. Results show that, while both techniques offer significant improvements in the corrected LDV signal when compared to a reference accelerometer measurement, the time domain based correction outperforms the frequency domain based method by a factor of eight.
Do, T-TN, Singh, AK, Cortes, CAT & Lin, C-T 2020, 'Estimating the cognitive load in physical spatial navigation', 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Canberra, Australia, pp. 568-575.
View/Download from: Publisher's site
Dong, Y, Fauth, A, Huang, M, Chen, Y & Liang, J 2020, 'PansyTree: Merging Multiple Hierarchies', 2020 IEEE Pacific Visualization Symposium (PacificVis), 2020 IEEE Pacific Visualization Symposium (PacificVis), IEEE, Tianjin, China.
View/Download from: Publisher's site
View description>>
Hierarchical structures are very common in the real world for recording the relational data generated in daily life and business procedures. A popular visualization method for displaying such data structures is the tree. A variety of tree visualization methods have been proposed, but most can only visualize one hierarchical dataset at a time, which makes it difficult to compare two or more hierarchical datasets. In this paper, we propose PansyTree, which uses a tree metaphor to visualize merged hierarchies. We design a unique icon, named a pansy, to represent each merged node in the structure. Each pansy is encoded with three colors, mapping data items from three different datasets at the same hierarchical position (or tree node). The petals and sepal of a pansy show each attribute's values and hierarchical information. We also redefine the links in the force layout, encoded by width and animation, to better convey hierarchical information. We further apply PansyTree to the CNCEE datasets and demonstrate two use cases to verify its effectiveness. The main contribution of this work is merging three datasets into one tree, making it much easier to explore and compare structures, data items and data attributes with visual tools.
Dourish, P, Lawrence, C, Leong, TW & Wadley, G 2020, 'On Being Iterated: The Affective Demands of Design Participation', Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20: CHI Conference on Human Factors in Computing Systems, ACM, pp. 1-11.
View/Download from: Publisher's site
View description>>
© 2020 ACM. Iteration is a central feature of most HCI design methods, creating as it does opportunities for engagements with stakeholder groups. But what does iteration demand of those groups? Under what conditions do iterative engagements arise, and with what stakes? Building on experiences with Aboriginal Australian communities, and drawing on feminist and decolonial thinking, we examine the nature of iteration for HCI and how it frames encounters between design and use, with a focus on the affective dimension of engagement in iterative design processes.
Erfani, E 2020, 'Wellness management of seniors: Mobile health (mHealth) solutions', AMCIS 2020 Proceedings.
Erfani, E, Abedin, B, Luckett, T, Lawrence, C & Hanna, ASH 2020, 'A Culturally and Language Appropriate Smartphone-based Support Intervention for Enhancing the Psychological Well-being of Indigenous Australian People with cancer.', ECIS.
Ghantous, GB & Gill, A 2020, 'The DevOps Reference Architecture Evaluation : A Design Science Research Case Study', 2020 IEEE International Conference on Smart Internet of Things (SmartIoT), 2020 IEEE International Conference on Smart Internet of Things (SmartIoT), IEEE, Beijing, China, pp. 295-299.
View/Download from: Publisher's site
View description>>
There is a growing interest in adopting vendor-driven DevOps tools in organizations. However, it is not clear which tools to use in a reference architecture that enables the deployment of emerging IoT applications to multi-cloud environments. A research-based and vendor-neutral DevOps reference architecture (DRA) framework has been developed to address this critical challenge. The DRA framework can be utilized to architect and implement a DevOps environment that enables automation and continuous integration of software application deployment to multi-cloud. This paper presents and discusses the evaluation outcomes of the DRA framework at the DigiSAS research lab. The evaluation outcomes present practical evidence about the applicability of the DRA framework. The evaluation results also indicate that the DRA framework provides a general knowledge base for researchers and practitioners about adopting the DevOps approach in reference architecture design for deploying IoT applications to multi-cloud environments.
Gill, AQ, Beydoun, G, Niazi, M & Khan, HU 2020, 'Adaptive Architecture and Principles for Securing the IoT Systems.', IMIS, International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Springer, Lodz, Poland, pp. 173-182.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2021. There is an increasing interest in IoT-enabled smart digital systems. However, it is important to address their security concerns. This paper aims to address this need and proposes an adaptive architecture driven approach to securing IoT systems. The paper proposes IoT security principles and a foundational adaptive architecture framework. These two combined provide a guide to designing and embedding security across the various layers of an IoT system. This ensures that important aspects of IoT security are not accidentally missed, and thus provides a holistic, end-to-end, adaptive architecture driven approach to IoT security. This paper covers the interaction, human, digital technology, physical facility and environment architecture layers and principles related to IoT security, as opposed to focusing only on the IoT devices. Thus, it demonstrates and concludes that IoT security is much more than IoT device, network and perimeter security.
Glass, J & McGregor, C 2020, 'Towards Player Health Analytics in Overwatch', 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, Vancouver, BC, Canada, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Overwatch is a competitive, team-based first-person shooter game, with a professional eSports league supporting competitive play. Player mental health has been an issue in eSports, and in Overwatch multiple players have quit playing professionally and cited mental health concerns. Player physiology during gameplay presents an opportunity to understand stressors during gameplay that may affect individual performance and health. This paper presents the collection of physiological data from Overwatch players and overlays it with data from the video game. This method, demonstrated in a pilot study, could be used to learn more about how in-game events affect player mental health, and lead to the development of resilience building approaches for eSports athletes.
Go, JH, Jan, T, Mohanty, M, Patel, OP, Puthal, D & Prasad, M 2020, 'Visualization Approach for Malware Classification with ResNeXt', 2020 IEEE Congress on Evolutionary Computation (CEC), 2020 IEEE Congress on Evolutionary Computation (CEC), IEEE, Glasgow, United Kingdom, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. The Internet has resulted in cyber-threats and cyber-crimes, which can occur anywhere at any time. Among various cyber threats, modern malware with applied metamorphic and polymorphic technology is a concern as it can proliferate into advanced variants from its original shape. Typical malware analysis methods, including the signature-based approach, remain vulnerable to such advanced variants. This paper proposes a visualization-based approach for malware analysis using a state-of-the-art Convolutional Neural Network (CNN) model, ResNeXt, which has achieved outstanding performance in image classification with competitive computational complexity. The proposed method transforms the attributes of raw malware binary executable files to greyscale images for further analysis by well-established deep learning models. The greyscale images, which result from the data transformation for visualization, are classified using ResNeXt. The experiment results show that the proposed solution achieves 98.32% and 98.86% accuracy in malware classification on the Malimg dataset and the modified Malimg dataset, respectively. The proposed method outperforms other comparable methods in terms of classification accuracy and requires a similar level of computational power.
Golzan, M, Gheisari, S, Shariflou, S, Phu, J, Kennedy, PJ, Agar, A & Kalloniatis, M 2020, 'A combined convolutional and recurrent neural network applied to fundus videos markedly enhances glaucoma detection', Investigative Ophthalmology & Visual Science, Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO), Association for Research in Vision and Ophthalmology, online.
Gong, Y, Li, Z, Zhang, J, Liu, W & Yi, J 2020, 'Potential Passenger Flow Prediction: A Novel Study for Urban Transportation Development', Proceedings of the AAAI Conference on Artificial Intelligence, Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), Association for the Advancement of Artificial Intelligence (AAAI), New York, USA, pp. 4020-4027.
View/Download from: Publisher's site
View description>>
Recently, practical applications of passenger flow prediction have brought many benefits to urban transportation development. With the development of urbanization, a real-world demand from transportation managers is to construct a new metro station in a city area where none was planned before. Authorities want a picture of the future volume of commuters before constructing a new station, and an estimate of how it would affect other areas. In this paper, this specific problem is termed potential passenger flow (PPF) prediction, a novel and important study connected with urban computing and intelligent transportation systems. For example, an accurate PPF predictor can provide invaluable knowledge to designers, such as advice on station scales and influences on other areas. To address this problem, we propose a multi-view localized correlation learning method. The core idea of our strategy is to learn the passenger flow correlations between the target areas and their localized areas with adaptive weights. To improve the prediction accuracy, other domain knowledge is incorporated via a multi-view learning process. We conduct intensive experiments to evaluate the effectiveness of our method with real-world official transportation datasets. The results demonstrate that our method achieves excellent performance compared with other available baselines. Besides, our method can provide an effective solution to the cold-start problem in recommender systems as well, as proved by its outperforming experimental results.
Gromov, A, Maslennikov, A, Dawson, N, Musial, K & Kitto, K 2020, 'Curriculum profile: modelling the gaps between curriculum and the job market', Proceedings of the 13th International Conference on Educational Data Mining, EDM 2020, Ifrane, Morocco (Fully Virtual Conference), pp. 610-614.
View description>>
This study uses skill-based curriculum analytics to mine the curriculum of an entire university. A curriculum profile is constructed, providing insights about university curriculum design and the match between one institution’s curriculum and the job market for a cluster of data-intensive fields. Automating the delivery of diagnostic information like this would enable institutions to ensure that their professionally-oriented degrees meet the needs of industry, so helping to improve learner outcomes and graduate employability.
Gu, C, Xiong, J, Shi, Z & Liu, B 2020, 'Video Cooperative Caching in High-Speed Train by using Differential Evolution Algorithm', 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
Gu, X & Cao, Z 1970, 'An EEG Majority Vote Based BCI Classification System for Discrimination of Hand Motor Attempts in Stroke Patients', Communications in Computer and Information Science, Springer International Publishing, pp. 46-53.
View/Download from: Publisher's site
View description>>
Stroke patients have symptoms of cerebral functional disturbance that can aggressively impair physical mobility, such as hand impairments. Although rehabilitation training with external devices is beneficial for hand movement recovery, for the purpose of initiating motor function restoration there is still research merit in identifying which hand is in motion. In this preliminary study, we used an electroencephalogram (EEG) dataset from 8 stroke patients, with each subject conducting 40 EEG trials of left motor attempts and 40 EEG trials of right motor attempts. We then proposed a majority vote based EEG classification system for identifying the side in motion. Specifically, we extracted 1–50 Hz power spectral features as input for a series of well-known classification models. The predicted labels from these classification models were compared, and a majority vote based method determined the final predicted label. Our experimental results showed that our proposed EEG classification system achieved 99.83 ± 0.42% accuracy, 99.98 ± 0.13% precision, 99.66 ± 0.84% recall, and 99.83 ± 0.43% F-score, outperforming the single well-known classification models. Our findings suggest that the superior performance of our proposed majority vote based EEG classification system has potential for stroke patients' hand rehabilitation.
Gui, L, Xiao, F, Zhou, Y, Shu, F & Yu, S 2020, 'Performance analysis of indoor localization based on channel state information ranging model', Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Mobihoc '20: The Twenty-first ACM International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, ACM, USA - Online, pp. 191-200.
View/Download from: Publisher's site
View description>>
Due to its robustness against the multi-path effect, the channel state information (CSI) of Orthogonal Frequency Division Multiplexing (OFDM) systems is expected to provide accurate distance measurement for indoor localization. However, we find that the original CSI ranging model is biased, so the model cannot be used to directly derive the Cramer-Rao lower bound (CRLB) of positioning error for a CSI-ranging based localization scheme. In this paper we first analyze the estimation bias of the original CSI ranging model according to the indoor wireless channel model. Then we propose a negative power summation ranging model which can be used as an unbiased ranging model for both Line-Of-Sight (LOS) and Non-LOS scenarios. Subsequently, based on the proposed model, we derive both the CRLB of ranging error and the CRLB of positioning error for the CSI-ranging localization scheme. Through simulation we validate the bias of the original ranging model and the approximately zero bias of our proposed ranging model. Through comprehensive experiments in different indoor scenarios, localization errors under different ranging models are compared to the CRLB; meanwhile our proposed ranging model is demonstrated to have better ranging and localization accuracy than the original ranging model.
Halkon, B, Cheong, I, Visser, G, Walker, P & Oberst, S 2020, 'An experimental assessment of torsional and package vibration in an industrial engine-compressor system', 12th International Conference on Vibrations in Rotating Machinery, Vibrations in Rotating Machinery, CRC Press, Liverpool, pp. 625-639.
View description>>
An experimental field vibration measurement campaign was conducted on an engine-compressor system. Torsional vibrations were measured using both a strain-gauge based technique at the engine-compressor coupling and a rotational laser vibrometer at the torsional vibration damper. Package vibration measurements were simultaneously captured using a number of accelerometers mounted at various locations on the engine and compressor casings. Findings from the study include the observation that the coupling/damper dominant order 1.5 torsional vibration level was higher at idle (c. 14.1 Hz) than at full speed (c. 19.1 Hz), likely as a result of coincidence with the first torsional natural frequency (c. 19–20 Hz); vibration remained within limits. The package vibration observed was in general within limits and displayed the expected behaviour when shaft speeds coincided with structural resonances. Increasing the system load was observed to increase the package vibration level in the engine but reduce it in the compressor, suspected to be the result of increased damping. Induced cylinder misfire scenarios were shown to lead to higher vibration levels. To the authors' knowledge, this is the first time that angular displacement, vibratory torque and package vibration have been simultaneously measured, analysed and reported in an industrial context. It is hoped that this contribution might therefore serve as a practical guide for vibration engineers wishing to embark on similar campaigns.
Huang, W, Zhou, S, Zhu, T, Liao, Y, Wu, C & Qiu, S 2020, 'Improving Laplace Mechanism of Differential Privacy by Personalized Sampling', 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), IEEE.
View/Download from: Publisher's site
Hughes, M, Garcia, J, Wilcox, F, Sazdov, R, Johnston, A & Bluff, A 2020, 'Immerse: Game Engines for Audio-Visual Art in the Future of Ubiquitous Mixed Reality', ICLI 2020 : International Conference on Live Interfaces, Trondheim.
Ikram, MA, Sharma, N, Raza, M & Hussain, FK 2020, 'Dynamic Ranking System of Cloud SaaS Based on Consumer Preferences - Find SaaS M2NFCP', Advances in Intelligent Systems and Computing, International Conference on Advanced Information Networking and Applications, Springer International Publishing, Japan, pp. 1000-1010.
View/Download from: Publisher's site
View description>>
Software as a Service (SaaS) is a type of software application that runs on a cloud computing infrastructure. SaaS has grown far more dramatically than the other cloud service delivery models (i.e. PaaS and IaaS) in terms of the number of available services. This rapid growth brings many challenges for consumers in selecting the optimal service. The aim of this article is to propose a ranking system for SaaS based on consumer preferences, called Find SaaS M2NFCP. The proposed ranking system measures the shortest distance to the minimum and maximum of the consumer's selected non-functional preferences. In addition, linguistic terms are taken into account to weight the most important non-functional preferences. The proposed system is evaluated against traditional SaaS ranking systems using data collected from online CRM SaaS, and achieves improved results.
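As an illustration of the ideal-point idea behind Find SaaS M2NFCP (ranking by distance to the per-criterion minimum and maximum), the sketch below implements a generic weighted distance-to-ideal ranking; the service names, scores and weights are invented, and the paper's actual normalization and linguistic weighting are not reproduced.

```python
import math

def rank_services(services, weights):
    """Rank services by closeness to an ideal point built from the
    per-criterion max (best) and min (worst) of the candidate scores.
    `services`: dict name -> list of normalized criterion scores.
    `weights`: per-criterion importance weights."""
    cols = list(zip(*services.values()))
    best = [max(c) for c in cols]    # ideal value per criterion
    worst = [min(c) for c in cols]   # anti-ideal value per criterion
    scores = {}
    for name, vals in services.items():
        d_best = math.sqrt(sum(w * (v - b) ** 2
                               for w, v, b in zip(weights, vals, best)))
        d_worst = math.sqrt(sum(w * (v - s) ** 2
                                for w, v, s in zip(weights, vals, worst)))
        # closeness coefficient in [0, 1]: 1 = at the ideal point
        scores[name] = d_worst / (d_best + d_worst)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical CRM SaaS candidates scored on three non-functional criteria.
saas = {"A": [0.9, 0.6, 0.8], "B": [0.5, 0.9, 0.4], "C": [0.7, 0.7, 0.7]}
ranking = rank_services(saas, weights=[0.5, 0.3, 0.2])
```

Weighting the squared differences before the square root makes the heavily weighted criteria dominate the distance, which mirrors the paper's emphasis on the consumer's most important preferences.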
Inibhunu, C & McGregor, C 2020, 'Application of TPRMine Method for Identification of Temporal Changes on Patients with COPD: A Case Study in Telehealth', 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Rochester, MN, USA, pp. 380-383.
View/Download from: Publisher's site
View description>>
Monitoring the vital status of an aging population, especially those with chronic diseases, can potentially reduce repeated emergency room visits and hospitalizations if patients and care providers are given information that helps them make informed decisions on an appropriate course of action. This can be enabled by temporal abstraction: deriving temporal patterns to understand the underlying temporal relationships in vital-status data collected from patients participating in telehealth programs, combined with other data sets to capture the complete patient flow. Such discovery can highlight when an elderly patient is at risk of an adverse event, information that can be used to provide appropriate care for the patient. This paper demonstrates the application of a method for deriving temporal patterns from patients' physiological data, thereby quantifying the many states a patient can transition through before, during and after an adverse event. With this approach, it is possible to assign patients vital scores based on their physiology.
Inibhunu, C & McGregor, C 2020, 'Identification of Temporal Changes on Patients at Risk of LONS with TPRMine: A Case Study in NICU', 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Rochester, MN, USA, pp. 33-36.
View/Download from: Publisher's site
View description>>
A neonatal intensive care unit (NICU) provides specialized care for preterm or ill term infants. The onset of many conditions they can develop is not obvious to physicians until patients are significantly affected, which can result in death. One example is neonatal infection, a common cause of death for premature infants. It remains a challenging task for clinicians to accurately diagnose the presence of bacteria in patients who frequently present with multiple comorbidities. There is potential for early detection of neonatal infections through timely analysis of patient physiological data, which can lead to improved health outcomes for critically ill patients. This paper demonstrates the application of a method for Temporal Pattern Recognition and Mining (TPRMine) in order to (a) understand whether continuous analysis of temporal changes in patient physiological data streams can lead to the discovery of pathophysiological patterns in patients at risk of neonatal sepsis and (b) use the resulting analysis to formulate and test hypotheses, facilitating statistical quantification of patients.
Islam, MR, Liu, S, Razzak, I, Kabir, MA, Wang, X, Tilocca, P & Xu, G 2020, 'MHIVis: Visual Analytics for Exploring Mental Illness of Policyholders in Life Insurance Industry', 2020 7th International Conference on Behavioural and Social Computing (BESC), 2020 7th International Conference on Behavioural and Social Computing (BESC), IEEE, Bournemouth, United Kingdom.
View/Download from: Publisher's site
View description>>
Stakeholders such as insurance managers (IMs) in the insurance industry are committed to alleviating policyholders' mental health concerns and improving the industry's mental health climate, yet lack the timely and actionable information to do so. Existing research has revealed that depression, anxiety, stress, etc., can provide deeper insights into policyholders' mental health states. However, such data remain unexplored for supporting stakeholder and government goals. In this paper, we design an interactive visualization system to provide deeper insight into policyholders' mental health states. Our study has three implications: (i) insurance data are potentially useful for understanding policyholders' mental health; (ii) a dashboard-like visual representation is helpful for stakeholders' decision-making; and (iii) some insight into the mental health of Australians has been deduced. Finally, we evaluate the utility of our visualization system by comparing its features with existing dashboards.
Jha, M, Richards, D, Porte, M & Atif, A 2020, 'Work-in-Progress—Virtual Agents in Teaching: A Study of Human Aspects', 2020 6th International Conference of the Immersive Learning Research Network (iLRN), 2020 6th International Conference of the Immersive Learning Research Network (iLRN), IEEE, Online, pp. 259-262.
View/Download from: Publisher's site
View description>>
Students require human intelligence and social interaction, in the form of academic assistance, at different times during their study. Their need to seek and find academic assistance varies and depends on many factors such as attendance mode, personal situation, semester timetables, and assessment due dates. Providing access to this expertise when it is needed, and to large numbers of students, is problematic. Virtual Agents (VAs) seek to provide a technology-enabled social element to encourage students and provide timely support for their learning. We have implemented four unit-specific VIRtual Teaching Assistants (VIRTAs) across two universities to answer students' questions about various aspects of the unit. In this paper, we present student usage patterns showing how many questions were asked and at what point in the semester, addressing students' desire to find assistance from VIRTA when required.
Jin, M, Xiong, J, Liu, B & Xiao, L 2020, 'On Channel Classification by Using DTMB Signal', 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Paris, France.
View/Download from: Publisher's site
View description>>
This paper proposes machine learning algorithms to classify channels using the Digital Terrestrial Multimedia Broadcast (DTMB) signal. Channel state information (CSI) usually reflects the environment a receiver is in. In this paper, the DTMB signal is used to extract CSI features, including the cross-correlation between the PN sequence in the frame header and the baseband DTMB signal, and the high-order cumulants (HOCs) of the DTMB signal. Machine learning algorithms, namely K-nearest neighbour (KNN), support vector machine (SVM), Random Forest and a neural network with one hidden layer, are employed to classify and recognize ten typical broadcasting channel models. Simulations illustrate that the accuracy of the scheme based on the PN correlation features outperforms that based on the HOC features; the adopted classification algorithms all show good accuracy; moreover, KNN has the lowest complexity of the four. The accuracy of KNN based on PN correlation is over 95% even when the SNR is below -5 dB, if the correlation gains of two neighbouring frame headers are combined.
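The KNN classification step the abstract mentions can be pictured with a minimal sketch; the two-dimensional "features" and channel labels below are toy stand-ins, not the paper's PN-correlation or HOC features extracted from real DTMB frames.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for channel features; real inputs would be CSI features
# (e.g. correlation-peak statistics) extracted from the DTMB frame header.
train = [((0.9, 0.1), "AWGN"), ((0.8, 0.2), "AWGN"),
         ((0.2, 0.7), "Rayleigh"), ((0.3, 0.8), "Rayleigh")]
label = knn_predict(train, query=(0.85, 0.15), k=3)
```

KNN's low complexity claim in the paper follows directly from this shape: prediction is a distance scan plus a vote, with no training phase at all.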
Kacprzyk, J, Merigó, JM, Nurmi, H & Zadrożny, S 2020, 'Multi-agent Systems and Voting: How Similar Are Voting Procedures', Springer International Publishing, pp. 172-184.
View/Download from: Publisher's site
Kalantar, B, Ueda, N, Al-Najjar, HAH, Saeidi, V, Gibril, MBA & Halin, AA 2020, 'A comparison between three conditioning factors dataset for landslide prediction in the Sajadrood catchment of Iran', ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Copernicus GmbH, pp. 625-632.
View/Download from: Publisher's site
View description>>
This study investigates the effectiveness of three datasets for the prediction of landslides in the Sajadrood catchment (Babol County, Mazandaran Province, Iran). The three datasets (D1, D2 and D3) are constructed from fourteen conditioning factors (CFs) obtained from Digital Elevation Model (DEM) derivatives, topography maps, land use maps and geological maps. Specifically, D1 consists of all 14 CFs, namely altitude, slope, aspect, topographic wetness index (TWI), terrain roughness index (TRI), distance to fault, distance to stream, distance to road, total curvature, profile curvature, plan curvature, land use, stream power index (SPI) and geology. D2 is a subset of D1 consisting of eight CFs, a reduction achieved using the Variance Inflation Factor, Gini Importance Indices and Chi-Square factor optimization methods. Dataset D3 includes only selected factors derived from the DEM. Three supervised classification algorithms were trained for landslide prediction, namely the Support Vector Machine (SVM), Logistic Regression (LR) and Artificial Neural Network (ANN). Experimental results indicate that D2 performed best, with the SVM producing the highest overall accuracy at 82.81%, followed by LR (81.71%) and ANN (80.18%). Extensive investigation of the factor optimization results indicates that the CFs distance to road, altitude and geology were significant contributors to the prediction results. Land use, slope, total, plan and profile curvature, and TRI, on the other hand, were deemed redundant. The analysis also revealed that sole reliance on Gini Indices could lead to inefficient optimization.
Khan, S & Hussain, FK 2020, 'A SOA Based SLA Negotiation and Formulation Architecture for Personalized Service Delivery in SDN', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 108-119.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2020. Supporting end-to-end personalized Quality of Service (QoS) delivery in existing network architectures is an ongoing issue. The Software Defined Networking (SDN) model has emerged in response to the limitations of traditional networks. Integrating SDN architecture with Service Oriented Architecture (SOA) brings a new concept for future service-oriented delivery of SDN services. Researchers from both academia and industry are working to resolve the QoS limitations of service delivery; however, most of the proposed solutions are application-oriented and unable to provide reliable personalized QoS delivery in future service-oriented SDN. This research proposes a reliable Service Level Agreement (SLA)-oriented service negotiation framework that provides reputation-based personalized service delivery and assists in QoS management in SDN for informed decision making. The potential benefits of the proposed framework in social, scientific and business respects are also discussed.
Khan, S & Hussain, FK 2020, 'Evaluation of SLA Negotiation for Personalized SDN Service Delivery', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 579-590.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. Ensuring quality of service (QoS) is crucial in a service-oriented business model. A service level agreement (SLA) is an important agreement between a consumer and a provider and is a key element in ensuring QoS. Service negotiation occurs in the initial stage of the SLA, where service requirements are agreed upon to avoid conflict situations. Guaranteeing QoS is one of the key challenges in software defined networking (SDN). Several intelligent solutions have been proposed; however, most of them are application-focused and unable to provide personalized and reliable QoS delivery in SDN. This paper presents a reputation data-driven SLA negotiation framework that provides personalized and reliable service delivery in SDN and assists in QoS management for informed decision making. In addition, a fuzzy inference system (FIS) is used to implement the framework, and the results are discussed in this paper.
Kitto, K, Sarathy, N, Gromov, A, Liu, M, Musial, K & Buckingham Shum, S 2020, 'Towards skills-based curriculum analytics', Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, LAK '20: 10th International Conference on Learning Analytics and Knowledge, ACM, Online, pp. 171-180.
View/Download from: Publisher's site
Kocaballi, AB, Coiera, E & Berkovsky, S 2020, 'Revisiting Habitability in Conversational Systems', Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20: CHI Conference on Human Factors in Computing Systems, ACM.
View/Download from: Publisher's site
Kocaballi, AB, Quiroz, JC, Laranjo, L, Rezazadegan, D, Kocielnik, R, Clark, L, Liao, QV, Park, SY, Moore, RJ & Miner, A 2020, 'Conversational Agents for Health and Wellbeing', Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20: CHI Conference on Human Factors in Computing Systems, ACM.
View/Download from: Publisher's site
Kutay, C, Szapiro, D, Garcia, J & Raffe, W 2020, 'Learning on country: A game-based approach towards preserving an Australian aboriginal language', ICCE 2020 - 28th International Conference on Computers in Education, Proceedings, International Conference of Innovation in Media and Visual Design, Atlantis Press, Tangerang, Indonesia, pp. 540-545.
View/Download from: Publisher's site
View description>>
Nginya naaa-da banga-mari dalang wingaru-dane. Ngyina diya-ma murri dalan-wa dalang-ra. This paper presents the design of a prototype 360-degree, interactive, Indigenous language learning game to support the reclamation of Indigenous languages through immersion in community oral traditions, expressed through visual and audio effects and the choreography of the characters within the game. The project is underpinned by a foundational acknowledgement that Aboriginal culture is held within the country specific to the language and embodied in that country's landscape. Learning within the game is based around themes of country, weather, local environment and kinship. Animation and design principles were applied from an embodied communication perspective to increase engagement and reinforce language learning principles, with Indigenous animation and design students bringing an Indigenous perspective to the gestural and design content of the game.
Li, K, Lu, J, Zuo, H & Zhang, G 2020, 'Multi-Source Domain Adaptation with Distribution Fusion and Relationship Extraction', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Transfer learning is gaining increasing attention due to its ability to leverage previously acquired knowledge to assist in completing a prediction task in a similar domain. While many existing transfer learning methods deal with the single-source, single-target problem without considering that a target domain may be similar to multiple source domains, this work proposes a multi-source domain adaptation method based on a deep neural network. Our method comprises common feature extraction, specific predictor learning and target predictor estimation. Common feature extraction explores the relationship between source domains and the target domain through distribution fusion and guarantees the strength of similar source domains during training, something which has not been well considered in existing works. Specific predictor learning trains source tasks with a cross-domain distribution constraint and a cross-domain predictor constraint to enhance the performance of each single source. Target predictor estimation employs relationship extraction and a selective strategy to improve the performance of the target task and to avoid negative transfer. Experiments on real-world visual datasets show that the performance of the proposed method is superior to other deep learning baselines.
Li, Y, Fan, X, Chen, L, Li, B, Yu, Z & Sisson, SA 2020, 'Recurrent Dirichlet Belief Networks for interpretable Dynamic Relational Data Modelling', Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}, International Joint Conferences on Artificial Intelligence Organization, pp. 2470-2476.
View/Download from: Publisher's site
View description>>
The Dirichlet Belief Network (DirBN) has recently been proposed as a promising approach to learning interpretable deep latent representations for objects. In this work, we leverage its interpretable modelling architecture and propose a deep dynamic probabilistic framework, the Recurrent Dirichlet Belief Network (Recurrent-DBN), to study interpretable hidden structures in dynamic relational data. The proposed Recurrent-DBN has the following merits: (1) it infers interpretable and organised hierarchical latent structures for objects within and across time steps; (2) it enables recurrent long-term temporal dependence modelling, which outperforms the first-order Markov descriptions used in most dynamic probabilistic frameworks; (3) its computational cost scales with the number of positive links only. In addition, we develop a new inference strategy, which first upward-and-backward propagates latent counts and then downward-and-forward samples variables, to enable efficient Gibbs sampling for the Recurrent-DBN. We apply the Recurrent-DBN to dynamic relational data problems. Extensive experimental results on real-world data validate the advantages of the Recurrent-DBN over state-of-the-art models in interpretable latent structure discovery and improved link prediction performance.
Liang, R, Zhang, Q, Lu, J, Zhang, G & Wang, J 2020, 'A cross-domain group recommender system with a generalized aggregation strategy', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC, Cologne, Germany, pp. 455-462.
View/Download from: Publisher's site
View description>>
Developing group recommender systems has become a vital requirement due to the prevalence of group activities. However, existing group recommender systems still suffer from the data sparsity problem because they rely on individual recommendation methods with a predefined aggregation strategy. To solve this problem, we propose a cross-domain group recommender system with a generalized aggregation strategy. The generalized aggregation strategy builds a group profile in the target domain with the help of individual preferences extracted from a source domain with sufficient data. By adding constraints between the individual preferences and the group profile, knowledge is transferred to assist the group recommendation task in the target domain. Experiments on a real-world dataset justify the effectiveness and rationality of the proposed cross-domain recommender system. The results show that the accuracy of group recommendation increases at different sparsity ratios with the help of individual data from the source domain.
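For contrast with the learned strategy proposed in this paper, a predefined aggregation strategy is easy to sketch; the member ratings below are invented, and a plain weighted average stands in for the fixed strategies the paper improves on by learning group profiles from a data-rich source domain.

```python
def aggregate_group_profile(member_prefs, weights=None):
    """Build a group preference vector from individual member
    preferences via a weighted average, a simple fixed strategy.
    `member_prefs`: list of per-member rating vectors (same length).
    `weights`: per-member weights; defaults to a plain average."""
    n = len(member_prefs)
    if weights is None:
        weights = [1.0 / n] * n          # plain average as the default
    dim = len(member_prefs[0])
    return [sum(w * prefs[d] for w, prefs in zip(weights, member_prefs))
            for d in range(dim)]

group = [[5.0, 1.0, 3.0],   # member 1's item ratings
         [3.0, 3.0, 3.0],   # member 2
         [1.0, 5.0, 3.0]]   # member 3
profile = aggregate_group_profile(group)   # each dimension averages to 3.0
```

A fixed average like this ignores how strongly each member cares about each item; the paper's point is precisely that such weights should come from data rather than being set in advance.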
Liao, W, Zhang, Q, Zhang, G & Lu, J 2020, 'Multi-source shared autoencoder for cross-domain recommendation', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC, Cologne, Germany, pp. 463-471.
View/Download from: Publisher's site
View description>>
Cross-domain recommendation has proved to be an effective solution to the data sparsity problem, which commonly exists in recommender systems. However, a challenging issue remains: how to transfer valuable knowledge from multiple source domains, and balance their effects on the target domain, under a sparse setting. To handle this issue, we develop a multi-source shared cross-domain recommender system, which extracts shared latent features from multiple domains to assist the recommendation task in a sparse target domain. This is achieved through a multiple-domain-shared autoencoder and an attentive module. We further propose an enhanced method that is specific to each user, so that it can provide personalized services. Experiments conducted on real-world datasets show that the proposed methods perform well and improve the accuracy of recommendations in the target domain even when the datasets are quite sparse.
Lin, C-T, Huang, K-C, Pal, NR, Cao, Z, Liu, Y-T, Fang, C-N, Hsieh, T-Y, Lin, Y-Y & Wu, S-L 2020, 'Adaptive Subspace Sampling for Class Imbalance Processing-Some clarifications, algorithm, and further investigation including applications to Brain Computer Interface', 2020 International Conference on Fuzzy Theory and Its Applications (iFUZZY), 2020 International Conference on Fuzzy Theory and Its Applications (iFUZZY), IEEE, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Kohonen's Adaptive Subspace Self-Organizing Map (ASSOM) learns several subspaces of the data, where each subspace represents some invariant characteristic of the data. To deal with the imbalanced classification problem, we earlier proposed a method for oversampling the minority class using Kohonen's ASSOM. This investigation extends that study, clarifies some issues related to our earlier work, provides the algorithm for generating the oversamples, applies the method to several benchmark data sets, and makes an application to a Brain Computer Interface (BCI) problem. First, we compare the performance of our method on benchmark data sets with several state-of-the-art methods. Finally, we apply the ASSOM-based technique to analyze a BCI-based application using electroencephalogram (EEG) datasets. Our results demonstrate the effectiveness of the ASSOM-based method in dealing with the imbalanced classification problem.
Lister, R 2020, 'On the cognitive development of the novice programmer', Proceedings of the 9th Computer Science Education Research Conference, CSERC '20: the 9th Computer Science Education Research Conference, ACM.
View/Download from: Publisher's site
Liu, A, Zhang, G, Wang, K & Lu, J 2020, 'Fast Switch Naïve Bayes to Avoid Redundant Update for Concept Drift Learning', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, pp. 1-7.
View/Download from: Publisher's site
View description>>
In data stream mining, concept drift may cause the predictions given by machine learning models to become less accurate as time passes. Existing concept drift detection and adaptation methods are built on a framework that buffers new samples if a drift-warning level is triggered and retrains a new model if a drift-alarm level is triggered. However, these methods neglect the problem that the performance of a learning model can be more sensitive to the amount of training data than to the concept drift itself. In other words, a retrained model built on very few data instances could be even worse than the old model trained before the drift. To elaborate on and address this problem, we propose a fast switch Naïve Bayes model (fsNB) for concept drift detection and adaptation. The intuition is to apply the idea of following the leader in online learning. We maintain a sliding and an incremental Naïve Bayes classifier; if the sliding one outperforms the incremental one, the model reports a drift. The experimental evaluation shows the advantages of fsNB and demonstrates that retraining may not be the best option for a marginal drift.
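The follow-the-leader intuition can be sketched independently of the Naïve Bayes details: track the recent accuracy of a window-trained model against an incrementally trained one, and flag a drift only when the former clearly leads. The window size, margin and simulated correctness streams below are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

class FastSwitchDetector:
    """Sketch of the follow-the-leader idea behind fsNB: compare a
    model trained on all data (incremental) with one trained on a
    recent window (sliding), and report drift when the sliding
    model's recent accuracy clearly overtakes the incremental one's."""
    def __init__(self, window=50, margin=0.1):
        self.inc_hits = deque(maxlen=window)  # incremental model correctness
        self.sld_hits = deque(maxlen=window)  # sliding model correctness
        self.margin = margin

    def update(self, inc_correct, sld_correct):
        self.inc_hits.append(inc_correct)
        self.sld_hits.append(sld_correct)
        if len(self.inc_hits) < self.inc_hits.maxlen:
            return False                      # not enough evidence yet
        inc_acc = sum(self.inc_hits) / len(self.inc_hits)
        sld_acc = sum(self.sld_hits) / len(self.sld_hits)
        return sld_acc - inc_acc > self.margin  # drift: sliding model leads

detector = FastSwitchDetector(window=20, margin=0.1)
# Simulated stream: before step 30 both models predict correctly;
# after the drift only the sliding model keeps up.
flags = [detector.update(inc_correct=(t < 30), sld_correct=True)
         for t in range(60)]
```

Because the comparison needs the sliding model's lead to exceed a margin over a full window, a single bad prediction never triggers a switch, which matches the paper's argument against retraining on marginal drifts.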
Liu, C, Zowghi, D & Talaei-Khoei, A 2020, 'Empirical Evaluation of the Influence of EMR Alignment to Care Processes on Data Completeness', Proceedings of the Annual Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences.
View/Download from: Publisher's site
Liu, F, Xu, W, Lu, J, Zhang, G, Gretton, A & Sutherland, DJ 2020, 'Learning deep kernels for non-parametric two-sample tests', 37th International Conference on Machine Learning, ICML 2020, International Conference on Machine Learning, MLR, Virtual, pp. 6272-6282.
View description>>
We propose a class of kernel-based two-sample tests, which aim to determine whether two sets of samples are drawn from the same distribution. Our tests are constructed from kernels parameterized by deep neural nets, trained to maximize test power. These tests adapt to variations in distribution smoothness and shape over space, and are especially suited to high dimensions and complex data. By contrast, the simpler kernels used in prior kernel testing work are spatially homogeneous, and adaptive only in lengthscale. We explain how this scheme includes popular classifier-based two-sample tests as a special case, but improves on them in general. We provide the first proof of consistency for the proposed adaptation method, which applies both to kernels on deep features and to simpler radial basis kernels or multiple kernel learning. In experiments, we establish the superior performance of our deep kernels in hypothesis testing on benchmark and real-world data.
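The (biased) maximum mean discrepancy statistic that such kernel two-sample tests build on is compact enough to sketch; the snippet below uses a fixed RBF kernel on scalar data rather than the trained deep kernels proposed in the paper, and the sample sizes and bandwidth are illustrative choices.

```python
import math
import random

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]. Near zero when the two
    samples come from the same distribution, large otherwise."""
    kxx = sum(rbf(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(0)
same = [random.gauss(0, 1) for _ in range(200)]
shifted = [random.gauss(2, 1) for _ in range(200)]
near_zero = mmd_squared(same, [random.gauss(0, 1) for _ in range(200)])
large = mmd_squared(same, shifted)
```

The paper's contribution is to replace the fixed `sigma` here with a kernel parameterized by a deep network and to choose its parameters by maximizing test power; the statistic itself keeps this same three-term form.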
Liu, F, Zhang, G & Lu, J 2020, 'A Novel Non-parametric Two-Sample Test on Imprecise Observations', 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Glasgow, United Kingdom, pp. 1-6.
View/Download from: Publisher's site
View description>>
In kernel non-parametric two-sample testing, we aim to determine whether two sets of precise observations (i.e., samples) are from the same distribution, based on a selected kernel. In the real world, however, precise observations may be unavailable. For example, readings on analogue measurement equipment are not precise numbers but intervals, since only a finite number of decimals is available. Hence, we consider a new and more realistic problem setting: two-sample testing on imprecise observations. We show that the test power of existing kernel two-sample tests drops significantly if they do not account for the vagueness of imprecise observations, and to this end, we propose a fuzzy-based maximum mean discrepancy (F-MMD), a powerful two-sample test on imprecise observations. F-MMD is based on a novel fuzzy-based kernel function that can measure the discrepancy between two imprecise observations. This kernel function accounts for the vagueness of imprecise observations, and its parameters are optimized to maximize the approximate test power of F-MMD. Experiments demonstrate that F-MMD significantly outperforms competitive two-sample test methods when facing imprecise observations.
Liu, Z, Yao, L, Bai, L, Wang, X & Wang, C 2020, 'Spectrum-Guided Adversarial Disparity Learning', Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, San Diego, CA, United States.
View/Download from: Publisher's site
Liu, Z, Yao, L, Wang, X, Bai, L & An, J 2020, 'Are You A Risk Taker? Adversarial Learning of Asymmetric Cross-Domain Alignment for Risk Tolerance Prediction', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK.
View/Download from: Publisher's site
Madhuri, M, Gill, AQ & Khan, HU 2020, 'IoT-Enabled Smart Child Safety Digital System Architecture', ICSC, 2020 IEEE 14th International Conference on Semantic Computing, IEEE, San Diego, CA, USA, pp. 166-169.
View/Download from: Publisher's site
View description>>
The safety of a child at a large public event is a major concern for event organizers and parents. This paper addresses this important concern and proposes an architecture model for an IoT-enabled smart child safety tracking digital system. The proposed digital system architecture integrates Cloud, Mobile and GPS technology to precisely locate the geographical position of a child on an event map. The architecture model describes the people, information, process and technology architecture elements, and their relationships, for this complex IoT-enabled smart child safety tracking digital system. It can be used as a reference or guide to assist in the safe, architecture-driven development of various child tracking digital systems for different public events.
Mathieson, L & Moscato, P 2020, 'The Unexpected Virtue of Problem Reductions or How to Solve Problems Being Lazy but Wise', 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE.
View/Download from: Publisher's site
McGregor, C, Inibhunu, C, Glass, J, Doyle, I, Gates, A, Madill, J & Pugh, JE 2020, 'Health Analytics as a Service with Artemis Cloud: Service Availability', 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) in conjunction with the 43rd Annual Conference of the Canadian Medical and Biological Engineering Society, IEEE, United States, pp. 5644-5648.
View/Download from: Publisher's site
View description>>
Critical care units internationally contain medical devices that generate Big Data in the form of high-speed physiological data streams. Great opportunities exist for systematic and reliable approaches to the analysis of high-speed physiological data for clinical decision support. This paper presents the instantiation of a Big Data analytics based Health Analytics as a Service model. The availability results of deploying two instances of Artemis Cloud to support two neonatal ICUs (NICUs) in Ontario, Canada are presented.
Mittal, DA, Liu, S & Xu, G 2020, 'Electricity Price Forecasting using Convolution and LSTM Models', 2020 7th International Conference on Behavioural and Social Computing (BESC), 2020 7th International Conference on Behavioural and Social Computing (BESC), IEEE, pp. 1-4.
View/Download from: Publisher's site
View description>>
The electricity market operates on a demand and supply strategy and is prone to random fluctuations that directly impact profit. Forecasting demand therefore becomes very important for mitigating the consequences of price dynamics. This paper proposes a deep learning model using Long Short-Term Memory (LSTM) and Convolutional Neural Networks to forecast future electricity prices on the Australian electricity market, and compares it with other state-of-the-art models. Selected evaluation metrics show that our model outperforms the existing models for electricity price prediction.
Mohammed, A, Hawryszkiewycz, I & Kozanoglu, DC 2020, 'Enabling Strategic Agility through Dynamic Cloud Capability', ACIS 2020 Proceedings - 31st Australasian Conference on Information Systems, Australasian Conference on Information Systems, Wellington, NZ, pp. 1-7.
View description>>
Organisations and their leadership teams are confronted with the challenges of the emerging digital economy, fast-changing innovations, globalisation and pandemics such as Covid-19. Organisations need to alter their business models to counter these effects, hence the need to be strategically agile. Among the most prominent solutions proposed by various authors is a set of capabilities, such as strategic sensitivity, resource fluidity and leadership unity in organisational settings. In this research, we propose the Dynamic Cloud Capability (DCC) Framework, which aims to help organisations realize an IT/IS strategy enabling them to improve Strategic Agility. DCC builds upon Dynamic IT Capability theory. We will be using a quantitative survey-based approach involving IT SMEs in Australia to investigate the effect of DCC on Resource Fluidity and Strategic Agility. This is a research-in-progress article, which outlines the literature review, theoretical underpinning, research methodology and expected results.
Mols, I, van den Hoven, E & Eggen, B 2020, 'Everyday Life Reflection', Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '20: Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Sydney, AUSTRALIA, pp. 67-79.
View/Download from: Publisher's site
Mughal, F, Raffe, W & Garcia, J 2020, 'Emotion Recognition Techniques for Geriatric Users: A Snapshot', 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, pp. 1-8.
View/Download from: Publisher's site
View description>>
Many elderly people prefer their independence; however, due to cognitive impairment or other age-related ailments they cannot necessarily be left on their own. In order to aid the elderly in living independently, we consider the use of emotion recognition as a relatively autonomous monitoring approach for geriatric people. An analysis and comparison of various emotion recognition studies has shown that close to none of these studies have taken age-related cognitive decline into account, which comes with various issues. The aim of this paper is to provide an overview of current emotion recognition techniques and why they may not necessarily be suitable or feasible for geriatric people. This analysis serves as a foundation for a proposed conceptual framework toward an autonomous monitoring system for geriatric people which could minimize the need for explicit user input or interaction while still monitoring the geriatric person(s)' well-being.
Naderpour, M, Rizeei, HM & Ramezani, F 2020, 'Wildfire Prediction: Handling Uncertainties Using Integrated Bayesian Networks and Fuzzy Logic', 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Glasgow, United Kingdom.
View/Download from: Publisher's site
View description>>
Wildfire is one of the most frequent natural hazards across the globe, one which has cast a malevolent shroud over many communities in recent years, causing significant risk to human lives, infrastructure, and property. Wildfires are hydrogeological events which are bound to escalate, especially due to climate change. They are different from other natural hazards as they are mainly triggered by human interventions rather than natural triggers. Wildfire risk management is a complex process with many uncertainties in the assessment, fire behavior and spread modelling, and decision making. To predict wildfires, sophisticated temporal geospatial methods are required. This paper develops a wildfire probability prediction method that draws on the capabilities of Bayesian networks and fuzzy logic to handle uncertainties and update probabilities as new data become available. The model takes into account the data from a geographic information system (GIS) for a specific area at the micro level to estimate the wildfire probability, and is able to update the probability in response to any planned or unplanned changes in the area. Therefore, the proposed method can feed into future macro and micro risk-based decision-making situations in wildfire-prone areas. The method is evaluated through a sensitivity analysis and its performance is investigated through a case study in New South Wales (NSW), Australia.
Nahar, K & Gill, AQ 1970, 'A Review Towards the Development of Ontology Based Identity and Access Management Metamodel.', AINA Workshops, International Conference on Advanced Information Networking and Applications, Springer, Caserta, Italy, pp. 223-232.
View/Download from: Publisher's site
View description>>
Building an identity and access management (IAM) system that satisfies business needs and can evolve over time with the ever-changing business environment is always a challenging endeavour. This calls for an ontology-based adaptive IAM metamodel, which can adapt to, and be instantiated for, different situations. To achieve this objective, the first step is to identify the relevant elements and their relationships for developing a detailed IAM ontology. Thus, this paper mainly focuses on the review of the available key models as a starting point for the identification of relevant elements and relationships for the development of an adaptable yet generic metamodel for our industry research partner. This paper uses the graph modelling approach to present the identified elements and their relationships as an ontology, which can be used for developing the metamodel.
Naji, M, Braytee, A, Anaissi, A, Sianaki, OA & Al-Ani, A 2019, 'Optimizing the Waiting Time for Airport Security Screening Using Multiple Queues and Servers', Complex, Intelligent, and Software Intensive Systems Proceedings of the 13th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS-2019), International Conference on Complex, Intelligent and Software Intensive Systems, Springer International Publishing, Australia, pp. 496-507.
View/Download from: Publisher's site
View description>>
Airport security screening processes are essential to ensure the safety of passengers and the aviation industry. Security at airports has improved noticeably in recent years through the utilisation of state-of-the-art technologies and highly trained security officers. However, maintaining a high level of security can be costly to operate and implement and can cause delays for passengers and airlines. In optimising a security process it is essential to strike a balance between time delays, security and reduced operation cost. This paper uses queueing theory as a method to study the impact of queue formation and the size of the security area on the average waiting time for the case of multi-lane parallel servers. An experiment is conducted to validate the proposed approach.
Nalamati, M, Sharma, N, Saqib, M & Blumenstein, M 2020, 'Automated Monitoring in Maritime Video Surveillance System', 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
Maritime surveillance for intruders/illegal activities requires monitoring a large area of the coastline. This task, being manually exhausting, would benefit immensely from the application of object detection techniques to surveillance videos. However, object detection models trained on general object datasets cannot be expected to give the best performance in this scenario, as marine vessels are only a small subset of these huge datasets, and such models do not classify the specific type of sea vehicle. Hence, their benchmarks are not appropriate for maritime surveillance. Some studies have applied Convolutional Neural Networks (CNNs) to ship/boat detection on private and publicly available sea vessel datasets. This paper presents a summary of the benchmarks so far and presents our experiments with the latest object detection techniques on a combined marine vessels dataset. A survey of the currently available datasets is also given. Results of our experiments in terms of mean Average Precision (mAP) and Frames Per Second (FPS) are presented.
Naseem, U & Musial, K 2019, 'DICE: Deep intelligent contextual embedding for twitter sentiment analysis', Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, International Conference on Document Analysis and Recognition, IEEE, Sydney, Australia, pp. 953-958.
View/Download from: Publisher's site
View description>>
© 2019 IEEE. The sentiment analysis of social media-based short text (e.g., Twitter messages) is very valuable for many good reasons, and is explored increasingly in different communities such as text analysis, social media analysis, and recommendation. However, it is challenging because tweet-like social media text is often short, informal and noisy, and involves language ambiguity such as polysemy. Existing sentiment analysis approaches are mainly designed for documents and clean textual data. Accordingly, we propose a Deep Intelligent Contextual Embedding (DICE), which enhances tweet quality by handling noise within contexts, and then integrates four embeddings to capture polysemy in context, semantics, syntax, and sentiment knowledge of words in a tweet. DICE is then fed to a Bi-directional Long Short Term Memory (BiLSTM) network with attention to determine the sentiment of a tweet. The experimental results show that our model outperforms several baselines of both classic classifiers and combinations of various word embedding models in the sentiment analysis of airline-related tweets.
Naseem, U, Musial, K, Eklund, P & Prasad, M 2020, 'Biomedical Named-Entity Recognition by Hierarchically Fusing BioBERT Representations and Deep Contextual-Level Word-Embedding', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Text mining in the biomedical domain is increasingly important as the volume of biomedical documents increases. Thanks to advances in natural language processing (NLP), extracting valuable information from the biomedical literature is gaining popularity among researchers, and deep learning has enabled the development of effective biomedical text mining models. However, directly applying advancements in NLP to biomedical sources often yields unsatisfactory results, due to a word distribution drift from general-language corpora to specific biomedical corpora, and this drift introduces linguistic ambiguities. To overcome these challenges, this paper presents a novel method for biomedical named-entity recognition (BioNER) that hierarchically fuses representations from BioBERT, which is trained on biomedical corpora, with deep contextual-level word embeddings to handle the linguistic challenges within the biomedical literature. The proposed text representation is then fed to an attention-based Bi-directional Long Short Term Memory (BiLSTM) network with a Conditional Random Field (CRF) for the BioNER task. The experimental analysis shows that our proposed end-to-end methodology outperforms existing state-of-the-art methods for the BioNER task.
Naseem, U, Razzak, I, Eklund, P & Musial, K 2020, 'Towards Improved Deep Contextual Embedding for the identification of Irony and Sarcasm', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK.
View/Download from: Publisher's site
View description>>
Humans use tonal stress and gestural cues to reveal negative feelings that are expressed ironically using positive or intensified positive words when communicating vocally. However, in textual data, like posts on social media, cues on sentiment valence are absent, thus making it challenging to identify the true meaning of utterances, even for the human reader. For a given post, an intelligent natural language processing system should be able to identify whether the post is ironic/sarcastic or not. Recent work confirms the difficulty of detecting sarcastic/ironic posts. To overcome the challenges involved in the identification of sentiment valence, this paper presents the identification of irony and sarcasm in social media posts through a transformer-based deep intelligent contextual embedding, T-DICE, which reduces noise within contexts. It resolves language ambiguities such as polysemy, semantics, syntax, and word sentiments by integrating embeddings. T-DICE is then forwarded to an attention-based Bidirectional Long Short Term Memory (BiLSTM) network to determine the sentiment of a post. We report the classification performance of the proposed network on benchmark datasets for #irony and #sarcasm. Results demonstrate that our approach outperforms existing state-of-the-art methods.
Ngo, HH, Guo, W, Ng, HY, Mannina, G & Pandey, A 1970, 'Preface', Conferences in Research and Practice in Information Technology Series, Elsevier, pp. xxi-xxii.
View/Download from: Publisher's site
Nguyen, T-D, Maszczyk, T, Musial, K, Zöller, M-A & Gabrys, B 2020, 'AVATAR - Machine Learning Pipeline Evaluation Using Surrogate Model', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), IDA 2020: Advances in Intelligent Data Analysis XVIII, Springer International Publishing, Konstanz, Germany, pp. 352-365.
View/Download from: Publisher's site
View description>>
© 2020, The Author(s). The evaluation of machine learning (ML) pipelines is essential during automatic ML pipeline composition and optimisation. Previous methods, such as Bayesian-based and genetic-based optimisation, which are implemented in Auto-Weka, Auto-sklearn and TPOT, evaluate pipelines by executing them. Therefore, the pipeline composition and optimisation of these methods requires a tremendous amount of time, which prevents them from exploring complex pipelines to find better predictive models. To further explore this research challenge, we have conducted experiments showing that many of the generated pipelines are invalid, and that it is unnecessary to execute them to find out whether they are good pipelines. To address this issue, we propose a novel method to evaluate the validity of ML pipelines using a surrogate model (AVATAR). AVATAR accelerates automatic ML pipeline composition and optimisation by quickly discarding invalid pipelines. Our experiments show that AVATAR is more efficient at evaluating complex pipelines than traditional evaluation approaches that require their execution.
Olszak, C, Zurada, J & Kozanoglu, D 1970, 'Introduction to the Minitrack on Business Intelligence and Big Data for Innovative and Sustainable Development of Organizations', Proceedings of the Annual Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, pp. 216-217.
View/Download from: Publisher's site
Orth, D, Thurgood, C & van den Hoven, E 2020, 'Embodying Meaningful Digital Media', Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '20: Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Sydney NSW Australia, pp. 81-94.
View/Download from: Publisher's site
View description>>
© 2020 Association for Computing Machinery. Technological products have become central to the ways in which many people communicate with others, conduct business and spend their leisure time. Despite their prevalence and significance in people's lives, these devices are often perceived to be highly replaceable. From a sustainability perspective, there is value in creating technological products with meaning directly associated with their materiality to reduce the rate of product consumption. We set out to explore the potential for design to promote the formation of product attachment by creating technological devices with meaningful materiality, closely integrating the physical form with the significance of its digital contents. We used the life stories and ongoing input of our intended user as inspiration for the creation of Melo, a bespoke music player. The evaluation and critical reflection of our design process and resulting artefact are used to propose a design strategy for promoting product attachment within the growing sector of technological devices.
Ou, L, Zeng, G, Chang, Y-C & Lin, C-T 2020, 'Multi-Objective Vibration-Based Particle-Swarm-Optimized Fuzzy Controller With Application to Boundary-Following of Mobile-Robot Simulation Environment', 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Toronto, ON, Canada, pp. 1893-1898.
View/Download from: Publisher's site
View description>>
This paper presents a multi-objective vibration-based particle-swarm-optimization (MO-VBPSO) algorithm with enhanced exploration ability and convergence performance for training a fuzzy controller (FC) to achieve robot control. MO-VBPSO applies a reference-point-based leader selection scheme that assigns leaders to guide the MO-PSO's search for the optimal parameters of the FC. In addition, the MO-VBPSO framework is integrated with a vibration factor, inspired by the amplitude of the Fireworks Algorithm (FWA), to strengthen the exploration ability and resolve the local-minima issue. The evaluation of MO-VBPSO focuses on the effect of the vibration factor by applying it to training a mobile robot in a simulation environment. The evaluation results are discussed with respect to exploration ability, convergence performance, and performance stability. Experimental results reveal that the proposed MO-VBPSO lifts the performance of robot training significantly.
Panta, A, Khushi, M, Naseem, U, Kennedy, P & Catchpoole, D 1970, 'Classification of Neuroblastoma Histopathological Images Using Machine Learning', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 3-14.
View/Download from: Publisher's site
View description>>
Neuroblastoma is the most common cancer in young children, accounting for over 15% of childhood cancer deaths. Identification of the class of neuroblastoma depends on histopathological classification performed by pathologists, which is considered the gold standard. However, due to the heterogeneous nature of neuroblast tumours, the human eye can miss critical visual features in histopathology. Hence, the use of computer-based models can assist pathologists in classification through mathematical analysis. There is no publicly available dataset containing neuroblastoma histopathological images, so this study uses a dataset gathered from The Tumour Bank at Kids Research at The Children’s Hospital at Westmead, which has been used in previous research. Previous work on this dataset has shown a maximum accuracy of 84%. One main issue that previous research fails to address is the class-imbalance problem in the dataset, as one class represents over 50% of the samples. This study explores a range of feature extraction, under-sampling and over-sampling techniques to improve classification accuracy. Using these methods, this study was able to achieve accuracy of over 90% on the dataset. Moreover, the most significant improvements observed in this study were in the minority classes, where previous work failed to achieve a high level of classification accuracy. In doing so, this study shows the importance of effective management of available data for any application of machine learning.
Patibanda, R, Semertzidis, NA, Vranic-Peters, M, La Delfa, JN, Andres, J, Baytaş, MA, Martin-Niedecken, AL, Strohmeier, P, Fruchard, B, Leigh, S-W, Mekler, ED, Nanayakkara, S, Wiemeyer, J, Berthouze, N, Kunze, K, Rikakis, T, Kelliher, A, Warwick, K, van den Hoven, E, Mueller, FF & Mann, S 2020, 'Motor Memory in HCI', Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20: CHI Conference on Human Factors in Computing Systems, ACM, Honolulu, HI.
View/Download from: Publisher's site
View description>>
There is mounting evidence acknowledging that embodiment is foundational to cognition. In HCI, this understanding has been incorporated in concepts like embodied interaction, bodily play, and natural user-interfaces. However, while embodied cognition suggests a strong connection between motor activity and memory, we find the design of technological systems that target this connection to be largely overlooked. Considering this, we are provided with an opportunity to extend human capabilities through augmenting motor memory. Augmentation of motor memory is now possible with the advent of new and emerging technologies including neuromodulation, electric stimulation, brain-computer interfaces, and adaptive intelligent systems. This workshop aims to explore the possibility of augmenting motor memory using these and other technologies. In doing so, we stand to benefit not only from new technologies and interactions, but also a means to further study cognition.
Pelchen, T, Mathieson, L & Lister, R 2020, 'On the Evidence for a Learning Hierarchy in Data Structures Exams', Proceedings of the Twenty-Second Australasian Computing Education Conference, ACE'20: Twenty-Second Australasian Computing Education Conference, ACM, pp. 122-131.
View/Download from: Publisher's site
View description>>
© 2020 Association for Computing Machinery. Several previous research studies have found a relationship between the ability of novices to trace and explain code, and the ability to write code. Harrington and Cheng refer to that relationship as the Learning Hierarchy. However, almost all of those studies examined students at the end of their first semester of learning to program (i.e. CS1). This paper is only the third to describe a study of 'explain in plain English' questions on students at the end of an introductory data structures course. The preceding two papers reached contradictory conclusions. Corney et al. presented results consistent with the Learning Hierarchy identified in the CS1 studies. However, Harrington and Cheng presented results for data structures students suggesting that the hierarchy reversed by the time students had progressed to the level of learning about data structures; that is, tracing and explaining were skills that followed writing. In our study of data structures students, we present results that are consistent with the Learning Hierarchy derived from the CS1 students. We believe that the reversal identified by Harrington and Cheng can occur, but only as a consequence of a mismatch in the relative difficulty of tracing, explaining and writing questions.
Perdomo, W, Prior, J & Leaney, J 1970, 'How do Colombian software companies evaluate software product quality?', CEUR Workshop Proceedings.
View description>>
Software developers confuse product quality with process quality, leading them to think they are measuring product quality when they are not. This is the main finding of our study of software developers in young companies in Colombia. Software product quality (SPQ) reflects two perspectives: conformance to specifications, and satisfying expectations when it is used under specified conditions. Measuring product quality still remains a problem for software development companies in relation to factors such as cost, effort, time, and competitiveness. There are few studies that show the current state of SPQ in companies, how companies evaluate product quality, and which measures they use to develop and launch products that meet high-quality criteria. This paper presents a study of SPQ in seven young software development companies in a developing country. We used a qualitative research approach to understand, through their experiences and knowledge, how 20 employees—developers, testers, and project managers—and their companies evaluate SPQ, and which measures they apply in their companies. Our results demonstrate that software process quality is better understood, and applied, by these software companies than SPQ. A greater difficulty is that most study participants ‘overlaid’ the idea of product quality with process quality, i.e. they talked about product quality as if it were process quality. These findings have implications for companies that wish to increase competitiveness and productivity, as they must develop a working knowledge of SPQ that is not confused with software process quality. It also has implications for educators, to ensure that the distinction between process and product quality is explicitly taught.
Perera, D, Wang, Y-K, Lin, C-T, Zheng, J, Nguyen, HT & Chai, R 2020, 'Statistical Analysis of Brain Connectivity Estimators during Distracted Driving', 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) in conjunction with the 43rd Annual Conference of the Canadian Medical and Biological Engineering Society, IEEE, United States, pp. 3208-3211.
View/Download from: Publisher's site
View description>>
This paper presents a comparison of brain connectivity estimators for distracted and non-distracted drivers based on statistical analysis. Twelve healthy volunteers with more than one year of driving experience participated in this experiment. Lane-keeping tasks and a Math problem-solving task were introduced in the experiment, and EEG (electroencephalogram) was used to record the brain waves. Granger-Geweke causality (GGC), directed transfer function (DTF) and partial directed coherence (PDC) brain connectivity estimation methods were used in the brain connectivity analysis. A correlation test and a Student's t-test were conducted on the connectivity matrices. Results show a significant difference between the means of the distracted and non-distracted drivers' brain connectivity matrices. Student's t-tests for the GGC and DTF methods show p-values below 0.05, with correlation coefficients varying from 0.62 to 0.38. The PDC connectivity estimation method does not show a significant difference between the connectivity matrix means unless the lane-keeping task is compared with the normal driving task. Furthermore, it shows a strong positive correlation between the connectivity matrices.
Perez-Romero, ME, Alfaro-Garcia, VG, Merigo, JM & Flores-Romero, MB 2020, 'Covariance in Ordered Weighted Logarithm Aggregation Operators', 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE.
View/Download from: Publisher's site
Pillai, AG, Ahmadpour, N, Yoo, S, Kocaballi, AB, Pedell, S, Sermuga Pandian, VP & Suleri, S 2020, 'Communicate, Critique and Co-create (CCC) Future Technologies through Design Fictions in VR Environment', Companion Publication of the 2020 ACM Designing Interactive Systems Conference, DIS '20: Designing Interactive Systems Conference 2020, ACM, pp. 413-416.
View/Download from: Publisher's site
View description>>
Design fiction enables HCI and design researchers to co-create, explore and speculate about the future. It is growing in popularity given the increasing complexity of emerging HCI systems and innovations. Diegetic props (like sound, videos, images) are sometimes used in design fiction to blur the lines between imagination and reality. These props enable designers to be empathetic and feel present in the fiction as they investigate the complexity of the technologies explored within the fiction, critique these technologies and think about their consequences. With a higher level of immersion and sense of embodiment, Virtual Reality (VR) can be a powerful tool for mediating and creating design fiction. However, there are few examples of VR as a platform for design fiction. This workshop aims to investigate new opportunities for communicating, critiquing and co-creating design fiction narratives in immersive VR environments.
Potena, C, Carpio, RF, Pietroni, N, Maiolini, J, Ulivi, G, Garone, E & Gasparri, A 2020, 'Suckers Emission Detection and Volume Estimation for the Precision Farming of Hazelnut Orchards', CCTA, 2020 IEEE Conference on Control Technology and Applications (CCTA), IEEE, Montreal, QC, Canada, pp. 285-290.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. In this work, inspired by the needs of the H2020 European Project PANTHEON (http://www.project-pantheon.eu), we address the hazelnut sucker detection and canopy volume estimation problem on a per-plant basis. Sucker control is an essential but challenging practice in agriculture, given the fact that suckers, i.e., shoots that grow from the tree roots, compete with the tree itself for water and nutrients. This research is motivated by the observation that in current best practice, sucker control is carried out by applying a non-calibrated amount of chemical inputs to each tree. Indeed, a proper sucker detection and estimation algorithm would represent the enabling technology for an environmentally friendly sucker control approach where the amount of herbicide could be properly calibrated according to the needs of each individual plant. In this work, we propose an end-to-end algorithm for detecting the presence of suckers and for estimating their canopy. First, a sparse point-cloud-based representation of the suckers is detected; then an approximated canopy estimation is achieved by means of a tailored meshing strategy that performs leaf-based clustering and an iterative connection of clusters. The volume is then estimated from the resulting mesh. Preliminary real-world experiments are provided to corroborate the effectiveness of the proposed canopy estimation strategy.
Prestigiacomo, R, Hadgraft, R, Hunter, J, Locker, L, Knight, S, van den Hoven, E & Martinez-Maldonado, R 2020, 'Learning-centred translucence', Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, LAK '20: 10th International Conference on Learning Analytics and Knowledge, ACM, Frankfurt, Germany, pp. 100-105.
View/Download from: Publisher's site
View description>>
© 2020 Association for Computing Machinery. Teachers are increasingly being encouraged to embrace evidence-based practices. Learning analytics (LA) offer great promise in supporting these by providing evidence for teachers and learners to make informed decisions and transform the educational experience. However, LA limitations and their uptake by educators are coming under critical scrutiny. This is in part due to the lack of involvement of teachers and learners in the design of LA tools. In this paper, we propose a human-centred approach to generate understanding of teachers' data needs through the lens of three key principles of translucence: visibility, awareness and accountability. We illustrate our approach through a participatory design sprint to identify how teachers talk about classroom data. We describe teachers' perspectives on the evidence they need for making better-informed decisions and discuss the implications of our approach for the design of human-centred LA in the next years.
Prior, J & Leaney, J 2020, 'Software Quality and its Entanglements in Practice', Ethnographic Praxis in Industry Conference, Wiley Blackwell, Melbourne, Australia, pp. 163-176.
View description>>
Effective software quality assurance in large-scale, complex software systems is one of the most vexed issues in software engineering, and it is becoming ever more challenging. Software quality and its assurance is part of software development practice, a messy, complicated and constantly shifting human endeavor. What emerged from our immersive study in a large Australian software development company is that software quality in practice is inextricably entangled with the phenomena of productivity, time, infrastructure and human practice. This ethnographic insight, made visible to the organization and its developers via the rich picture and the concept of entanglements, built their trust in our work and expertise. This led to us being invited to work with the software development teams on areas for change and improvement, and to our moving to a participatory and leading role in organizational change.
Prysyazhnyuk, A & McGregor, C 2020, 'A wholistic approach to assessment of adaptation and resilience during spaceflight', Proceedings of the International Astronautical Congress, IAC.
View description>>
Human performance within the context of extreme environments, both terrestrially and in outer space, continues to lead the frontier of new physiological discoveries, further enhancing the knowledge on limitations of the human mind and body systems, the role and activity of adaptation mechanisms, as well as assessment and development of resilience strategies. The acquired knowledge informs the development of innovative prognostic, diagnostic and therapeutic medical tools and resources aboard the spacecraft and in terrestrial medical centres. Despite decades of research and space exploration, the prognostic and diagnostic capacity aboard the spacecraft remains limited and fragmented, while health assessments consist of questionnaires and collection of nominal physiological parameters, both of which are analyzed retrospectively, upon return to Earth, unless there is an apparent onset of a medical contingency which necessitates immediate therapeutic intervention. Even then, the use of the acquired physiological data is limited, as it is down-sampled to manageable data tuples for clinical evaluation and interpretation. In prior research we proposed the use of a big-data analytics platform, Artemis, for real-time assessment of adaptation during spaceflight. The capability of Artemis to support acquisition, storage and analysis of large volumes of physiological, environmental and activity data presents a great prospect for enhanced medical capacity during long-duration spaceflights and deep space exploration. As such, we propose a framework extending Artemis to further incorporate activity data and mental health evaluations, so as to develop a more wholistic approach to assessment of the crew's well-being during spaceflight. The proposed extension would also enable investigation of team dynamics and how interpersonal relationships influence an individual's performance and well-being. From a biomedical monitoring perspective, utilization of...
Qiao, M, Yu, J, Liu, T, Wang, X & Tao, D 2020, 'Diversified Bayesian nonnegative matrix factorization', AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, pp. 5420-5427.
View description>>
Nonnegative matrix factorization (NMF) has been widely employed in a variety of scenarios due to its capability of inducing semantic part-based representation. However, because of the non-convexity of its objective, the factorization is generally not unique and may inaccurately discover intrinsic “parts” from the data. In this paper, we approach this issue using a Bayesian framework. We propose to assign a diversity prior to the parts of the factorization to induce correctness based on the assumption that useful parts should be distinct and thus well-spread. A Bayesian framework including this diversity prior is then established. This framework aims at inducing factorizations embracing both good data fitness from maximizing likelihood and large separability from the diversity prior. Specifically, the diversity prior is formulated with determinantal point processes (DPP) and is seamlessly embedded into a Bayesian NMF framework. To carry out the inference, a Monte Carlo Markov Chain (MCMC) based procedure is derived. Experiments conducted on a synthetic dataset and a real-world MULAN dataset for multi-label learning (MLL) task demonstrate the superiority of the proposed method.
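The two ingredients the abstract combines, part-based factorization and a determinant-based diversity score, can be illustrated with a minimal sketch: plain Lee-Seung multiplicative-update NMF plus a log-determinant measure of how well-spread the learned parts are. This is an assumed simplification, not the paper's MCMC-based Bayesian inference; the function names are ours.

```python
import numpy as np

def nmf_multiplicative(V, k, iters=300, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates (not the paper's MCMC)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update parts
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update weights
    return W, H

def dpp_diversity(H):
    """Log-determinant of the parts' similarity kernel, as a DPP-style
    diversity score: larger (closer to 0) means better-spread parts."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    L = Hn @ Hn.T                                # unit-diagonal Gram kernel
    sign, logdet = np.linalg.slogdet(L + 1e-9 * np.eye(len(L)))
    return logdet
```

A diversity prior would push the sampler toward factorizations where this score is high; here the score is only evaluated after the fact.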
Ramakrishnan, RK, Ravichandran, AB, Talabattula, S, Vijayan, MK, Lund, AP & Rohde, PP 2020, 'Photonic Quantum Error Correction of Qudits Using W-state Encoding', 14th Pacific Rim Conference on Lasers and Electro-Optics (CLEO PR 2020), Conference on Lasers and Electro-Optics/Pacific Rim, Optica Publishing Group.
View/Download from: Publisher's site
View description>>
In this paper, we present a passive linear optics error correction scheme for qudits using W-state encoding, based on post selection, which reduces the effects of dephasing noise in photonic quantum communication.
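As background for the encoding, here is a minimal sketch of the n-qubit W state itself, the building block the scheme generalizes; the paper's qudit encoding and post-selection circuit are not reproduced.

```python
import numpy as np

def w_state(n):
    """State vector of the n-qubit W state: an equal superposition of the
    n computational basis states with exactly one excitation."""
    psi = np.zeros(2 ** n)
    for k in range(n):
        psi[1 << k] = 1.0          # basis state |0..010..0> with the 1 at qubit k
    return psi / np.sqrt(n)
```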
Reddy, TK, Arora, V, Behera, L, Wang, Y-K & Lin, C-T 2020, 'Fuzzy Divergence Based Analysis for EEG Drowsiness Detection Brain Computer Interfaces', 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Glasgow (UK), pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. EEG signals can be processed and classified into commands for a brain-computer interface (BCI). Stable deciphering of EEG is one of the leading challenges in BCI design owing to low signal-to-noise ratio and non-stationarities. The presence of non-stationarities in EEG signals significantly perturbs the feature distribution, deteriorating the performance of the brain-computer interface. Stationary subspace methods discover subspaces in which the data distribution remains steady over time. In this paper, we develop novel spatial filtering based feature extraction methods for dealing with non-stationarity in EEG signals from a drowsiness detection problem (a machine learning regression problem). The proposed DivOVR-FuzzyCSP-WS based features clearly outperformed fuzzy CSP based baseline features in terms of both RMSE and CC performance metrics. It is hoped that the proposed feature extraction method based on DivOVR-FuzzyCSP-WS will attract strong interest from researchers developing signal processing algorithms for BCI regression problems.
Saberi, M, Saberi, Z, Aasadabadi, MR, Hussain, OK & Chang, E 2020, 'A Customer-Oriented Assortment Selection in the Big Data Environment', Advances in E-Business Engineering for Ubiquitous Computing, International Conference on e-Business Engineering, Springer International Publishing, Shanghai, China, pp. 161-172.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. Customers prefer the availability of a range of products when they shop online. This enables them to identify their needs and select products that best match their desires, which is addressed through assortment planning. Some customers have a strong awareness of what they want to purchase and from which provider. Even so, since customer taste is an abstract concept, such customers' decisions may be influenced by the variety of products on offer, and the current varied market may affect their initial desire. Previous studies dealing with assortment planning have commonly addressed it from the retailer's point of view. This paper instead provides customers with a ranking method to find what they want; we propose that this provision benefits both the retailer and the customer. This study provides a customer-oriented assortment ranking approach. The ranking model facilitates browsing and exploring the current big market in order to help customers find their desired item considering their own taste. In this study, a scalable and customised multi-criteria decision making (MCDM) method is structured and utilised to help customers find their most suitable assortment while shopping online. The proposed MCDM method is tailored to fit the big data environment.
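A customer-oriented assortment ranking can be sketched with a simple weighted-sum MCDM step: normalize each criterion to a common scale, flip cost criteria, and rank items by a taste-weighted total. This is a generic illustration under assumed inputs, not the paper's tailored, scalable method.

```python
import numpy as np

def rank_assortment(scores, weights, higher_is_better):
    """Weighted-sum MCDM: min-max normalize each criterion to [0, 1], invert
    cost criteria (where lower is better), then rank items by the weighted
    total, best first. `weights` encodes the customer's taste."""
    X = np.asarray(scores, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    N = (X - lo) / np.where(hi > lo, hi - lo, 1.0)   # per-criterion [0, 1]
    N = np.where(higher_is_better, N, 1.0 - N)        # flip cost criteria
    totals = N @ (np.asarray(weights, float) / np.sum(weights))
    return np.argsort(-totals), totals
```

For example, with criteria (quality, price) weighted equally, the item with high quality and low price should rank first.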
Sangiovanni-Vincentelli, A, Sztipanovits, J, Zhu, Q & Yu, S 2020, 'Message from Organizers', 2020 IEEE Workshop on Design Automation for CPS and IoT (DESTION), 2020 IEEE Workshop on Design Automation for CPS and IoT (DESTION), IEEE, Sydney, NSW, Australia, p. VII.
View/Download from: Publisher's site
View description>>
The second DESTION workshop focuses on co-design and co-simulation tools for CPS development, formal methods that address the needs and challenges of incorporating LEC in CPS and IoT, and the application of these approaches in transportation and energy domains. The program of the workshop includes a keynote, presentations of contributed and invited papers, and demonstrations. The COVID-19 pandemic forced us to make the workshop virtual. Consequently, we will miss the stimulating in-person discussions and interactions among researchers. Even with this unique challenge, we hope to continue our path towards becoming a premier forum for researchers and engineers from academia, industry, and government to present and discuss pressing technical challenges, promising solutions, and emerging applications in design automation for CPS and IoT.
Shang, D, Zhang, G & Lu, J 2020, 'Fast concept drift detection using unlabeled data', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC.
View/Download from: Publisher's site
Shen, S, Zhu, T, Ye, D, Yang, M, Liao, T & Zhou, W 2020, 'Simultaneously Advising via Differential Privacy in Cloud Servers Environment', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ICA3PP 2019: Algorithms and Architectures for Parallel Processing, Springer International Publishing, Melbourne, VIC, Australia, pp. 550-563.
View/Download from: Publisher's site
View description>>
Due to the rapid development of the cloud computing environment, it is widely accepted that cloud servers are important for users to improve work efficiency. Users need to know servers' capabilities and make optimal decisions on selecting the best available servers for their tasks. We consider the process by which users learn servers' capabilities as a multi-agent Reinforcement learning process. The learning speed and efficiency in Reinforcement learning can be improved by transferring learning experience among learning agents, which is defined as advising. However, existing advising frameworks are limited by a requirement during experience transfer: all learning agents in a Reinforcement learning environment must have exactly the same available choices, also called actions. To address this limit, this paper proposes a novel differential privacy agent advising approach in Reinforcement learning. Our proposed approach can significantly broaden the application of conventional advising frameworks when agents' choices are not exactly the same. The approach can also speed up Reinforcement learning by increasing the possibility of experience transfer among agents with different available choices.
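The privacy side of such an advising framework can be sketched with the standard Laplace mechanism: an adviser reveals whether it is experienced enough in the current state only through a noisy threshold test, so its exact visit history stays differentially private. The function and parameters below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def give_advice(visit_count, threshold, epsilon, rng):
    """Noisy-threshold advising decision: advise only if the Laplace-noised
    visit count for the current state clears the experience threshold.
    Smaller epsilon => more noise => stronger privacy, noisier decisions."""
    noisy = visit_count + rng.laplace(scale=1.0 / epsilon)
    return noisy >= threshold
```

Over many states, an experienced agent (high visit counts) almost always advises while a novice almost never does, yet no single decision pinpoints an exact count.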
Shi, Z, Wu, D, Huang, J, Wang, Y-K & Lin, C-T 2020, 'Supervised Discriminative Sparse PCA with Adaptive Neighbors for Dimensionality Reduction', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow (UK), pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Dimensionality reduction is an important operation in information visualization, feature extraction, clustering, regression, and classification, especially for processing noisy high-dimensional data. However, most existing approaches preserve either the global or the local structure of the data, but not both. Approaches that preserve only the global data structure, such as principal component analysis (PCA), are usually sensitive to outliers. Approaches that preserve only the local data structure, such as locality preserving projections, are usually unsupervised (and hence cannot use label information) and use a fixed similarity graph. We propose a novel linear dimensionality reduction approach, supervised discriminative sparse PCA with adaptive neighbors (SDSPCAAN), to integrate neighborhood-free supervised discriminative sparse PCA and projected clustering with adaptive neighbors. As a result, both global and local data structures, as well as the label information, are used for better dimensionality reduction. Classification experiments on nine high-dimensional datasets validated the effectiveness and robustness of our proposed SDSPCAAN.
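As a point of reference for what SDSPCAAN extends, here is the global-structure baseline only: plain PCA via eigendecomposition of the covariance matrix, with no sparsity, supervision, or adaptive neighbors.

```python
import numpy as np

def pca(X, k):
    """Plain global PCA: center the data, eigendecompose the covariance
    matrix, and project onto the top-k variance directions."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # indices of the top-k directions
    return Xc @ vecs[:, order], vals[order]
```

On data with one dominant high-variance axis, that axis should land in the first projected component.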
Shu, Y, Sui, Y, Zhang, H & Xu, G 2020, 'Perf-AL', Proceedings of the 14th ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), ESEM '20: ACM / IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, pp. 1-11.
View/Download from: Publisher's site
View description>>
© 2020 IEEE Computer Society. All rights reserved. Context: Many software systems are highly configurable. Different configuration options could lead to varying performances of the system. It is difficult to measure system performance in the presence of an exponential number of possible combinations of these options. Goal: Predicting software performance by using a small configuration sample. Method: This paper proposes PERF-AL to address this problem via adversarial learning. Specifically, we use a generative network combined with several different regularization techniques (L1 regularization, L2 regularization and a dropout technique) to output predicted values as close to the ground truth labels as possible. With the use of adversarial learning, our network identifies and distinguishes the predicted values of the generator network from the ground truth value distribution. The generator and the discriminator compete with each other by refining the prediction model iteratively until its predicted values converge towards the ground truth distribution. Results: We argue that (i) the proposed method can achieve the same level of prediction accuracy, but with a smaller number of training samples; and (ii) experiments on seven real-world datasets show that our approach outperforms the state-of-the-art methods, which helps to further promote configurable software performance prediction. Conclusion: Experimental results on seven public real-world datasets demonstrate that PERF-AL outperforms state-of-the-art software performance prediction methods.
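The regularized-generator idea, without the adversarial discriminator, can be sketched as a linear performance predictor trained with combined L1 and L2 penalties. This is a deliberately reduced stand-in for the paper's network, with illustrative hyperparameters.

```python
import numpy as np

def fit_perf_model(X, y, l1=0.01, l2=0.01, lr=0.05, epochs=500):
    """Linear performance predictor trained by gradient descent on squared
    error plus L1 and L2 penalties (the generator-side regularization only;
    the adversarial discriminator is omitted)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        err = X @ w + b - y
        w -= lr * ((X.T @ err) / n + l2 * w + l1 * np.sign(w))
        b -= lr * err.mean()
    return w, b
```

On a small synthetic configuration sample with a linear performance function, the fitted model should track the ground truth closely despite the slight regularization shrinkage.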
Singh, AK & Tao, X 2020, 'BCINet: An Optimized Convolutional Neural Network for EEG-Based Brain-Computer Interface Applications', 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, pp. 582-587.
View/Download from: Publisher's site
Singh, AK, Aldini, S, Leong, D, Wang, Y-K, Carmichael, MG, Liu, D & Lin, C-T 2020, 'Prediction Error Negativity in Physical Human-Robot Collaboration', 2020 8th International Winter Conference on Brain-Computer Interface (BCI), 2020 8th International Winter Conference on Brain-Computer Interface (BCI), IEEE, Gangwon, Korea (South), pp. 1-6.
View/Download from: Publisher's site
View description>>
Cognitive conflict is a fundamental phenomenon of human cognition, particularly during interaction with the real world. Understanding and detecting cognitive conflict can help to improve interactions in a variety of applications, such as human-robot collaboration (HRC), which involves continuously guiding a semi-autonomous robot to perform a task in given settings. There have been several works to detect cognitive conflict in HRC, but without physical control settings. In this work, we have conducted the first study to explore cognitive conflict using prediction error negativity (PEN) in physical human-robot collaboration (pHRC). Our results show a statistically significant (p = .047) higher PEN for the conflict condition compared to the normal condition, as well as a statistically significant difference between different levels of PEN (p = .020). These results indicate that cognitive conflict can be detected in pHRC settings and, consequently, provide a window of opportunities to improve the interaction in pHRC.
Song, Y, Zhang, G, Lu, H & Lu, J 2020, 'A Fuzzy Drift Correlation Matrix for Multiple Data Stream Regression', 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Glasgow, United Kingdom, pp. 1-6.
View/Download from: Publisher's site
View description>>
How to handle the concept drift problem is a big challenge for algorithms designed for data streams. Current techniques for the concept drift problem focus on a single data stream; however, real-world applications normally need to handle multiple relevant data streams. Current concept drift methods cannot be directly used in the multi-stream setting: they can only be applied to each stream separately, which omits the drift correlation between streams. In the multi-stream scenario, when drift occurs in one stream, other streams may face, or may already have faced, a similar drift problem. This pattern of simultaneous or delayed occurrence of drift is critical for analyzing and predicting multiple streams as a whole dynamic system. To fill this gap, this paper proposes a fuzzy drift variance (FDV) to measure the correlated drift patterns among streams. FDV is able to present how the pattern of drift occurrence for any two streams correlates and how delayed this correlation is. Seven synthetic streams are designed to validate FDV, and the experimental results show a good presentation ability of FDV for drift-correlated multiple streams.
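The notion of correlated, possibly delayed drift between two streams can be sketched with a crisp (non-fuzzy) simplification: flag drifts with a windowed mean-shift test, then find the lag at which the two drift-indicator series overlap most. The fuzzy membership of FDV is replaced here by a hard indicator, and the window size and threshold are assumptions.

```python
import numpy as np

def drift_points(stream, window=20, thresh=1.0):
    """Flag a drift at step t when the mean of the latest window moves more
    than `thresh` away from the mean of the preceding window."""
    flags = np.zeros(len(stream), dtype=bool)
    for t in range(2 * window, len(stream)):
        prev = stream[t - 2 * window:t - window].mean()
        cur = stream[t - window:t].mean()
        flags[t] = abs(cur - prev) > thresh
    return flags

def drift_correlation(a_flags, b_flags, max_lag=30):
    """Best-lag overlap of two drift-indicator series: how strongly, and
    with what delay, drifts in stream A co-occur with drifts in stream B."""
    best = (0.0, 0)
    for lag in range(max_lag + 1):
        overlap = float(np.mean(a_flags & np.roll(b_flags, -lag)))
        if overlap > best[0]:
            best = (overlap, lag)
    return best   # (strength, delay in steps)
```

On two synthetic streams whose mean jumps ten steps apart, the recovered delay should be close to ten.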
Soomro, WA, Guo, Y, Lu, HY & Jin, JX 2020, 'Advancements and Impediments in Applications of High-Temperature Superconducting Material', 2020 IEEE International Conference on Applied Superconductivity and Electromagnetic Devices (ASEMD), 2020 IEEE International Conference on Applied Superconductivity and Electromagnetic Devices (ASEMD), IEEE.
View/Download from: Publisher's site
Su, Z, Lin, T, Xu, Q, Chen, N, Yu, S & Guo, S 2020, 'An Online Pricing Strategy of EV Charging and Data Caching in Highway Service Stations', 2020 16th International Conference on Mobility, Sensing and Networking (MSN), 2020 16th International Conference on Mobility, Sensing and Networking (MSN), IEEE, pp. 81-85.
View/Download from: Publisher's site
Tao, X, Zhang, D, Singh, AK, Prasad, M, Lin, C-T & Xu, D 2020, 'Weak Scratch Detection of Optical Components Using Attention Fusion Network', 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), IEEE, ELECTR NETWORK, pp. 855-862.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Scratches on an optical surface can directly affect the reliability of the optical system. Machine vision-based methods have been widely applied in various industrial surface defect inspection scenarios. However, weak scratches imaged in the dark field have ambiguous edges and low contrast, which makes automatic defect detection difficult. Many existing visual inspection methods based on deep learning cannot effectively inspect weak scratches due to the lack of attention-aware features. To address the problems arising from these industry-specific characteristics, this paper proposes the 'Attention Fusion Network', a convolutional neural network that uses an attention mechanism built from hard and soft attention modules to generate attention-aware features. The hard attention module is implemented by integrating a brightness adjustment operation into the network, and the soft attention module is composed of scale attention and channel attention. The proposed model is trained on a real-world industrial scratch dataset and compared with other defect inspection methods. It achieves the best performance in detecting weak scratches on optical components compared to both traditional scratch detection methods and other deep learning-based methods.
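Channel attention of the kind used in soft attention modules is commonly built squeeze-and-excitation style; the sketch below is an assumed stand-in for the paper's module, not its actual architecture: global-average-pool each channel, gate through a small two-layer network, and rescale the channels.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation-style channel attention: pool each channel of a
    (C, H, W) feature map to a scalar, pass the pooled vector through a small
    ReLU/sigmoid gate, and rescale every channel by its gate value."""
    squeeze = feat.mean(axis=(1, 2))               # (C,) global average pool
    hidden = np.maximum(W1 @ squeeze, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))    # sigmoid gate, (C,)
    return feat * gate[:, None, None]
```

Since each gate value lies in (0, 1), the module can only attenuate channels, letting the network emphasize scratch-relevant ones by suppressing the rest.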
Thapa, S, Adhikari, S, Naseem, U, Singh, P, Bharathy, G & Prasad, M 2020, 'Detecting Alzheimer's Disease by Exploiting Linguistic Information from Nepali Transcript.', ICONIP (4), Springer, pp. 176-184.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. Alzheimer’s disease (AD) is the most common form of neurodegenerative disorder, accounting for 60–80% of all dementia cases. The lack of effective clinical treatment options to completely cure or even slow the progression of the disease makes it even more serious. Treatment options are available for the milder stages of the disease to provide symptomatic short-term relief and improve quality of life. Early diagnosis is key in the treatment and management of AD, as advanced stages of the disease cause severe cognitive decline and permanent brain damage. This has prompted researchers to explore innovative ways to detect AD early on. Changes in speech are one of the main signs in AD patients: as the brain deteriorates, the language processing ability of the patients deteriorates too. Previous research has applied Natural Language Processing (NLP) techniques for early detection of AD in the English language; however, research using local and low-resourced languages like Nepali still lags behind. NLP is an important tool in Artificial Intelligence to decipher human language and perform various tasks. In this paper, various classifiers are discussed for the early detection of Alzheimer's in the Nepali language. The proposed study makes a convincing conclusion that the difficulty in processing information in AD patients is reflected in their speech while describing a picture. The study uses the speech decline of AD patients to classify them as control subjects or AD patients using various classifiers and NLP techniques. Furthermore, a new dataset consisting of transcripts of AD patients and control normal (CN) subjects in the Nepali language is introduced. This paper thereby sets a baseline for the early detection of AD using NLP in the Nepali language.
Thapa, S, Singh, P, Jain, DK, Bharill, N, Gupta, A & Prasad, M 2020, 'Data-Driven Approach based on Feature Selection Technique for Early Diagnosis of Alzheimer’s Disease', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Alzheimer's disease (AD) is a neurodegenerative disorder resulting in memory loss and cognitive decline caused by the death of brain cells. It is the most common form of dementia and accounts for 60-80% of all dementia cases. There is no single test for the diagnosis of AD; doctors rely on medical history, neuropsychological assessments, computed tomography (CT) or magnetic resonance imaging (MRI) scans of the brain, etc. to confirm a diagnosis. In terms of treatment, there is currently neither a cure nor any way to slow the progression of AD. However, for people with mild or moderate stages of this disease, some medications are available to temporarily reduce symptoms and help improve quality of life. Hence, early diagnosis of AD is extremely crucial for overall better management of the disease. Research has shown some relation between neuropsychological scores and atrophies of the brain, which can be leveraged for the early diagnosis of AD. This paper makes use of feature selection techniques to extract the most important features for the diagnosis of AD, and demonstrates the need to combine neuropsychological scores like the mini-mental state examination (MMSE) with MRI features to provide a better decision space for early diagnosis. The experiments show that including MMSE along with other features significantly improves the classification of AD.
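A simple way to see why combining MMSE with MRI features helps is a univariate filter: rank every feature, neuropsychological scores included, by absolute correlation with the diagnosis label. The feature names and synthetic data below are illustrative assumptions, not the paper's dataset or its exact selection technique.

```python
import numpy as np

def rank_features(X, y, names):
    """Univariate filter ranking: order features by absolute Pearson
    correlation with the diagnosis label, strongest first."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    order = np.argsort(-np.abs(corr))
    return [names[i] for i in order], corr[order]
```

On synthetic data where a hypothetical MMSE-like score tracks the label tightly, that score should rank first and a pure-noise feature last.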
Thiyagarajan, K, Kodagoda, S, Ulapane, N & Prasad, M 2020, 'A Temporal Forecasting Driven Approach Using Facebook’s Prophet Method for Anomaly Detection in Sewer Air Temperature Sensor System', 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), IEEE, Kristiansand, Norway, pp. 25-30.
View/Download from: Publisher's site
Ubaid, A, Dong, F & Hussain, FK 2020, 'Framework for Feature Selection in Health Assessment Systems', Advances in Intelligent Systems and Computing, International Conference on Advanced Information Networking and Applications, Springer International Publishing, Japan, pp. 313-324.
View/Download from: Publisher's site
View description>>
Anomaly detection in health assessment systems has gained much attention in the recent past. Various feature selection techniques have been proposed for successful anomaly detection. However, these methods do not cater for the need to select features in health assessment systems. Most of the present techniques are data dependent and do not offer an option for incorporating domain information. This paper proposes a novel domain knowledge-driven feature selection framework named domain-driven selective wrapping (DSW) that can help in the selection of a correlated feature subset. The proposed framework uses an expert’s domain knowledge for the selection of subsets. The framework uses a custom-designed logic-driven anomaly detection block (LDAB) as a wrapper. The experiment results show that the proposed framework is able to select feature subsets more efficiently than traditional sequential selection methods and is very successful in detecting anomalies.
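The wrapper idea, with a domain-driven scorer standing in for the logic-driven anomaly detection block (LDAB), can be sketched as greedy forward selection: keep adding the feature that most improves the wrapper's score, and stop once nothing improves it. The names and scoring function below are illustrative assumptions, not the DSW framework itself.

```python
def select_features(features, score_fn, max_k=3):
    """Greedy forward wrapper: grow the subset one feature at a time,
    accepting each addition only if the wrapper's score strictly improves.
    `score_fn` plays the role of the framework's anomaly-detection block."""
    chosen, best = [], float("-inf")
    candidates = list(features)
    while candidates and len(chosen) < max_k:
        gains = [(score_fn(chosen + [f]), f) for f in candidates]
        top_score, top_f = max(gains)
        if top_score <= best:          # no candidate improves the score: stop
            break
        chosen.append(top_f)
        candidates.remove(top_f)
        best = top_score
    return chosen, best
```

With a toy additive scorer, only the features that actually raise the score are kept; a zero- or negative-contribution feature is rejected.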
Ubaid, A, Hussain, FK & Charles, J 2020, 'Machine Learning-Based Regression Models for Price Prediction in the Australian Container Shipping Industry: Case Study of Asia-Oceania Trade Lane', Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 52-59.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. The objective of this paper is to train a data-driven price prediction model for container pricing based on demand and supply for the Australian container shipping industry. Demand, supply and pricing data have been sourced from Australian ports, Sea-Intelligence maritime analysis and the Shanghai Freight Index (SCFI) respectively. Data-driven prediction has been realized by applying three different regression models, namely support vector regression (SVR), random forest regression (RFR) and gradient boosting regression (GBR), over the gathered datasets after initial feature engineering. A comparison of research outcomes shows that GBR outperforms all the other models by offering a test accuracy of 84%.
Van Den Hoven, E, Shaer, O, Loke, L, Van Dijk, J & Kun, A 2020, 'TEI 2020 Chairs' Welcome', TEI 2020 - Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction, pp. III-IV.
Wang, D, Arzhaeva, Y, Devnath, L, Qiao, M, Amirgholipour, S, Liao, Q, McBean, R, Hillhouse, J, Luo, S, Meredith, D, Newbigin, K & Yates, D 2020, 'Automated Pneumoconiosis Detection on Chest X-Rays Using Cascaded Learning with Real and Synthetic Radiographs', 2020 Digital Image Computing: Techniques and Applications (DICTA), 2020 Digital Image Computing: Techniques and Applications (DICTA), IEEE.
View/Download from: Publisher's site
Wang, K, Liu, A, Lu, J, Zhang, G & Xiong, L 2020, 'An Elastic Gradient Boosting Decision Tree for Concept Drift Learning', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 420-432.
View/Download from: Publisher's site
View description>>
In a non-stationary data stream, concept drift occurs when different chunks of incoming data have different distributions. Hence, over time, the global optimization point of a learning model might permanently drift to the point where the model no longer adequately performs the task it was designed for. This phenomenon needs to be addressed to maintain the integrity and effectiveness of a model over the long term. In this paper, we propose a simple but effective drift learning algorithm called elastic Gradient Boosting Decision Tree (eGBDT). Since the prediction of a GBDT model is the sum output of a list of trees, we can easily append new trees to perform incremental learning, or delete the last few trees to roll back to a previously known optimization point. The proposed eGBDT incrementally fits new data and detects drift by searching for the tree with the lowest residual. If the required rollback deletions would exceed the initial number of trees, a retraining process is triggered. Comparisons of eGBDT with five state-of-the-art methods on eight data sets show the efficacy of eGBDT.
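The elasticity idea, appending trees to fit new data and deleting the last few to roll back, can be sketched with boosted regression stumps. This is a toy reconstruction from the abstract, not the authors' implementation, and it omits their residual-based drift search and retraining trigger.

```python
import numpy as np

class ElasticBoost:
    """Boosted regression stumps with append/rollback: each new stump fits
    the residual of the current ensemble; rollback deletes the most recently
    added stumps to return to an earlier optimization point."""

    def __init__(self):
        self.stumps = []   # list of (feature, threshold, left_value, right_value)

    def _fit_stump(self, X, r):
        """Least-squares stump on residual r: best (feature, threshold) split."""
        best_sse, best_stump = np.inf, None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left, right = r[X[:, j] <= t], r[X[:, j] > t]
                if len(left) == 0 or len(right) == 0:
                    continue
                sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
                if sse < best_sse:
                    best_sse, best_stump = sse, (j, t, left.mean(), right.mean())
        return best_stump

    def predict(self, X):
        y = np.zeros(len(X))
        for j, t, lv, rv in self.stumps:
            y += np.where(X[:, j] <= t, lv, rv)
        return y

    def append(self, X, y, n_trees=1):
        """Incremental learning: fit n_trees new stumps to the residual."""
        for _ in range(n_trees):
            stump = self._fit_stump(X, y - self.predict(X))
            if stump is not None:
                self.stumps.append(stump)

    def rollback(self, n_trees=1):
        """Delete the last n_trees stumps to revert to an earlier ensemble."""
        del self.stumps[len(self.stumps) - n_trees:]
```

Because the ensemble's prediction is a plain sum over stumps, appending and rolling back are both O(1) structural operations on the tree list.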
Wang, X, Jin, D, Musial, K & Dang, J 2020, 'Topic enhanced sentiment spreading model in social networks considering user interest', AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, 34th AAAI Conference on Artificial Intelligence / 32nd Innovative Applications of Artificial Intelligence Conference / 10th AAAI Symposium on Educational Advances in Artificial Intelligence, ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, New York, NY, pp. 989-996.
View description>>
Emotion is a complex psychological state which can affect our physiology and psychology and lead to behavior changes. The spreading process of emotions in text-based social networks is referred to as sentiment spreading. In this paper, we study an interesting problem of sentiment spreading in social networks. In particular, by employing a text-based social network (Twitter), we try to unveil the correlation between users’ sentimental statuses and the topic distributions embedded in their tweets, and then to automatically learn the influence strength between linked users. Furthermore, we introduce user interest to refine the influence strength. We develop a unified probabilistic framework to formalize the problem into a topic-enhanced sentiment spreading model. The model can predict users’ sentimental statuses based on their historical emotional status, topic distributions in tweets and social structures. Experiments on the Twitter dataset show that the proposed model significantly outperforms several alternative methods in predicting users’ sentimental status. We also discover an intriguing phenomenon: positive and negative sentiments are more relevant to user interest than neutral ones. Our method offers a new opportunity to understand the underlying mechanism of sentiment spreading in online social networks.
Wang, X, Li, Q, Zhang, W, Xu, G, Liu, S & Zhu, W 2020, 'Joint Relational Dependency Learning for Sequential Recommendation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Singapore, pp. 168-180.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2020. Sequential recommendation leverages the temporal information of users’ transactions as transition dependencies for better inferring user preference, and has become increasingly popular in academic research and practical applications. Short-term transition dependencies contain the information of partial item orders, while long-term transition dependencies infer long-range user preference; the two dependencies are mutually restrictive and complementary. Although some work investigates unifying both long-term and short-term dependencies for better performance, it still neglects the fact that short-term interactions are multi-fold, being either individual-level or union-level interactions. Existing sequential recommenders mainly focus on a user’s individual (i.e., individual-level) interactions but ignore the important collective influence at the union level. Since union-level interactions reflect that human decisions are made based on multiple items a user has already interacted with, ignoring such interactions results in an inability to capture the collective influence between items. To alleviate this issue, we propose Joint Relational Dependency Learning (JRD-L) for sequential recommendation, which exploits both long-term and short-term preferences at the individual and union levels. Specifically, JRD-L combines long-term user preferences with short-term interests by measuring short-term pair relations at the individual and union levels. Moreover, JRD-L can alleviate the sparsity problem of union-level interactions by adding more descriptive details to each item, carried by individual-level relations. Extensive numerical experiments demonstrate that JRD-L outperforms state-of-the-art baselines for sequential recommendation.
Wang, Y, Shi, K & Niu, Z 1970, 'A session-based job recommendation system combining area knowledge and interest graph neural networks', Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE, pp. 489-492.
View/Download from: Publisher's site
View description>>
Online job boards have become one of the central components of the modern recruitment industry. Existing systems are mainly focused on content analysis of resumes and job descriptions, so they rely heavily on the accuracy of semantic analysis and the coverage of content modeling, in which case they usually suffer from rigidity and a lack of implicit semantic relations. In recent years, session-based recommendation has attracted the attention of many researchers, as it can judge a user's interest preferences and recommend items based on the user's historical clicks. Most existing session-based recommendation systems are insufficient to obtain accurate user vectors in sessions and neglect complex transitions of items. We propose a novel method, Area Knowledge and Interest Graph Neural Networks (AIGNN). We add job area knowledge to job session recommendations, in which session sequences are modeled as graph-structured data so that a GNN can capture complex transitions of items. Moreover, an attention mechanism is introduced to represent the user's interest. Experiments on a real-world dataset prove that the proposed model performs better than other algorithms.
Wang, Z, Pei, Q, Liu, X, Ma, L, Li, H & Yu, S 1970, 'DAPS: A Decentralized Anonymous Payment Scheme with Supervision', Algorithms and Architectures for Parallel Processing, International Conference on Algorithms and Architectures for Parallel Processing, Springer International Publishing, Australia, pp. 537-550.
View/Download from: Publisher's site
View description>>
With the emergence of blockchain-based multi-party trading scenarios such as finance, government work, and supply chain management, information on the blockchain poses a serious threat to users’ privacy, and anonymous transactions have become the most urgent need. At present, solutions for realizing anonymous transactions can only achieve a certain degree of trader identity privacy and transaction content privacy, so we introduce zero-knowledge proofs to achieve complete privacy. At the same time, unconditional privacy provides conditions for cybercrime. Given the great application potential of the blockchain in many fields, supporting privacy protection and supervision simultaneously in the blockchain is a bottleneck, and existing works cannot solve the problem of the coexistence of privacy protection and supervision. This paper takes the lead in studying privacy and supervision in multi-party anonymous transactions, and proposes a decentralized anonymous payment scheme with supervision (DAPS) based on zk-SNARKs, signatures, commitments and elliptic curve cryptography, which enables users to be anonymous under supervision in transactions. The advantages of DAPS are twofold: enhanced privacy and additional supervision. We formally discuss the security of the whole system framework provided by the zero-knowledge proof, and verify its feasibility and practicability in the open-source blockchain framework BCOS.
Wen, H, Wu, Y, Yang, C, Duan, H & Yu, S 1970, 'A Unified Federated Learning Framework for Wireless Communications: towards Privacy, Efficiency, and Security', IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, ELECTR NETWORK, pp. 653-658.
View/Download from: Publisher's site
View description>>
Training high-quality machine learning models on distributed systems is a critical issue for achieving edge intelligence in wireless communications. Conventional data-driven machine learning approaches are infeasible due to the non-IID data caused by privacy issues and the limited communication resources in wireless networks. Besides, considering the complex user identities, the training process also faces the challenge of Byzantine devices, which can inject poisoning information into models. In this paper, we propose a two-step federated learning framework, robust federated augmentation and distillation (RFA-RFD), to enable privacy-preserving, communication-efficient, and Byzantine-tolerant on-device machine learning in wireless communications. RFA is a method to tackle the problem of non-IID local data: it first trains local data generators on edge devices, then trains a global generator in the cloud server on the IID dataset generated by the uploaded local generators, and finally lets devices rectify their non-IID datasets by downloading the global generator. After obtaining IID local data on edge devices, RFD is implemented to improve the performance of local models, in which devices only share the local information of models' outputs to reduce communication overhead. By employing a detection-and-discard mechanism in both RFA and RFD, our framework achieves robustness to the influence of Byzantine devices. Experiments show the effectiveness of RFA-RFD in preserving privacy, correcting non-IID data, reducing communication overhead, and resisting Byzantine devices, without much loss of accuracy compared with existing state-of-the-art methods.
Wen, Y, Liu, B, Xie, R, Zhu, Y, Cao, J & Song, L 1970, 'A Hybrid Model for Natural Face De-Identification with Adjustable Privacy', 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), IEEE.
View/Download from: Publisher's site
Weng, J, Xiao, F & Cao, Z 1970, 'Uncertainty modelling in multi-agent information fusion systems', Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 1494-1502.
View description>>
In the field of informed decision-making, a single diagnostic expert system has limitations when dealing with complicated circumstances. A multi-agent information fusion (MAIF) system can mitigate this situation, as it allows multiple agents to collaborate on solving problems in a complex environment. However, the MAIF system needs to handle the uncertainty between different agents objectively at the same time. Toward this goal, this study reconstructs the generation of basic probability assignments (BPAs) based on the framework of evidence theory, and presents the uncertainty relationship between recognition sets, which is beneficial to applications of the MAIF system. On the basis of evidence distance measurement, our method demonstrates its effectiveness and extendibility in numerical examples, and improves the accuracy and anti-interference ability of the identification process in the MAIF system.
Wu, J, Qiang, W, Zhu, T, Jin, H, Xu, P & Shen, S 1970, 'Differential Privacy Preservation for Smart Meter Systems', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ICA3PP 2019: Algorithms and Architectures for Parallel Processing, Springer International Publishing, Melbourne, VIC, Australia, pp. 669-685.
View/Download from: Publisher's site
View description>>
With the rapid development of IoT and smart homes, smart meters have received extensive attention. Third-party applications, such as smart home control, dynamic demand response and power monitoring, can provide services to users based on household electricity consumption data collected from smart meters. With the emergence of non-intrusive load monitoring, privacy issues arising from smart meter data become more and more severe. Differential privacy is a recognized concept that has become an important standard of privacy preservation for data containing personal information. However, the existing differential privacy-based protection methods for smart meter data sacrifice the accuracy of the actual energy consumption data to protect the privacy of users, thus affecting the billing of power suppliers. To solve this problem, we propose a group-based noise-adding method that ensures correct electricity billing. Experiments with two real-world datasets demonstrate that our approach can not only provide a strict privacy guarantee but also improve performance significantly.
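The billing-preserving intuition behind a group-based noise-adding scheme can be illustrated with a small sketch. This is our own illustration, not the paper's actual mechanism: it assumes billing only needs the group total, draws Laplace noise per meter, and recentres the noise so it sums to zero within the group (the `perturb_group` helper below is hypothetical).

```python
import math
import random

def laplace(scale, rng):
    """Draw one Laplace(0, scale) sample via inverse-CDF sampling.
    u lies in (-0.5, 0.5) almost surely, so the log argument is positive."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_group(readings, scale, rng):
    """Add Laplace noise to every meter reading in a group, then recentre
    the noise to sum to zero within the group: individual readings are
    masked, while the group total used for billing stays exact."""
    noise = [laplace(scale, rng) for _ in readings]
    mean = sum(noise) / len(noise)
    return [r + n - mean for r, n in zip(readings, noise)]

rng = random.Random(0)
readings = [1.2, 0.8, 2.5, 1.9]
noisy = perturb_group(readings, scale=0.5, rng=rng)
```

Note that recentring couples the noise values, so the per-meter guarantee is weaker than plain Laplace noise; the paper's grouping is designed to balance exactly this trade-off.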
Wu, X, Ji, G, Dou, W, Yu, S & Qi, L 1970, 'Game Theory for Mobile Location Privacy', Proceedings of the 2nd ACM International Symposium on Blockchain and Secure Critical Infrastructure, ASIA CCS '20: The 15th ACM Asia Conference on Computer and Communications Security, ACM, pp. 106-116.
View/Download from: Publisher's site
View description>>
With the rapid growth of mobile networks and infrastructure, location-related services across multiple domains have received a great deal of attention. However, privacy is always an important problem that has a great influence on such services. In order to study the attack and defence of privacy, we survey the existing game-theoretic literature on the interactions involved in mobile location privacy problems. We first demonstrate the necessity of applying game theory to location privacy. Then, we divide the literature into four types according to the different game players. Next, we describe the detailed content and analyse the equilibria of privacy games. In addition, we also cover works based on mechanism design that motivate defenders to increase their defences in various contexts. Finally, we discuss the possible trends and challenges of future research. Our survey provides a systematic and comprehensive understanding of location privacy preservation problems in mobile networks.
Wu, Y, Cao, J & Xu, G 1970, 'FAST: A Fairness Assured Service Recommendation Strategy Considering Service Capacity Constraint', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 287-303.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2020. An excessive number of customers often leads to a degradation in service quality. However, the capacity constraints of services are ignored by recommender systems, which may lead to unsatisfactory recommendations. This problem can be solved by limiting the number of users who receive the recommendation for a service, but this may be viewed as unfair. In this paper, we propose a novel metric, Top-N Fairness, to measure the individual fairness of multi-round recommendations of services with capacity constraints. Based on the fact that users are often only affected by top-ranked items in a recommendation, Top-N Fairness only considers a sub-list consisting of the top N services. Based on the metric, we design FAST, a Fairness Assured service recommendation STrategy. FAST adjusts the original recommendation list to provide users with recommendation results that guarantee the long-term fairness of multi-round recommendations. We theoretically prove the convergence property of the variance of Top-N Fairness under FAST. FAST is tested on the Yelp dataset and synthetic datasets. The experimental results show that FAST achieves better recommendation fairness while still maintaining high recommendation quality.
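A Top-N style fairness measure over multiple rounds can be sketched as follows. The `top_n_rate` and `fairness_variance` helpers are hypothetical simplifications, not the paper's exact definition: they assume fairness is captured by the variance, across users, of how often a capacity-constrained service appears in each user's top-N list.

```python
def top_n_rate(history, service, n):
    """Fraction of recommendation rounds in which `service` appears in
    this user's top-n list -- a simple per-user exposure rate."""
    hits = sum(1 for rec_list in history if service in rec_list[:n])
    return hits / len(history)

def fairness_variance(histories, service, n):
    """Variance of the per-user exposure rate across users. A lower value
    means recommendations of the constrained service are spread more
    evenly over users across multiple rounds (more individually fair)."""
    rates = [top_n_rate(h, service, n) for h in histories]
    mean = sum(rates) / len(rates)
    return sum((r - mean) ** 2 for r in rates) / len(rates)

# Two users, two rounds each; user 0 always sees 'a' at rank 1, user 1 never does.
histories = [
    [['a', 'b'], ['a', 'c']],
    [['c', 'b'], ['b', 'a']],
]
```

A strategy like FAST would then reorder future lists so the exposure rates converge, driving this variance down over successive rounds.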
Xing, Y, Guo, L, Xie, Z, Cui, L, Gao, L & Yu, S 1970, 'Non-Technical Losses Detection in Smart Grids: An Ensemble Data-Driven Approach', 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS), 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS), IEEE, Hong Kong, pp. 563-568.
View/Download from: Publisher's site
View description>>
Non-technical loss (NTL) detection plays a crucial role in protecting the security of smart grids, and employing massive energy consumption data and advanced artificial intelligence (AI) techniques for NTL detection is helpful. However, there are concerns regarding the effectiveness of existing AI-based detectors against covert attack methods; in particular, tampered metering data with normal consumption patterns may result in a low detection rate. Motivated by this, we propose a hybrid data-driven detection framework. Specifically, we introduce a wide and deep convolutional neural network (CNN) model to capture the global and periodic features of consumption data. We also leverage the maximal information coefficient algorithm to analyse and detect covert abnormal measurements. Our extensive experiments under different attack scenarios demonstrate the effectiveness of the proposed method.
Xu, Y, Chen, L, Fang, M, Wang, Y & Zhang, C 1970, 'Deep Reinforcement Learning with Transformers for Text Adventure Games', 2020 IEEE Conference on Games (CoG), 2020 IEEE Conference on Games (CoG), IEEE, pp. 65-72.
View/Download from: Publisher's site
View description>>
In this paper, we study transformers for text-based games. As a promising replacement for recurrent modules in Natural Language Processing (NLP) tasks, the transformer architecture can be treated as a powerful state representation generator for reinforcement learning. However, the vanilla transformer is neither effective nor efficient to learn with, given its huge number of weight parameters. Unlike existing research that encodes states using LSTMs or GRUs, we develop a novel lightweight transformer-based representation generator featuring reordered layer normalization, weight sharing and block-wise aggregation. The experimental results show that our proposed model not only solves single games with far fewer interactions, but also achieves better generalization on a set of unseen games. Furthermore, our model outperforms state-of-the-art agents in a variety of man-made games.
Xu, Y, Fang, M, Chen, L, Du, Y, Zhou, JT & Zhang, C 1970, 'Deep reinforcement learning with stacked hierarchical attention for text-based games', Advances in Neural Information Processing Systems, Conference on Neural Information Processing Systems, NIPS, Virtual, pp. 1-13.
View description>>
We study reinforcement learning (RL) for text-based games, which are interactive simulations in the context of natural language. While different methods have been developed to represent the environment information and language actions, existing RL agents are not empowered with any reasoning capabilities to deal with textual games. In this work, we aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure. We propose a stacked hierarchical attention mechanism to construct an explicit representation of the reasoning process by exploiting the structure of the knowledge graph. We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
Xue, H, Liu, B, Din, M, Song, L & Zhu, T 1970, 'Hiding Private Information in Images From AI', ICC 2020 - 2020 IEEE International Conference on Communications (ICC), ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, Dublin, Ireland.
View/Download from: Publisher's site
View description>>
Privacy protection is attracting increasing concern these days. People tend to believe that large social platforms will comply with their agreements to protect users' privacy. However, photos uploaded by people are usually not processed to achieve privacy protection. For example, Facebook, the world's largest social platform, was found to have leaked photos of millions of users to commercial organizations for big data analytics. A common analytical tool used by these commercial organizations is the Deep Neural Network (DNN). Today's DNNs can accurately identify people's appearance, body shape, hobbies and even more sensitive personal information, such as addresses, phone numbers, emails and bank cards. To enable people to enjoy sharing photos without worrying about their privacy, we propose an algorithm that allows users to selectively protect their privacy while preserving the contextual information contained in images. The results show that the proposed algorithm can select and perturb the private objects to be protected among multiple optional objects, so that the DNN can only identify the non-private objects in images.
Yang, H, Chen, L, Lei, M, Niu, L, Zhou, C & Zhang, P 1970, 'Discrete Embedding for Latent Networks', Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}, International Joint Conferences on Artificial Intelligence Organization, pp. 1223-1229.
View/Download from: Publisher's site
View description>>
Discrete network embedding emerged recently as a new direction of network representation learning. Compared with traditional network embedding models, discrete network embedding aims to compress the model size and accelerate model inference by learning a set of short binary codes for network vertices. However, existing discrete network embedding methods usually assume that the network structures (e.g., edge weights) are readily available. In real-world scenarios such as social networks, it is sometimes impossible to collect explicit network structure information, and it usually needs to be inferred from implicit data such as information cascades in the networks. To address this issue, we present an end-to-end discrete embedding model for latent networks, DELN, that can learn binary representations from underlying information cascades. The essential idea is to infer a latent Weisfeiler-Lehman proximity matrix that captures node dependence based on information cascades, and then to factorize this matrix under the binary node representation constraint. Since the learning problem is a mixed integer optimization problem, an efficient maximum likelihood estimation based cyclic coordinate descent (MLE-CCD) algorithm is used as the solution. Experiments on real-world datasets show that the proposed model outperforms state-of-the-art network embedding methods.
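The intuition behind the binary representation constraint, namely that short sign codes can still preserve pairwise proximity, can be shown with a toy sketch. This is emphatically not the paper's MLE-CCD solver: it simply assumes some continuous embeddings have already been learned and binarizes them coordinate-wise by sign.

```python
def binarize(embeddings):
    """Discretize real-valued node embeddings into short binary codes by
    taking the sign of each coordinate (the binary constraint in spirit;
    DELN's actual MLE-CCD optimization is far more involved)."""
    return [[1 if x >= 0 else -1 for x in vec] for vec in embeddings]

def hamming_similarity(a, b):
    """Fraction of matching bits between two sign codes; this cheap bit
    comparison stands in for the proximity the continuous vectors encoded."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

# Nodes 0 and 1 are close in the continuous space, node 2 is far away.
emb = [[0.9, -0.2, 0.4], [0.8, -0.1, 0.5], [-0.7, 0.6, -0.3]]
codes = binarize(emb)
```

Because comparing sign codes needs only bitwise operations, inference is much cheaper than dot products over dense float vectors, which is the motivation for discrete embedding in the first place.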
Yang, X & Liu, W 1970, 'Population Location and Movement Estimation through Cross-domain Data Analysis', Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}, International Joint Conferences on Artificial Intelligence Organization, Yokohama, Japan.
View/Download from: Publisher's site
View description>>
Estimations of people's movement behaviour within a country can provide valuable information for government strategic resource planning. In this paper, we propose to utilize multi-domain statistical data to estimate people's movements under the assumption that most people tend to move to areas with similar or better living conditions. We design a Multi-domain Matrix Factorization (MdMF) model to discover the underlying consistency patterns from these cross-domain data and estimate the movement trends using the proposed model. This research can provide important theoretical support to governments and agencies in strategic resource planning and investment.
Yu, H, Liu, A, Wang, B, Li, R, Zhang, G & Lu, J 1970, 'Real-Time Decision Making for Train Carriage Load Prediction via Multi-stream Learning', AI 2020: Advances in Artificial Intelligence, Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Canberra, ACT, Australia, pp. 29-41.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. Real-time traffic planning and scheduling optimization are critical for developing low-cost, reliable, resilient, and efficient transport systems. In this paper, we present a real-time application that uses machine learning techniques to forecast the train carriage load when a train departs from a platform. With the predicted carriage load, the crew can efficiently manage passenger flow, improving the efficiency of boarding and alighting and thereby reducing the time trains spend at stations. We developed the application in collaboration with Sydney Trains. Most data are publicly available on the Open Data Hub, which is supported by Transport for NSW. We investigated the performance of different models and features, and measured their contributions to prediction accuracy. From this, we propose a novel learning strategy, called Multi-Stream Learning, which merges streams having similar concept drift patterns to boost the training data size, with the aim of achieving lower generalization errors. We have summarized our solutions and hope that researchers and industrial users facing similar problems will benefit from our findings.
Zhan, X, Chen, M, Yu, S & Zhang, Y 1970, 'Adaptive Detection Method for Packet-In Message Injection Attack in SDN', Algorithms and Architectures for Parallel Processing, International Conference on Algorithms and Architectures for Parallel Processing, Springer International Publishing, Australia, pp. 482-495.
View/Download from: Publisher's site
View description>>
Packet-In message injection attacks are severe in Software Defined Networking (SDN), as they can cause a single point of failure at the centralized controller and crash the entire network. Many detection methods exist, including entropy-based detection. We propose an adaptive detection method to proactively defend against this attack. We establish a Poisson probability distribution detection model to find the attack and use a flow table filter to mitigate it. We also use the EWMA method to update the expectation value of the model to adapt to actual network conditions. Our method has no need to send additional packets to request switch information. The experimental results show a true positive rate of 92% under attacks injecting packets with random destination IPs, and of 98.2% under attacks injecting packets with random source IPs.
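A minimal sketch of the detection idea, under our own assumptions rather than the paper's exact model: Packet-In arrivals per window are modelled as Poisson, a window is flagged as an attack when the observed count's tail probability is negligible, and the expectation is updated by EWMA only on benign windows (the thresholds below are illustrative).

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam); pmf terms are computed in log space
    via lgamma so large counts do not overflow."""
    cdf = sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
              for i in range(k))
    return max(0.0, 1.0 - cdf)

def detect(count, lam, alpha=0.3, p_threshold=1e-3):
    """Flag a window as an attack if the observed Packet-In count is
    extremely unlikely under the current Poisson model; otherwise fold
    the observation into the expectation with an EWMA update."""
    if poisson_tail(count, lam) < p_threshold:
        return True, lam                      # attack: keep the model unchanged
    return False, alpha * count + (1 - alpha) * lam

lam = 20.0                                    # expected Packet-In count per window
for count in [22, 19, 21, 250]:               # the last window is a flood
    attack, lam = detect(count, lam)
```

Updating the expectation only on benign windows keeps a flood from poisoning the baseline, which is the point of combining the probabilistic test with EWMA.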
Zhang, C, Zhang, S, Yu, JJQ & Yu, S 1970, 'An Enhanced Motif Graph Clustering-Based Deep Learning Approach for Traffic Forecasting', GLOBECOM 2020 - 2020 IEEE Global Communications Conference, GLOBECOM 2020 - 2020 IEEE Global Communications Conference, IEEE, Taipei, Taiwan, pp. 1-6.
View/Download from: Publisher's site
View description>>
Traffic speed prediction is among the key problems in intelligent transportation systems (ITS). Traffic patterns with complex spatial dependency make accurate prediction on traffic networks a challenging task. Recently, a deep learning approach named Spatio-Temporal Graph Convolutional Networks (STGCN) has achieved state-of-the-art results in traffic speed prediction by jointly exploiting the spatial and temporal features of traffic data. Nonetheless, applying STGCN to a large-scale urban traffic network may produce degenerate results due to redundant spatial information engaging in graph convolution. In this work, we propose a motif-based graph-clustering approach to apply STGCN to large-scale traffic networks. Using graph clustering, we partition a large urban traffic network into smaller clusters to promote the learning effect of graph convolution. The proposed approach is evaluated on two real-world datasets and compared with its variants and baseline methods. The results show that graph-clustering approaches generally outperform the other methods, and the proposed approach obtains the best performance.
Zhang, D, Zhang, Q, Zhang, G & Lu, J 1970, 'Recommender systems with heterogeneous information network for cold start items', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC, Cologne, Germany, pp. 496-504.
View/Download from: Publisher's site
View description>>
Recommender systems have been widely adopted in real-world applications. Collaborative Filtering (CF) and matrix-based approaches have been at the forefront for the past decade in both implicit and explicit recommendation tasks. One prominent challenge that most recommendation approaches face is dealing with different data quality conditions, i.e., cold start and data sparsity. Some model-based CF methods use a condensed latent space to overcome the sparsity problem. However, when dealing with the constant cold start problem, CF-based approaches can be ineffective and costly. In this paper, we propose MERec, a novel approach that adopts graph meta-path embedding to learn item/user features independently, besides learning from user-item interactions. It allows unseen data to be incorporated as part of the user/item learning process. Our experiments demonstrate an effective reduction of the cold-start impact for both new and sparse datasets.
Zhang, J, Zhang, J, Chen, J & Yu, S 1970, 'GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning', ICC 2020 - 2020 IEEE International Conference on Communications (ICC), ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, Dublin, Ireland, pp. 1-6.
View/Download from: Publisher's site
View description>>
Federated learning has lately received great attention for its privacy protection feature. However, recent research has found that federated learning models are susceptible to various inference attacks. In this paper, we point out a membership inference attack method that can cause a serious privacy leakage in federated learning. An adversary who is a participant in federated learning can train a classification attack model to launch the membership inference attack, which determines whether a data record is in the model's training dataset. The existing membership inference method performs unsatisfactorily due to a lack of attack data, since the training data of each participant are independent. To overcome the lack of attack data, an adversary can enrich the attack data using a generative adversarial network (GAN), which is a practical method to increase data diversity. We substantiate that this GAN-enhanced membership inference attack method achieves a 98% attack accuracy. We perform experiments to show that data diversity and overfitting make federated learning models susceptible.
Zhang, L, Wang, X, Yao, L & Zheng, F 1970, 'Zero-Shot Object Detection with Textual Descriptions Using Convolutional Neural Networks', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK.
View/Download from: Publisher's site
Zhang, L, Wang, X, Yao, L, Wu, L & Zheng, F 1970, 'Zero-Shot Object Detection via Learning an Embedding from Semantic Space to Visual Space', Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}, International Joint Conferences on Artificial Intelligence Organization, Yokohama, Japan, pp. 906-912.
View/Download from: Publisher's site
View description>>
Zero-shot object detection (ZSD) has received considerable attention from the computer vision community in recent years. It aims to simultaneously locate and categorize previously unseen objects during inference. One crucial problem of ZSD is how to accurately predict the label of each object proposal, i.e. categorize object proposals, when conducting ZSD for unseen categories. Previous ZSD models generally relied on learning an embedding from visual space to semantic space, or on learning a joint embedding between semantic descriptions and visual representations. As the features in the learned semantic space or the jointly projected space tend to suffer from the hubness problem, namely that feature vectors are likely embedded in an area of incorrect labels, this leads to lower detection precision. In this paper, instead, we propose to learn a deep embedding from the semantic space to the visual space, which alleviates the hubness problem well because, compared with the semantic space or a joint embedding space, the distribution in the visual space has smaller variance. After learning the deep embedding model, we perform k nearest neighbor search in the visual space of unseen categories to determine the category of each semantic description. Extensive experiments on two public datasets show that our approach significantly outperforms the existing methods.
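The inference step, projecting each class's semantic vector into visual space and assigning a proposal the nearest class, can be sketched as follows. The linear map `W` is a stand-in for the paper's learned deep embedding (an assumption for brevity), and k is fixed to 1.

```python
def project(semantic_vec, W):
    """Map a class's semantic vector into visual space with an embedding
    matrix W (a linear stand-in for the learned deep model)."""
    return [sum(w * s for w, s in zip(row, semantic_vec)) for row in W]

def classify(visual_feat, class_semantics, W):
    """Assign the unseen-class label whose projected semantic vector is
    the nearest neighbour of the proposal's visual feature (k = 1)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_semantics,
               key=lambda c: sq_dist(visual_feat, project(class_semantics[c], W)))

# Toy 2-D example: identity map, two unseen classes with one-hot semantics.
W = [[1.0, 0.0], [0.0, 1.0]]
class_semantics = {'cat': [1.0, 0.0], 'dog': [0.0, 1.0]}
```

Because the comparison happens in visual space, where (per the abstract) the feature distribution has smaller variance, projected class vectors are less likely to collapse into hubs than in the semantic direction.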
Zhang, M, Zhou, J, Zhang, G, Huang, L, Wang, T & Yu, S 1970, 'Scalable and Updatable Attribute-based Privacy Protection Scheme for Big Data Publishing', GLOBECOM 2020 - 2020 IEEE Global Communications Conference, GLOBECOM 2020 - 2020 IEEE Global Communications Conference, IEEE, Taipei, Taiwan, pp. 1-6.
View/Download from: Publisher's site
View description>>
To ensure data security and privacy during big data publishing, it is challenging to design a security and privacy protection scheme for a big data environment with a large scale of users. At the same time, because users dynamically join and exit, it is also very important to design a dynamic update mechanism for users. To address these challenges, we design a novel scalable and updatable attribute-based privacy protection scheme (SUAPP) for big data publishing. The proposed scheme realizes hierarchical management of users, which reduces the overhead of key generation and management caused by the large scale of data users in the big data center (BDC). We set a user group for each attribute, then adopt the Chinese remainder theorem to dynamically assist the big data center in generating and updating group keys for the attribute user group. Analyses and experiments show that while ensuring the privacy protection of big data publishing, our scheme also has low communication and computation overhead and higher efficiency compared with two state-of-the-art peer schemes.
Zhang, Q, Li, Y, Zhang, G & Lu, J 2020, 'A recurrent neural network-based recommender system framework and prototype for sequential E-learning', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC, Cologne, Germany, pp. 488-495.
View/Download from: Publisher's site
View description>>
In the fast pace of modern life, E-learning has become a new way to improve oneself and stay competitive. Recommendation is needed in an E-learning system to filter suitable courses for users who face a massive amount of information during course enrolment. However, due to the complexity of each course and changes in user interest, it is challenging to provide accurate recommendations. This paper proposes an E-learning recommender system that combines a recurrent neural network (RNN) with content-based techniques to support users in course selection. The content-based techniques mine the relationships between courses, and the recurrent neural network extracts user interests from the sequence of his/her enrolled courses. The proposed E-learning recommender system framework takes sequential connections into consideration and intends to provide students with more precise course recommendations. The system is implemented with the Django framework and an ElephantSQL cloud database, and is deployed on Amazon Elastic Compute Cloud.
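The combination of content-based similarity and sequential user interest can be illustrated with a minimal sketch (the course vectors are hypothetical, and the exponential recency decay stands in for the RNN's sequential memory; the paper trains an actual RNN rather than using this closed-form weighting):

```python
import math

# Hypothetical course feature vectors (e.g., TF-IDF over course descriptions).
courses = {
    "python101": [1.0, 0.2, 0.0],
    "ml_basics": [0.8, 0.9, 0.1],
    "deep_learning": [0.5, 1.0, 0.6],
    "cooking": [0.0, 0.1, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend(history, decay=0.5):
    # Recency-weighted profile: later enrolments dominate, mimicking the
    # sequential emphasis an RNN would learn from enrolment order.
    weights = [decay ** (len(history) - 1 - i) for i in range(len(history))]
    profile = [sum(w * courses[c][d] for w, c in zip(weights, history))
               for d in range(3)]
    # Content-based step: rank unseen courses by similarity to the profile.
    candidates = [c for c in courses if c not in history]
    return sorted(candidates,
                  key=lambda c: cosine(profile, courses[c]), reverse=True)

print(recommend(["python101", "ml_basics"]))  # → ['deep_learning', 'cooking']
```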
Zhang, Q, Lu, J & Zhang, G 2020, 'Cross-Domain Recommendation with Multiple Sources', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, pp. 1-7.
View/Download from: Publisher's site
View description>>
Data sparsity remains a challenging and common problem in real-world recommender systems; it impairs the accuracy of recommendation and thus damages user experience. Cross-domain recommender systems are developed to deal with the data sparsity problem by transferring knowledge from a source domain with relatively abundant data to a target domain with insufficient data. However, two challenging issues exist in cross-domain recommender systems: 1) domain shift, which makes the knowledge from the source domain inconsistent with that in the target domain; and 2) knowledge extracted from only one source domain may be insufficient, while knowledge is potentially available in many other source domains. To handle these issues, we develop a cross-domain recommendation method that extracts group-level knowledge from multiple source domains to improve recommendation in a sparse target domain. Domain adaptation techniques are applied to eliminate the domain shift and to align user and item groups so that knowledge remains consistent during the transfer learning process. Knowledge is extracted not from one but from multiple source domains through an intermediate subspace, and is adapted through flexible constraints on matrix factorization in the target domain. Experiments conducted on five datasets across three categories show that the proposed method outperforms six benchmarks and increases the accuracy of recommendations in the target domain.
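The notion of transferring group-level knowledge can be illustrated with a toy codebook-style sketch (the cluster assignments, the block-average codebook, and the ratings are all illustrative assumptions; the paper uses matrix factorization with subspace-based domain adaptation, not this exact construction):

```python
# Dense source-domain rating matrix (rows: users, columns: items).
source = [
    [5, 4, 1, 1],
    [4, 5, 1, 2],
    [1, 1, 5, 4],
]
user_groups = [0, 0, 1]      # hypothetical user clusters in the source
item_groups = [0, 0, 1, 1]   # hypothetical item clusters in the source

def codebook(R, ug, ig, n_ug=2, n_ig=2):
    # Group-level knowledge: average rating of each (user group, item group)
    # block. This is the piece that transfers across domains.
    sums = [[0.0] * n_ig for _ in range(n_ug)]
    counts = [[0] * n_ig for _ in range(n_ug)]
    for u, row in enumerate(R):
        for i, r in enumerate(row):
            sums[ug[u]][ig[i]] += r
            counts[ug[u]][ig[i]] += 1
    return [[s / c for s, c in zip(srow, crow)]
            for srow, crow in zip(sums, counts)]

B = codebook(source, user_groups, item_groups)

# Target domain: a sparse user aligned to source user-group 0 borrows the
# group-level pattern to fill a missing rating for an item in item-group 1.
predicted = B[0][1]  # → 1.25
```

Aligning target users and items to the source groups is where the paper's domain adaptation step comes in; this sketch simply assumes the alignment is given.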
Zhang, Q, Zhang, G, Lu, J & Lin, H 2020, 'A framework of clinical recommender system with genomic information', Developments of Artificial Intelligence Technologies in Computation and Robotics, 14th International FLINS Conference (FLINS 2020), WORLD SCIENTIFIC, Cologne, Germany, pp. 522-529.
View/Download from: Publisher's site
View description>>
Clinicians make decisions that affect life, death, and quality of life every single day. It is important to support clinicians by discovering medical knowledge from accumulated electronic health records (EHRs). The integration of genomic information with EHRs has long been recognized by the medical community, as genomics captures inherent features of disease. There is an urgent demand for a clinical recommender system that can deal with both genomic and phenotypic data. This paper proposes a framework for a clinical recommender system with genomic information, which is used in the clinical process and connects four types of users: clinicians, patients, clinical labs, and researchers. Using models and methods from artificial intelligence (AI), several functions are designed in this framework, including diagnosis prediction, disease risk prediction, test prediction, and event prediction. The proposed framework will help clinicians decide on the next step in clinical care for patients.
Zhang, Y, Bai, G, Li, X, Curtis, C, Chen, C & Ko, RKL 2020, 'PrivColl: Practical Privacy-Preserving Collaborative Machine Learning'.
View description>>
Collaborative learning enables two or more participants, each with their own training dataset, to collaboratively learn a joint model. It is desirable that the collaboration should not cause the disclosure of either the raw datasets of each individual owner or the local model parameters trained on them. This privacy-preservation requirement has been approached through differential privacy mechanisms, homomorphic encryption (HE) and secure multiparty computation (MPC), but existing attempts may either introduce a loss of model accuracy or imply significant computational and/or communication overhead. In this work, we address this problem with the lightweight additive secret sharing technique. We propose PrivColl, a framework for protecting local data and local models while ensuring the correctness of training processes. PrivColl employs the secret sharing technique for securely evaluating addition operations in a multiparty computation environment, and achieves practicability by employing only homomorphic addition operations. We formally prove that it guarantees privacy preservation even when the majority (n-2 out of n) of participants are corrupted. With experiments on real-world datasets, we further demonstrate that PrivColl retains high efficiency. It achieves a speedup of more than 45X over state-of-the-art MPC/HE-based schemes for training linear/logistic regression, and is 216X faster for training neural networks.
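The additive secret sharing primitive underlying PrivColl can be sketched in a few lines (the modulus, the three-party setting, and the values below are illustrative; this is the generic textbook technique, not PrivColl's full protocol):

```python
import random

P = 2**61 - 1  # a large prime modulus for the additive sharing (assumption)

def share(secret, n):
    # Split `secret` into n additive shares that sum to it modulo P.
    # Any n-1 shares look uniformly random and reveal nothing on their own.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a_shares = share(123, 3)
b_shares = share(456, 3)
# Each party adds its own shares locally; no single party ever sees
# a, b, or a + b in the clear. This is the "only homomorphic addition"
# property the abstract credits for PrivColl's practicability.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 579
```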
Zhang, Y, Liu, F, Fang, Z, Yuan, B, Zhang, G & Lu, J 2020, 'Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation', Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}, International Joint Conferences on Artificial Intelligence Organization, pp. 2526-2532.
View/Download from: Publisher's site
View description>>
In unsupervised domain adaptation (UDA), classifiers for the target domain are trained with massive true-label data from the source domain and unlabeled data from the target domain. However, it may be difficult to collect fully true-label data in a source domain given a limited budget. To mitigate this problem, we consider a novel problem setting, named budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain. The key benefit is that it is much less costly to collect complementary-label source data (required by BFUDA) than to collect true-label source data (required by ordinary UDA). To this end, the complementary label adversarial network (CLARINET) is proposed to solve the BFUDA problem. CLARINET maintains two deep networks simultaneously: one focuses on classifying complementary-label source data, and the other takes care of source-to-target distributional adaptation. Experiments show that CLARINET significantly outperforms a series of competent baselines.
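A complementary label states which class an example does *not* belong to, and learning from it can be illustrated with a minimal surrogate loss (the `-log(1 - p)` surrogate below is one common choice and an assumption here; CLARINET's actual loss and adversarial adaptation are more involved):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def complementary_loss(logits, not_class):
    # The example does NOT belong to `not_class`, so penalise any
    # probability mass the classifier assigns to that class.
    p = softmax(logits)
    return -math.log(1.0 - p[not_class])

# A classifier that puts little mass on the forbidden class incurs low loss;
# one that confidently predicts the forbidden class incurs high loss.
low = complementary_loss([2.0, 0.1, -1.0], not_class=2)
high = complementary_loss([-1.0, 0.1, 2.0], not_class=2)
assert low < high
```

The cost advantage the abstract cites follows directly: an annotator can rule out one wrong class far faster than identifying the one true class.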
Zhang, Y, Wang, M, Saberi, M & Chang, E 2020, 'Towards Expert Preference on Academic Article Recommendation Using Bibliometric Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 11-19.
View/Download from: Publisher's site
View description>>
Expert knowledge can be valuable for academic article recommendation; however, hiring domain experts for this purpose is expensive, as it is extremely demanding for humans to deal with a large volume of academic publications. Therefore, an article ranking method that can automatically provide recommendations close to expert decisions is needed. Many algorithms have been proposed to rank articles, but producing quality article recommendations that approximate expert decisions has hardly been considered. In this study, domain expert decisions on recommending quality articles are investigated. Specifically, we hire domain experts to mark articles, and a comprehensive correlation analysis is then performed between the ranking results generated by the experts and those of state-of-the-art automatic ranking algorithms. In addition, we propose a computational model using heterogeneous bibliometric networks to approximate human expert decisions. The model takes into account paper citations, semantic and network-level similarities amongst papers, authorship, venues, publishing time, and the relationships amongst them, to approximate human decision-making factors. Results demonstrate that the proposed model can effectively achieve expert-like decisions on recommending quality articles.
Zhang, Y, Xiao, G, Zheng, Z, Zhu, T, Tsang, IW & Sui, Y 2020, 'An Empirical Study of Code Deobfuscations on Detecting Obfuscated Android Piggybacked Apps', 2020 27th Asia-Pacific Software Engineering Conference (APSEC), 2020 27th Asia-Pacific Software Engineering Conference (APSEC), IEEE, Singapore, Singapore, pp. 41-50.
View/Download from: Publisher's site
View description>>
Android piggybacked malware (i.e., apps that piggyback malicious code) is becoming ubiquitous in app stores. Malware writers often use obfuscation techniques on piggybacked apps to evade detection by Android malware detectors. Previous studies in this field have focused on the impact of code obfuscation on the detection of piggybacked malware, but the impact of code deobfuscation on detecting obfuscated piggybacked apps has rarely been studied. Knowing the impact of code deobfuscation can provide useful insights into obfuscated piggybacked apps and therefore into the design of resilient Android malware detectors. In this paper, we conduct an empirical study of code deobfuscation for detecting obfuscated Android piggybacked apps, focusing on three types of malware detectors: commercial anti-malware products, machine learning-based detectors, and similarity-based detectors. We observe that the impact of code deobfuscation differs depending on the malware detector; for example, some deobfuscation strategies can improve the precision of detecting obfuscated piggybacked apps. We also observe that the examined deobfuscation tools (Simplify and DeGuard) have different impacts on obfuscated piggybacked apps after deobfuscation.
Zhao, H, Lin, Y, Gao, S & Yu, S 2020, 'Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition', GLOBECOM 2020 - 2020 IEEE Global Communications Conference, GLOBECOM 2020 - 2020 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
Zhao, Y, Chen, J, Zhang, J, Wu, D, Teng, J & Yu, S 2020, 'PDGAN: A Novel Poisoning Defense Method in Federated Learning Using Generative Adversarial Network', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 595-609.
View/Download from: Publisher's site
View description>>
© 2020, Springer Nature Switzerland AG. Federated learning can complete an enormous training task efficiently by inviting participants to train a deep learning model collaboratively, and user privacy is well preserved because users only upload model parameters to the centralized server. However, attackers can launch poisoning attacks by uploading malicious updates in federated learning, and the accuracy of the global model will be significantly impaired after such an attack. To address this vulnerability, we propose a novel poisoning defense generative adversarial network (PDGAN) to defend against poisoning attacks. PDGAN can reconstruct training data from model updates and audit the accuracy of each participant's model using the generated data. Precisely, a participant whose accuracy is lower than a predefined threshold is identified as an attacker, and that attacker's model parameters are removed from the training procedure in the current iteration. Experiments conducted on the MNIST and Fashion-MNIST datasets demonstrate that our approach can indeed defend against poisoning attacks in federated learning.
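The accuracy-threshold filtering step described above can be sketched as follows (the updates, audited accuracies, and threshold are toy assumptions; in the paper the accuracies come from auditing each participant's model on GAN-reconstructed data):

```python
def federated_average(updates, accuracies, threshold=0.8):
    # Average only the updates from participants whose audited accuracy
    # clears the threshold; the rest are treated as poisoning attackers
    # and excluded from this aggregation round.
    kept = [u for u, acc in zip(updates, accuracies) if acc >= threshold]
    if not kept:
        raise ValueError("all participants flagged as attackers")
    dim = len(kept[0])
    return [sum(u[d] for u in kept) / len(kept) for d in range(dim)]

honest = [[0.9, 1.1], [1.0, 1.0]]      # hypothetical benign updates
poisoned = [[5.0, -5.0]]               # hypothetical malicious update
global_update = federated_average(honest + poisoned, [0.92, 0.95, 0.31])
# The poisoned update (audited accuracy 0.31) is excluded, so the global
# update stays close to the benign average instead of being dragged away.
```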
Zheng, X, Cao, Z & Bai, Q 2020, 'An Evoked Potential-Guided Deep Learning Brain Representation for Visual Classification', Neural Information Processing, International Conference on Neural Information Processing, Springer International Publishing, Thailand, pp. 54-61.
View/Download from: Publisher's site
View description>>
A new perspective in visual classification aims to decode the feature representation of visual objects from human brain activities. Recording an electroencephalogram (EEG) from the brain cortex has been a prevalent approach to understanding the cognition process during an image classification task. In this study, we proposed a deep learning framework guided by visual evoked potentials extracted from EEG signals, called the Event-Related Potential (ERP)-Long short-term memory (LSTM) framework, for visual classification. Specifically, we first extracted the ERP sequences from multiple EEG channels in response to image stimuli. Then, we trained an LSTM network to learn the feature representation space of visual objects for classification. In the experiment, over 50,000 EEG trials were recorded from 10 subjects viewing an image dataset with 6 categories and a total of 72 exemplars. Our results showed that the proposed ERP-LSTM framework achieved cross-subject classification accuracies of 66.81% for categories (6 classes) and 27.08% for exemplars (72 classes), respectively. These results outperformed those of existing visual classification frameworks, improving classification accuracies in the range of 12.62%–53.99%. Our findings suggest that decoding visual evoked potentials from EEG signals is an effective strategy for learning discriminative brain representations for visual classification.
Zuo, H, Lu, J & Zhang, G 2020, 'Multiple-source Domain Adaptation in Rule-based Neural Network', 2020 International Joint Conference on Neural Networks (IJCNN), 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2020 IEEE. Domain adaptation uses previously acquired knowledge (the source domain) to support prediction tasks in a current domain without sufficient labeled data (the target domain). Although many domain adaptation methods have been developed, one issue has not been solved: how to implement knowledge transfer when more than one source domain is available. In this paper, we present a neural network-based method that extracts domain knowledge in the form of rules to facilitate knowledge transfer, merges the rules from all source domains, and then selects the rules related to the target domain while clipping redundant ones. The presented method is validated on datasets that simulate the multi-source scenario, and the experimental results verify the superiority of our method in handling multi-source domain adaptation problems.