Ashraf, J, Hussain, OK, Hussain, FK & Chang, EJ 2018, Measuring and Analysing the Use of Ontologies, Springer International Publishing, Switzerland.
Liu, B, Zhou, W, Zhu, T, Xiang, Y & Wang, K 2018, Location Privacy in Mobile Applications, Springer Singapore.
Beydoun, G, Voinov, A & Sugumaran, V 2018, 'Beyond Service-Oriented Architectures' in Sugumaran, V (ed), Developments and Trends in Intelligent Technologies and Smart Systems, IGI Global, USA, pp. 16-27.
Predictions that Service Oriented Architectures (SOA) would deliver transformational results for the role and capabilities of IT in businesses have fallen short. Unforeseen challenges have often emerged in SOA adoption. They fall into two categories: technical issues stemming from difficulties in reusing service components, and organizational issues stemming from inadequate support from, or understanding by, an organisation's executive management of what is required to facilitate the technical rollout. This chapter first explores and analyses the hindrances to the full exploitation of SOA. It then proposes an alternative service delivery approach that is based on an even higher degree of loose coupling than SOA. The approach promotes knowledge services and agent-based support for the integration and identification of services. To support the arguments, this chapter sketches, as a proof of concept, the operationalization of such a service delivery system in disaster management.
Cotta, C, Mathieson, L & Moscato, P 2018, 'Memetic Algorithms' in Resende, MGC, Marti, R & Pardalos, PM (eds), Handbook of Heuristics, Springer International Publishing, Switzerland, pp. 607-638.
© Springer International Publishing AG, part of Springer Nature 2018. All rights reserved. Memetic algorithms provide one of the most effective and flexible metaheuristic approaches for tackling hard optimization problems. Memetic algorithms address the difficulty of developing high-performance universal heuristics by encouraging the exploitation of multiple heuristics acting in concert, making use of all available sources of information for a problem. This approach has resulted in a rich arsenal of heuristic algorithms and metaheuristic frameworks for many problems. This chapter discusses the philosophy of the memetic paradigm, lays out the structure of a memetic algorithm, develops several example algorithms, surveys recent work in the field, and discusses the possible future directions of memetic algorithms.
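The memetic template described in this abstract can be sketched in a few lines: an evolutionary loop (tournament selection, uniform crossover, mutation) hybridized with a local search applied to every individual. The following Python sketch uses a toy OneMax fitness; the problem, parameter values and function names are illustrative choices, not taken from the chapter.

```python
import random

def onemax(bits):
    # Toy fitness: count of ones (a real MA would target a hard problem)
    return sum(bits)

def local_search(bits, fitness):
    # Single-pass bit-flip hill climbing: the "meme" refining an individual
    bits = bits[:]
    for i in range(len(bits)):
        before = fitness(bits)
        bits[i] ^= 1                 # try flipping bit i
        if fitness(bits) <= before:  # keep the flip only if it improves
            bits[i] ^= 1
    return bits

def memetic(fitness, n_bits=20, pop_size=10, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    pop = [local_search(ind, fitness) for ind in pop]  # refine initial population
    for _ in range(generations):
        def tournament():
            # Binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        p1, p2 = tournament(), tournament()
        # Uniform crossover, light mutation, then local refinement of the child
        child = [rng.choice(pair) for pair in zip(p1, p2)]
        if rng.random() < 0.2:
            child[rng.randrange(n_bits)] ^= 1
        child = local_search(child, fitness)
        # Steady-state replacement of the worst individual
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)
```

The key design point, as the chapter's paradigm suggests, is that the local search and the evolutionary operators act in concert: any problem-specific heuristic can be slotted in as the refinement step.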
Daim, TU, Oliver, T & Phaal, R 2018, 'Technology Roadmapping' in Daim, T, Oliver, T & Phaal, R (eds), Technology Roadmapping (World Scientific Series in R&D Management Vol 2), World Scientific, Singapore, pp. 33-62.
This study investigates which technology management (TM) tools are used in practice, what determines their usage, and whether they affect the user firms' performance. Based on a survey of 52 electronics and machinery firms in Turkey, the study shows there are significant relationships between the number of TM tools and techniques that a firm uses and (i) the hierarchical level of the chief technology officer (CTO) or most senior manager responsible for technology, (ii) his/her field of education, and (iii) the size of the firm. The findings indicate a significant and linear relationship between the extent to which the firms have reached their growth targets and the number of TM tools and techniques used. This relationship is, however, not observed between the firms' profitability and the number of TM tools and techniques. The findings have important implications for the practice of TM.
Gil-Lafuente, AM, Merigó, JM, Dass, BK & Verma, R 2018, 'Preface', pp. v-vii.
Li, J, Chen, Z & Ma, Z 2018, 'Learning Colours from Textures by Effective Representation of Images' in Yurish, SY (ed), Advances in Signal Processing: Reviews, International Frequency Sensor Association (IFSA) Publishing, Spain, pp. 277-304.
Arguably the majority of existing image and video analytics are based on texture. However, the other important aspect, colour, must also be considered for comprehensive analytics. Colours do not only make images feel more vivid to viewers; they also contain important visual clues about the image [20, 54, 24]. Although a modern point-and-shoot digital camera can easily capture colour images, there are circumstances where we need to recover the chromatic information in an image. For example, photography in the old days was monochrome and provided only gray-scale images. Adding colours can rejuvenate these old pictures and make them more adorable as personal memoirs or more accessible as archival documents for public or educational purposes. For a colour image, re-colourisation may be necessary if the white balance was poorly set when shooting the picture. In this case, a particular colour channel can be severely over- or under-exposed, making it infeasible to adjust the white balance based on the recorded colours. A possible rescue of the picture is to keep only the luminance and re-colourise the image. Another application of colourisation arises in specialised imaging, where the sensors capture signals outside the visible spectrum of light, e.g. X-ray, MRI, and near-infrared images. Pseudo colours make these images more readily interpretable by human experts and can also indicate potentially interesting regions.
Xu, G, Wu, Z, Cao, J & Tao, H 2018, 'Models for Community Dynamics' in Encyclopedia of Social Network Analysis and Mining, Springer New York, pp. 1378-1392.
Za’in, C, Pratama, M, Prasad, M, Puthal, D, Lim, CP & Seera, M 2018, 'Motor Fault Detection and Diagnosis Based on a Meta-cognitive Random Vector Functional Link Network' in Fault Diagnosis of Hybrid Dynamic and Complex Systems, Springer International Publishing, Switzerland, pp. 15-44.
Zhang, Y & Xu, G 2018, 'Singular Value Decomposition' in Encyclopedia of Database Systems, Springer New York, pp. 3506-3508.
Abolbashari, MH, Chang, E, Hussain, OK & Saberi, M 2018, 'Smart Buyer: A Bayesian Network modelling approach for measuring and improving procurement performance in organisations', Knowledge-Based Systems, vol. 142, pp. 127-148.
© 2017 Elsevier B.V. Procurement, the act of buying goods or services from an external supplier, plays an important role in any organisation. To measure how well an organisation undertakes this activity, it needs to measure all the associated Key Performance Indicators (KPIs). A major drawback of the current literature in performing such a measurement is the lack of a way to integrate the different KPIs, each of which captures a specific aspect of the organisation's performance. In this paper, we highlight this drawback and present our proposed Smart Buyer framework, which is based on a Bayesian Network (BN) model capable of capturing and integrating the different KPIs. The measured procurement performance value can then be used by organisations to identify the areas in which they need to improve and to develop plans to achieve this. Four scenarios are presented to show how the proposed BN model can be further used for analysis and decision making within organisations. Finally, a recent real-world procurement example is studied to demonstrate the applicability of the proposed Smart Buyer framework.
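As a rough illustration of the kind of inference a Bayesian Network enables, the sketch below builds a toy two-KPI network and computes the posterior probability of good procurement performance by enumeration. The node names and all CPT values are invented for illustration; the paper's Smart Buyer network integrates many more KPIs.

```python
from itertools import product

# Hypothetical priors and CPT (illustrative numbers, not from the paper)
P_delivery = {True: 0.8, False: 0.2}   # P(on-time delivery KPI is met)
P_cost = {True: 0.7, False: 0.3}       # P(cost-efficiency KPI is met)
# P(good performance | delivery, cost)
P_perf = {(True, True): 0.9, (True, False): 0.6,
          (False, True): 0.5, (False, False): 0.1}

def joint(d, c, p):
    # Joint probability of one full assignment of the three binary nodes
    pp = P_perf[(d, c)]
    return P_delivery[d] * P_cost[c] * (pp if p else 1 - pp)

def p_perf_given(delivery=None, cost=None):
    # Inference by enumeration: P(performance = True | evidence)
    num = den = 0.0
    for d, c, p in product([True, False], repeat=3):
        if delivery is not None and d != delivery:
            continue
        if cost is not None and c != cost:
            continue
        w = joint(d, c, p)
        den += w
        if p:
            num += w
    return num / den
```

Setting evidence on a KPI node and re-running the query mimics the paper's scenario analysis: an organisation can see how improving one KPI shifts the overall performance estimate.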
Aboutorab, H, Saberi, M, Asadabadi, MR, Hussain, O & Chang, E 2018, 'ZBWM: The Z-number extension of Best Worst Method and its application for supplier development', Expert Systems with Applications, vol. 107, pp. 115-125.
© 2018 Elsevier Ltd. The Best Worst Method (BWM) has recently been proposed as a method for Multi-Criteria Decision Making (MCDM). Studies show that BWM, compared with other methods such as the Analytic Hierarchy Process (AHP), leads to lower inconsistency of the results while reducing the number of required pairwise comparisons. MCDM methods such as BWM require accurate information; in practice, however, a level of uncertainty often accompanies the information. The main aim of this paper is to address this problem and provide an integration of BWM and Z-numbers, namely ZBWM. Equipping BWM with Z-numbers enables the method to handle the uncertainty of information in a multi-criteria decision. Additionally, the capabilities of the proposed method in utilizing linguistic information when dealing with big data are highlighted. The proposed method is applied to a supplier development problem. Experimental results show that ZBWM results in lower inconsistency when compared with BWM. A Z-number contains subjectivity in its fuzzy part, which can be addressed in future applications of ZBWM.
Alderighi, T, Malomo, L, Giorgi, D, Pietroni, N, Bickel, B & Cignoni, P 2018, 'Metamolds: computational design of silicone molds', ACM Trans. Graph., vol. 37, no. 4, pp. 136-136.
Alfaro-García, VG, Merigó, JM, Gil-Lafuente, AM & Kacprzyk, J 2018, 'Logarithmic aggregation operators and distance measures', International Journal of Intelligent Systems, vol. 33, no. 7, pp. 1488-1506.
© 2018 Wiley Periodicals, Inc. The Hamming distance is a well-known measure that is designed to provide insights into the similarity between two strings of information. In this study, we use the Hamming distance, the optimal deviation model, and the generalized ordered weighted logarithmic averaging (GOWLA) operator to develop the ordered weighted logarithmic averaging distance (OWLAD) operator and the generalized ordered weighted logarithmic averaging distance (GOWLAD) operator. The main advantage of these operators is the possibility of modeling a wider range of complex representations of problems under the assumption of an ideal possibility. We study the main properties, alternative formulations, and families of the proposed operators. We analyze multiple classical measures to characterize the weighting vector and propose alternatives to deal with the logarithmic properties of the operators. Furthermore, we present generalizations of the operators, which are obtained by studying their weighting vectors and the lambda parameter. Finally, an illustrative example regarding innovation project management measurement is proposed, in which a multi-expert analysis and several of the newly introduced operators are utilized.
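To make the building blocks concrete, here is a small Python sketch of the normalized Hamming distance and the basic ordered weighted averaging distance (OWAD) that underlies the operators above; the paper's OWLAD/GOWLAD operators add the logarithmic (GOWLA) aggregation on top, which is not reproduced here. Function names are illustrative.

```python
def hamming_distance(x, y):
    # Normalized Hamming distance between two equal-length numeric tuples
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def owad(x, y, weights):
    # Ordered weighted averaging distance: the individual distances
    # |x_i - y_i| are reordered in descending order before being
    # combined with the OWA weighting vector (which sums to 1).
    assert abs(sum(weights) - 1.0) < 1e-9
    d = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * di for w, di in zip(weights, d))
```

With equal weights, OWAD reduces to the normalized Hamming distance; skewing the weights toward the largest (or smallest) deviations is what lets the decision maker express an attitudinal character.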
Alshehri, MD, Hussain, FK & Hussain, OK 2018, 'Clustering-Driven Intelligent Trust Management Methodology for the Internet of Things (CITM-IoT)', Mobile Networks and Applications, vol. 23, no. 3, pp. 419-431.
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. The growth and adoption of the Internet of Things (IoT) is increasing day by day. The large number of IoT devices increases the risk of security threats such as (but not limited to) viruses or cyber-attacks. One possible approach to achieving IoT security is to enable a trustworthy IoT environment wherein interactions are based on the trust value of the communicating nodes. Trust management and trust assessment have been extensively studied in distributed networks in general and the IoT in particular, but there are still outstanding pressing issues, such as bad-mouthing of trust values, which prevent these solutions from being used in practical IoT applications. Furthermore, there is no research on ensuring that the developed IoT trust solutions are scalable across billions of IoT nodes. To address the above-mentioned issues, we propose a methodology for a scalable trust management solution in the IoT. The methodology addresses practical and pressing issues related to IoT trust management, such as trust-based IoT clustering, intelligent methods for countering bad-mouthing attacks on trust systems, memory-efficient trust computation, and trust-based migration of IoT nodes from one cluster to another. Experimental results demonstrate the effectiveness of the proposed approaches.
Alzoubi, YI, Gill, AQ & Moulton, B 2018, 'A measurement model to analyze the effect of agile enterprise architecture on geographically distributed agile development.', J. Softw. Eng. Res. Dev., vol. 6, no. 4, pp. 4-4.
Andrade-Valbuena, NA & Merigo, JM 2018, 'Outlining new product development research through bibliometrics', Journal of Strategy and Management, vol. 11, no. 3, pp. 328-350.
Purpose: New product development (NPD) is a noteworthy field that has attracted the attention of scholars for its relevance to firm success. Based on bibliometric indicators and spatial distance network analysis, the authors outline the general structure of NPD research through the last 40 years of scientific production; identify and categorize key articles, authors, journals, institutions, and countries related to NPD research; and identify and map the research subareas that have contributed most to the construction of the NPD intellectual structure. The paper aims to discuss these issues.
Design/methodology/approach: The work uses the Web of Science Core Collection and the visualization of similarities viewer software. The analysis searches for all the documents connected to NPD available in the database. The graphical visualization maps the bibliographic data in terms of bibliographic coupling and co-citation.
Findings: The general NPD citation pattern evidences a construction of knowledge and learning, as seen in different subjects such as biology or physics. Relevant contributions and contributors are highlighted as journals, articles, researchers, countries and institutions in overall NPD research and in its constituent subfields. Five subareas of the NPD field are identified based on journal and author networks: marketing; operations and production; strategy; industrial engineering and operations; and management.
Originality/value: This paper contributes to the NPD literature by offering a global perspective on the field by using bibliometric data graphical networks, provid...
Asadabadi, MR, Saberi, M & Chang, E 2018, 'Targets of Unequal Importance Using the Concept of Stratification in a Big Data Environment', International Journal of Fuzzy Systems, vol. 20, no. 4, pp. 1373-1384.
© 2017, Taiwan Fuzzy Systems Association and Springer-Verlag GmbH Germany, part of Springer Nature. The concept of stratification (CST) has recently been proposed as an innovative approach to problem solving. CST takes a recursive approach: it considers a system that has to transition through states until it arrives at a state belonging to a desired set of states, namely a target set. The states can be stratified by enlarging the target (absorbing adjacent states). Incremental enlargement is a means to identify possible paths to achieve the target; such an enlargement can also be used to degrade the target when the original target is not reachable. Although the characteristics of the concept, such as incremental enlargement, enhance its potential application in robotics, artificial intelligence, and planning and monitoring, the approach has a major shortcoming, namely its inability to consider targets of unequal importance. This study considers two targets of unequal importance for the system in CST, labelled the Bi-Objective CST model (BOCST). In addition to the original CST model, this research proposes a version of CST with finite states, which is much easier to apply than the original, labelled fuzzy CST. Following that, a combination of fuzzy CST and BOCST (FBO-CST) is proposed. The model is then employed to address a restaurant selection problem using data from Google. The example illustrates how the model should be applied in a big data environment. By defining finite-state CST and considering targets of unequal importance, this study is expected to facilitate future applications of CST.
Avilés-Ochoa, E, León-Castro, E, Perez-Arellano, LA & Merigó, JM 2018, 'Government transparency measurement through prioritized distance operators', Journal of Intelligent & Fuzzy Systems, vol. 34, no. 4, pp. 2783-2794.
© 2018 - IOS Press and the authors. All rights reserved. The prioritized induced probabilistic ordered weighted average distance (PIPOWAD) operator is developed. This new operator is an extension of the ordered weighted average (OWA) operator that can be used when two sets of data are to be compared. Some of its main characteristics are: 1) not all decision makers are equally important, so the information needs to be prioritized; 2) the information has a probability of occurring; and 3) the decision makers can change the importance of the information based on an induced variable. Additionally, characteristics and families of the PIPOWAD operator are presented. Finally, an application of the PIPOWAD operator to measuring government transparency in Mexico is presented.
Babar, A, Bunker, D & Qumer Gill, A 2018, 'Investigating the Relationship between Business Analysts’ Competency and IS Requirements Elicitation: A Thematic-analysis Approach', Communications of the Association for Information Systems, vol. 42, no. 1, pp. 334-362.
© 2018 by the Association for Information Systems. Researchers and practitioners have consistently reported poor requirements elicitation (RE) as one of the major reasons for information system (IS) project failures. In the last two decades, RE research and practice have focused predominantly on developing tools and techniques for business analysts (BAs) to use and improve RE; however, they have paid little attention to the importance of the competency of the BAs involved in RE. We investigate the relationship between BAs' competency and RE through an exploratory study. We applied a thematic network analysis approach, along with a four-stage qualitative data-analysis process, to discover four business-view and six system-view themes and their relationships to BAs' competency. Our results indicate that senior, intermediate, and junior BAs performed similarly in selecting stakeholders' viewpoints and collecting requirements from them; however, senior BAs focused more on high-level requirements than on the low-level technical requirements of the system. The results suggest that BAs' competency plays a significant role in RE and that organizations that clearly define BAs' competency are better able to identify the right BA for the right job.
Babbush, R, Berry, DW, Sanders, YR, Kivlichan, ID, Scherer, A, Wei, AY, Love, PJ & Aspuru-Guzik, A 2018, 'Exponentially more precise quantum simulation of fermions in the configuration interaction representation', Quantum Science and Technology, vol. 3, no. 1, pp. 015006-015006.
We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require Õ(N) qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires Õ(η) qubits, where η ≪ N is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as Õ(η²N³t).
Bai, L, Wang, J, Ma, X & Lu, H 2018, 'Air Pollution Forecasts: An Overview', International Journal of Environmental Research and Public Health, vol. 15, no. 4, pp. 780-780.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. Air pollution is defined as a phenomenon harmful to the ecological system and the normal conditions of human existence and development when some substances in the atmosphere exceed a certain concentration. In the face of increasingly serious environmental pollution problems, scholars have conducted a significant quantity of related research, and in those studies, the forecasting of air pollution has been of paramount importance. As a precaution, the air pollution forecast is the basis for taking effective pollution control measures, and accurate forecasting of air pollution has become an important task. Extensive research indicates that the methods of air pollution forecasting can be broadly divided into three classical categories: statistical forecasting methods, artificial intelligence methods, and numerical forecasting methods. More recently, some hybrid models have been proposed, which can improve the forecast accuracy. To provide a clear perspective on air pollution forecasting, this study reviews the theory and application of those forecasting models. In addition, based on a comparison of different forecasting methods, the advantages and disadvantages of some methods of forecasting are also provided. This study aims to provide an overview of air pollution forecasting methods for easy access and reference by researchers, which will be helpful in further studies.
Baier-Fuentes, H, Cascón-Katchadourian, J, Sánchez, ÁM, Herrera-Viedma, E & Merigó, J 2018, 'A Bibliometric Overview of the International Journal of Interactive Multimedia and Artificial Intelligence', International Journal of Interactive Multimedia and Artificial Intelligence, vol. 5, no. 3, pp. 9-9.
Bano, M, Zowghi, D & Rimini, FD 2018, 'User Involvement in Software Development: The Good, the Bad, and the Ugly.', IEEE Softw., vol. 35, no. 6, pp. 8-11.
© 2018 IEEE. Merely involving the users in software development won't guarantee system success. User involvement is a complex, multifaceted phenomenon with a good side, a bad side, and an ugly side. A better, deeper understanding of those sides can help project managers develop responsive strategies for increasing user involvement's effectiveness.
Bano, M, Zowghi, D, Kearney, M, Schuck, S & Aubusson, P 2018, 'Mobile learning for science and mathematics school education: A systematic review of empirical evidence.', Comput. Educ., vol. 121, pp. 30-58.
© 2018 Elsevier Ltd The ubiquity, flexibility, ease of access and diverse capabilities of mobile technologies make them valuable and a necessity in current times. However, they are under-utilized assets in mathematics and science school education. This article analyses the high quality empirical evidence on mobile learning in secondary school science and mathematics education. Our study employed a Systematic Literature Review (SLR) using well-accepted and robust guidelines. The SLR resulted in the detailed analysis of 49 studies (60 papers) published during 2003–2016. Content and thematic analyses were used to ascertain pedagogical approaches, methodological designs, foci, and intended and achieved outcomes of the studies. The apps and technologies used in these studies were further classified for domain, type and context of use. The review has highlighted gaps in existing literature on the topic and has provided insights that have implications for future research.
Berry, DW, Kieferová, M, Scherer, A, Sanders, YR, Low, GH, Wiebe, N, Gidney, C & Babbush, R 2018, 'Improved techniques for preparing eigenstates of fermionic Hamiltonians', npj Quantum Information, vol. 4, no. 1.
Modeling low energy eigenstates of fermionic systems can provide insight into chemical reactions and material properties and is one of the most anticipated applications of quantum computing. We present three techniques for reducing the cost of preparing fermionic Hamiltonian eigenstates using phase estimation. First, we report a polylogarithmic-depth quantum algorithm for antisymmetrizing the initial states required for simulation of fermions in first quantization. This is an exponential improvement over the previous state-of-the-art. Next, we show how to reduce the overhead due to repeated state preparation in phase estimation when the goal is to prepare the ground state to high precision and one has knowledge of an upper bound on the ground state energy that is less than the excited state energy (often the case in quantum chemistry). Finally, we explain how one can perform the time evolution necessary for the phase estimation based preparation of Hamiltonian eigenstates with exactly zero error by using the recently introduced qubitization procedure.
Bezdek, J, Keller, J, Pal, N, Lin, C-T & Garibaldi, J 2018, 'Editorial Celebrating 25 Years of the IEEE Transactions on Fuzzy Systems', IEEE Transactions on Fuzzy Systems, vol. 26, no. 1, pp. 1-5.
Bickel, B, Cignoni, P, Malomo, L & Pietroni, N 2018, 'State of the Art on Stylized Fabrication.', Comput. Graph. Forum, vol. 37, pp. 325-342.
Blanco-Mesa, F, Gil-Lafuente, AM & Merigo, JM 2018, 'Dynamics of stakeholder relations with multi-person aggregation', Kybernetes, vol. 47, no. 9, pp. 1801-1820.
Purpose: The purpose of this paper is to develop a novel method to analyse dynamic interactions of stakeholders to explain how a set of agents can act by considering their power/influence positions.
Design/methodology/approach: A novel mathematical application uses the importance-of-characteristics algorithm in combination with max-min composition to compare, group and order information according to the importance of its characteristics. The application is focused on strategic analysis, evaluating stakeholder dynamics through power relationships.
Findings: The results compare the relationships among the stakeholders to obtain the relative intensity and importance of the relationships between them, given by the fuzzy matrices FRInM and FRIM, respectively. This application provides a useful tool for a dynamic analysis of stakeholders in a complex environment, where the best approach to performing a strategic analysis process is sought.
Research limitations/implications: The main implication of the proposed approach is taking into account the importance of information to establish the boundaries and relationships of each characteristic according to its intensity. However, limitations arise from the nature of this research, which is based on theoretical assumptions regarding stakeholders and uses a hypothetical example to show the operation of the algorithms.
Originality/value: The primary advantage of this proposition is that it takes into account the im...
Blanco-Mesa, F, Gil-Lafuente, AM & Merigó, JM 2018, 'New aggregation operators for decision-making under uncertainty: an application in the selection of entrepreneurial opportunities', Technological and Economic Development of Economy, vol. 24, no. 2, pp. 335-357.
The main aim of this paper is to study how the economic environment and logical reasoning guide potential entrepreneurs' decision-making process when starting up a new business. The study proposes a new method using the family of selection indices with the OWA operator, which allows information to be aggregated according to its level of importance and its degree of objectivity and subjectivity in the same formulation within the decision-making process. To develop the case study, we take into account some industries of the sports sector and some critical environmental factors that influence competitiveness and entrepreneurship in Colombia when starting a new business. The results present all the aggregated information in an orderly way, which can help potential investors and entrepreneurs to make a decision based on their preferences. Finally, the applicability of this method to real cases lies in aggregating different sources of information to support decision-making processes.
Blanco-Mesa, F, Gil-Lafuente, AM & Merigó, JM 2018, 'Subjective stakeholder dynamics relationships treatment: a methodological approach using fuzzy decision-making', Computational and Mathematical Organization Theory, vol. 24, no. 4, pp. 441-472.
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Since the stakeholder theory was proposed to explain the interaction among its agents, extensive approaches have been developed. However, the literature continues to suggest the development of new methodologies that allow an analysis of the dynamics and uncertainty of the relationships between each agent. In this sense, this research proposes a novel methodology for the treatment of subjective stakeholder dynamics using fuzzy decision-making. The study proposes a mathematical methodological perspective for the treatment of subjective relationships among stakeholders, which allows a predictive simulation tool to be developed for attitude and personal preferences to analyze the links among all stakeholders. A mathematical application is developed to support the decision-making process under uncertainty concerning the ordering-according-to-their-importance and linking-of-relation algorithms, which are based on notions of relation, gathering and ordering. A numerical example is proposed to demonstrate the method's usefulness and feasibility. The results approximate how stakeholder ambiguity and fuzziness can be managed considering the decision-maker's subjective preferences. In addition, these results highlight the different relationships among the stakeholders, their intensity levels, the incidence linkage loops and the relative incidence on stakeholder behaviors. The main implication of this proposition is dealing with the subjective preferences provided by the decision-maker to better interpret environmental and subjective factors. Furthermore, this study contributes to the strategic planning and decision-making processes of operative units within uncertain environments in the short term.
Blanco-Mesa, F, León-Castro, E & Merigó, JM 2018, 'Bonferroni induced heavy operators in ERM decision-making: A case on large companies in Colombia', Applied Soft Computing, vol. 72, pp. 371-391.
© 2018 Elsevier B.V. Averaging aggregation operators analyse a set of data and provide a summary of the results. This study focuses on the Bonferroni mean and on induced and heavy aggregation operators. The aim of the work is to present new aggregation operators that combine these concepts, forming the Bonferroni induced heavy ordered weighted average and several particular formulations. This approach represents Bonferroni means with order-inducing variables and with weighting vectors whose sum can exceed one. The paper also develops some extensions by using distance measures, forming the Bonferroni induced heavy ordered weighted average distance and several particular cases. The study ends with an application to a risk management problem for large companies in Colombia. The main advantage of this approach is that it provides a more general framework for analysing data in scenarios where the numerical values may have complexities that should be assessed with complex attitudinal characters.
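A minimal sketch of the underlying Bonferroni mean, which aggregates the pairwise products of the inputs and thus captures their interrelationships; the paper's operators extend it with order-inducing variables, heavy weighting vectors and distance measures, none of which are reproduced here.

```python
def bonferroni_mean(values, p=1, q=1):
    # Bonferroni mean BM^{p,q}(a_1..a_n) =
    #   ( (1 / (n(n-1))) * sum_{i != j} a_i^p * a_j^q )^(1 / (p+q))
    n = len(values)
    total = sum(values[i] ** p * values[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1 / (p + q))
```

As a sanity check, the operator is idempotent: aggregating identical inputs returns that input, a property the induced and heavy extensions deliberately relax through their weighting vectors.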
Bracci, M, Tarini, M, Pietroni, N, Livesu, M & Cignoni, P 2018, 'HexaLab.net: an online viewer for hexahedral meshes.', CoRR, vol. abs/1806.06639, pp. 24-36.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd We introduce HexaLab: a WebGL application for real-time visualization, exploration and assessment of hexahedral meshes. HexaLab can be used by simply opening www.hexalab.net. Our visualization tool targets both users and scholars. Practitioners who employ hex-meshes for Finite Element Analysis can readily check mesh quality and assess its usability for simulation. Researchers involved in mesh generation may use HexaLab to perform a detailed analysis of the mesh structure, isolating weak points, testing new solutions to improve on the state of the art and generating high-quality images. To this end, we support a wide variety of visualization and volume inspection tools. Our system also offers immediate access to a repository containing all the publicly available meshes produced with the most recent techniques for hex-mesh generation. We believe HexaLab, by providing a common tool for visualizing, assessing and distributing results, will push forward the recent drive for replicability in our scientific community.
Cancino, CA, Merigo, JM, Torres, JP & Diaz, D 2018, 'A bibliometric analysis of venture capital research', Journal of Economics, Finance and Administrative Science, vol. 23, no. 45, pp. 182-195.
View/Download from: Publisher's site
View description>>
Purpose: The purpose of this study is to present the evolution of academic research in venture capital (VC) between 1990 and 2014. Design/methodology/approach: The study analyzes the most influential journals in VC research by analyzing papers published on the Web of Science database. Findings: Results show a steadily increasing rate of VC research during the past 25 years. The paper reports the 40 academic journals that regularly publish articles on VC research. Originality/value: The main contribution of this work is to develop a general overview of the leading journals in VC research, which leads to the development of a future research agenda for bibliometric analysis, such as the review of the most productive and influential authors, universities and countries in VC research.
Cao, Y, Romero, J, Olson, JP, Degroote, M, Johnson, PD, Kieferová, M, Kivlichan, ID, Menke, T, Peropadre, B, Sawaya, NPD, Sim, S, Veis, L & Aspuru-Guzik, A 2018, 'Quantum Chemistry in the Age of Quantum Computing', Chemical Reviews, vol. 119, no. 19.
View/Download from: Publisher's site
View description>>
Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to appease. The advent of quantum computation brings new pathways to navigate this challenging complexity landscape. By manipulating quantum states of matter and taking advantage of their unique features such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry such as the electronic structure of molecules. In the past two decades significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in simulation of quantum systems. This article is an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing, and quantum computing researchers who would like to explore applications in quantum chemistry.
Cao, Z & Lin, C-T 2018, 'Inherent Fuzzy Entropy for the Improvement of EEG Complexity Evaluation', IEEE Transactions on Fuzzy Systems, vol. 26, no. 2, pp. 1032-1035.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In recent years, the concept of entropy has been widely used to measure the dynamic complexity of signals. Since the state of complexity of human beings is significantly affected by their health state, developing accurate complexity evaluation algorithms is a crucial and urgent area of study. This paper proposes using inherent fuzzy entropy (Inherent FuzzyEn) and its multiscale version, which employs empirical mode decomposition and fuzzy membership function (exponential function) to address the dynamic complexity in electroencephalogram (EEG) data. In the literature, the reliability of entropy-based complexity evaluations has been limited by superimposed trends in signals and a lack of multiple time scales. Our proposed method represents the first attempt to use the Inherent FuzzyEn algorithm to increase the reliability of complexity evaluation in realistic EEG applications. We recorded the EEG signals of several subjects under resting condition, and the EEG complexity was evaluated using approximate entropy, sample entropy, FuzzyEn, and Inherent FuzzyEn, respectively. The results indicate that Inherent FuzzyEn is superior to other competing models regardless of the use of fuzzy or nonfuzzy structures, and has the most stable complexity and smallest root mean square deviation.
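For readers unfamiliar with the entropy family compared above, the following is a minimal sketch of a generic fuzzy entropy computation with an exponential membership function. It is an assumption-laden illustration, not the authors' Inherent FuzzyEn, which additionally applies empirical mode decomposition; parameter defaults here are common conventions, not the paper's settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Generic FuzzyEn: negative log ratio of average template similarity
    at embedding dimensions m and m+1, using a soft exponential
    membership exp(-(d/tol)^n) instead of a hard threshold."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()  # tolerance as a fraction of the signal's std

    def phi(dim):
        # overlapping templates of length `dim`, each baseline-removed
        templ = np.array([x[i:i + dim] - x[i:i + dim].mean()
                          for i in range(len(x) - dim)])
        # Chebyshev (max-abs) distance between all template pairs
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / (tol ** n))
        np.fill_diagonal(sim, 0.0)  # exclude self-matches
        k = len(templ)
        return sim.sum() / (k * (k - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```

A regular signal (e.g., a sine wave) yields a lower value than white noise, which is the basic property these complexity measures exploit.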
Cao, Z, Lai, K-L, Lin, C-T, Chuang, C-H, Chou, C-C & Wang, S-J 2018, 'Exploring resting-state EEG complexity before migraine attacks', Cephalalgia, vol. 38, no. 7, pp. 1296-1306.
View/Download from: Publisher's site
View description>>
Objective: Entropy-based approaches to understanding the temporal dynamics of complexity have revealed novel insights into various brain activities. Herein, electroencephalogram complexity before migraine attacks was examined using an inherent fuzzy entropy approach, allowing the development of an electroencephalogram-based classification model to recognize the difference between interictal and preictal phases. Methods: Forty patients with migraine without aura and 40 age-matched normal control subjects were recruited, and the resting-state electroencephalogram signals of their prefrontal and occipital areas were prospectively collected. The migraine phases were defined based on the headache diary, and the preictal phase was defined as within 72 hours before a migraine attack. Results: The electroencephalogram complexity of patients in the preictal phase, which resembled that of normal control subjects, was significantly higher than that of patients in the interictal phase in the prefrontal area (FDR-adjusted p < 0.05) but not in the occipital area. The measurement of test-retest reliability (n = 8) using the intra-class correlation coefficient was good with r1 = 0.73 ( p = 0.01). Furthermore, the classification model, support vector machine, showed the highest accuracy (76 ± 4%) for classifying interictal and preictal phases using the prefrontal electroencephalogram complexity. Conclusion: Entropy-based analytical methods identified enhancement or “normalization” of frontal electroencephalogram complexity during the preictal phase compared with the interictal phase. This classification model, using this complexity feature, may have the potential to provide a preictal alert to migraine without aura patients.
Carles, M-F, Patricia, H, Antonio, S & José M., M 2018, 'The Forgotten Effects: An Application in the Social Economy of Companies of the Balearic Islands', Economic Computation and Economic Cybernetics Studies and Research, vol. 52, no. 3/2018, pp. 147-160.
View/Download from: Publisher's site
View description>>
© 2018, Bucharest University of Economic Studies. All rights reserved. Few studies have analyzed how to improve the results and productivity of companies with very particular characteristics, such as social economy entities. This paper determines the principal worth-creating activities for companies of this type that dedicate their activities to the service sector of the Balearic Islands. To carry out this work, incidence matrixes and the recovery of forgotten effects have been used. Both direct causes and the second-generation causes that arise in the majority of socio-economic cases have been identified. In fact, determining the second-generation effects, or forgotten effects, is one of the main contributions of this study, as it shows that causes that are usually not foreseen, at least in the first instance, notably affect the value that social economy companies generate for the service sector of the Balearic Islands.
Castro, J, Lu, J, Zhang, G, Dong, Y & Martinez, L 2018, 'Opinion Dynamics-Based Group Recommender Systems', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 12, pp. 2394-2406.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. With the accessibility to information, users often face the problem of selecting one item (a product or a service) from a huge search space. This problem is known as information overload. Recommender systems (RSs) personalize content to a user's interests to help them select the right item in information overload scenarios. Group RSs (GRSs) recommend items to a group of users. In GRSs, a recommendation is usually computed by a simple aggregation method over individual information. However, the aggregations are rigid and overlook certain group features, such as the relationships between the group members' preferences. In this paper, we propose a GRS based on opinion dynamics that considers these relationships, using a smart weights matrix to drive the process. In some groups, opinions do not agree; hence, the weights matrix is modified to reach a consensus value. The impact of ensuring agreed recommendations is evaluated through a set of experiments, and a sensitivity analysis studies the method's behavior. Compared to existing group recommendation models and frameworks, the proposal based on opinion dynamics has the following advantages: 1) a flexible aggregation method; 2) member relationships; and 3) agreed recommendations.
Cetindamar, D 2018, 'Designed by law: Purpose, accountability, and transparency at benefit corporations', Cogent Business & Management, vol. 5, no. 1, pp. 1423787-1423787.
View/Download from: Publisher's site
View description>>
The article explores the realization of major goals of the Benefit Corporation (BC) law, which is a corporation form designed for social enterprises in the United States in 2010. BCs have a dual mission of generating both profit and social value and hence they might have the potential to transform society. This paper attempts to observe the first movers established as BCs during the period of 2010–2012. By adopting the institutional theory approach, the study examines the realization of the BC law’s three major goals: purpose, accountability, and transparency. The paper utilizes the regulatory legitimacy concept to measure the discrepancy between design and implementation of law. The observations point out some of the challenges of establishing new innovative organizations through an institutional intervention of a law. Conclusions consist of implications of the study as well as suggestions for further studies.
Chauhan, J, Seneviratne, S, Hu, Y, Misra, A, Seneviratne, A & Lee, Y 2018, 'Breathing-Based Authentication on Resource-Constrained IoT Devices using Recurrent Neural Networks', Computer, vol. 51, no. 5, pp. 60-67.
View/Download from: Publisher's site
Chen, S, Wang, Z, Liang, J & Yuan, X 2018, 'Uncertainty-aware visual analytics for exploring human behaviors from heterogeneous spatial temporal data', Journal of Visual Languages & Computing, vol. 48, pp. 187-198.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd When analyzing human behaviors, we need to construct the behaviors from multiple sources of data, e.g. trajectory data, transaction data, identity data, etc. The problems we face are data conflicts, differing resolutions, and missing or conflicting data, which together lead to uncertainty in the spatial temporal data. Such uncertainty leads to difficulties, and even failure, in the visual analytics task of analyzing people's behaviors, patterns and outliers. Traditional automatic methods cannot solve the problems in such a complex scenario, where the uncertain and conflicting patterns are not well defined. To solve the problems, we propose a semi-automatic approach for users to resolve the conflicts and identify the uncertainties. To be general, we summarize five types of uncertainties, with solutions, for conducting behavior analysis tasks. Combined with the uncertainty-aware methods, we propose a visual analytics system to analyze human behaviors, detect patterns and find outliers. Case studies from the IEEE VAST Challenge 2014 dataset confirm the effectiveness of our approach.
Chen, Y, Dong, Y, Sun, Y & Liang, J 2018, 'A Multi-comparable visual analytic approach for complex hierarchical data', Journal of Visual Languages & Computing, vol. 47, pp. 19-30.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd The maximum residue limit (MRL) standard, which specifies the highest permitted level of each pesticide residue in different agricultural products, plays a critical role in food safety. However, such standards, which relate to the characteristics of pesticides and to a classification of agricultural products organized into a hierarchical structure, are complex and vary widely across regions and countries, so comprehensively comparing multi-regional MRL standard data is a big challenge. In this paper, we present a multi-comparable visual analytic approach for complex hierarchical data and a visual analytics system (McVA) to support the multiple comparison and evaluation of MRL standards. With a cooperative multi-view visual design, our proposed approach links the hierarchies of MRL datasets and provides the capacity for comparison at different levels and dimensions. We also introduce a metric model for quantitatively evaluating the completeness and strictness of MRL standards. A case study on real problems and positive feedback from domain experts demonstrate the effectiveness of this approach.
Cheng, Z, Zhang, X, Shen, S, Yu, S, Ren, J & Lin, R 2018, 'T-Trail: Link Failure Monitoring in Software-Defined Optical Networks', Journal of Optical Communications and Networking, vol. 10, no. 4, pp. 344-344.
View/Download from: Publisher's site
View description>>
Monitoring trail (m-trail) provides a striking mechanism for fast and unambiguous link failure localization in all-optical networks. However, allocating dedicated supervisory lightpaths (m-trail) undoubtedly increases total network cost. Accordingly, how to maximally reduce monitoring cost in an optical network is an important issue. To this end, we propose a concept of traffic trail (t-trail) that uses traffic lightpaths, instead of dedicated supervisory lightpaths, to localize a single link failure in the context of a software-defined optical network (SDON). The central controller of an SDON collects routing information of all t-trails in the network. Thus, any link failure can be localized according to the ON-OFF status of the traversing t-trails. We first formulate the problem as an integer linear programming (ILP) model. Since the ILP is not feasible for solving the problem in large-size networks, an efficient heuristic algorithm t-trail allocation (TTA) is proposed to address it. We conduct extensive simulations to evaluate the performance of TTA. The results show that compared with the existing m-trail schemes, TTA can reduce total costs by 20.91% on average.
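The localization principle behind both m-trails and t-trails is that the controller observes which lightpaths go dark: a single link failure is unambiguously localizable only if every link has a distinct, non-empty ON-OFF signature across the trails. A toy checker for that condition (names and data structures hypothetical, not from the paper):

```python
def localizable(links, trails):
    """Return True iff a single link failure can be unambiguously
    localized: every link must be covered by at least one trail and
    must have a unique ON/OFF signature across all trails."""
    sigs = {}
    for link in links:
        sig = tuple(link in trail for trail in trails)
        if not any(sig) or sig in sigs.values():
            return False  # uncovered link, or ambiguous with another link
        sigs[link] = sig
    return True
```

Two trails {a, b} and {b, c} suffice for three links (signatures 10, 11, 01), whereas a single trail covering all three cannot distinguish them; the paper's contribution is choosing trails from existing traffic so no dedicated supervisory lightpaths are needed.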
Chi, L, Li, B, Zhu, X, Pan, S & Chen, L 2018, 'Hashing for Adaptive Real-Time Graph Stream Classification With Concept Drifts', IEEE Transactions on Cybernetics, vol. 48, no. 5, pp. 1591-1604.
View/Download from: Publisher's site
View description>>
Many applications involve processing networked streaming data in a timely manner. Graph stream classification aims to learn a classification model from a stream of graphs with only one-pass of data, requiring real-time processing in training and prediction. This is a nontrivial task, as many existing methods require multipass of the graph stream to extract subgraph structures as features for graph classification which does not simultaneously satisfy "one-pass" and "real-time" requirements. In this paper, we propose an adaptive real-time graph stream classification method to address this challenge. We partition the unbounded graph stream data into consecutive graph chunks, each consisting of a fixed number of graphs and delivering a corresponding chunk-level classifier. We employ a random hashing function to compress the original node set of graphs in each chunk for fast feature detection when training chunk-level classifiers. Furthermore, a differential hashing strategy is applied to map unlimited increasing features (i.e., cliques) into a fixed-size feature space which is then used as a feature vector for stochastic learning. Finally, the chunk-level classifiers are weighted in an ensemble learning model for graph classification. The proposed method substantially speeds up the graph feature extraction and avoids unbounded graph feature growth. Moreover, it effectively offsets concept drifts in graph stream classification. Experiments on real-world and synthetic graph streams demonstrate that our method significantly outperforms existing methods in both classification accuracy and learning efficiency.
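The fixed-size mapping the abstract describes is, at its core, standard feature hashing. A minimal sketch follows (illustrative only: the paper's differential hashing strategy over clique features is more involved, and the hash and sign choices here are my own assumptions):

```python
import zlib

def hash_features(features, dim=64):
    """Feature hashing: fold an unbounded, growing feature set into a
    fixed-size vector so a streaming learner sees constant dimensionality.
    `features` maps feature names to values."""
    vec = [0.0] * dim
    for feat, value in features.items():
        h = zlib.crc32(feat.encode())
        idx = h % dim
        # a hash-derived sign makes collisions partially cancel in expectation
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0
        vec[idx] += sign * value
    return vec
```

The mapping is deterministic, so the same feature always lands in the same slot across chunks, which is what lets chunk-level classifiers trained at different times share one feature space.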
Chikara, RK, Chang, EC, Lu, Y-C, Lin, D-S, Lin, C-T & Ko, L-W 2018, 'Monetary Reward and Punishment to Response Inhibition Modulate Activation and Synchronization Within the Inhibitory Brain Network', Frontiers in Human Neuroscience, vol. 12.
View/Download from: Publisher's site
View description>>
© 2018 Chikara, Chang, Lu, Lin, Lin and Ko. A reward or punishment can modulate motivation and emotions, which in turn affect cognitive processing. The present simultaneous functional magnetic resonance imaging-electroencephalography study examines neural mechanisms of response inhibition under the influence of a monetary reward or punishment by implementing a modified stop-signal task in a virtual battlefield scenario. The participants were instructed to play as snipers who open fire at a terrorist target but withhold shooting in the presence of a hostage. The participants performed the task under three different feedback conditions in counterbalanced order: a reward condition where each successfully withheld response added a bonus (i.e., positive feedback) to the startup credit, a punishment condition where each failure in stopping deduced a penalty (i.e., negative feedback), and a no-feedback condition where response outcome had no consequences and served as a control setting. Behaviorally both reward and punishment conditions led to significantly down-regulated inhibitory function in terms of the critical stop-signal delay. As for the neuroimaging results, increased activities were found for the no-feedback condition in regions previously reported to be associated with response inhibition, including the right inferior frontal gyrus and the pre-supplementary motor area. Moreover, higher activation of the lingual gyrus, posterior cingulate gyrus (PCG) and inferior parietal lobule were found in the reward condition, while stronger activation of the precuneus gyrus was found in the punishment condition. The positive feedback was also associated with stronger changes of delta, theta, and alpha synchronization in the PCG than were the negative or no-feedback conditions. These findings depicted the intertwining relationship between response inhibition and motivation networks.
Choi, I, Milne, DN, Deady, M, Calvo, RA, Harvey, SB & Glozier, N 2018, 'Impact of Mental Health Screening on Promoting Immediate Online Help-Seeking: Randomized Trial Comparing Normative Versus Humor-Driven Feedback', JMIR Mental Health, vol. 5, no. 2, pp. e26-e26.
View/Download from: Publisher's site
View description>>
Background: Given the widespread availability of mental health screening apps, providing personalized feedback may encourage people at high risk to seek help to manage their symptoms. While apps typically provide personal score feedback only, feedback types that are user-friendly and increase personal relevance may encourage further help-seeking. Objective: The aim of this study was to compare the effects of providing normative and humor-driven feedback on immediate online help-seeking, defined as clicking on a link to an external resource, and to explore demographic predictors that encourage help-seeking. Methods: An online sample of 549 adults was recruited using social media advertisements. Participants downloaded a smartphone app known as “Mindgauge” which allowed them to screen their mental wellbeing by completing standardized measures on Symptoms (Kessler 6-item Scale), Wellbeing (World Health Organization [Five] Wellbeing Index), and Resilience (Brief Resilience Scale). Participants were randomized to receive normative feedback that compared their scores to a reference group or humor-driven feedback that presented their scores in a relaxed manner. Those who scored in the moderate or poor ranges on any measure were encouraged to seek help by clicking on a link to an external online resource. Results: A total of 318 participants scored poorly on one or more measures and were provided with an external link after being randomized to receive normative or humor-driven feedback. There was no significant difference of feedback type on clicking on the external link acros...
Chotipant, S, Hussain, FK & Hussain, OK 2018, 'SERNOTATE: An automated approach for business service description annotation for efficient service retrieval and composition', Concurrency and Computation: Practice and Experience, vol. 30, no. 1, pp. e4189-e4189.
View/Download from: Publisher's site
View description>>
Summary: Business service advertisements are today published online to convey essential information about services to customers. However, current Web search engines are unable to search and combine online service advertisements. Semantic service annotation is important for its ability to enable machines to understand the meaning of services and to support effective service retrieval and service composition. Existing research in the area of semantic service annotation has focused on the annotation of Web services in a semi‐automated approach. It cannot be applied to business service information as it is not in the form of Web Services Description Language but in free text format. Moreover, semi‐automated approaches are inappropriate for annotating a large amount of online service information which changes dynamically, and they are therefore not suitable for the timely dissemination of service information to customers. To solve these issues, we propose SERNOTATE, which is an automated approach for business service description annotation for efficient service retrieval and composition. We propose new semantic‐based linking approaches, namely, Extended Case‐based Reasoning, vector‐based, and classification‐based, that automatically annotate business services to relevant service concepts. Each approach assists in the single‐label and multi‐label annotation of service terms to concept terms to provide a better representation of services. The experimental results test and validate the applicability of the proposed approaches to the automatic annotation of business service descriptions to service concepts on a real‐world dataset.
Chou, K-P, Prasad, M, Wu, D, Sharma, N, Li, D-L, Lin, Y-F, Blumenstein, M, Lin, W-C & Lin, C-T 2018, 'Robust Feature-Based Automated Multi-View Human Action Recognition System', IEEE Access, vol. 6, pp. 15283-15296.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Automated human action recognition has the potential to play an important role in public security, for example, in relation to the multiview surveillance videos taken in public places, such as train stations or airports. This paper compares three practical, reliable, and generic systems for multiview video-based human action recognition, namely, the nearest neighbor classifier, Gaussian mixture model classifier, and the nearest mean classifier. To describe the different actions performed in different views, view-invariant features are proposed to address multiview action recognition. These features are obtained by extracting the holistic features from different temporal scales which are modeled as points of interest which represent the global spatial-temporal distribution. Experiments and cross-data testing are conducted on the KTH, WEIZMANN, and MuHAVi datasets. The system does not need to be retrained when scenarios are changed which means the trained database can be applied in a wide variety of environments, such as view angle or background changes. The experiment results show that the proposed approach outperforms the existing methods on the KTH and WEIZMANN datasets.
Chuang, C-H, Cao, Z, King, J-T, Wu, B-S, Wang, Y-K & Lin, C-T 2018, 'Brain Electrodynamic and Hemodynamic Signatures Against Fatigue During Driving', Frontiers in Neuroscience, vol. 12, no. MAR, pp. 1-12.
View/Download from: Publisher's site
View description>>
© 2018 Chuang, Cao, King, Wu, Wang and Lin. Fatigue is likely to be gradually cumulated in a prolonged and attention-demanding task that may adversely affect task performance. To address the brain dynamics during a driving task, this study recruited 16 subjects to participate in an event-related lane-departure driving experiment. Each subject was instructed to maintain attention and task performance throughout an hour-long driving experiment. The subjects' brain electrodynamics and hemodynamics were simultaneously recorded via 32-channel electroencephalography (EEG) and 8-source/16-detector functional near-infrared spectroscopy (fNIRS). The behavior performance demonstrated that all subjects were able to promptly respond to lane-deviation events, even if the sign of fatigue arose in the brain, which suggests that the subjects were fighting fatigue during the driving experiment. The EEG event-related analysis showed strengthening alpha suppression in the occipital cortex, a common brain region of fatigue. Furthermore, we noted increasing oxygenated hemoglobin (HbO) of the brain to fight driving fatigue in the frontal cortex, primary motor cortex, parieto-occipital cortex and supplementary motor area. In conclusion, the increasing neural activity and cortical activations were aimed at maintaining driving performance when fatigue emerged. The electrodynamic and hemodynamic signatures of fatigue fighting contribute to our understanding of the brain dynamics of driving fatigue and address driving safety issues through the maintenance of attention and behavioral performance.
Coiera, E, Kocaballi, B, Halamka, J & Laranjo, L 2018, 'Author Correction: The digital scribe', npj Digital Medicine, vol. 1, no. 1.
View/Download from: Publisher's site
View description>>
The original version of the published Article contained an error in the spelling of the third Author’s name. “John Halamaka” has been changed to “John Halamka”. This has been corrected in the HTML and PDF version of the Article.
Coiera, E, Kocaballi, B, Halamka, J & Laranjo, L 2018, 'The digital scribe', npj Digital Medicine, vol. 1, no. 1.
View/Download from: Publisher's site
View description>>
Abstract: Current generation electronic health records suffer a number of problems that make them inefficient and associated with poor clinical satisfaction. Digital scribes or intelligent documentation support systems, take advantage of advances in speech recognition, natural language processing and artificial intelligence, to automate the clinical documentation task currently conducted by humans. Whilst in their infancy, digital scribes are likely to evolve through three broad stages. Human led systems task clinicians with creating documentation, but provide tools to make the task simpler and more effective, for example with dictation support, semantic checking and templates. Mixed-initiative systems are delegated part of the documentation task, converting the conversations in a clinical encounter into summaries suitable for the electronic record. Computer-led systems are delegated full control of documentation and only request human interaction when exceptions are encountered. Intelligent clinical environments permit such augmented clinical encounters to occur in a fully digitised space where the environment becomes the computer. Data from clinical instruments can be automatically transmitted, interpreted using AI and entered directly into the record. Digital scribes raise many issues for clinical practice, including new patient safety risks. Automation bias may see clinicians automatically accept scribe documents without checking. The electronic record also shifts from a human created summary of events to potentially a full audio, video and sensor record of the clinical encounter. Digital scribes promisingly offer a gateway into the clinical workflow for more advanced support for diagnostic, prognostic and therapeutic tasks.
Cui, L, Hu, H, Yu, S, Yan, Q, Ming, Z, Wen, Z & Lu, N 2018, 'DDSE: A novel evolutionary algorithm based on degree-descending search strategy for influence maximization in social networks', Journal of Network and Computer Applications, vol. 103, pp. 119-130.
View/Download from: Publisher's site
View description>>
Influence maximization (IM) is the problem of finding a small subset of nodes in a social network such that the number of nodes influenced by this subset is maximized. The influence maximization problem plays an important role in viral marketing and information diffusion. Existing solutions to influence maximization perform poorly in either efficiency or accuracy. In this study, we analyze the causes of the low efficiency of the greedy approaches and propose a more efficient algorithm called degree-descending search evolution (DDSE). Firstly, we propose a degree-descending search strategy (DDS). DDS is capable of generating a node set whose influence spread is comparable to that of degree centrality. Based on DDS, we develop an evolutionary algorithm that improves efficiency significantly by eliminating the time-consuming simulations of the greedy algorithms. Experimental results on real-world social networks demonstrate that DDSE is about five orders of magnitude faster than the state-of-the-art greedy method while keeping competitive accuracy, which verifies the high effectiveness and efficiency of our proposed algorithm for influence maximization.
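The degree-descending idea can be sketched informally: pick seeds in order of decreasing degree while skipping neighbors of already-chosen seeds, so the seeds' influence regions overlap less. The following Python toy is a loose reading of that idea, not the published DDS/DDSE algorithm; the tie-breaking and the fallback pass are my own assumptions.

```python
from collections import defaultdict

def degree_descending_seeds(edges, k):
    """Illustrative degree-descending seed selection for an undirected
    graph given as (u, v) edge pairs: choose k high-degree nodes,
    preferring nodes not adjacent to any seed chosen so far."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    by_degree = sorted(adj, key=lambda node: len(adj[node]), reverse=True)
    seeds = []
    for node in by_degree:
        if len(seeds) == k:
            break
        if not adj[node] & set(seeds):  # skip neighbors of chosen seeds
            seeds.append(node)
    # fallback: if skipping left us short, fill with remaining high-degree nodes
    for node in by_degree:
        if len(seeds) == k:
            break
        if node not in seeds:
            seeds.append(node)
    return seeds
```

On a graph with a hub plus a separate component, the selection picks the hub and then a node outside its neighborhood, which is the overlap-avoiding behavior plain degree centrality lacks.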
Deady, M, Johnston, D, Milne, D, Glozier, N, Peters, D, Calvo, R & Harvey, S 2018, 'Preliminary Effectiveness of a Smartphone App to Reduce Depressive Symptoms in the Workplace: Feasibility and Acceptability Study', JMIR mHealth and uHealth, vol. 6, no. 12, pp. e11661-e11661.
View/Download from: Publisher's site
View description>>
© Mark Deady, David Johnston, David Milne, Nick Glozier, Dorian Peters, Rafael Calvo, Samuel Harvey. Background: The workplace represents a unique setting for mental health interventions. Due to a range of job-related factors, employees in male-dominated industries are at an elevated risk. However, these at-risk groups are often overlooked. HeadGear is a smartphone app–based intervention designed to reduce depressive symptoms and increase well-being in these populations. Objective: This paper presents the development and pilot testing of the app’s usability, acceptability, feasibility, and preliminary effectiveness. Methods: The development process took place from January 2016 to August 2017. Participants for prototype testing (n=21; stage 1) were recruited from industry partner organizations to assess acceptability and utility. A 5-week effectiveness and feasibility pilot study (n=84; stage 2) was then undertaken, utilizing social media recruitment. Demographic data, acceptability and utility questionnaires, depression (Patient Health Questionnaire-9), and other mental health measures were collected. Results: The majority of respondents felt HeadGear was easy to use (92%) and easily understood (92%), were satisfied with the app (67%), and would recommend it to a friend (75%; stage 1). Stage 2 found that compared with baseline, depression and anxiety symptoms were significantly lower at follow-up (t30=2.53; P=.02 and t30=2.18; P=.04, respectively), as were days of sick leave in the past month (t28=2.38; P=.02), with higher self-reported job performance (t28=−2.09; P=.046; stage 2). Over 90% of respondents claimed it helped improve their mental fitness, and user feedback was again positive. Attrition was high across the stages. Conclusions: Overall, HeadGear was well received, and preliminary findings indicate it may provide an innovative new platform for improving mental health outcomes. Unfortunately, attrition was a significant issue, and findings should be interpreted...
Deady, M, Johnston, DA, Glozier, N, Milne, D, Choi, I, Mackinnon, A, Mykletun, A, Calvo, RA, Gayed, A, Bryant, R, Christensen, H & Harvey, SB 2018, 'A smartphone application for treating depressive symptoms: study protocol for a randomised controlled trial', BMC Psychiatry, vol. 18, no. 1, pp. 1-9.
View/Download from: Publisher's site
View description>>
© 2018 The Author(s). Background: Depression is a commonly occurring disorder linked to diminished role functioning and quality of life. The development of treatments that overcome barriers to accessing treatment remains an important area of clinical research, as most people delay or do not receive treatment at an appropriate time. The workplace is an ideal setting to roll out an intervention, particularly given the substantial psychological benefits associated with remaining in the workforce. Mobile health (mhealth) interventions utilising smartphone applications (apps) offer novel solutions for disseminating evidence-based programs; however, few apps have undergone rigorous testing. The present study aims to evaluate the effectiveness of a smartphone app designed to treat depressive symptoms in workers. Methods: The present study is a multicentre randomised controlled trial (RCT) comparing the effectiveness of the intervention to that of an attention control. The primary outcome measured will be reduced depressive symptoms at 3 months. Secondary outcomes such as wellbeing and work performance will also be measured. Employees from a range of industries will be recruited via a mixture of targeted social media advertising and industry partners. Participants will be included if they present with likely current depression at baseline. Following baseline assessment (administered within the app), participants will be randomised to receive one of two versions of the HeadGear application: 1) intervention (a 30-day mental health intervention focusing on behavioural activation and mindfulness), or 2) attention control app (mood monitoring for 30 days). Participants will be blinded to their allocation. Analyses will be conducted within an intention-to-treat framework using mixed modelling. Discussion: The results of this trial will provide valuable information about the effectiveness of mhealth interventions in the treatment of depressive symptoms in a workplace context.
Deady, M, Johnston, DA, Glozier, N, Milne, D, Choi, I, Mackinnon, A, Mykletun, A, Calvo, RA, Gayed, A, Bryant, R, Christensen, H & Harvey, SB 2018, 'Smartphone application for preventing depression: study protocol for a workplace randomised controlled trial', BMJ Open, vol. 8, no. 7, pp. e020510-e020510.
View/Download from: Publisher's site
View description>>
Introduction: Depression is the leading cause of life years lost due to disability. Appropriate prevention has the potential to reduce the incidence of new cases of depression; however, traditional prevention approaches face significant scalability issues. Prevention programmes delivered via smartphone applications provide a potential solution. The workplace is an ideal setting to roll out this form of intervention, particularly among industries that are unlikely to access traditional health initiatives and whose workplace characteristics create accessibility and portability issues. The study aims to evaluate the effectiveness of a smartphone application designed to prevent depression and improve well-being. The effectiveness of the app as a universal, selective and indicated prevention tool will also be evaluated. Methods and analysis: A multicentre randomised controlled trial to determine the effectiveness of the intervention compared with an active mood-monitoring control in reducing depressive symptoms (primary outcome) and the prevalence of depression at 3 months, with secondary outcomes assessing well-being and work performance. Employees from a range of industries will be invited to participate. Participants with likely current depression at baseline will be excluded. Following baseline assessment, participants, blinded to their allocation, will be randomised to receive one of two versions of the application: HeadGear (a 30-day mental health intervention) or a control application (mood monitoring for 30 days). Both versions of the app contain a risk calculator to provide a measure of future risk. Analyses will be conducted within an intention-to-treat framework using mixed modelling, with additional analyses conducted to compare the moderating effect of baseline risk level and depression symptom severity on the intervention’s effectiveness.
Deng, Z, He, T, Ding, W & Cao, Z 2018, 'A Multimodel Fusion Engine for Filtering Webpages', IEEE Access, vol. 6, pp. 66062-66071.
View/Download from: Publisher's site
View description>>
OAPA Fusing multiple existing models for filtering webpages can mitigate the shortcomings of individual filtering models. To provide an engine for such fusion, we propose a multimodel fusion engine for filtering webpages (MMFEFWP) for the extraction of target webpages. This engine can handle large datasets of webpages crawled from websites and supports five individual filtering models and the fusion of any two of them. There are two possible fusion methods: one is to simultaneously satisfy the conditions of both individual models, and the other is to satisfy the conditions of one of the two individual models. We present the functions, architecture, and software design of the proposed engine. We use recall ratio (RR) and precision ratio (PR) as the evaluation indices of the filtering models and propose rules describing how PR and RR change when individual models are fused. We use 200,000 webpages collected by crawling the popular online shopping website "www.jd.com" as the experimental dataset to verify these rules. The experimental results show that two-model fusion can improve either PR or RR. Thus, the proposed engine has good practical value for engineering applications.
Ding, W, Lin, C-T & Prasad, M 2018, 'Hierarchical co-evolutionary clustering tree-based rough feature game equilibrium selection and its application in neonatal cerebral cortex MRI', Expert Systems with Applications, vol. 101, pp. 243-257.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd A wide variety of feature selection methods have been developed as promising solutions for finding classification patterns in a growing range of applications. However, developing an efficient, flexible and robust feature selection method to handle ever-larger datasets remains a challenging problem. This paper presents a novel hierarchical co-evolutionary clustering tree-based rough feature game equilibrium selection algorithm (CTFGES). It aims to select high-quality feature subsets, enriching research on feature selection and classification in heterogeneous big data. Firstly, we construct a flexible hierarchical co-evolutionary clustering tree model to speed up the process of feature selection, which can effectively extract features from the parent and children branches of a four-layer co-evolutionary clustering tree. Secondly, we design a mixed co-evolutionary game equilibrium scheme with adaptive dynamics to guide the parent and children branch subtrees towards the optimal equilibrium regions and enable their feature sets to converge stably to a Nash equilibrium, so that both noisy heterogeneous features and unidentified redundant ones can be further eliminated. Finally, extensive experiments on various big datasets demonstrate the superior performance of CTFGES, in terms of accuracy, efficiency and robustness, compared with representative feature selection algorithms. In addition, the proposed CTFGES algorithm has been successfully applied to the feature segmentation of large-scale neonatal cerebral cortex MRI with varying noise ratios and intensity non-uniformity levels. The results indicate that it adapts well to the cortical folding surfaces and achieves satisfying consistency with medical experts, which is of potential significance for assessing the impact of aberrant brain growth on the neurodevelopment of the neonatal cerebrum.
Ding, W, Lin, C-T, Chen, S, Zhang, X & Hu, B 2018, 'Multiagent-consensus-MapReduce-based attribute reduction using co-evolutionary quantum PSO for big data applications', Neurocomputing, vol. 272, pp. 136-153.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Attribute reduction for big data applications has become an urgent challenge in pattern recognition, machine learning and data mining. In this paper, we introduce a multi-agent consensus MapReduce optimization model and a co-evolutionary quantum PSO with self-adaptive memeplexes to design an attribute reduction method, and propose a multiagent-consensus-MapReduce-based attribute reduction algorithm (MCMAR). Firstly, the co-evolutionary quantum PSO with self-adaptive memeplexes is designed to group particles into different memeplexes, aiming to explore the search space and locate the global best region during the attribute reduction of big datasets. Secondly, a four-layer neighborhood-radius framework with a compensatory scheme is constructed to partition big attribute sets by exploiting the interdependency among multiple relevant attribute sets. Thirdly, a novel multi-agent consensus MapReduce optimization model is adopted to perform the multiple-relevant-attribute reduction, in which five kinds of agents conduct the ensemble co-evolutionary optimization; the uniform reduction framework of the different agents’ co-evolutionary game under bounded rationality is thereby further refined. Fourthly, an approximate MapReduce parallelism mechanism is formalized within the multi-agent co-evolutionary consensus structure, interaction and adaptation, enabling different agents to share their solutions. Finally, extensive experimental studies substantiate the effectiveness and accuracy of MCMAR on some well-known benchmark datasets. Moreover, successful applications to big medical datasets are expected to dramatically scale up MCMAR for complex infant brain MRI in terms of efficiency and feasibility.
Ding, W, Lin, C-T, Prasad, M, Cao, Z & Wang, J 2018, 'A Layered-Coevolution-Based Attribute-Boosted Reduction Using Adaptive Quantum-Behavior PSO and Its Consistent Segmentation for Neonates Brain Tissue', IEEE Transactions on Fuzzy Systems, vol. 26, no. 3, pp. 1177-1191.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. The main challenge of attribute reduction in large data applications is to develop a new algorithm that can deal with large, noisy, and uncertain data linking multiple relevant data sources, structured or unstructured. This paper proposes a new and efficient layered-coevolution-based attribute-boosted reduction algorithm (LCQ-ABR∗) using adaptive quantum-behavior particle swarm optimization (PSO). First, the quantum rotation angle of an evolutionary particle is updated by a dynamic change of self-adapting step size. Second, a self-adaptive partitioning strategy is employed to group particles into different memeplexes, and the quantum-behavior mechanism, with the particles' states depicted by the wave function, cooperates to achieve superior performance in their respective memeplexes. Third, a new layered coevolutionary model with multiagent interaction is constructed to decompose a complex attribute set; it can self-adapt the attribute sizes among different layers and produce reasonable decompositions by exploiting any interdependence among multiple relevant attribute subsets. Fourth, the decomposed attribute subsets are evolved to compute the positive region and discernibility matrix by using their best quantum particles, and the global optimal reduction set is induced successfully. Finally, extensive comparative experiments illustrate that LCQ-ABR∗ has better feasibility and effectiveness of attribute reduction on large-scale and uncertain dataset problems with complex noise as compared with representative algorithms. Moreover, LCQ-ABR∗ can be successfully applied to the consistent segmentation of neonatal brain three-dimensional MRI, and the consistent segmentation results further demonstrate its strong applicability.
Ding, Z, Dong, Y, Kou, G, Palomares, I & Yu, S 2018, 'Consensus formation in opinion dynamics with online and offline interactions at complex networks', International Journal of Modern Physics C, vol. 29, no. 07, pp. 1850046-1850046.
View/Download from: Publisher's site
View description>>
Nowadays, with the development of information communication technology and the Internet, more and more people receive information and exchange their opinions with others via online environments (e.g. Twitter, Facebook, Weibo, and WeChat). According to an eMarketer report [Worldwide Internet and Mobile Users: eMarketer’s Updated Estimates and Forecast for 2015–2020 (eMarketer Report). Published October 11, 2016, https://www.emarketer.com/Report/Worldwide-Internet-Mobile-Users-eMarketers-Updated-Estimates-Forecast-20152020/2001897], by the end of 2016 more than 3.2 billion individuals worldwide would use the Internet regularly, accounting for nearly 45% of the world population. By contrast, the other half of the global population still obtain information and exchange their opinions in a more traditional way (e.g. face to face). Generally, the speed at which information spreads and opinions are exchanged and updated in an online environment is much faster than in an offline environment. This paper focuses on jointly investigating the challenge of consensus formation in opinion dynamics with online and offline interactions. Without loss of generality, we assume the speed at which information spreads and opinions are exchanged and updated in an online environment is [Formula: see text] times as fast as in an offline environment. We demonstrate that the update speed ratio in mixed online and offline environments (i.e. [Formula: see text]) strongly impacts consensus formation at complex networks: a large update speed ratio of online and offline environments (i.e. [Formula: see text]) makes it difficult for all agents to reach consensus in opinion dynamics. Furthermore, these effects are often further intensified as the number of online participating agents increases.
Dong, F, Lu, J, Zhang, G & Li, K 2018, 'Active Fuzzy Weighting Ensemble for Dealing with Concept Drift', International Journal of Computational Intelligence Systems, vol. 11, no. 1, pp. 438-438.
View/Download from: Publisher's site
View description>>
© 2018, the Authors. The concept drift problem is a pervasive phenomenon in real-world data stream applications. It makes well-trained static learning models lose accuracy and become outdated as time goes by. The existence of different types of concept drift makes it more difficult for learning algorithms to track. This paper proposes a novel adaptive ensemble algorithm, the Active Fuzzy Weighting Ensemble, to handle data streams involving concept drift. During the processing of data instances in the data streams, our algorithm first identifies whether or not a drift occurs. Once a drift is confirmed, it uses data instances accumulated by the drift detection method to create a new base classifier. Then, it applies fuzzy instance weighting and a dynamic voting strategy to organize all the existing base classifiers to construct an ensemble learning model. Experimental evaluations on seven datasets show that our proposed algorithm can shorten the recovery time of accuracy drop when concept drift occurs, adapt to different types of concept drift, and obtain better performance with less computation costs than the other adaptive ensembles.
Dong, F, Zhang, G, Lu, J & Li, K 2018, 'Fuzzy competence model drift detection for data-driven decision support systems', Knowledge-Based Systems, vol. 143, pp. 284-294.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. This paper focuses on concept drift in business intelligence and data-driven decision support systems (DSSs). The assumption of a fixed distribution in the data renders conventional static DSSs inaccurate and unable to make correct decisions when concept drift occurs. However, it is important to know when, how, and where concept drift occurs so a DSS can adjust its decision processing knowledge to adapt to an ever-changing environment at the appropriate time. This paper presents a data distribution-based concept drift detection method called fuzzy competence model drift detection (FCM-DD). By introducing fuzzy sets theory and replacing crisp boundaries with fuzzy ones, we have improved the competence model to provide a better, more refined empirical distribution of the data stream. FCM-DD requires no prior knowledge of the underlying distribution and provides statistical guarantee of the reliability of the detected drift, based on the theory of bootstrapping. A series of experiments show that our proposed FCM-DD method can detect drift more accurately, has good sensitivity, and is robust.
Durán Santomil, P, Otero González, L, Martorell Cunill, O & Merigó Lindahl, JM 2018, 'Backtesting an equity risk model under Solvency II', Journal of Business Research, vol. 89, pp. 216-222.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Inc. Backtesting is a technique for validating internal models under Solvency II, which allows for evaluating the discrepancies between the results provided by a model and real observations. This paper aims to establish various backtesting tests and to show their applications to equity risk in Solvency II. Normal and empirical models with a rolling window are used to determine VaR at the 99.5% confidence level over a one-year time horizon. The proposed methodology performs the backtesting of annualized returns arising from the accumulation of daily returns. The results show that even if a model is conservative when tested out of a sample, it may be inadequate when evaluated in a sample, thereby highlighting the problems inherent in the out-of-sample backtesting proposed by the regulator.
Dyson, LE & Frawley, JK 2018, 'A Student-Generated Video Careers Project', International Journal of Mobile and Blended Learning, vol. 10, no. 4, pp. 32-51.
View/Download from: Publisher's site
View description>>
This article describes how in recent years, the multimedia recording capabilities of mobile devices have been used increasingly to create a more active, learner-centred educational experience. Despite the proven value of student-generated multimedia projects, there are still gaps in our understanding of how students learn during them. This article reports on a project in which first-year information technology students interviewed IT professionals in their workplace and video-recorded the interview to enable sharing with their peers. In order to understand the statistically significant increases found in students' learning, student diaries and reflections were analyzed qualitatively. Factors found to contribute to learning included: the iterative nature of student activities; the multiple, evolving representations of knowledge as students proceeded through the project; the importance of the workplace context in engaging students and enhancing learning; the affordance of mobile technology for capturing and sharing this context; and the collaborative and metacognitive processes fostered by the project.
El-Sayed, H, Sankar, S, Daraghmi, Y-A, Tiwari, P, Rattagan, E, Mohanty, M, Puthal, D & Prasad, M 2018, 'Accurate Traffic Flow Prediction in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier', Sensors, vol. 18, no. 6, pp. 1696-1696.
View/Download from: Publisher's site
View description>>
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.
El-Sayed, H, Sankar, S, Prasad, M, Puthal, D, Gupta, A, Mohanty, M & Lin, C-T 2018, 'Edge of Things: The Big Picture on the Integration of Edge, IoT and the Cloud in a Distributed Computing Environment', IEEE Access, vol. 6, pp. 1706-1717.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. A centralized infrastructure system carries out existing data analytics and decision-making processes from our current highly virtualized platform of wireless networks and the Internet of Things (IoT) applications. There is a high possibility that these existing methods will encounter more challenges and issues in relation to network dynamics, resulting in a high overhead in the network response time, leading to latency and traffic. In order to avoid these problems in the network and achieve an optimum level of resource utilization, a new paradigm called edge computing (EC) is proposed to pave the way for the evolution of new age applications and services. With the integration of EC, the processing capabilities are pushed to the edge of network devices such as smart phones, sensor nodes, wearables, and on-board units, where data analytics and knowledge generation are performed which removes the necessity for a centralized system. Many IoT applications, such as smart cities, the smart grid, smart traffic lights, and smart vehicles, are rapidly upgrading their applications with EC, significantly improving response time as well as conserving network resources. Irrespective of the fact that EC shifts the workload from a centralized cloud to the edge, the analogy between EC and the cloud pertaining to factors such as resource management and computation optimization are still open to research studies. Hence, this paper aims to validate the efficiency and resourcefulness of EC. We extensively survey the edge systems and present a comparative study of cloud computing systems. After analyzing the different network properties in the system, the results show that EC systems perform better than cloud computing systems. Finally, the research challenges in implementing an EC system and future research directions are discussed.
Engemann, KJ, Merigó, JM, Terceño, A & Yager, RR 2018, 'Foreword', International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 26, no. Suppl. 1, pp. v-vii.
View/Download from: Publisher's site
Erfani, SS & Abedin, B 2018, 'Impacts of the use of social network sites on users' psychological well‐being: A systematic review', Journal of the Association for Information Science and Technology, vol. 69, no. 7, pp. 900-912.
View/Download from: Publisher's site
View description>>
As Social Network Sites (SNSs) are increasingly becoming part of people's everyday lives, the implications of their use need to be investigated and understood. We conducted a systematic literature review to lay the groundwork for understanding the relationship between SNS use and users' psychological well‐being and for devising strategies for taking advantage of this relationship. The review included articles published between 2003 and 2016, extracted from major academic databases. Findings revealed that the use of SNSs is both positively and negatively related to users' psychological well‐being. We discuss the factors that moderate this relationship and their implications on users' psychological well‐being. Many of the studies we reviewed lacked a sound theoretical justification for their findings and most involved young and healthy students, leaving other cohorts of SNS users neglected. The paper concludes with the presentation of a platform for future investigation.
Fan, X, Zhao, J, Ren, F, Wang, Y, Feng, Y, Ding, L, Zhao, L, Shang, Y, Li, J, Ni, J, Jia, B, Liu, Y & Chang, Z 2018, 'Dimerization of p15RS mediated by a leucine zipper–like motif is critical for its inhibitory role on Wnt signaling', Journal of Biological Chemistry, vol. 293, no. 20, pp. 7618-7628.
View/Download from: Publisher's site
View description>>
© 2018 Fan et al. We previously demonstrated that p15RS, a newly discovered tumor suppressor, inhibits Wnt/β-catenin signaling by interrupting the formation of the β-catenin–TCF4 complex. However, it remains unclear how p15RS exerts such an inhibitory effect on Wnt signaling based on its molecular structure. In this study, we report that dimerization of p15RS is required for its inhibition of the transcriptional regulation of Wnt-targeted genes. We found that p15RS forms a dimer through a highly conserved leucine zipper–like motif in the coiled-coil terminus domain. In particular, residues Leu-248 and Leu-255 were identified as being responsible for p15RS dimerization, as mutation of these two leucines into prolines disrupted the homodimer formation of p15RS and weakened its suppression of Wnt signaling. Functional studies further confirmed that mutations of p15RS at these residues result in diminished inhibition of cell proliferation and tumor formation. We therefore concluded that dimerization of p15RS governed by the leucine zipper–like motif is critical for its inhibition of Wnt/β-catenin signaling and tumorigenesis.
Fonseca, A, Kerick, S, King, J-T, Lin, C-T & Jung, T-P 2018, 'Brain Network Changes in Fatigued Drivers: A Longitudinal Study in a Real-World Environment Based on the Effective Connectivity Analysis and Actigraphy Data', Frontiers in Human Neuroscience, vol. 12.
View/Download from: Publisher's site
View description>>
© 2018 Fonseca, Kerick, King, Lin and Jung. The analysis of neurophysiological changes during driving can clarify the mechanisms of fatigue, considered an important cause of vehicle accidents. The fluctuations in alertness can be investigated as changes in the brain network connections, reflected in the direction and magnitude of the information transferred. Those changes are induced not only by the time on task but also by the quality of sleep. In an unprecedented 5-month longitudinal study, daily sampling actigraphy and EEG data were collected during a sustained-attention driving task within a near-real-world environment. Using a performance index associated with the subjects' reaction times and a predictive score related to the sleep quality, we identify fatigue levels in drivers and investigate the shifts in their effective connectivity in different frequency bands, through the analysis of the dynamical coupling between brain areas. Study results support the hypothesis that combining EEG, behavioral and actigraphy data can reveal new features of the decline in alertness. In addition, the use of directed measures such as the Convergent Cross Mapping can contribute to the development of fatigue countermeasure devices.
Frawley, JK & Dyson, LE 2018, 'Literacies and Learning in Motion', International Journal of Mobile and Blended Learning, vol. 10, no. 4, pp. 52-72.
View/Download from: Publisher's site
View description>>
Mobile and participatory cultures have led to widespread change in the way we communicate; emphasizing user generated content and digital multimedia. In this environment, informal learning may occur through digital and networked activities, with literacy no longer limited to alphabetic and character-based texts. This article explores adult learners' new literacies within the context of a digital mobile storytelling project. A qualitative approach is used to explore the artifacts and practices of nine adult participants who comprise the study. Participants created a range of fiction, non-fiction, poetry and diary-style content in a variety of modes and media. Outcomes from content analysis, interview and survey methods depict mobile digital literacies as characteristically situated, experiential and multimodal. The mobile and participatory nature of this project was catalytic to participants' imaginative re-interpretation of the world around them as sources for meaning making and transformation. This paper contributes a case example of mobile learning with adults in a community setting.
Fu, A, Li, S, Yu, S, Zhang, Y & Sun, Y 2018, 'Privacy-preserving composite modular exponentiation outsourcing with optimal checkability in single untrusted cloud server', Journal of Network and Computer Applications, vol. 118, pp. 102-112.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Outsourcing computing allows users with resource-constrained devices to outsource their complex computation workloads to cloud servers, which is more economical for cloud customers. However, since users lose direct control of the computation task, possible threats need to be addressed, such as data privacy and the correctness of results. Modular exponentiation is one of the most basic and time-consuming operations but widely applied in the field of cryptography. In this paper, we propose two new and efficient algorithms for secure outsourcing of single and multiple composite modular exponentiations. Unlike the algorithms based on two untrusted servers, we outsource modular exponentiation operation to only a single server, eliminating the possible collusion attack with two servers. Moreover, we put forward a new mathematical division method, which hides the base and exponent of the outsourced data, without exposing sensitive information to the cloud server. In addition, compared with other state-of-the-art algorithms, our scheme shows a remarkable improvement in checkability, enabling the user to detect any misbehavior with the optimal probability close to 1. Finally, we use our proposed algorithms as a subroutine to realize Shamir's Identity-Based Signature Scheme and Identity-Based Multi-Signatures Scheme.
Fu, A, Li, Y, Yu, S, Yu, Y & Zhang, G 2018, 'DIPOR: An IDA-based dynamic proof of retrievability scheme for cloud storage systems', Journal of Network and Computer Applications, vol. 104, pp. 97-106.
View/Download from: Publisher's site
View description>>
As cloud storage has become more and more ubiquitous, a large number of consumers rent cloud storage services. However, as users lose direct control over their data, the integrity and availability of the outsourced data become a big concern for users. Accordingly, how to verify the integrity of stored data and recover the availability of corrupted data has become an urgent problem. Moreover, in most cases, users' data is not static but needs to be updated. In this paper, we propose a dynamic proof of retrievability scheme for cloud storage systems, named DIPOR. DIPOR not only can recover the original data of corrupted blocks by using partial healthy data stored in healthy servers, but also supports data updating operations. Furthermore, the number of forks in our scheme is not fixed, which means we can always look for the optimal forks based on the number of data blocks. In addition, the security analysis indicates that our scheme is provably secure, and the performance evaluations show the efficiency of the proposed scheme.
Fu, A, Zhu, Y, Yang, G, Yu, S & Yu, Y 2018, 'Secure outsourcing algorithms of modular exponentiations with optimal checkability based on a single untrusted cloud server', Cluster Computing, vol. 21, no. 4, pp. 1933-1947.
View/Download from: Publisher's site
Gaviria-Marin, M, Merigo, JM & Popa, S 2018, 'Twenty years of the Journal of Knowledge Management: a bibliometric analysis', Journal of Knowledge Management, vol. 22, no. 8, pp. 1655-1687.
View/Download from: Publisher's site
View description>>
Purpose: In 2017, the Journal of Knowledge Management (JKM) celebrates its 20th anniversary. This study aims to show an updated analysis of its publications to provide a general overview of the journal, focusing on a bibliometric analysis of its publications between 1997 and 2016. Design/methodology/approach: The methodology involves two procedures: a performance analysis and a science mapping analysis of JKM. The performance analysis uses a series of bibliometric indicators such as the h-index, productivity and citations. This analysis considers different dimensions, including papers, authors, universities and countries. VOSviewer software is used to carry out the science mapping of JKM which, based on keyword co-occurrence and co-citation, seeks to graphically analyze the structure of the references of this journal. Findings: There is a positive evolution in the number of publications (although with certain oscillations), which shows a growing interest in publishing in JKM. The USA and the UK lead the publications in this journal, although at a regional level, Europe is the most productive. The low participation of emerging economies in JKM is also observed. Practical implications: The paper identifies the leading trends in the journal in terms of papers, authors, institutions, countries, journals and keywords. This study is useful for obtaining a quick snapshot of what is happening in the journal. Originality/value: From the h...
Gheisari, S, Catchpoole, D, Charlton, A, Melegh, Z, Gradhand, E & Kennedy, P 2018, 'Computer Aided Classification of Neuroblastoma Histological Images Using Scale Invariant Feature Transform with Feature Encoding', Diagnostics, vol. 8, no. 3, pp. 56-56.
View/Download from: Publisher's site
View description>>
Neuroblastoma is the most common extracranial solid malignancy in early childhood. Optimal management of neuroblastoma depends on many factors, including histopathological classification. Although histopathology study is considered the gold standard for classification of neuroblastoma histological images, computers can help to extract many more features, some of which may not be recognizable by human eyes. This paper proposes a combination of the Scale Invariant Feature Transform with a feature encoding algorithm to extract highly discriminative features. Then, distinctive image features are classified by a Support Vector Machine classifier into five clinically relevant classes. The advantage of our model is that it extracts features which are more robust to scale variation compared to the Patched Completed Local Binary Pattern and Completed Local Binary Pattern methods. We gathered a database of 1043 histologic images of neuroblastic tumours classified into five subtypes. Our approach identified features that outperformed the state-of-the-art on both our neuroblastoma dataset and a benchmark breast cancer dataset. Our method shows promise for classification of neuroblastoma histological images.
Gheisari, S, Catchpoole, DR, Charlton, A & Kennedy, PJ 2018, 'Convolutional Deep Belief Network with Feature Encoding for Classification of Neuroblastoma Histological Images', Journal of Pathology Informatics, vol. 9, no. 1, pp. 17-17.
View/Download from: Publisher's site
View description>>
© 2018 Journal of Pathology Informatics. Background: Neuroblastoma is the most common extracranial solid tumor in children younger than 5 years old. Optimal management of neuroblastic tumors depends on many factors including histopathological classification. The gold standard for classification of neuroblastoma histological images is visual microscopic assessment. In this study, we propose and evaluate a deep learning approach to classify high-resolution digital images of neuroblastoma histology into five different classes determined by the Shimada classification. Subjects and Methods: We apply a combination of convolutional deep belief network (CDBN) with feature encoding algorithm that automatically classifies digital images of neuroblastoma histology into five different classes. We design a three-layer CDBN to extract high-level features from neuroblastoma histological images and combine with a feature encoding model to extract features that are highly discriminative in the classification task. The extracted features are classified into five different classes using a support vector machine classifier. Data: We constructed a dataset of 1043 neuroblastoma histological images derived from Aperio scanner from 125 patients representing different classes of neuroblastoma tumors. Results: The weighted average F-measure of 86.01% was obtained from the selected high-level features, outperforming state-of-the-art methods. Conclusion: The proposed computer-aided classification system, which uses the combination of deep architecture and feature encoding to learn high-level features, is highly effective in the classification of neuroblastoma histological images.
Gill, AQ, Henderson-Sellers, B & Niazi, M 2018, 'Scaling for agility: A reference model for hybrid traditional-agile software development methodologies', Information Systems Frontiers, vol. 20, no. 2, pp. 315-341.
View/Download from: Publisher's site
View description>>
© 2016, Springer Science+Business Media New York. The adoption of agility at a large scale often requires the integration of agile and non-agile development elements for architecting a hybrid adaptive methodology. The challenge is “which elements or components (agile or non-agile) are relevant to develop the context-aware hybrid adaptive methodology reference architecture?” This paper addresses this important challenge and develops a hybrid adaptive methodology reference architecture model using a qualitative constructive empirical research approach. In this way, we have uncovered the agility, abstraction, business value, business policy, rules, legal, context and facility elements or components that have not been explicitly modelled or discussed in International Standards (IS) such as the ISO/IEC 24744 metamodel. It is anticipated that a context-aware hybrid adaptive methodology can be architected by using the proposed context-aware hybrid adaptive methodology reference architecture elements for a particular situation when using a situational method engineering approach.
Glynn, PD, Voinov, AA, Shapiro, CD & White, PA 2018, 'Response to Comment by Walker et al. on “From Data to Decisions: Processing Information, Biases, and Beliefs for Improved Management of Natural Resources and Environments”', Earth's Future, vol. 6, no. 5, pp. 762-769.
View/Download from: Publisher's site
View description>>
Our different kinds of minds and types of thinking affect the ways we decide, take action, and cooperate (or not). The comment by Walker et al. (2018, https://doi.org/10.1002/2017EF000750) illustrates several points made by Glynn et al. (2017, https://doi.org/10.1002/2016EF000487) and many other articles. Namely, biases and beliefs often drive scientific reasoning, and scientists, just like other humans, are intimately attached to their values and heuristics. Scientists, just like many other people, also tend to read and interpret text in ways that best match their individual perceptions of a problem or issue: in many cases paraphrasing and changing the meaning of what they read to better match their initial ideas. Walker et al. are doing interesting and important research on uncertainty. Nonetheless, they misinterpret the work, assumptions, and conclusions brought forth by Glynn et al. (2017, https://doi.org/10.1002/2016EF000487).
Gonzalez Cruz, C, Naderpour, M & Ramezani, F 2018, 'Water resource selection and optimisation for shale gas developments in Australia: A combinatorial approach', Computers & Industrial Engineering, vol. 124, pp. 1-11.
View/Download from: Publisher's site
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2018, 'A Gene-Based Positive Selection Detection Approach to Identify Vaccine Candidates Using Toxoplasma gondii as a Test Case Protozoan Pathogen', Frontiers in Genetics, vol. 9, no. AUG.
View/Download from: Publisher's site
View description>>
© 2018 Goodswen, Kennedy and Ellis. Over the last two decades, various in silico approaches have been developed and refined that attempt to identify protein and/or peptide vaccine candidates from informative signals encoded in protein sequences of a target pathogen. To date, no signal has been identified that clearly indicates a protein will effectively contribute to a protective immune response in a host. The premise for this study is that proteins under positive selection from the immune system are more likely to be suitable vaccine candidates than proteins exposed to other selection pressures. Furthermore, our expectation is that protein sequence regions encoding major histocompatibility complex (MHC) binding peptides will contain consecutive positive selection sites. Using freely available data and bioinformatic tools, we present a high-throughput approach through a pipeline that predicts positive selection sites, protein subcellular locations, and sequence locations of medium to high T-cell MHC class I binding peptides. Positive selection sites are estimated from a sequence alignment by comparing rates of synonymous (dS) and non-synonymous (dN) substitutions among protein coding sequences of orthologous genes in a phylogeny. The main pipeline output is a list of protein vaccine candidates predicted to be naturally exposed to the immune system and containing sites under positive selection. Candidates are ranked with respect to the number of consecutive sites located on protein sequence regions encoding MHCI-binding peptides. Results are constrained by the reliability of prediction programs and quality of input data. Protein sequences from Toxoplasma gondii ME49 strain (TGME49) were used as a case study. Surface antigen (SAG), dense granules (GRA), microneme (MIC), and rhoptry (ROP) proteins are considered worthy T. gondii candidates. Given 8263 TGME49 protein sequences processed anonymously, the top 10 predicted candidates were all worthy candidates...
Graham, C, Smith, W, Moncur, W & van den Hoven, E 2018, 'Introduction: Mortality in Design', Design Issues, vol. 34, no. 1, pp. 3-14.
View/Download from: Publisher's site
Gray, S, Voinov, A, Paolisso, M, Jordan, R, BenDor, T, Bommel, P, Glynn, P, Hedelin, B, Hubacek, K, Introne, J, Kolagani, N, Laursen, B, Prell, C, Schmitt Olabisi, L, Singer, A, Sterling, E & Zellner, M 2018, 'Purpose, processes, partnerships, and products: four Ps to advance participatory socio‐environmental modeling', Ecological Applications, vol. 28, no. 1, pp. 46-61.
View/Download from: Publisher's site
View description>>
Including stakeholders in environmental model building and analysis is an increasingly popular approach to understanding ecological change. This is because stakeholders often hold valuable knowledge about socio‐environmental dynamics and collaborative forms of modeling produce important boundary objects used to collectively reason about environmental problems. Although the number of participatory modeling (PM) case studies and the number of researchers adopting these approaches has grown in recent years, the lack of standardized reporting and limited reproducibility have prevented PM's establishment and advancement as a cohesive field of study. We suggest a four‐dimensional framework (4P) that includes reporting on dimensions of (1) the Purpose for selecting a PM approach (the why); (2) the Process by which the public was involved in model building or evaluation (the how); (3) the Partnerships formed (the who); and (4) the Products that resulted from these efforts (the what). We highlight four case studies that use common PM software‐based approaches (fuzzy cognitive mapping, agent‐based modeling, system dynamics, and participatory geospatial modeling) to understand human–environment interactions and the consequences of ecological changes, including bushmeat hunting in Tanzania and Cameroon, agricultural production and deforestation in Zambia, and groundwater management in India. We demonstrate how standardizing communication about PM case studies can lead to innovation and new insights about model‐base...
Gu, Y, Gu, M, Long, Y, Xu, G, Yang, Z, Zhou, J & Qu, W 2018, 'An enhanced short text categorization model with deep abundant representation', World Wide Web, vol. 21, no. 6, pp. 1705-1719.
View/Download from: Publisher's site
View description>>
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Short text categorization is a crucial issue for many applications, e.g., Information Retrieval, Question-Answering Systems, MRI Database Construction and so forth. Much research focuses on data sparsity and ambiguity issues in short text categorization. To tackle these issues, we propose a novel short text categorization strategy based on abundant representation, which utilizes a Bi-directional Recurrent Neural Network (Bi-RNN) with Long Short-Term Memory (LSTM) and a topic model to capture more contextual and semantic information. The Bi-RNN enriches contextual information, and the topic model discovers more latent semantic information for an abundant text representation of short text. Experimental results demonstrate that the proposed model is comparable to state-of-the-art neural network models and that the proposed method is effective.
Guo, J, Ren, W, Ren, Y & Zhu, T 2018, 'A Watermark-Based in-Situ Access Control Model for Image Big Data', Future Internet, vol. 10, no. 8, pp. 69-69.
View/Download from: Publisher's site
View description>>
When large images are used for big data analysis, they impose new challenges in protecting image privacy. For example, a geographic image may consist of several sensitive areas or layers. When it is uploaded into servers, the image will be accessed by diverse subjects. Traditional access control methods regulate access privileges to a single image, and their access control strategies are stored in servers, which imposes two shortcomings: (1) fine-grained access control is not guaranteed for areas/layers in a single image that need to be kept secret from different roles; and (2) access control policies that are stored in servers suffer from multiple attacks (e.g., transferring attacks). In this paper, we propose a novel watermark-based access control model in which access control policies are associated with the objects being accessed (called an in-situ model). The proposed model integrates access control policies as watermarks within images, without relying on the availability of servers or connecting networks. Access control for images is still maintained even when images are redistributed to further subjects. Therefore, access control policies can be delivered together with the big data of images. Moreover, we propose a hierarchical key-role-area model for fine-grained encryption, especially for large images such as geographic maps. The extensive analysis justifies the security and performance of the proposed model.
Han, B, Tsang, IW, Chen, L, Yu, CP & Fung, S-F 2018, 'Progressive Stochastic Learning for Noisy Labels', IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 5136-5148.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Large-scale learning problems require a plethora of labels that can be efficiently collected from crowdsourcing services at low cost. However, labels annotated by crowdsourced workers are often noisy, which inevitably degrades the performance of large-scale optimizations including the prevalent stochastic gradient descent (SGD). Specifically, these noisy labels adversely affect updates of the primal variable in conventional SGD. To solve this challenge, we propose a robust SGD mechanism called progressive stochastic learning (POSTAL), which naturally integrates the learning regime of curriculum learning (CL) with the update process of vanilla SGD. Our inspiration comes from the progressive learning process of CL, namely learning from 'easy' tasks to 'complex' tasks. Through the robust learning process of CL, POSTAL aims to yield robust updates of the primal variable on an ordered label sequence, namely, from 'reliable' labels to 'noisy' labels. To realize the POSTAL mechanism, we design a cluster of 'screening losses,' which sorts all labels from the reliable region to the noisy region. To sum up, POSTAL using screening losses ensures robust updates of the primal variable on reliable labels first, then on noisy labels incrementally until convergence. In theory, we derive the convergence rate of POSTAL realized by screening losses. Meanwhile, we provide the robustness analysis of representative screening losses. Experimental results on UCI (University of California Irvine) simulated and Amazon Mechanical Turk crowdsourcing data sets show that POSTAL using screening losses is more effective and robust than several existing baselines.
He, Q, Wang, J & Lu, H 2018, 'A hybrid system for short-term wind speed forecasting', Applied Energy, vol. 226, pp. 756-771.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Wind speed forecasting is important for high-efficiency utilization of wind energy. Correspondingly, numerous researchers have focused on developing reliable forecasting models for wind speed, which is often noisy, unstable and irregular. Current approaches can adapt to various wind speed data; however, many of them ignore the importance of selecting the modeling sample, which often results in poor forecasting performance. In this study, a hybrid forecasting system is proposed that contains three modules: data preprocessing, data clustering, and forecasting modules. In this system, a decomposition technique is applied to reduce the influence of noise within the raw data series and to obtain a more stable sequence that is conducive to extracting traits from the original data. To extract the characteristic of similarity within wind speed data, a kernel-based fuzzy c-means clustering algorithm is used in the data clustering module. In the forecasting module, a sample with a highly similar fluctuation pattern is selected as the training dataset, which reduces the training requirement of the model and improves forecasting accuracy. The experimental results indicate that the developed system outperforms the discussed traditional forecasting models with respect to forecasting accuracy.
He, T, Cai, L, Meng, T, Chen, L, Deng, Z & Cao, Z 2018, 'Parallel Community Detection Based on Distance Dynamics for Large-Scale Network', IEEE Access, vol. 6, pp. 42775-42789.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Finding a high-quality community structure in large-scale networks is a challenging data mining task. The distance dynamics model has proved effective on regular-sized networks, but it is difficult to discover community structure effectively in large-scale networks (0.1-1 billion edges) due to the limits of machine hardware and high time complexity. In this paper, we propose a parallel community detection algorithm based on the distance dynamics model, called P-Attractor, which is capable of handling community detection in large networks. Our algorithm first develops a graph partitioning method to divide a large network into many sub-networks while maintaining the complete neighbor structure of the original network. Then, the traditional distance dynamics model is improved with a dynamic interaction process to simulate the distance evolution of each sub-network. Finally, we discover the real community structure by removing all external edges after the evolution process. In our extensive experiments on multiple synthetic networks and real-world networks, the results show the effectiveness and efficiency of P-Attractor; the execution time on 4 threads and 32 threads is around 10 and 2 hours, respectively. Our proposed algorithm has the potential to discover communities in billion-scale networks, such as Uk-2007.
He, X, Wang, K, Huang, H & Liu, B 2018, 'QoE-Driven Big Data Architecture for Smart City', IEEE Communications Magazine, vol. 56, no. 2, pp. 88-93.
View/Download from: Publisher's site
View description>>
In the era of big data, the applications/services of the smart city are expected to offer end users better QoE than in a conventional smart city. Nevertheless, various types of sensors will produce an increasing volume of big data along with the implementation of a smart city, where we face redundant and diverse data. Therefore, providing satisfactory QoE will become the major challenge in the big-data-based smart city. In this article, to enhance the QoE, we propose a novel big data architecture consisting of three planes: the data storage plane, the data processing plane, and the data application plane. The data storage plane stores a wide variety of data collected by sensors and originating from different data sources. Then the data processing plane filters, analyzes, and processes the ocean of data to make decisions autonomously for extracting high-quality information. Finally, the application plane initiates the execution of the events corresponding to the decisions delivered from the data processing plane. Under this architecture, we particularly use machine learning techniques, trying to acquire accurate data and deliver precise information to end users. Simulation results indicate that our proposals could achieve high QoE performance for the smart city.
Herr, D, Paler, A, Devitt, SJ & Nori, F 2018, 'A local and scalable lattice renormalization method for ballistic quantum computation', npj Quantum Information, vol. 4, no. 1, pp. 1-8.
View/Download from: Publisher's site
View description>>
A recent proposal has shown that it is possible to perform linear-optics quantum computation using a ballistic generation of the lattice. Yet, due to the probabilistic generation of its cluster state, it is not possible to use the fault-tolerant Raussendorf lattice, which requires a lower failure rate during the entanglement-generation process. Previous work in this area showed proof-of-principle linear-optics quantum computation, while this paper presents an approach to it which is more practical, satisfying several key constraints. We develop a classical measurement scheme that purifies a large faulty lattice to a smaller lattice with entanglement faults below threshold. A single application of this method can reduce the entanglement error rate to 7% for an input failure rate of 25%. Thus, we can show that it is possible to achieve fault tolerance for ballistic methods.
Herr, D, Paler, A, Devitt, SJ & Nori, F 2018, 'Lattice surgery on the Raussendorf lattice', Quantum Science and Technology, vol. 3, no. 3, pp. 035011-035011.
View/Download from: Publisher's site
View description>>
© 2018 IOP Publishing Ltd. Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.
Hu, C, Lu, J, Liu, X & Zhang, G 2018, 'Robust vehicle routing problem with hard time windows under demand and travel time uncertainty', Computers & Operations Research, vol. 94, pp. 139-153.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Due to an increase in customer-oriented service strategies designed to meet more complex and exacting customer requirements, meeting a scheduled time window has become an important part of designing vehicle routes for logistics activities. However, practically, the uncertainty in travel times and customer demand often means vehicles miss these time windows, increasing service costs and decreasing customer satisfaction. In an effort to find a solution that meets the needs of real-world logistics, we examine the vehicle routing problem with hard time windows under demand and travel time uncertainty. To address the problem, we build a robust optimization model based on novel route-dependent uncertainty sets. However, due to the complex nature of the problem, the robust model is only able to tackle small-sized instances using standard solvers. Therefore, to tackle large instances, we design a two-stage algorithm based on a modified adaptive variable neighborhood search heuristic. The first stage of the algorithm minimizes the total number of vehicle routes, while the second stage minimizes the total travel distance. Extensive computational experiments are conducted with modified versions of Solomon's benchmark instances. The numerical results show that the proposed two-stage algorithm is able to find optimal solutions for small-sized instances and good-quality robust solutions for large-sized instances with little increase to the total travel distance and/or the number of vehicles used. A detailed analysis of the results also reveals several managerial insights for decision-makers in the logistics industry.
Huang, J, Duan, Q, Guo, S, Yan, Y & Yu, S 2018, 'Converged Network-Cloud Service Composition with End-to-End Performance Guarantee', IEEE Transactions on Cloud Computing, vol. 6, no. 2, pp. 545-557.
View/Download from: Publisher's site
View description>>
The crucial role of networking in cloud computing calls for federated management of both computing and networking resources for end-to-end service provisioning. Application of the Service-Oriented Architecture (SOA) in both cloud computing and networking enables a convergence of network and cloud service provisioning. One of the key challenges to high performance converged network-cloud service provisioning lies in composition of network and cloud services with end-to-end performance guarantee. In this paper, we propose a QoS-aware service composition approach to tackling this challenging issue. We first present a system model for network-cloud service composition and formulate the service composition problem as a variant of Multi-Constrained Optimal Path (MCOP) problem. We then propose an approximation algorithm to solve the problem and give theoretical analysis on properties of the algorithm to show its effectiveness and efficiency for QoS-aware network-cloud service composition. Performance of the proposed algorithm is evaluated through extensive experiments and the obtained results indicate that the proposed method achieves better performance in service composition than the best current MCOP approaches.
Huitinga, I & Webster, MJ 2018, 'Preface', pp. ix-ix.
View/Download from: Publisher's site
Hussain, W, Hussain, FK, Hussain, O, Bagia, R & Chang, E 2018, 'Risk-based framework for SLA violation abatement from the cloud service provider’s perspective', The Computer Journal, vol. 61, no. 9, pp. 1306-1322.
View/Download from: Publisher's site
View description>>
© The British Computer Society 2018. The constant increase in the growth of the cloud market creates new challenges for cloud service providers. One such challenge is the need to avoid possible service level agreement (SLA) violations and their consequences through good SLA management. Researchers have proposed various frameworks and have made significant advances in managing SLAs from the perspective of both cloud users and providers. However, none of these approaches guides the service provider on the necessary steps to take for SLA violation abatement; that is, the prediction of possible SLA violations, the process to follow when the system identifies the threat of SLA violation, and the recommended action to take to avoid SLA violation. In this paper, we approach this process of SLA violation detection and abatement from a risk management perspective. We propose a Risk Management-based Framework for SLA violation abatement (RMF-SLA) following the formation of an SLA which comprises SLA monitoring, violation prediction and decision recommendation. Through experiments, we validate and demonstrate the suitability of the proposed framework for assisting cloud providers to minimize possible service violations and penalties.
Hussain, W, Hussain, FK, Saberi, M, Hussain, OK & Chang, E 2018, 'Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs', Future Generation Computer Systems, vol. 89, pp. 464-477.
View/Download from: Publisher's site
View description>>
© 2018 In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and consumer that contain a list of obligations and commitments which need to be satisfied by both parties during the transaction. From a service provider's perspective, a violation of such a commitment leads to penalties in terms of money and reputation and thus has to be effectively managed. In the literature, this problem has been studied under the domain of cloud service management. One aspect required to manage cloud services after the formation of SLAs is to predict the future Quality of Service (QoS) of cloud parameters to ascertain whether they will lead to violations. Various approaches in the literature perform this task using different prediction techniques; however, none of them studies the accuracy of each. This is important because the results of each prediction approach vary according to the pattern of the input data, and an incorrect choice of prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time series and machine learning-based prediction approaches. In each category, we test many different techniques and rank them according to their accuracy in predicting future QoS. Our analysis helps the cloud service provider to choose an appropriate prediction approach (whether time series or machine learning based) and, further, to utilize the best method depending on input data patterns to obtain an accurate prediction result and better manage their SLAs to avoid violation penalties.
Ivanyos, G, Kulkarni, R, Qiao, Y, Santha, M & Sundaram, A 2018, 'On the complexity of trial and error for constraint satisfaction problems', Journal of Computer and System Sciences, vol. 92, pp. 48-64.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. In 2013 Bei, Chen and Zhang introduced a trial and error model of computing, and applied it to some constraint satisfaction problems. In this model the input is hidden by an oracle which, for a candidate assignment, reveals some information about a violated constraint if the assignment is not satisfying. In this paper we initiate a systematic study of constraint satisfaction problems in the trial and error model, by adopting a formal framework for CSPs, and defining several types of revealing oracles. Our main contribution is to develop a transfer theorem for each type of revealing oracle. To any hidden CSP with a specific type of revealing oracle, the transfer theorem associates another CSP in the normal setting, such that their complexities are polynomial-time equivalent. This in principle transfers the study of a large class of hidden CSPs to the study of normal CSPs. We apply the transfer theorems to get polynomial-time algorithms or hardness results for several families of concrete problems.
Ivanyos, G, Qiao, Y & Subrahmanyam, KV 2018, 'Constructive non-commutative rank computation is in deterministic polynomial time', computational complexity, vol. 27, no. 4, pp. 561-593.
View/Download from: Publisher's site
View description>>
© 2018, Springer International Publishing AG, part of Springer Nature. We extend the techniques developed in Ivanyos et al. (Comput Complex 26(3):717–763, 2017) to obtain a deterministic polynomial-time algorithm for computing the non-commutative rank of linear spaces of matrices over any field. The key new idea that causes a reduction in the time complexity of the algorithm in Ivanyos et al. (2017) from exponential time to polynomial time is a reduction procedure that keeps the blow-up parameter small, and there are two methods to implement this idea: the first one is a greedy argument that removes certain rows and columns, and the second one is an efficient algorithmic version of a result of Derksen & Makam (Adv Math 310:44–63, 2017b), who were the first to observe that the blow-up parameter can be controlled. Both methods rely crucially on the regularity lemma from Ivanyos et al. (2017). In this note, we improve that lemma by removing a coprime condition there.
Ji, K, Chen, Z, Sun, R, Ma, K, Yuan, Z & Xu, G 2018, 'GIST: A generative model with individual and subgroup-based topics for group recommendation', Expert Systems with Applications, vol. 94, pp. 81-93.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd In this paper, a topic-based probabilistic model named GIST is proposed to infer group activities and make group recommendations. Compared with existing individual-based aggregation methods, it considers not only individual members' interests but also the interests of certain subgroups. Intuitively, when a group of users wants to take part in an activity, not every group member is decisive; instead, it is more likely that subgroups of members with close relationships lead to the final activity decision. That motivates our study on jointly considering individual members' choices and subgroups' choices for group recommendations. Based on this, our model uses two kinds of unshared topics to model individual members' interests and subgroups' interests separately, and then makes final recommendations according to the choices from the two aspects with a weight-based scheme. Moreover, the link information in the graph topology of the groups can be used to optimize the weights of our model. The experimental results on real-life data show that the recommendation accuracy is significantly improved by GIST compared with the state-of-the-art methods.
Jiang, J, Wen, S, Yu, S, Xiang, Y & Zhou, W 2018, 'Rumor Source Identification in Social Networks with Time-Varying Topology', IEEE Transactions on Dependable and Secure Computing, vol. 15, no. 1, pp. 166-179.
View/Download from: Publisher's site
View description>>
© 2004-2012 IEEE. Identifying rumor sources in social networks plays a critical role in limiting the damage caused by them through the timely quarantine of the sources. However, the temporal variation in the topology of social networks and the ongoing dynamic processes challenge our traditional source identification techniques that are designed for static networks. In this paper, we borrow an idea from criminology and propose a novel method to overcome these challenges. First, we reduce the time-varying networks to a series of static networks by introducing a time-integrating window. Second, instead of inspecting every individual as traditional techniques do, we adopt a reverse dissemination strategy to specify a set of suspects of the real rumor source. This process addresses the scalability issue of source identification problems, and therefore dramatically promotes the efficiency of rumor source identification. Third, to determine the real source from the suspects, we employ a novel microscopic rumor spreading model to calculate the maximum likelihood (ML) for each suspect. The one who can provide the largest ML estimate is considered as the real source. The evaluations are carried out on real social networks with time-varying topology. The experiment results show that our method can reduce the source-seeking area by 60-90 percent in various time-varying social networks. The results further indicate that our method can accurately identify the real source, or an individual who is very close to the real source. To the best of our knowledge, the proposed method is the first that can be used to identify rumor sources in time-varying social networks.
Jordaan, J, Punzet, S, Melnikov, A, Sanches, A, Oberst, S, Marburg, S & Powell, DA 2018, 'Measuring monopole and dipole polarizability of acoustic meta-atoms', Applied Physics Letters, vol. 113, no. 22, pp. 224102-224102.
View/Download from: Publisher's site
View description>>
We present a method to extract monopole and dipole polarizability from experimental measurements of two-dimensional acoustic meta-atoms. In contrast to extraction from numerical results, this enables all second-order effects and uncertainties in material properties to be accounted for. We apply the technique to 3D-printed labyrinthine meta-atoms of a variety of geometries. We show that the polarizability of structures with a shorter acoustic path length agrees well with numerical results. However, those with longer path lengths suffer strong additional damping, which we attribute to the strong viscous and thermal losses in narrow channels.
Jordan, R, Gray, S, Zellner, M, Glynn, PD, Voinov, A, Hedelin, B, Sterling, EJ, Leong, K, Olabisi, LS, Hubacek, K, Bommel, P, BenDor, TK, Jetter, AJ, Laursen, B, Singer, A, Giabbanelli, PJ, Kolagani, N, Carrera, LB, Jenni, K & Prell, C 2018, 'Twelve Questions for the Participatory Modeling Community', Earth's Future, vol. 6, no. 8, pp. 1046-1057.
View/Download from: Publisher's site
View description>>
Participatory modeling engages the implicit and explicit knowledge of stakeholders to create formalized and shared representations of reality and has evolved into a field of study as well as a practice. Participatory modeling researchers and practitioners who focus specifically on environmental resources met at the National Socio‐Environmental Synthesis Center (SESYNC) in Annapolis, Maryland, over the course of 2 years to discuss the state of the field and future directions for participatory modeling. What follows is a description of 12 overarching groups of questions that could guide future inquiry.
Kaiwartya, O, Abdullah, AH, Cao, Y, Lloret, J, Kumar, S, Shah, RR, Prasad, M & Prakash, S 2018, 'Virtualization in Wireless Sensor Networks: Fault Tolerant Embedding for Internet of Things', IEEE Internet of Things Journal, vol. 5, no. 2, pp. 571-580.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Recently, virtualization in wireless sensor networks (WSNs) has witnessed significant attention due to the growing service domain for Internet of Things (IoT). Related literature on virtualization in WSNs explored resource optimization without considering communication failure in WSNs environments. The failure of a communication link in WSNs impacts many virtual networks running IoT services. In this context, this paper proposes a framework for optimizing fault tolerance (FT) in virtualization in WSNs, focusing on heterogeneous networks for service-oriented IoT applications. An optimization problem is formulated considering FT and communication delay as two conflicting objectives. An adapted nondominated sorting-based genetic algorithm (A-NSGA) is developed to solve the optimization problem. The major components of A-NSGA include chromosome representation, FT and delay computation, crossover and mutation, and nondominance-based sorting. Analytical and simulation-based comparative performance evaluation has been carried out. From the analysis of results, it is evident that the framework effectively optimizes FT for virtualization in WSNs.
Kang, G, Li, J & Tao, D 2018, 'Shakeout: A New Approach to Regularized Deep Neural Network Training', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 5, pp. 1245-1258.
View/Download from: Publisher's site
View description>>
© 1979-2012 IEEE. Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering that the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
Karimi, F & Matous, P 2018, 'Mapping diversity and inclusion in student societies: A social network perspective', Computers in Human Behavior, vol. 88, pp. 184-194.
View/Download from: Publisher's site
Kendrick, L, Musial, K & Gabrys, B 2018, 'Change point detection in social networks—Critical review with experiments', Computer Science Review, vol. 29, pp. 1-13.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Inc. Change point detection in social networks is an important element in developing the understanding of dynamic systems. This complex and growing area of research has no clear guidelines on what methods to use or in which circumstances. This paper critically discusses several possible network metrics to be used for a change point detection problem and conducts an experimental, comparative analysis using the Enron and MIT networks. Bayesian change point detection analysis is conducted on different global graph metrics (Size, Density, Average Clustering Coefficient, Average Shortest Path) as well as metrics derived from the Hierarchical and Block models (Entropy, Edge Probability, No. of Communities, Hierarchy Level Membership). The results produced the posterior probability of a change point at weekly time intervals that were analysed against ground truth change points using precision and recall measures. Results suggest that computationally heavy generative models offer only slightly better results compared to some of the global graph metrics. The simplest metrics used in the experiments, i.e. the numbers of nodes and links, are the recommended choice for detecting overall structural changes.
Khan, MA, Umer, T, Khan, SU, Yu, S & Rachedi, A 2018, 'IEEE Access Special Section Editorial: Green Cloud and Fog Computing: Energy Efficiency and Sustainability Aware Infrastructures, Protocols, and Applications', IEEE Access, vol. 6, pp. 12280-12283.
View/Download from: Publisher's site
Laengle, S, Modak, NM, Merigó, JM & De La Sotta, C 2018, 'Thirty years of the International Journal of Computer Integrated Manufacturing: a bibliometric analysis', International Journal of Computer Integrated Manufacturing, vol. 31, no. 12, pp. 1247-1268.
View/Download from: Publisher's site
View description>>
© 2018, © 2018 Informa UK Limited, trading as Taylor & Francis Group. The International Journal of Computer Integrated Manufacturing was established in 1988 with the idea of advancing research in computer integrated manufacturing (CIM) technologies and promoting the application of those technologies within industry. The journal was created to facilitate the exchange of new knowledge between industry and academia derived from both research and practical application. To celebrate the 30-year journey of the journal, this study develops a bibliometric analysis of all the publications of the journal to 2017. Information was collected using the Web of Science Core Collection database. The present study has been conducted to highlight the significant contributions of the journal in terms of impact, topics, authors, universities and countries. Finally, visualisation of similarities (VOS) viewer software was used to present graphical representations of the bibliographic coupling, co-citation, citation, co-authorship and co-occurrence of keywords.
Laengle, S, Modak, NM, Merigo, JM & Zurita, G 2018, 'Twenty-Five Years of Group Decision and Negotiation: A Bibliometric Overview', Group Decision and Negotiation, vol. 27, no. 4, pp. 505-542.
View/Download from: Publisher's site
View description>>
© 2018, Springer Science+Business Media B.V., part of Springer Nature. Twenty-five years ago, in 1992, a journal named Group Decision and Negotiation was established in association with the Institute for Operations Research and the Management Sciences with the vision of promoting theoretical and empirical research, real-world applications and case studies on group decision and negotiation processes. To celebrate its 25 years of continuous and outstanding contributions, this study aims to develop a bibliometric analysis of the publications of the journal between 1992 and 2016. The Web of Science Core Collection database is used to identify the leading trends of the journal in terms of impacts, topics, authors, universities and countries. Moreover, it utilizes the visualization of similarities viewer software to analyze the bibliographic couplings, co-citations, citations, co-authorships and co-occurrences of keywords.
Lanese, I & Devitt, S 2018, 'Preface for the special issue of the 8th Conference on Reversible Computation (RC 2016)', Science of Computer Programming, vol. 151, pp. 1-1.
View/Download from: Publisher's site
Laranjo, L, Dunn, AG, Tong, HL, Kocaballi, AB, Chen, J, Bashir, R, Surian, D, Gallego, B, Magrabi, F, Lau, AYS & Coiera, E 2018, 'Conversational agents in healthcare: a systematic review', Journal of the American Medical Informatics Association, vol. 25, no. 9, pp. 1248-1258.
View/Download from: Publisher's site
View description>>
Objective: Our objective was to review the characteristics, current applications, and evaluation measures of conversational agents with unconstrained natural language input capabilities used for health-related purposes. Methods: We searched PubMed, Embase, CINAHL, PsycInfo, and ACM Digital using a predefined search strategy. Studies were included if they focused on consumers or healthcare professionals; involved a conversational agent using any unconstrained natural language input; and reported evaluation measures resulting from user interaction with the system. Studies were screened by independent reviewers and Cohen's kappa measured inter-coder agreement. Results: The database search retrieved 1513 citations; 17 articles (14 different conversational agents) met the inclusion criteria. Dialogue management strategies were mostly finite-state and frame-based (6 and 7 conversational agents, respectively); agent-based strategies were present in one type of system. Two studies were randomized controlled trials (RCTs), 1 was cross-sectional, and the remaining were quasi-experimental. Half of the conversational agents supported consumers with health tasks such as self-care. The only RCT evaluating the efficacy of a conversational agent found a significant effect in reducing depression symptoms (effect size d = 0.44, p = .04). Patient safety was rarely evaluated in the included studies. Conclusions: The use of conversational agents with unconstrained natural language input capabilities for health-related purposes is an emerging field of research, where the few published studies were mainly quasi-experimental, and rarely evaluated efficacy or safety. Future studies would benefit from more robust experimental des...
Lenka, RK, Rath, AK, Tan, Z, Sharma, S, Puthal, D, Simha, NVR, Prasad, M, Raja, R & Tripathi, SS 2018, 'Building Scalable Cyber-Physical-Social Networking Infrastructure Using IoT and Low Power Sensors', IEEE Access, vol. 6, pp. 30162-30173.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Wireless sensors are an important component to develop the Internet of Things (IoT) sensing infrastructure. Enormous numbers of sensors are connected with each other to form a network (well known as a wireless sensor network) to complete the IoT infrastructure. These deployed wireless sensors have limited energy and processing capabilities. The IoT infrastructure becomes a key factor in building cyber-physical-social networking infrastructure, where all these sensing devices transmit data toward the cloud data center. Data routing toward the cloud data center using such low-power sensors is still a challenging task. In order to prolong the lifetime of the IoT sensing infrastructure and build a scalable cyber infrastructure, sensing optimization and energy-efficient data routing are required. Toward addressing these issues of IoT sensing, this paper proposes a novel rendezvous data routing protocol for low-power sensors. The proposed method divides the sensing area into a number of clusters to lessen the energy consumption through data accumulation and aggregation. As a result, a smaller amount of data streams into the network. Another major reason to select cluster-based data routing is to reduce the control overhead. Finally, the simulation of the proposed method is done in the Castalia simulator to observe the performance. It has been concluded that the proposed method is energy efficient and prolongs the network's lifetime for scalable IoT infrastructure.
León-Castro, E, Avilés-Ochoa, E & Merigó, JM 2018, 'Induced Heavy Moving Averages', International Journal of Intelligent Systems, vol. 33, no. 9, pp. 1823-1839.
View/Download from: Publisher's site
León-Castro, E, Avilés-Ochoa, E, Merigó, JM & Gil-Lafuente, AM 2018, 'Heavy Moving Averages and Their Application in Econometric Forecasting', Cybernetics and Systems, vol. 49, no. 1, pp. 26-43.
View/Download from: Publisher's site
View description>>
© 2017 Taylor & Francis Group, LLC. This paper presents the heavy ordered weighted moving average (HOWMA) operator. It is an aggregation operator that uses the main characteristics of two well-known techniques: the heavy ordered weighted averaging (OWA) and the moving averages. Therefore, this operator provides a parameterized family of aggregation operators from the minimum to the total operator and includes the OWA operator as a special case. It uses a heavy weighting vector in the moving average formulation and it represents the information available and the knowledge of the decision maker about the future scenarios of the phenomenon, according to his attitudinal character. Some of the main properties of this operator are studied, including a wide range of families of HOWMA operators such as the heavy moving average and heavy weighted moving average operators. The HOWMA operator is also extended using generalized and quasi-arithmetic means. An example concerning the foreign exchange rate between US dollars and Mexican pesos is also presented.
Li, G, Chen, H, Peng, S, Li, X, Wang, C, Yu, S & Yin, P 2018, 'A Collaborative Data Collection Scheme Based on Optimal Clustering for Wireless Sensor Networks', Sensors, vol. 18, no. 8, pp. 2487-2487.
View/Download from: Publisher's site
View description>>
In recent years, energy-efficient data collection has evolved into the core problem in resource-constrained Wireless Sensor Networks (WSNs). Different from existing data collection models in WSNs, we propose a collaborative data collection scheme based on optimal clustering to collect the sensed data in an energy-efficient and load-balanced manner. After dividing the data collection process into the intra-cluster data collection step and the inter-cluster data collection step, we model the optimal clustering problem as a separable convex optimization problem and solve it to obtain the analytical solutions of the optimal clustering size and the optimal data transmission radius. Then, we design a Cluster Heads (CHs)-linking algorithm based on the pseudo Hilbert curve to build a CH chain with the goal of collecting the compressed sensed data among CHs in an accumulative way. Furthermore, we also design a distributed cluster-constructing algorithm to construct the clusters around the virtual CHs in a distributed manner. The experimental results show that the proposed method not only reduces the total energy consumption and prolongs the network lifetime, but also effectively balances the distribution of energy consumption among CHs. Compared with the existing compression-based and non-compression-based data collection schemes, the average reductions in energy consumption are 17.9% and 67.9%, respectively. Furthermore, the average network lifetime is extended by no less than 20 times under the same comparison.
Li, H, Liu, D, Dai, Y, Luan, TH & Yu, S 2018, 'Personalized Search Over Encrypted Data With Efficient and Secure Updates in Mobile Clouds', IEEE Transactions on Emerging Topics in Computing, vol. 6, no. 1, pp. 97-109.
View/Download from: Publisher's site
View description>>
Mobile cloud computing has emerged as a key enabling technology to overcome the physical limitations of mobile devices toward scalable and flexible mobile services. In the mobile cloud environment, searchable encryption, which enables direct search over encrypted data, is a key technique to maintain both the privacy and usability of outsourced data in the cloud. To address this issue, many research efforts resort to searchable symmetric encryption (SSE) and searchable public-key encryption (SPE). In this paper, we improve on the existing works by developing a more practical searchable encryption technique, which can support dynamic updating operations in mobile cloud applications. Specifically, we take advantage of both the SSE and SPE techniques, and propose PSU, a Personalized Search scheme over encrypted data with efficient and secure Updates in mobile cloud. Through thorough security analysis, we demonstrate that PSU can achieve a high security level. Using extensive experiments in a real-world mobile environment, we show that PSU is more efficient compared with the existing proposals.
Li, H, Wang, J, Lu, H & Guo, Z 2018, 'Research and application of a combined model based on variable weight for short term wind speed forecasting', Renewable Energy, vol. 116, pp. 669-684.
View/Download from: Publisher's site
Li, J, Luo, H, Zhang, S, Yu, S & Wolf, T 2018, 'Traffic Engineering in Information-Centric Networking: Opportunities, Solutions and Challenges', IEEE Communications Magazine, vol. 56, no. 11, pp. 124-130.
View/Download from: Publisher's site
View description>>
© 1979-2012 IEEE. ICN is a novel communication paradigm that assigns names to content chunks (instead of IP addresses to hosts). ICN offers inherent features such as content metadata and in-network caching, which make it possible to reduce content transmission cost and retrieval latency and to improve the users' QoE. To achieve these goals, TE techniques need to be leveraged to deal with bursty and unevenly distributed Internet traffic demand. In this article, we explore new TE opportunities in ICN based on information-centric features and provide an overview of the state-of-the-art TE solutions that use these features.
Li, L, Deng, N, Ren, W, Kou, B, Zhou, W & Yu, S 2018, 'Multi-Service Resource Allocation in Future Network With Wireless Virtualization', IEEE Access, vol. 6, pp. 53854-53868.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The future network is envisioned to be a multi-service network which can support various types of terminal devices with diverse quality of service requirements. As one of the key technologies, wireless virtualization establishes different virtual networks for different application scenarios and user requirements through flexibly slicing and sharing wireless resources in future networks. In this paper, we first propose a service-centric wireless virtualization model to slice the network according to service types. In this model, how to share and slice wireless resources is one of the fundamental issues to be addressed. Therefore, we formulate and solve a multi-service resource allocation problem to realize spectrum virtualization. Different from the existing strategies, we decouple the multi-service resource allocation problem in the proposed virtualization model to make it easier to solve. Specifically, it is solved in two stages: inter-slice resource allocation and intra-slice resource scheduling. In the first stage, we formulate the inter-slice resource allocation as a discrete optimization problem and propose a heuristic algorithm to obtain a sub-optimal solution of this NP-hard problem. In the second stage, we adapt several existing scheduling algorithms to schedule users of specific services. Numerical results show the superiority of the proposed scheduling algorithms over the existing ones when applied to schedule specific services. Moreover, the proposed resource allocation scheme is verified to meet the properties of virtualization and to solve the multi-service resource allocation problem well.
Li, L, Liu, J, Sun, Y, Xu, G, Yuan, J & Zhong, L 2018, 'Unsupervised keyword extraction from microblog posts via hashtags', Journal of Web Engineering, vol. 17, no. 1-2, pp. 93-120.
View description>>
Nowadays, huge amounts of text are being generated for social networking purposes on the Web. Keyword extraction from such texts, like microblog posts, benefits many applications such as advertising, search, and content filtering. Unlike traditional web pages, a microblog post usually has a special social feature like a hashtag that is topical in nature and generated by users. Extracting keywords related to hashtags can reflect the intents of users and thus provides us better understanding of post content. In this paper, we propose a novel unsupervised keyword extraction approach for microblog posts by treating hashtags as topical indicators. Our approach consists of two hashtag-enhanced algorithms. One is a topic model algorithm that infers topic distributions biased to hashtags on a collection of microblog posts. The words are ranked by their average topic probabilities. Our topic model algorithm can not only find the topics of a collection, but also extract hashtag-related keywords. The other is a random walk based algorithm. It first builds a word-post weighted graph by taking into account the posts themselves. Then, a hashtag-biased random walk is applied on this graph, which guides the algorithm to extract keywords according to hashtag topics. Last, the final ranking score of a word is determined by the stationary probability after a number of iterations. We evaluate our proposed approach on a collection of real Chinese microblog posts. Experiments show that our approach is more effective in terms of precision than traditional approaches that consider no hashtags. The result achieved by the combination of the two algorithms performs even better than each individual algorithm.
Li, S, Han, K, Ansari, N, Bao, Q, Hu, D, Liu, J, Yu, S & Zhu, Z 2018, 'Improving SDN Scalability With Protocol-Oblivious Source Routing: A System-Level Study', IEEE Transactions on Network and Service Management, vol. 15, no. 1, pp. 275-288.
View/Download from: Publisher's site
View description>>
Software-defined networking (SDN) has been considered as a break-through technology for the next-generation Internet. It enables fine-grained flow control that can make networks more flexible and programmable. However, this might lead to scalability issues due to the possible flow state explosion in SDN switches. SDN-based source routing can reduce the volume of flow-tables significantly by encoding the path information into packet headers. In this paper, we leverage the protocol-oblivious forwarding instruction set to design protocol-oblivious source routing (POSR), which is a protocol-independent, bandwidth-efficient, and flow-table-saving packet forwarding technique. We lay out the packet format for POSR, come up with the packet processing pipelines for realizing unicast, multicast, and link failure recovery, and implement POSR in a protocol-oblivious forwarding-enabled SDN network system. Experiments are then performed in a network testbed, which consists of 14 stand-alone SDN switches, to validate the advantages of POSR. Specifically, we compare POSR with several OpenFlow-based benchmarks for unicast, multicast, and link failure recovery, and confirm that POSR can reduce flow-table utilization effectively, shorten path setup latency and expedite link failure recovery.
Li, S, Ren, W, Zhu, T & Choo, K-KR 2018, 'Lico: A Lightweight Access Control Model for Inter-Networking Linkages', IEEE Access, vol. 6, pp. 51748-51755.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Processes in operating systems are assigned different privileges to access different resources. A process may invoke other processes whose privileges are different; thus, its privileges are expanded (or escalated) due to such improper 'inheritance.' Inter-networking can also occur between processes, either transitively or iteratively. This complicates the monitoring of inappropriate privilege assignment/escalation, which can result in information leakage. Such information leakage occurs due to privilege transitivity and inheritance and can be defined as a general access control problem for inter-networking linkages. This is also a topic that is generally less studied in existing access control models. Specifically, in this paper, we propose a lightweight directed graph-based model, LiCo, which is designed to facilitate the authorization of privileges among inter-networking processes. To the best of our knowledge, this is the first general access control model for inter-invoking processes and general inter-networking linkages.
Li, T, Zhou, H, Luo, H & Yu, S 2018, 'SERvICE: A Software Defined Framework for Integrated Space-Terrestrial Satellite Communication', IEEE Transactions on Mobile Computing, vol. 17, no. 3, pp. 703-716.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The existing satellite communication systems suffer from traditional design, such as slow configuration, inflexible traffic engineering, and coarse-grained Quality of Service (QoS) guarantee. To address these issues, in this paper, we propose SERvICE, a Software dEfined fRamework for Integrated spaCe-tErrestrial satellite Communication, based on Software Defined Network (SDN) and Network Function Virtualization (NFV). We first introduce the three planes of SERvICE, Management Plane, Control Plane, and Forwarding Plane. The framework is designed to achieve flexible satellite network traffic engineering and fine-grained QoS guarantee. We analyze the agility of the space component of SERvICE. Then, we give a description of the implementation of the prototype with the help of the Delay Tolerant Network (DTN) and OpenFlow. We conduct two experiments to validate the feasibility of SERvICE and the functionality of the prototype. In addition, we propose two heuristic algorithms, namely the QoS-oriented Satellite Routing (QSR) algorithm and the QoS-oriented Bandwidth Allocation (QBA) algorithm, to guarantee the QoS requirement of multiple users. The algorithms are also evaluated in the prototype. The experimental results show the efficiency of the proposed algorithms in terms of file transmission delay and transmission rate.
Li, X, Nie, L, Xu, H & Wang, X 2018, 'Collaborative Fall Detection Using Smart Phone and Kinect', Mobile Networks and Applications, vol. 23, no. 4, pp. 775-788.
View/Download from: Publisher's site
Li, Y, Ren, W, Zhu, T, Ren, Y, Qin, Y & Jie, W 2018, 'RIMS: A Real-time and Intelligent Monitoring System for live-broadcasting platforms', Future Generation Computer Systems, vol. 87, pp. 259-266.
View/Download from: Publisher's site
View description>>
Personal live shows on Internet streaming platforms are currently blooming as one of the most popular applications on mobile phones, especially attracting millions of young users. The content supervision on live streaming platforms, in which there are thousands or hundreds of show rooms for performing and chatting synchronously, is a major concern with the development of this new service. Traditional image capture and real-time content analysis experience huge difficulties such as processing delay, data overwhelming, and matching overhead. In this paper, we propose a comprehensive method to monitor real-time live streams and to identify illegal or unchartered live misbehaviors intelligently based on various proposed aspects instead of image analysis only. The proposed system, called RIMS, makes use of several novel indicators on show room status rather than analyzing images solely, to support real-time requirements. Three detecting techniques are adopted: self-adaptive threshold-based abnormal traffic detection, sensitive Danmaku comment perception, and frame difference analysis. RIMS can detect dramatic increases in the number of users in a show room, filter sensitive words in Danmaku, and capture segmentation of video scenes by frame difference analysis. We deploy our system to monitor a typical live-broadcasting platform called panda.tv, and the overall accuracy of detection via the three indicators reaches 90.1%. The application of RIMS can change current supervision methods on live platforms, which rely entirely on real-time manual review or after-the-event checks. The key techniques in RIMS can also be widely employed in many other mobile applications in edge computing, such as video surveillance in the Internet of Things and mobile short video sharing.
Liao, H, Xu, Z, Herrera, F & Merigó, JM 2018, 'Editorial Message: Special Issue on Hesitant Fuzzy Linguistic Decision Making: Algorithms, Theory and Applications', International Journal of Fuzzy Systems, vol. 20, no. 7, pp. 2083-2083.
View/Download from: Publisher's site
Lin, C-T, Chiu, T-C, Wang, Y-K, Chuang, C-H & Gramann, K 2018, 'Granger causal connectivity dissociates navigation networks that subserve allocentric and egocentric path integration', Brain Research, vol. 1679, pp. 91-100.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Studies on spatial navigation demonstrate a significant role of the retrosplenial complex (RSC) in the transformation of egocentric and allocentric information into complementary spatial reference frames (SRFs). The tight anatomical connections of the RSC with a wide range of other cortical regions processing spatial information support its vital role within the human navigation network. To better understand how different areas of the navigational network interact, we investigated the dynamic causal interactions of brain regions involved in solving a virtual navigation task. EEG signals were decomposed by independent component analysis (ICA) and subsequently examined for information flow between clusters of independent components (ICs) using the direct short-time directed transfer function (sdDTF). The results revealed information flow between the anterior cingulate cortex and the left prefrontal cortex in the theta (4–7 Hz) frequency band and between the prefrontal, motor, parietal, and occipital cortices as well as the RSC in the alpha (8–13 Hz) frequency band. When participants' preference for a distinct reference frame (egocentric vs. allocentric) during navigation was considered, a dominant occipito-parieto-RSC network was identified in allocentric navigators. These results are in line with the assumption that the RSC, parietal, and occipital cortices are involved in transforming egocentric visual-spatial information into an allocentric reference frame. Moreover, the RSC demonstrated the strongest causal flow during changes in orientation, suggesting that this structure directly provides information on heading changes in humans.
Lin, C-T, Hsieh, T-Y, Liu, Y-T, Lin, Y-Y, Fang, C-N, Wang, Y-K, Yen, G, Pal, NR & Chuang, C-H 2018, 'Minority Oversampling in Kernel Adaptive Subspaces for Class Imbalanced Datasets', IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 5, pp. 950-962.
View/Download from: Publisher's site
View description>>
© 1989-2012 IEEE. The class imbalance problem in machine learning occurs when certain classes are underrepresented relative to the others, leading to a learning bias toward the majority classes. To cope with the skewed class distribution, many learning methods featuring minority oversampling have been proposed, which are proved to be effective. To reduce information loss during feature space projection, this study proposes a novel oversampling algorithm, named minority oversampling in kernel adaptive subspaces (MOKAS), which exploits the invariant feature extraction capability of a kernel version of the adaptive subspace self-organizing maps. The synthetic instances are generated from well-trained subspaces and then their pre-images are reconstructed in the input space. Additionally, these instances characterize nonlinear structures present in the minority class data distribution and help the learning algorithms to counterbalance the skewed class distribution in a desirable manner. Experimental results on both real and synthetic data show that the proposed MOKAS is capable of modeling complex data distribution and outperforms a set of state-of-the-art oversampling algorithms.
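The oversampling idea summarized above can be illustrated with a deliberately simplified, SMOTE-style sketch: synthetic minority instances are generated by interpolating between nearby minority samples. This is only the linear flavour of the idea; MOKAS itself generates instances in a kernel-adaptive subspace and reconstructs their pre-images, which is not reproduced here.

```python
import numpy as np

def oversample_minority(X_min, n_new, k=3, rng=None):
    """Interpolation-based minority oversampling (a linear
    simplification of kernel-subspace methods such as MOKAS)."""
    rng = np.random.default_rng(rng)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest minority neighbours
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))            # pick a minority sample
        j = nn[i, rng.integers(k)]              # and one of its neighbours
        lam = rng.random()                      # interpolation weight in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synth)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = oversample_minority(X_min, n_new=6, k=2, rng=0)
print(X_new.shape)  # (6, 2)
```

Because each synthetic point lies on a segment between two minority samples, the new instances stay inside the minority region rather than drifting into majority territory.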
Lin, C-T, Huang, C-S, Yang, W-Y, Singh, AK, Chuang, C-H & Wang, Y-K 2018, 'Real-Time EEG Signal Enhancement Using Canonical Correlation Analysis and Gaussian Mixture Clustering', Journal of Healthcare Engineering, vol. 2018, pp. 1-11.
View/Download from: Publisher's site
View description>>
Electroencephalogram (EEG) signals are usually contaminated with various artifacts, such as signal associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from the EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm that is based on canonical correlation analysis (CCA), feature extraction, and the Gaussian mixture model (GMM) to improve the quality of EEG signals. The CCA was used to decompose EEG signals into components followed by feature extraction to extract representative features and GMM to cluster these features into groups to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research.
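As a hedged illustration of the CCA stage described in this abstract (a textbook construction, not the authors' pipeline), the sketch below computes canonical correlations between a multichannel signal and its one-sample-delayed copy; components with low autocorrelation are the artifact-like candidates that the feature-extraction and GMM stages would then classify.

```python
import numpy as np

def cca_components(X, Y):
    """Canonical correlation analysis via SVD of the whitened
    cross-covariance. Rows are observations, columns are channels."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X)
    Cyy = Y.T @ Y / len(Y)
    Cxy = X.T @ Y / len(X)
    def inv_sqrt(C):                       # C^(-1/2) for a symmetric PD matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    return s, inv_sqrt(Cxx) @ U            # canonical correlations, X-side weights

# the delayed-signal trick: correlate the recording with a one-sample-
# shifted copy; smooth (cortical-like) components autocorrelate strongly,
# noisy artifact-like components do not
rng = np.random.default_rng(0)
t = np.arange(1000)
eeg = np.c_[np.sin(0.1 * t), rng.standard_normal(1000)]  # smooth + noisy channel
corrs, W = cca_components(eeg[1:], eeg[:-1])
print(corrs)
```

With this synthetic recording the first canonical correlation (driven by the sinusoid) is close to 1, while the second (driven by white noise) is near 0, which is the separation the clustering stage would exploit.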
Lin, C-T, King, J-T, Fan, J-W, Appaji, A & Prasad, M 2018, 'The Influence of Acute Stress on Brain Dynamics During Task Switching Activities', IEEE Access, vol. 6, pp. 3249-3255.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Task switching is a common method to investigate executive functions such as working memory and attention. This paper investigates the effect of acute stress on brain activity using task switching. Surprisingly few studies have been conducted in this area. There is behavioral and physiological evidence to indicate that acute stress makes participants more tense, which results in better performance. In the current study, under stressful conditions, the participants gave quick responses with high accuracy. However, unexpected results were found in relation to salivary cortisol. Furthermore, the electroencephalogram results showed that acute stress was pronounced at the frontal and parietal midline cortex, especially on the theta, alpha, and gamma bands. One possible explanation for these results may be that the participants changed their strategy in relation to executive functions during stressful conditions by paying more attention, which resulted in a higher working memory capacity and enhanced performance during task switching.
Lin, C-T, King, J-T, Singh, AK, Gupta, A, Ma, Z, Lin, J-W, Machado, AMC, Appaji, A & Prasad, M 2018, 'Voice Navigation Effects on Real-World Lane Change Driving Analysis Using an Electroencephalogram', IEEE Access, vol. 6, pp. 26483-26492.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Improving the degree of assistance given by in-car navigation systems is an important issue for the safety of both drivers and passengers. There is a vast body of research that assesses the usability and interfaces of the existing navigation systems but very few investigations study the impact on the brain activity based on navigation-based driving. In this paper, a real-world experiment is designed to acquire the electroencephalography (EEG) and in-car information to analyze the dynamic brain activity while the driver is performing the lane-changing task based on the auditory instructions from an in-car navigation system. The results show that auditory cues can influence the speed and increase the frontal EEG delta and beta power, which is related to motor preparation and decision making during a lane change. However, there were no significant results on the alpha power. A better lane-change assessment can be obtained using specific vehicle information (lateral acceleration and heading angle) with EEG features for future naturalized driving study.
Lin, C-T, Nascimben, M, King, J-T & Wang, Y-K 2018, 'Task-related EEG and HRV entropy factors under different real-world fatigue scenarios', Neurocomputing, vol. 311, pp. 24-31.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. We classified the alertness levels of 17 subjects in different experimental sessions in a six-month longitudinal study based on a daily sampling system and related alertness to performance on a psychomotor vigilance task (PVT). To the best of our knowledge, this is the first EEG-based longitudinal study of real-world fatigue. Alertness and PVT performance showed a monotonically increasing relationship. Moreover, we identified two measures in the entropy domain from electroencephalography (EEG) and heart rate variability (HRV) signals that were able to identify the extreme classes of PVT performers. Wiener entropy on selected leads from the frontal-parietal axis was able to discriminate the group of best performers. Sample entropy from the HRV signal was able to identify the worst performers. This joint EEG-HRV quantification provides complementary indexes to indicate more reliable human performance.
Lin, C-T, Prasad, M, Chung, C-H, Puthal, D, El-Sayed, H, Sankar, S, Wang, Y-K, Singh, J & Sangaiah, AK 2018, 'IoT-Based Wireless Polysomnography Intelligent System for Sleep Monitoring', IEEE Access, vol. 6, pp. 405-414.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Polysomnography (PSG) is considered the gold standard in the diagnosis of obstructive sleep apnea (OSA). The diagnosis of OSA requires an overnight sleep experiment in a laboratory. However, due to limitations in the number of labs and beds available, patients often need to wait a long time before being diagnosed and eventually treated. In addition, the unfamiliar environment and restricted mobility when a patient is being tested with a polysomnogram may disturb their sleep, resulting in an incomplete or corrupted test. Therefore, it is posited that a PSG conducted in the patient's home would be more reliable and convenient. The Internet of Things (IoT) plays a vital role in the e-Health system. In this paper, we implement an IoT-based wireless polysomnography system for sleep monitoring, which utilizes a battery-powered, miniature, wireless, portable, and multipurpose recorder. A Java-based PSG recording program on the personal computer is designed to save several bio-signals and transfer them into the European Data Format. These PSG records can be used to determine a patient's sleep stages and diagnose OSA. This system is portable, lightweight, and has low power consumption. To demonstrate the feasibility of the proposed PSG system, a comparison was made between the standard PSG-Alice 5 Diagnostic Sleep System and the proposed system. Several healthy volunteers participated in the PSG experiment and were monitored by both the standard PSG-Alice 5 Diagnostic Sleep System and the proposed system simultaneously, under the supervision of specialists at the Sleep Laboratory in Taipei Veterans General Hospital. A comparison of the results of the time-domain waveform and sleep stage of the two systems shows that the proposed system is reliable and can be applied in practice. The proposed system can facilitate the long-term tracing and research of personal sleep monitoring at home.
Liu, A, Lu, J, Liu, F & Zhang, G 2018, 'Accumulating regional density dissimilarity for concept drift detection in data streams', Pattern Recognition, vol. 76, pp. 256-272.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd In a non-stationary environment, newly received data may have different knowledge patterns from the data used to train learning models. As time passes, a learning model's performance may become increasingly unreliable. This problem is known as concept drift and is a common issue in real-world domains. Concept drift detection has attracted increasing attention in recent years. However, very few existing methods pay attention to small regional drifts, and their accuracy may vary due to differing statistical significance tests. This paper presents a novel concept drift detection method, based on regional-density estimation, named nearest neighbor-based density variation identification (NN-DVI). It consists of three components. The first is a k-nearest neighbor-based space-partitioning schema (NNPS), which transforms unmeasurable discrete data instances into a set of shared subspaces for density estimation. The second is a distance function that accumulates the density discrepancies in these subspaces and quantifies the overall differences. The third component is a tailored statistical significance test by which the confidence interval of a concept drift can be accurately determined. The distance applied in NN-DVI is sensitive to regional drift and has been proven to follow a normal distribution. As a result, the NN-DVI's accuracy and false-alarm rate are statistically guaranteed. Additionally, several benchmarks have been used to evaluate the method, including both synthetic and real-world datasets. The overall results show that NN-DVI has better performance in terms of addressing problems related to concept drift-detection.
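A toy rendering of the regional-density intuition behind NN-DVI (omitting the NNPS schema and the tailored significance test): pool two data windows, form each point's k-nearest-neighbour region, and accumulate how unevenly the two windows populate those regions.

```python
import numpy as np

def nn_density_distance(W1, W2, k=5):
    """Simplified flavour of NN-DVI: a per-neighbourhood share of 0.5
    means both windows populate the region equally (no drift); larger
    deviations accumulate into a drift score."""
    P = np.vstack([W1, W2])
    labels = np.r_[np.zeros(len(W1)), np.ones(len(W2))]
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]            # k-NN region of each point
    share = labels[nn].mean(axis=1)              # fraction of window-2 neighbours
    return np.abs(share - 0.5).mean()

rng = np.random.default_rng(1)
same = nn_density_distance(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
drift = nn_density_distance(rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2)))
print(same < drift)  # True: the shifted window yields a larger distance
```

The published method goes further: it proves the distribution of its distance statistic, so the drift threshold comes from a significance test rather than an ad-hoc cutoff.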
Liu, B, Zhou, W, Gao, L, Zhou, H, Luan, TH & Wen, S 2018, 'Malware Propagations in Wireless Ad Hoc Networks', IEEE Transactions on Dependable and Secure Computing, vol. 15, no. 6, pp. 1016-1026.
View/Download from: Publisher's site
View description>>
© 2004-2012 IEEE. Accurate malware propagation modeling in wireless ad hoc networks (WANETs) represents a fundamental and open research issue which shows distinguished challenges due to complicated access competition, severe channel interference, and dynamic connectivity. As an effort towards the issue, in this paper, we investigate the malware propagation under two spread schemes including Unicast and Broadcast, in Spread Mode and Communication Mode, respectively. We highlight our contributions in three-fold in the light of previous literature works. First, a bound of malware infection rate for each scheme is provided by applying the wireless network capacity theories. Second, the impact of mobility on malware propagations has been studied. Third, discussion of the relationship between different schemes and practical applications is provided. Numerical simulations and detailed performance analysis show that the Broadcast Scheme with Spread Mode is most dangerous in the sense of malware propagation speed in WANETs, and mobility will greatly increase the risk further. The results achieved in this paper not only provide insights on the malware propagation characteristics in WANETs, but also serve as fundamental guidelines on designing defense schemes.
Liu, B, Zhou, W, Zhu, T, Gao, L & Xiang, Y 2018, 'Location Privacy and Its Applications: A Systematic Study', IEEE Access, vol. 6, pp. 17606-17624.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. This paper surveys the current research status of location privacy issues in mobile applications. The survey spans five aspects of study: the definition of location privacy, attacks and adversaries, mechanisms to preserve the privacy of locations, location privacy metrics, and the current status of location-based applications. Through this comprehensive review, all the interrelated aspects of location privacy are integrated into a unified framework. Additionally, the current research progress in each area is reviewed individually, and the links between existing academic research and its practical applications are identified. This in-depth analysis of the current state-of-play in location privacy is designed to provide a solid foundation for future studies in the field.
Liu, F, Lu, J & Zhang, G 2018, 'Unsupervised Heterogeneous Domain Adaptation via Shared Fuzzy Equivalence Relations', IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3555-3568.
View/Download from: Publisher's site
Liu, P, Liu, J & Merigó, JM 2018, 'Partitioned Heronian means based on linguistic intuitionistic fuzzy numbers for dealing with multi-attribute group decision making', Applied Soft Computing, vol. 62, pp. 395-422.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. The Heronian mean (HM) operator has the advantage of considering the interrelationships between parameters, and the linguistic intuitionistic fuzzy number (LIFN), in which the membership and non-membership are expressed by linguistic terms, can more easily describe the uncertain and vague information existing in the real world. In this paper, we propose the partitioned Heronian mean (PHM) operator, which assumes that all attributes are partitioned into several parts and that members in the same part are interrelated while members in different parts are not, and develop some new operational rules for LIFNs to consider the interactions between the membership function and non-membership function, especially when the degree of non-membership is zero. Then we extend the PHM operator to LIFNs based on the new operational rules, and propose the linguistic intuitionistic fuzzy partitioned Heronian mean (LIFPHM) operator, the linguistic intuitionistic fuzzy weighted partitioned Heronian mean (LIFWPHM) operator, the linguistic intuitionistic fuzzy partitioned geometric Heronian mean (LIFPGHM) operator and the linguistic intuitionistic fuzzy weighted partitioned geometric Heronian mean (LIFWPGHM) operator. Further, we develop two methods to solve multi-attribute group decision making (MAGDM) problems with linguistic intuitionistic fuzzy information. Finally, we give some examples to verify the effectiveness of the two proposed methods by comparing them with existing methods.
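For crisp numbers, the partitioned Heronian mean described above reduces to a short computation: apply the generalized Heronian mean within each part of the attribute partition, then average across parts. The sketch below shows that crisp core only; the paper's operators act on LIFNs with their new operational rules, and the example values are illustrative.

```python
from itertools import combinations_with_replacement

def heronian_mean(values, p=1, q=1):
    """Generalized Heronian mean HM^{p,q}:
    (2/(n(n+1)) * sum_{i<=j} a_i^p * a_j^q)^(1/(p+q))."""
    n = len(values)
    s = sum(a ** p * b ** q for a, b in combinations_with_replacement(values, 2))
    return (2.0 * s / (n * (n + 1))) ** (1.0 / (p + q))

def partitioned_heronian_mean(values, partition, p=1, q=1):
    """PHM: HM within each part (members interrelated), then the
    arithmetic mean across parts (no cross-part interaction)."""
    return sum(heronian_mean([values[i] for i in part], p, q)
               for part in partition) / len(partition)

attrs = [0.8, 0.6, 0.9, 0.3]                      # illustrative attribute values
print(partitioned_heronian_mean(attrs, partition=[[0, 1], [2, 3]]))
```

Note the idempotency sanity check: the Heronian mean of identical values returns that value, as any well-behaved aggregation operator should.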
Liu, Q, Li, P, Zhao, W, Cai, W, Yu, S & Leung, VCM 2018, 'A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data Driven View', IEEE Access, vol. 6, pp. 12103-12117.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Machine learning is one of the most prevailing techniques in computer science, and it has been widely applied in image processing, natural language processing, pattern recognition, cybersecurity, and other fields. Regardless of the successful applications of machine learning algorithms in many scenarios, e.g., facial recognition, malware detection, automatic driving, and intrusion detection, these algorithms and the corresponding training data are vulnerable to a variety of security threats, inducing a significant performance decrease. Hence, it is vital to call for further attention regarding the security threats and corresponding defensive techniques of machine learning, which motivates a comprehensive survey in this paper. Until now, researchers from academia and industry have identified many security threats against a variety of learning algorithms, including naive Bayes, logistic regression, decision tree, support vector machine (SVM), principal component analysis, clustering, and prevailing deep neural networks. Thus, we revisit existing security threats and give a systematic survey on them from two aspects, the training phase and the testing/inferring phase. After that, we categorize current defensive techniques of machine learning into four groups: security assessment mechanisms, countermeasures in the training phase, countermeasures in the testing or inferring phase, and data security and privacy. Finally, we provide five notable trends in the research on security threats and defensive techniques of machine learning, which warrant in-depth study in the future.
Liu, Q, Wu, R, Chen, E, Xu, G, Su, Y, Chen, Z & Hu, G 2018, 'Fuzzy Cognitive Diagnosis for Modelling Examinee Performance', ACM Transactions on Intelligent Systems and Technology, vol. 9, no. 4, pp. 1-26.
View/Download from: Publisher's site
View description>>
Recent decades have witnessed the rapid growth of educational data mining (EDM), which aims at automatically extracting valuable information from large repositories of data generated by or related to people’s learning activities in educational settings. One of the key EDM tasks is cognitive modelling with examination data, and cognitive modelling tries to profile examinees by discovering their latent knowledge state and cognitive level (e.g. the proficiency of specific skills). However, to the best of our knowledge, the problem of extracting information from both objective and subjective examination problems to achieve more precise and interpretable cognitive analysis remains underexplored. To this end, we propose a fuzzy cognitive diagnosis framework (FuzzyCDF) for examinees’ cognitive modelling with both objective and subjective problems. Specifically, to handle the partially correct responses on subjective problems, we first fuzzify the skill proficiency of examinees. Then we combine fuzzy set theory and educational hypotheses to model the examinees’ mastery on the problems based on their skill proficiency. Finally, we simulate the generation of examination score on each problem by considering slip and guess factors. In this way, the whole diagnosis framework is built. For further comprehensive verification, we apply our FuzzyCDF to three classical cognitive assessment tasks, i.e., predicting examinee performance, slip and guess detection, and cognitive diagnosis visualization. Extensive experiments on three real-world datasets for these assessment tasks prove that FuzzyCDF can reveal the knowledge states and cognitive level of the examinees effectively and interpretatively.
Liu, W, Zhang, H, Chen, X & Yu, S 2018, 'Managing consensus and self-confidence in multiplicative preference relations in group decision making', Knowledge-Based Systems, vol. 162, pp. 62-73.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Preference relations have been widely used in Group Decision Making (GDM) to represent decision makers' preferences over alternatives. Recently, a new kind of preference relation called the self-confident multiplicative preference relation has been presented, which incorporates multiple self-confidence levels into the multiplicative preference relation. This paper proposes an iteration-based consensus building framework for GDM problems with self-confident multiplicative preference relations. In this consensus building framework, an extended logarithmic least squares method is presented to derive the individual and collective priority vectors from the self-confident multiplicative preference relations. Then, a two-step feedback adjustment mechanism is used to assist the decision makers to improve the consensus level, which adjusts both the preference values and the self-confidence levels. Simulation experiments are devised to test the efficiency of the proposed consensus building framework. Simulation results show that, compared with adjusting only the preference values in the iteration-based consensus model, adjusting both the preference values and the self-confidence levels can accelerate consensus reaching and improve the consensus success ratio.
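The (unextended) logarithmic least squares step mentioned above has a closed form for an ordinary multiplicative preference relation: the priority vector is the normalized vector of row geometric means. A minimal sketch, without the paper's self-confidence extension:

```python
import numpy as np

def lls_priority_vector(A):
    """Logarithmic least squares priorities for a multiplicative
    preference relation A (a_ij estimates w_i / w_j): the normalized
    geometric means of the rows."""
    g = np.exp(np.log(A).mean(axis=1))   # row geometric means
    return g / g.sum()

# a perfectly consistent relation built from w = (0.5, 0.3, 0.2)
w = np.array([0.5, 0.3, 0.2])
A = w[:, None] / w[None, :]              # a_ij = w_i / w_j
print(lls_priority_vector(A))            # recovers (0.5, 0.3, 0.2)
```

On a consistent relation the method recovers the generating weights exactly; on inconsistent judgments it returns the log-least-squares fit, which is the starting point the paper extends with self-confidence levels.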
Liu, Y-T, Pal, NR, Marathe, AR & Lin, C-T 2018, 'Weighted Fuzzy Dempster–Shafer Framework for Multimodal Information Integration', IEEE Transactions on Fuzzy Systems, vol. 26, no. 1, pp. 338-352.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. This study proposes an architecture based on a weighted fuzzy Dempster-Shafer framework (WFDSF), which can adjust weights associated with inconsistent evidence obtained by different classification approaches, to realize a fusion system for integrating multimodal information. The Dempster-Shafer theory (D-S theory) of evidence enables us to integrate heterogeneous information from multiple sources to obtain collaborative inferences for a given problem. To conquer various uncertainties associated with the collected information, our system assigns beliefs and plausibilities to possible hypotheses of each decision maker and uses a combination rule to fuse multimodal information. For information fusion, an important step in D-S aggregation is to find an appropriate basic probability assignment scheme for allocating support to each possible hypothesis/class, which remains an arduous and unsolved problem. Here, we propose a mathematical structure to aggregate weighted evidence extracted from two different types of approaches: fuzzy Naïve Bayes and nearest mean classification rule. Further, an intuitionistic belief assignment is employed to address uncertainties between hypotheses/classes. Finally, 12 benchmark problems from the UCI machine learning repository for classification are employed to validate the proposed WFDSF-based scheme. In addition, an application of WFDSF to a practical brain-computer interface problem involving multimodal data fusion is demonstrated in this study. The experimental results show that the WFDSF is superior to several existing methods.
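The evidence-combination step underlying this framework is Dempster's rule. Below is a minimal sketch of the classical rule over basic probability assignments; the paper's WFDSF additionally weights inconsistent evidence sources, which is not shown here.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: intersect focal elements of two
    basic probability assignments (dicts: frozenset -> mass) and
    normalize away the conflicting mass K."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                 # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two evidence sources over the hypotheses {left, right}
m1 = {frozenset({'left'}): 0.6, frozenset({'left', 'right'}): 0.4}
m2 = {frozenset({'left'}): 0.7, frozenset({'right'}): 0.3}
m = dempster_combine(m1, m2)
print(m[frozenset({'left'})])  # 0.70 / 0.82 ≈ 0.8537
```

The combined masses again sum to one, and the vacuous mass on {left, right} from the first source lets the second source's evidence dominate, which is the behaviour the belief/plausibility machinery builds on.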
Llopis-Albert, C, Merigó, JM, Liao, H, Xu, Y, Grima-Olmedo, J & Grima-Olmedo, C 2018, 'Water Policies and Conflict Resolution of Public Participation Decision-Making Processes Using Prioritized Ordered Weighted Averaging (OWA) Operators', Water Resources Management, vol. 32, no. 2, pp. 497-510.
View/Download from: Publisher's site
View description>>
© 2017, Springer Science+Business Media B.V. There is a growing interest in environmental policies about how to implement public participation engagement in the context of water resources management. This paper presents a robust methodology, based on ordered weighted averaging (OWA) operators, to conflict resolution decision-making problems under uncertain environments due to both information and stakeholders’ preferences. The methodology allows integrating heterogeneous interests of the general public and stakeholders on account of their different degree of acceptance or preference and level of influence or power regarding the measures and policies to be adopted, and also of their level of involvement (i.e., information supply, consultation and active involvement). These considerations lead to different environmental and socio-economic outcomes, and levels of stakeholders’ satisfaction. The methodology establishes a prioritization relationship over the stakeholders. The individual stakeholders’ preferences are aggregated through their associated weights, which depend on the satisfaction of the higher priority decision maker. The methodology ranks the optimal management strategies to maximize the stakeholders’ satisfaction. It has been successfully applied to a real case study, providing greater fairness, transparency, social equity and consensus among actors. Furthermore, it provides support to environmental policies, such as the EU Water Framework Directive (WFD), improving integrated water management while covering a wide range of objectives, management alternatives and stakeholders.
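The basic OWA aggregation on which the paper's prioritized operators build is easy to state: weights attach to rank positions rather than to particular arguments, so one weight vector yields the mean, another the optimistic maximum, another the pessimistic minimum. A plain-OWA sketch with illustrative satisfaction scores (the prioritization over stakeholders is the paper's extension and is not reproduced):

```python
def owa(values, weights):
    """Ordered weighted average: sort the arguments in descending
    order, then take the weighted sum against position weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.9, 0.4, 0.7]                      # illustrative stakeholder scores
print(owa(scores, [1/3, 1/3, 1/3]))           # plain average
print(owa(scores, [1.0, 0.0, 0.0]))           # optimistic (max)
print(owa(scores, [0.0, 0.0, 1.0]))           # pessimistic (min)
```

Choosing the weight vector thus encodes the decision attitude, from fully optimistic to fully pessimistic, with the arithmetic mean in between.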
Llopis‐Albert, C, Merigó, JM, Xu, Y & Liao, H 2018, 'Application of Fuzzy Set/Qualitative Comparative Analysis to Public Participation Projects in Support of the EU Water Framework Directive', Water Environment Research, vol. 90, no. 1, pp. 74-83.
View/Download from: Publisher's site
View description>>
ABSTRACT: This study analyzes the level of satisfaction of stakeholders in the public participation process (PPP) of water resources management, which is mandatory according to the EU Water Framework Directive (WFD). The methodology uses a fuzzy set/qualitative comparative analysis (fsQCA), which allows the identification of combinations of factors that lead to the outcome, namely stakeholders' satisfaction. It can deal with uncertain environments owing to the heterogeneous nature of stakeholders and factors. The considered causes include the environmental objectives pursued, the actual capacity to carry out those objectives efficiently, the socioeconomic development of the region, the level of involvement and means of participation of the stakeholders engaged in the PPP, and the alternative policies and measures that should be performed. Results support the argument that different causal paths explain the stakeholders' satisfaction. The methodology may help in the implementation of the WFD and in conflict resolution, since it leads to greater fairness, social equity, and consensus among stakeholders.
Lu, J, Liu, A, Dong, F, Gu, F, Gama, J & Zhang, G 2018, 'Learning under Concept Drift: A Review', IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 12, pp. 1-1.
View/Download from: Publisher's site
View description>>
© IEEE. Concept drift describes unforeseeable changes in the underlying distribution of streaming data over time. Concept drift research involves the development of methodologies and techniques for drift detection, understanding and adaptation. Data analysis has revealed that machine learning in a concept drift environment will result in poor learning results if the drift is not addressed. To help researchers identify which research topics are significant and how to apply related techniques in data analysis tasks, it is necessary that a high-quality, instructive review of current research developments and trends in the concept drift field is conducted. In addition, due to the rapid development of concept drift research in recent years, the methodologies of learning under concept drift have become noticeably systematic, unveiling a framework which has not previously been described in the literature. This paper reviews over 130 high-quality publications in concept drift related research areas, analyzes up-to-date developments in methodologies and techniques, and establishes a framework of learning under concept drift including three main components: concept drift detection, concept drift understanding, and concept drift adaptation. This paper lists and discusses 10 popular synthetic datasets and 14 publicly available benchmark datasets used for evaluating the performance of learning algorithms aiming at handling concept drift. Also, concept drift related research directions are covered and discussed. By providing state-of-the-art knowledge, this survey will directly support researchers in their understanding of research developments in the field of learning under concept drift.
Lu, J, Xuan, J, Zhang, G & Luo, X 2018, 'Structural property-aware multilayer network embedding for latent factor analysis', Pattern Recognition, vol. 76, pp. 228-241.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd. A multilayer network is a structure commonly used to describe and model the complex interactions between sets of entities/nodes. A three-layer example is the author-paper-word structure, in which authors are linked by co-author relations, papers are linked by citation relations, and words are linked by semantic relations. Network embedding, which aims to project the nodes in the network into a relatively low-dimensional space for latent factor analysis, has recently emerged as an effective method for a variety of network-based tasks, such as collaborative filtering and link prediction. However, existing studies of network embedding mostly focus on single-layer networks and overlook the structural properties of the network, e.g., the degree distribution and communities, which are significant for node characterization, such as the preferences of users in a social network. In this paper, we propose four multilayer network embedding algorithms based on Nonnegative Matrix Factorization (NMF) with consideration given to four structural properties: whole network (NNMF), community (CNMF), degree distribution (DNMF), and max spanning tree (TNMF). Experiments on synthetic data show that the proposed algorithms are able to preserve the desired structural properties as designed. Experiments on real-world data show that multilayer network embedding improves the accuracy of document clustering and recommendation, and the four embedding algorithms corresponding to the four structural properties demonstrate differences in performance on these two tasks. These results can be directly used in document clustering and recommendation systems.
Lu, W, Lu, P, Sun, Q, Yu, S & Zhu, Z 2018, 'Profit-Aware Distributed Online Scheduling for Data-Oriented Tasks in Cloud Datacenters', IEEE Access, vol. 6, pp. 15629-15642.
View/Download from: Publisher's site
View description>>
As there is an increasing trend to deploy geographically distributed (geo-distributed) cloud datacenters (DCs), the scheduling of data-oriented tasks in such cloud DC systems becomes an appealing research topic. Specifically, it is challenging to achieve the distributed online scheduling that can handle the tasks' acceptance, data-transfers, and processing jointly and efficiently. In this paper, by considering the store-and-forward and anycast schemes, we formulate an optimization problem to maximize the time-average profit from serving data-oriented tasks in a cloud DC system and then leverage the Lyapunov optimization techniques to propose an efficient scheduling algorithm, i.e., GlobalAny. We also extend the proposed algorithm by designing a data-transfer acceleration scheme to reduce the data-transfer latency. Extensive simulations verify that our algorithms can maximize the time-average profit in a distributed online manner. The results also indicate that GlobalAny and GlobalAnyExt (i.e., GlobalAny with data-transfer acceleration) outperform several existing algorithms in terms of both time-average profit and computation time.
Ma, H, Yu, S, Gabbouj, M & Mueller, P 2018, 'Guest Editorial Special Issue on Multimedia Big Data in Internet of Things', IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3405-3407.
View/Download from: Publisher's site
Maldonado, S, Merigó, J & Miranda, J 2018, 'Redefining support vector machines with the ordered weighted average', Knowledge-Based Systems, vol. 148, pp. 41-46.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. In this work, the classical soft-margin Support Vector Machine (SVM) formulation is redefined with the inclusion of an Ordered Weighted Averaging (OWA) operator. In particular, the hinge loss function is rewritten as a weighted sum of the slack variables to guarantee adequate model fit. The proposed two-step approach trains a soft-margin SVM first to obtain the slack variables, which are then used to induce the order for the OWA operator in a second SVM training. Although first formulated as a linear method, the proposal is extended to nonlinear classification through the use of kernel functions. Experimental results show that the proposed method achieved the best overall predictive performance compared with standard SVM and other well-known data mining methods.
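The OWA operator at the heart of this reformulation is easy to state: sort the arguments in descending order, then apply a fixed weight vector to the sorted positions. A minimal sketch (the slack values below are invented for illustration, not taken from the paper):

```python
# Ordered Weighted Averaging (OWA): sort the inputs in descending order, then
# take the weighted sum with a fixed weight vector. The weights attach to
# *positions* in the ordering, not to particular inputs.
def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

slacks = [0.0, 2.5, 0.3, 1.2]                 # e.g. SVM slack variables
print(owa(slacks, [0.25, 0.25, 0.25, 0.25]))  # uniform weights: the plain mean
print(owa(slacks, [1.0, 0.0, 0.0, 0.0]))      # all weight on rank 1: the max
print(owa(slacks, [0.4, 0.3, 0.2, 0.1]))      # emphasises the largest slacks
```

Choosing weights that decay with rank penalises the worst-violated margins most, which is the intuition behind the OWA-weighted hinge loss.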
Malomo, L, Pérez, J, Iarussi, E, Pietroni, N, Miguel, E, Cignoni, P & Bickel, B 2018, 'FlexMaps: computational design of flat flexible shells for shaping 3D objects.', ACM Trans. Graph., vol. 37, no. 6, pp. 241-241.
View/Download from: Publisher's site
View description>>
We propose FlexMaps, a novel framework for fabricating smooth shapes out of flat, flexible panels with tailored mechanical properties. We start by mapping the 3D surface onto a 2D domain as in traditional UV mapping to design a set of deformable flat panels called FlexMaps. For these panels, we design and obtain specific mechanical properties such that, once they are assembled, the static equilibrium configuration matches the desired 3D shape. FlexMaps can be fabricated from an almost rigid material, such as wood or plastic, and are made flexible in a controlled way by using computationally designed spiraling microstructures.
Mann, RL & Bremner, MJ 2018, 'Approximation Algorithms for Complex-Valued Ising Models on Bounded Degree Graphs', Quantum, vol. 3, p. 162.
View description>>
We study the problem of approximating the Ising model partition function with complex parameters on bounded degree graphs. We establish a deterministic polynomial-time approximation scheme for the partition function when the interactions and external fields are absolutely bounded close to zero. Furthermore, we prove that for this class of Ising models the partition function does not vanish. Our algorithm is based on an approach due to Barvinok for approximating evaluations of a polynomial based on the location of the complex zeros and a technique due to Patel and Regts for efficiently computing the leading coefficients of graph polynomials on bounded degree graphs. Finally, we show how our algorithm can be extended to approximate certain output probability amplitudes of quantum circuits.
Martínez-López, FJ, Merigó, JM, Valenzuela-Fernández, L & Nicolás, C 2018, 'Fifty years of the European Journal of Marketing: a bibliometric analysis', European Journal of Marketing, vol. 52, no. 1/2, pp. 439-468.
View/Download from: Publisher's site
View description>>
Purpose: The European Journal of Marketing was created in 1967. In 2017, the journal celebrates its 50th anniversary. Therefore, the purpose of this study is to present a bibliometric overview of the leading trends of the journal during this period. Design/methodology/approach: This work uses the Scopus database to analyse the most productive authors, institutions and countries, as well as the most cited papers and the citing articles. The investigation uses bibliometric indicators to represent the bibliographic data, including the total number of publications and citations between 1967 and 2017. Additionally, the article develops a graphical visualization of the bibliographic material by using the visualization of similarities viewer software to map journals, keywords and institutions with bibliographic coupling and co-citation analysis. Findings: British authors and institutions are the most productive in the journal, although Australian authors are significantly increasing their number of published papers. Continental European institutions are also increasing their number of publications, but they are still far from the British contribution. In the mid-term, however, authors and institutions from this zone, especially those from large European countries such as France, Germany, Italy and Spain, should approach the performance of British ones, the more so when shorter, more recent periods of analysis are considered rather than the full history. Practical implications: This article is useful for any reader of this journal to understand questions such as papers’ Eu...
Mauleon-mendez, E, Genovart-balaguer, J, Merigo, J & Mulet-forteza, C 2018, 'Sustainable Tourism Research Towards Twenty-Five Years of the Journal of Sustainable Tourism', Advances in Hospitality and Tourism Research (AHTR), vol. 6, no. 1, pp. 23-46.
View/Download from: Publisher's site
View description>>
The Journal of Sustainable Tourism (JOST) is a main journal in ‘Geography, Planning and Development’. This paper presents a general overview of the journal over its lifetime by using bibliometric indicators. The paper uses the Scopus database to analyse the bibliometric data. This analysis includes key issues such as the publication and citation structure of the journal; the most cited articles; the leading authors, institutions, and countries in the journal; and the keywords that are most often used. This paper also uses the visualization of similarities to graphically map the bibliographic material. This analysis provides further insights into how JOST links to other journals and how it links researchers across the globe. These results indicate that JOST is one of the leading journals in the areas where the journal is indexed, with a wide range of authors from institutions and countries from all over the world publishing in it.
Meng, T, Cai, L, He, T, Chen, L, Deng, Z, Ding, W & Cao, Z 2018, 'A Modified Distance Dynamics Model for Improvement of Community Detection', IEEE Access, vol. 6, pp. 63934-63947.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Community detection is a key technique for identifying the intrinsic community structures of complex networks. The distance dynamics model has been proven effective in finding communities with arbitrary size and shape and identifying outliers. However, to simulate distance dynamics, the model requires manual parameter specification and is sensitive to the cohesion threshold parameter, which is difficult to determine. Furthermore, it has difficulty handling rough outliers and ignores hubs (nodes that bridge communities). In this paper, we propose a robust distance dynamics model, namely, Attractor++, which uses a dynamic membership degree. In Attractor++, the dynamic membership degree is used to determine the influence of exclusive neighbors on the distance instead of setting the cohesion threshold. In addition, considering its inefficiency and low accuracy in handling outliers and identifying hubs, we design an outlier optimization model that is based on triangle adjacency. By using optimization rules, a postprocessing method further judges whether a singleton node should be merged into the same community as its triangles or regarded as a hub or an outlier. Extensive experiments on both real-world and synthetic networks demonstrate that our algorithm more accurately identifies nodes that have special roles (hubs and outliers) and more effectively identifies community structures.
Merigó, JM, Gil-Lafuente, AM, Yu, D & Llopis-Albert, C 2018, 'Fuzzy decision making in complex frameworks with generalized aggregation operators', Applied Soft Computing, vol. 68, pp. 314-321.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. This article presents a new aggregation system applied to fuzzy decision making. The fuzzy generalized unified aggregation operator (FGUAO) is a system that integrates many operators by adding a new aggregation process that considers the relevance that each operator has in the analysis. It also deals with an uncertain environment where the information is studied with fuzzy numbers. A wide range of particular cases and properties are studied. This approach is further extended by using quasi-arithmetic means. The paper ends by studying the applicability of the approach to decision making problems regarding European Union decisions. To do so, the work uses a multi-person aggregation process, obtaining the multi-person FGUAO operator. An example concerning the fixation of the interest rate by the European Central Bank is presented.
Merigó, JM, Pedrycz, W, Weber, R & de la Sotta, C 2018, 'Fifty years of Information Sciences: A bibliometric overview', Information Sciences, vol. 432, pp. 245-268.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. Information Sciences is a leading international journal in computer science launched in 1968, thus turning fifty years old in 2018. In order to celebrate its anniversary, this study presents a bibliometric overview of the leading publication and citation trends occurring in the journal. The aim of the work is to identify the most relevant authors, institutions and countries, and to analyze their evolution through time. The paper uses the Web of Science Core Collection to search for the bibliographic information. The study also develops a graphical mapping of the bibliometric material by using the visualization of similarities (VOS) viewer. With this software, the work analyzes bibliographic coupling, citation and co-citation analysis, co-authorship, and co-occurrence of keywords. The results underline the significant growth of the journal through time and its international diversity, with publications from countries all over the world.
Merigó, JM, Zhou, L, Yu, D, Alrajeh, N & Alnowibet, K 2018, 'Probabilistic OWA distances applied to asset management', Soft Computing, vol. 22, no. 15, pp. 4855-4878.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. Average distances are widely used in many fields for calculating the distance between two sets of elements. This paper presents several new average distances by using the ordered weighted average, the probability and the weighted average. First, the work presents the probabilistic ordered weighted averaging weighted average distance (POWAWAD) operator. POWAWAD is a new aggregation operator that uses distance measures in a unified framework between the probability, the weighted average and the ordered weighted average (OWA) operator, and considers the degree of importance that each concept has in the aggregation. The POWAWAD operator includes a wide range of particular cases, including the maximum distance, the minimum distance, the normalized Hamming distance, the weighted Hamming distance and the ordered weighted average distance (OWAD). The article also presents further generalizations by using generalized and quasi-arithmetic means, forming the generalized probabilistic ordered weighted averaging weighted average distance (GPOWAWAD) operator and the quasi-POWAWAD operator. The study ends by analysing the applicability of this new approach to the calculation of average fixed assets. In particular, the work focuses on measuring the average distances between the ideal percentage of fixed assets that the companies of a specific country should have and the real percentage of fixed assets they have. The illustrative example focuses on the Asian market.
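The OWAD special case mentioned above can be sketched directly: the coordinate-wise distances are ranked and then aggregated with positional weights. The fixed-asset percentages below are invented for illustration, not taken from the paper's Asian-market example:

```python
# Ordered Weighted Averaging Distance (OWAD): aggregate the individual
# coordinate distances |x_i - y_i| with an OWA operator, i.e. the weights
# attach to the *rank* of each distance, largest first.
def owad(x, y, weights):
    assert len(x) == len(y) == len(weights)
    diffs = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * d for w, d in zip(weights, diffs))

ideal  = [0.30, 0.25, 0.20]   # hypothetical ideal fixed-asset shares
actual = [0.45, 0.20, 0.10]   # hypothetical observed shares
n = 3
print(owad(ideal, actual, [1 / n] * n))   # uniform weights: normalized Hamming distance
print(owad(ideal, actual, [1, 0, 0]))     # all weight on rank 1: maximum distance
```

The maximum, minimum and Hamming distances named in the abstract all arise from particular choices of the weight vector.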
Mirtalaie, MA, Hussain, OK, Chang, E & Hussain, FK 2018, 'Extracting sentiment knowledge from pros/cons product reviews: Discovering features along with the polarity strength of their associated opinions', Expert Systems with Applications, vol. 114, pp. 267-288.
View/Download from: Publisher's site
View description>>
Sentiment knowledge extraction is a growing area of research in the literature. It helps in analyzing users’ opinions about different entities or events, which can then be utilized by analysts for various purposes. Particularly, feature-based sentiment analysis is one of the challenging research areas that analyzes users’ opinions on various features of a product or service. Of the three formats for the product reviews, our focus in this paper is limited to analyzing the pros/cons type. Due to the nature of pros/cons reviews, they are mostly concise and follow a different structure from other review types. Therefore, specialized techniques are needed to analyze these reviews and extract the customers’ discussed product features along with their personal attitudes. In this paper, we propose the Pros/Cons Sentiment Analyzer (PCSA) framework that exploits dependency relations in extracting sentiment knowledge from pros/cons reviews. We also utilize two different lexicons to ascertain the polarity strength of the extracted features based on the customers’ opinions. Several experiments are conducted to evaluate the performance of PCSA in its different phases.
Mueller, FF, Andres, J, Marshall, J, Svanæs, D, schraefel, MC, Gerling, K, Tholander, J, Martin-Niedecken, AL, Segura, EM, van den Hoven, E, Graham, N, Höök, K & Sas, C 2018, 'Body-centric computing', Interactions, vol. 25, no. 4, pp. 34-39.
View/Download from: Publisher's site
Mulet-Forteza, C, Martorell-Cunill, O, Merigó, JM, Genovart-Balaguer, J & Mauleon-Mendez, E 2018, 'Twenty five years of the Journal of Travel & Tourism Marketing: a bibliometric ranking', Journal of Travel & Tourism Marketing, vol. 35, no. 9, pp. 1201-1221.
View/Download from: Publisher's site
View description>>
© 2018 Informa UK Limited, trading as Taylor & Francis Group. The Journal of Travel & Tourism Marketing (JTTM) is a leading international journal in “Marketing” and “Tourism, Leisure and Hospitality Management.” JTTM published its first issue in 1992 and celebrated its twenty-fifth anniversary in 2017. For that reason, this study analyzes all the publications in the journal since its creation by using a bibliometric approach. The objective is to provide a complete overview of the main factors that affect the journal. This analysis includes key issues such as the distribution of annual publications and citations, the most cited papers, the h-index, citations per paper, the keywords that are most frequently used, the influence on the publishing industry and authors, and the universities and countries that have the most publications. The paper uses the Scopus database to analyze the bibliometric data. Additionally, the paper uses the visualization of similarities (VOS) viewer software to graphically map the bibliographic material. The graphical analysis uses bibliographic coupling, co-citation, citation, and co-occurrence of keywords. These results indicate that JTTM is one of the leading journals in the areas where the journal is indexed, with publications from a wide range of authors, institutions, and countries around the world.
Mustapha, S, Braytee, A & Ye, L 2018, 'Multisource Data Fusion for Classification of Surface Cracks in Steel Pipes', Journal of Nondestructive Evaluation, Diagnostics and Prognostics of Engineering Systems, vol. 1, no. 2.
View/Download from: Publisher's site
View description>>
This paper focuses on the development and validation of a robust framework for surface crack detection and assessment in steel pipes based on measured vibration responses collected using a network of piezoelectric (PZT) wafers. The pipe structure considered in this study contained multiple progressive cracks occurring at different locations and with various orientations (along the circumference or length). The fusion of data collected from multiple PZT wafers was investigated based on two approaches: (a) combining the raw data from all sensors before establishing a statistical model for damage classification and (b) combining the features from each sensor after applying a multiclass support vector machine recursive feature elimination (MCSVM-RFE) for dimensionality reduction, and taking the union of discriminative features among the different sources of data. An MCSVM learning algorithm was employed to train the data and generate a statistical classifier. The dataset comprised ten classes: nine damage cases and the healthy state. Both fusion approaches yielded high prediction accuracy, exceeding 95%, although the number of features needed to reach that accuracy differed between the two approaches. Furthermore, the performance and precision of the classifier were evaluated when data from only a single sensor was used compared with the combined data from all the sensors within the network. Very promising damage classification results were obtained for the case study, which included multiple damage scenarios with different lengths and orientations.
Nawaz, F, Asadabadi, MR, Janjua, NK, Hussain, OK, Chang, E & Saberi, M 2018, 'An MCDM method for cloud service selection using a Markov chain and the best-worst method', Knowledge-Based Systems, vol. 159, pp. 120-131.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Due to the increasing number of cloud services, service selection has become a challenging decision for many organisations. It is even more complicated when cloud users change their preferences based on their requirements and the level of satisfaction with the experienced service. The purpose of this paper is to overcome this drawback and develop a cloud broker architecture for cloud service selection by finding a pattern in the changing priorities of User Preferences (UPs). To do that, a Markov chain is employed to find the pattern, which is then connected to the Quality of Service (QoS) of the available services. A recently proposed Multi Criteria Decision Making (MCDM) method, the Best Worst Method (BWM), is used to rank the services, and we show that it outperforms the Analytic Hierarchy Process (AHP). The proposed methodology provides a prioritized list of the services based on the pattern of changing UPs. The methodology is validated through a case study using real QoS performance data of Amazon Elastic Compute Cloud (Amazon EC2) services.
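The Markov-chain step of such a methodology amounts to finding the stationary distribution of a preference-transition matrix, i.e. the long-run pattern of changing priorities. The three preference states and the transition probabilities below are hypothetical, chosen only to illustrate the computation:

```python
# A user's changing preference priorities modelled as a Markov chain over
# preference states; the long-run pattern is the stationary distribution pi
# satisfying pi = pi * P. Power iteration on a hypothetical 3-state matrix
# (states: cost-driven, performance-driven, reliability-driven).
def stationary(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n                    # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.6, 0.3, 0.1],   # each row sums to 1: transition probabilities
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
pi = stationary(P)
# pi[j] is the long-run fraction of time the user's top priority is state j;
# these weights can then be matched against the services' QoS attributes.
```

For this matrix the chain settles with the performance-driven state as the most frequent priority.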
Nawaz, F, Janjua, NK, Hussain, OK, Hussain, FK, Chang, E & Saberi, M 2018, 'Event-driven approach for predictive and proactive management of SLA violations in the Cloud of Things', Future Generation Computer Systems, vol. 84, pp. 78-97.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. In a dynamic environment such as the cloud-of-things, one of the most critical factors for successful service delivery is the QoS under defined constraints. Even though guarantees in the form of service level agreements (SLAs) are provided to users, many services exhibit dynamic Quality of Service (QoS) variations. This QoS variation as well as changes in the behavior and state of the service is caused by some internal events (such as varying loads) and external events (such as location and weather), which results in frequent SLA violations. Most of the existing violation prediction approaches use historic data to predict future QoS values. They do not consider dynamic changes and the events that cause these changes in QoS attributes. In this paper, we propose an event-driven-based proactive approach for predicting SLA violations by combining logic-based reasoning and probabilistic inferencing. The results show that our proposed approach is efficient and proactively identifies SLA violations under uncertain QoS observations.
Niazi, M, Mishra, A & Gill, AQ 2018, 'What Do Software Practitioners Really Think About Software Process Improvement Project Success? An Exploratory Study', Arabian Journal for Science and Engineering, vol. 43, no. 12, pp. 7719-7735.
View/Download from: Publisher's site
View description>>
© 2018, King Fahd University of Petroleum & Minerals. Software practitioners have always shown a significant interest in implementing software process improvement (SPI) initiatives to ensure the delivery of quality products. The software industry and SPI methodologies have evolved over time; however, many SPI initiatives have still not been successful. There is a need to understand software practitioners’ perspectives on SPI success, which can be helpful for tailoring or improving effective situation-specific SPI methodologies. This research presents an exploratory study of Turkish software development organizations. The main research question is: what do software practitioners really think about SPI project success? This study was conducted with 27 Turkish software development organizations to identify and analyse important SPI factors that contribute to the success of SPI projects. The results reveal that professional growth, increased professional recognition, project planning, monitoring of project risks, providing technical support, adoption of current technologies, and strong leadership and commitment are among the highest ranked factors that contribute towards the success of SPI initiatives. The findings of this research provide a foundation for further work in tailoring and improving situation-specific SPI methodologies for software project environments.
Nie, L, Wang, X, Wan, L, Yu, S, Song, H & Jiang, D 2018, 'Network Traffic Prediction Based on Deep Belief Network and Spatiotemporal Compressive Sensing in Wireless Mesh Backbone Networks', Wireless Communications and Mobile Computing, vol. 2018, pp. 1-10.
View/Download from: Publisher's site
View description>>
Wireless mesh networks are prevalent for providing decentralized access for users and other intelligent devices, and can serve as the infrastructure for last-mile connectivity in various network applications, for example, the Internet of Things (IoT) and mobile networks. Wireless mesh backbone networks have attracted extensive attention because of their large capacity and low cost. Network traffic prediction is important for network planning and routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep learning architecture and the Spatiotemporal Compressive Sensing method. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic, which describes its long-range dependence. A prediction model is then built by learning a deep architecture based on the deep belief network from the extracted low-pass component. The remaining high-pass component, which expresses the gusty and irregular fluctuations of network traffic, is predicted with the Spatiotemporal Compressive Sensing method. Combining the predictors of the two components yields a predictor of the overall network traffic. Simulations show that the proposed prediction method outperforms three existing methods.
Niu, T, Wang, J, Lu, H & Du, P 2018, 'Uncertainty modeling for chaotic time series based on optimal multi-input multi-output architecture: Application to offshore wind speed', Energy Conversion and Management, vol. 156, pp. 597-617.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd. Wind energy is attracting more attention with the growing demand for energy. However, the efficient development and utilization of wind energy are restricted by the intermittency and randomness of wind speed. Although abundant investigations concerning wind speed forecasting have been conducted by numerous researchers, most studies merely attach importance to point forecasts, which cannot quantitatively characterize the uncertainties as prediction intervals. In this study, a novel interval prediction architecture has been designed, aiming at constructing effective prediction intervals for a wind speed series, composed of a preprocessing module, a feature selection module, an optimization module, a forecast module and an evaluation module. The feature selection module, in cooperation with the preprocessing module, is developed to determine the optimal model input. Furthermore, the forecast module, optimized by the optimization module, serves as a predictor giving prediction intervals. The experimental results show that the architecture not only outperforms the benchmark models considered, but also has great potential for application to wind power systems.
Oberst, S & Tuttle, S 2018, 'Nonlinear dynamics of thin-walled elastic structures for applications in space', Mechanical Systems and Signal Processing, vol. 110, pp. 469-484.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd. Driven by the need for multi-functionality and increasing demands for low mass and compact stowing, unfolding, self-deploying or self-morphing smart mechanical structures have become popular space engineering designs for flexible appendages. Extensive research has been conducted on the use of tape springs as hinge deployment mechanisms for space booms, solar sails and optical membranes, or directly for use as antennas. However, the vibrational behaviour of tape springs and its related dynamics have rarely been addressed in detail, even though missions are underway with similarly flexible appendages installed. By conducting quasi-static bending tests on a tape spring antenna, we evidence hysteresis behaviour in both the opposite- and equal-sense bending directions. Apart from the well-known snap-through buckling, the structure exhibits torsional buckling in the equal-sense bending direction before collapsing. Micro-vibrational excitation triggers nonlinear jump phenomena and the period-doubling route to chaos. Using a computational tape spring model and simplified environmental loads similar to those encountered in near-Earth orbits, coupling between the first bending and torsional modes generates a dynamic instability which is predicted by a complex eigenvalue analysis step. The current study highlights that high perturbation sensitivity and system-inherent nonlinearities can lead to stability issues. In the course of designing a spacecraft with thin-walled appendages, system-level trade-offs are routinely performed. Since it is unclear how severely the vibrations of flexible appendages might affect their proper functioning or the control of the spacecraft, it is of paramount importance to thoroughly validate thin-walled structures experimentally for their dynamic and stability behaviours.
Oberst, S, Baetz, J, Campbell, G, Lampe, F, Lai, JCS, Hoffmann, N & Morlock, M 2018, 'Vibro-acoustic and nonlinear analysis of cadavric femoral bone impaction in cavity preparations', International Journal of Mechanical Sciences, vol. 144, pp. 739-745.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd. Owing to an ageing population, the impact of unhealthy lifestyles, or simply congenital or gender-specific issues (dysplasia), degenerative bone and joint disease (osteoarthritis) at the hip poses an increasing problem in many countries. Osteoarthritis is painful and causes mobility restrictions; amelioration is often only achieved by replacing the complete hip joint in a total hip arthroplasty (THA). Despite significant orthopaedic progress related to THA, the success of the surgical process relies heavily on the judgement, experience, skills and techniques of the surgeon. One common way of implanting the stem into the femur is press-fitting uncemented stem designs into a prepared cavity. The cavity for the implant is formed by impacting a range of compaction broaches into the femur. However, the surgeon decides whether to change the size of the broach, how hard and fast it is impacted, or when to stop the excavation process merely on the basis of acoustic, haptic or visual cues, which are subjective. It is known that non-ideal cavity preparations increase the risk of peri-prosthetic fractures, especially in elderly people. This study reports on a simulated hip replacement surgery on a cadaver and the analysis of impaction forces and microphone signals during compaction. The recorded transient signals of impaction forces and acoustic pressures (≈ 80 µs–2 ms) are statistically analysed for their trend, which shows increasing heteroscedasticity in the force-pressure relationship between broach sizes. Tikhonov regularisation, as an inverse deconvolution technique, is applied to calculate the acoustic transfer functions from the acoustic responses and their mechanical impacts. The extracted spectra highlight that system characteristics altered during the cavity preparation process: in the high-frequency range, the number of resonances increased with impacts and broach size. By applying nonlinear time series analysis the syste...
Oberst, S, Lai, JCS & Evans, TA 2018, 'Key physical wood properties in termite foraging decisions', Journal of The Royal Society Interface, vol. 15, no. 149, pp. 20180505-20180505.
View/Download from: Publisher's site
View description>>
As eusocial and wood-dwelling insects, termites have been shown to use vibrations to assess their food, to eavesdrop on competitors and predators and to warn nest-mates. Bioassay choice experiments used to determine food preferences in animals often consider single factors only, but foraging decisions can be influenced by multiple factors such as the quantity and quality of the food and the wood as a medium for communication. A statistical analysis framework is developed here to design a single bioassay experiment to study multifactorial foraging choice (Pinus radiata) in the basal Australian termite species Coptotermes (C.) acinaciformis (Isoptera: Rhinotermitidae). By employing a correlation analysis, 17 measured physical properties of 1417 Pinus radiata veneer discs were reduced to five key material properties: density, moisture absorption, early wood content, first resonance frequency and damping. By applying a fuzzy c-means clustering technique, these veneer discs were optimally paired for treatment and control trials to study food preference by termites based on these five key material properties. A multifactorial analysis of variance was compared to a permutation analysis of the results, indicating for the first time that C. acinaciformis takes into account multiple factors when making foraging decisions. C. acinaciformis prefers denser wood with large early wood content, preferably humid and highly damped. Results presented here have practical implications for food choice experiments and for studies concerned with communication in termites as well as their ecology and coevolution with trees as their major food source.
Oberst, S, Niven, RK, Lester, DR, Ord, A, Hobbs, B & Hoffmann, N 2018, 'Detection of unstable periodic orbits in mineralising geological systems', Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 28, no. 8, pp. 085711-085711.
View/Download from: Publisher's site
View description>>
Worldwide, mineral exploration is suffering from rising capital costs, due to the depletion of readily recoverable reserves and the need to discover and assess more inaccessible or geologically complex deposits. For gold exploration, this problem is particularly acute. We propose an innovative approach to mineral exploration and orebody characterisation, based on the analysis of geological core data as a spatial dynamical system, using the mathematical tools of dynamical system analysis. This approach is highly relevant for orogenic gold deposits, which—in contrast to systems formed at chemical equilibrium—exhibit many features of nonlinear dynamical systems, including episodic fluctuations on various length and time scales. Feedback relationships between thermo-chemical and deformation processes produce recurrent fluid temperatures and pressures and the deposition of vein-filling minerals such as pyrite and gold. We therefore relax the typical assumption of chemical equilibrium and analyse the underlying processes as aseismic, non-adiabatic, and inherent to a hydrothermal, nonlinear dynamical open-flow chemical reactor. These processes are approximated using the Gray-Scott model of reaction-diffusion as a complex toy system, which captures some of the features of the underlying mineralisation processes, including the spatiotemporal Turing patterns of unsteady chemical reactions. By use of this analysis, we demonstrate the capability of recurrence plots, recurrence power spectra, and recurrence time probabilities to detect underlying unstable periodic orbits as one sign of deterministic dynamics and their robustness for the analysis of data contaminated by noise. Recurrence plot based quantification is then applied to three mineral concentrations in the core data from the Sunrise Dam gold deposit in the Yilgarn region of Western Australia. Using a moving window, we reveal the episodic recurring low-dimensional dynamic structures and the period d...
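Recurrence plots, used in the abstract above to detect unstable periodic orbits, are built from a binary recurrence matrix. The following minimal sketch (not the authors' pipeline; the sinusoidal test series and threshold `eps` are illustrative) shows the basic construction: a periodic series produces fully recurrent diagonals at lags that are multiples of its period.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j are within eps."""
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances for a scalar series
    return (d < eps).astype(int)

# A periodic series recurs at lags that are multiples of its period (here 8),
# so the diagonal of R at offset 8 is fully populated.
t = np.arange(40)
x = np.sin(2 * np.pi * t / 8)
R = recurrence_matrix(x, eps=0.1)
```

Recurrence quantification measures (diagonal-line lengths, recurrence times) are then statistics computed over `R`; for real, noisy multivariate data one would embed the series first rather than use scalar values directly.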
Oberst, S, Tuttle, SL, Griffin, D, Lambert, A & Boyce, RR 2018, 'Experimental validation of tape springs to be used as thin-walled space structures', Journal of Sound and Vibration, vol. 419, pp. 558-570.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd With the advent of standardised launch geometries and off-the-shelf payloads, space programs utilising nano-satellite platforms are growing worldwide. Thin-walled, flexible and self-deployable structures are commonly used for antennae, instrument booms or solar panels owing to their light weight, ideal packaging characteristics and near-zero energy consumption. However their behaviour in space, in particular in Low Earth Orbits with continually changing environmental conditions, raises many questions. Accurate numerical models, which are often not available due to the difficulty of experimental testing under 1g-conditions, are needed to answer these questions. In this study, we present on-earth experimental validations, as a starting point to study the response of a tape spring as a representative of thin-walled flexible structures under static and vibrational loading. Material parameters of tape springs in a singly (straight, open cylinder) and a doubly curved design are compared to each other by combining finite element calculations with experimental laser vibrometry within a single- and multi-stage model updating approach. While the determination of the Young's modulus is unproblematic, the damping is found to be inversely proportional to deployment length. With updated material properties the buckling instability margin is calculated using different slenderness ratios. Results indicate a high sensitivity of thin-walled structures to minuscule perturbations, which makes proper experimental testing a key requirement for stability prediction of thin elastic space structures. The doubly curved tape spring provides closer agreement with experimental results than a straight tape spring design.
Olvera, C, Berbegal-Mirabent, J & Merigó, JM 2018, 'A Bibliometric Overview of University-Business Collaboration between 1980 and 2016', Computación y Sistemas, vol. 22, no. 4, pp. 1171-1190.
View/Download from: Publisher's site
View description>>
© 2018 Instituto Politecnico Nacional. All rights reserved. Bibliometrics is a research field that analyses bibliographic material from a quantitative point of view. Aiming at providing a comprehensive overview, this study scrutinises the academic literature on university-business collaboration and technology transfer research for the period following the Bayh-Dole Act (1980-2016). The study employs the Web of Science as the main database from which information is collected. Bibliometric indicators such as number of publications, citations, productivity, and the H-index are used to analyse the results. The main findings are displayed in the form of tables and are further discussed. The focus is on the identification of the most relevant journals in this area, the most cited papers, most prolific authors, leading institutions, and countries. The results show that the USA, England, Spain, Italy, and the Netherlands are highly active in this area. Scientific production tends to fall within the research areas of business and economics, engineering or public administration, and is mainly published in journals such as Research Policy, Technovation and the Journal of Technology Transfer.
Orth, D, Thurgood, C & van den Hoven, E 2018, 'Designing objects with meaningful associations', International Journal of Design, vol. 12, no. 2, pp. 91-104.
View description>>
Objects often become cherished for their ties to beliefs, experiences, memories, people, places or values that are significant to their owner. These ties can reflect the ways in which we as humans use objects to characterise, communicate and develop our sense of self. This paper outlines our approach to applying product attachment theory to design practices. We created six artefacts that were inspired by interviews conducted with three individuals who discussed details of their life stories. We then evaluated the associations that came to mind for our participants when interacting with these newly designed artefacts to determine whether these links brought meaning to them. Our findings highlight the potential of design to bring emotional value to products by embodying significant aspects of a person’s self-identity. To do so, designers must consider both the importance and authenticity of the associations formed between an object and an individual.
Paler, A & Devitt, SJ 2018, 'Specification format and a verification method of fault-tolerant quantum circuits', Physical Review A, vol. 98, no. 2, pp. 1-9.
View/Download from: Publisher's site
View description>>
© 2018 American Physical Society. Quantum computations are expressed in general as quantum circuits, which are specified by ordered lists of quantum gates. The resulting specifications are used during the optimization and execution of the expressed computations. However, the specification format makes it difficult to verify that optimized or executed computations still conform to the initial gate list specifications: showing the computational equivalence between two quantum circuits expressed by different lists of quantum gates is exponentially complex in the worst case. In order to solve this issue, this work presents a derivation of the specification format tailored specifically for fault-tolerant quantum circuits. The circuits are considered in a form consisting entirely of single-qubit initializations, cnot gates, and single-qubit measurements (ICM form). This format allows, under certain assumptions, efficient verification of optimized (or implemented) computations. Two verification methods based on checking stabilizer circuit structures are presented.
Peng, S, Zhou, Y, Cao, L, Yu, S, Niu, J & Jia, W 2018, 'Influence analysis in social networks: A survey', Journal of Network and Computer Applications, vol. 106, pp. 17-32.
View/Download from: Publisher's site
View description>>
Complementary to the many practical applications of social networks, influence analysis is an indispensable technique for supporting them. In recent years, this emerging research branch has obtained significant attention from both industry and academia. In this new territory, researchers are facing many unprecedented theoretical and practical challenges. Thus, in this survey, we aim to pave a comprehensive and solid starting ground for interested readers by soliciting the latest work in this area. Firstly, we provide an overview of social networks, including their definition and types. Secondly, we present the current understanding of social influence analysis at different levels, covering its definition, properties, architecture, applications, and diffusion models. Thirdly, we discuss the evaluation metrics for social influence. Fourthly, we summarize the existing evaluation models of social influence in social networks. We further provide an overview of the existing methods for influence maximization. Finally, we discuss the problems of current algorithms and future trends from various perspectives in this field. We hope this work sheds light for forthcoming researchers to further explore the uncharted parts of this promising research field.
Pham, VVH, Yu, S, Sood, K & Cui, L 2018, 'Privacy issues in social networks and analysis: a comprehensive survey', IET Networks, vol. 7, no. 2, pp. 74-84.
View/Download from: Publisher's site
View description>>
Social networks have become part of today's life; however, they also create numerous privacy problems for their users as they reveal sensitive information. Privacy preservation has thus become a major problem in social networks, and there is a considerable body of research on this topic. However, current research achievements on this issue are only ad hoc solutions, and the broad picture of the problem is not clear. This paper surveys a wide range of related research and finds that the topic includes many sub-areas: for example, privacy in publishing social network data for use by third-party consumers, or the privacy of users against the leakage of individuals' information to unexpected people in their social circle. The study also analyses the advantages as well as the limitations of proposed solutions. On that basis, the study outlines key outstanding issues and recommends directions for future research. Moreover, it investigates common privacy metrics and popular privacy-preserving techniques to provide readers with basic tools for resolving open problems in the topic. The ultimate purpose of this research is to pave a solid background for people who are interested in the topic to pursue further research.
Pileggi, SF 2018, 'Looking deeper into academic citations through network analysis: popularity, influence and impact', Universal Access in the Information Society, vol. 17, no. 3, pp. 541-548.
View/Download from: Publisher's site
View description>>
© 2017 Springer-Verlag GmbH Germany Google Scholar (GS) has progressively emerged as a tool which “provides a simple way to broadly search for scholarly literature across many disciplines and sources.” As a free tool that provides citation metrics, GS has opened the academic world to a much larger audience, according to an open information philosophy. GS profiles are largely used not only to have a quick look at authors and their works but, more and more often, as a “de facto” metric to quickly evaluate research impact. This process looks unstoppable, and discussing its fairness, advantages and disadvantages, as well as its social implications, is outside the scope of this paper. We rather prefer to (1) briefly discuss the changes and the innovation that GS has introduced and (2) propose possible improvements for the analysis of academic citations. Our methods are aimed at considering a GS profile in its proper context, providing a social perspective on academic citations: while maintaining a fundamentally quantitative focus, novel approaches based on complex network analysis distinguish between a research impact on the authors’ research network and a more general impact on the scientific community.
Pourreza, P, Saberi, M, Azadeh, A, Chang, E & Hussain, O 2018, 'Health, Safety, Environment and Ergonomic Improvement in Energy Sector Using an Integrated Fuzzy Cognitive Map–Bayesian Network Model', International Journal of Fuzzy Systems, vol. 20, no. 4, pp. 1346-1356.
View/Download from: Publisher's site
View description>>
© 2018, Taiwan Fuzzy Systems Association and Springer-Verlag GmbH Germany, part of Springer Nature. Health, safety, environment and ergonomics (HSEE) are important factors for any organization. In fact, organizations always have to assess their compliance with the required benchmarks for these factors and take proactive actions to improve them if required. In this paper, we propose a fuzzy cognitive map–Bayesian network (BN) model to assist organizations in undertaking this process. The fuzzy cognitive map (FCM) method is used for constructing graphical models of the BN to ascertain the relationships between the inputs and the impact which they will have on the quantified HSEE. The notion of fuzzy logic enables us to work with human experts and their linguistic inputs in the process of opinion solicitation. The noisy-OR method and the EM algorithm are used to ascertain the conditional probabilities between the inputs and to quantify the HSEE value. Using this, we identify the most influential input factor in HSEE quantification, which can then be managed to improve an organization's compliance with HSEE. Finding the same influential input factor in both BN models, one based on the noisy-OR method and one on EM, demonstrates how FCM is useful in constructing a reliable BN model. Leveraging the power of the Bayesian network in modelling HSEE and augmenting it with FCM is the main contribution of this research work, which opens a new line of research in the area of HSE management.
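The noisy-OR method referred to in the abstract above combines the individual causal strengths of a node's parents into one conditional probability: the effect fails only if every active cause independently fails to produce it. A minimal sketch of the standard noisy-OR rule (the link probabilities are illustrative, and this is not the paper's model):

```python
def noisy_or(link_probs, active):
    """Noisy-OR combination for a Bayesian-network node.
    link_probs[i] is the probability that cause i alone triggers the effect;
    active[i] is 1 if cause i is present. The effect occurs unless every
    active cause independently fails."""
    p_fail = 1.0
    for p, x in zip(link_probs, active):
        if x:
            p_fail *= 1.0 - p
    return 1.0 - p_fail
```

For example, two active causes with strengths 0.8 and 0.5 combine to 1 − 0.2 × 0.5 = 0.9; the appeal of noisy-OR is that it needs one parameter per parent rather than a full exponential-size conditional probability table.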
Puthal, D, Obaidat, MS, Nanda, P, Prasad, M, Mohanty, SP & Zomaya, AY 2018, 'Secure and Sustainable Load Balancing of Edge Data Centers in Fog Computing', IEEE Communications Magazine, vol. 56, no. 5, pp. 60-65.
View/Download from: Publisher's site
View description>>
© 1979-2012 IEEE. Fog computing is a recent research trend to bring cloud computing services to network edges. Edge data centers (EDCs) are deployed to decrease latency and network congestion by processing data streams and user requests in near real time. EDC deployment is distributed in nature and positioned between cloud data centers and data sources. Load balancing is the process of redistributing the workload among EDCs to improve both resource utilization and job response time. Load balancing also avoids a situation where some EDCs are heavily loaded while others are idle or doing little data processing. In such scenarios, load balancing between the EDCs plays a vital role in user response and real-time event detection. As the EDCs are deployed in an unattended environment, secure authentication of EDCs is an important issue to address before performing load balancing. This article proposes a novel load balancing technique to authenticate the EDCs and find less loaded EDCs for task allocation. The proposed load balancing technique is more efficient than other existing approaches in finding less loaded EDCs for task allocation. The proposed approach not only improves the efficiency of load balancing; it also strengthens security by authenticating the destination EDCs.
Qin, M, Jin, D, Lei, K, Gabrys, B & Musial-Gabrys, K 2018, 'Adaptive community detection incorporating topology and content in social networks', Knowledge-Based Systems, vol. 161, pp. 342-356.
View/Download from: Publisher's site
View description>>
© 2018 In social network analysis, community detection is a basic step to understand the structure and function of networks. Some conventional community detection methods may have limited performance because they merely focus on the networks’ topological structure. Besides topology, content information is another significant aspect of social networks. Although some state-of-the-art methods have started to combine these two aspects of information to improve community partitioning, they often assume that topology and content carry similar information. In fact, for some social networks, the hidden characteristics of content may unexpectedly mismatch with topology. To better cope with such situations, we introduce a novel community detection method under the framework of non-negative matrix factorization (NMF). Our proposed method integrates both topology and content of networks and has an adaptive parameter (with two variations) to effectively control the contribution of content with respect to the identified mismatch degree. Based on the disjoint community partition result, we also introduce an additional overlapping community discovery algorithm, so that our new method can meet the application requirements of both disjoint and overlapping community detection. The case study using real social networks shows that our new method can simultaneously obtain the community structures and their corresponding semantic description, which is helpful to understand the semantics of communities. Related performance evaluations on both artificial and real networks further indicate that our method outperforms some state-of-the-art methods while exhibiting more robust behavior when a mismatch between topology and content is observed.
Qu, Y, Yu, S, Gao, L, Zhou, W & Peng, S 2018, 'A Hybrid Privacy Protection Scheme in Cyber-Physical Social Networks', IEEE Transactions on Computational Social Systems, vol. 5, no. 3, pp. 773-784.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The rapid proliferation of smart mobile devices has significantly enhanced the popularization of the cyber-physical social network, where users actively publish data with sensitive information. Adversaries can easily obtain these data and launch continuous attacks to breach privacy. However, existing works only focus on either location privacy or identity privacy with a static adversary. This results in privacy leakage and possible further damage. Motivated by this, we propose a hybrid privacy-preserving scheme, which considers both location and identity privacy against a dynamic adversary. We study the privacy protection problem as a tradeoff in which users aim at maximizing data utility with high-level privacy protection while adversaries possess the opposite goal. We first establish a game-based Markov decision process model, in which the user and the adversary are regarded as two players in a dynamic multistage zero-sum game. To acquire the best strategy for users, we employ a modified state-action-reward-state-action reinforcement learning algorithm. Iteration times decrease because of cardinality reduction from n to 2, which accelerates the convergence process. Our extensive experiments on real-world data sets demonstrate the efficiency and feasibility of the proposed method.
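The state-action-reward-state-action (SARSA) algorithm mentioned above is a standard on-policy temporal-difference rule; the paper's modified version is not reproduced here. A minimal tabular sketch, with illustrative states, actions and learning parameters:

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """One on-policy temporal-difference step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) pairs to value estimates."""
    q = Q.get((s, a), 0.0)
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = q + alpha * (target - q)
    return Q[(s, a)]

# Two updates on an empty table (states 0/1 and action 'stay' are invented).
Q = {}
sarsa_update(Q, 0, 'stay', 1.0, 1, 'stay')   # Q[(0,'stay')] = 0.1
sarsa_update(Q, 1, 'stay', 0.0, 0, 'stay')   # bootstraps from Q[(0,'stay')]
```

Unlike off-policy Q-learning, the target uses the action `a_next` actually chosen by the current policy, which is what makes SARSA on-policy.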
Qu, Y, Yu, S, Zhou, W, Peng, S, Wang, G & Xiao, K 2018, 'Privacy of Things: Emerging Challenges and Opportunities in Wireless Internet of Things', IEEE Wireless Communications, vol. 25, no. 6, pp. 91-97.
View/Download from: Publisher's site
View description>>
© 2002-2012 IEEE. The proliferation of wireless devices and appliances is facilitating the rapid development of the Internet of Things (IoT). Numerous state-of-the-art applications are being used in, for example, smart cities, autonomous vehicles, and biocomputing. With the popularization of IoT, new challenges are emerging with respect to privacy issues. In this article, we first summarize privacy constraints and primary attacks based on new features of IoT. Then we present three case studies to demonstrate principal vulnerabilities and classify existing protection schemes. Built on this analysis, we identify three key challenges: a lack of theoretical foundation, the trade-off optimization between privacy and data utility, and system isomerism, over-complexity and high scalability. Finally, we illustrate possible promising future directions and potential solutions to the emerging challenges facing wireless IoT scenarios. We aim to assist interested readers in investigating the unexplored parts of this promising domain.
Qumer Gill, A, Loumish, A, Riyat, I & Han, S 2018, 'DevOps for information management systems', VINE Journal of Information and Knowledge Management Systems, vol. 48, no. 1, pp. 122-139.
View/Download from: Publisher's site
View description>>
Purpose: Development and operations (DevOps) is complex in nature. Organizations are unsure how to effectively establish a DevOps capability for the continuous delivery of information management systems. This paper aims to compile and analyze DevOps by applying the well-known systematic literature review (SLR) approach. This review is intended to provide a knowledge base to support the informed, effective and less risky adoption of DevOps for information management systems. Design/methodology/approach: In this qualitative research study, the SLR method was applied to identify 3,790 papers, of which 32 relevant papers were selected and reviewed. Findings: The results are organized using the well-known ISO/IEC 24744 metamodel elements: people (roles), process, technology and artifacts. In total, 11 major roles, 6 processes, 23 technologies, 5 artifacts and 7 challenges (including 6 corresponding solutions) were found. DevOps engineer is a newly identified role. Continuous delivery pipeline and continuous improvement are the most highlighted major DevOps processes. Build system technology is becoming the key focus of DevOps. Finally, major challenges are around people and culture and the misunderstanding of DevOps. Potential research areas are: DevOps analytics, artifacts and tool-chain integration. Research limitations/implications: The research findings will serve as a resource for both practitioners and researchers who have an interest in the research and adoption of DevOps for information management systems.
Applied Soft Computing, vol. 66, pp. 292-296.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. The main purpose of this letter is to draw attention to a recent concept, namely the Concept of Stratification (CST), developed by Zadeh [1]. CST describes a system that transitions through a number of states in order to arrive at a desired state. CST is a problem-solving approach that is simple yet effective. Therefore, CST seems very likely to emerge in coming years as a major area of interest in fields such as soft computing, Artificial Intelligence (AI), robotics, Natural Language Processing (NLP), and big data. In this expository letter, the advantages and the main shortcoming of CST are reviewed. The concept is explained, and areas in which it is likely to be applied are discussed. Considering the generality of the original CST proposed by Zadeh, it is possible to consider different versions of CST to be applied in future studies. Hence, versions of CST including fuzzy CST, a 3DCST, and multiple systems and multiple CSTs are presented. This work is a first step towards a vast range of applications of CST. Researchers, especially those applying soft computing tools such as fuzzy sets theory and granulation, are encouraged to examine the capability of CST in addressing significant real-world problems.
Saberi, M, Theobald, M, Hussain, OK, Chang, E & Hussain, FK 2018, 'Interactive feature selection for efficient customer recognition in contact centers: Dealing with common names', Expert Systems with Applications, vol. 113, pp. 356-376.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd We propose an interactive decision-making framework to assist a Customer Service Representative (CSR) in the efficient and effective recognition of customer records in a database with many ambiguous entries. Our proposed framework consists of three integrated modules. The first module focuses on the detection and resolution of duplicate records to improve effectiveness and efficiency in customer recognition. The second module determines the level of ambiguity in recognizing an individual customer when there are multiple records with the same name. The third module recommends the series of feature-related questions that the CSR should ask the customer to enable rapid recognition, based on that level of ambiguity. In the first module, the F-Swoosh approach for duplicate detection is used, and in the second module a dynamic programming-based technique is used to determine the level of ambiguity within the customer database for a given name. In the third module, Levenshtein edit distance is used for feature selection in combination with weights based on the Inverse Document Frequency (IDF) of terms. The algorithm that requires the fewest questions to be put to the customer to achieve recognition is chosen. We evaluate the proposed framework on a synthetic dataset and demonstrate how it assists the CSR to rapidly recognize the correct customer.
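The combination of Levenshtein edit distance and IDF-based weighting described above can be sketched as follows. This is only an illustrative approximation of the idea, not the paper's algorithm: the record fields, the average-IDF scoring rule and the helper names are hypothetical.

```python
import math

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (Wagner-Fischer)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def most_discriminative_feature(candidates):
    """Hypothetical scoring rule: prefer the feature whose values have the
    highest average IDF across the ambiguous candidate records."""
    def score(feature):
        values = [c[feature] for c in candidates]
        n = len(values)
        return sum(math.log(n / values.count(v)) for v in values) / n
    return max(candidates[0], key=score)

# Three records sharing a common name (all field values are invented):
# asking about the suburb narrows the search fastest.
candidates = [
    {"name": "John Smith", "suburb": "Newtown", "plan": "basic"},
    {"name": "John Smith", "suburb": "Glebe",   "plan": "basic"},
    {"name": "John Smith", "suburb": "Newtown", "plan": "basic"},
]
best = most_discriminative_feature(candidates)
```

The edit distance would then let the CSR match a customer's spoken or misspelled answer ("Newton") against the stored value tolerantly rather than exactly.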
Shahsavari, M, Golpayegani, AH, Saberi, M & Hussain, FK 2018, 'Recruiting the K-most influential prospective workers for crowdsourcing platforms', Service Oriented Computing and Applications, vol. 12, no. 3-4, pp. 247-257.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag London Ltd., part of Springer Nature. Viral marketing is widely used by businesses to achieve their marketing objectives using social media. In this work, we propose a customized crowdsourcing approach for viral marketing which aims at efficient marketing based on information propagation through a social network. We term this approach the social community-based crowdsourcing platform and integrate it with an information diffusion model to find the most efficient crowd workers. We propose an intelligent viral marketing framework (IVMF) comprising two modules to achieve this end. The first module identifies the K-most influential users in a given social network for the platform using a novel linear threshold diffusion model. The proposed model considers the different propagation behaviors of the network users in relation to different contexts. Being able to consider multiple topics in the information propagation model, as opposed to only one topic, makes our model more applicable to a diverse population base. Additionally, the proposed content-based improved greedy (CBIG) algorithm enhances the basic greedy algorithm by decreasing the total amount of computation required in the greedy algorithm (the total influence propagation of a unique node in any step of the greedy algorithm). The highest computational cost of the basic greedy algorithm is incurred in computing the total influence propagation of each node. The results of the experiments reveal that the number of iterations in our CBIG algorithm is much less than in the basic greedy algorithm, while the precision in choosing the K influential nodes in a social network is close to that of the greedy algorithm. The second module of the IVMF framework, the multi-objective integer optimization model, is used to determine which social network should be targeted for viral marketing, taking into account the marketing budget. The overall IVMF framework can be used to select a social network and rec...
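The linear threshold diffusion and basic greedy seed selection discussed in the abstract above can be sketched as below. This is a textbook-style illustration, not the paper's CBIG algorithm; the graph, thresholds and function names are invented.

```python
def spread(graph, thresholds, seeds):
    """Deterministic linear-threshold diffusion: a node activates once the
    fraction of its active in-neighbours reaches its threshold.
    graph maps each node to its list of in-neighbours."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node not in active and neighbours:
                frac = sum(n in active for n in neighbours) / len(neighbours)
                if frac >= thresholds[node]:
                    active.add(node)
                    changed = True
    return active

def greedy_seeds(graph, thresholds, k):
    """Basic greedy: repeatedly add the node with the largest marginal spread."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: len(spread(graph, thresholds, seeds | {n})))
        seeds.add(best)
    return seeds

# Toy network: "a" feeds "b" and "c", which together feed "d",
# so "a" is the best single seed.
graph = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
thresholds = {"a": 1.0, "b": 0.5, "c": 0.5, "d": 0.5}
```

The cost the abstract refers to is visible here: each greedy step re-evaluates `spread` for every remaining node, which is exactly the computation that improved greedy variants try to prune.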
Shen, S, Huang, L, Zhou, H, Yu, S, Fan, E & Cao, Q 2018, 'Multistage Signaling Game-Based Optimal Detection Strategies for Suppressing Malware Diffusion in Fog-Cloud-Based IoT Networks', IEEE Internet of Things Journal, vol. 5, no. 2, pp. 1043-1054.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. We consider the Internet of Things (IoT) with malware diffusion and seek optimal malware detection strategies for preserving the privacy of smart objects in IoT networks and suppressing malware diffusion. To this end, we propose a malware detection infrastructure realized by an intrusion detection system (IDS) with cloud and fog computing to overcome the IDS deployment problem in smart objects due to their limited resources and heterogeneous subnetworks. We then employ a signaling game to disclose interactions between smart objects and the corresponding fog node because of malware uncertainty in smart objects. To minimize privacy leakage of smart objects, we also develop optimal strategies that maximize malware detection probability by theoretically computing the perfect Bayesian equilibrium of the game. Moreover, we analyze the factors influencing the optimal probability of a malicious smart object diffusing malware, and factors influencing the performance of a fog node in determining an infected smart object. Finally, we present a framework to demonstrate a potential and practical application of suppressing malware diffusion in IoT networks.
Singh, AK, Chen, H-T, Cheng, Y-F, King, J-T, Ko, L-W, Gramann, K & Lin, C-T 2018, 'Visual Appearance Modulates Prediction Error in Virtual Reality', IEEE Access, vol. 6, pp. 24617-24624.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Different rendering styles induce different levels of agency and user behaviors in virtual reality environments. We applied an electroencephalogram-based approach to investigate how the rendering style of the users' hands affects behavioral and cognitive responses. To this end, we introduced prediction errors due to cognitive conflicts during a 3-D object selection task by manipulating the selection distance of the target object. The results showed that, for participants with high behavioral inhibition scores, the amplitude of the negative event-related potential at approximately 50-250 ms correlated with the realism of the virtual hands. Concurring with the uncanny valley theory, these findings suggest that the more realistic the representation of the user's hand is, the more sensitive the user becomes toward subtle errors, such as tracking inaccuracies.
Song, J, Wang, J & Lu, H 2018, 'A novel combined model based on advanced optimization algorithm for short-term wind speed forecasting', Applied Energy, vol. 215, pp. 643-658.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Short-term wind speed forecasting has a significant influence on enhancing the operation efficiency and increasing the economic benefits of wind power generation systems. A substantial number of wind speed forecasting models aimed at improving forecasting performance have been proposed. However, some conventional forecasting models do not consider the necessity and importance of data preprocessing. Moreover, they neglect the limitations of individual forecasting models, leading to poor forecasting accuracy. In this study, a novel model combining a data preprocessing technique, forecasting algorithms, an advanced optimization algorithm, and no negative constraint theory is developed. This combined model successfully overcomes some limitations of the individual forecasting models and effectively improves the forecasting accuracy. To estimate the effectiveness of the proposed combined model, 10-min wind speed data from the wind farm in Peng Lai, China are used as case studies. The experimental results demonstrate that the developed combined model is superior to all the other conventional models. Furthermore, it can be used as an effective technique for smart grid planning.
Stender, M, Tiedemann, M, Hoffmann, N & Oberst, S 2018, 'Impact of an irregular friction formulation on dynamics of a minimal model for brake squeal', Mechanical Systems and Signal Processing, vol. 107, pp. 439-451.
View/Download from: Publisher's site
View description>>
Friction-induced vibrations are of major concern in the design of reliable, efficient and comfortable technical systems. Well-known examples of systems susceptible to self-excitation can be found in fluid-structure interaction, disk brake squeal, rotor dynamics, hip implant noise and many more. While damping elements and amplitude reduction are well understood in linear systems, nonlinear systems and especially self-excited dynamics still constitute a challenge for damping element design. Additionally, complex dynamical systems exhibit deterministic chaotic cores, which add severe sensitivity to initial conditions to the system response. The complex friction interface dynamics in particular remain a challenging task for measurements and modeling. Today, mostly simple and regular friction models are investigated in the field of self-excited brake system vibrations. This work aims at investigating the effect of high-frequency irregular interface dynamics on the nonlinear dynamical response of a self-excited structure. Special focus is put on the characterization of the system response time series.
A low-dimensional minimal model is studied which features self-excitation, gyroscopic effects and friction-induced damping. Additionally, the employed friction formulation exhibits temperature as an inner variable and superposed chaotic fluctuations governed by a Lorenz attractor. The time scale of the irregular fluctuations is chosen to be one order of magnitude smaller than that of the overall system dynamics. The influence of those fluctuations on the structural response is studied in various ways, i.e. in the time domain and by means of recurrence analysis. The separate time scales are studied in detail and regimes of dynamic interactions are identified. The results of the irregular friction formulation indicate dynamic interactions on multiple time scales, which trigger larger vibration amplitudes compared to the regular friction formulations conventionally studied in the field of friction-induced vibrations.
Sun, F, Hou, F, Zhou, H, Liu, B, Chen, J & Gui, L 2018, 'Equilibriums in the Mobile-Virtual-Network-Operator-Oriented Data Offloading', IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1622-1634.
View/Download from: Publisher's site
Trianni, A, Merigó, JM & Bertoldi, P 2018, 'Ten years of Energy Efficiency: a bibliometric analysis', Energy Efficiency, vol. 11, no. 8, pp. 1917-1939.
View/Download from: Publisher's site
View description>>
© 2018, Springer Nature B.V. Energy Efficiency is an international journal dedicated to research topics connected to energy, with a focus on end-use efficiency issues. In 2018, the journal celebrates its 10th anniversary. To mark the occasion and analyze not only how the journal has performed over the years but also what the trends for academic debate and research in the journal are, this article presents a bibliometric overview of the publication and citation structure of the journal during the period 2008–2017. The study relies on the Web of Science Core Collection and the Scopus database to collect the bibliographic results. Additionally, the work exploits the visualization of similarities (VOS) viewer software to map the bibliographic material graphically. The research analyses the most cited papers and the most popular keywords. Moreover, the paper studies how the journal connects with other international journals and identifies the most productive authors, institutions, and countries. The results indicate that the journal has grown rapidly over the years and obtained a merited position in the scientific community, with contributions from authors all over the world (with Europe as the most productive region). Moreover, the journal has so far focused mainly on energy efficiency issues in close relationship with policies and incentives, corporate energy efficiency, consumer behavior, and demand-side management programs, with the industrial, building, and transport sectors all widely involved. Our discussion concludes with suggested future research avenues, in particular towards coordinated efforts from different disciplines (technical, economic, and sociopsychological) to address the emerging energy efficiency challenges.
Tsai, W-C & van den Hoven, E 2018, 'Memory Probes: Exploring Retrospective User Experience Through Traces of Use on Cherished Objects', INTERNATIONAL JOURNAL OF DESIGN, vol. 12, no. 3, pp. 57-72.
Tur-Porcar, A, Mas-Tur, A, Merigó, JM, Roig-Tierno, N & Watt, J 2018, 'A Bibliometric History of the Journal of Psychology Between 1936 and 2015', The Journal of Psychology, vol. 152, no. 4, pp. 199-225.
View/Download from: Publisher's site
Valenzuela-Fernández, L, Merigó, JM & Nicolas, C 2018, 'The most influential countries in market orientation', International Journal of Engineering Business Management, vol. 10, article 184797901775148.
View/Download from: Publisher's site
View description>>
© The Author(s) 2018. The purpose of this article is to analyze the most productive and influential countries engaging in market orientation (MO) research between 1990 and 2016. This article shows the general trajectories of these countries, the relationships among them, and their research in the area of MO by analyzing results on citations and publications. The article applies bibliometric techniques to information available in the Web of Science. The results show that the 10 leading countries produce more than 70% of total publications, with the United States leading in all indicators, followed by the United Kingdom and China. Furthermore, although there has been a steady increase in the overall number of publications, this trend is not shared evenly among different nations.
Valenzuela-Fernández, L, Nicolas, C & Merigo, JM 2018, 'Overview of the leading countries in marketing research between 1990 and 2014', American Journal of Business, vol. 33, no. 4, pp. 134-156.
View/Download from: Publisher's site
View description>>
Purpose – The purpose of this paper is to present a general overview of the most influential countries according to their scientific contributions in marketing for the 1990–2014 period. In this bibliometric-based research, the authors generate a ranking of the 50 most influential nations according to the H-index and citations per paper, co-authorship, citation analysis and bibliographic coupling. The study provides a map that identifies the networks of researchers between countries. Design/methodology/approach – The method used is bibliometric analysis. The relevant research in marketing was extracted from the Web of Science Core Collection for the 1990–2014 period; 29,947 published articles in 50 countries were obtained. The investigation used the H-index as the first criterion in creating the country ranking, the number of articles (TP) as a proxy for the productivity of each country, the average citations per article (C/P), and the number of citations (TC) to express the influence of a country's articles. In addition, the study adopts the VOSviewer software to identify the collaboration networks of researchers between countries and the links between countries. Findings – The results reveal that, at a general level, 54 percent of countries have an H-index greater than 20. In turn, there is a steady increase in the number of publications over the five-year periods. The first ten countries account for over 80 percent of all publications in the sample. The USA leads in all indicators, and the results highlight the increasingly important role of China. Research limitations/implications
Vijayan, MK, Chitambar, E & Hsieh, M-H 2018, 'One-shot assisted concentration of coherence', Journal of Physics A: Mathematical and Theoretical, vol. 51, no. 41, p. 414001.
View/Download from: Publisher's site
View description>>
We find one-shot bounds for concentration of maximally coherent states in the so-called assisted scenario. In this setting, Bob is restricted to performing incoherent operations on his quantum system; however, he is assisted by Alice, who holds a purification of Bob's state and can send classical data to him. We further show that in the asymptotic limit our one-shot bounds recover the previously computed rate of asymptotic assisted concentration.
Voinov, A, Jenni, K, Gray, S, Kolagani, N, Glynn, PD, Bommel, P, Prell, C, Zellner, M, Paolisso, M, Jordan, R, Sterling, E, Schmitt Olabisi, L, Giabbanelli, PJ, Sun, Z, Le Page, C, Elsawah, S, BenDor, TK, Hubacek, K, Laursen, BK, Jetter, A, Basco-Carrera, L, Singer, A, Young, L, Brunacini, J & Smajgl, A 2018, 'Tools and methods in participatory modeling: Selecting the right tool for the job', Environmental Modelling & Software, vol. 109, pp. 232-255.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Various tools and methods are used in participatory modelling, at different stages of the process and for different purposes. The diversity of tools and methods can create challenges for stakeholders and modelers when selecting the ones most appropriate for their projects. We offer a systematic overview, assessment, and categorization of methods to assist modelers and stakeholders with their choices and decisions. Most available literature provides little justification or information on the reasons for the use of particular methods or tools in a given study. In most of the cases, it seems that the prior experience and skills of the modelers had a dominant effect on the selection of the methods used. While we have not found any real evidence of this approach being wrong, we do think that putting more thought into the method selection process and choosing the most appropriate method for the project can produce better results. Based on expert opinion and a survey of modelers engaged in participatory processes, we offer practical guidelines to improve decisions about method selection at different stages of the participatory modeling process.
Voinov, AA, Çöltekin, A, Chen, M & Beydoun, G 2018, 'Virtual geographic environments in socio-environmental modeling: a fancy distraction or a key to communication?', Int. J. Digit. Earth, vol. 11, no. 4, pp. 408-419.
View/Download from: Publisher's site
View description>>
© 2017 Informa UK Limited, trading as Taylor & Francis Group. Modeling and simulation are recognized as effective tools for management and decision support across various disciplines; however, poor communication of results to the end users is a major obstacle for properly using and understanding model output. Visualizations can play an essential role in making modeling results accessible for management and decision-making. Virtual reality (VR) and virtual geographic environments (VGEs) are popular and potentially very rewarding ways to visualize socio-environmental models. However, there is a fundamental conflict between abstraction and realism: models are goal-driven, and created to simplify reality and to focus on certain crucial aspects of the system; VR, in the meanwhile, by definition, attempts to replicate reality as closely as possible. This elevated realism may add to the complexity curse in modeling, and the message might be diluted by too many (background) details. This is also connected to information overload and cognitive load. Moreover, modeling is always associated with the treatment of uncertainty–something difficult to present in VR. In this paper, we examine the use of VR and, specifically, VGEs in socio-environmental modeling, and discuss how VGEs and simulation modeling can be married in a mutually beneficial way that makes VGEs more effective for users, while enhancing simulation models.
Wakefield, J, Frawley, JK, Tyler, J & Dyson, LE 2018, 'The impact of an iPad-supported annotation and sharing technology on university students' learning', Computers & Education, vol. 122, pp. 243-259.
View/Download from: Publisher's site
Wan, Y, Chen, L, Xu, G, Zhao, Z, Tang, J & Wu, J 2018, 'SCSMiner: mining social coding sites for software developer recommendation with relevance propagation', World Wide Web, vol. 21, no. 6, pp. 1523-1543.
View/Download from: Publisher's site
View description>>
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. With the advent of social coding sites, software development has entered a new era of collaborative work. Social coding sites (e.g., GitHub) integrate social networking and distributed version control in a unified platform to facilitate collaborative development across the world. One unique characteristic of such sites is that the past development experiences of developers provided on the sites convey implicit metrics of a developer's programming capability and expertise, which can be applied in many areas, such as software developer recruitment for IT corporations. Motivated by this intuition, we aim to develop a framework to effectively locate developers with the right coding skills. To achieve this goal, we devise a generative probabilistic expert ranking model upon which consistency among projects is incorporated as graph regularization to enhance the expert ranking, and a relevance-propagation perspective is introduced to illustrate it. For evaluation, StackOverflow is leveraged to complement the ground truth of experts. Finally, a prototype system, SCSMiner, which provides an expert search service based on a real-world dataset crawled from GitHub, is implemented and demonstrated.
Wan, Y, Xu, G, Chen, L, Zhao, Z & Wu, J 2018, 'Exploiting cross-source knowledge for warming up community question answering services', Neurocomputing, vol. 320, pp. 25-34.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Community Question Answering (CQA) services such as Yahoo! Answers, Quora and StackOverflow are collaborative platforms where users can share and exchange their knowledge explicitly by asking and answering questions. One essential task in CQA is learning topical expertise of users, which may benefit many applications such as question routing and best answers identification. One limitation of existing related works is that they only consider the warm-start users who have posted many questions or answers, while ignoring cold-start users who have few posts. In this paper, we aim to exploit knowledge from cross sources such as GitHub and StackOverflow to build up the richer views of expertise for better CQA. Inspired by the idea of Bayesian co-training, we propose a topical expertise model from the perspective of multi-view learning. Specifically, we incorporate the consistency existing among multiple views into a unified probabilistic graphic model. Comprehensive experiments on two real-world datasets demonstrate the performance of our proposed model with the comparison of some state-of-the-art ones.
Wang, D, Deng, S & Xu, G 2018, 'Sequence-based context-aware music recommendation', Information Retrieval Journal, vol. 21, no. 2-3, pp. 230-252.
View/Download from: Publisher's site
View description>>
© 2017, Springer Science+Business Media, LLC. Contextual factors greatly affect users’ preferences for music, so they can benefit music recommendation and music retrieval. However, how to acquire and utilize the contextual information is still facing challenges. This paper proposes a novel approach for context-aware music recommendation, which infers users’ preferences for music, and then recommends music pieces that fit their real-time requirements. Specifically, the proposed approach first learns the low dimensional representations of music pieces from users’ music listening sequences using neural network models. Based on the learned representations, it then infers and models users’ general and contextual preferences for music from users’ historical listening records. Finally, music pieces in accordance with user’s preferences are recommended to the target user. Extensive experiments are conducted on real world datasets to compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse data.
Wang, D, Deng, S, Zhang, X & Xu, G 2018, 'Learning to embed music and metadata for context-aware music recommendation', World Wide Web, vol. 21, no. 5, pp. 1399-1423.
View/Download from: Publisher's site
View description>>
© 2017, Springer Science+Business Media, LLC, part of Springer Nature. Contextual factors greatly influence users' musical preferences, so they are remarkably beneficial to music recommendation and retrieval tasks. However, how to obtain and utilize the contextual information still needs to be studied. In this paper, we propose a context-aware music recommendation approach that can recommend music pieces appropriate for users' contextual preferences. In analogy to matrix factorization methods for collaborative filtering, the proposed approach does not require music pieces to be represented by features in advance; instead, it learns the representations from users' historical listening records. Specifically, the proposed approach first learns music pieces' embeddings (feature vectors in a low-dimensional continuous space) from music listening records and the corresponding metadata. Then it infers and models users' global and contextual preferences for music from their listening records with the learned embeddings. Finally, it recommends appropriate music pieces according to the target user's preferences to satisfy her/his real-time requirements. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods in terms of precision, recall, F1 score, and hit rate. In particular, our approach performs better on sparse datasets.
Wang, J, Du, P, Lu, H, Yang, W & Niu, T 2018, 'An improved grey model optimized by multi-objective ant lion optimization algorithm for annual electricity consumption forecasting', Applied Soft Computing, vol. 72, pp. 321-337.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. Accurate and stable annual electricity consumption forecasting plays a vital role in modern social and economic development by providing effective planning and guaranteeing a reliable supply of sustainable electricity. However, establishing a robust method that simultaneously improves the prediction accuracy and stability of electricity consumption forecasting has proven to be a highly challenging task. Most previous research pays attention only to enhancing prediction accuracy and ignores the significance of forecasting stability, despite its importance to the effectiveness of forecasting models. Considering the characteristics of annual power consumption data, and that a single criterion (accuracy or stability) is insufficient, this study develops a novel hybrid forecasting model based on an improved grey forecasting model optimized by a multi-objective ant lion optimization algorithm, which can not only dynamically choose the best input training sets but also obtain satisfactory forecasting results with high accuracy and strong stability. Case studies of annual power consumption datasets from several regions in China are utilized as illustrative examples to estimate the effectiveness and efficiency of the proposed hybrid forecasting model. Finally, the experimental results indicate that the proposed forecasting model is superior to the comparison models.
Wang, J, Li, H & Lu, H 2018, 'Application of a novel early warning system based on fuzzy time series in urban air quality forecasting in China', Applied Soft Computing, vol. 71, pp. 783-799.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier B.V. With atmospheric environmental pollution becoming increasingly serious, developing an early warning system for air quality forecasting is vital to monitoring and controlling air quality. However, considering the large fluctuations in the concentration of pollutants, most previous studies have focused on enhancing accuracy, while few have addressed the stability and uncertainty analysis, which may lead to insufficient results. Therefore, a novel early warning system based on fuzzy time series was successfully developed that includes three modules: deterministic prediction module, uncertainty analysis module, and assessment module. In this system, a hybrid model combining the fuzzy time series forecasting technique and data reprocessing approaches was constructed to forecast the major air pollutants. Moreover, an uncertainty analysis was generated to further analyze and explore the uncertainties involved in future air quality forecasting. Finally, an assessment module proved the effectiveness of the developed model. The experimental results reveal that the proposed model outperforms the comparison models and baselines, and both the accuracy and the stability of the developed system are remarkable. Therefore, fuzzy logic is a better option in air quality forecasting and the developed system will be a useful tool for analyzing and monitoring air pollution.
Wang, J, Niu, T, Lu, H, Guo, Z, Yang, W & Du, P 2018, 'An analysis-forecast system for uncertainty modeling of wind speed: A case study of large-scale wind farms', Applied Energy, vol. 211, pp. 492-512.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd The uncertainty analysis and modeling of wind speed, which has an essential influence on wind power systems, is consistently considered a challenging task. However, most investigations thus far have focused mainly on point forecasts, which in reality cannot facilitate quantitative characterization of the endogenous uncertainty involved. An analysis-forecast system that includes an analysis module and a forecast module and can provide appropriate scenarios for the dispatching and scheduling of a power system is devised in this study; this system is superior to those presented in previous studies. In order to qualitatively and quantitatively investigate the uncertainty of wind speed, recurrence analysis techniques are effectively developed for application in the analysis module. Furthermore, in order to quantify the uncertainty accurately, a novel architecture aimed at uncertainty mining is devised for the forecast module, where a non-parametric model optimized by an improved multi-objective water cycle algorithm is considered a predictor for producing intervals for each mode component after feature selection. The results of extensive in-depth experiments show that the devised system is not only superior to the considered benchmark models, but also has good potential for practical applications in wind power systems.
Wang, T, Lu, J & Zhang, G 2018, 'Two-Stage Fuzzy Multiple Kernel Learning Based on Hilbert–Schmidt Independence Criterion', IEEE Transactions on Fuzzy Systems, vol. 26, no. 6, pp. 3703-3714.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Multiple kernel learning (MKL) is a principled approach to kernel combination and selection for a variety of learning tasks, such as classification, clustering, and dimensionality reduction. In this paper, we develop a novel fuzzy multiple kernel learning model based on the Hilbert-Schmidt independence criterion (HSIC) for classification, which we call HSIC-FMKL. In this model, we first propose an HSIC Lasso-based MKL formulation, which not only has a clear statistical interpretation that minimum redundant kernels with maximum dependence on output labels are found and combined, but also enables the global optimal solution to be computed efficiently by solving a Lasso optimization problem. Since the traditional support vector machine (SVM) is sensitive to outliers or noises in the dataset, fuzzy SVM (FSVM) is used to select the prediction hypothesis once the optimal kernel has been obtained. The main advantage of FSVM is that we can associate a fuzzy membership with each data point such that these data points can have different effects on the training of the learning machine. We propose a new fuzzy membership function using a heuristic strategy based on the HSIC. The proposed HSIC-FMKL is a two-stage kernel learning approach and the HSIC is applied in both stages. We perform extensive experiments on real-world datasets from the UCI benchmark repository and the application domain of computational biology which validate the superiority of the proposed model in terms of prediction accuracy.
Wang, W, Laengle, S, Merigó, JM, Yu, D, Herrera-Viedma, E, Cobo, MJ & Bouchon-Meunier, B 2018, 'A Bibliometric Analysis of the First Twenty-Five Years of the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems', International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 26, no. 02, pp. 169-193.
View/Download from: Publisher's site
View description>>
Since the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems published its first issue in 1993, it has made important contributions to the research field of computer science. In this study, based on the dataset of the publications published in this journal between 1993 and 2016 retrieved from Web of Science, a general overview of this journal is performed using bibliometric methods and visualized networks. First, the productive and influential publications, authors, institutions, countries/territories, and supraregions are analysed based on the total number of citations, publications, and different citation thresholds. Second, network visualization analysis is applied to illustrate the links and connections between terms by using the VOSviewer software. Moreover, the most cited journals and common author keywords of three continents, including North America, Europe, and Asia, are also presented. This paper will hopefully help researchers understand the research patterns of this journal.
Wang, X, Huang, C, Yao, L, Benatallah, B & Dong, M 2018, 'A Survey on Expert Recommendation in Community Question Answering', Journal of Computer Science and Technology, vol. 33, no. 4, pp. 625-653.
View/Download from: Publisher's site
Wang, X, Xu, C, Zhao, G & Yu, S 2018, 'Tuna: An Efficient and Practical Scheme for Wireless Access Point in 5G Networks Virtualization', IEEE Communications Letters, vol. 22, no. 4, pp. 748-751.
View/Download from: Publisher's site
View description>>
© 1997-2012 IEEE. Recently, network function virtualization (NFV) has been widely used in 5G innovation. However, with the implementation of NFV, virtualized wireless access points have suffered a significant performance degradation. In this letter, we propose an efficient packet processing scheme (Tuna) to improve the performance of wireless network virtualization. Specifically, we place management frames in user space for virtualization, and control and data frames in kernel space to reduce packet processing delay. Moreover, hostapd and network address translation are modified to accelerate packet processing. We implemented a prototype of the proposed scheme, and the experimental results demonstrate that Tuna can improve both delay and throughput dramatically.
Wang, X, Zhang, Q, Ren, J, Xu, S, Wang, S & Yu, S 2018, 'Toward efficient parallel routing optimization for large-scale SDN networks using GPGPU', Journal of Network and Computer Applications, vol. 113, pp. 1-13.
View/Download from: Publisher's site
View description>>
Routing optimization is an efficient way to improve network performance and guarantee the QoS requirements of users. However, with the rapid growth of network size and traffic demands, the routing optimization of SDN networks with centralized control plane is facing the scalability issue. To overcome the scalability issue, we aim to speed up the routing optimization process in large networks by utilizing the massive parallel computation capability of GPU. In this paper, we develop an efficient Lagrangian Relaxation based Parallel Routing Optimization Algorithm (LR-PROA). LR-PROA first decomposes the routing optimization problem into a set of path calculation problems for the traffic demands by relaxing the link capacity constraints, then the path calculation tasks are dispatched to GPU and executed concurrently on GPU. In order to achieve high degree of parallelism, LR-PROA also parallelizes the path calculation process for each traffic demand. Furthermore, to improve the convergence speed, LR-PROA uses efficient methods to adjust the calculated paths for a part of traffic demands and set the step size of subgradient algorithm for solving the Lagrangian dual problem in each iteration. Our evaluations on synthetic network topologies verify that LR-PROA has good optimization performance as well as superior calculation time efficiency. In our simulations, LR-PROA is up to tens of times faster than benchmark algorithms in large networks.
Wang, Y, He, Q, Ye, D & Yang, Y 2018, 'Formulating Criticality-Based Cost-Effective Fault Tolerance Strategies for Multi-Tenant Service-Based Systems', IEEE Transactions on Software Engineering, vol. 44, no. 3, pp. 291-307.
View/Download from: Publisher's site
Wang, Y-K, Jung, T-P & Lin, C-T 2018, 'Theta and Alpha Oscillations in Attentional Interaction during Distracted Driving', Frontiers in Behavioral Neuroscience, vol. 12, pp. 3-3.
View/Download from: Publisher's site
View description>>
© 2018 Wang, Jung and Lin. Performing multiple tasks simultaneously usually degrades behavioral performance compared with executing a single task. Moreover, processing multiple tasks simultaneously often involves greater cognitive demands. Two visual tasks, a lane-keeping task and mental calculation, were utilized to assess brain dynamics through 32-channel electroencephalogram (EEG) recordings from 14 participants. A 400-ms stimulus onset asynchrony (SOA) factor was used to induce distinct levels of attentional requirements. In the dual-task conditions, the deteriorated behavior reflected the divided attention and the overlapping brain resources used. The frontal, parietal and occipital components were decomposed by an independent component analysis (ICA) algorithm. The event- and response-related theta and alpha oscillations in selected brain regions were investigated first. The increased theta oscillation in the frontal component and the decreased alpha oscillations in the parietal and occipital components reflect the cognitive demands and attentional requirements of executing the designed tasks. Furthermore, time-varying interactive over-additive (O-Add), additive (Add) and under-additive (U-Add) activations were explored and summarized by comparing the summed spectral perturbations of the two single-task conditions with the spectral perturbations in the dual task. Add and U-Add activations were observed while executing the dual tasks, and U-Add theta and alpha activations dominated the posterior region in dual-task situations. Our results show that both deteriorated behaviors and interactive brain activations should be comprehensively considered to evaluate workload or attentional interaction precisely.
Wei, C-S, Lin, Y-P, Wang, Y-T, Lin, C-T & Jung, T-P 2018, 'A subject-transfer framework for obviating inter- and intra-subject variability in EEG-based drowsiness detection', NeuroImage, vol. 174, pp. 407-419.
View/Download from: Publisher's site
View description>>
© 2018 Inter- and intra-subject variability pose a major challenge to decoding human brain activity in brain-computer interfaces (BCIs) based on non-invasive electroencephalogram (EEG). Conventionally, a time-consuming and laborious training procedure is performed on each new user to collect sufficient individualized data, hindering the applications of BCIs on monitoring brain states (e.g. drowsiness) in real-world settings. This study proposes applying hierarchical clustering to assess the inter- and intra-subject variability within a large-scale dataset of EEG collected in a simulated driving task, and validates the feasibility of transferring EEG-based drowsiness-detection models across subjects. A subject-transfer framework is thus developed for detecting drowsiness based on a large-scale model pool from other subjects and a small amount of alert baseline calibration data from a new user. The model pool ensures the availability of positive model transferring, whereas the alert baseline data serve as a selector of decoding models in the pool. Compared with the conventional within-subject approach, the proposed framework remarkably reduced the required calibration time for a new user by 90% (from 18.00 min to 1.72 ± 0.36 min) without compromising performance (p = 0.0910) when sufficient existing data are available. These findings suggest a practical pathway toward plug-and-play drowsiness detection and can ignite numerous real-world BCI applications.
Wei, C-S, Wang, Y-T, Lin, C-T & Jung, T-P 2018, 'Toward Drowsiness Detection Using Non-hair-Bearing EEG-Based Brain-Computer Interfaces', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 2, pp. 400-406.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Drowsy driving is one of the major causes of fatal accidents worldwide. For the past two decades, many studies have explored the feasibility and practicality of drowsiness detection using electroencephalogram (EEG)-based brain-computer interface (BCI) systems. However, on the pathway of transitioning laboratory-oriented BCI into real-world environments, one chief challenge is to obtain high-quality EEG with convenience and long-term wearing comfort. Recently, acquiring EEG from non-hair-bearing (NHB) scalp areas has been proposed as an alternative solution to avoid many of the technical limitations resulting from the interference of hair between electrodes and the skin. Furthermore, our pilot study has shown that informative drowsiness-related EEG features are accessible from the NHB areas. This study extends the previous work to quantitatively evaluate the performance of drowsiness detection using cross-session validation with widely studied machine-learning classifiers. The offline results showed no significant difference between the accuracy of drowsiness detection using the NHB EEG and the whole-scalp EEG across all subjects (p = 0.31). The findings of this study demonstrate the efficacy and practicality of the NHB EEG for drowsiness detection and could catalyze explorations and developments of many other real-world BCI applications.
Wei, F, Costanza, R, Dai, Q, Stoeckl, N, Gu, X, Farber, S, Nie, Y, Kubiszewski, I, Hu, Y, Swaisgood, R, Yang, X, Bruford, M, Chen, Y, Voinov, A, Qi, D, Owen, M, Yan, L, Kenny, DC, Zhang, Z, Hou, R, Jiang, S, Liu, H, Zhan, X, Zhang, L, Yang, B, Zhao, L, Zheng, X, Zhou, W, Wen, Y, Gao, H & Zhang, W 2018, 'The Value of Ecosystem Services from Giant Panda Reserves', Current Biology, vol. 28, no. 13, pp. 2174-2180.e7.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Ecosystem services (the benefits to humans from ecosystems) are estimated globally at $125 trillion/year [1, 2]. Similar assessments at national and regional scales show how these services support our lives [3]. All valuations recognize the role of biodiversity, which continues to decrease around the world, in maintaining these services [4, 5]. The giant panda epitomizes the flagship species [6]. Its unrivalled public appeal translates into support for conservation funding and policy, including a tax on foreign visitors to support its conservation [7]. The Chinese government has established a panda reserve system, which today numbers 67 reserves [8, 9]. The biodiversity of these reserves is among the highest in the temperate world [10], covering many of China's endemic species [11]. The panda is thus also an umbrella species [12]—protecting panda habitat also protects other species. Despite the benefits derived from pandas, some journalists have suggested that it would be best to let the panda go extinct. With the recent downlisting of the panda from Endangered to Vulnerable, it is clear that society's investment has started to pay off in terms of panda population recovery [13, 14]. Here, we estimate the value of ecosystem services of the panda and its reserves at between US$2.6 and US$6.9 billion/year in 2010. Protecting the panda as an umbrella species and the habitat that supports it yields roughly 10–27 times the cost of maintaining the current reserves, potentially further motivating expansion of the reserves and other investments in natural capital in China.
Wu, D, King, J-T, Chuang, C-H, Lin, C-T & Jung, T-P 2018, 'Spatial Filtering for EEG-Based Regression Problems in Brain–Computer Interface (BCI)', IEEE Transactions on Fuzzy Systems, vol. 26, no. 2, pp. 771-781.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Electroencephalogram (EEG) signals are frequently used in brain-computer interfaces (BCIs), but they are easily contaminated by artifacts and noise, so preprocessing must be done before they are fed into a machine learning algorithm for classification or regression. Spatial filters have been widely used to increase the signal-to-noise ratio of EEG for BCI classification problems, but their applications in BCI regression problems have been very limited. This paper proposes two common spatial pattern (CSP) filters for EEG-based regression problems in BCI, which are extended from the CSP filter for classification by using fuzzy sets. Experimental results on EEG-based response speed estimation from a large-scale study, which collected 143 sessions of sustained-attention psychomotor vigilance task data from 17 subjects during a 5-month period, demonstrate that the two proposed spatial filters can significantly increase the EEG signal quality. When used in LASSO and k-nearest neighbors regression for user response speed estimation, the spatial filters can reduce the root-mean-square estimation error by 10.02–19.77%, and at the same time increase the correlation to the true response speed by 19.39–86.47%.
Wu, S, Bai, Q & Sengvong, S 2018, 'GreenCommute: An Influence-Aware Persuasive Recommendation Approach for Public-Friendly Commute Options', Journal of Systems Science and Systems Engineering, vol. 27, no. 2, pp. 250-264.
View/Download from: Publisher's site
View description>>
Negative impacts produced by the transportation sector have increased in parallel with the growth of urban mobility. In this paper, we introduce GreenCommute, a novel recommendation system that encourages commuters to take public-friendly commute options while helping to alleviate external costs to society, such as traffic pollution, congestion and accidents. Meanwhile, a rewarding mechanism for persuading commuters is embedded in the proposed approach to balance the conflict between personal needs and social aims. The allocation of reward values also takes users' degrees of influence in the social network into consideration. Experimental results show that GreenCommute promotes public-friendly commute options more effectively than a traditional recommendation system.
Wu, W, Li, B, Chen, L, Zhu, X & Zhang, C 2018, 'K-Ary Tree Hashing for Fast Graph Classification', IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 5, pp. 936-949.
View/Download from: Publisher's site
View description>>
Existing graph classification usually relies on an exhaustive enumeration of substructure patterns, where the number of substructures expands exponentially w.r.t. the size of the graph set. Recently, the Weisfeiler-Lehman (WL) graph kernel has achieved the best performance in terms of both accuracy and efficiency among state-of-the-art methods. However, it is still time-consuming, especially for large-scale graph classification tasks. In this paper, we present a k-Ary Tree based Hashing (KATH) algorithm, which is able to obtain competitive accuracy with a very fast runtime. The main idea of KATH is to construct a traversal table to quickly approximate the subtree patterns in WL using k-ary trees. Based on the traversal table, KATH employs a recursive indexing process that performs matrix indexing only r times to generate all (r-1)-depth k-ary trees, where the leaf node labels of a tree can uniquely specify the pattern. After that, the MinHash scheme is used to fingerprint the acquired subtree patterns for a graph. Our experimental results on both real-world and synthetic datasets show that KATH runs significantly faster than state-of-the-art methods while achieving competitive or better accuracy.
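The MinHash fingerprinting step used in the abstract above is easy to illustrate. Below is a minimal sketch, not the KATH implementation; the toy pattern strings, the linear hash family and all parameters are assumptions. Each graph's set of subtree-pattern labels is hashed with many random linear hash functions, and the fraction of matching minima between two signatures estimates the Jaccard similarity of the pattern sets.

```python
import zlib
import numpy as np

P = 2_147_483_647  # a large prime for the hash family

def minhash(patterns, n_hashes=256, seed=0):
    """MinHash signature of a set of pattern strings."""
    rng = np.random.default_rng(seed)
    a = rng.integers(1, P, size=n_hashes)   # random linear hash coefficients
    b = rng.integers(0, P, size=n_hashes)
    # Deterministic base hash of each pattern string.
    x = np.array([zlib.crc32(s.encode()) % P for s in patterns])
    # Signature: the minimum of each hash function over all patterns.
    return ((a[:, None] * x[None, :] + b[:, None]) % P).min(axis=1)

def estimated_jaccard(sig1, sig2):
    """Fraction of agreeing minima estimates Jaccard similarity."""
    return float(np.mean(sig1 == sig2))

# Two hypothetical pattern sets sharing 3 of 5 distinct patterns
# (true Jaccard similarity 3/5 = 0.6).
g1 = minhash({"A", "A|B", "A|B|C", "B|C"})
g2 = minhash({"A", "A|B", "B|C", "C"})
```

With 256 hash functions the estimate concentrates near the true similarity of 0.6; more hashes tighten the estimate at the cost of longer signatures.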
Wu, Z, Li, G, Liu, Q, Xu, G & Chen, E 2018, 'Covering the Sensitive Subjects to Protect Personal Privacy in Personalized Recommendation', IEEE Transactions on Services Computing, vol. 11, no. 3, pp. 493-506.
View/Download from: Publisher's site
Wu, Z, Xu, G, Lu, C, Chen, E, Jiang, F & Li, G 2018, 'An effective approach for the protection of privacy text data in the CloudDB', World Wide Web, vol. 21, no. 4, pp. 915-938.
View/Download from: Publisher's site
View description>>
© 2017 Springer Science+Business Media, LLC Due to the advantages of pay-on-demand, expand-on-demand and high availability, cloud databases (CloudDB) have been widely used in information systems. However, since a CloudDB is distributed on an untrusted cloud side, how to effectively protect the massive amounts of private information in the CloudDB is an important problem. Although traditional security strategies (such as identity authentication and access control) can prevent illegal users from accessing unauthorized data, they cannot prevent internal users at the cloud side from accessing and exposing personal privacy information. In this paper, we propose a client-based approach to protect personal privacy in a CloudDB. In the approach, privacy data, before being stored on the cloud side, are encrypted using a traditional encryption algorithm, so as to ensure the security of privacy data. To execute various kinds of query operations over the encrypted data efficiently, the encrypted data are also augmented with an additional feature index, so that as much of each query operation as possible can be processed on the cloud side without the need to decrypt the data. To this end, we explore how the feature index of privacy data is constructed, and how a query operation over privacy data is transformed into a new query operation over the index data so that it can be executed on the cloud side correctly. The effectiveness of the approach is demonstrated by theoretical analysis and experimental evaluation. The results show that the approach performs well in terms of security, usability and efficiency, and is thus effective in protecting personal privacy in the CloudDB.
Xiang, T, Li, Y, Li, X, Zhong, S & Yu, S 2018, 'Collaborative ensemble learning under differential privacy', Web Intelligence, vol. 16, no. 1, pp. 73-87.
View/Download from: Publisher's site
View description>>
Ensemble learning plays an important role in big data analysis. A major limitation is that multiple parties cannot share the knowledge extracted from their ensemble learning models with a privacy guarantee; there is therefore a great demand for privacy-preserving collaborative ensemble learning. This paper proposes a privacy-preserving collaborative ensemble learning framework under differential privacy. In the framework, multiple parties can independently build their local ensemble models with personalized privacy budgets, and collaboratively share their knowledge to obtain a stronger classifier with the help of a central agent in a privacy-preserving way. Under this framework, this paper presents the differentially private versions of two widely used ensemble learning algorithms: collaborative random forests under differential privacy (CRFsDP) and collaborative adaptive boosting under differential privacy (CAdaBoostDP). Theoretical analysis and extensive experimental results show that our proposed framework achieves a good balance between privacy and utility in an efficient way.
Xiang, Y, Natgunanathan, I, Peng, D, Hua, G & Liu, B 2018, 'Spread Spectrum Audio Watermarking Using Multiple Orthogonal PN Sequences and Variable Embedding Strengths and Polarities', IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 3, pp. 529-539.
View/Download from: Publisher's site
Xiao, Y, Pei, Q, Liu, X & Yu, S 2018, 'A Novel Trust Evaluation Mechanism for Collaborative Filtering Recommender Systems', IEEE Access, vol. 6, pp. 70298-70312.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In online social networks (OSNs), high-trust-value entities play an important role in service recommendation when users inquire about a certain service. Generally, users in OSNs are more willing to choose those services recommended by high-trust-value entities. In fact, users may suffer a great loss of property once they accept bad services provided by high-trust-value entities. However, current schemes do not consider this problem. Hence, we propose a scheme called RHT (recommendation from high trust value entities) to evaluate the trust degree of a service recommended by high-trust-value entities. To be specific, there exist other users who provide their ratings of the service recommended by a high-trust-value entity, and RHT first selects the trusted ones from those users by computing the similarity between the target user and them. Simultaneously, RHT also withstands malicious attacks during trusted-node selection. In addition, we design an adaptive trust computation method to calculate trust values according to the ratings of trusted users. The experimental results show that RHT has higher accuracy in trust evaluation compared with current representative schemes and effectively resists four common attacks when choosing trusted nodes.
Xu, C, Jin, W, Wang, X, Zhao, G & Yu, S 2018, 'MC-VAP: A multi-connection virtual access point for high performance software-defined wireless networks', Journal of Network and Computer Applications, vol. 122, pp. 88-98.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd Aiming to exploit the power of multiple accesses from ubiquitous wireless networks, researchers have employed multiple virtualized interfaces connecting to multiple APs for mobile users. However, these schemes require expensive modifications and additional cost on the mobile device, which makes them hard to implement. Complementarily, in this paper, we propose a multi-connection virtual access point (MC-VAP) to virtualize and manipulate physical APs to provide multi-path transmission for a user while avoiding any modifications on the user side. As a result, the independent flows from an application can be dispatched to multiple paths separately and transmitted on multiple APs simultaneously, which noticeably improves throughput. In order to maximize each application's throughput, the flow assignment is formulated as a mixed integer non-linear programming (MINLP) problem. In particular, a low-complexity heuristic algorithm, namely the narrowing search set with cutting-off solution space (NS-CoS) algorithm, is presented to solve the MINLP problem by relaxing it into simple LP problems. Moreover, we implement a prototype of MC-VAP, and extensive real-world experiments demonstrate that MC-VAP can realize seamless handover and provide faster yet efficient flow-assignment solutions, in contrast to the optimal method, to achieve multifold throughput improvement for applications over regular WiFi.
Xu, X, Motta, G, Tu, Z, Xu, H, Wang, Z & Wang, X 2018, 'A new paradigm of software service engineering in big data and big service era', Computing, vol. 100, no. 4, pp. 353-368.
View/Download from: Publisher's site
View description>>
© 2018, Springer-Verlag GmbH Austria, part of Springer Nature. In the big data era, servitization has become one of the important development trends of the IT world. More and more software resources are developed and exist as services on the Internet. These services from multiple domains and networks converge into a huge, complicated service network or ecosystem, which can be called Big Service. How to reuse the abundant open service resources to rapidly develop new applications or comprehensive service solutions that meet massive individualized customer requirements is a key issue in the big data and big service ecosystem. Based on an analysis of the ecosystem of big service, this paper presents a new paradigm of software service engineering, the Requirement-Engineering Two-Phase Service Engineering Paradigm (RE2SEP), which includes service-oriented requirement engineering, domain-oriented service engineering, and the development approach of software services. By means of the RE2SEP approach, adaptive service solutions can be efficiently designed and implemented to match the requirement propositions of massive individualized customers in the Big Service ecosystem. A case study of an RE2SEP application, a project on a citizen mobility service in a smart-city environment, is also given in this paper. The RE2SEP paradigm will change the way of traditional life-cycle-oriented software engineering and lead to a new approach to software service engineering.
Xuan, J, Lu, J, Yan, Z & Zhang, G 2018, 'Bayesian Deep Reinforcement Learning via Deep Kernel Learning', International Journal of Computational Intelligence Systems, vol. 12, no. 1, pp. 164-164.
View/Download from: Publisher's site
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2018, 'Doubly Nonparametric Sparse Nonnegative Matrix Factorization Based on Dependent Indian Buffet Processes', IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1835-1849.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, the traditional SNMF typically assumes the number of latent factors (i.e., dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of two stick weights associated with each column pair of factor matrices while still maintaining their respective marginal distribution specified by IBP. As a consequence, the generation of two factor matrices will be columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using bivariate Beta distribution and 2) using Copula function. Compared with the single IBP-based NMF, this paper jointly makes two factor matrices nonparametric and sparse, which could be applied to broader scenarios, such as co-clustering. This paper is seen to be much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variations in their nonzero entries. Our experiments on synthetic data show the merits of this paper compared with the state-of-the-art models in respect of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate the efficiency of this paper in document-word co-clustering tasks.
Yang, E, Deng, C, Li, C, Liu, W, Li, J & Tao, D 2018, 'Shared Predictive Cross-Modal Deep Quantization', IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5292-5303.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.
Yang, M, Zhu, T, Liu, B, Xiang, Y & Zhou, W 2018, 'Differential Private POI Queries via Johnson-Lindenstrauss Transform', IEEE Access, vol. 6, pp. 29685-29699.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The growing popularity of location-based services is giving untrusted servers relatively free rein to collect huge amounts of location information from mobile users. This information can reveal far more than just a user's locations, including other sensitive information such as the user's interests or daily routines, which raises strong privacy concerns. Differential privacy is a well-acknowledged privacy notion that has become an important standard for the preservation of privacy. Unfortunately, existing privacy preservation methods based on differential privacy protect user location privacy at the cost of utility, aspects of which have to be sacrificed to ensure that privacy is maintained. To solve this problem, we present a new privacy framework that includes a semi-trusted third party. Under our privacy framework, both the server and the third party only hold part of the user's location information. Neither the server nor the third party knows the exact location of the user. In addition, the proposed perturbation method based on the Johnson–Lindenstrauss transform satisfies differential privacy. Two popular point-of-interest queries, k-NN and Range, are used to evaluate the method on two real-world data sets. Extensive comparisons against two representative differential privacy-based methods show that the proposed method not only provides a strict privacy guarantee but also significantly improves performance.
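The distance-preserving property of the Johnson–Lindenstrauss transform that the perturbation method above builds on can be sketched in a few lines. This is a toy illustration with arbitrary dimensions and synthetic data, not the paper's exact mechanism: a random Gaussian projection from d down to k dimensions approximately preserves pairwise Euclidean distances, so distance-based queries such as k-NN can still be answered on the projected data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n points in d dimensions, projected to k.
d, k, n = 200, 100, 30
points = rng.normal(size=(n, d))             # synthetic location/feature data
proj = rng.normal(size=(d, k)) / np.sqrt(k)  # JL random projection matrix

low = points @ proj                          # perturbed, lower-dimensional data

# Pairwise distance between the first two points, before and after projection.
orig = np.linalg.norm(points[0] - points[1])
new = np.linalg.norm(low[0] - low[1])
ratio = new / orig                           # concentrates around 1
```

The ratio concentrates around 1 as k grows; in a privacy setting the projection matrix is kept from the server, which is what motivates the semi-trusted third party in the framework.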
Yang, M, Zhu, T, Liu, B, Xiang, Y & Zhou, W 2018, 'Machine Learning Differential Privacy With Multifunctional Aggregation in a Fog Computing Architecture', IEEE Access, vol. 6, pp. 17119-17129.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Data aggregation plays an important role in the Internet of Things, and its study and analysis have resulted in a range of innovative services and benefits for people. However, the privacy issues associated with raw sensory data raise significant concerns due to the sensitive nature of the user information it often contains. Thus, numerous schemes have been proposed over the last few decades to preserve the privacy of users' data. Most methods are based on encryption technology, which is computationally and communicationally expensive. In addition, most methods can only handle a single aggregation function. Therefore, in this paper, we propose a multifunctional data aggregation method with differential privacy. The method is based on machine learning and can support a wide range of statistical aggregation functions, including additive and non-additive aggregation. It operates within a fog computing architecture, which extends cloud computing to the edge of the network, alleviating much of the computational burden on the cloud server. In addition, by reporting only the aggregation results to the server, communication efficiency is improved. Extensive experimental results show that the proposed method not only answers flexible aggregation queries that meet diversified aggregation goals, but also produces aggregation results with high accuracy.
Yang, M, Zhu, T, Xiang, Y & Zhou, W 2018, 'Density-Based Location Preservation for Mobile Crowdsensing With Differential Privacy', IEEE Access, vol. 6, pp. 14779-14789.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In recent years, the widespread prevalence of smart devices has created a new class of mobile Internet of Things applications. Called mobile crowdsensing, these techniques use workers with mobile devices to collect data and send it to task requesters for rewards. However, to ensure the optimal allocation of tasks, a centralized server needs to know the precise location of each user, but exposing the workers' exact locations raises privacy concerns. In this paper, we propose a data release mechanism for crowdsensing techniques that satisfies differential privacy, providing rigorous protection of worker locations. The partitioning method is based on worker density and considers non-uniform worker distribution. In addition, we propose a geocast region selection method for task assignment that effectively balances the task assignment success rate with worker travel distances and system overheads. Extensive experiments prove that the proposed method not only provides a strict privacy guarantee but also significantly improves performance.
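For intuition, the standard differential-privacy building block behind such location-release mechanisms can be sketched as follows. This is a minimal Laplace-mechanism sketch under assumed parameters; the paper's density-based partitioning and geocast selection are not reproduced. Adding or removing one worker changes any cell count by at most 1, so that is the sensitivity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical worker counts per spatial cell (synthetic data).
true_counts = np.array([12.0, 0.0, 7.0, 3.0, 25.0])
epsilon = 1.0       # assumed privacy budget
sensitivity = 1.0   # one worker changes a count by at most 1

# Laplace mechanism: noise scaled to sensitivity / epsilon per query.
noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                    size=true_counts.shape)
noisy_counts = true_counts + noise  # these may be published
```

A smaller epsilon yields larger noise and stronger privacy; density-aware partitioning matters because uniform grids waste budget on empty cells.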
Yao, L, Sheng, QZ, Benatallah, B, Dustdar, S, Wang, X, Shemshadi, A & Kanhere, SS 2018, 'WITS: an IoT-endowed computational framework for activity recognition in personalized smart homes', Computing, vol. 100, no. 4, pp. 369-385.
View/Download from: Publisher's site
Yao, L, Sheng, QZ, Li, X, Gu, T, Tan, M, Wang, X, Wang, S & Ruan, W 2018, 'Compressive Representation for Device-Free Activity Recognition with Passive RFID Signal Strength', IEEE Transactions on Mobile Computing, vol. 17, no. 2, pp. 293-306.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Understanding and recognizing human activities is a fundamental research topic for a wide range of important applications such as fall detection and remote health monitoring and intervention. Despite active research in human activity recognition over the past years, existing approaches based on computer vision or wearable sensor technologies present several significant issues such as privacy (e.g., using video camera to monitor the elderly at home) and practicality (e.g., not possible for an older person with dementia to remember wearing devices). In this paper, we present a low-cost, unobtrusive, and robust system that supports independent living of older people. The system interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification (RFID) technology and machine learning algorithms. To deal with noisy, streaming, and unstable RFID signals, we develop a compressive sensing, dictionary-based approach that can learn a set of compact and informative dictionaries of activities using an unsupervised subspace decomposition. In particular, we devise a number of approaches to explore the properties of sparse coefficients of the learned dictionaries for fully utilizing the embodied discriminative information on the activity recognition task. Our approach achieves efficient and robust activity recognition via a more compact and robust representation of activities. Extensive experiments conducted in a real-life residential environment demonstrate that our proposed system offers a good overall performance and shows the promising practical potential to underpin the applications for the independent living of the elderly.
Yao, L, Sheng, QZ, Wang, X, Wang, S, Li, X & Wang, S 2018, 'Collaborative text categorization via exploiting sparse coefficients', World Wide Web, vol. 21, no. 2, pp. 373-394.
View/Download from: Publisher's site
View description>>
© 2017, Springer Science+Business Media New York. Text categorization is widely characterized as a multi-label classification problem. Robust modeling of the semantic similarity between a query text and training texts is essential to construct an effective and accurate classifier. In this paper, we systematically investigate the Web page/text classification problem via integrating sparse representation with random measurements. In particular, we first adopt a very sparse data-independent random measurement matrix to map the original high dimensional text feature space to a lower dimensional space without loss of key information. We then propose a generic sparse representation method to obtain the sparse solution by decoding the semantic correlations between the query text and entire training samples. Based on the above method, we also design and examine a series of rules by taking advantage of the sparse coefficients to propagate multiple labels for the given query texts. We have conducted extensive experiments using real-world datasets to examine our proposed approach, and the results show the effectiveness of the proposed approach.
Yao, L, Sheng, QZ, Wang, X, Zhang, WE & Qin, Y 2018, 'Collaborative Location Recommendation by Integrating Multi-dimensional Contextual Information', ACM Transactions on Internet Technology, vol. 18, no. 3, pp. 1-24.
View/Download from: Publisher's site
View description>>
Point-of-Interest (POI) recommendation is a new type of recommendation task that comes along with the prevalence of location-based social networks and services in recent years. Compared with traditional recommendation tasks, POI recommendation focuses more on making personalized and context-aware recommendations to improve user experience. Traditionally, the most commonly used contextual information includes geographical and social context information. However, the increasing availability of check-in data makes it possible to design more effective location recommendation applications by modeling and integrating comprehensive types of contextual information, especially the temporal information. In this article, we propose a collaborative filtering method based on Tensor Factorization, a generalization of the Matrix Factorization approach, to model the multi-dimensional contextual information. Tensor Factorization naturally extends Matrix Factorization by increasing the dimensionality of concerns, within which the three-dimensional model is the one most popularly used. Our method exploits a high-order tensor to fuse heterogeneous contextual information about users’ check-ins instead of the traditional two-dimensional user-location matrix. The factorization of this tensor leads to a more compact model of the data that is naturally suitable for integrating contextual information to make POI recommendations. Based on the model, we further improve the recommendation accuracy by utilizing the internal relations within users and locations to regularize the latent factors. Experimental results on a large real-world dataset demonstrate the effectiveness of our approach.
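The three-dimensional (user x location x time) factorization at the core of the model above can be sketched in a few lines. The shapes, rank, learning rate and single gradient step below are illustrative assumptions, not the authors' exact training scheme: a rank-R CP-style factorization scores a check-in as the elementwise product of the corresponding user, location and time latent factors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: 8 users, 10 locations, 4 time slots, rank-3 factors.
n_users, n_locs, n_times, rank = 8, 10, 4, 3
U = rng.normal(scale=0.1, size=(n_users, rank))  # user factors
L = rng.normal(scale=0.1, size=(n_locs, rank))   # location factors
T = rng.normal(scale=0.1, size=(n_times, rank))  # time factors

def score(u, l, t):
    """Predicted preference of user u for location l in time slot t."""
    return float(np.sum(U[u] * L[l] * T[t]))

def sgd_step(u, l, t, r, lr=0.05):
    """One stochastic gradient step on a single observed check-in (u,l,t,r)."""
    err = r - score(u, l, t)
    # Simultaneous update: right-hand sides use the old factor values.
    U[u], L[l], T[t] = (U[u] + lr * err * L[l] * T[t],
                        L[l] + lr * err * U[u] * T[t],
                        T[t] + lr * err * U[u] * L[l])
```

Top-N recommendation for a user then amounts to ranking locations by `score` within a time slot; the regularization on related users and locations described in the abstract would add extra terms to the gradient.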
Yaprak, A, Yosun, T & Cetindamar, D 2018, 'The influence of firm-specific and country-specific advantages in the internationalization of emerging market firms: Evidence from Turkey', International Business Review, vol. 27, no. 1, pp. 198-207.
View/Download from: Publisher's site
View description>>
This paper examines the role of institutional factors that enable firm- and country-specific drivers of emerging market (EM) firms' internationalization, based on case-based research conducted in one EM, Turkey. Findings indicate that 10 major factors comprising firm-specific and country-specific advantages drove the focal case study firms abroad: the firm-specific factors included financial and operations supremacy; excellence in value chain activities; inexpensive human resources; rapid learning capabilities in production and technology development; and adaptability to foreign markets; while the country-specific factors included home-government policies supporting internationalization; logistical advantages arising from geographical position; adaptability capabilities resulting from former survival through institutional voids; strong social ties formed through networks; and availability of low-cost resources. These findings are discussed and future research questions are offered.
Yasin, A, Liu, L, Li, T, Wang, J & Zowghi, D 2018, 'Design and preliminary evaluation of a cyber Security Requirements Education Game (SREG)', Information and Software Technology, vol. 95, pp. 179-200.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Context: Security, in digitally connected organizational environments of today, involves many different perspectives, including social, physical, and technical factors. In order to understand the interactions among these correlated aspects and elicit potential threats geared towards a given organization, different security requirements analysis approaches are proposed in the literature. However, the body of knowledge is yet to unleash its full potential due to the complex nature of security problems, and inadequate ways to improve security awareness of key players in the organization. Objective: The objective of the research study is to improve the security awareness of players utilizing serious games via: (i) know-how of security concepts and security protection; (ii) a guided process of identifying valuable assets and vulnerabilities in a given organizational setting; (iii) a guided process of defining successful security attacks on the organization. Method: Important methods used to address the above objectives include: (i) a comprehensive review of the literature to better understand security and game design elements; (ii) designing a serious game using cyber security knowledge and game-based techniques combined with security requirements engineering concepts; (iii) using empirical evaluation (observation and survey) to verify the effectiveness of the proposed game design. Result: The solution proposed is a serious game for security requirements education, which: (i) can be an effective and fun way of learning security related concepts; (ii) mimics a real life problem setting in a presentable and understandable way; (iii) motivates players to learn more about security related concepts in the future. Conclusion: From this study, we conclude that the proposed Security Requirement Education Game (SREG) has positive results and is helpful for players of the game to gain an understanding of security attacks and vulnerabilities.
Ye, D & Zhang, M 2018, 'A Self-Adaptive Sleep/Wake-Up Scheduling Approach for Wireless Sensor Networks', IEEE Transactions on Cybernetics, vol. 48, no. 3, pp. 979-992.
View/Download from: Publisher's site
View description>>
Sleep/wake-up scheduling is one of the fundamental problems in wireless sensor networks, since the energy of sensor nodes is limited and they are usually non-rechargeable. The purpose of sleep/wake-up scheduling is to save the energy of each node by keeping nodes in sleep mode as long as possible (without sacrificing packet delivery efficiency) and thereby maximizing their lifetime. In this paper, a self-adaptive sleep/wake-up scheduling approach is proposed. Unlike most existing studies that use the duty cycling technique, which incurs a tradeoff between packet delivery delay and energy saving, the proposed approach, which does not use duty cycling, avoids such a tradeoff. The proposed approach, based on the reinforcement learning technique, enables each node to autonomously decide its own operation mode (sleep, listen, or transmission) in each time slot in a decentralized manner. Simulation results demonstrate the good performance of the proposed approach in various circumstances.
Ye, D, He, Q, Wang, Y & Yang, Y 2018, 'An agent-based service adaptation approach in distributed multi-tenant service-based systems', Journal of Parallel and Distributed Computing, vol. 122, pp. 11-25.
View/Download from: Publisher's site
View description>>
Service adaptation aims to alleviate the runtime quality management problem of distributed service-based systems (SBSs). Most of the existing service adaptation approaches are designed for single-tenant SBSs. However, modern distributed SBSs must achieve multi-tenancy due to the simultaneous existence of multiple tenants. Thus, it is vital to study service adaptation in distributed multi-tenant SBSs. Currently, service adaptation has not been properly addressed in distributed multi-tenant SBSs. Existing approaches for service adaptation in multi-tenant SBSs are centralised, which is not very efficient if the SBSs are large and distributed. Some decentralised service adaptation approaches, which were developed for single-tenant SBSs, may be extended to accommodate multi-tenant SBSs. These approaches, however, either incur significant communication overhead to obtain the required information, or simply assume that some specific global information is already known, which is not realistic in large and distributed SBSs. To overcome the limitations of existing approaches, in this paper, a novel agent-based hybrid service adaptation approach for distributed multi-tenant SBSs is proposed, which is based on the multi-agent coalition formation technique. Our hybrid approach combines the advantages of both centralised and decentralised approaches while avoiding their disadvantages. The experimental results demonstrate the effectiveness of the proposed approach.
Ye, D, He, Q, Wang, Y & Yang, Y 2018, 'Detection of transmissible service failure in distributed service-based systems', Journal of Parallel and Distributed Computing, vol. 119, pp. 36-49.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Inc. Detection of service failure, also known as service monitoring, is an important research problem in distributed service-based systems (SBSs). Failure of services is a transmissible threat in distributed SBSs, because services in distributed SBSs may have dependent relationships among them and thus the failure of one service may cause the failure of other services. Therefore, such transmissible service failure has to be detected in a timely manner while keeping the corresponding resource consumption as low as possible. Most of the existing service monitoring approaches are centralised, which suffer from a potential single point of failure and are not suitable for large-scale distributed SBSs. Moreover, these centralised approaches are designed only for single-tenant SBSs. Nowadays, the scale of distributed SBSs is extremely large, i.e., including a large number of services and clients. Thus, it is essential for monitoring approaches to work well in large-scale distributed SBSs and support multi-tenancy. Towards this end, in this paper, a novel agent-based decentralised service monitoring approach is developed for distributed SBSs. Compared to the centralised approaches, the proposed decentralised approach can avoid the single point of failure and can balance the computation over the monitoring agents. Also, unlike existing approaches which consider only single tenancy, the proposed approach takes multi-tenancy into account in distributed SBSs. Experimental results demonstrate that the proposed approach can respond as quickly as centralised approaches with much less computation overhead.
Yu, D, Xu, Z, Kao, Y & Lin, C-T 2018, 'The Structure and Citation Landscape of IEEE Transactions on Fuzzy Systems (1994–2015)', IEEE Transactions on Fuzzy Systems, vol. 26, no. 2, pp. 430-442.
View/Download from: Publisher's site
Yusoff, B, Merigó, JM, Ceballos, D & Peláez, JI 2018, 'Weighted-selective aggregated majority-OWA operator and its application in linguistic group decision making', International Journal of Intelligent Systems, vol. 33, no. 9, pp. 1929-1948.
View/Download from: Publisher's site
View description>>
© 2018 Wiley Periodicals, Inc. This paper focuses on the aggregation operations in the group decision-making model based on the concept of majority opinion. The weighted-selective aggregated majority-OWA (WSAM-OWA) operator is proposed as an extension of the SAM-OWA operator, where the reliability of information sources is considered in the formulation. The WSAM-OWA operator is generalized to the quantified WSAM-OWA operator by including the concept of linguistic quantifier, mainly for the group fusion strategy. The QWSAM-IOWA operator, with an ordering step, is introduced to the individual fusion strategy. The proposed aggregation operators are then implemented for the case of alternative scheme of heterogeneous group decision analysis. The heterogeneous group includes the consensus of experts with respect to each specific criterion. The exhaustive multicriteria group decision-making model under the linguistic domain, which consists of two-stage aggregation processes, is developed in order to fuse the experts’ judgments and to aggregate the criteria. The model provides greater flexibility when analyzing the decision alternatives with a tolerance that considers the majority of experts and the attitudinal character of experts. A selection of investment problem is given to demonstrate the applicability of the developed model.
Zakeri, A, Saberi, M, Hussain, OK & Chang, E 2018, 'Addressing Missing Data and Data Competitiveness Issues: Transforming Tacit Knowledge into Explicit Form by Fuzzy Inference Learning System', International Journal of Fuzzy Systems, vol. 20, no. 4, pp. 1224-1239.
View/Download from: Publisher's site
View description>>
© 2017, Taiwan Fuzzy Systems Association and Springer-Verlag GmbH Germany, part of Springer Nature. Although we are living in the era of big data, in many real-world applications, being able to access the right set and quantity of data is still a challenging task. One solution to address this drawback is to transform the existing information and knowledge from its tacit form back to data which can be used to simulate and regenerate the required knowledge in different scenarios for further analysis in explicit form. In this paper, we present our developed fuzzy inference-based learning system to achieve this objective. Our proposed framework is based on both conventional fuzzy-based modelling and the adaptive network-based fuzzy inference system (ANFIS) that first transforms the existing tacit information and knowledge into a fuzzy form which is then fed into ANFIS to develop a trained model that regenerates them for analysis purposes. We validate our proposed model and demonstrate its accuracy to estimate the fuel efficiency of heavy duty trucks using real-world data.
Zakeri, A, Saberi, M, Hussain, OK & Chang, E 2018, 'An Early Detection System for Proactive Management of Raw Milk Quality: An Australian Case Study', IEEE Access, vol. 6, pp. 64333-64349.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Milk is a highly perishable product, whose quality degrades while moving downstream in an imperfect cold dairy supply chain. Existing literature adopts a reactive approach for evaluating and preventing milk with a high microbial index from moving further downstream in a dairy supply chain. In this paper, we argue that such an approach is not the best response if the intention is to maximize milk life in terms of quality. We propose a proactive approach that monitors the metrics of the temperature and the level that are the building blocks of microorganisms in milk. This information is then used to determine the status at which the storage tank should hold the milk in accordance with standards. This status is then compared with the tank's actual status, and if they are different from one another, it will prompt the farmers to take the required preventive actions to manage the quality of milk. The developed proactive management of raw milk quality approach is modeled by using a rule-based system and machine learning techniques with a high level of accuracy. To test the validity of our approach and demonstrate its applicability, we apply it to a milk farm in Queensland, Australia.
Zeng, D, Zhang, S, Gu, L, Yu, S & Fu, Z 2018, 'Quality-of-sensing aware budget constrained contaminant detection sensor deployment in water distribution system', Journal of Network and Computer Applications, vol. 103, pp. 274-279.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Water contamination or pollution has caused serious disasters and social impact. It is important to alleviate its impact and reduce the risks. Deploying water quality monitoring sensors in water distribution systems naturally becomes a promising solution. In the consideration of sensor deployment, the deployment cost and the achieved quality-of-sensing, usually in terms of coverage, are always two contradictory issues. Although massively deploying sensors implies higher quality-of-sensing, it may also incur extremely high deployment cost. In practice, it is usually infeasible given a limited sensor deployment budget. In this paper, we are motivated to investigate budget-constrained sensor deployment in water distribution systems, with the goal of maximizing the quality-of-sensing. Two kinds of sensors with different prices and hence different communication capabilities are considered. The cheaper one is equipped with only sensor-to-sensor communication capability, while the more expensive one is also capable of cellular communication. We first formally describe our problem as a mixed integer non-linear programming (MINLP) problem. To address the complexity of solving the MINLP, we further propose a heuristic algorithm based on a genetic algorithm, whose high efficiency is extensively validated by simulation-based studies.
Zeng, Y, Li, K, Yu, S, Zhou, Y & Li, K 2018, 'Parallel and Progressive Approaches for Skyline Query Over Probabilistic Incomplete Database', IEEE Access, vol. 6, pp. 13289-13301.
View/Download from: Publisher's site
View description>>
The advanced productivity of modern society has created a wide range of similar commodities. However, the descriptions of commodities are always incomplete. Therefore, it is difficult for consumers to make choices. In the face of this problem, the skyline query is a useful tool. However, the existing algorithms are unable to handle incomplete probabilistic databases. In addition, it is necessary to wait for query completion to obtain even partial results. Furthermore, traditional skyline algorithms are usually serial. Thus, they cannot utilize multi-core processors effectively. Therefore, a parallel progressive skyline query algorithm for incomplete databases is imperative, one that provides answers gradually and much faster. To address these problems, we design a new algorithm that uses multi-level grouping, pruning strategies, and pruning tuple transferring, which significantly decreases the computational costs. Experimental results demonstrate that the skyline results can be obtained in a short time. The parallel efficiency on an octa-core processor reaches 90% on high-dimensional, large databases.
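The skyline (Pareto-dominance) computation at the core of this line of work can be sketched for the simple complete-data, minimization case; the probabilistic, incomplete-data, and parallel machinery of the paper is omitted, and the names below are our own.

```python
# Illustrative skyline filter over complete tuples (smaller is better in
# every dimension); not the paper's probabilistic or parallel algorithm.
def dominates(a, b):
    # a dominates b if a is no worse in every dimension and strictly
    # better in at least one
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def skyline(points):
    # keep exactly the tuples that no other tuple dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (3, 1), (4, 4), (2, 3)]
result = skyline(pts)
```

Here (4, 4) and (2, 3) are both dominated by (2, 2), so the skyline is the remaining three tuples.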
Zhang, B, Zhu, T, Hu, C & Zhao, C 2018, 'Cryptanalysis of a Lightweight Certificateless Signature Scheme for IIOT Environments', IEEE Access, vol. 6, pp. 73885-73894.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. As an extremely significant cryptographic primitive, certificateless signature (CLS) schemes can provide message authentication with no use of traditional digital certificates. High efficiency and provable security without random oracles are challenges in designing a CLS scheme. Recently, Karati et al. proposed an efficient pairing-based CLS scheme with no use of a map-to-point hash function or the random oracle model to provide data authenticity in Industrial Internet of Things (IIoT) systems. The security proof was given under several hardness assumptions. However, we notice that both a public key replacement attack and a known message attack exist against Karati et al.'s scheme. Any adversary without knowledge of the signer's private key is capable of forging valid signatures. This leads to several serious consequences. For example, anybody can sign IIoT data on behalf of the IIoT data owner without being detected.
Zhang, H, Chen, CY, Yu, S & Quan, W 2018, 'Guest Editorial: Security Architecture and Technologies for 5G', IET Networks, vol. 7, no. 2, pp. 51-52.
View/Download from: Publisher's site
Zhang, H, Xu, G, Liang, X, Xu, G, Li, F, Fu, K, Wang, L & Huang, T 2018, 'An Attention-Based Word-Level Interaction Model for Knowledge Base Relation Detection', IEEE Access, vol. 6, pp. 75429-75441.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Relation detection plays a crucial role in knowledge base question answering, and it is challenging because of the high variance of relation expression in real-world questions. Traditional relation detection models based on deep learning follow an encoding-comparing paradigm, where the question and the candidate relation are represented as vectors to compare their semantic similarity. The max- or average-pooling operation, which is used to compress the sequence of words into fixed-dimensional vectors, becomes the bottleneck of information flow. In this paper, we propose an attention-based word-level interaction model (ABWIM) to alleviate the information loss caused by aggregating the sequence into a fixed-dimensional vector before the comparison. First, an attention mechanism is adopted to learn the soft alignments between words from the question and the relation. Then, fine-grained comparisons are performed on the aligned words. Finally, the comparison results are merged with a simple recurrent layer to estimate the semantic similarity. Besides, a dynamic sample selection strategy is proposed to accelerate the training procedure without decreasing performance. Experimental results of relation detection on both the SimpleQuestions and WebQuestions datasets show that ABWIM achieves state-of-the-art accuracy, demonstrating its effectiveness.
Zhang, J, Devitt, SJ, You, JQ & Nori, F 2018, 'Holonomic surface codes for fault-tolerant quantum computation', Physical Review A, vol. 97, no. 2.
View/Download from: Publisher's site
View description>>
© 2018 American Physical Society. Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
Zhang, J, McBurney, P & Musial, K 2018, 'Convergence of trading strategies in continuous double auction markets with boundedly-rational networked traders', Review of Quantitative Finance and Accounting, vol. 50, no. 1, pp. 301-352.
View/Download from: Publisher's site
View description>>
© 2017, Springer Science+Business Media New York. This paper considers the convergence of trading strategies among artificial traders connected to one another in a social network and trading in a continuous double auction financial marketplace. Convergence is studied by means of an agent-based simulation model called the Social Network Artificial stoCk marKet model. Six different canonical network topologies (including no-network) are used to represent the possible connections between artificial traders. Traders learn from the trading experiences of their connected neighbours by means of reinforcement learning. The results show that the proportions of traders using particular trading strategies are eventually stable. Which strategies dominate in these stable states depends to some extent on the particular network topology of trader connections and the types of traders.
Zhang, Y, Dong, P, Yu, S, Luo, H, Zheng, T & Zhang, H 2018, 'An Adaptive Multipath Algorithm to Overcome the Unpredictability of Heterogeneous Wireless Networks for High-Speed Railway', IEEE Transactions on Vehicular Technology, vol. 67, no. 12, pp. 11332-11344.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Accessing Internet services in high-speed mobile scenarios is an increasing demand from passengers and vendors. Owing to the bandwidth limitation of a single wireless network, researchers attempt to utilize the heterogeneous wireless networks along tracks to achieve multipath parallel transmission. These multipath transmission schemes usually depend on accurate estimation of network quality to achieve high performance. However, due to the unpredictability of wireless networks in high-speed mobile scenarios, current multipath transmission schemes perform poorly. In this paper, we first quantitatively analyse the unpredictability of wireless networks and, drawing on the results of many real experiments, the estimation error of classical algorithms in different scenarios. Second, to cope with the unpredictability of wireless networks, we propose a multipath transmission algorithm named receiver adaptive incremental delay (RAID) that can aggregate bandwidth across heterogeneous networks independently of accurate network quality estimation. Finally, we deploy the RAID algorithm in a real system. Extensive real experiments and simulations prove that our proposed algorithm performs better than the earliest completion first algorithm and the weighted round-robin (WRR) algorithm in high-speed mobile scenarios.
Zhang, Y, Guan, L & Liu, Q 2018, 'Liu Tungsheng: A geologist from a traditional Chinese cultural background who became an international star of science', Journal of Asian Earth Sciences, vol. 155, pp. 8-20.
View/Download from: Publisher's site
Zhang, Y, He, Q, Xiang, Y, Zhang, LY, Liu, B, Chen, J & Xie, Y 2018, 'Low-Cost and Confidentiality-Preserving Data Acquisition for Internet of Multimedia Things', IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3442-3451.
View/Download from: Publisher's site
Zhang, Y, Lu, J, Liu, F, Liu, Q, Porter, A, Chen, H & Zhang, G 2018, 'Does deep learning help topic extraction? A kernel k-means clustering method with word embedding', Journal of Informetrics, vol. 12, no. 4, pp. 1099-1117.
View/Download from: Publisher's site
View description>>
© 2018 All rights reserved. Topic extraction presents challenges for the bibliometric community, and its performance still depends on human intervention and its practical areas. This paper proposes a novel kernel k-means clustering method incorporated with a word embedding model to create a solution that effectively extracts topics from bibliometric data. The experimental results of a comparison of this method with four clustering baselines (i.e., k-means, fuzzy c-means, principal component analysis, and topic models) on two bibliometric datasets demonstrate its effectiveness across either a relatively broad range of disciplines or a given domain. An empirical study on bibliometric topic extraction from articles published by three top-tier bibliometric journals between 2000 and 2017, supported by expert knowledge-based evaluations, provides supplemental evidence of the method's ability on topic extraction. Additionally, this empirical analysis reveals insights into both overlapping and diverse research interests among the three journals that would benefit journal publishers, editorial boards, and research communities.
Zhang, Y, Saberi, M & Chang, E 2018, 'A semantic-based knowledge fusion model for solution-oriented information network development: a case study in intrusion detection field', Scientometrics, vol. 117, no. 2, pp. 857-886.
View/Download from: Publisher's site
View description>>
© 2018, Akadémiai Kiadó, Budapest, Hungary. Building information networks using semantic-based techniques to avoid tedious work and to achieve high efficiency has been a long-term goal in the information management world. A great volume of research has focused on developing large-scale information networks for general domains to pursue the comprehensiveness and integrity of the information. However, constructing customised information networks containing subject-specific knowledge has been neglected. Such research can potentially return high value in terms of both theoretical and practical contribution. In this paper, a new type of network, the solution-oriented information network, is coined that includes research problems and proposed techniques as nodes, and the relationships between them. A lightweight Semantic-based Knowledge Fusion Model (SKFM) is proposed leveraging the power of Natural Language Processing (NLP) and Crowdsourcing to construct the proposed information networks using academic papers (knowledge) from Scopus. SKFM relies on NLP for its automatic components, while Crowdsourcing is initiated when uncertain cases arise. Applying the NLP technique helps to develop a semi-automatic knowledge fusion method, saving effort and time in extracting information from academic papers. Human power is leveraged in uncertain cases to make sure the essential concepts for developing the information networks are extracted reliably and connected correctly. SKFM makes a theoretical contribution in terms of a lightweight knowledge extraction and reconstruction framework, and offers practical value by providing solutions proposed in academic papers to address corresponding research issues in subject-specific areas. Experiments have been conducted and have shown promising results. In the research field of intrusion detection, the information of attack types and proposed solutions has been extracted and integrated in a graphic manner with high accuracy ...
Zheng, Y, Zhang, G, Zhang, Z & Lu, J 2018, 'A reducibility method for the weak linear bilevel programming problems and a case study in principal-agent', Information Sciences, vol. 454-455, pp. 46-58.
View/Download from: Publisher's site
View description>>
© 2018 A weak linear bilevel programming (WLBP) problem often models problems involving a hierarchical structure in expert and intelligent systems under a pessimistic point of view. In this paper, we deal with such a problem. Using the duality theory of linear programming, the WLBP problem is first equivalently transformed into a jointly constrained bilinear programming problem. Then, we show that the resolution of the jointly constrained bilinear programming problem is equivalent to the resolution of a disjoint bilinear programming problem under appropriate assumptions. This makes it possible to solve the WLBP problem via a single-level disjoint bilinear programming problem. Furthermore, some examples illustrate the solution process and feasibility of the proposed method. Finally, the WLBP problem models a principal-agent problem under the pessimistic point of view, which is also compared with a principal-agent problem under the optimistic point of view.
Zhong, S, Ren, W, Zhu, T, Ren, Y & Choo, K-KR 2018, 'Performance and Security Evaluations of Identity- and Pairing-Based Digital Signature Algorithms on Windows, Android, and Linux Platforms: Revisiting the Algorithms of Cha and Cheon, Hess, Barreto, Libert, Mccullagh and Quisquater, and Paterson and Schuldt', IEEE Access, vol. 6, pp. 37850-37857.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Bilinear pairing, an essential tool to construct efficient digital signatures, has applications in mobile devices and many other settings. One particular research challenge is to design cross-platform security protocols (e.g. for Windows, Linux, and other popular mobile operating systems) while achieving an optimal security-performance tradeoff. That is, how to choose the right digital signature algorithm, for example, on mobile devices while considering the limitations on both computation capacity and battery life. In this paper, we examine the security-performance tradeoff of four popular digital signature algorithms, namely: CC (proposed by Cha and Cheon in 2003), Hess (proposed by Hess in 2002), BLMQ (proposed by Barreto et al. in 2005), and PS (proposed by Paterson and Schuldt in 2006), on various platforms. We empirically evaluate their performance using experiments on Windows, Android, and Linux platforms, and find that the BLMQ algorithm has the highest computational and communication efficiency. We also study their security properties under the random oracle model and, assuming the intractability of the CDH problem, we reveal that the BLMQ digital signature scheme satisfies existential unforgeability under adaptively chosen message and ID attacks. The efficiency of the PS algorithm is lower, but it is secure under the standard model.
Zhou, L, Fu, A, Yu, S, Su, M & Kuang, B 2018, 'Data integrity verification of the outsourced big data in the cloud environment: A survey', Journal of Network and Computer Applications, vol. 122, pp. 1-15.
View/Download from: Publisher's site
View description>>
© 2018 Elsevier Ltd With the explosive growth of data and the rapid development of science and technology, big data analysis has attracted increasing attention. Due to the restrictive performance of traditional devices, cloud computing emerges as a convenient storage and computing platform for big data analysis. Driven by benefits, cloud servers may intentionally delete or modify outsourced big data. Therefore, users need to make sure that the servers correctly store the outsourced big data prior to deploying the cloud computing applications in practice. To resolve the issue, many researchers have concentrated on enabling users to check the completeness of data with the data integrity verification (DIV) technique. We have therefore collated a summary of the existing literature, aiming to present a solid and stimulating review of current academic achievements for interested readers. Firstly, we present a fundamental introduction by defining seven major topics in order to offer a summary of the existing research domain for DIV study. Secondly, we classify the state-of-the-art DIV solutions into four categories, and then we parse each category based on dynamics, providing a clear and hierarchical classification of forthcoming DIV efforts. Thirdly, we discuss the principal topics and technical means utilized to equip DIV schemes with different requirements. Finally, we discuss the issues and challenges anticipated in future work, thus suggesting possible directions for follow-up research.
Zhu, T, Li, G, Xiong, P & Zhou, W 2018, 'Answering differentially private queries for continual datasets release', Future Generation Computer Systems, vol. 87, pp. 816-827.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Privacy-preserving data release is a hot topic that attracts a lot of attention in the data mining, machine learning, and social network communities. Most studies on privacy preservation focus on static data releases; however, data are usually updated periodically. As a potential solution, differential privacy addresses continual data release by simplifying it into an event stream release problem. This approach overlooks the relationships between events, which we define as coupled information in this paper. We argue that datasets cannot be simplified as an event stream because of this coupled information, and that the coupled information may reveal more private information than expected. This work proposes a privacy-preserving mechanism that explicitly identifies the coupled information in continually released datasets. Instead of simplifying datasets to event streams, this mechanism treats the continually released datasets as coupled datasets, based on the relationship between the same individual in different datasets and the relationships between different individuals in the same dataset. We also propose the notion of coupled sensitivity for answering differentially private queries and develop an iteration-based coupled continual release algorithm, called CCR, that answers these queries with a large set of differentially private results. Theoretical analysis proves the privacy of this method, and an extensive performance study shows that CCR outperforms traditional differential privacy mechanisms when answering a large set of queries.
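As background to the query answering discussed above: differentially private answers are classically produced by adding Laplace noise scaled to a query's sensitivity. The sketch below shows only that standard baseline, not the CCR algorithm or coupled sensitivity itself, and the function names are ours; it samples Laplace noise via the stdlib inverse-CDF trick.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) from a uniform draw via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """Epsilon-DP answer: noise scale = sensitivity / epsilon."""
    return true_answer + laplace_noise(sensitivity / epsilon, rng)

# A counting query has sensitivity 1: adding or removing one individual
# changes the count by at most 1.
rng = random.Random(0)
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5, rng=rng)
```

In this framing, the paper's coupled sensitivity would replace the fixed `sensitivity` constant with a value that accounts for correlated records across releases.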
Zhu, T, Yang, M, Xiong, P, Xiang, Y & Zhou, W 2018, 'An iteration-based differentially private social network data release', Computer Systems Science and Engineering, vol. 33, no. 2, pp. 61-69.
View description>>
Online social networks provide an unprecedented opportunity for researchers to analyse various social phenomena. These network data are normally represented as graphs, which contain a great deal of sensitive individual information, so publishing these graph data may violate users' privacy. Differential privacy is one of the most influential privacy models and provides a rigorous privacy guarantee for data release. However, existing work on graph data publishing cannot provide accurate results when releasing a large number of queries. In this paper, we propose a graph update method that transforms the query release problem into an iterative process, in which a large set of queries is used as the update criteria. Compared with existing work, the proposed method enhances the accuracy of query results. Extensive experiments show that the proposed solution outperforms two state-of-the-art methods, the Laplace method and the correlated method, in terms of Mean Absolute Value, meaning that our method retains more utility of the queries while preserving privacy.
Zuo, H, Zhang, G, Pedrycz, W, Behbood, V & Lu, J 2018, 'Granular Fuzzy Regression Domain Adaptation in Takagi–Sugeno Fuzzy Models', IEEE Transactions on Fuzzy Systems, vol. 26, no. 2, pp. 847-858.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. In classical data-driven machine learning methods, massive amounts of labeled data are required to build a high-performance prediction model. However, the amount of labeled data in many real-world applications is insufficient, so establishing a prediction model is impossible. Transfer learning has recently emerged as a solution to this problem. It exploits the knowledge accumulated in auxiliary domains to help construct prediction models in a target domain with inadequate training data. Most existing transfer learning methods solve classification tasks; only a few are devoted to regression problems. In addition, the current methods ignore the inherent phenomenon of information granularity in transfer learning. In this study, granular computing techniques are applied to transfer learning. Three granular fuzzy regression domain adaptation methods to determine the estimated values for a regression target are proposed to address three challenging cases in domain adaptation. The proposed granular fuzzy regression domain adaptation methods change the input and/or output space of the source domain's model using space transformation, so that the fuzzy rules are more compatible with the target data. Experiments on synthetic and real-world datasets validate the effectiveness of the proposed methods.
Abad, ZSH, Noaeen, M, Zowghi, D, Far, BH & Barker, K 2018, 'Two Sides of the Same Coin', Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018, EASE'18: 22nd International Conference on Evaluation and Assessment in Software Engineering 2018, ACM, Christchurch, New Zealand, pp. 175-180.
View/Download from: Publisher's site
View description>>
In the constantly evolving world of software development, switching back and forth between tasks has become the norm. While task switching often allows developers to perform tasks effectively and may increase creativity via the flexible pathway, frequent task switching also has consequences. For high-momentum tasks like software development, "flow", the highly productive state of concentration, is paramount. Each switch disrupts the developer's flow, requiring a change of mental state and an additional immersion period to get back into the flow. However, the time lost to fragmentation caused by task switching is largely invisible and goes unnoticed by developers and managers. We conducted a survey with 141 software developers to investigate their perceptions of the differences between task switching and task interruption, and to explore whether they perceive task switching to be as disruptive as interruptions. We found that practitioners perceive considerable similarities between the disruptiveness of task switching (either planned or unplanned) and random interruptions. High cognitive cost and low performance are the main consequences of task switching articulated by our respondents. Our findings broaden the understanding of flow change among software practitioners in terms of the characteristics and categories of disruptive switches, as well as the consequences of interruptions caused by daily meetings.
Abdulkareem, SA, Augustijn, EW, Musial, K, Mustafa, YT & Filatova, T 2018, 'The impact of social versus individual learning for agents' risk perception during epidemics', Proceedings - IEEE 14th International Conference on eScience, e-Science 2018, 2018 IEEE 14th International Conference on e-Science (e-Science), IEEE, Amsterdam, Netherlands, pp. 297-298.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Epidemics have always been a source of concern to people, both at the individual and government level. To fight outbreaks effectively, we need advanced tools that enable us to understand the factors that influence the spread of life-threatening diseases.
Abolbashari, MH, Hussain, OK, Saberi, M & Chang, E 2018, 'Fine Tuning a Bayesian Network and Fairly Allocating Resources to Improve Procurement Performance', Springer International Publishing, pp. 3-15.
View/Download from: Publisher's site
Abu-Khzam, FN, Egan, J, Gaspers, S, Shaw, A & Shaw, P 2018, 'Cluster Editing with Vertex Splitting', Combinatorial Optimization (LNCS), International Symposium on Combinatorial Optimization, Springer International Publishing, Marrakesh, Morocco, pp. 1-13.
View/Download from: Publisher's site
View description>>
In the Cluster Editing problem, a given graph is to be transformed into a disjoint union of cliques via a minimum number of edge editing operations. In this paper we introduce a new variant of Cluster Editing whereby a vertex can be divided, or split, into two or more vertices thus allowing a single vertex to belong to multiple clusters. This new problem, Cluster Editing with Vertex Splitting, has applications in finding correlation clusters in discrete data, including graphs obtained from Biological Network analysis. We initiate the study of this new problem and show that it is fixed-parameter tractable when parameterized by the total number of vertex splitting and edge editing operations. In particular we obtain a 4k(k+1) vertex kernel for the problem.
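To make the target structure of any Cluster Editing variant concrete: a graph is a disjoint union of cliques exactly when it contains no induced three-vertex path (a P3). The check below is our own illustration of that characterisation, not the kernelization from the paper:

```python
from itertools import combinations

def is_cluster_graph(adj):
    """adj: dict mapping each vertex to its set of neighbours (undirected).

    A graph is a disjoint union of cliques iff it has no induced P3,
    i.e. no vertex with two neighbours that are not themselves adjacent.
    """
    for v, nbrs in adj.items():
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:
                return False  # a - v - b forms an induced P3
    return True

# A triangle plus a disjoint edge: a cluster graph.
cliques = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
# A path 0 - 1 - 2: not a cluster graph (it is itself a P3).
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

Cluster Editing asks for the fewest edge edits that make this test pass; vertex splitting enlarges the space of allowed operations.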
Adak, C, Chaudhuri, BB & Blumenstein, M 2018, 'A Study on Idiosyncratic Handwriting with Impact on Writer Identification', 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Niagara Falls, NY, USA, pp. 193-198.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this paper, we study handwriting idiosyncrasy in terms of its structural eccentricity. In this study, our approach is to find idiosyncratic handwritten text components and model the idiosyncrasy analysis task as a machine learning problem supervised by human cognition. We employ the Inception network for this purpose. The experiments are performed on two publicly available databases and an in-house database of Bengali offline handwritten samples. On these samples, subjective opinion scores of handwriting idiosyncrasy are collected from handwriting experts. We have analyzed the handwriting idiosyncrasy on this corpus which comprises the perceptive ground-truth opinion. We also investigate the effect of idiosyncratic text on writer identification by using the SqueezeNet. The performance of our system is promising.
Adak, C, Chaudhuri, BB & Blumenstein, M 2018, 'Cognitive Analysis for Reading and Writing of Bengali Conjuncts', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this paper, we study the difficulties that arise for human beings in reading and writing Bengali conjunct characters. Such difficulties appear when the human cognitive system faces certain obstructions to effortless reading/writing. In our computer-based investigation, we treat the reading/writing difficulty analysis task as a machine learning problem supervised by human perception. To this end, we employ two distinct models: (a) an auto-derived feature-based Inception network and (b) a hand-crafted feature-based SVM (Support Vector Machine). Two commonly used Bengali printed fonts and three contemporary handwritten databases are used for collecting subjective opinion scores from human readers/writers. We conduct our experiments on this corpus, which contains the perceptive ground-truth opinion on reading/writing complications. The experimental results obtained on various types of conjunct characters are promising.
Adak, C, Marinai, S, Chaudhuri, BB & Blumenstein, M 2018, 'Offline Bengali Writer Verification by PDF-CNN and Siamese Net', 2018 13th IAPR International Workshop on Document Analysis Systems (DAS), 2018 13th IAPR International Workshop on Document Analysis Systems (DAS), IEEE, Vienna, Austria, pp. 381-386.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Automated handwriting analysis is a popular area of research owing to the variation of writing patterns. Within this area, writer verification is one of the most challenging branches, with direct impact on biometrics and forensics. In this paper, we deal with offline writer verification on complex handwriting patterns. We therefore choose a relatively complex script: the Indic Abugida script Bengali (or Bangla), which contains more than 250 compound characters. From a handwritten sample, the probability distribution functions (PDFs) of some handcrafted features are obtained and input to a convolutional neural network (CNN). For such a CNN architecture, we coin the term 'PDF-CNN', where handcrafted feature PDFs are hybridized with auto-derived CNN features. The hybrid features are then fed into a Siamese neural network for writer verification. The experiments are performed on a Bengali offline handwritten dataset of 100 writers. Our system achieves encouraging results, which sometimes exceed those of state-of-the-art writer verification techniques.
Ahadi, A, Lister, R & Mathieson, L 2018, 'Syntax error based quantification of the learning progress of the novice programmer', Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '18: 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ACM, Larnaca, Cyprus, pp. 10-14.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. Recent data-driven research has produced metrics for quantifying a novice programmer’s error profile, such as Jadud’s error quotient. However, these metrics tend to be context dependent and contain free parameters. This paper reviews the caveats of such metrics and proposes a more general approach to developing a metric. The online implementation of the proposed metric is publicly available at http://online-analysis-demo.herokuapp.com/.
Ahadi, A, Lister, R, Lal, S & Hellas, A 2018, 'Learning programming, syntax errors and institution-specific factors', Proceedings of the 20th Australasian Computing Education Conference, ACE 2018: 20th Australasian Computing Education Conference, ACM, Brisbane, Queensland, Australia, pp. 90-96.
View/Download from: Publisher's site
View description>>
Learning programming is a road that is paved with mistakes. Initially, novices are bound to write code with syntactic mistakes, but after a while semantic mistakes take a larger role in the novice programmers’ lives. Researchers who wish to understand that road are increasingly using data recorded from students’ programming processes. Such data can be used to draw inferences on the typical errors, and on how students approach fixing them. At the same time, if the lens that is used to analyze such data is used only from one angle, the view is likely to be narrow. In this work, we replicate a previous multi-institutional study by Brown et al. [5]. That study used a large scale programming process data repository to analyze mistakes that novices make while learning programming. In our single institution replication of that study, we use data collected from approximately 800 students. We investigate the frequency, time required to fix, and the development of mistakes through the semester. We contrast our findings from our single institution with the multi-institutional study, and show that whilst the data collection tools and the research methodology are the same, the results can differ solely due to how the course is conducted.
Ajayan, AR, Al-Doghman, F & Chaczko, Z 2018, 'Visualizing Multimodal Big Data Anomaly Patterns in Higher-Order Feature Spaces', 2018 26th International Conference on Systems Engineering (ICSEng), 2018 26th International Conference on Systems Engineering (ICSEng), IEEE, Sydney, NSW, Australia, pp. 1-9.
View/Download from: Publisher's site
View description>>
The world today, as we know it, is profuse with information about humans and objects. Datasets generated by cyber-physical systems are orders of magnitude larger than their current information processing capabilities. Tapping into these big data flows to uncover deeper insights into their functioning, operational logic and attainable levels of smartness has been investigated for quite a while. Knowledge discovery and representation capabilities across multiple modalities hold much scope in this direction, given their information-holding potential. This paper investigates the applicability of an arithmetic tool, Tensor Decompositions and Factorizations, in this scenario. Higher-order datasets are decomposed for anomaly pattern capture, which encases intelligence along multiple modes of data flow. Preliminary investigations based on data derived from the Smart Grid Smart City Project are compliant with our hypothesis. The results prove that abnormal patterns detected in decomposed tensor factors capture deep information content from big data as efficiently as other pattern extraction and knowledge discovery frameworks, while saving time and resources.
Al-Doghman, F, Chaczko, Z & Brookes, W 2018, 'Adaptive Consensus-based Aggregation for Edge Computing', 2018 26th International Conference on Systems Engineering (ICSEng), 2018 26th International Conference on Systems Engineering (ICSEng), IEEE, Sydney, Australia.
View/Download from: Publisher's site
View description>>
The swift expansion in the employment of IoT and the tendency to apply its applications have encompassed a wide range of fields in our lives. The heterogeneity and the massive amount of data produced by IoT require adaptive collection and transmission processes that operate close to the front-end to mitigate these issues. In this paper, we introduce a method of aggregating IoT data in a consensus way using Bayesian analysis and Markov Chain techniques. The aim is to enhance the quality of data travelling within the IoT framework.
Alfaro-Garcia, VG, Merigo, JM, Plata-Perez, L & Calderon, GGA 2018, 'On Ordered Weighted Logarithmic Averaging Operators and Distance Measures', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 1472-1477.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this paper we give an in-depth description of the main properties and families of the ordered weighted logarithmic averaging distance (OWLAD) operator, the generalized ordered weighted averaging distance (GWLAD) operator, and the generalized ordered weighted logarithmic averaging distance (GOWLAD) operator. These operators have as their foundation the well-known Hamming distance measure and the generalized ordered weighted logarithmic averaging (GOWLA) operator. Furthermore, we analyze multiple classical measures for characterizing the operators' weighting vectors, and we present alternative formulations of the operators based on the ordering of the arguments.
Alkalbani, AM & Hussain, FK 2018, 'Quality CloudCrowd: A Crowdsourcing Platform for QoS Assessment of SaaS Services', Springer International Publishing, pp. 235-240.
View/Download from: Publisher's site
View description>>
The adoption of Software as a Service (SaaS) has grown rapidly since 2010, and the need for Quality of Service (QoS) information is a significant factor in selecting a trustworthy SaaS service. In the existing literature, little attention has been given to providing QoS information with the SaaS service offering. SaaS providers offer a description of the overall QoS and service performance when they make their service offer; however, service user satisfaction is a crucial factor in service selection decision-making. Crowdsourcing has grown in popularity over the last few years for performing tasks such as product design and gathering consumer feedback, and has in particular attracted researchers in the field of client feedback on services and products. In this paper, we propose a novel framework, called "Quality CloudCrowd", for providing missing QoS values to the cloud marketplace. Our proposed framework comprises several parts; the development of the QCC platform for collecting missing QoS values is the core element of this structure and the focus of this paper.
Al-Mansoori, A, Yu, S, Xiang, Y & Sood, K 2018, 'A survey on big data stream processing in SDN supported cloud environment', Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2018: Australasian Computer Science Week 2018, ACM, Brisbane, Queensland, Australia.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. Big data denotes data with features such as large volume, variety, and streaming arrival. Processing big data has become essential for enterprises to garner general intelligence and avoid biased conclusions, but these same features make big data processing a challenging task, and one that must rely on a robust network. Cloud computing offers a suitable environment for these processes; however, moving big data to the cloud is more challenging still, as managing cloud resources is the main issue. Software Defined Networking (SDN) offers a potential solution to this issue. In this paper, we first survey the present state of the art of SDN, cloud computing, and Big Data Stream Processing (BDSP). We then discuss SDN in the context of big data stream processing in the cloud environment. Finally, critical issues and research opportunities are discussed.
Almasoud, AS, Eljazzar, MM & Hussain, F 2018, 'Toward a Self-Learned Smart Contracts', 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), IEEE, Xi'an, China, pp. 269-273.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In recent years, Blockchain technology has been highly valued and disruptive. Several studies have presented a merge between blockchain and current applications, e.g. medical, supply chain, and e-commerce applications. Although blockchain architecture does not yet have a standard, IBM, Microsoft and AWS offer BaaS (Blockchain as a Service) in addition to the current public chains such as Ethereum, NEO, and Cardano, and there are some differences between these public ledgers in terms of development and architecture. This paper introduces the main factors that affect the integration of Artificial Intelligence with Blockchain, as well as how the two could be integrated for forecasting and automation, building a self-regulated chain.
Alshehri, MD & Hussain, FK 2018, 'A Centralized Trust Management Mechanism for the Internet of Things (CTM-IoT)', Advances on Broad-Band Wireless Computing, Communication and Applications, International Conference on Broad-Band Wireless Computing, Communication and Applications, Springer International Publishing, Barcelona, Spain, pp. 533-543.
View/Download from: Publisher's site
View description>>
The Internet of Things (IoT) is an extended network that allows all devices to be connected to one another over the Internet. This new network faces numerous challenges, but mainly security issues. One such issue is how the IoT’s nodes can trust each other when they are connected over the Internet. There is a lack of studies that address the issue of trust management in IoT, or that provide a fully trustworthy framework. This paper proposes and delivers a centralized trust management mechanism for IoT by adding trust modules as a feature of the central trust manager, the Super Node (SN). To deliver a comprehensive approach, the SN includes other modules which are integrated with the whole IoT Trust Management framework to provide trustworthy communication between all nodes.
Altszyler, E, Berenstein, AJ, Milne, D, Calvo, RA & Fernandez Slezak, D 2018, 'Using contextual information for automatic triage of posts in a peer-support forum', Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, Association for Computational Linguistics.
View/Download from: Publisher's site
Amazeeq, MSAB, Kalantar, B, Al-Najjar, HAH, Idrees, MO, Pradhan, B & Mansor, S 2018, 'A geospatial solution using a TOPSIS approach for prioritizing urban projects in Libya', Proceedings - 39th Asian Conference on Remote Sensing: Remote Sensing Enabling Prosperity, ACRS 2018, Asian Conference on Remote Sensing, ACRS, Malaysia, pp. 87-96.
View description>>
The world population is growing rapidly; consequently, urbanization has been on an increasing trend in many developing cities around the globe. This rapid growth in population and urbanization has also led to infrastructural development such as transportation systems, sewers, power utilities and many others. One major problem with rapid urbanization in developing/third-world countries is that development in mega cities is hindered by ineffective planning before construction projects are initiated, and is mostly random. Libya faces similar problems associated with rapid urbanization. To resolve this, an automated process built on effective decision-making tools is needed for development in Libyan cities. This study develops a geospatial solution based on GIS and TOPSIS for automating the process of selecting a city or a group of cities for development in Libya. To achieve this goal, fifteen GIS factors were prepared from various data sources including Landsat, MODIS, and ASTER. These factors are categorized into six groups: topography, land use and infrastructure, vegetation, demography, climate, and air quality. The suitability map produced by the proposed methodology showed that the northern part of the study area, especially the areas surrounding Benghazi city and the northern parts of Al Marj and Al Jabal al Akhdar cities, is most suitable. A Support Vector Machine (SVM) model accurately classified 1178 samples, equal to 78.5% of the total samples, with a Kappa statistic of 0.67 and an average success rate of 0.861. Validation revealed an average prediction rate of 0.719. Based on the closeness coefficient statistics, the cities of Benghazi, Al Jabal al Akhdar, Al Marj, Darnah, Al Hizam Al Akhdar, and Al Qubbah are ranked in that order of suitability. The outputs of this study provide a solution to subjective decision making in prioritizing cities for development.
Anaissi, A, Braytee, A & Naji, M 2018, 'Gaussian Kernel Parameter Optimization in One-Class Support Vector Machines', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The one-class support vector machine (OCSVM) with a Gaussian kernel function is a promising machine learning method that has been employed extensively in the area of anomaly detection. However, the generalization performance of OCSVM is profoundly influenced by its Gaussian model parameter σ. This paper proposes a new algorithm named Edged Support Vector (ESV) for tuning the Gaussian model parameter. The idea behind this algorithm is to inspect the spatial locations of the selected support vector samples: the algorithm selects the optimal value of σ that leads to a decision boundary whose support vectors all reside on the surface of the training data (i.e. edged support vectors). A support vector is identified as an edge sample by constructing a hyperplane with its k-nearest neighbour samples using a hard-margin linear support vector machine. The algorithm was successfully validated using two real-world sensing datasets, one collected from a lab specimen that replicated a jack arch from the Sydney Harbour Bridge, and another collected from sensors mounted on vehicles for road condition assessment. Results show that the ESV algorithm is an appropriate choice for identifying the optimal value of σ for OCSVM.
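A rough flavour of the edge-sample test at the heart of ESV can be given without any SVM machinery. The sketch below marks a point as an "edge" sample when all of its k nearest neighbours fall strictly on one side of a hyperplane through it; for simplicity it uses the mean offset direction rather than the hard-margin linear SVM the paper describes, so this is an illustrative approximation with our own names, not the authors' algorithm:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (Euclidean)."""
    dists = sorted(
        (math.dist(points[i], p), j)
        for j, p in enumerate(points) if j != i
    )
    return [j for _, j in dists[:k]]

def is_edge_sample(points, i, k=5):
    """Heuristic boundary test: points[i] counts as an edge sample if
    every neighbour has a strictly positive projection onto the mean
    offset direction from the neighbours to points[i]."""
    nbr = [points[j] for j in knn(points, i, k)]
    d = len(points[i])
    # Direction from the neighbours' centroid towards the candidate point.
    direction = [points[i][c] - sum(p[c] for p in nbr) / len(nbr) for c in range(d)]
    # All neighbours must sit on the inner side of the candidate.
    return all(
        sum(direction[c] * (points[i][c] - p[c]) for c in range(d)) > 0
        for p in nbr
    )

# On a small 3x3 grid, a corner is an edge sample; the centre is not.
grid = [(x, y) for x in range(3) for y in range(3)]
corner_is_edge = is_edge_sample(grid, grid.index((0, 0)), k=3)
centre_is_edge = is_edge_sample(grid, grid.index((1, 1)), k=4)
```

ESV then sweeps σ and keeps the value for which every OCSVM support vector passes such an edge test.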
Angelini, L, Mugellini, E, Abou Khaled, O, Couture, N, van den Hoven, E & Bakker, S 2018, 'Internet of Tangibles', Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '18: Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Stockholm, Sweden, pp. 740-743.
View/Download from: Publisher's site
Anh, N, Prasad, M, Srikanth, N & Sundaram, S 2018, 'Wave Forecasting using Meta-cognitive Interval Type-2 Fuzzy Inference System', Procedia Computer Science, International Neural Network Society Conference on Big Data and Deep Learning, Elsevier BV, Bali, Indonesia, pp. 33-41.
View/Download from: Publisher's site
View description>>
© 2018 The Authors. Published by Elsevier Ltd. Renewable energy is fast becoming a mainstay of today's energy scenario. Wave energy is one of the important sources of renewable energy, in addition to wind, solar, tidal, etc. Wave prediction/forecasting is consequently essential in coastal and ocean engineering studies. However, it is difficult to predict wave parameters in the long term, and even in the short term, due to their intermittent nature. This study proposes a solution to this issue using an Interval Type-2 Fuzzy Inference System (IT2FIS), which has been shown to be capable of handling the uncertainty associated with data. The proposed IT2FIS is a fuzzy neural network realizing the Takagi-Sugeno-Kang inference mechanism and employing a meta-cognitive learning algorithm, which monitors the knowledge in a sample to decide on an appropriate learning strategy. The performance of the system is evaluated by studying significant wave heights obtained from buoys located in Singapore. The results, compared with existing state-of-the-art fuzzy inference system approaches, clearly indicate the advantage of IT2FIS-based wave prediction.
Anh, N, Prasad, M, Srikanth, N & Sundaram, S 2018, 'Wind Speed Intervals Prediction using Meta-cognitive Approach', Procedia Computer Science, International Neural Network Society Conference on Big Data and Deep Learning, Elsevier BV, Bali, Indonesia, pp. 23-32.
View/Download from: Publisher's site
View description>>
© 2018 The Authors. Published by Elsevier Ltd. In this paper, an interval type-2 neural fuzzy inference system and its meta-cognitive learning algorithm for wind speed prediction are proposed. The interval type-2 neuro-fuzzy system is capable of handling the uncertainty associated with data, and meta-cognition employs a self-regulation mechanism for learning. The proposed system realizes the Takagi-Sugeno-Kang inference mechanism and adopts a fast data-driven interval-reduction method. Meta-cognitive learning enables the network structure to evolve automatically based on the knowledge in the data, while the parameters are updated with an extended Kalman filter algorithm. In addition, the proposed network is able to construct prediction intervals to quantify the uncertainty associated with forecasts. For performance evaluation, a real-world wind speed prediction problem is utilized: using historical data, the model provides short-term prediction intervals of wind speed. The performance of the proposed algorithm is compared with existing state-of-the-art fuzzy inference system approaches, and the results clearly indicate its advantages in forecasting problems.
Anwar, MJ, Gill, AQ & Beydoun, G 2018, 'A review of information privacy laws and standards for secure digital ecosystems', ACIS 2018 - 29th Australasian Conference on Information Systems, University of Technology, Sydney.
View/Download from: Publisher's site
View description>>
© 2018 authors. Information privacy is mainly concerned with the protection of personally identifiable information. Ensuring information privacy is an arduous task, in particular in the context of complex, adaptive and multi-party heterogeneous digital ecosystems. There is a need to identify and understand the relevant privacy laws and standards for designing secure digital ecosystems. This paper presents the results of our information privacy research in digital ecosystems through the lens of local and international privacy regulations and standards. A qualitative research method was applied to review a set of identified privacy laws across the four layers of the digital ecosystem. Evaluation criteria were applied to assess the applicability and coverage of the selected seven information privacy laws to the people, process, information and technology layers of digital ecosystems. The research results indicate that information privacy is a critical phenomenon; however, it is not adequately addressed in the context of end-to-end digital ecosystems. It is recommended that a multi-layered privacy-by-design approach be taken, reviewing and mapping information privacy laws and standards in order to design secure digital ecosystems.
Awais, M, Prior, J, Ferguson, S & Leaney, J 2018, 'Enterprise IT governance and its impact on agile software development project success', Proceedings of the 27th International Conference on Information Systems Development: Designing Digitalization, ISD 2018, International Conference on Information Systems Development, AIS, Lund, Sweden, pp. 1-3.
View description>>
Enterprise IT (EIT) governance has become the primary approach to leveraging the IT function to achieve business objectives. We found in previously published work that decision making is the core of EIT governance. We collected quantitative data from professionals on decision making in Agile Software Development (ASD) projects, which we analyzed using Spearman's Ranked Correlation Coefficient. Decision-making clarity in implementation and decision-making distribution across organizational layers positively impact ASD project success. However, our finding that tailoring the decision-making process does not impact ASD project success was most surprising. We conclude that the impact of decision-making factors on an ASD project's success needs to be explored more deeply.
Azadeh, A, Partovi, M, Saberi, M, Chang, E & Hussain, O 2018, 'A Bayesian Network for Improving Organizational Regulations Effectiveness: Concurrent Modeling of Organizational Resilience Engineering and Macro-Ergonomics Indicators', Springer International Publishing, pp. 285-295.
View/Download from: Publisher's site
Badarinath, D, Chaitra, S, Bharill, N, Tanveer, M, Prasad, M, Suma, HN, Appaji, AM & Vinekar, A 2018, 'Study of Clinical Staging and Classification of Retinal Images for Retinopathy of Prematurity (ROP) Screening', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Retinopathy of Prematurity (ROP) is a disease which requires immediate precautionary measures to prevent blindness in infants, and this condition is prevalent in premature babies in underdeveloped, developing and developed countries alike. This paper proposes a tool by which the stage and zones of Retinopathy of Prematurity in infants can be diagnosed easily. The tool takes input from the RetCam and detects the stage and zone, and gives a rating of 1 to 9 classifying the severity of the disease in the infant. This is achieved by extracting the optic disc, marking the ridge, and measuring the distance from the optic nerve. The tool can easily be used by nurses and paramedics, unlike existing technologies which require the guidance of a specialist to come to a conclusion.
Bai, L, Yao, L, Kanhere, SS, Wang, X & Yang, Z 2018, 'Automatic Device Classification from Network Traffic Streams of Internet of Things', 2018 IEEE 43rd Conference on Local Computer Networks (LCN), 2018 IEEE 43rd Conference on Local Computer Networks (LCN), IEEE, USA, pp. 597-605.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. With the widespread adoption of Internet of Things (IoT), billions of everyday objects are being connected to the Internet. Effective management of these devices to support reliable, secure and high quality applications becomes challenging due to the scale. As one of the key cornerstones of IoT device management, automatic cross-device classification aims to identify the semantic type of a device by analyzing its network traffic. It has the potential to underpin a broad range of novel features such as enhanced security (by imposing the appropriate rules for constraining the communications of certain types of devices) or context-awareness (by the utilization and interoperability of IoT devices and their high-level semantics) of IoT applications. We propose an automatic IoT device classification method to identify new and unseen devices. The method uses the rich information carried by the traffic flows of IoT networks to characterize the attributes of various devices. We first specify a set of discriminating features from raw network traffic flows, and then propose a LSTM-CNN cascade model to automatically identify the semantic type of a device. Our experimental results using a real-world IoT dataset demonstrate that our proposed method is capable of delivering satisfactory performance. We also present interesting insights and discuss the potential extensions and applications.
Bano, M & Zowghi, D 2018, 'Crowd Vigilante', Requirements Engineering for Internet of Things, 4th Asia Pacific Requirements Engineering Symposium 2017, Springer Singapore, Melaka, Malaysia, pp. 114-120.
View/Download from: Publisher's site
View description>>
Crowdsourcing is a complex, sociotechnical problem-solving approach in which a geographically distributed volunteer crowd collaborates to contribute to the achievement of a common task. One of the major issues faced by crowdsourced projects is the trustworthiness of the crowd. This paper presents a vision to develop a framework, with supporting methods and tools, for early detection of malicious acts of sabotage in crowdsourced projects by utilizing and scaling digital forensic techniques. The idea is to utilize the crowd to build the digital evidence of sabotage through systematic collection and analysis of data from the same crowdsourced project where the threat is situated. The proposed framework aims to improve the security of crowdsourced projects and their outcomes by building confidence about the trustworthiness of the workers.
Bano, M, Zowghi, D & Rimini, FD 2018, 'Power and Politics of User Involvement in Software Development', EASE, International Conference on Evaluation and Assessment in Software Engineering, ACM, Christchurch, New Zealand, pp. 157-162.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. [CONTEXT] Involving users in software development is a complex and multi-faceted concept. Empirical research that studies the power and politics of user involvement in software development is scarce. [OBJECTIVE] In this paper, we present the results from a case study of a software development project, where organizational politics was explored in the context of user involvement in software development. [METHOD] We collected data through 30 interviews with 20 participants, attending workshops, observing project meetings, and analysing project documents. The qualitative data was rigorously and iteratively analyzed. [RESULTS] The results indicate that politics was a significant factor used to exert power and influence in decision-making processes. Communication channels were exploited for political purposes. These contributed to the users' dissatisfaction with their involvement, thus impacting on the project outcome. [CONCLUSION] With multiple teams of stakeholders holding different levels of power in decision-making, politics is inevitable and inescapable. Without careful attention, the political aspect of user involvement in software development can contribute to an unsuccessful project.
Bano, M, Zowghi, D, Ferrari, A, Spoletini, P & Donati, B 2018, 'Learning from Mistakes: An Empirical Study of Elicitation Interviews Performed by Novices', RE, International Requirements Engineering Conference, IEEE Computer Society, Banff, Alberta, Canada, pp. 182-193.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. [Context] Interviews are the most widely used elicitation technique in requirements engineering. However, conducting effective requirements elicitation interviews is challenging, due to the combination of technical and soft skills that requirements analysts often acquire after a long period of professional practice. Empirical evidence about training novices to conduct effective requirements elicitation interviews is scarce. [Objectives] We present a list of the most common mistakes that novices make in requirements elicitation interviews. The objective is to assist educators in teaching interviewing skills to student analysts. [Research Method] We conducted an empirical study involving role-playing and authentic assessment with 110 students, teamed up in 28 groups, to conduct interviews with a customer. One researcher made observation notes during the interview while two researchers reviewed the recordings. We qualitatively analyzed the data to identify the themes and classify the mistakes. [Results and conclusion] We identified 34 unique mistakes classified into 7 high-level themes. We also give examples of the mistakes made by the novices in each theme, to assist educators and trainers. Our research design is a novel combination of well-known pedagogical approaches described in sufficient detail to make it repeatable for future requirements engineering education and training research.
Biddle, R, Liu, S & Xu, G 2018, 'Semi-Supervised Soft K-means Clustering of Life Insurance Questionnaire Responses', 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), IEEE, National University of Kaohsiung, Kaohsiung, Taiwan, pp. 30-31.
View/Download from: Publisher's site
Biddle, R, Liu, S, Tilocca, P & Xu, G 2018, 'Automated Underwriting in Life Insurance: Predictions and Optimisation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Database Conference, Springer International Publishing, Gold Coast, QLD, Australia, pp. 135-146.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. Underwriting is an important stage in the life insurance process and is concerned with accepting individuals into an insurance fund and on what terms. It is a tedious and labour-intensive process for both the applicant and the underwriting team. An applicant must fill out a large survey containing thousands of questions about their life. The underwriting team must then process this application and assess the risks posed by the applicant and offer them insurance products as a result. Our work implements and evaluates classical data mining techniques to help automate some aspects of the process to ease the burden on the underwriting team as well as optimise the survey to improve the applicant experience. Logistic Regression, XGBoost and Recursive Feature Elimination are proposed as techniques for the prediction of underwriting outcomes. We conduct experiments on a dataset provided by a leading Australian life insurer and show that our early-stage results are promising and serve as a foundation for further work in this space.
Borah, P, Gupta, D & Prasad, M 2018, 'Improved 2-norm Based Fuzzy Least Squares Twin Support Vector Machine', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 412-419.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In order to reduce the high training cost of the support vector machine (SVM) and its sensitivity towards noise and outliers, two fuzzy based approaches are proposed in this paper. The proposed approaches are based on the least squares twin support vector machine (LSTWSVM) and the fuzzy support vector machine (FSVM). The effects of noise and outliers are reduced by assigning lower membership values to data points which are far from the class centers. Further, the 2-norm of the slack vectors of the LSTWSVM formulation is taken after multiplying by their respective diagonal matrices of membership values, to effectively utilize the fuzzy membership principle and to make the optimization problem strongly convex. Moreover, the proposed approaches solve linear equations instead of quadratic programming problems, which helps in faster training. The effectiveness of the proposed approaches is established by comparing classification accuracies and training times with the support vector machine, fuzzy support vector machine, twin support vector machine and least squares twin support vector machine.
Boroon, L, Abedin, B & Erfani, S 2018, 'Exploring the dark side of online social networks: A taxonomy of negative effects on users', 26th European Conference on Information Systems: Beyond Digitization - Facets of Socio-Technical Change, ECIS 2018, European Conference on Information Systems, Portsmouth, UK.
View description>>
The use of online social networks (OSNs) has grown substantially over the past few years and many studies have reported the benefits and positive effects of using these platforms. However, the negative effects of OSNs have received little attention. Given the lack of a comprehensive picture of the dark side of using OSNs, we conducted a systematic literature review of the top information systems journals to categorise negative effects and develop a taxonomy of the dark side of OSN use. Our review of 20 papers identified 43 negative effects of OSN use, which we grouped into six categories: cost of social exchange, annoying content, privacy concerns, security threats, cyberbullying and low performance, which together form a holistic view of the dark side of OSN use. This paper discusses implications of the findings, identifies gaps in the literature and provides a roadmap for future research.
Boroon, L, Abedin, B & Erfani, SS 2018, 'Impacts of dark side of online social networks (OSNs) on users: An agenda for future research', Proceedings of the 22nd Pacific Asia Conference on Information Systems - Opportunities and Challenges for the Digitized Society: Are We Ready?, PACIS 2018, Pacific Asia Conference on Information Systems, Association for Information Systems, Yokohama, Japan.
View description>>
The use of online social networks (OSNs) has grown substantially over the past few years and many studies have reported positive effects of using OSN platforms. However, the negative effects of OSNs have received little attention. Given the lack of studies in this area, we conducted a review of top information systems journals to explore the gaps in the literature. Our review identified a number of theoretical and practical gaps. We then recommended an agenda for future research, highlighting the importance of the dark side of OSNs and guiding researchers on how they can identify, mitigate and reduce the negative consequences of OSN use on different aspects of human lives.
Braytee, A, Anaissi, A & Kennedy, PJ 2018, 'Sparse Feature Learning Using Ensemble Model for Highly-Correlated High-Dimensional Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Siem Reap, Cambodia, pp. 423-434.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. High-dimensional, highly correlated data exist in several domains such as genomics. Many feature selection techniques consider correlated features redundant and therefore remove them. Several studies investigate the interpretation of correlated features in domains such as genomics, but investigating the classification capabilities of correlated feature groups is a point of interest in several domains. In this paper, a novel method is proposed that integrates ensemble feature ranking and co-expression networks to identify the optimal features for classification. The main advantage of the proposed method lies in the fact that it does not consider correlated features as redundant; rather, it shows the importance of the selected correlated features in improving classification performance. A series of experiments on five high-dimensional, highly correlated datasets with different imbalance ratios show that the proposed method outperformed the state-of-the-art methods.
Brownlow, J, Chu, C, Fu, B, Xu, G, Culbert, B & Meng, Q 2018, 'Cost-Sensitive Churn Prediction in Fund Management Services', Database Systems for Advanced Applications (LNCS), International Conference on Database Systems for Advanced Applications, Springer International Publishing, Gold Coast, QLD, Australia, pp. 776-788.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. Churn prediction is vital to companies as to identify potential churners and prevent losses in advance. Although it has been addressed as a classification task and a variety of models have been employed in practice, fund management services have presented several special challenges. One is that financial data is extremely imbalanced since only a tiny proportion of customers leave every year. Another is a unique cost-sensitive learning problem, i.e., costs of wrong predictions for churners should be related to their account balances, while costs of wrong predictions for non-churners should be the same. To address these issues, this paper proposes a new churn prediction model based on ensemble learning. In our model, multiple classifiers are built using sampled datasets to tackle the imbalanced data issue while exploiting data fully. Moreover, a novel sampling strategy is proposed to deal with the unique cost-sensitive issue. This model has been deployed in one of the leading fund management institutions in Australia, and its effectiveness has been fully validated in real applications.
Brownlow, J, Chu, C, Xu, G, Culbert, B, Fu, B & Meng, AQ 2018, 'A Multiple Source based Transfer Learning Framework for Marketing Campaigns', Proceedings of the International Joint Conference on Neural Networks, 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The rapidly growing number of marketing campaigns demands an efficient learning model to identify prospective customers to target. Transfer learning is widely considered a major way to improve learning performance by using the knowledge generated from previous learning tasks. Most recent studies focus on transferring knowledge from source domains to target domains, which may result in loss of knowledge. To avoid this, we propose a multiple-source based transfer learning framework that works in reverse: the data in target domains is transferred into source domains by normalizing them into the same distributions, and the learning task in target domains is then improved by the knowledge generated in source domains. The proposed method is general and can deal with supervised and unsupervised, inductive and transductive learning simultaneously, with the compatibility to work with different machine learning models. Experiments on real-world campaign data demonstrate the performance of the proposed method.
Buchan, J, Bano, M, Zowghi, D & Volabouth, P 2018, 'Semi-Automated Extraction of New Requirements from Online Reviews for Software Product Evolution', ASWEC, Australian Software Engineering Conference, IEEE Computer Society, Adelaide, Australia, pp. 31-40.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In order to improve and increase their utility, software products must evolve continually and incrementally to meet the new requirements of current and future users. Online reviews from users of the software provide a rich and readily available resource for discovering candidate new features for future software releases. However, it is challenging to manually analyze a large volume of potentially unstructured and noisy data to extract useful information to support software release planning decisions. This paper investigates machine learning techniques to automatically identify text that represents users' ideas for new features from their online reviews. A binary classification approach to categorize extracted text as either a feature or non-feature was evaluated experimentally. Three machine learning algorithms were evaluated in the experiments: Naïve Bayes (with multinomial and Bernoulli variants), Support Vector Machines (with linear and multinomial variants) and Logistic Regression. Variations on the configurations of k-fold cross validation, the use of n-grams and review sentiment were also experimentally evaluated. Based on binary classification of over a thousand separate reviews of two products, Trello and Jira, linear Support Vector Machines with review sentiment as an input, using n-gram (1,4) together with k-fold 10 cross validation gave the best performance. The results have confirmed the feasibility and accuracy of semi-automated extraction of candidate requirements from a large volume of unstructured and noisy online user reviews. The next steps planned are to experiment with machine supported grouping, prioritizing and visualizing the extracted features to best support release planners' work, as well as extending the sources of candidate requirements.
Butler, A, Xu, G & Musial, K 2018, 'Research Performance Reporting is Fallacious', Proceedings - 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing, BESC 2018, 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), IEEE, Taiwan, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Citation-based research performance reporting is contentious. The methods used to categorize research and researchers are misleading and somewhat arbitrary. This paper compares cohorts of social science categorized citation data and ultimately shows that assumptions of comparability are spurious. A subject area comparison using research field distributions and networks between a 'reference author', bibliographically coupled data, keyword-obtained data, social science data and highly cited social science author data shows very dissimilar field foci with one dataset very much being medically focused. This leads to the question whether subject area classifications should continue to be used as the basis for the plethora of rankings and lists that use such groupings. It is suggested that bibliographic coupling and dynamic topic classifiers would better inform citation data comparisons.
Cao, G, Downes, A, Khan, S, Wong, W & Xu, G 2018, 'Taxpayer Behavior Prediction in SMS Campaigns', 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), IEEE, National University of Kaohsiung, Kaohsiung, Taiwan, pp. 19-23.
View/Download from: Publisher's site
View description>>
This paper develops a prediction study of a group of small businesses which have a higher risk of non-compliance with taxation obligations. These businesses were selected for a pre-emptive SMS reminder campaign, and prediction models are used to predict the probability of on-time payment. Through experiments on a real-world taxation debt dataset, it is found that the XGBoost algorithm significantly outperforms the random forest, decision tree and logistic regression algorithms. The variables showing the largest explanatory power are related to debt amount. Second and subsequent SMS messages make a negligible contribution to the probability of payment. The XGBoost explainer is also used to delve further into the inner workings of the algorithm.
Cetindamar Kozanoglu, D & Kozanoglu, H 2018, 'The 4th Industrial Revolution and its Impact on Division of Labor in Developing Countries', Transformation, Coopetition, and Sustainability in the era of Globalization, Engagement and Disruptive Technology, The 27th World Business Congress of the International Management Development Association, International Management Development Association, Hong Kong, pp. 56-60.
View description>>
Technological developments and automation have always been both a hope and a threat. Technological change greatly affects employment opportunities and the division of labor. It typically offers novel methods of producing and consuming goods and services, promises rising living standards, and frees humans from dangerous, repetitive and boring work. On the other hand, it has disruptive consequences for existing work practices and might result in substantial job losses. The recent technological breakthroughs built around the generation, processing and dissemination of information fall under the umbrella term of the 4th Industrial Revolution. There are two opposing views on the impact of the 4th Industrial Revolution on labor. So far, the consequences of the digital revolution have been discussed mostly from a developed-country perspective. This paper focuses on developing countries and investigates how the coming wave of automation will affect labor in the developing world.
Cetindamar, D, Lammers, T & Sick, N 2018, 'Establishing Entrepreneurship Ecosystems Based on Digital Technologies: A Policy Roadmap Approach at the City Level', 2018 Portland International Conference on Management of Engineering and Technology (PICMET), 2018 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Honolulu, pp. 1-5.
View/Download from: Publisher's site
View description>>
The last decade has witnessed the rise of technology-based entrepreneurs who have managed to build companies based on the use of emerging digital technologies. However, the mere availability of digital technologies in a particular country does not guarantee the establishment of successful companies and economic growth. Companies are located in particular regional or urban environments with varying contextual factors. Cities have been a popular unit of analysis for technological development and economic activities due to their high dependency on immediate local environmental factors. Nevertheless, the literature offers a limited view of the relationship between technological developments and entrepreneurial activities at the city level, and of feasible frameworks to support a digitally competitive entrepreneurial ecosystem. By combining the previous literature on entrepreneurship and digital technologies within a particular urban context, this paper offers a conceptual approach that might help policy makers to plan the future competitiveness of their cities.
Chen, J, Lin, Z, Liu, X, Deng, Z & Wang, X 2018, 'Reputation-Based Framework for Internet of Things', Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Springer International Publishing, pp. 592-597.
View/Download from: Publisher's site
View description>>
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018. The Internet of Things (IoT) is going to create a world where physical objects are integrated into traditional networks in order to provide intelligent services for human beings. Trust plays an important role in the communications and interactions of objects in the IoT. Two vital tasks of trust management are trust model design and reputation evaluation. However, current literature cannot be simply and directly applied to the IoT due to smart nodes' hardware constraints and very limited computing and energy resources. Therefore a general and flexible model is needed to meet the special requirements of the IoT. In this paper, we first design LTrust, a layered trust model for the IoT. Then, a Reputation Evaluation Scheme for the Node (RES-N) is presented. The proposed trust model and reputation evaluation scheme provide a general framework for the study of trust management for the IoT. The efficiency of RES-N is validated by the simulation results.
Chen, K, Yao, L, Wang, X, Zhang, D, Gu, T, Yu, Z & Yang, Z 2018, 'Interpretable Parallel Recurrent Neural Networks with Convolutional Attentions for Multi-Modality Activity Modeling', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Multimodal features play a key role in wearable sensor based human activity recognition (HAR). Selecting the most salient features adaptively is a promising way to maximize the effectiveness of multimodal sensor data. In this regard, we propose a 'collect fully and select wisely' principle as well as an interpretable parallel recurrent model with convolutional attentions to improve the recognition performance. We first collect modality features and the relations between each pair of features to generate activity frames, and then introduce an attention mechanism to select the most prominent regions from activity frames precisely. The selected frames not only maximize the utilization of valid features but also reduce the number of features to be computed effectively. We further analyze the accuracy and interpretability of the proposed model based on extensive experiments. The results show that our model achieves competitive performance on two benchmarked datasets and works well in real life scenarios.
Cheng, EJ, Young, K-Y & Lin, C-T 2018, 'Image-based EEG signal processing for driving fatigue prediction', 2018 International Automatic Control Conference (CACS), IEEE, Taoyuan, Taiwan.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This study proposes an EEG-based prediction system that transforms the measured EEG record into image-like data for estimating the drowsiness level of drivers. Drowsy driving is one of the main contributing factors to traffic accidents. Since drivers themselves may not always immediately recognize that they are in a drowsy state, the risk of a traffic accident increases while the driver is in a low-vigilance state. To address this problem, the estimation of the drowsy driving state via brain-computer interfaces (BCI) has become a major concern in the driving safety field. This study transforms the measured EEG record into image-like feature maps, and then passes these feature maps to a Convolutional Neural Network (CNN) to learn discriminative representations. The proposed drowsiness prediction system is evaluated by leave-one-subject-out cross-validation. The results indicate that our approach provides impressive and robust prediction performance on the EEG dataset without an artifact removal process.
Chivukula, AS, Li, J & Liu, W 2018, 'Discovering Granger-Causal Features from Deep Learning Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), The 31st Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 692-705.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. In this research, we propose deep networks that discover Granger causes from multivariate temporal data generated in financial markets. We introduce a Deep Neural Network (DNN) and a Recurrent Neural Network (RNN) that discover Granger-causal features for bivariate regression on bivariate time series data distributions. These features are subsequently used to discover Granger-causal graphs for multivariate regression on multivariate time series data distributions. Our supervised feature learning process in proposed deep regression networks has favourable F-tests for feature selection and t-tests for model comparisons. The experiments, minimizing root mean squared errors in the regression analysis on real stock market data obtained from Yahoo Finance, demonstrate that our causal features significantly improve the existing deep learning regression models.
Cobo, MJ, Wang, W, Laengle, S, Merigó, JM, Yu, D & Herrera-Viedma, E 2018, 'Co-words Analysis of the Last Ten Years of the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems', Communications in Computer and Information Science, Springer International Publishing, Cádiz, Spain, pp. 667-677.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. The main aim of this contribution is to develop a co-words analysis of the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems over the last ten years (2008–2017). The software tool SciMAT is employed using an approach that allows us to uncover the main research themes and analyze them according to their performance measures (qualitative and quantitative). A total of 562 documents were retrieved from the Web of Science. The corpus was divided into two consecutive periods (2008–2012 and 2013–2017). Our key finding is that the most important research themes in both the first and second periods were devoted to the decision-making process and its related aspects, techniques and methods.
Cui, L, Qu, Y, Yu, S, Gao, L & Xie, G 2018, 'A Trust-Grained Personalized Privacy-Preserving Scheme for Big Social Data', 2018 IEEE International Conference on Communications (ICC), IEEE, Kansas City, MO, USA, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In the age of big data, the rapid development of social networking applications has made them an important data source, while the massive collection of personal data leads to significant privacy concerns. Differential privacy has emerged as an effective tool for accessing useful information while providing strong privacy guarantees. However, most of the currently proposed solutions assume that all individuals across the network require a uniform level of privacy protection, which rules out individuals' personalized requirements. Aiming to solve this problem, in this paper we propose a trust-grained personalized differential privacy mechanism, called TGDP, which incorporates the notion of trust. Specifically, whenever a user wants to obtain another user's personal information, the proposed mechanism returns a corresponding private response in which the privacy level selected for each individual depends on the trust value between them in the network. Compared with traditional methods, the scheme provides a fine-grained differential privacy protection method while guaranteeing the utility of social networks. Finally, the scheme is evaluated analytically and demonstrated experimentally on real-world data, which reflects its effectiveness and utility.
Cunill, OM, Gil-Lafuente, AM, Merigó, JM & González, LO 2018, 'Academic Contributions in Asian Tourism Research: A Bibliometric Analysis', Advances in Intelligent Systems and Computing, International Conference of the 'Forum for Interdisciplinary Mathematics', Springer International Publishing, Spain, pp. 326-342.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. Bibliometrics is a fundamental field of information science that helps to draw quantitative conclusions about bibliographic material. During the last decade, the use of bibliometric techniques and studies has increased significantly due to the improvement of information technology and its usefulness for organizing knowledge in a scientific discipline. This paper presents an overview of the most productive and influential Asian universities and countries in academic tourism research through the use of bibliometric indicators, according to information found in the database Web of Science (WoS). This database is considered one of the main tools for the analysis of scientific information. In order to analyze the information obtained, several rankings of universities and countries have been carried out, both global and individual, based on a series of bibliometric indicators, such as the number of publications, the number of citations and the h-index. Analyzing the results, we observe that within tourism research in Asia, the most influential countries are China, Taiwan and South Korea, and that the leading university is Hong Kong Polytechnic University.
Cunill, OM, Gil-Lafuente, AM, Merigó, JM & González, LO 2018, 'Asian Academic Research in Tourism with an International Impact: A Bibliometric Analysis of the Main Academic Contributions', Advances in Intelligent Systems and Computing, International Conference of the 'Forum for Interdisciplinary Mathematics', Springer International Publishing, Spain, pp. 307-325.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. Asian academic research in tourism is a very recent field of research, which has significantly developed over the last decade due to the strong expansion of the tourism industry worldwide, and also owing to the strong evolution of search engines via the Internet. This article analyses the main contributions to Asian academic research in tourism over recent years using bibliometric indicators. The results obtained are based on the information contained in the Web of Science database. These results focus on explaining three fundamental questions. Firstly, we study the publication structure of Asian articles in tourism over recent decades, as well as the citations these articles have received. Secondly, we present a ranking of the most important tourism journals in Asia through the use of a series of indicators such as the number of publications in said journals, the number of citations, and the h-index. Finally, we present a list of the 50 most cited Asian articles in tourism (and hence the ones that can be considered the most influential) of all time. The results show how, in Asian terms, the most influential journals in this field are Tourism Management (TM), the Annals of Tourism Research (ATR) and the International Journal of Hospitality Management (IJHM).
Das, A, Sengupta, A, Saqib, M, Pal, U & Blumenstein, M 2018, 'More Realistic and Efficient Face-Based Mobile Authentication using CNNs', 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this work, we propose a more realistic and efficient face-based mobile authentication technique using CNNs. This paper discusses and explores an inevitable problem of using face images for mobile authentication, taken from varying distances with the front/selfie camera of a mobile phone. Once an individual comes within a certain distance of the camera, the face images become large and appear over-sized. Simultaneously, sharp features of some portions of the face, such as the forehead, cheek, and chin, change completely. As a result, the face features change, and the impact increases exponentially once the individual crosses a certain distance and gradually approaches the front camera. This work proposes a solution (achieving better accuracy and facial features, whereby face images are cropped and aligned around their close bounding box) to mitigate the aforementioned gap. The work investigated different frontier face detection and recognition techniques to justify the proposed solution. Among all the methods evaluated, CNNs worked best. For a quantitative comparison of the proposed method, manually cropped face images/annotations of the face images along with their close boundary were prepared. In turn, we have developed a database for 40 individuals considering the above-mentioned scenario, which will be publicly available for academic research purposes. The experimental results indicate a successful implementation of the proposed method, and the performance of the proposed technique is also found to be superior to the existing state-of-the-art.
Dong, M, Yao, L, Wang, X, Benatallah, B, Sheng, QZ & Huang, H 2018, 'DUAL: A Deep Unified Attention Model with Latent Relation Representations for Fake News Detection', Web Information Systems Engineering – WISE 2018, Springer International Publishing, Dubai, UAE, pp. 199-209.
View/Download from: Publisher's site
View description>>
The prevalence of online social media has enabled news to spread wider and faster than traditional publication channels. The ease of creating and spreading news, however, has also facilitated the massive generation and dissemination of fake news. It therefore becomes especially important to detect fake news so as to minimize its adverse impact, such as misleading people. Despite active efforts to address this issue, most existing works focus on mining news content or context information from individual sources but neglect the use of clues from multiple resources. In this paper, we consider clues from both news content and side information and propose a hybrid attention model to leverage these clues. In particular, we use attention-based bi-directional Gated Recurrent Units (GRU) to extract features from news content and a deep model to extract hidden representations of the side information. We combine the two hidden vectors resulting from the above extractions into an attention matrix and learn an attention distribution over the vectors. Finally, the distribution is used to facilitate better fake news detection. Our experimental results on two real-world benchmark datasets show that our approach outperforms multiple baselines in the accuracy of detecting fake news.
Erfani, SS & Ramin, K 2018, 'Developing and testing a smartphone health application for older people to improve their mental health', Americas Conference on Information Systems 2018: Digital Disruption, AMCIS 2018, AISEL, New Orleans, USA, pp. 1-6.
View description>>
There is an increasing number of smartphone health applications available to smartphone users. Mental health applications are becoming an increasingly influential part of healthcare. While the adoption of smartphones has emerged as a vital tool for health-related behavioural interventions, making mental health support more accessible and reducing barriers to help-seeking, little is known about the potential benefits that smartphone health applications can provide in the mental health care of older people. There are hardly any contributions that focus on smartphone mental health applications for older people. This paper asks: what are the key features needed for a smartphone health application designed to improve the mental health of older people? To answer this question, a comprehensive literature review of studies conducted in the information systems and mental health disciplines has been undertaken and a theoretical model is proposed. This study contributes to the existing knowledge base through the development of a new theoretical model and the introduction of the features of a mobile health application that may have a positive impact on older people's mental health.
Fahmideh, M & Zowghi, D 2018, 'IoT Smart City Architectures: An Analytical Evaluation', 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), IEEE, Vancouver, Canada, pp. 709-715.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. While several IoT architectures have been proposed for enabling smart city visions, not much work has been done to assess and compare these architectures. By applying our proposed evaluation framework, which incorporates 33 criteria, this paper presents a comparative analysis of nine existing well-known IoT architectures. The results of the analysis highlight the strengths and weaknesses of these architectures and give insight to city leaders, architects, and developers aiming to select the most appropriate architecture, or combination of architectures, that may fit their own specific smart city development scenario.
Fan, B, Ouyang, Z, Niu, J, Yu, S & Rodrigues, J 2018, 'Smart Water Flosser: A Novel Smart Oral Cleaner with IMU Sensor', 2018 IEEE Global Communications Conference (GLOBECOM), IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Among the various tools invented to help improve people's oral health, water flossers can achieve better performance than traditional and electronic toothbrushes, and are less harmful than dental floss, especially for those with orthodontic teeth or tooth implant surgeries. However, the water flossers available on the market offer no monitoring or recording functions that can help consumers clean their teeth more efficiently. To capture users' motions, this study develops a novel smart water flosser by installing an Inertial Measurement Unit (IMU) sensor on the handle of the flosser. We determine the motion cycle using signal processing techniques and extract a set of statistical characteristics from the data set. We then train and compare different machine learning models as classifiers to recognize the motions of the handle. We find that the Random Forest model achieves the best detection accuracy, at 97% and 85% on the whole feature set and the optimized set, respectively. Finally, we implement an Android app that connects to the smart water flosser via a Bluetooth module to show the washing area in real time and record relevant information for further guidance.
Fazal, MAU, Ferguson, S & Johnston, A 2018, 'Investigating Concurrent Speech-based Designs for Information Communication', Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (AM'18), ACM, Wrexham, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. Speech-based information is usually communicated to users in a sequential manner, but users are capable of obtaining information from multiple voices concurrently. This fact implies that the sequential approach possibly under-utilizes human perception capabilities to some extent and restricts users from performing optimally in an immersive environment. This paper reports on an experiment that aimed to test different speech-based designs for concurrent information communication. Two audio streams from two types of content were played concurrently to 34 users, in either continuous or intermittent form, with the manipulation of a variety of spatial configurations (i.e. Diotic, Diotic-Monotic, and Dichotic). In total, 12 concurrent speech-based design configurations were tested with each user. The results showed that concurrent speech-based information designs involving intermittent form and a spatial difference between information streams produce comprehensibility equal to the level achieved in sequential information communication.
Feng, B, Li, G, Li, G, Zhou, H, Zhang, H & Yu, S 2018, 'Efficient Mappings of Service Function Chains at Terrestrial-Satellite Hybrid Cloud Networks', 2018 IEEE Global Communications Conference (GLOBECOM), IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The great improvements in both satellite and terrestrial networks have motivated the academic and industrial communities to rethink their integration. As a result, there is increasing interest in how to combine broadband satellite networks with clean-slate terrestrial ones, especially with clouds leveraging SDN (Software-Defined Networking) and NFV (Network Functions Virtualization) techniques, for better network openness, flexibility, elasticity and controllability. In this way, customized SFCs (Service Function Chains) can be deployed at terrestrial and satellite ground segment clouds on demand, significantly reducing OPEX and CAPEX (Operational and Capital Expense). Nevertheless, how to efficiently leverage cloud substrate resources and deploy the required SFCs is still challenging, as many issues such as system cost and revenue are involved. Therefore, in this paper, we focus on SFC mappings at SDN/NFV-based terrestrial and satellite ground clouds, and propose an approach that considers both SF (Service Function) multiplexing and SFC merging, aiming to improve the resource utilization efficiency of the underlying substrate networks. Extensive simulations are performed, and the numerical results verify the benefits of the proposed SFC mapping approach.
Fredericks, J & Lawrence, C 2018, '#thismymob: Preserving and promoting Indigenous Australian cultural heritage', CEUR Workshop Proceedings, Workshop on Mobile Access to Cultural Heritage, CEUR, Barcelona, Spain.
View description>>
Mobile technologies have become an integral part of daily life in contemporary society thanks to the pervasiveness of smartphones and tablet devices. Over the past 30 years these technologies have evolved beyond their original mandate by permeating diverse social segments across the world. Many cultural heritage projects have adopted mobile technologies to catalogue and document culture and history. However, limited research has examined the potential of using mobile technologies as a mechanism to preserve and promote Indigenous cultural heritage. This work-in-progress paper outlines three distinct areas for the design and development of mobile technologies for Indigenous cultural heritage. We outline these as: (1) Establishing the notion of 'digital land rights' which asserts the rights of Indigenous people to a safe online space that they control; (2) Co-designing with a diverse group of Indigenous communities to build meaningful mobile experiences; and, (3) Documenting traditions within their unique context to preserve and promote Indigenous cultural history.
Fu, L, Li, J, Zhou, L, Ma, Z, Liu, S, Lin, Z & Prasad, M 2018, 'Utilizing Information from Task-Independent Aspects via GAN-Assisted Knowledge Transfer', 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Observed data often have multiple labels with respect to different aspects. For example, a picture can have one label specifying its contents in terms of object category, such as aeroplane, building, cat, etc., and at the same time have another label describing the image style, such as photo-realistic or artistic. The central idea of this work is that any annotation of the data contains precious knowledge and should not be forgone: an analytic task focusing on one aspect of the data can benefit from knowledge transferred from the other aspects. We propose a passive knowledge transfer scheme for deep neural network training based on generative adversarial nets (GANs). The adversarial training scheme encourages the nets to encode data into representations that are both discriminative for the target aspect and invariant with respect to the irrelevant aspects. We show that the scheme mixes the conditional distributions of the encoded data on the irrelevant aspects, using the theory on the link between the GAN framework and the Wasserstein metric in distribution spaces. Moreover, we empirically verified the method by i) classifying images despite the influence of geometric transforms and ii) recognizing the movements (geometric transforms) regardless of the image contents.
Garcia, JA 2018, 'Assessing the validity of in-game stepping performance data from a Kinect-based fall prevention exergame', 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. One of the main limitations of commercial games is the inability to determine improvements in the mental and physical health of the players. Although high game scores might provide an indication of higher cognitive and physical abilities, these are not sufficient to reliably determine improvement in health outcomes. The work presented in this paper hence focuses on determining whether clinical measures collected during gameplay could potentially be used as a reliable indicator of improvement. For this study, the author uses the StepKinnection game for fall prevention, a Kinect-based game that builds on a hybrid version of the Choice Stepping Reaction Time (CSRT) task, a validated test that has been shown to prospectively predict older fallers. A group of 10 independent-living older adults was recruited and asked to play the game for a period of 12 weeks. Assessments were conducted at baseline and every four weeks. Stepping performance data collected through gameplay was compared to the validated CSRT test. Statistical analysis showed that the stepping performance data collected by the game correlated and agreed with the validated measures of the CSRT test, suggesting that it could be used as a reliable indicator of health improvements.
Garcia, JA, Raffe, WL & Navarro, KF 2018, 'Assessing user engagement with a fall prevention game as an unsupervised exercise program for older people', Proceedings of the Australasian Computer Science Week Multiconference (ACSW '18), ACM, Brisbane, Queensland, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2018 ACM. Falling is, unfortunately, a leading cause of injury and death in the global elderly population. However, it has previously been shown that increased physical and cognitive activity can decrease the occurrence of falls in the elderly. This paper investigates the potential for a long-term, unsupervised fall prevention training tool in the form of the StepKinnection game, which was designed to exercise both reflex times and movement speed while also providing entertainment. Specifically, this game was used in a three-month user study consisting of 10 participants over the age of 65. Adherence to the training program, enjoyment of the game, and ease of use of the game were investigated using a custom usability questionnaire, four established usability scales, heuristic evaluation of gameplay data, and semi-structured interviews. Results show that participants generally had positive attitudes towards the game, they felt that they would engage with this training program more than with their current exercises, and the game was easy to use without guidance or supervision beyond the initial set-up support and instructions provided at the start of the experiment period.
Ghantous, GB & Gill, AQ 2018, 'DevOps Reference Architecture for Multi-cloud IoT Applications', CBI (1), International Conference on Business Informatics, IEEE Computer Society, Vienna, Austria, pp. 158-167.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. There is a growing interest among organizations in adopting a DevOps approach for IoT (Internet of Things) applications. However, the challenge is: how to apply DevOps when a multi-cloud heterogeneous environment is required for an IoT application? This paper aims to address this important challenge and proposes a DevOps Reference Architecture (DRA) to deploy IoT applications on multi-cloud. The proposed architecture is evaluated by means of a case study, which involves deploying an IoT application on a chosen set of clouds. The results of this initial evaluation indicate that the proposed architecture would help practitioners and researchers to understand the usefulness and applicability of the DevOps approach on a multi-cloud platform for automating IoT application deployment.
Gheisari, S, Catchpoole, DR, Charlton, A & Kennedy, PJ 2018, 'Patched Completed Local Binary Pattern is an Effective Method for Neuroblastoma Histological Image Classification', Communications in Computer and Information Science, Australasian Conference on Data Mining, Springer Singapore, Bathurst, NSW, Australia, pp. 57-71.
View/Download from: Publisher's site
View description>>
© Springer Nature Singapore Pte Ltd. 2018. Neuroblastoma is the most common extracranial solid tumour in children. The histology of neuroblastoma has high intra-class variation, which misleads existing computer-aided histological image classification methods that use global features. To tackle this problem, we propose a new Patched Completed Local Binary Pattern (PCLBP) method combining Sign Binary Patterns (SBP) and Magnitude Binary Patterns (MBP) within local patches to build feature vectors, which are classified by k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM) classifiers. The advantage of our method is that it extracts local features, which are more robust to intra-class variation than global ones. We gathered a database of 1043 histologic images of neuroblastic tumours classified into five subtypes. Our experiments show the proposed method improves the weighted average F-measure by 1.89% and 0.81% with the k-NN and SVM classifiers, respectively.
Glynn, P, Shapiro, CD & Voinov, A 2018, 'Records of Engagement and Decision Tracking for Adaptive Management and Policy Development', 2018 IEEE International Symposium on Technology and Society (ISTAS), IEEE, George Washington Univ, Sch Engn & Appl Sci, Washington, DC, pp. 81-87.
View/Download from: Publisher's site
Gong, S, Wang, X & Oberst, S 2018, 'Non-linear Analysis of Vibrating Flip-flow Screens', MATEC Web of Conferences, International Conference on Design and Manufacturing Engineering, EDP Sciences, Monash University, Melbourne, Australia, pp. 04007-04007.
View/Download from: Publisher's site
View description>>
Vibrating flip-flow screens provide an effective solution for the screening of highly viscous or fine materials. Among other factors, the vibration characteristics of the main and floating screen frames are largely responsible for the flip-flow screen's sifting performance and its processing capacity. In this paper, the vibration characteristics of a vibrating flip-flow screen with linear and nonlinear springs are compared. Analytical results highlight that increasing the relative amplitude and avoiding undesirable resonances of the main and floating screen frames can improve the screen's performance. The materials on the screen panel have less of an effect on the vibration characteristics of a vibrating flip-flow screen with nonlinear springs than with linear springs. Other design parameters which influence the performance of vibrating flip-flow screens are also discussed.
Groulx, A & McGregor, C 2018, 'A Social Media Tax Data Warehouse to Manage the Underground Economy', 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), IEEE, Exeter, United Kingdom, pp. 1599-1606.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Social media can provide a wealth of information valuable to tax administrators in managing the underground economy. This paper proposes a data warehouse design to integrate social media data into tax analytics processes. The warehouse is designed to support modern tax administration strategies that encourage self-regulation and voluntary compliance by shaping public opinion, improving services and developing inclusive tax policies. The warehouse also incorporates the use of social media analytics to support tax evasion detection and enforcement activities such as compliance risk assessment, audits, inspections and investigations.
Gui, M, Zhang, Z, Yang, Z, Gu, Y & Xu, G 2018, 'An Effective Joint Framework for Document Summarization', Companion Proceedings of The Web Conference 2018 (WWW '18), ACM Press, pp. 121-122.
View/Download from: Publisher's site
View description>>
© 2018 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC BY 4.0 License. Document summarization is an important research issue and has attracted much attention from academia. Approaches to document summarization can be classified as extractive or abstractive. In this work, we introduce an effective joint framework that integrates extractive and abstractive summarization models, which is much closer to the way humans write summaries (first underlining important information). Preliminary experiments on a real benchmark dataset demonstrate that our model is competitive with the state-of-the-art methods.
Gupta, U, Gupta, D & Prasad, M 2018, 'Kernel Target Alignment based Fuzzy Least Square Twin Bounded Support Vector Machine', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 228-235.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. A kernel-target alignment based fuzzy least square twin bounded support vector machine (KTA-FLSTBSVM) is proposed to reduce the effects of outliers and noise. The proposed model is an effective and efficient fuzzy-based least square twin bounded support vector machine for binary classification, where the membership values are assigned based on a kernel-target alignment approach. The proposed KTA-FLSTBSVM solves two systems of linear equations, which is computationally very fast with comparable performance. To develop a robust model, this approach minimizes the structural risk, which is the gist of statistical learning theory. This powerful KTA-FLSTBSVM approach is tested on artificial datasets as well as benchmark real-world datasets and provides significantly better results in terms of generalization performance and computational time.
Hayati, H, Walker, P, Brown, T, Kennedy, P & Eager, D 2018, 'A Simple Spring-Loaded Inverted Pendulum (SLIP) Model of a Bio-Inspired Quadrupedal Robot Over Compliant Terrains', Volume 4B: Dynamics, Vibration, and Control, ASME 2018 International Mechanical Engineering Congress and Exposition, American Society of Mechanical Engineers, USA.
View/Download from: Publisher's site
View description>>
To study the impact of compliant terrains on the biomechanics of rapid legged movements, a well-known spring loaded inverted pendulum (SLIP) model is deployed. The model is a three-degrees-of-freedom (3 DOF) system, inspired by galloping greyhounds competing in a racing condition. A single support phase of hind-leg stance in a galloping gait is taken into consideration due to its primary function in powering the greyhound's locomotion and its higher rate of musculoskeletal injuries. To obtain and solve the nonlinear second-order differential equations of motion, the Lagrangian method and MATLAB R2017b (ode45 solver), which is based on the Runge-Kutta method, have been used, respectively. To characterize the viscoelastic behavior of compliant terrains, a Clegg hammer test was developed and performed five times on each sample. The effective spring and damping coefficients of each sample were then determined from the hysteresis curves. The results showed that galloping on the synthetic rubber requires more muscle force compared with wet sand. However, according to the Clegg hammer test, wet sand had a higher impact force than synthetic rubber, which can be a risk factor for bone fracture, particularly hock fracture, in greyhounds. The results reported in this paper are not only useful for identifying optimum terrain properties and injury thresholds of an athletic track, but can also be used to design control methods and shock impedances for legged robots performing on compliant terrains.
Herron, D, Moncur, W, Marija Curic, M, Grubisic, D, Vistica, O & van den Hoven, E 2018, 'Digital Possessions in the Museum of Broken Relationships', Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18: CHI Conference on Human Factors in Computing Systems, ACM, Montreal QC, Canada.
View/Download from: Publisher's site
Huang, C, Yao, L, Wang, X, Benatallah, B, Zhang, S & Dong, M 2018, 'Expert Recommendation via Tensor Factorization with Regularizing Hierarchical Topical Relationships', Service-Oriented Computing (LNCS), International Conference on Service-Oriented Computing, Springer International Publishing, Hangzhou, China, pp. 373-387.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Knowledge acquisition and exchange are generally crucial yet costly for both businesses and individuals, especially when the knowledge concerns various areas. Question Answering Communities offer an opportunity for sharing knowledge at a low cost, where community users, many of whom are domain experts, can potentially provide high-quality solutions to a given problem. In this paper, we propose a framework for finding experts across multiple collaborative networks. We employ the recent techniques of tree-guided learning (via tensor decomposition) and matrix factorization to explore user expertise from past voted posts. Tensor decomposition makes it possible to leverage the latent expertise of users, and the posts and related tags help identify the related areas. The final result is an expertise score for every user on every knowledge area. We experiment on Stack Exchange Networks, a set of question answering websites on different topics with a huge group of users and posts. Experiments show that our proposed approach produces stable, high-quality outputs.
Huang, L, Zhang, G, Yu, S, Fu, A & Yearwood, J 2018, 'Customized Data Sharing Scheme Based on Blockchain and Weighted Attribute', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In data sharing schemes, file owners should obtain rewards for sharing files with others, as they put effort into these files. Therefore, we propose an incentive data sharing scheme in this paper that encourages users to share data and also supports customization. Customization allows owners to decide the access threshold and the importance of each attributive classification, which determines users' priority level for file modification and, according to the priority level value, for obtaining file ownership when the original owner leaves. To support a convincing customized data sharing scheme, we draw on blockchain and construct a suitable access structure based on weighted attributes. The blockchain is used to ensure fairness in incentives. Based on weighted attributes, an attribute set is mapped to a numerical value, and the owner of the attribute set is able to obtain the file when the value is not less than the threshold, which differs from normal access control policies. We prove security in terms of integrity, privacy and the availability of the access key. The performance of the proposed scheme is evaluated at the end of this paper.
Ibrahim, IA, Li, X, Zhao, X, Maskari, SA, Albarrak, AM & Zhang, Y 2018, 'Automated Explanations of User-Expected Trends for Aggregate Queries', Springer International Publishing, pp. 602-614.
View/Download from: Publisher's site
Idrees, MO, Kalantar, B, Ueda, N, Alnajjar, HA, Motevalli, A & Pradhan, B 2018, 'Landslide susceptibility mapping at Dodangeh watershed, Iran using LR and ANN models in GIS', Earth Resources and Environmental Remote Sensing/GIS Applications IX, Earth Resources and Environmental Remote Sensing/GIS Applications, SPIE, Berlin, Germany.
View/Download from: Publisher's site
Ikram, MA & Hussain, FK 2018, 'Software as a Service (SaaS) Service Selection Based on Measuring the Shortest Distance to the Consumer’s Preferences', Springer International Publishing, pp. 403-415.
View/Download from: Publisher's site
View description>>
Software as a Service (SaaS) is a type of cloud service that runs and operates over the Platform as a Service (PaaS), which in turn works on the Infrastructure as a Service (IaaS). In the past few years, there has been an enormous growth in the number of SaaS services. It is estimated that the revenue of SaaS services will reach US$ 112.8 billion in 2019. This growth in the number of SaaS services makes the selection process difficult for a consumer who is looking to select the best service among the many services that have similar functionalities. In this article, we propose a Find SaaS framework to select a service based on measuring the shortest distance to the consumer’s preferences. In order to explain how the Find SaaS framework works, a case study based on selecting a computer repair shop’s SaaS application for the consumer has been presented.
Inibhunu, C & McGregor, C 2018, 'State Based Hidden Markov Models for Temporal Pattern Discovery in Critical Care', 2018 IEEE Life Sciences Conference (LSC), 2018 IEEE Life Sciences Conference (LSC), IEEE, Montreal, Canada, pp. 77-80.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. We are studying the challenge of finding a good set of features that represent well the temporal aspects in time series data. We argue that discovery of such features could be crucial to understanding hidden relationships in data. In particular, in critical care, where time-oriented data is generated every second on patients' physiological features, discovery of any hidden relationships could aid in discovery of unknown and potentially life-threatening conditions before they happen. Additionally, this discovery could help in better dissemination of healthcare services, leading to better outcomes and experiences for patients. To facilitate this process, this research explores two research questions: (a) can discovery of temporal relationships in data help in learning hidden aspects in differing patient cohorts, and (b) with respect to elderly patients receiving telehealth services, can detection of abnormal patterns help in identifying patients at risk of adverse events before they happen. In this paper, we introduce a model for temporal pattern mining by: (1) applying principles from finite state machines augmented with hidden Markov models and temporal abstraction for identifying temporal relations in data, (2) generating temporal patterns by augmenting similar relationships, (3) formulating a process for mining frequently occurring temporal patterns and (4) using the resulting mined patterns to build a temporal classification system. Such a classification system can be effective at characterizing normal and abnormal behaviors in patient data and flagging when a patient is at risk of a potential adverse event.
Inibhunu, C & McGregor, C 2018, 'Fusing Dimension Reduction and Classification for Mining Interesting Frequent Patterns in Patients Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Machine Learning and Data Mining in Pattern Recognition, Springer International Publishing, New York, NY, USA, pp. 1-15.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. Vast amounts of data are collected about elderly patients diagnosed with chronic conditions and receiving care in telehealth services. The potential to discover hidden patterns in the collected data can be crucial in making effective decisions on the dissemination of services and can lead to improved quality of care for patients. In this research, we investigate a knowledge discovery method that applies a fusion of dimension reduction and classification algorithms to discover interesting patterns in patient data. The research premise is that discovery of such patterns could help explain unique features about patients who are likely or unlikely to have an adverse event. This is a unique and innovative technique that utilizes the best of probability, rules, random trees and association algorithms for: (a) feature selection, (b) predictive modelling and (c) frequent pattern mining. The proposed method has been applied in a case study context to discover interesting patterns and features in patients participating in telehealth services. The results of the models developed show that identification of the best feature set can lead to accurate predictors of adverse events as well as effective generation of frequent patterns and discovery of interesting features in varying patient cohorts.
Ivanyos, G & Qiao, Y 2018, 'Algorithms based on *-algebras, and their applications to isomorphism of polynomials with one secret, group isomorphism, and polynomial identity testing', Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, Symposium on Discrete Algorithm, Society for Industrial and Applied Mathematics, New Orleans, LA, USA, pp. 2357-2376.
View/Download from: Publisher's site
View description>>
© Copyright 2018 by SIAM. We consider two basic algorithmic problems concerning tuples of (skew-)symmetric matrices. The first problem asks to decide, given two tuples of (skew-)symmetric matrices (B1, …, Bm) and (C1, …, Cm), whether there exists an invertible matrix A such that for every i ∈ {1, …, m}, A^T B_i A = C_i. We show that this problem can be solved in randomized polynomial time over finite fields of odd size, the reals, and the complex numbers. The second problem asks to decide, given a tuple of square matrices (B1, …, Bm), whether there exist invertible matrices A and D, such that for every i ∈ {1, …, m}, A B_i D is (skew-)symmetric. We show that this problem can be solved in deterministic polynomial time over fields of characteristic not 2. For both problems we exploit the structure of the underlying *-algebras (algebras with an involutive antiautomorphism), and utilize results and methods from the module isomorphism problem. Applications of our results range from multivariate cryptography and group isomorphism to polynomial identity testing. Specifically, these results imply efficient algorithms for the following problems. (1) Test isomorphism of quadratic forms with one secret over a finite field of odd size. This problem belongs to a family of problems that serves as the security basis of certain authentication schemes proposed by Patarin (Eurocrypt 1996). (2) Test isomorphism of p-groups of class 2 and exponent p (p odd) of order p^ℓ in time polynomial in the group order, when the commutator subgroup is of order p^O(√ℓ). (3) Deterministically reveal two families of singularity witnesses caused by the skew-symmetric structure. This represents a natural next step for the polynomial identity testing problem, in the direction set up by the recent resolution of the non-commutative rank problem (Garg-Gurvits-Oliveira-Wigderson, FOCS 2016; Ivanyos-Qiao-Subrahmanyam, ITCS 2017).
Jin, D, Liu, Z, He, D, Gabrys, B & Musial, K 2018, 'Robust Detection of Communities with Multi-semantics in Large Attributed Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Knowledge Science, Engineering and Management, Springer International Publishing, Changchun, China, pp. 362-376.
View/Download from: Publisher's site
View description>>
© 2018, Springer Nature Switzerland AG. In this paper, we are interested in how to explore and utilize the relationship between network communities and semantic topics in order to find strongly explanatory communities robustly. First, the relationship between communities and topics displays different situations. For example, from the viewpoint of semantic mapping, their relationship can be one-to-one, one-to-many or many-to-one. But from the standpoint of underlying community structures, the relationship can be consistent, partially consistent or completely inconsistent. Second, it will be helpful not only to find communities more precisely but also to reveal the communities’ semantics, showing the relationship between communities and topics. To better describe this relationship, we introduce the transition probability, an important concept in Markov chains, into a well-designed nonnegative matrix factorization framework. This new transition probability matrix with a suitable prior, which plays the role of depicting the relationship between communities and topics, can perform well in this task. To illustrate the effectiveness of the proposed new approach, we conduct experiments on both synthetic and real networks. The results show that our new method is superior to baselines in accuracy. We finally conduct a case study analysis to validate the new method’s strong interpretability of detected communities.
John, BM, Wickramasinghe, N & Kurian, JC 2018, 'Identifying Similar Questions in Healthcare Social Question Answering: A Design Science Research', LA.
Juang, C-F, Chang, Y-C & Chung, I-F 2018, 'Evolutionary hexapod robot gait control using a new recurrent neural network learned through group-based hybrid metaheuristic algorithm', Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '18: Genetic and Evolutionary Computation Conference, ACM.
View/Download from: Publisher's site
Ke, H, Fu, A, Yu, S & Chen, S 2018, 'AQ-DP: A New Differential Privacy Scheme Based on Quasi-Identifier Classifying in Big Data', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, United Arab Emirates, pp. 3398-3403.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The rapid development of big data has brought great convenience to humans' lives. The circulation and sharing of information are two main characteristics of the big data era. However, the risk of privacy leakage is also greatly increased when we enjoy the various services of big data. Therefore, how to protect data privacy in the complex context of big data has become a research hotspot in academic circles. Most current research on privacy protection falls into two fields: k-anonymity and differential privacy. Some existing research shows that traditional methods of privacy protection, such as k-anonymity and its extensions, cannot achieve absolute security. The emergence of differential privacy provides a new solution for privacy protection. We draw lessons from existing work and propose a new privacy method based on differential privacy: AQ-DP. We propose the first method for classifying quasi-identifiers based on sensitive attributes, which divides quasi-identifiers into associated quasi-identifiers (AQI) and non-associated quasi-identifiers (NAQI). The purpose is not to lose the correlation between quasi-identifiers and sensitive attributes. Our model AQ-DP carries out random shuffling of NAQIs, generalizes the AQIs, and adds random noise that satisfies the Laplacian distribution to the statistics. We have conducted extensive experiments, confirming that our model can achieve a satisfying privacy level and data utility.
Kocaballi, AB, Laranjo, L & Coiera, E 2018, 'Measuring User Experience in Conversational Interfaces: A Comparison of Six Questionnaires', Electronic Workshops in Computing, Proceedings of the 32nd International BCS Human Computer Interaction Conference, BCS Learning & Development.
View/Download from: Publisher's site
View description>>
© Dupré et al. Published by BCS Learning and Development Ltd. Proceedings of British HCI 2018, Belfast, UK. User experience (UX) has become an important aspect in the evaluation of interactive systems. In parallel, conversational interfaces have been increasingly used in many work and everyday settings. Although various methods have been developed to evaluate conversational interfaces, there has been a lack of methods specifically focusing on evaluating user experience. This study reviews the six main questionnaires for evaluating conversational systems in order to assess their potential suitability for measuring various UX dimensions. We found that (i) four questionnaires included assessment items, to varying extents, to measure hedonic, aesthetic and pragmatic dimensions of UX; (ii) two questionnaires assessed affect, and one assessed the frustration dimension; and (iii) the enchantment, playfulness and motivation dimensions have not been covered sufficiently by any questionnaire. We recommend using multiple questionnaires to obtain a more complete measurement of user experience or to improve the assessment of a particular UX dimension.
Kocaballi, AB & Núñez-Pacheco, C 2018, 'Rethinking Context-aware Computing to Support Reflective Engagement', Electronic Workshops in Computing, Proceedings of the 32nd International BCS Human Computer Interaction Conference, BCS Learning & Development.
View/Download from: Publisher's site
View description>>
© Dupré et al. Published by BCS Learning and Development Ltd. Proceedings of British HCI 2018, Belfast, UK. Context-aware technologies are becoming increasingly integrated into people's everyday lives, providing adaptive services in seamless ways that make many everyday tasks and activities more practical and automated. However, this seamless automation of services may prevent people from reflecting on the opportunities offered by their surroundings. The perspective proposed in this paper invites designers to rethink the role of context-aware technologies as mediators of humans' capacity to reflectively engage with their surroundings. Drawing upon the design qualities offered by seamful design and the New Brutalism movement, the paper offers two ways in which context-aware technologies can support reflective engagement: visibility of technology and visibility through technology.
Kong, Y, Zhang, M, Ye, D, Zhu, J & Choi, J 2018, 'An intelligent agent‐based method for task allocation in competitive cloud environments', Concurrency and Computation: Practice and Experience, Wiley, pp. e4178-e4178.
View/Download from: Publisher's site
View description>>
Summary: In market‐based cloud environments, both resource consumers and providers are self‐interested; additionally, they can join and leave the environment freely. Therefore, the environment is competitive and uncertain. Because of the competition, participants may cheat in making deals, which means the environment is insecure for resource providers who intend to earn profits by renting their resources to the tasks of resource consumers. To address this, in this paper, intelligent agents are designed to quote strategically for the tasks they are interested in, on behalf of resource providers. Agents can quote according to the messages they obtain and the information learnt and predicted from those messages, to minimize the influence of insecure factors such as cheating, competition, and dynamism. The experimental evaluation shows that the proposed method outperforms both a well‐known multiresource negotiation‐based task allocation method and a max‐sum belief propagation–based method.
Korhonen, JJ & Gill, AQ 2018, 'Digital Capability Dissected', ACIS 2018 - 29th Australasian Conference on Information Systems, University of Technology, Sydney.
View/Download from: Publisher's site
View description>>
© 2018 authors. There is growing interest in digital innovation and transformation among researchers and practitioners. It has been recognised that being “digital” is not all about digital data and information technologies. The notion of “digital capability” has been increasingly embraced, but definitions of this concept have remained vague and elusive. A salient research question remains: what is digital capability? This question is explored in this paper from theoretical and practical perspectives in the form of a conceptual construct: the Digital Capability Framework (D-CaF). The framework distinguishes six levels and seven dimensions of digital capability. It is intended to provide a foundation to plan and execute digital capability driven innovation and transformation initiatives. Further, it helps identify and prioritise research areas of high impact for further studies.
Mishra, AK, Tripathy, AK, Obaidat, MS, Tan, Z, Prasad, M, Sadoun, B & Puthal, D 2018, 'A Chain Topology for Efficient Monitoring of Food Grain Storage using Smart Sensors', Proceedings of the 15th International Joint Conference on e-Business and Telecommunications, International Conference on e-Business, SCITEPRESS - Science and Technology Publications, pp. 89-98.
View/Download from: Publisher's site
View description>>
Due to the lack of an efficient monitoring system to periodically record environmental parameters for food grain storage, a huge loss of food grains in storage is reported every year in many developing countries, especially south-Asian countries. Although Smart Sensor Networks have been successfully implemented in various applications such as health-care, military, and wildlife monitoring, there are still various issues to be addressed in food grain storage monitoring applications. Due to food grain storage infrastructure constraints, the commonly practiced network topologies of sensor devices, such as mesh, star, and grid, cannot provide an effective monitoring environment. In this paper, we propose a topology using smart sensors that can effectively cover and monitor the food grain storage area. It uses a chained structure of sensor devices with directional antennas to accurately sense and report the environmental data. The proposed topology works better than common topologies due to its chain-based structure, which remains unaffected by the various hindrances imposed by food grain storage infrastructure. From the experimental results it is concluded that the proposed topology achieves better coverage percentage, detection accuracy, and message delivery than Cluster-based and Mesh topologies in food grain storage environments.
Lai, JCS, Oberst, S & Evans, TA 2018, 'Termites use vibrations to eavesdrop on predatory ants', INTER-NOISE 2018 - 47th International Congress and Exposition on Noise Control Engineering: Impact of Noise Control Engineering, Internoise 2018, Chicago, Illinois, USA.
View description>>
Animals detect many signal types, including light, chemicals, sound and vibrations, some of which come from their environment and some of which they produce themselves. Signals are used deliberately to communicate and can be detected by predators or parasites. The role of vibrational communication in predator-prey relationships has received limited attention. One such relationship is that between termites and ants, which often live in close proximity, with evidence of this evolutionary arms race dating back millions of years. Apart from having soldiers to drum alarm signals and to slow down predators' attacks, termites rely on mechanisms to avoid being contacted and detected. However, being cryptic also limits their ability to explore and assess foraging sites. Our previous research shows that despite being blind, (a) termites use the vibrations of their feeding to assess food size; and (b) workers of the drywood termite Cryptotermes secundus use vibrations to eavesdrop, discriminating their own kin from, and avoiding, their main subterranean competitor, Coptotermes (Co.) acinaciformis. In this paper, we discuss our recent results showing that Co. acinaciformis can detect its main predator, the ant Iridomyrmex purpureus, by detecting its footsteps, whose frequency and magnitude are similar to those of the alarm signal of Co. acinaciformis. The application of engineering noise control principles to develop vibration-based termite control technologies will be discussed.
Li, C, Deng, C, Li, N, Liu, W, Gao, X & Tao, D 2018, 'Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval', 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 4242-4251.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods.
Li, G, Feng, B, Li, G, Zhou, H & Yu, S 2018, 'An SMDP-Based Service Function Allocation Scheme for Mobile Edge Clouds', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, Kansas City, MO, USA, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. With the increasing global mobile traffic, there is a trend to deploy network services at mobile edge clouds (MECs). Benefiting from the techniques of Network Function Virtualization and Software-Defined Networking, service function chains are enabled to compose a series of required network functions dynamically. As a consequence, most commonly-used and IT-based mobile network services can be deployed in MEC cloud networks under the 5G context, remarkably reducing user latency and network traffic. However, as resources in cloud networks are limited, it is challenging to improve system utilization with guaranteed user experience. Thus, in this paper, we formulate the allocation problem of service functions in MECs as a Semi-Markov Decision Process model and present a value iteration algorithm to find the optimized solution, aiming to increase the request acceptance rate. Additionally, we discuss the parameter settings of the proposed scheme under different cases to find higher rewards.
Li, G, Zhou, H, Feng, B, Li, G & Yu, S 2018, 'Automatic Selection of Security Service Function Chaining Using Reinforcement Learning', 2018 IEEE Globecom Workshops (GC Wkshps), 2018 IEEE Globecom Workshops (GC Wkshps), IEEE, United Arab Emirates, pp. 203-208.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. When selecting security Service Function Chaining (SFC) for network defense, operators usually take security performance, service quality, deployment cost, and network function diversity into consideration, formulating the choice as a multi-objective optimization problem. However, as applications, users, and data volumes grow massively in networks, traditional mathematical approaches cannot be applied to online security SFC selection due to high execution time and the uncertainty of network conditions. Thus, in this paper, we utilize reinforcement learning, specifically the Q-learning algorithm, to automatically choose proper security SFCs for various requirements. In particular, we design a reward function to make a tradeoff among different objectives and modify the standard ε-greedy based exploration to pick out multiple ranked actions for diversified network defense. We compare Q-learning with mathematical optimization-based approaches, which are assumed to know network state changes in advance. The training and testing results show that the Q-learning based approach can capture changes in network conditions and make a tradeoff among different objectives.
Li, J, Luo, H, Jin, M, Yu, S & Wang, Z 2018, 'Solving Selfish Routing in Route-by-Name Information-Centric Network Architectures', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Information-Centric Networking (ICN) is a promising network paradigm for the future Internet. As in the current Internet, selfish routing is also a crucial problem in ICN. To the best of our knowledge, however, the selfish routing problem in ICN remains an unresolved challenge. To fill this gap, in this paper we propose a Nash Bargaining based content registration (NBREG) method, which registers content names (i.e., disseminates content reachability information) from a game-theoretic perspective. NBREG allows neighboring domains to cooperate with each other without revealing their internal private information. Based on results from real inter-domain topology trace simulations and prototype implementations, we show that neighboring domains obtain more benefits with NBREG than when they register and forward contents selfishly.
Li, M, Yang, C, Zhang, J, Puthal, D, Luo, Y & Li, J 2018, 'Stock market analysis using social networks', Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2018: Australasian Computer Science Week 2018, ACM, Brisbane, Queensland, Australia.
View/Download from: Publisher's site
View description>>
© 2018 ACM. Nowadays, the use of social media has reached unprecedented levels. Among all social media, Twitter, with its popular micro-blogging service, enables users to share short messages in real time about events or to express their own opinions. In this paper, we examine the effectiveness of various machine learning techniques on a retrieved tweet corpus. A machine learning model is deployed to predict tweet sentiment and to gain insight into the correlation between Twitter sentiment and stock prices. Specifically, tweets are mined using Twitter's search API and processed for further analysis. To determine tweet sentiment, two machine learning techniques are adopted: Naïve Bayes classification and support vector machines. By evaluating each model, we find that the support vector machine gives higher accuracy under cross-validation. After predicting tweet sentiment, we mine historical stock data using the Yahoo Finance API; the designed feature matrix for stock market prediction includes the positive, negative, neutral and total sentiment scores and the stock price for each day. To evaluate the direct correlation between tweet sentiment and stock market prices, the same machine learning algorithm is implemented in our empirical study.
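The model comparison this abstract describes (Naïve Bayes versus a support vector machine under cross-validation) might look roughly like the following scikit-learn sketch; the toy tweets and labels are fabricated stand-ins, not the paper's corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled tweets (1 = positive, 0 = negative sentiment)
tweets = ['great earnings, buying more shares', 'stock is tanking, sell now',
          'love this company, very bullish', 'terrible guidance, bearish'] * 25
labels = [1, 0, 1, 0] * 25

scores = {}
for name, clf in [('naive_bayes', MultinomialNB()), ('svm', LinearSVC())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)   # text features -> classifier
    scores[name] = cross_val_score(pipe, tweets, labels, cv=5).mean()
```

On real, noisy tweet data the two classifiers would diverge; the paper reports the SVM winning under cross-validation.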
Li, S, Zhang, J, Xie, D, Yu, S & Dou, W 2018, 'High Quality Participant Recruitment of Mobile Crowdsourcing over Big Data', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. With the rich set of embedded sensors installed in smartphones, an increasing number of applications have been designed around these mobile sensors rather than static sensors in urban areas. In Mobile Crowdsourcing (MCS), participant selection is promoted to save energy and incentives. Nevertheless, most current research on this problem assumes that the system can obtain complete information about the participants. As a result, suitable tasks are often not allocated to suitable participants. This leads to inaccurate matches between tasks and participants and, in turn, to wasted energy and incentives. In view of this challenge, we aim to select participants under a more accurate prediction model, rather than assuming that the information of each participant can be obtained in advance. The prediction model is built on big data of participants' historic evaluations, which are used to predict user actions. Furthermore, a greedy method based on an improved singular value decomposition (SVD), named SVD-G, is proposed to solve this problem. Finally, the proposed SVD-G method is validated using a large-scale dataset collected from a real-world project (the DaZhongDianPing app).
Lin, A, Li, J, Zhang, L, Ma, Z & Luo, W 2018, 'Multiple-Task Learning and Knowledge Transfer Using Generative Adversarial Capsule Nets', AI 2018: Advances in Artificial Intelligence (LNAI), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 669-680.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. It is common for practical data to have multiple attributes of interest. For example, a picture can be characterized in terms of its content, e.g. the categories of the objects in the picture, while the image style, such as photo-realistic or artistic, is also relevant. This work is motivated by taking advantage of all available sources of information about the data, including those not directly related to the target of analytics. We propose an explicit and effective knowledge representation and transfer architecture for image analytics by employing capsules for deep neural network training based on generative adversarial nets (GAN). The adversarial scheme helps discover capsule representations of data with different semantic meanings in the respective dimensions of the capsules. The data representation includes a subset of variables that are particularly specialized for the target task, obtained by eliminating information about the irrelevant aspects. We theoretically show this elimination by mixing conditional distributions of the represented data. Empirical evaluations show the proposed method is effective for both standard transfer-domain recognition tasks and zero-shot transfer.
Lin, A, Li, J, Zhang, L, Shi, L & Ma, Z 2018, 'A New Family of Generative Adversarial Nets Using Heterogeneous Noise to Model Complex Distributions', AI 2018: Advances in Artificial Intelligence (LNAI), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 706-717.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Generative adversarial nets (GANs) are an effective framework for constructing data models and enjoy desirable theoretical justification. On the other hand, realizing GANs for practical, complex data distributions often requires careful configuration of the generator, discriminator, objective function and training method, and can involve much non-trivial effort. We propose a novel family of generative adversarial nets in which both continuous noise and random binary codes are employed in the generating process. The binary codes in the new GAN model (named BGAN) play the role of categorical latent variables, which helps improve model capability and training stability when dealing with complex data distributions. BGAN has been evaluated and compared with existing GANs trained with state-of-the-art methods on both synthetic and practical data. The empirical evaluation shows the effectiveness of BGAN.
Lin, A, Xuan, J, Zhang, G & Lu, J 2018, 'Causal inference with Gaussian processes for support of terminating or maintaining an existing program', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC, Belfast, Northern Ireland.
View/Download from: Publisher's site
Liu, B, Ding, M, Zhu, T, Xiang, Y & Zhou, W 2018, 'Using Adversarial Noises to Protect Privacy in Deep Learning Era', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, United Arab Emirates, pp. 2276-2281.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The unprecedented accuracy of deep learning methods has made them the foundation of new AI-based services on the Internet. At the same time, it presents obvious privacy issues: deep learning aided privacy attacks can dig out sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools. We also propose two new metrics to measure image privacy, along with two different image privacy protection schemes based on these metrics that utilize the adversarial example idea. The performance of our schemes is validated by simulation on a large-scale dataset. Our study shows that we can protect image privacy by adding a small amount of noise that has a humanly imperceptible impact on image quality.
Liu, C, Chen, L, Tsang, I & Yin, H 2018, 'Towards the Learning of Weighted Multi-label Associative Classifiers', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Because of their ability to capture the correlation between features and labels, association rules have been applied to multi-label classification. However, existing multi-label associative classification algorithms usually exploit association rules using heuristic strategies. Moreover, only the covering association rules, whose feature sets are subsets of the testing instance, are considered. Discarding any mined rule may diminish the performance of the classifier, especially when some rules differ from the testing instance by only a few insignificant features. In this paper we propose Weighted Multi-label Associative Classifiers (WMAC), which leverage an extended set of association rules whose features overlap with those of the testing instance to learn a universal weight vector for features. For this purpose, we embed the set of rules into a linear model and weight each association rule by its confidence. Empirical results on diversified datasets clearly demonstrate that WMAC outperforms other well-established multi-label classification algorithms.
Liu, C, Talaei-Khoei, A & Zowghi, D 2018, 'Theoretical support for enhancing data quality: Application in electronic medical records', Americas Conference on Information Systems 2018: Digital Disruption, AMCIS 2018, Americas Conference on Information Systems, New Orleans.
View description>>
This paper reviews the existing theoretical support for enhancing data quality and utilizes the findings of the review in the context of electronic medical records (EMRs). To this end, we first conducted a survey of publications that empirically investigate factors influencing data quality in conceptual models. Using a well-established taxonomy development method from the information systems discipline, we then proposed three dimensions for studying factors influencing data quality and constructing a conceptual model for enhancing data quality: breadth, depth, and interaction, with nine characteristics across these dimensions. Finally, we compared related studies using the proposed dimensions and applied the findings of the review to enhancing EMR quality, disclosing limitations and possible new areas for further study.
Liu, C, Zowghi, D, Talaei-Khoei, A & Daniel, J 2018, 'Achieving Data Completeness in Electronic Medical Records: A Conceptual Model and Hypotheses Development', Proceedings of the 51st Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hawaii International Conference on System Sciences, Hilton Waikoloa Village, Hawaii, USA, pp. 2824-2833.
View/Download from: Publisher's site
View description>>
This paper proposes a conceptual model for achieving data completeness in electronic medical records (EMR). To this end, we first draw on the model of factors influencing data quality management to construct our conceptual model. Second, based on the literature, we develop hypotheses about the relationships between factors influencing data completeness and mediators for achieving data completeness in EMR. Our conceptual model extends the prior model of factors influencing data quality management by adding a new factor and exploring the relationships between the influencing factors within the context of data completeness in EMR. The proposed conceptual model and the presented hypotheses, once empirically validated, will be the basis for the development of tools and techniques for achieving data completeness in EMR.
Liu, F, Zhang, G & Lu, J 2018, 'Unconstrained fuzzy feature fusion for heterogeneous unsupervised domain adaptation', 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Rio de Janeiro, BRAZIL.
View/Download from: Publisher's site
View description>>
Domain adaptation transfers knowledge from a source domain to improve pattern recognition accuracy in a target domain. However, the case where the target domain is unlabeled and heterogeneous with the source domain is rarely discussed, and it is a very challenging problem in the domain adaptation field. This paper presents a new feature reconstruction method: unconstrained fuzzy feature fusion. Through the reconstructed features of a source and a target domain, a geodesic flow kernel is applied to transfer knowledge between them. Furthermore, the original information of the target domain is preserved when reconstructing the features of the two domains. Compared to previous work, this work has two advantages: 1) the memberships of the original features to fuzzy features no longer need to sum to one, and 2) the original information of the target domain is preserved. As a result of these advantages, this work delivers better performance than previous studies on two public datasets.
Liu, L, Huo, H, Liu, X, Palade, V, Peng, D & Chen, Q 2018, 'Recognizing Textual Entailment with Attentive Reading and Writing Operations', Springer International Publishing, pp. 847-860.
View/Download from: Publisher's site
Liu, M, Nanda, P, Zhang, X, Yang, C, Yu, S & Li, J 2018, 'Asymmetric Commutative Encryption Scheme Based Efficient Solution to the Millionaires' Problem', 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), IEEE, New York, NY, USA, pp. 990-995.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Secure multiparty computation (SMC) is an important scheme in cryptography and can be applied to various real-life problems. The first SMC problem was the millionaires' problem, which involves two-party secure computation. Because public key encryption schemes are less efficient than symmetric encryption schemes, most existing solutions to this problem based on public key cryptography are inefficient. Thus, a solution based on a symmetric encryption scheme has been proposed. Although this approach is claimed to be efficient and practical, we discover several severe security flaws in it. In this paper, we analyze the vulnerability of existing solutions and propose a new scheme based on the Decisional Diffie-Hellman (DDH) assumption. Our solution also uses two special encodings (0-encoding and 1-encoding), generated by our modified encoding method, to reduce the computation cost of modular multiplications. Extensive experiments are conducted to evaluate the efficiency of our solution, and the experimental results show that it can be approximately 8000 times faster than the solution based on the symmetric encryption scheme for a 32-bit input and short-term security. Moreover, our solution is also more efficient than the state-of-the-art solution.
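The 0-encoding/1-encoding idea this solution builds on (shown here in its standard Lin-Tzeng form, not the authors' modified variant) can be sketched in plaintext: x > y exactly when the 1-encoding of x intersects the 0-encoding of y, so a secret comparison reduces to a private set-intersection test over encrypted encodings:

```python
def one_encoding(s):
    # 1-encoding of bit string s: every prefix that ends in a 1-bit
    return {s[:i + 1] for i in range(len(s)) if s[i] == '1'}

def zero_encoding(s):
    # 0-encoding: for every 0-bit, the prefix with that bit flipped to 1
    return {s[:i] + '1' for i in range(len(s)) if s[i] == '0'}

def greater_than(x, y, bits=8):
    # x > y iff the two encodings share an element (Lin-Tzeng comparison)
    sx, sy = format(x, f'0{bits}b'), format(y, f'0{bits}b')
    return bool(one_encoding(sx) & zero_encoding(sy))
```

For example, with 3-bit inputs, `greater_than(5, 3, bits=3)` is True because the prefix '1' appears in both encodings, while `greater_than(3, 5, bits=3)` and the equal case are False. In the actual protocol the parties never exchange these sets in the clear; the intersection test runs under encryption.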
Liu, Q, Huang, H, Zhang, G, Gao, Y, Xuan, J & Lu, J 2018, 'Semantic structure-based word embedding by incorporating concept convergence and word divergence', 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 32nd AAAI Conference on Artificial Intelligence / 30th Innovative Applications of Artificial Intelligence Conference / 8th AAAI Symposium on Educational Advances in Artificial Intelligence, ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, New Orleans, LA, pp. 5261-5268.
View description>>
Representing the semantics of words is a fundamental task in text processing. Several research studies have shown that text and knowledge bases (KBs) are complementary sources for word embedding learning. Most existing methods consider only the relationships within word pairs when using KBs. We argue that the structural information of well-organized words within KBs can convey more effective and stable knowledge for capturing the semantics of words. In this paper, we propose a semantic structure-based word embedding method, and introduce concept convergence and word divergence to reveal semantic structures in the word embedding learning process. To assess the effectiveness of our method, we use WordNet for training and conduct extensive experiments on word similarity, word analogy, text classification and query expansion. The experimental results show that our method outperforms state-of-the-art methods, including methods trained solely on the corpus and others trained on both the corpus and the KBs.
Liu, W & Chivukula, A 2018, 'AI 2018: Advances in Artificial Intelligence', The 31st Australasian Joint Conference on Artificial Intelligence, The 31st Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 692-692.
View/Download from: Publisher's site
Liu, W, Chang, X, Chen, L & Yang, Y 2018, 'Semi-supervised Bayesian attribute learning for person re-identification', 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, Thirty-Second AAAI Conference on Artificial Intelligence, AAAI, New Orleans, Louisiana, USA, pp. 7162-7169.
View description>>
Person re-identification (re-ID) tasks aim to identify the same person in multiple images captured from non-overlapping camera views. Most previous re-ID studies have attempted to solve this problem through either representation learning or metric learning, or by combining both techniques. Representation learning relies on the latent factors or attributes of the data. In most of these works, the dimensionality of the factors/attributes has to be manually determined for each new dataset, so this approach is not robust. Metric learning optimizes a metric across the dataset to measure similarity according to distance. However, choosing the optimal method for computing these distances is data dependent, and learning the appropriate metric relies on a sufficient number of pair-wise labels. To overcome these limitations, we propose a novel algorithm for person re-ID, called semi-supervised Bayesian attribute learning. We introduce an Indian Buffet Process to identify the priors of the latent attributes. The dimensionality of the attribute factors is then automatically determined by nonparametric Bayesian learning. Meanwhile, unlike traditional distance metric learning, we propose a re-identification probability distribution to describe how likely it is that a pair of images contains the same person. This technique relies solely on the latent attributes of both images. Moreover, pair-wise labels that are not known can be estimated from pair-wise labels that are known, making this a robust approach for semi-supervised learning. Extensive experiments demonstrate the superior performance of our algorithm over several state-of-the-art algorithms on small-scale datasets and comparable performance on large-scale re-ID datasets.
Liu, W, Chang, X, Chen, L & Yang, Y 2018, 'Semi-supervised Joint Learning of Representation and Relation for Person Re-identification', AAAI Conference on Artificial Intelligence, Louisiana, USA.
Liu, Z, Cai, Q, Wang, S, Xu, X, Dou, W & Yu, S 2018, 'A Cloud Service Enhanced Method Supporting Context-Aware Applications', MOBILE NETWORKS AND MANAGEMENT (MONAMI 2017), 9th European-Alliance-for-Innovation (EAI) International Conference on Mobile Networks and Management (MONAMI), Springer International Publishing, Melbourne, AUSTRALIA, pp. 277-290.
View/Download from: Publisher's site
LU, S, Oberst, S, Zhang, G & Luo, Z 2018, 'Comparing complex dynamics using machine learning-reconstructed attracting sets', Colloquium on Irregular Engineering Oscillations and Signal Processing, TUHH, Hamburg, Germany.
LU, S, Oberst, S, Zhang, G & Luo, Z 2018, 'Order patterns recurrence plots and new quantifications to unveil nonlinear dynamics from stochastic systems', International Conference on Time Series and Forecasting 2018, International Conference on Time Series and Forecasting 2018, Granada, Spain.
Ma, Y, Lv, T, Zhang, X, Gao, H & Yu, S 2018, 'High Energy Efficiency Transmission in MIMO Satellite Communications', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, USA.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this paper, we propose a high energy efficiency transmission scheme for multi-beam MIMO satellite systems. The satellite is regarded as a two-way decode-and-forward (DF) relay, where multiple pairs of users exchange information within each pair. Zero-forcing transceivers are employed at the satellite. The challenge is to derive an accurate yet tractable expression for the system-level energy efficiency (EE) to be used as our objective function. To tackle this challenge, we first derive a closed-form expression of the EE under the assumption of a perfect satellite channel. Second, based on this analytical expression, we formulate a resource allocation optimization problem for EE maximization by jointly optimizing satellite power and user power, subject to limited transmit power and minimum quality-of-service (QoS) constraints. Finally, the successive convex approximation technique is invoked to transform the original optimization problem into a concave fractional programming problem, which is then efficiently solved by existing methods. Simulation results demonstrate the effectiveness of the proposed algorithms.
Manzoor, A, Hu, Y, Liyanage, M, Ekparinya, P, Thilakarathna, K, Jourjon, G, Seneviratne, A, Kanhere, S & Ylianttila, ME 2018, 'Demo: A Delay-Tolerant Payment Scheme on the Ethereum Blockchain', 2018 IEEE 19th International Symposium on 'A World of Wireless, Mobile and Multimedia Networks' (WoWMoM), 2018 IEEE 19th International Symposium on 'A World of Wireless, Mobile and Multimedia Networks' (WoWMoM), IEEE.
View/Download from: Publisher's site
Mehar, AM, Gill, AQ & Matawie, KM 2018, 'Analytical Model for Residential Predicting Energy Consumption.', CBI (2), IEEE Conference on Business Informatics BAPAR Workshops, IEEE Computer Society, Vienna, Austria, pp. 82-88.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Effective energy consumption prediction is important for determining the demand and supply of energy. The challenge is how to predict energy consumption. This study presents an energy consumption analytical regression model and process based on a project conducted in an Australian company, involving the analysis of household and energy consumption datasets in the residential sector. The analytical model generation process is organised into four major stages: preparing and cleansing the household and energy consumption data; clustering household energy consumption into segments (groups) using the k-means algorithm, based on similarity of characteristics; selecting variables via stepwise multiple regression to determine the final model's predictors; and filtering the final regression model by identifying influential observations using Cook's distance and a Q-Q (quantile-quantile) normal plot to improve the model. The final filtered regression model explains 64 percent of the variation in the dependent variable through the independent variables, with a correlation of 0.8 between observed and predicted energy consumption values. This process and the resultant regression model seem useful for developing household energy consumption models for managing the demand and supply of energy.
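The four-stage process this abstract describes might be sketched as below. This is illustrative only: the household variables, coefficients, noise level and the 4/n Cook's distance cutoff are all invented, and the stepwise variable-selection stage is omitted; none of it is taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 200
# Stage 1: hypothetical, already-cleansed household data
occupants = rng.integers(1, 6, n).astype(float)
floor_area = rng.uniform(50.0, 250.0, n)
kwh = 3.0 * occupants + 0.5 * floor_area + rng.normal(0.0, 5.0, n)

# Stage 2: segment households by their characteristics with k-means
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    np.column_stack([occupants, floor_area]))

# Stage 3: ordinary least squares fit (stepwise selection omitted here)
X = np.column_stack([np.ones(n), occupants, floor_area])
beta, *_ = np.linalg.lstsq(X, kwh, rcond=None)
resid = kwh - X @ beta

# Stage 4: Cook's distance to flag influential observations, then refit
p = X.shape[1]
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)     # leverage values
s2 = resid @ resid / (n - p)
cooks = resid**2 / (p * s2) * h / (1 - h)**2
keep = cooks < 4 / n                              # common rule-of-thumb cutoff
beta_f, *_ = np.linalg.lstsq(X[keep], kwh[keep], rcond=None)
r2 = 1 - np.sum((kwh[keep] - X[keep] @ beta_f)**2) / np.sum(
    (kwh[keep] - kwh[keep].mean())**2)
```

With simulated data the refit recovers the planted coefficients closely; on the paper's real data the filtered model explains 64 percent of the variation.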
Melnikov, A, Quann, L, Alú, A, Oberst, S, Marburg, S & Powell, D 2018, 'Theory for Willis coupling prediction of acoustic meta-atoms', Symposium on Acoustic Metamaterials, Xatavia, Spain.
Meng, Q, Wang, K, Liu, B, Miyazaki, T & He, X 2018, 'QoE-Based Big Data Analysis with Deep Learning in Pervasive Edge Environment', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In the age of big data, services in a pervasive edge environment are expected to offer end-users better Quality of Experience (QoE) than those in a normal edge environment. Nevertheless, various types of edge devices with storage, delivery, and sensing capabilities are entering our environment and producing an ever-increasing volume of high-dimensional, pervasive big data with substantial redundancy. Therefore, satisfying QoE becomes the primary challenge for high-dimensional big data in a pervasive edge environment. In this paper, we first propose a QoE model to evaluate the quality of service in a pervasive edge environment; the value of QoE reflects not only data accuracy but also the transmission rate. Then, on the basis of accuracy, we propose a Tensor-Fast Convolutional Neural Network (TF-CNN) algorithm based on deep learning, which is suitable for high-dimensional big data analysis in a pervasive edge environment. Simulation results reveal that our proposal achieves high QoE performance.
Meng, S, Li, Q, Chen, S, Yu, S, Qi, L, Lin, W, Xu, X & Dou, W 2018, 'Temporal-Sparsity Aware Service Recommendation Method via Hybrid Collaborative Filtering Techniques', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Service-Oriented Computing, Springer International Publishing, Hangzhou, China, pp. 421-429.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Temporal information has been proven to be an important factor for recommender systems. Both user behavior and the QoS performance of services are time-sensitive, especially in a dynamic cloud environment. Furthermore, due to the data sparsity problem, it is still difficult for existing recommendation methods to capture the similarity relationships between services or users well. In view of these challenges, in this paper we propose a temporal-sparsity aware service recommendation method based on hybrid collaborative filtering (CF) techniques. Specifically, temporal influence is incorporated into a classical neighborhood-based CF model by distinguishing temporal QoS metrics from stable QoS metrics. To deal with the sparsity problem, a time-aware latent factor model based on tensor decomposition is applied to mine the temporal similarity between services. Finally, experiments are designed and conducted to validate the effectiveness of our proposal.
Merigó, JM, Herrera-Viedma, E, Cobo, MJ, Laengle, S & Rivas, D 2018, 'A Bibliometric Analysis of the First Twenty Years of Soft Computing', Proceedings of the Conference of the European Society for Fuzzy Logic and Technology, International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets, Springer International Publishing, Warsaw, Poland, pp. 517-528.
View/Download from: Publisher's site
View description>>
© 2018, Springer International Publishing AG. Soft Computing was launched in 1997; today, the journal is twenty years old. Motivated by this anniversary, this article develops a bibliometric analysis of the journal in order to identify its leading trends in terms of publications and citations. The work considers several issues, including the leading authors, institutions and countries, and uses software to develop a graphical analysis. The results show significant growth in recent years, which has consolidated the journal as a leading one in the field.
Merigo, JM, Herrera-Viedma, E, Yager, RR & Kacprzyk, J 2018, 'A Bibliometric Overview of the Research Impact of Lotfi A. Zadeh', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, pp. 441-446.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Lotfi A. Zadeh, the founder of fuzzy logic, was one of the most prominent computer scientists of all time. He passed away on the 6th of September 2017. To commemorate him and provide a complete overview of his research impact in the scientific community, this study presents a bibliometric overview of his publications according to the results available in the Web of Science Core Collection. The article also uses VOSviewer to graphically map the leading trends connected to Zadeh in terms of journals, papers, authors and countries. Naturally, the bibliometric sources used concern the more recent works of Zadeh, and one should bear in mind that his brilliant and prominent works on signal analysis, the Z-transform, the state space approach, optimal control, etc., are not included in our analyses.
Ming, Y, Wang, Y-K, Prasad, M, Wu, D & Lin, C-T 2018, 'Sustained Attention Driving Task Analysis based on Recurrent Residual Neural Network using EEG Data', 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Rio de Janeiro, Brazil, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper proposes applying a recurrent residual network (RRN) to analyze electroencephalogram (EEG) data captured during a simulated sustained-attention driving task. We first address the suitability of utilizing a residual structure, as well as adopting a recurrent structure, for EEG signal processing. Based on these considerations, a recurrent residual network is then tailored and described in detail. Third, we use an EEG dataset obtained from a sustained-attention experiment to justify our model. By applying the RRN model to the experimental data and achieving a competitive result, we demonstrate the elegance of the proposed model. Finally, we discuss the characteristics of the learned filters and their interpretations from the perspective of EEG frequency bands.
Mirtalaie, MA, Hussain, OK, Chang, E & Hussain, FK 2018, 'Sentiment Analysis of Specific Product’s Features Using Product Tree for Application in New Product Development', Advances in Intelligent Networking and Collaborative Systems The 9th International Conference on Intelligent Networking and Collaborative Systems, International Conference on Intelligent Networking and Collaborative Systems, Springer International Publishing, Toronto, CANADA, pp. 82-95.
View/Download from: Publisher's site
View description>>
New Product Development (NPD) is a multi-step process by which novel products are introduced in the market. Sentiment analysis, which ascertains the popularity of each new feature added to the product, is one of the key steps in this process. In this paper we present an approach by which product designers analyze users’ reviews from social media platforms to determine the popularity of a specific product’s feature in order to make a decision about adding it to the product’s next generation. Our proposed approach utilizes a product tree generated from a product specification document to facilitate forming an efficient link between features mentioned in the users’ reviews and those of the product designer’s interest. Furthermore, it captures the links/interactions between a feature of interest and its other related features in a product to ascertain its polarity.
Niamir, L, Ivanova, O, Filatova, T & Voinov, A 2018, 'Tracing Macroeconomic Impacts of Individual Behavioral Changes through Model Integration', IFAC-PapersOnLine, IFAC Workshop on Integrated Assessment Modelling for Environmental Systems, Elsevier BV, Brescia, Italy, pp. 96-101.
View/Download from: Publisher's site
View description>>
© 2018. The discourse on climate change stresses the importance of individual behavioral changes and shifts in social norms in assisting climate mitigation efforts worldwide. Designing an effective and efficient climate policy calls for decision support tools that can quantify the cumulative impacts of individual behaviour and integrate bottom-up processes into traditional decision support tools. We propose an integrated system of models that combines the strengths of macro and micro approaches to trace cross-scale feedbacks in socio-economic processes in residential energy markets at provincial and national scales. This paper explores the feasibility of such hybrid models for studying the dynamic effects of climate change mitigation policy measures targeted at changes in residential energy use practices. We present an example of an agent-based energy model (BENCH) integrated with the EU-EMS computable general equilibrium model, and we discuss methodological advancements and open challenges with respect to the integrated system of models.
Ning, X, Yao, L, Wang, X, Benatallah, B, Salim, F & Haghighi, PD 2018, 'Predicting Citywide Passenger Demand via Reinforcement Learning from Spatio-Temporal Dynamics', Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous '18: Computing, Networking and Services, ACM, New York, NY, USA, pp. 19-28.
View/Download from: Publisher's site
Ning, X, Yao, L, Wang, X, Benatallah, B, Zhang, S & Zhang, X 2018, 'Data-Augmented Regression with Generative Convolutional Network', Web Information Systems Engineering – WISE 2018, International Conference on Web Information Systems Engineering, Springer International Publishing, Dubai, United Arab Emirates, pp. 301-311.
View/Download from: Publisher's site
View description>>
Generative adversarial network (GAN)-based approaches have been extensively investigated in image and video processing domains, whereas GAN-inspired regression (i.e., numeric prediction) has rarely been studied. The lack of sufficient labeled data in many real-world cases poses great challenges to regression methods, which generally require sufficient labeled samples for training. In this regard, we propose a unified framework that combines a robust autoencoder and a generative convolutional neural network (GCNN)-based regression model to address the regression problem. Our model generates high-quality artificial samples, augmenting a small number of training samples for better training effects. Extensive experiments on two real-world datasets show that our proposed model consistently outperforms a set of advanced techniques under various evaluation metrics.
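As a simple illustration of the augmentation idea (not the paper's GCNN generator), the sketch below enlarges a small labelled regression set with noise-perturbed copies of each sample; the function name, jitter scheme and toy data are all hypothetical.

```python
import random

def augment(samples, labels, copies=3, noise=0.05, seed=0):
    """Enlarge a small labelled set by appending noise-perturbed copies of
    each sample; a crude stand-in for a generator producing artificial samples."""
    rng = random.Random(seed)
    aug_x, aug_y = list(samples), list(labels)
    for x, y in zip(samples, labels):
        for _ in range(copies):
            aug_x.append([v + rng.gauss(0, noise) for v in x])
            aug_y.append(y)  # perturbed copy keeps the original label
    return aug_x, aug_y

X, y = [[1.0, 2.0], [3.0, 4.0]], [0.5, 1.5]
X_aug, y_aug = augment(X, y)  # 2 originals + 2 * 3 jittered copies
```

A downstream regressor would then be trained on the enlarged `(X_aug, y_aug)` set instead of the original two samples.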
Oberst, S 2018, 'Nonlinear Dynamics: Towards a paradigm change via evidence-based complex dynamics modelling', Noise and Vibration Emerging Methods (NOVEM), Ibiza, Spain.
Oberst, S, Baetz, J, Campbell, G, Lampe, F, Lai, JCS, Hoffmann, N & Morlock, M 2018, 'Vibro-acoustic and nonlinear analysis of cadaveric femoral bone impaction in cavity preparations', International Conference on Engineering Vibration (ICoEV 2017), International Conference on Engineering Vibration (ICoEV), EDP Sciences, Sofia, Bulgaria, pp. 1-6.
View/Download from: Publisher's site
View description>>
Owing to an ageing population, the impact of unhealthy lifestyles, and congenital or gender-specific issues (dysplasia), degenerative bone and joint disease at the hip (osteoarthritis) poses an increasing problem in many countries. Osteoarthritis is painful and causes mobility restrictions; amelioration is often only achieved by replacing the complete hip joint in a total hip arthroplasty (THA). Despite significant orthopaedic progress related to THA, the success of the surgical process relies heavily on the judgement, experience, skills and techniques of the surgeon. One common way of implanting the stem into the femur is press-fitting uncemented stem designs into a prepared cavity. The cavity for the implant is formed using a range of compaction broaches, which are impacted into the femur. However, the surgeon decides whether to change the size of the broach, how hard and fast it is impacted, and when to stop the excavation process merely on the basis of acoustic, haptic or visual cues, which are subjective. It is known that non-ideal cavity preparations increase the risk of peri-prosthetic fractures, especially in elderly people.
This study reports on a simulated hip replacement surgery on a cadaver and the analysis of impaction forces and microphone signals during compaction. The recorded transient signals of impaction forces and acoustic pressures (≈ 80 µs - 2 ms) are statistically analysed for their trend, which shows increasing heteroscedasticity in the force-pressure relationship between broach sizes.
Tikhonov regularisation, as an inverse deconvolution technique, is applied to calculate the acoustic transfer functions from the acoustic responses and their mechanical impacts. The extracted spectra highlight that the system characteristics altered during the cavity preparation process: in the high-frequency range, the number of resonances increased with impacts and broach size. By applying nonlinear time series analysis, the system dynamics increase in compl...
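The Tikhonov-regularised deconvolution step described above can be sketched per frequency bin as H(f) = P(f) F*(f) / (|F(f)|² + λ), where F is the impact-force spectrum and P the acoustic-pressure spectrum. The toy code below (naive DFT, made-up signals, hypothetical λ) only illustrates that formula and is not the study's implementation.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def tikhonov_transfer(force, pressure, lam=1e-2):
    """Per-bin Tikhonov-regularised transfer function estimate:
    H(f) = P(f) * conj(F(f)) / (|F(f)|^2 + lambda).
    The lambda term keeps the division stable where |F(f)| is tiny."""
    F, P = dft(force), dft(pressure)
    return [p * f.conjugate() / (abs(f) ** 2 + lam) for f, p in zip(F, P)]

# Hypothetical impact force and resulting acoustic pressure samples
force = [0.0, 1.0, 0.5, 0.0]
pressure = [0.0, 0.8, 0.9, 0.3]
H = tikhonov_transfer(force, pressure)
```

With λ = 0 this reduces to plain spectral division, which blows up at near-zero force bins; the regulariser trades a small bias for that stability.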
Oberst, S, Lai, JCS & Evans, TA 2018, 'Excitation signal extraction of ant walking pattern under the influence of noise using a biomechanical bipedal mathematical model', 2nd International Symposium on Biotremology, San Michele all’Adige, Italy.
Oberst, S, Lim, S, Romão, AC, Lai, JCS, Stender, M, Hoffmann, NP & Evans, TA 2018, 'A coupled mono-bipedal biomechanical surrogate model to mimic ants walking and running gait analysed using recurrence plot quantification analysis', Colloquium on Irregular Oscillations and Signal Processing, Hamburg, Germany.
Pan, J, Li, J, Han, X & Jia, K 2018, 'Residual MeshNet: Learning to Deform Meshes for Single-View 3D Reconstruction', 2018 International Conference on 3D Vision (3DV), 2018 International Conference on 3D Vision (3DV), IEEE, Verona, Italy, pp. 719-727.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This work presents a novel architecture of deep neural networks to generate meshes approximating the surface of a 3D object from a single image. Compared to existing learning-based 3D reconstruction models, our architecture is characterized by (1) deep mesh deformation stacks with a residual network design, where a simple mesh is transformed to approximate the target surface and undergoes multiple deformation steps to progressively refine the result and reduce the residuals, and (2) parallel paths per deformation step, which can exponentially enrich the generated meshes using a deeper structure and more model parameters. We also propose a novel regularization scheme that encourages the meshes to be both globally complementary, so as to cover the target surface, and locally consistent with each other. Empirical evaluation on benchmark datasets shows the advantage of the proposed architecture over existing methods.
Pang, G, Cao, L, Chen, L & Liu, H 2018, 'Learning Representations of Ultrahigh-dimensional Data for Random Distance-based Outlier Detection', Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18: The 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, London, United Kingdom, pp. 2041-2050.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands/millions of features, has been a major way to enable learning methods to address the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learning the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance of detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored for a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields more optimal and stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage little labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO to an efficient method called REPEN to demonstrate the performance of RAMODO. Extensive empirical results on eight real-world ultrahigh dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% labeled data to achieve up to 32% AUC improvement.
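A minimal sketch of the kind of random distance-based scoring that the learned representations are tailored for, assuming a nearest-neighbour-in-random-subsample detector; the function names, parameters and toy data are illustrative, not the paper's code.

```python
import math
import random

def nn_dist(x, sample):
    """Distance from point x to its nearest neighbour in a subsample."""
    return min(math.dist(x, s) for s in sample)

def random_distance_scores(data, subsample=3, ensembles=20, seed=0):
    """Average, over random subsamples, of each point's nearest-neighbour
    distance within the subsample; larger scores suggest outliers, since
    irregular points sit far from any randomly drawn subsample."""
    rng = random.Random(seed)
    scores = [0.0] * len(data)
    for _ in range(ensembles):
        sample = rng.sample(data, min(subsample, len(data)))
        for i, x in enumerate(data):
            scores[i] += nn_dist(x, sample) / ensembles
    return scores

# Four clustered points and one far-away outlier (hypothetical data)
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = random_distance_scores(data)
```

In the paper's setting the distances would be computed in the learned low-dimensional representation space rather than on raw features.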
Pang, G, Cao, L, Chen, L, Lian, D & Liu, H 2018, 'Sparse Modeling-Based Sequential Ensemble Learning for Effective Outlier Detection in High-Dimensional Numeric Data', Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), New Orleans, USA, pp. 3892-3899.
View/Download from: Publisher's site
View description>>
The large proportion of irrelevant or noisy features in real-life high-dimensional data presents a significant challenge to subspace/feature selection-based high-dimensional outlier detection (a.k.a. outlier scoring) methods. These methods often perform two dependent tasks, relevant feature subset search and outlier scoring, independently of each other, consequently retaining features/subspaces irrelevant to the scoring method and downgrading the detection performance. This paper introduces a novel sequential ensemble-based framework SEMSE and its instance CINFO to address this issue. SEMSE learns sequential ensembles that mutually refine feature selection and outlier scoring by iterative sparse modeling with outlier scores as the pseudo target feature. CINFO instantiates SEMSE using three successive recurrent components to build such sequential ensembles. Given outlier scores output by an existing outlier scoring method on a feature subset, CINFO first defines a Cantelli's inequality-based outlier thresholding function to select outlier candidates with a false positive upper bound. It then performs lasso-based sparse regression, treating the outlier scores as the target feature and the original features as predictors on the outlier candidate set, to obtain a feature subset that is tailored for the outlier scoring method. Our experiments show that two different outlier scoring methods enabled by CINFO (i) perform significantly better on 11 real-life high-dimensional data sets, and (ii) have much better resilience to noisy features, compared to their bare versions and three state-of-the-art competitors. The source code of CINFO is available at https://sites.google.com/site/gspangsite/sourcecode.
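The Cantelli thresholding step can be illustrated as follows: Cantelli's inequality gives P(X ≥ μ + kσ) ≤ 1/(1 + k²), so for a false-positive bound α, setting 1/(1 + k²) = α yields k = √((1 − α)/α) and the threshold μ + kσ. The sketch below uses hypothetical scores and an illustrative α; it is not the paper's implementation.

```python
import math
import statistics

def cantelli_threshold(scores, alpha):
    """Cantelli's inequality: P(X >= mu + k*sigma) <= 1/(1 + k^2).
    Solving 1/(1 + k^2) = alpha gives k = sqrt((1 - alpha)/alpha), so
    scores above mu + k*sigma are outlier candidates with a false-positive
    probability of at most alpha (no distributional assumption needed)."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    k = math.sqrt((1 - alpha) / alpha)
    return mu + k * sigma

# Hypothetical outlier scores; the last one is clearly anomalous
scores = [0.1, 0.2, 0.15, 0.12, 0.18, 0.14, 0.9]
t = cantelli_threshold(scores, alpha=0.2)
candidates = [s for s in scores if s >= t]
```

The candidates retained this way would then feed the lasso-based sparse regression step described above.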
Pileggi, SF, Lopez-Lorca, AA & Beydoun, G 2018, 'Ontology in software engineering', ACIS 2018 - 29th Australasian Conference on Information Systems, Australasian Conference on Information Systems, Sydney, Australia.
View description>>
© 2018 authors. During the past years, ontological thinking and design have become more and more popular in the field of Artificial Intelligence (AI). More recently, Software Engineering (SE) has evolved towards more conceptual approaches based on the extensive adoption of models and meta-models. This paper briefly discusses the role of ontologies in SE from a perspective that closely matches the theoretical life-cycle. These roles vary considerably across the development lifecycle. The use of ontologies to improve SE development activities is still relatively new (2000 onward), but it is no longer a novelty. Indeed, the role of such structures is well consolidated in certain SE aspects, such as requirements engineering. On the other hand, despite their well-known potential as knowledge representation mechanisms, ontologies are not completely exploited in the area of SE. We first (i) propose a brief overview of ontologies and their current understanding within the Semantic Web, with a focus on the benefits provided; then (ii) we address the role that ontologies play in the more specific context of SE; finally, (iii) we offer some brief considerations on specific types of software architecture, such as Multi-Agent Systems (MAS) and Service-Oriented Architecture (SOA). The main limitation of our research is that we focus on traditional developments, where phases occur mostly sequentially. However, industry has fully embraced agile development. It is unclear whether agile practitioners are willing to adopt ontologies as a tool, unless we ensure that they provide a clear benefit and can be used in a lean way, without introducing significant overhead to the agile development process.
Prasad, M, Chang, L-C, Gupta, D, Pratama, M, Sundaram, S & Lin, C-T 2018, 'Online video streaming for human tracking based on weighted resampling particle filter', Procedia Computer Science, INNS Conference on Big Data and Deep Learning 2018, Elsevier BV, Bali, Indonesia, pp. 2-12.
View/Download from: Publisher's site
View description>>
© 2018 The Authors. Published by Elsevier Ltd. This paper proposes a weighted resampling method for a particle filter applied to human tracking with an active camera. The proposed system consists of three major parts: human detection, human tracking, and camera control. A codebook matching algorithm extracts the human region in the detection stage, and the particle filter algorithm estimates the position of the human in every input image. The proposed system selects the particles with highly weighted values during resampling, because they provide more accurate tracking features. Moreover, a proportional-integral-derivative (PID) controller steers the active camera by minimizing the difference between the center of the image and the position of the object obtained from the particle filter. The system converts this position difference into pan-tilt speeds to drive the active camera and keep the human in the camera's field of view (FOV). Because image intensity changes over time during tracking, the system uses a Gaussian mixture model (GMM) to update the human feature model. The temporal occlusion problem is solved by feature similarity and the resampling particles, and since the particle filter estimates the position of the human in every input frame, the active camera is driven smoothly. The robustness and accuracy of the proposed system's tracking can be seen in the experimental results.
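The weighted resampling idea, drawing particles in proportion to their weights so that highly weighted tracking hypotheses survive preferentially, can be sketched as follows (toy particle states and weights; this is a generic particle-filter step, not the paper's tracker).

```python
import random

def weighted_resample(particles, weights, seed=0):
    """Resample particles in proportion to their weights, so that
    high-weight particles (good tracking hypotheses) tend to be
    duplicated while low-weight ones tend to be dropped."""
    rng = random.Random(seed)
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(particles, weights=probs, k=len(particles))

# Hypothetical particle states (x, y image positions) and likelihood weights;
# the two middle particles are close to the tracked person
particles = [(10, 10), (50, 52), (51, 50), (90, 20)]
weights = [0.01, 0.48, 0.48, 0.03]
resampled = weighted_resample(particles, weights)
```

After resampling, the particle cloud concentrates around the likely target position, which is what lets the PID controller described above receive a stable position estimate.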
Prasad, M, Liu, C-L, Li, D-L, Jha, C & Lin, C-T 2018, 'Multi-view Vehicle Detection based on Part Model with Active Learning', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Nowadays, most vehicle detection methods aim to detect only single-view vehicles, and their performance is easily affected by partial occlusion. Therefore, a novel multi-view vehicle detection system is proposed to solve the problem of partial occlusion. The proposed system is divided into two steps: background filtering and part model. The background filtering step filters out trees, sky and other road background objects. In the part model step, each part model is trained on samples collected using the proposed active learning algorithm. This paper validates the performance of the background filtering method and the part model algorithm in multi-view car detection. The proposed method outperforms previously proposed methods.
Prasad, M, Rajora, S, Gupta, D, Daraghmi, Y-A, Daraghmi, E, Yadav, P, Tiwari, P & Saxena, A 2018, 'Fusion based En-FEC Transfer Learning Approach for Automobile Parts Recognition System', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 2193-2199.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The artificially supervised classification of real-world entities has gained phenomenal significance in recent years of computational advancement. An intelligent classification model focuses on rendering accurate outcomes via the implicated paradigms with respect to the data used to train the classifier. This paper proposes a novel deep learning approach to classify the various parts of any operational engine deployed in automobiles, such as crank shafts, rocker arms, distributors, air ducts, accessory belts, etc. The proposed architecture distinctively utilizes convolutional neural networks for this classification problem and constructs a robust transfer learning paradigm that renders the correct class label for validation and test images as the conclusive result of the classification. The proposed methodology is designed so that it can qualitatively classify, and hence give the corresponding class label of, the machinery/engine part under consideration. This computationally intelligent architecture requires the user to feed an image of the engine part to the model in order to obtain the requisite classification response. The main contribution of the proposed method is the development of a robust algorithm that exhibits pronounced results without training the entire ConvNet architecture from scratch, thereby enabling the proposed paradigm to be deployed in application instances where limited labeled training data is available.
Prasad, M, Zheng, D-R, Mery, D, Puthal, D, Sundaram, S & Lin, C-T 2018, 'A fast and self-adaptive on-line learning detection system', Procedia Computer Science, INNS Conference on Big Data and Deep Learning, Elsevier BV, Bali, Indonesia, pp. 13-22.
View/Download from: Publisher's site
View description>>
© 2018 The Authors. Published by Elsevier Ltd. This paper proposes a method that allows users to select target species for detection, generates an initial detection model from a small selected image sample, and, as the video plays, continues training this detection model automatically. The method produces noticeable detection results for several types of objects. The framework of this study is divided into two parts: the initial detection model and the online learning section. The detection-model initialization phase uses Haar-like features, sampled in proportion to the user-selected region, to generate a pool of features from which effective classifiers are trained and selected. Then, as the video plays, the detection model classifies new samples using a nearest-neighbour (NN) classifier with positive and negative samples, and a similarity model based on the fused background model computes each new sample's relative similarity to the target. From this conservative, similarity-based classification of new samples, the retained positive and negative samples are used for automatic online learning and training to continuously update the classifier. The test results for different types of objects show the ability to detect the target by choosing a small number of samples and performing automatic online learning, effectively reducing the manpower needed to collect large numbers of image samples and the long training times. The experimental results also reveal good detection capability.
Prysyazhnyuk, A & McGregor, C 2018, 'Spatio-temporal visualisation of big data analytics during spaceflight', Proceedings of the International Astronautical Congress, IAC.
View description>>
Technological advancements continue to extend the capacity of clinical decision support aboard spacecraft while improving physiological monitoring practices, presenting new opportunities for clinical discovery and early-detection monitoring. Preservation of the health and performance of astronauts remains paramount for the success of the mission and the safety of the entire crew. Increasing scientific evidence demonstrates the effectiveness of big data analytics in supporting the provision of medical care in space, providing the necessary tools for the development of an autonomous, comprehensive clinical decision support system. In prior work, the big data analytics framework known as Artemis was presented, demonstrating its capacity to analyse large volumes of physiological data streams, which can be effectively combined with other relevant clinical and environmental data. Preliminary studies focused on re-engineering algorithms assessing adaptation so that they run within the Online Analytics component of the Artemis platform, to assess the level of wellness and the tolerance of adaptation mechanisms to the conditions of spaceflight in real time. Conventional data visualisation methods limited the representation of data to 2-dimensional scatter graphs, which depicted the dynamicity of functional states yet provided no task-specific or temporal detail, hindering the ability to understand the trajectory of changes that occur in response to changing physiological and environmental conditions. The ability of the Artemis platform to support real-time analytics has necessitated the exploration of new data visualisation techniques to enable accurate representation of the functional state of the body while depicting the trajectory of movement, signifying deviation from the norm and the risk of development of pathology. A spatio-temporal visualisation technique for representation of big data analytics has been explored and demonstrates great potential to depict tas...
Prysyazhnyuk, A, McGregor, C, Bersenev, E & Slonov, AV 2018, 'Investigation of Adaptation Mechanisms During Five-Day Dry Immersion Utilizing Big-Data Analytics', 2018 IEEE Life Sciences Conference (LSC), 2018 IEEE Life Sciences Conference (LSC), IEEE, Montreal, QC, Canada, pp. 247-250.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Emerging technology continues to redefine the concept of health and the human capacity to adapt to various extreme environments on Earth, as well as in space, while preserving performance and alleviating adverse effects on the human body. Technological advancements enable effective modeling of extreme environmental conditions in terrestrial facilities, demonstrating great potential for scientific discovery, modernization of available countermeasure systems and development of comprehensive software tools for clinical decision support. To date, a vast amount of knowledge has been accumulated on physiological deconditioning in response to the spaceflight environment. The underlying conditions are often closely associated with maladaptation, supported by changes in heart rate variability parameters. However, existing methods do not support real-time data acquisition, processing and analytics, thereby limiting the usability of physiological data to inform clinical decision making and the timely introduction of countermeasure systems. The proposed extension of Artemis, a big data analytics platform, and the modernization of the wellness algorithm demonstrate great potential to address the limitations of existing methods while significantly improving the provision of medical care in space or in terrestrial environments for individuals working and/or living under conditions of chronic stress. The current study demonstrates the application of the proposed big-data analytics framework in a 5-day dry immersion experiment.
Qararyah, F, Daraghmi, Y-A, Daraghmi, E, Rajora, S, Lin, C-T & Prasad, M 2018, 'A Time Efficient Model for Region of Interest Extraction in Real Time Traffic Signs Recognition System', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, India, pp. 83-87.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Computational intelligence plays a major role in developing intelligent vehicles, which contain a Traffic Sign Recognition (TSR) system for increasing vehicle safety. Traffic sign recognition systems consist of an initial phase called Traffic Sign Detection (TSD), where images and colors are segmented and fed to the recognition phase. The most challenging process in TSR systems in terms of time consumption is the detection phase. Previous studies proposed different models for traffic sign detection; however, the computation time of these models still requires improvement to enable real-time systems. Therefore, this paper focuses on computational time and proposes a novel time-efficient color segmentation model based on logistic regression. This paper uses the RGB color space as the domain for extracting the features of our hypothesis, which boosts the speed of the proposed model, since no color conversion is needed. The trained segmentation classifier is tested on 1000 traffic sign images taken in different lighting conditions. The experimental results show that the proposed model segmented 974 of these images correctly, in less than one-fifth of the time needed by other robust segmentation methods.
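A minimal sketch of logistic-regression segmentation directly on raw RGB values (so no colour-space conversion is needed), trained with plain gradient descent on a few toy pixels; this illustrates the idea only and is not the paper's trained model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_rgb_segmenter(pixels, labels, lr=0.1, epochs=200):
    """Logistic regression on (r, g, b) values in [0, 1]. Working directly
    in RGB avoids any colour-space conversion, which is what makes this
    segmentation step fast."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(pixels, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy training pixels: red-ish pixels are sign candidates (label 1)
pixels = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1), (0.2, 0.8, 0.3), (0.1, 0.2, 0.9)]
labels = [1, 1, 0, 0]
w, b = train_rgb_segmenter(pixels, labels)
pred = sigmoid(sum(wi * xi for wi, xi in zip(w, (0.85, 0.15, 0.1))) + b)
```

In a real pipeline the same weight vector would be applied per pixel to produce a binary mask that the recognition phase then consumes.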
Qiu, Z, Zhang, S, Zhou, W & Yu, S 2018, 'Empirical study on taxi's mobility nature in dense urban area', IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2018 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, USA, pp. 232-237.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Vehicular mobility statistics, including vehicle velocity, relative velocity and link duration between vehicles, may greatly impact V2V communications and networking, but existing works based on large-scale real traces are rarely reported. In this paper, we first present a statistical analysis of this topic using taxi traces from a metropolis in China, which reveals the practical distribution of the mobility statistics mentioned above. We propose a computation methodology with low computational complexity to approximate the vehicular communication pattern by analyzing large-scale vehicular trace data. By Maximum Likelihood Estimation (MLE), we conclude that taxi velocity follows a normal distribution and relative velocity follows a Logistic distribution at different disconnection distances. Moreover, the link duration is verified to comply with a generalized Pareto distribution at different disconnection distances. Such findings are significant for designing practical transmission technologies and protocols.
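For the normal-distribution case, the MLE step has a closed form: the estimated mean is the sample mean, and the estimated variance is the mean squared deviation (the biased estimator, divided by n rather than n − 1). The velocities below are made up for illustration; this is not the paper's dataset or code.

```python
import math

def mle_normal(samples):
    """Maximum-likelihood estimates for a normal distribution:
    mu_hat = sample mean; sigma_hat^2 = mean squared deviation
    (note the 1/n divisor, not the unbiased 1/(n-1))."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

# Hypothetical taxi velocities (km/h)
velocities = [32.0, 41.0, 37.0, 29.0, 36.0, 45.0]
mu, sigma = mle_normal(velocities)
```

Fitting the Logistic and generalized Pareto distributions mentioned above has no such closed form and would typically use numerical likelihood maximization instead.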
Qu, Y, Cui, L, Yu, S, Zhou, W & Wu, J 2018, 'Improving Data Utility through Game Theory in Personalized Differential Privacy', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, USA.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Due to the dramatically increasing amount of information published in social networks, privacy issues have given rise to public concerns. Although differential privacy provides privacy protection with theoretical foundations, the trade-off between privacy and data utility still demands further improvement. However, most existing works do not consider the impact of the adversary in the measurement of data utility. In this paper, we first propose a personalized differential privacy scheme based on social distance. Then, we analyze the maximum data utility when users and adversaries are blind to each other's strategy sets. We formulate all the payoff functions in the differential privacy sense, followed by the establishment of a static Bayesian game. The trade-off is calculated by deriving the Bayesian Nash Equilibrium. In addition, the in-place trade-off can maximize the user's data utility if the action sets of the user and the adversary are public while the strategy sets are unrevealed. Our extensive experiments on a real-world dataset prove that the proposed model is effective and feasible.
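A hedged sketch of what personalized differential privacy based on social distance could look like with the standard Laplace mechanism: the privacy budget ε shrinks as social distance grows, so distant contacts receive noisier (lower-utility) releases. The scaling rule, function names and values are hypothetical, not the paper's formulation.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def personalized_dp_release(value, sensitivity, base_eps, social_distance):
    """Laplace mechanism with a per-recipient epsilon: closer contacts
    (small social distance) get a larger epsilon, hence less noise and
    higher data utility (illustrative scaling rule)."""
    rng = random.Random(0)  # fixed seed for a reproducible illustration
    eps = base_eps / max(social_distance, 1)
    return value + laplace_noise(sensitivity / eps, rng)

noisy_close = personalized_dp_release(100.0, 1.0, 1.0, social_distance=1)
noisy_far = personalized_dp_release(100.0, 1.0, 1.0, social_distance=5)
```

With the same random draw, the noise for the distant recipient is exactly five times larger here, matching the five-fold smaller ε.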
Qu, Y, Yu, S, Zhou, W & Niu, J 2018, 'FBI: Friendship Learning-Based User Identification in Multiple Social Networks', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The fast proliferation of mobile devices significantly promotes the development of mobile social networks. Users tend to interact with friends via multiple social networks. Identifying users across multiple social networks is of great significance in terms of both attack and defense. Current methods focus on either profile matching or network structure to re-identify a specific user. However, their accuracy is unsatisfactory, with relatively high error rates. In this paper, we propose a new Friendship learning-Based Identification (FBI) method to discriminate multiple pseudo identities of a real-world individual. We aim to provide a potential attack mechanism to inform subsequent privacy protection research. First, we develop a new identification method based on friendship matching. Then, we implement a weighting mechanism which takes profile, network structure, and friendship into consideration. Furthermore, machine learning is leveraged to further optimize the parameters and improve the accuracy. Extensive experimental results show the superiority of FBI compared to existing methods.
Raffe, WL & Garcia, JA 2018, 'Combining skeletal tracking and virtual reality for game-based fall prevention training for the elderly', 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper provides a preliminary appraisal of combining commercial skeletal tracking and virtual reality technologies for the purposes of innovative gameplay interfaces in fall prevention exergames for the elderly. This work uses the previously published StepKinnection game, which used skeletal tracking with a flat screen monitor, as a primary point of comparison for the proposed combination of these interaction modalities. Here, a Microsoft Kinect is used to track the player's skeleton and represent it as an avatar in the virtual environment while the HTC Vive is used for head tracking and virtual reality visualization. Multiple avatar positioning modes are trialled and discussed via a small self-reflective study (with the authors as participants) to examine their ability to allow accurate stepping motions, maintain physical comfort, and encourage self-identification or empathy with the avatar. While this is just an initial study, it highlights promising opportunities for designing engaging step training games with this integrated interface but also highlights its limitations, especially in the context of an unsupervised exercise program of older people in independent living situations.
Rajora, S, Kumar Vishwakarma, D, Singh, K & Prasad, M 2018, 'CSgI: A Deep Learning based approach for Marijuana Leaves Strain Classification', 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), IEEE, Vancouver, BC, Canada, pp. 209-214.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper proposes a novel approach that classifies images of various marijuana/cannabis leaves into their respective classes of strains and types. The proposed architecture works as a two-fold technique which, when implemented in the requisite sequence, delivers strong results for the classification problem. The first fold, segmentation or foreground extraction, focuses on extracting the ROI (Region of Interest) using a robust segmentation algorithm which can suitably separate the foreground from the image; the second fold, the deep learning aspect, focuses on the classification task. This work gives a quantitative analysis of implementing this classification problem via a transfer learning paradigm (for application instances with little training data in hand) versus training the entire CNN architecture from scratch (for application instances with sufficient training data in hand). Thus, altogether the proposed methodology deploys ConvNets for the posed classification problem with two implementation approaches, viz.: a) transfer learning, and b) training the entire CNN from scratch. The novelty of the proposed work lies in the construction of a robust algorithm, the first of its kind in this application domain, which can render the correct class label of the strain/type of a marijuana or cannabis leaf image when fed to the system for classification.
Rajora, S, Li, D-L, Jha, C, Bharill, N, Patel, OP, Joshi, S, Puthal, D & Prasad, M 2018, 'A Comparative Study of Machine Learning Techniques for Credit Card Fraud Detection Based on Time Variance', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 1958-1963.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper presents a comparative performance study of ten different machine learning algorithms on a credit card fraud detection application. The machine learning methods are classified into two groups, namely a classification algorithms group and an ensemble learning group. Each group comprises five different algorithms. In addition, a 'Time' feature is introduced in the dataset and the performance of the algorithms is studied with and without it. Two algorithms in the ensemble learning group are found to perform better when the dataset does not include the 'Time' feature. However, for the classification algorithms group, three classifiers show better predictive accuracy when all attributes are included in the dataset. The remaining machine learning models have approximately similar scores across these datasets.
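The with/without-'Time' comparison described above can be sketched with scikit-learn. The synthetic data, label rule, and the two model choices below are assumptions for illustration; the paper compares ten algorithms on a real credit card dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                    # anonymised transaction features
time_col = rng.uniform(0, 48, size=(n, 1))     # synthetic 'Time' stand-in (hours)
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic fraud label

results = {}
for tag, X_var in [("no_time", X), ("with_time", np.hstack([X, time_col]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X_var, y, random_state=0)
    for model in (RandomForestClassifier(random_state=0),
                  LogisticRegression(max_iter=1000)):
        name = f"{type(model).__name__}/{tag}"
        results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)

for name, acc in sorted(results.items()):
    print(f"{name}: {acc:.3f}")
```

Holding the train/test split fixed while toggling a single feature column is what makes the accuracy comparison attributable to that feature.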
Razzak, MI, Saris, RA, Blumenstein, M & Xu, G 2018, 'Robust 2D Joint Sparse Principal Component Analysis With F-Norm Minimization For Sparse Modelling: 2D-RJSPCA', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
Reza Nosouhi, M, Yu, S, Grobler, M, Xiang, Y & Zhu, Z 2018, 'SPARSE: Privacy-Aware and Collusion Resistant Location Proof Generation and Verification', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Recently, there has been an increase in the number of location-based services and applications. It is common for these applications to provide facilities or rewards for users who visit specific venues frequently. This creates the incentive for dishonest users to lie about their location and submit fake check-ins by changing their GPS data. To solve this issue, different distributed location proof schemes have been proposed to generate location proofs for mobile users. However, these schemes have some drawbacks: (1) they are vulnerable to either Prover-Prover or Prover-Witness collusions, (2) the location proof generation process is slow when users adopt a long private key, and (3) their implementation requires some hardware changes on mobile devices. To address these issues, we propose the Secure, Privacy-Aware and collusion Resistant poSition vErification (SPARSE) scheme to generate private location proofs for mobile users. SPARSE has a distributed architecture designed for ad-hoc scenarios in which mobile users generate location proofs for each other. Since we do not integrate any distance bounding protocol into SPARSE, it becomes an easy-to-implement scheme in which the location proof generation process is independent of the length of the users' private key. We provide a comprehensive security analysis and simulation which show that SPARSE provides privacy protection as well as security properties for users including integrity, unforgeability and non-transferability of the location proofs. Moreover, it achieves a highly reliable performance against collusions.
Roberts, AGK, Catchpoole, DR & Kennedy, PJ 2018, 'Variance-based Feature Selection for Classification of Cancer Subtypes Using Gene Expression Data', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Classification in cancer has traditionally relied on feature selection by differential expression as a first step, where genes are selected according to the strength of evidence for a consistent difference in expression level between classes. However, recent work has shown that many genes also differ in the variance of their gene expression between disease states, and in particular between cancers of different types, prognosis, or stages of development. Features selected based on increased variance in cancer or differences in variance between tumours of differing prognosis have been used to successfully predict tumour progression or prognosis within the same cancer type, and to classify cancer subtypes in cases where there is an overall increase in variance in one class over the other. Here, we apply feature selection by differential variance to the more general problem of classification of cancer subtypes. We show that classifiers using features selected by differential variance are able to distinguish between clinically relevant cancer subtypes, that these classifiers perform as well as classifiers based on features selected by differential expression, and that combining the two approaches often gives better classification results than either feature selection method alone.
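The idea of selecting features by differential variance, rather than differential expression, can be sketched in a few lines of NumPy. The ranking statistic and synthetic data below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def select_by_differential_variance(X, y, k):
    """Rank genes by the absolute log-ratio of class variances.

    X: samples x genes expression matrix; y: binary class labels.
    Returns indices of the top-k genes. A simple sketch of the idea,
    not the paper's exact statistic.
    """
    var0 = X[y == 0].var(axis=0) + 1e-12
    var1 = X[y == 1].var(axis=0) + 1e-12
    score = np.abs(np.log(var1 / var0))
    return np.argsort(score)[::-1][:k]

# Synthetic example: gene 0 differs in variance, gene 1 in mean only
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.repeat([0, 1], 50)
X[y == 1, 0] *= 4.0          # inflated variance in class 1
X[y == 1, 1] += 3.0          # mean shift only, invisible to this statistic
top = select_by_differential_variance(X, y, k=3)
print(top)
```

Note how the mean-shifted gene is invisible to the variance statistic, which is why the abstract reports that combining both selection criteria often works best.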
Saberi, M, Chang, E, Saffari, M & Khadeer Hussain, O 2018, 'Customised Data Dashboard for Contact Centres by Focussing on Customer Identification', 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), IEEE, pp. 153-157.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The main touch point of any organisation is its Contact Centre (CC) where about seventy percent of all customer interactions are handled. The first task of these centres is customer recognition. Wrong identification leads to customer dissatisfaction, which consequently affects Customer Service Representatives' (CSRs) emotions. CSR fatigue is a known problem in CCs and one of their main issues is the high rate of CSR attrition. Therefore, CSRs need good support such as having the required valuable information within CCs along with advanced data analytic tools and techniques that make their job of customer identification more efficient. In this paper, we propose a customised Customer Identification (ID) dashboard that provides a summary of customers' profiles to the CSRs. We propose a heuristic algorithm which measures the difficulty of customer identification based on his/her name. This information allows the CSR to know beforehand how much effort is required to ensure that the customer is identified as quickly as possible.
Saberi, Z, Hussain, O, Saberi, M & Chang, E 2018, 'Stackelberg Game-Theoretic Approach in Joint Pricing and Assortment Optimizing for Small-Scale Online Retailers: Seller-Buyer Supply Chain Case', 2018 IEEE 32nd International Conference on Advanced Information Networking and Applications (AINA), 2018 IEEE 32nd International Conference on Advanced Information Networking and Applications (AINA), IEEE, Poland, pp. 834-838.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Assortment planning is one of the fundamental and complex decisions for online retailers. The complexity of this problem increases when demand and supply uncertainties are considered in assortment planning (AP), yet doing so leads to more effective results in today's uncertain markets. In this paper, the supplier and e-tailer interactions are modeled with a non-cooperative game theory model. As small-scale online retailers, as opposed to bricks-and-mortar ones, usually have lower power relative to suppliers, we propose a Stackelberg, or leader-follower, game model. First, the supplier as the leader announces its decisions regarding the selling price to the e-tailer. Consequently, the e-tailer reacts by determining the purchase quantity, the selling price to customers and the assortment size. Various scenarios are presented and analyzed to show the effectiveness of the Stackelberg game model in simulating the interactions between small-scale online retailers and a powerful supplier.
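The leader-follower structure described above can be sketched by backward induction: the follower's best response is solved first, and the leader optimizes while anticipating it. The linear demand model and all parameter values below are illustrative assumptions, not the paper's model.

```python
# Stackelberg sketch with an assumed linear demand model d(p) = a - b*p.

def follower_best_response(w, a=100.0, b=2.0):
    # E-tailer maximizes (p - w) * (a - b*p); the first-order condition gives:
    p = (a / b + w) / 2.0
    q = a - b * p
    return p, max(q, 0.0)

def leader_optimum(cost=10.0, a=100.0, b=2.0, grid=1000):
    # Supplier anticipates the follower's reaction and maximizes (w - cost) * q
    # over a grid of wholesale prices w.
    best = None
    for i in range(grid + 1):
        w = cost + (a / b - cost) * i / grid
        p, q = follower_best_response(w, a, b)
        profit = (w - cost) * q
        if best is None or profit > best[0]:
            best = (profit, w, p, q)
    return best

profit, w, p, q = leader_optimum()
print(f"wholesale {w:.2f}, retail {p:.2f}, quantity {q:.2f}, profit {profit:.2f}")
```

With these parameters the analytic optimum is w = 30, p = 40, q = 20, which the grid search recovers; richer models (assortment size, uncertainty) extend the same anticipate-then-optimize pattern.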
Saeed, Z, Abbasi, RA, Sadaf, A, Razzak, MI & Xu, G 2018, 'Text Stream to Temporal Network - A Dynamic Heartbeat Graph to Detect Emerging Events on Twitter', PAKDD 2018: Advances in Knowledge Discovery and Data Mining, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Australia, pp. 534-545.
View/Download from: Publisher's site
View description>>
Huge mounds of data are generated every second on the Internet. People around the globe publish and share information related to real-world events they experience every day. This provides a valuable opportunity to analyze the content of this information to detect real-world happenings; however, this is quite a challenging task. In this work, we propose a novel graph-based approach named the Dynamic Heartbeat Graph (DHG) that not only detects events at an early stage, but also suppresses them in the subsequent adjacent data stream in order to highlight new emerging events. This characteristic makes the proposed method interesting and efficient in finding emerging events and related topics. The experimental results on real-world datasets (i.e. FA Cup Final and Super Tuesday 2012) show a considerable improvement in most cases, while time complexity remains very attractive.
Salamai, A, Saberi, M, Hussain, O & Chang, E 2018, 'Risk Identification-Based Association Rule Mining for Supply Chain Big Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Security, Privacy and Anonymity in Computation, Communication, and Storage, Springer International Publishing, Australia, pp. 219-228.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Since most supply chain processes include operational risks, a corporation's success depends mainly on identifying, analyzing and managing them. Currently, supply chain risk management (SCRM) is an active research field for enhancing a corporation's efficiency. Although several techniques have been proposed, they still face a big challenge as they analyze only internal risk events from big data collected from the logistics of supply chain systems. In this paper, we analyze features that can identify risk labels in a supply chain. We propose defining risk events based on the association rule mining (ARM) technique, which can categorize risk events in a supply chain based on a company's historical data. The empirical results obtained using data collected from an aluminum company show that this technique can efficiently generate and predict the optimal features of each risk label with higher than 96.5% accuracy.
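The support/confidence machinery behind association rule mining can be illustrated with a tiny pure-Python sketch over pairs of items. The risk-event names and thresholds below are hypothetical; the paper applies ARM to a real aluminum company's historical data.

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Tiny association-rule sketch over itemsets of size 1-2.
    Returns (lhs, rhs, support, confidence) tuples."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = frozenset(t)
        for item in items:
            counts[frozenset([item])] += 1
        for pair in combinations(sorted(items), 2):
            counts[frozenset(pair)] += 1
    rules = []
    for itemset, c in counts.items():
        if len(itemset) == 2 and c / n >= min_support:
            a, b = sorted(itemset)
            for lhs, rhs in ((a, b), (b, a)):
                conf = c / counts[frozenset([lhs])]
                if conf >= min_confidence:
                    rules.append((lhs, rhs, c / n, conf))
    return rules

# Hypothetical supply chain risk-event transactions
transactions = [
    {"supplier_delay", "stockout"},
    {"supplier_delay", "stockout", "price_spike"},
    {"price_spike"},
    {"supplier_delay", "stockout"},
]
rules = mine_rules(transactions)
```

Here "supplier_delay → stockout" survives both thresholds, while rules involving "price_spike" are pruned by minimum support; real ARM implementations extend the same pruning to larger itemsets (Apriori).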
Saqib, M, Daud Khan, S, Sharma, N, Scully-Power, P, Butcher, P, Colefax, A & Blumenstein, M 2018, 'Real-Time Drone Surveillance and Population Estimation of Marine Animals from Aerial Imagery', 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Auckland, New Zealand.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Video analysis is being rapidly adopted by marine biologists to assess the population and migration of marine animals. Manual analysis of videos by human observers is labor intensive and prone to error. The automatic analysis of videos using state-of-the-art deep learning object detectors provides a cost-effective way to study marine animal populations and their ecosystems. However, there are many challenges associated with video analysis, such as background clutter, illumination, occlusion, and deformation. Due to the high density of objects in the images and severe occlusion, current state-of-the-art object detectors often produce multiple detections per object. Therefore, a customized Non-Maxima Suppression step is proposed after detection to suppress false positives, which significantly improves the counting accuracy and mean average precision of the detections. An end-to-end deep learning framework based on Faster R-CNN [1] was adopted for detection, with base architectures of VGG16 [2], VGG-M [3] and ZF [4].
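For readers unfamiliar with the Non-Maxima Suppression step mentioned above, here is the standard greedy algorithm in pure Python (the paper customizes NMS; its specific modifications are not reproduced here, and the boxes below are made-up examples).

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maxima suppression: keep the highest-scoring box,
    drop detections that overlap it too much, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of one animal plus one separate detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
keep = nms(boxes, scores)
```

Suppressing the second, heavily overlapping box is exactly what recovers a correct per-animal count from duplicated detections.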
Saqib, M, Khan, SD, Sharma, N & Blumenstein, M 2018, 'Person Head Detection in Multiple Scales Using Deep Convolutional Neural Networks', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Person detection is an important problem in computer vision with many real-world applications. Detecting a person is still a challenging task due to variations in pose, occlusion and lighting conditions. The purpose of this study is to detect human heads in natural scenes acquired from a publicly available dataset of Hollywood movies. In this work, we have used state-of-the-art object detectors based on deep convolutional neural networks. These include region-based convolutional neural networks, which use region proposals for detection, as well as single-shot detectors, which look at the image only once. We have used transfer learning to fine-tune networks already trained on a massive amount of data. During the fine-tuning process, the models with the highest mean Average Precision (mAP) are used for evaluation on the test dataset. Experimental results show that Faster R-CNN [18] and SSD MultiBox [13] with VGG16 [21] perform better than YOLO [17], and also demonstrate significant improvements over several baseline approaches.
Saud Azeez, O, Kalantar, B, Al-Najjar, HAH, Halin, AA, Ueda, N & Mansor, S 2018, 'OBJECT BOUNDARIES REGULARIZATION USING THE DYNAMIC POLYLINE COMPRESSION ALGORITHM', The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Copernicus GmbH, pp. 541-546.
View/Download from: Publisher's site
View description>>
Abstract. This study presents a regularization approach to refine object boundaries for the purpose of 3D building modelling and reconstruction. Specifically, the derived Normalized Digital Surface Model (nDSM) image layer is first segmented using classical multi-resolution segmentation followed by spectral difference segmentation. As the segmentation results can contain quite a number of boundary artefacts in the form of geometrical distortions, the Dynamic Polyline Compression Algorithm (DCPA) is applied as a regularization step to refine the outer boundaries, removing the distortions. This results in higher-quality image objects for the purpose of 3D model reconstruction. Experimental results comparing automatically extracted buildings against manually digitized aerial photographs indicate high completeness scores of 94%–97% and correctness of 93%–96%. The overall average error is minimized, with very low Root Mean Square (RMS) and overlay errors.
schraefel, MC, van den Hoven, E & Andres, J 2018, 'The Body as Starting Point', Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18: CHI Conference on Human Factors in Computing Systems, ACM, Montreal QC, Canada.
View/Download from: Publisher's site
View description>>
© 2018 Copyright is held by the owner/author(s). More HCI designs and devices are embracing what is being dubbed “body centric computing,” where designs both deliberately engage the body as the locus of interest, whether to move the body into play or relaxation, or to track and monitor its performance, or to use it as a surface for interaction. Most HCI researchers are engaging in these designs, however, with little direct knowledge of how the body itself works either as a set of complex internal systems or as sets of internal and external systems that interact dynamically. The science of how our body interacts with the microbiome around us also increasingly demonstrates that our presumed boundaries between what is inside and outside us may be misleading if not considered harmful. Developing both (1) introductory knowledge and (2) design practice of how these in-bodied and circum-bodied systems work with our understanding of the em-bodied self, and how this gnosis/praxis may lead to innovative new body-centric computing designs is the topic of this workshop.
Shakeri Hossein Abad, Z, Gervasi, V, Zowghi, D & Barker, K 2018, 'ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information', 2018 5th International Workshop on Artificial Intelligence for Requirements Engineering (AIRE), 2018 5th International Workshop on Artificial Intelligence for Requirements Engineering (AIRE), IEEE, Banff, Canada.
View/Download from: Publisher's site
View description>>
Requirements elicitation requires extensive knowledge and deep understanding of the problem domain where the final system will be situated. However, in many software development projects, analysts are required to elicit the requirements from an unfamiliar domain, which often causes communication barriers between analysts and stakeholders. In this paper, we propose a requirements ELICitation Aid tool (ELICA) to help analysts better understand the target application domain by dynamic extraction and labeling of requirements-relevant knowledge. To extract the relevant terms, we leverage the flexibility and power of Weighted Finite State Transducers (WFSTs) in dynamic modeling of natural language processing tasks. In addition to the information conveyed through text, ELICA captures and processes non-linguistic information about the intention of speakers such as their confidence level, analytical tone, and emotions. The extracted information is made available to the analysts as a set of labeled snippets with highlighted relevant terms which can also be exported as an artifact of the Requirements Engineering (RE) process. The application and usefulness of ELICA are demonstrated through a case study. This study shows how pre-existing relevant information about the application domain and the information captured during an elicitation meeting, such as the conversation and stakeholders’ intentions, can be captured and used to support analysts achieving their tasks.
Shakeri Hossein Abad, Z, Rahman, M, Cheema, A, Gervasi, V, Zowghi, D & Barker, K 2018, 'Dynamic Visual Analytics for Elicitation Meetings with ELICA', 2018 IEEE 26th International Requirements Engineering Conference (RE), 2018 IEEE 26th International Requirements Engineering Conference (RE), IEEE, Banff, Canada, pp. 492-493.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Requirements elicitation can be very challenging in projects that require deep domain knowledge about the system at hand. As analysts have full control over the elicitation process, their lack of knowledge about the system under study inhibits them from asking relevant questions and reduces the accuracy of the requirements provided by stakeholders. We present ELICA, a generic interactive visual analytics tool to assist analysts during the requirements elicitation process. ELICA uses a novel information extraction algorithm based on a combination of Weighted Finite State Transducers (WFSTs) (a generative model) and SVMs (a discriminative model). ELICA presents the extracted relevant information in an interactive GUI (including zooming, panning, and pinching) that allows analysts to explore which parts of the ongoing conversation (or specification document) match the extracted information. In this demonstration, we show that ELICA is usable and effective in practice, and is able to extract the relevant information in real time. We also demonstrate how carefully designed features in ELICA facilitate the interactive and dynamic process of information extraction.
Sharma, N, Mandal, R, Sharma, R, Pal, U & Blumenstein, M 2018, 'Signature and Logo Detection using Deep CNN for Document Image Retrieval', 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Niagara Falls, NY, USA, pp. 416-422.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Signatures and logos as queries are important for content-based document image retrieval from a scanned document repository. This paper deals with signature and logo detection from a repository of scanned documents, which can be used for document retrieval using signature or logo information. Large intra-category variance among signature and logo samples poses challenges to traditional hand-crafted feature extraction-based approaches. Hence, the potential of deep learning-based object detectors, namely Faster R-CNN and YOLOv2, was examined for automatic detection of signatures and logos in scanned administrative documents. Four different network models, namely ZF, VGG16, VGG-M, and YOLOv2, were considered for analysis and for identifying their potential in document image retrieval. The experiments were conducted on the publicly available 'Tobacco-800' dataset. The proposed approach detects signatures and logos simultaneously. The results obtained from the experiments are promising and on par with existing methods.
Sharma, N, Scully-Power, P & Blumenstein, M 2018, 'Shark Detection from Aerial Imagery Using Region-Based CNN, a Study', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 224-236.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Shark attacks have been a very sensitive issue for Australians and many other countries. Thus, providing safety and security around beaches is fundamental in the current climate. Safety for both human beings and underwater creatures (sharks, whales, etc.) in general is essential while people continue to visit and use the beaches heavily for recreation and sport. Hence, an efficient, automated and real-time monitoring approach on beaches for detecting various objects (e.g. human activities, large fish, sharks, whales, surfers, etc.) is necessary to avoid unexpected casualties and accidents. The use of technologies such as drones and machine learning techniques are promising directions in such challenging circumstances. This paper investigates the potential of Region-based Convolutional Neural Networks (R-CNN) for detecting various marine objects, and sharks in particular. Three network architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), and VGG-M, were considered for analysis and for identifying their potential. A dataset consisting of 3957 video frames was used for the experiments. The VGG16 architecture with Faster R-CNN performed better than the others, with an average precision of 0.904 for detecting sharks.
Shi, H, He, W & Xu, G 2018, 'Workshop Proposal on Knowledge Discovery from Digital Libraries', Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries, JCDL '18: The 18th ACM/IEEE Joint Conference on Digital Libraries, ACM, Texas, USA, pp. 429-430.
View/Download from: Publisher's site
View description>>
© 2018 Authors. The workshop is held in conjunction with the 2018 ACM/IEEE Joint Conference on Digital Libraries (JCDL 2018), which will take place in Fort Worth, Texas, USA on June 3 - 7, 2018. The Joint Conference on Digital Libraries (JCDL) is a major international forum focusing on digital libraries and associated technical, practical, and social issues.
Sohaib, O, Naderpour, M & Hussain, W 2018, 'SaaS E-Commerce Platforms Web Accessibility Evaluation.', FUZZ-IEEE, International Conference on Fuzzy Systems, IEEE, Rio de Janeiro, Brazil, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Web accessibility in cloud computing is of most concern at the application level, where a human interacts with an application via a user interface. Although previous research has identified the influence of web accessibility on website effectiveness, the relative importance of web accessibility for software-as-a-service (SaaS) e-commerce platforms has not been empirically determined. This study evaluates the web accessibility of SaaS e-commerce platform websites. The web accessibility features from the cloud accessibility taxonomy framework were evaluated for people with disabilities such as sensory (hearing and vision), motor (limited use of hands) and cognitive (language and learning disabilities) impairments. We conducted an expert evaluation using Fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution). The results show that the Shopify cloud-based e-commerce platform has the highest number of web accessibility features from the proposed cloud accessibility framework, followed by 3dCart, BigCommerce, Volusion, and WooCommerce.
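The ranking step behind the evaluation above can be illustrated with classical (crisp) TOPSIS; the paper uses a fuzzy variant, which this sketch simplifies, and the decision matrix below is a made-up example rather than the study's expert scores.

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by closeness to the ideal solution.

    matrix: alternatives x criteria scores; weights: criterion weights;
    benefit: True if larger is better for that criterion.
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((v - ideal[j]) ** 2 for j, v in enumerate(row)))
        d_neg = math.sqrt(sum((v - anti[j]) ** 2 for j, v in enumerate(row)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical accessibility scores for three platforms on two criteria
scores = topsis([[7, 8], [5, 6], [6, 4]], weights=[0.6, 0.4],
                benefit=[True, True])
```

The fuzzy variant replaces the crisp scores with fuzzy numbers (e.g., triangular) and the distances with fuzzy distance measures, but the closeness-coefficient ranking works the same way.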
Song, Y, Zhang, G, Lu, H & Lu, J 2018, 'A Self-adaptive Fuzzy Network for Prediction in Non-stationary Environments', 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Rio de Janeiro, Brazil, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Prediction in non-stationary environments, where data streams are ever-changing at very high speeds, has become more and more important in real-world applications. The uncertainty in data streams caused by changes in data distribution is described as concept drift. The appearance of concept drift in a data stream results in inconsistencies between the existing data and incoming data. Such inconsistencies pose a great challenge to conventional machine learning methods, given that they are built on the assumption of independent and identically distributed data and cannot adapt to unpredictable changes in knowledge patterns. To solve this data stream uncertainty problem, this paper presents a window-based self-adaptive fuzzy network called the adaptive fuzzy network (AFN), which can continuously modify the network by identifying new knowledge from previous data samples. Three components are embedded in AFN: a drift detection module to identify whether the current window of data samples presents a different pattern from the previous one; a drift adaption module to retain useful knowledge from previous samples; and a fuzzy inference system, which integrates the detection and adaption modules for prediction. AFN has been evaluated through a set of experiments on non-stationary data streams. The experimental results show the good effectiveness of our method.
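As an illustration of the window-based drift detection component described above, here is a minimal sketch that flags drift when consecutive windows differ in mean by more than a few standard errors. The test statistic and thresholds are assumptions for illustration; AFN's actual detector is not specified in the abstract.

```python
import math
from collections import deque

class WindowDriftDetector:
    """Flags drift when the current window's mean departs from the previous
    window's by more than `z` standard errors. A minimal sketch of
    window-based detection, not AFN's actual module."""

    def __init__(self, window=100, z=3.0):
        self.window, self.z = window, z
        self.prev = None
        self.buf = deque(maxlen=window)

    def add(self, x):
        """Feed one sample; returns True when a completed window drifts."""
        self.buf.append(x)
        if len(self.buf) < self.window:
            return False
        cur = list(self.buf)
        drift = False
        if self.prev is not None:
            m0 = sum(self.prev) / len(self.prev)
            m1 = sum(cur) / len(cur)
            var = sum((v - m0) ** 2 for v in self.prev) / (len(self.prev) - 1)
            se = math.sqrt(2.0 * max(var, 1e-12) / self.window)
            drift = abs(m1 - m0) > self.z * se
        self.prev = cur
        self.buf.clear()
        return drift

# A stream whose distribution shifts halfway through
det = WindowDriftDetector(window=100)
flags = [det.add(0.0) for _ in range(100)] + [det.add(5.0) for _ in range(100)]
```

On a drift signal, a self-adaptive model like AFN would then invoke its adaption module to retain still-useful knowledge while updating the rest.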
Sood, K, Karmakar, K, Varadharajan, V, Tupakula, U & Yu, S 2018, 'Towards QoS and Security in Software-Driven Heterogeneous Autonomous Networks', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Autonomous Networks have the potential to solve complex and critical management issues in large-scale multi-technology networks. Further, the novel paradigms of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) offer unique and attractive solutions for Autonomous Networks or Systems (AS). However, despite these attractive features, we observed two critical issues in this interlinked multi-technology domain. Firstly, network externality and node heterogeneity seriously affect flow-specific Quality of Service (QoS). Secondly, they influence security adoption in a network of interconnected nodes. We observed that QoS and security are both non-negligible and inter-dependent factors. This motivates us to investigate solutions that a) alleviate SDN network heterogeneity at the control layer, and b) strengthen network security after alleviating the heterogeneity. In this research effort, we have attempted to alleviate the first issue. Firstly, significant and reasonable examples are cited to motivate researchers to study QoS and security hand in hand. Secondly, a theoretical high-level framework is proposed with the aim of transforming N heterogeneous controllers into n homogeneous controller groups. We then demonstrate that our approximation method for transforming heterogeneous systems into homogeneous groups works well even at a high degree of heterogeneity in the network. We show our theoretical analysis results using Matlab, followed by a Proof of Concept (PoC) of our approach in an SDN-NFV ecosystem using Mininet. This early analysis will help researchers to address heterogeneity and security in more effective ways.
Spoletini, P, Ferrari, A, Bano, M, Zowghi, D & Gnesi, S 2018, 'Interview Review: An Empirical Study on Detecting Ambiguities in Requirements Elicitation Interviews', REFSQ, International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer, Utrecht, Netherlands, pp. 101-118.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. [Context and Motivation] Ambiguities identified during requirements elicitation interviews can be used by the requirements analyst as triggers for additional questions and, consequently, for disclosing further – possibly tacit – knowledge. Therefore, every unidentified ambiguity may be a missed opportunity to collect additional information. [Question/problem] Ambiguities are not always easy to recognize, especially during highly interactive activities such as requirements elicitation interviews. Moreover, since different persons can perceive ambiguous situations differently, the unique perspective of the analyst in the interview might not be enough to identify all ambiguities. [Principal idea/results] To maximize the number of ambiguities recognized in interviews, this paper proposes a protocol to conduct reviews of requirements elicitation interviews. In the proposed protocol, the interviews are audio recorded and the recordings are inspected by both the analyst who performed the interview and another reviewer. The idea is to use the identified cases of ambiguity to create questions for the follow-up interviews. Our empirical evaluation of this protocol involves 42 students from Kennesaw State University and University of Technology Sydney. The study shows that, during the review, the analyst and the other reviewer identify 68% of the total number of ambiguities discovered, while 32% are identified during the interviews. Furthermore, the ambiguities identified by analysts and other reviewers during the review significantly differ from each other. [Contribution] Our results indicate that interview reviews allow the identification of a considerable number of otherwise undetected ambiguities, and can potentially be highly beneficial for discovering unexpressed information in future interviews.
Stender, M, Oberst, S & Hoffmann, NP 2018, 'Reconstruction of differential equations from time-series data for feature engineering and model identification', Colloquium on Irregular Oscillations and Signal Processing, Hamburg, Germany.
Tang, W, Li, S, Rafique, W, Dou, W & Yu, S 2018, 'An Offloading Approach in Fog Computing Environment', 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), IEEE, Guangzhou, China, pp. 857-864.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Fog computing has emerged as a promising infrastructure that provides elastic resources in the proximity of mobile users. Currently, offloading computational tasks from mobile devices to Fog servers has become the mainstream approach to improving the quality of experience (QoE) of mobile users. In fact, due to the high speed of vehicles moving on expressways, there are many candidate Fog servers to which they can offload their computational workload. However, which Fog server should be selected and how much computation should be offloaded, so as to meet a task's deadline without incurring a large computing bill, still lack discussion. To address this problem, we propose a deadline-aware and cost-effective offloading approach that aims to improve offloading efficiency for vehicles and let more tasks meet their deadlines. The feasibility and efficiency of the proposed approach have been validated by extensive experiments.
Tang, W, Wang, S, Li, D, Huang, T, Dou, W & Yu, S 2018, 'A Deadline-Aware Coflow Scheduling Approach for Big Data Applications', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, Kansas City, MO, USA.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Datacenters usually process complex jobs such as MapReduce jobs. From a network perspective, most of these jobs trigger multiple parallel data flows, which semantically comprise a coflow group. When scheduling jobs within a datacenter or across multiple datacenters, most current job schedulers do not consider the underlying network traffic load, which is suboptimal for job completion times. We present a new deadline-aware coflow scheduling approach called DCS, which takes the underlying network traffic load into consideration while guaranteeing that a high percentage of coflows meet their deadlines. DCS aims to alleviate network congestion in datacenters whose network workload is unbalanced, and it includes two stages for coflow scheduling: firstly, it generates a task placement proposal by considering the underlying network workload; secondly, it makes the scheduling decision by estimating both a task's execution time and its transmission waiting time under that placement proposal. Simulation results based on real-world data show that DCS outperforms existing solutions in reducing the percentage of coflows that miss their deadlines.
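The deadline-aware idea described above can be sketched in a few lines: estimate each coflow's completion time as execution time plus the transmission time implied by link capacity, and admit coflows in earliest-deadline order. The scheduling policy, data structures and numbers here are illustrative, not the paper's actual DCS algorithm.

```python
# A minimal deadline-aware admission sketch: earliest-deadline-first over a
# shared link, admitting a coflow only if its estimated finish time holds.

def schedule(coflows, link_capacity):
    """coflows: list of (name, data_size, exec_time, deadline)."""
    admitted, clock = [], 0.0
    for name, size, exec_time, deadline in sorted(coflows, key=lambda c: c[3]):
        finish = clock + exec_time + size / link_capacity  # compute + transfer
        if finish <= deadline:       # admit only if the deadline can be met
            admitted.append(name)
            clock = finish           # later coflows queue behind this one
    return admitted

coflows = [("job-a", 100, 1.0, 5.0), ("job-b", 400, 2.0, 6.0), ("job-c", 50, 0.5, 9.0)]
print(schedule(coflows, link_capacity=100))  # -> ['job-a', 'job-c']
```

Here "job-b" is rejected because its transfer time behind "job-a" would push it past its deadline, illustrating why transmission waiting time matters alongside execution time.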
Tiwary, M, Sharma, S, Mishra, P, El-Sayed, H, Prasad, M & Puthal, D 2018, 'Building Scalable Mobile Edge Computing by Enhancing Quality of Services', 2018 International Conference on Innovations in Information Technology (IIT), 2018 International Conference on Innovations in Information Technology (IIT), IEEE, Al Ain, United Arab Emirates, pp. 141-146.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The new computing architecture supported by Mobile Edge Computing (MEC) brings services to the physical proximity of end users, so resource-rich mobile devices such as smartphones can now offer computational services to other smartphones. When leveraging the computational resources of mobile cloudlet clusters, the availability of mobile nodes is the most important attribute, yet the effect of the movement of mobile devices in real-life scenarios has not been captured reliably. This work focuses on improving the Quality of Service (QoS) by considering the effect of mobility deviation. The paper presents a dynamic pricing model that calculates the deviation of the contributors and optimises the price according to the demand-supply curve. Finally, the proposed scheme is simulated in the NS3 environment and compared with existing schemes to validate its performance. We observe a steep increase in overall utility, in terms of response time and resource utilization, with the proposed scheme.
Verma, R, Merigo, JM & Mittal, N 2018, 'Triangular Fuzzy Partitioned Bonferroni Mean Operators and Their Application to Multiple Attribute Decision Making', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, pp. 941-949.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The Bonferroni mean (BM) operator, introduced by Bonferroni, is a powerful tool to capture the interrelationship among aggregated arguments. Various generalizations and extensions of BM have been developed and applied to solve many real-world problems. Recently, the notion of the partitioned Bonferroni mean (PBM) operator has been proposed under the assumption that interrelationships do not always exist among all of the attributes. This work studies the PBM operator in a triangular fuzzy environment. First, we propose a new fuzzy aggregation operator called the triangular fuzzy partitioned Bonferroni mean (TFPBM) operator for aggregating triangular fuzzy numbers. Some properties and special cases of the new aggregation operator are also investigated. For situations where the input arguments have different importance, we then define the triangular fuzzy weighted partitioned Bonferroni mean (TFWPBM) operator. Furthermore, based on the TFWPBM operator, an approach to multiple attribute decision-making problems in a triangular fuzzy environment is developed. Finally, a practical example is provided to illustrate the developed approach.
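For readers unfamiliar with the operator family above, the classical (crisp) Bonferroni mean that these fuzzy variants generalize can be sketched directly from its standard formula; the triangular-fuzzy versions apply this style of aggregation to fuzzy numbers rather than scalars.

```python
# The classical Bonferroni mean:
#   BM^{p,q}(a_1..a_n) = ( 1/(n(n-1)) * sum_{i != j} a_i^p * a_j^q )^(1/(p+q))
# It averages products of pairs of arguments, capturing their interrelationship.

def bonferroni_mean(values, p=1, q=1):
    n = len(values)
    total = sum(values[i] ** p * values[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1 / (p + q))

# On identical inputs the BM reduces to that input value (idempotency):
print(bonferroni_mean([0.5, 0.5, 0.5]))  # -> 0.5 (up to float rounding)
```

The partitioned variant restricts the sum so that pairs are only formed within attribute partitions that are actually interrelated.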
Verma, R, Merigo, JM & Sahni, M 2018, 'On Generalized Fuzzy Jensen-Exponential Divergence and Its Application to Pattern Recognition', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, pp. 1515-1519.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper develops a novel information theoretic divergence measure between two fuzzy sets based on exponential function and applies it to solve pattern recognition problems. First, we generalize the idea of fuzzy Jensen-exponential divergence and propose a new parametric divergence called fuzzy Jensen-exponential divergence of order-α to measure the information of discrimination between two fuzzy sets. We also prove some properties of the proposed measure and discuss its particular cases. Finally, we apply the proposed divergence measure between fuzzy sets to deal with pattern recognition problems with fuzzy information.
Vishwa, A & Hussain, FK 2018, 'A Blockchain based approach for multimedia privacy protection and provenance', 2018 IEEE Symposium Series on Computational Intelligence (SSCI), 2018 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Bangalore, India, pp. 1941-1945.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. There has been a vast increase in incidents related to multimedia copyright and security breaches in the past few years, compromising users' privacy. One such breach involved the seventh season of the TV series 'Game of Thrones', where episodes were illegally downloaded before the official release date. Such security breaches raise questions about the approaches and models that currently apply to data privacy and security, where the user saves and distributes his data personally or depends on a third party or stakeholder to manage the distribution rights of sensitive data. When it comes to multimedia, many companies or multimedia owners rely on third parties, distributors and sales persons to monitor their publicity, maintain their popularity and sell their multimedia content. Blockchain technology, which was originally devised for digital currency (cryptocurrency), has distinct features such as distributed networking, data privacy and trustless computing. This technology attracts great interest from the research community due to its innovative properties, which can be applied to many business applications, one being access control over data. In this paper, we present a decentralized data management framework that ensures user data privacy and control. We propose a protocol that uses blockchain technology to give the user control of his data. This protocol enables the user to have full control over his multimedia files without needing to trust a third party. The framework allows the user not only to store data but also to query and share data, and supports auditing. Finally, we discuss possible future extensions of blockchain technology as a medium to ensure privacy, data control, auditing and trust management in different areas.
Vyas, K & McGregor, C 2018, 'The Use of Heart Rate for the Assessment of Firefighter Resilience: A Literature Review', 2018 IEEE Life Sciences Conference (LSC), 2018 IEEE Life Sciences Conference (LSC), IEEE, Montreal, Canada, pp. 259-262.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Heart rate monitoring of firefighters has begun to be used for job stress level assessment and firefighting training. However, resilience assessment and heart rate variability monitoring are not widely utilized for firefighters, with limited feedback available through wearables. This paper presents an initial exploratory study that considers heart rate responses from firefighters in realistic emergency scenarios.
Wahid-Ul-Ashraf, A, Budka, M & Musial-Gabrys, K 2018, 'Newton’s Gravitational Law for Link Prediction in Social Networks', Complex Networks & Their Applications VI: Proceedings of Complex Networks 2017 (The Sixth International Conference on Complex Networks and Their Applications) (SCI 689), International Conference on Complex Networks and their Applications, Springer International Publishing, Lyon, France, pp. 93-104.
View/Download from: Publisher's site
View description>>
Link prediction is an important research area in network science due to its wide range of real-world applications. A number of link prediction methods exist. In the area of social networks, these methods are mostly inspired by social theory: for example, more mutual friends between two people on a social network platform entails a higher probability of those two people becoming friends in the future. In this paper we take our inspiration from a different area: Newton’s law of universal gravitation. Although this law deals with physical bodies, our intuition and empirical results suggest that it can also work in networks, and especially in social networks. In order to apply this law, we had to endow nodes with notions of mass and distance. While node importance can be considered as mass, the shortest path, path count, or inverse similarity (Adamic-Adar, Katz score, etc.) can be considered as distance. In our analysis, we have primarily used degree centrality to denote the mass of the nodes, while the lengths of the shortest paths between them have been used as distances. In this study we compare the proposed link prediction approach to 7 other methods on 4 datasets from various domains, using ROC curves and the AUC measure. The results show that our approach outperforms the other 7 methods on 2 of the 4 datasets, and we discuss the potential reasons for the observed behaviour.
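The mass-and-distance analogy above lends itself to a compact sketch: take degree as mass, BFS shortest-path length as distance, and score candidate links by Newton's law. The toy graph is illustrative, and the (unit) gravitational constant is dropped since only the ranking of scores matters for link prediction.

```python
# Gravitational link-prediction sketch: score(u, v) = deg(u) * deg(v) / d(u, v)^2,
# with degree centrality as "mass" and shortest-path length as "distance".

from collections import deque

def shortest_path_length(adj, src, dst):
    """Unweighted BFS shortest-path length; None if dst is unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def gravity_score(adj, u, v):
    d = shortest_path_length(adj, u, v)
    if d is None or d == 0:
        return 0.0
    return len(adj[u]) * len(adj[v]) / d ** 2

adj = {'a': ['b', 'c'], 'b': ['a', 'c', 'd'], 'c': ['a', 'b'], 'd': ['b']}
# 'a' and 'd' are two hops apart; higher-degree pairs at the same distance
# score higher, so they are predicted as more likely future links.
print(gravity_score(adj, 'a', 'd'))  # 2 * 1 / 2**2 -> 0.5
```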
Wahid-Ul-Ashraf, A, Budka, M & Musial, K 2018, 'NetSim -- The framework for complex network generator', Procedia Computer Science, Knowledge-Based and Intelligent Information & Engineering Systems, Elsevier, Belgrade, Serbia, pp. 547-556.
View/Download from: Publisher's site
View description>>
Networks are everywhere, and their many types, including social networks, the Internet, food webs etc., have been studied for the last few decades. However, in real-world networks it is hard to find examples that are easily comparable, i.e. have the same density or even the same number of nodes and edges. We propose the flexible and extensible NetSim framework to understand how properties of different types of networks change with a varying number of edges and vertices. Our approach enables the simulation of three classical network models (random, small-world and scale-free) with easily adjustable model parameters and network size. To be able to compare different networks, for a single experimental setup we kept the number of edges and vertices fixed across the models. To understand how network characteristics change depending on the number of nodes and edges, we ran over 30,000 simulations and analysed different characteristics that cannot be derived analytically. Two of the main findings from the analysis are that the average shortest path does not change with the density of the scale-free network but does change for small-world and random networks, and that there is an apparent difference in mean betweenness centrality of the scale-free network compared with random and small-world networks.
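The fixed-size comparison idea above can be sketched with the simplest of the three models: an Erdős–Rényi G(n, m) generator that draws an exact number of edges, so every model in a setup can be matched on nodes, edges, and hence density. The generator and parameter values are illustrative, not NetSim's actual implementation.

```python
# Generate a random network with exactly n nodes and m edges, so different
# network models can be compared at identical density.

import random
from itertools import combinations

def random_network(n, m, seed=0):
    """Sample exactly m distinct undirected edges among n nodes (G(n, m))."""
    rng = random.Random(seed)
    all_edges = list(combinations(range(n), 2))
    return rng.sample(all_edges, m)

def density(n, m):
    """Fraction of possible undirected edges that are present."""
    return m / (n * (n - 1) / 2)

edges = random_network(20, 30)
print(len(edges))       # exactly 30 edges, by construction
print(density(20, 30))  # 30/190, identical across all compared models
```

Holding n and m fixed across the random, small-world and scale-free generators is what makes differences in average shortest path or betweenness attributable to model structure rather than density.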
Wan, Y, Zhao, Z, Yang, M, Xu, G, Ying, H, Wu, J & Yu, PS 2018, 'Improving automatic source code summarization via deep reinforcement learning', Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE '18: 33rd ACM/IEEE International Conference on Automated Software Engineering, ACM, Corum, Montpellier, France, pp. 397-407.
View/Download from: Publisher's site
View description>>
© 2018 Association for Computing Machinery. Code summarization provides a high-level natural language description of the function performed by code, benefiting software maintenance, code categorization and retrieval. To the best of our knowledge, most state-of-the-art approaches follow an encoder-decoder framework that encodes the code into a hidden space and then decodes it into natural language space, suffering from two major drawbacks: a) the encoders only consider the sequential content of code, ignoring the tree structure, which is also critical for the task of code summarization; b) the decoders are typically trained to predict the next word by maximizing the likelihood of the next ground-truth word given the previous ground-truth word, yet at test time they are expected to generate the entire sequence from scratch. This discrepancy causes an exposure bias issue, making the learnt decoder suboptimal. In this paper, we incorporate the abstract syntax tree structure as well as the sequential content of code snippets into a deep reinforcement learning framework (i.e., an actor-critic network). The actor network provides the confidence of predicting the next word according to the current state, while the critic network evaluates the reward value of all possible extensions of the current state and can provide global guidance for exploration. We employ an advantage reward based on the BLEU metric to train both networks. Comprehensive experiments on a real-world dataset show the effectiveness of our proposed model compared with state-of-the-art methods.
Wang, B, Deng, K, Wei, W, Zhang, S, Zhou, W & Yu, S 2018, 'Full Cycle Campus Life of College Students: A Big Data Case in China', 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), IEEE, Shanghai, China, pp. 507-512.
View/Download from: Publisher's site
Wang, B, Yan, Z, Lu, J, Zhang, G & Li, T 2018, 'Deep Multi-task Learning for Air Quality Prediction', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Siem Reap, Cambodia, pp. 93-103.
View/Download from: Publisher's site
View description>>
© 2018, Springer Nature Switzerland AG. Predicting the concentration of air pollution particles has been an important task in urban computing. Accurate measurement and estimation enable citizens and governments to make suitable decisions. In order to predict the concentration of several air pollutants at multiple monitoring stations throughout a city region, we propose a novel deep multi-task learning framework based on residual Gated Recurrent Units (GRUs). The experimental results on real-world data from the London region substantiate that the proposed deep model is clearly superior to shallow models and outperforms 9 baselines.
Wang, B, Yan, Z, Lu, J, Zhang, G & Li, T 2018, 'Explore Uncertainty in Residual Networks for Crowds Flow Prediction', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The residual network has witnessed great success in computer vision, particularly on classification tasks; however, it has not been well studied in regression. In this work, we show its competence in a regression task, crowds flow prediction, which has strong implications for city safety and management. The problem of crowds flow prediction is challenging due to its fast dynamics. To address this issue, we explore residual learning with Gaussian regularization and propose a novel convolutional neural network called the Gaussian noise residual network (Noise-ResNet). Compared with the benchmark ST-ResNet on crowds flow prediction, the proposed architecture has three advantages: 1) superior performance: in particular, it attains state-of-the-art results on the benchmark dataset BikeNYC; 2) a light architecture: Noise-ResNet utilises only one residual unit, rather than the multiple units of ST-ResNet, which greatly reduces training time; 3) interpretable input sequences: Noise-ResNet takes an input sequence that considers only the most important periodic data and closeness data, which makes the learning process more interpretable. Furthermore, experimental results substantiate that Noise-ResNet can outperform ResNet with dropout on the same regression task.
Wang, B, Yan, Z, Lu, J, Zhang, G & Li, T 2018, 'Road traffic flow prediction using deep transfer learning', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC, Belfast, Northern Ireland, pp. 331-338.
View/Download from: Publisher's site
View description>>
Traffic flow prediction is a long-standing problem. Over recent years, deep learning has gradually achieved satisfying success on this task, but it depends on abundant historical traffic data. A realistic problem is that some newly established transportation networks have only limited data, which is not enough to train a robust deep learning model. To address this problem, we explore and apply transfer learning and fine-tuning to the field of transportation and propose a novel transferable traffic deep learning model, called TT-DL, which can predict real-time traffic flow on data-strapped roads by transferring knowledge from data-rich roads. Our experimental results show that transfer learning is better than other initialization methods. This indicates that a traffic network has its own special structure and that transferable knowledge exists between different traffic areas.
Wang, H, Chen, J, Wang, X, Liu, X & Na, Z 2018, 'Privacy Protection for Location Sharing Services in Social Networks', Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Springer International Publishing, pp. 97-102.
View/Download from: Publisher's site
View description>>
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018. Recently, there has been increasing interest in location sharing services in social networks. Behind the convenience brought by location sharing comes an indispensable security risk to privacy. Though many efforts have been made to protect users' privacy in location sharing, they are not suitable for social networks. Most importantly, little research so far supports user relationship privacy and identity privacy. Thus, we propose a new privacy protection protocol for location sharing in social networks. Different from previous work, the proposed protocol can provide perfect privacy for location sharing services. Simulation results validate the feasibility and efficiency of the proposed protocol.
Wang, J, Chen, L, Qin, L & Wu, X 2018, 'ASTM: An Attentional Segmentation Based Topic Model for Short Texts', ICDM, IEEE International Conference on Data Mining, IEEE Computer Society, Singapore, pp. 577-586.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. To address the data sparsity problem in short text understanding, various alternative topic models leveraging word embeddings as background knowledge have been developed recently. However, existing models combine auxiliary information and topic modeling in a straightforward way, without considering human reading habits. In contrast, extensive studies have shown that taking human attention into account holds great potential for textual analysis. Therefore, we propose a novel model, the Attentional Segmentation based Topic Model (ASTM), to integrate word embeddings as supplementary information with an attention mechanism that segments short text documents into fragments of adjacent words receiving similar attention. Each segment is assigned to a topic, and each document can have multiple topics. We evaluate the performance of our model on three real-world short text datasets. The experimental results demonstrate that our model outperforms the state-of-the-art in terms of both topic coherence and text classification.
Wang, S, Hu, L, Cao, L, Huang, X, Lian, D & Liu, W 2018, 'Attention-based transactional context embedding for next-item recommendation', 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, AAAI Conference on Artificial Intelligence, AAAI, New Orleans, United States, pp. 2532-2539.
View description>>
To recommend the next item to a user in a transactional context is practical yet challenging in applications such as marketing campaigns. Transactional context refers to the items that are observable in a transaction. Most existing transaction-based recommender systems (TBRSs) make recommendations by mainly considering recently occurring items rather than all the items observed in the current context. Moreover, they often assume a rigid order between items within a transaction, which is not always practical. More importantly, a long transaction often contains many items irrelevant to the next choice, which tends to overwhelm the influence of a few truly relevant ones. Therefore, we posit that a good TBRS should not only consider all the observed items in the current transaction but also weight them with different relevance, to build an attentive context that outputs the proper next item with high probability. To this end, we design an effective attention-based transaction embedding model (ATEM) for context embedding, which weights each observed item in a transaction without assuming an order. The empirical study on real-world transaction datasets proves that ATEM significantly outperforms state-of-the-art methods in terms of both accuracy and novelty.
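The order-free attentive weighting described above can be sketched as a softmax over per-item relevance scores, with the context embedding built as the weighted sum of item embeddings. The embeddings and scores below are illustrative toy values, and in ATEM itself the relevance scores are learned rather than given.

```python
# Attentive context sketch: weight each observed item by softmax relevance,
# then sum the weighted item embeddings into one context vector.

import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attentive_context(item_vecs, relevance_scores):
    w = softmax(relevance_scores)
    dim = len(item_vecs[0])
    return [sum(w[i] * item_vecs[i][d] for i in range(len(item_vecs)))
            for d in range(dim)]

items = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]         # embeddings of observed items
context = attentive_context(items, [2.0, 0.1, 0.1])  # first item most relevant
print(context)  # pulled toward the first item's embedding
```

Because the weights depend only on relevance, not position, a highly relevant item early in a long transaction is not drowned out by later irrelevant ones.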
Wang, Z, Yu, S & Rose, S 2018, 'An On-Demand Defense Scheme Against DNS Cache Poisoning Attacks', Security and Privacy in Communication Networks, SecureComm 2017, 13th EAI International Conference on Security and Privacy in Communication Networks (SecureComm), Springer International Publishing, Niagara Falls, Canada, pp. 793-807.
View/Download from: Publisher's site
Wang, Z, Zhou, H, Feng, B, Quan, W & Yu, S 2018, 'MTF: Mitigating Link Flooding Attacks in Delay Tolerant Networks', 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), IEEE, Guangzhou, China, pp. 1532-1539.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The link flooding attack (LFA) is a new type of distributed denial-of-service (DDoS) attack that has emerged in recent years. Several defense mechanisms have been proposed in TCP/IP networks. However, due to the connectionless nature of Delay Tolerant Networks (DTNs), the efficiency of these mechanisms degrades when facing LFAs in DTNs. Thus, in this paper, we propose a new scheme named Macro Traffic Filtering (MTF) to defend against LFAs in DTNs efficiently. With real prototype implementations and long-term emulations, the preliminary results show that, compared to undifferentiated interception and the TE-based interplay scheme, MTF achieves a significantly higher attack traffic hit ratio, lower collateral damage and a higher cost to the attackers.
Wu, D, Lu, J, Hussain, F, Doumouras, C & Zhang, G 2018, 'A workforce health insurance plan recommender system', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC.
View/Download from: Publisher's site
Wu, R, Xiong, J, Gui, L, Liu, B, Qiu, M, Ma, W & Shi, Z 2018, 'On Services Unequal Error Protecting and Pushing by Using Terrestrial Broadcasting Network', 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Valencia, Spain.
View/Download from: Publisher's site
Wu, W, Li, B, Chen, L & Zhang, C 2018, 'Efficient Attributed Network Embedding via Recursive Randomized Hashing', Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}, International Joint Conferences on Artificial Intelligence Organization, Stockholm, Sweden, pp. 2861-2867.
View/Download from: Publisher's site
View description>>
Attributed network embedding aims to learn a low-dimensional representation for each node of a network, considering both the attributes and the structure information of the node. However, learning-based methods usually involve a substantial time cost, which makes them impractical without the help of a powerful workhorse. In this paper, we propose a simple yet effective algorithm, named NetHash, to solve this problem with only moderate computing capacity. NetHash employs the randomized hashing technique to encode shallow trees, each of which is rooted at a node of the network. The main idea is to efficiently encode both the attributes and the structure information of each node by recursively sketching the corresponding rooted tree from the bottom (i.e., the predefined highest-order neighboring nodes) to the top (i.e., the root node), and, in particular, to preserve as much information close to the root node as possible. Our extensive experimental results show that the proposed algorithm, which does not need learning, runs significantly faster than state-of-the-art learning-based network embedding methods while achieving competitive or even better accuracy.
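The recursive sketching idea above can be illustrated with a toy min-hash: each node's signature min-hashes its own attributes together with its neighbours' signatures, from the deepest level up to the root. The salted-MD5 hash family, depth, and signature size are illustrative assumptions, not NetHash's actual construction.

```python
# Recursive randomized hashing sketch: a fixed-size signature per node that
# summarizes its attributes and its (bounded-depth) neighbourhood.

import hashlib

def h(item, salt):
    """One member of a family of hash functions, indexed by `salt`."""
    digest = hashlib.md5(f"{salt}:{item}".encode()).hexdigest()
    return int(digest, 16)

def minhash(items, num_hashes=8):
    """One minimum per salted hash function -> fixed-size signature."""
    return tuple(min(h(x, salt) for x in items) for salt in range(num_hashes))

def node_signature(adj, attrs, node, depth=2):
    """Recursively sketch the tree rooted at `node`, down to `depth` hops."""
    items = set(attrs[node])
    if depth > 0:
        for nbr in adj[node]:
            items.update(node_signature(adj, attrs, nbr, depth - 1))
    return minhash(items)

adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
attrs = {'a': ['red'], 'b': ['red'], 'c': ['blue']}
sig_a = node_signature(adj, attrs, 'a')
sig_c = node_signature(adj, attrs, 'c')
print(len(sig_a))  # 8: fixed-size signature regardless of tree size
```

No training occurs: signatures are computed directly, which is the source of the speed advantage the abstract reports.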
Xu, Q, Su, Z & Yu, S 2018, 'Green Social CPS Based E-Healthcare Systems to Control the Spread of Infectious Diseases', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, USA.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Recently, social network based e-healthcare services have emerged as a promising way to control the spread of infectious diseases. However, large-scale deployment in reality faces the fundamental challenge of reducing cost, where the social features of mobile users and the properties of networks should be considered. To tackle this problem, this paper presents a green social cyber physical system (CPS) based e-healthcare scheme to control infectious diseases. Firstly, based on an analysis of social features, highly influential users are selected for inoculation with immune drugs when an infectious disease is identified. Secondly, we develop an epidemic spreading model with dynamic equations to analyze the efficiency of the immune strategy. With the proposed model, the spread of infectious diseases can be effectively monitored and the spreading range of the infection can be predicted. In addition, simulation experiments show that the proposal is more efficient than conventional methods at preventing infectious diseases from spreading.
Xu, Z, Zhang, X, Yu, S & Zhang, J 2018, 'Energy-Efficient Virtual Network Function Placement in Telecom Networks', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Network Function Virtualization (NFV) is an emerging network architecture which decouples the software implementation of network functions from the underlying hardware. Telecom operators widely place diverse types of Virtual Network Functions (VNFs) on specified software middleboxes. Traffic needs to go through a set of ordered VNFs, which forms a Service Function Chain (SFC). However, how to efficiently place VNFs at various network locations while minimizing energy consumption is still an open problem. To this end, we study the joint optimization of VNF placement and traffic routing for energy efficiency in telecom networks. We first present the energy model in NFV-enabled telecom networks, and then formulate the studied problem as an Integer Linear Programming (ILP) model. Since the problem is NP-hard, we design a polynomial-time algorithm using the Markov approximation technique to find a near-optimal result. Extensive simulation results show that our algorithm saves up to 14.84% of energy consumption in telecom networks compared with previous VNF placement algorithms.
Xue, S, Lu, J, Zhang, G & Xiong, L 2018, 'A Framework of Transferring Structures Across Large-scale Information Networks', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Existing domain-specific methods for mining information networks in machine learning aim to represent the nodes of an information network in a vector format. However, a single network cannot yield good representations for real-world large-scale information networks: when information about the network structure is transferred from one network to another, the performance of the network representation may decrease sharply. To address this, we propose a novel framework for transferring useful information across relational large-scale information networks (FTLSIN). The framework consists of 2-layer random walks that measure the relations between two networks and predict links across them. Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
Xue, Y, Li, S, Han, K, Zhao, S, Huang, H, Yu, S & Zhu, Z 2018, 'Virtualization of Table Resources in Programmable Data Plane with Global Consideration', 2018 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2018 - 2018 IEEE Global Communications Conference, IEEE, Abu Dhabi, United Arab Emirates.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. In this work, we address the problem of memory fragmentation in ternary content addressable memory (TCAM) in the programmable data plane (PDP) by designing and implementing a novel network hypervisor for PDP, namely TPVX. TPVX realizes the virtualization of table resources in PDP with global consideration, i.e., when mapping tenant flow tables to physical switches, TPVX considers their table sizes and the pre-formatted sub-tables in the physical network to improve TCAM utilization and avoid memory fragmentation. Our experimental results verify that with TPVX, the utilization of table resources in PDP can be improved dramatically while the extra processing latency due to the newly introduced overheads remains low.
Yan, Z, Lu, J & Zhang, G 2018, 'Distributed Model Predictive Control of Linear Systems with Coupled Constraints Based on Collective Neurodynamic Optimization', AI 2018: Advances in Artificial Intelligence (LNAI), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 318-328.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Distributed model predictive control employs an array of local predictive controllers that synthesize the control of subsystems independently, yet communicate to cooperate efficiently in achieving the closed-loop control performance. Distributed model predictive control problems naturally result in sequential distributed optimization problems that require real-time solution. This paper presents a collective neurodynamic approach to design and implement the distributed model predictive control of linear systems in the presence of globally coupled constraints. For each subsystem, a neurodynamic model minimizes its cost function using local information only. According to the communication topology of the network, the neurodynamic models share information with their neighbours to reach consensus on the optimal control actions to be carried out. The collective neurodynamic models are proven to guarantee the global optimality of the model predictive control system.
Yang, E, Deng, C, Liu, T, Liu, W & Tao, D 2018, 'Semantic Structure-based Unsupervised Deep Hashing', Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}, International Joint Conferences on Artificial Intelligence Organization, pp. 1064-1070.
View/Download from: Publisher's site
View description>>
Hashing is becoming increasingly popular for approximate nearest neighbor searching in massive databases due to its storage and search efficiency. Recent supervised hashing methods, which usually construct semantic similarity matrices to guide hash code learning using label information, have shown promising results. However, it is relatively difficult to capture and utilize the semantic relationships between points in unsupervised settings. To address this problem, we propose a novel unsupervised deep framework called Semantic Structure-based unsupervised Deep Hashing (SSDH). We first empirically study the deep feature statistics, and find that the distribution of the cosine distance for point pairs can be estimated by two half Gaussian distributions. Based on this observation, we construct the semantic structure by considering points with distances obviously smaller than the others as semantically similar and points with distances obviously larger than the others as semantically dissimilar. We then design a deep architecture and a pair-wise loss function to preserve this semantic structure in Hamming space. Extensive experiments show that SSDH significantly outperforms current state-of-the-art methods.
Yang, H, Pan, S, Zhang, P, Chen, L, Lian, D & Zhang, C 2018, 'Binarized Attributed Network Embedding', 2018 IEEE International Conference on Data Mining (ICDM), 2018 IEEE International Conference on Data Mining (ICDM), IEEE, Singapore, pp. 1476-1481.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Attributed network embedding enables joint representation learning of node links and attributes. Existing attributed network embedding models are designed in continuous Euclidean spaces, which often introduces data redundancy and imposes challenges in storage and computation costs. To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representations. Specifically, we define a new Weisfeiler-Lehman proximity matrix to capture data dependence between node links and attributes by aggregating the information of node attributes and links from neighboring nodes to a given target node in a layer-wise manner. Based on the Weisfeiler-Lehman proximity matrix, we formulate a new Weisfeiler-Lehman matrix factorization learning function under the binary node representation constraint. The learning problem is a mixed integer optimization, and an efficient cyclic coordinate descent (CCD) algorithm is used as the solution. Node classification and link prediction experiments on real-world datasets show that the proposed BANE model outperforms the state-of-the-art network embedding methods.
Yang, L, Wei, T, Ma, J, Yu, S & Yang, C 2018, 'Inference Attack in Android Activity based on Program Fingerprint', 2018 IEEE Conference on Communications and Network Security (CNS), 2018 IEEE Conference on Communications and Network Security (CNS), IEEE, Beijing, China.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Privacy breaches have always been an important threat to mobile security. Recent studies show that an attacker can infer private user information through side channels, such as runtime memory and network usage. For side-channel attacks, malicious applications generally run in parallel in the background with a foreground application and stealthily collect side-channel information. In this paper, we analyze the relationship between memory changes and Activity transitions, then use side-channel information to label an Activity and build an Activity signature database. We show how to use runtime memory exposure to infer the Activity transitions of the current application and use other side channels to infer its Activity interface. We demonstrate the effectiveness of the attacks with 5 popular applications that contain sensitive user information, and successfully inferred most of the Activity transitions and Activity interfaces. Moreover, we propose a protection scheme which can effectively resist side-channel attacks.
Yeung, J & McGregor, C 2018, 'Countermeasure Data Integration within Autonomous Space Medicine: An Extension to Artemis in Space', 2018 IEEE Life Sciences Conference (LSC), 2018 IEEE Life Sciences Conference (LSC), IEEE, Montreal, Canada, pp. 251-254.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The health effects of microgravity on space mission crewmembers have historically been acceptable and reversible, yet the effects of longer-duration missions remain largely unknown. Expected communication blocks between the spacecraft and Mission Control on Earth would prevent crewmembers from consulting Earth-based doctors immediately should a medical problem arise onboard, which presents the potential to integrate a health analytics platform for real-time physiological monitoring. This paper proposes a design for the data integration of current medical support and countermeasure equipment that collects physiological data from astronauts onboard the ISS with an existing platform, to enable predictive and diagnostic analytic provisions.
Ying, H, Zhuang, F, Zhang, F, Liu, Y, Xu, G, Xie, X, Xiong, H & Wu, J 2018, 'Sequential Recommender System based on Hierarchical Attention Networks', Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}, International Joint Conferences on Artificial Intelligence Organization, Stockholm, Sweden, pp. 3926-3932.
View/Download from: Publisher's site
View description>>
With a large amount of user activity data accumulated, it is crucial to exploit users' sequential behavior for sequential recommendations. Conventionally, a user's general taste and recent demand are combined to improve recommendation performance. However, existing methods often neglect that a user's long-term preferences keep evolving over time, so building a static representation of general taste may not adequately reflect these dynamic characteristics. Moreover, they integrate user-item or item-item interactions in a linear way, which limits the capability of the model. To this end, in this paper we propose a novel two-layer hierarchical attention network, which takes the above properties into account, to recommend the next item a user might be interested in. Specifically, the first attention layer learns the user's long-term preferences based on the representations of historically purchased items, while the second outputs the final user representation by coupling the user's long-term and short-term preferences. The experimental study demonstrates the superiority of our method compared with other state-of-the-art ones.
Yu, H, Lu, J & Zhang, G 2018, 'An Incremental Dual nu-Support Vector Regression Algorithm', PAKDD 2018: Advances in Knowledge Discovery and Data Mining (LNAI), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Melbourne, VIC, Australia, pp. 522-533.
View/Download from: Publisher's site
View description>>
© 2018, Springer International Publishing AG, part of Springer Nature. Support vector regression (SVR) has been a hot research topic for several years, as it is an effective regression learning algorithm. Early studies on SVR mostly focused on solving large-scale problems. Nowadays, an increasing number of researchers are focusing on incremental SVR algorithms. However, these incremental SVR algorithms cannot handle uncertain data, which are very common in real life, because the data in the training examples must be precise. Therefore, to handle the incremental regression problem with uncertain data, an incremental dual nu-support vector regression algorithm (dual-v-SVR) is proposed. In the algorithm, a dual-v-SVR formulation is first designed to handle the uncertain data; we then design two special adjustments to enable the dual-v-SVR model to learn incrementally: incremental adjustment and decremental adjustment. Finally, the experiment results demonstrate that the incremental dual-v-SVR algorithm is an efficient incremental algorithm which is not only capable of solving the incremental regression problem with uncertain data, but is also faster than batch or other incremental SVR algorithms.
Yu, H, Lu, J, Zhang, G & Wu, D 2018, 'A Dual Neural Network Based On Confidence Intervals For Fuzzy Random Regression Problems', 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Uncertainty in dependent or independent variables is typically caused by randomness or fuzziness. But randomness and fuzziness increasingly appear simultaneously in independent or dependent variables, giving rise to the concept of a fuzzy random variable. Regression analysis is a statistical method to model the relationship between a dependent variable and one or more independent variables. However, standard regression algorithms cannot handle fuzzy random variables, so in this paper we propose a dual neural network algorithm based on confidence intervals for fuzzy random regression problems. The algorithm relies on the expectations of, and variances in, fuzzy random variables to construct confidence intervals for fuzzy random input-output data. A dual neural network then identifies the sides of the interval output data: one network identifies the upper side and another identifies the lower side, while a dual v-support vector regression algorithm concurrently constructs the initial structure of the dual neural network. Lastly, a dynamic genetic backpropagation algorithm tunes the parameters of the dual neural network to improve performance. Experiment results demonstrate the validity and applicability of the proposed dual neural network algorithm based on confidence intervals.
Yusoff, B, Merigó, JM & Hornero, DC 2018, 'Analysis on Extensions of Multi-expert Decision Making Model with Respect to OWA-Based Aggregation Processes', Advances in Intelligent Systems and Computing, International Forum for Interdisciplinary Mathematics, Springer International Publishing, Palau Macaya, Barcelona, Spain, pp. 179-196.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. In this paper, an analysis of extensions of a multi-expert decision making model based on ordered weighted averaging (OWA) operators is presented. The focus is on the aggregation of criteria and the aggregation of the individual judgments of experts. First, a soft majority concept based on induced OWA (IOWA) and generalized quantifiers to aggregate the experts' judgments is analyzed, concentrating on both the classical and alternative schemes of the decision making model. Secondly, an analysis of the weighting methods related to the unification of the weighted average (WA) and OWA is conducted. An alternative weighting technique is proposed, termed the alternative OWA-WA (AOWAWA) operator. The multi-expert decision making model is then developed based on both aggregation processes, and a comparison is made to see the effect of different schemes for the fusion of soft majority opinions of experts and of distinct weighting techniques in aggregating the criteria. A numerical example on the selection of an investment strategy is provided for comparison purposes.
Yusoff, B, Merigó, JM & Hornero, DC 2018, 'Generalized OWA-TOPSIS Model Based on the Concept of Majority Opinion for Group Decision Making', Advances in Intelligent Systems and Computing, International Conference of the Forum for Interdisciplinary Mathematics, Springer International Publishing, Spain, pp. 124-139.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG, part of Springer Nature 2018. In this paper, an extension of the OWA-TOPSIS model for group decision making is proposed through the inclusion of a concept of majority opinion and generalized aggregation operators. To achieve this objective, two fusion schemes in the TOPSIS model are designed. First, an external fusion scheme is proposed to aggregate the experts' judgments with respect to the concept of majority opinion on each criterion. Then, an internal fusion scheme for the ideal and anti-ideal solutions that represent the majority of experts is proposed using the Minkowski OWA distance with the inclusion of the relative importance of criteria. The advantages of the proposed model include the consideration of a soft majority concept as a group aggregator and flexibility in applying decision strategies for analyzing the decision making process. In addition, instead of calculating the majority opinion with respect to the individual experts' judgments on each alternative, the proposed method takes into account the majority of experts on each criterion, which reflects the specificity of criteria in the overall decision. A numerical example is provided to demonstrate the applicability of the proposed method, and comparisons are made between several aggregation operators and distance measures.
Za’in, C, Pratama, M, Lughofer, E, Ferdaus, M, Cai, Q & Prasad, M 2018, 'Big Data Analytics based on PANFIS MapReduce', Procedia Computer Science, International Neural Network Society Conference on Big Data and Deep Learning, Elsevier BV, Bali, Indonesia, pp. 140-152.
View/Download from: Publisher's site
View description>>
© 2018 The Authors. Published by Elsevier Ltd. In this paper, a big data analytic framework is introduced for processing high-frequency data streams. The framework architecture is developed by combining an advanced evolving learning algorithm, namely the Parsimonious Network Fuzzy Inference System (PANFIS), with MapReduce parallel computation, where PANFIS has the capability of processing data streams in large volume. Big datasets are learnt chunk by chunk by processors in the MapReduce environment, and the results are fused by a rule merging method that reduces the complexity of the rules. Performance measurements have been conducted, and the results show that the MapReduce framework together with the PANFIS evolving system reduces processing time by around 22 percent on average compared with the PANFIS algorithm alone, without any loss of accuracy.
Zakeri, A, Saberi, M, Hussain, OK & Chang, E 2018, 'A Heuristic Machine Learning Based Approach for Utilizing Scarce Data in Estimating Fuel Consumption of Heavy Duty Trucks', Springer International Publishing, pp. 96-107.
View/Download from: Publisher's site
Zakeri, A, Saberi, M, Hussain, OK & Chang, E 2018, 'Early Detection of Events as a Decision Support in the Milk Collection Planning', 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, Bangkok, Thailand, pp. 516-520.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Milk is a highly perishable product which needs to go through an almost perfect cold chain in a milk supply chain to maintain its highest quality. To satisfy the ever-increasing demand from dairy processors to be provided with raw milk of the highest quality, transporters need to ensure that the milk collected from farms has been stored properly before the pickup occurs, i.e., from the starting point of production on the farm until the pickup event. To address this issue, in this paper we propose a model for the early detection of events in a milking cycle. Using online data coming from IoT sensors, we detect and recognize various events in a milking cycle as close as possible to their actual occurrence in the tank. This provides the transporter with a comprehensive, clear picture of the milk cooling performance while the milk is stored on the farm. It also assists them in making smart decisions on pickup planning and scheduling.
Zakeri, A, Saberi, M, Hussain, OK, Aboutalebi, S & Chang, E 2018, 'Developing a quality index for managing the quality of raw milk in the farm', Proceedings of International Conference on Computers and Industrial Engineering, CIE.
View description>>
With significant changes in the food supply pattern from small stores to large supermarkets, as well as less frequent shopping cycles in recent years, there is an increasing demand for dairy products with an extended shelf life. That is why dairy processors insist on receiving raw milk of the highest quality. To address this, transportation companies need to make sure that the milk collected from farms is of the highest quality. However, the current procedure of milk collection by the transporter at farms faces two obstacles. First, quite often the milk collected from farms passes both the temperature and sensory tests performed by the transporter at the pickup point, but is subsequently rejected when it reaches the dairy processor due to an unacceptable level of bacteria present in it. This results in a substantial financial loss for both the farmer and the transporter. Second, when collecting the milk at the farm, the transporter has no information about the cooling history of the milk from the earliest point of extraction to the final point of pickup, so it cannot check whether the milk has been cooled according to the standard and what its resulting quality is. In this paper, we address this drawback by developing a function to calculate the quality of the milk in the tank that is ready to be picked up by the transporter. This information allows the transporter to make informed and smart decisions at two levels: first, whether to accept the milk from the farmer or not, and second, to which processors the collected milk should be assigned according to the processors' demands.
Zhang, L, Li, J, Huang, T, Ma, Z, Lin, Z & Prasad, M 2018, 'GAN2C: Information Completion GAN with Dual Consistency Constraints', 2018 International Joint Conference on Neural Networks (IJCNN), 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, Rio de Janeiro, Brazil.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. This paper proposes an information completion technique, GAN2C, which imposes dual consistency constraints (2C) on a closed-loop encoder-decoder architecture based on generative adversarial nets (GAN). When adopting deep neural networks as function approximators, GAN2C enables highly effective multi-modality image conversion with sparse observations in the target modes. For empirical demonstration and model evaluation, we show that deep neural networks trained in GAN2C can infer colors for grayscale images, as well as estimate rich 3D information of a scene by densely predicting depths. The experimental results show that on both tasks GAN2C, as a generic framework, is comparable to or advances the state-of-the-art performance achieved by highly specialized systems. Code is available at https://github.com/AdalinZhang/GAN2C.
Zhang, Q, Lu, J, Wu, D & Zhang, G 2018, 'Cross-domain Recommendation with Consistent Knowledge Transfer by Subspace Alignment', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Web Information Systems Engineering, Springer International Publishing, Dubai, United Arab Emirates, pp. 67-82.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Recommender systems have drawn great attention from both the academic area and practical websites. One challenging and common problem in many recommendation methods is data sparsity, due to the limited number of observed user interactions with products/services. Cross-domain recommender systems are developed to tackle this problem by transferring knowledge from a source domain with relatively abundant data to a target domain with scarce data. Existing cross-domain recommendation methods assume that similar user groups have similar tastes on similar item groups but ignore the divergence between the source and target domains, resulting in a decrease in accuracy. In this paper, we propose a cross-domain recommendation method that transfers consistent group-level knowledge by aligning the source subspace with the target one. Through subspace alignment, the discrepancy caused by the domain shift is reduced, and the knowledge shared between the two domains via refined item-user bi-clustering is ensured to be consistent. Experiments are conducted on five real-world datasets in three categories: movies, books and music. The results for nine cross-domain recommendation tasks show that our proposed method has improved accuracy compared with five benchmarks.
Zhang, Q, Wu, D, Lu, J & Zhang, G 2018, 'Cross-domain Recommendation with Probabilistic Knowledge Transfer', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 208-219.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Recommender systems have drawn great attention from both the academic and practical areas. One challenging and common problem in many recommendation methods is data sparsity, due to the limited number of observed user interactions with products/services. To alleviate the data sparsity problem, cross-domain recommendation methods are developed to share group-level knowledge across several domains so that recommendation in a domain with scarce data can benefit from domains with relatively abundant data. However, divergence exists in the data of similar domains, so the extracted group-level knowledge is not always suitable to be applied in the target domain, and recommendation accuracy in the target domain is thus impaired. In this paper, we propose a cross-domain recommendation method with probabilistic knowledge transfer. The proposed method maintains two sets of group-level knowledge, profiling both domain-shared and domain-specific characteristics of the data. In this way, users' mixed preferences can be profiled comprehensively, which improves the performance of cross-domain recommender systems. Experiments are conducted on five real-world datasets in three categories: movies, books and music. The results for nine cross-domain recommendation tasks show that our proposed method has improved accuracy compared with five benchmarks.
Zhang, W, Xiong, J, Gui, L, Liu, B, Qiu, M & Shi, Z 2018, 'On Popular Services Pushing and Distributed Caching in Converged Overlay Networks', 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Valencia, Spain.
View/Download from: Publisher's site
Zhang, X, Yao, L, Zhang, D, Wang, X, Sheng, QZ & Gu, T 2018, 'Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis', Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, ACM.
View/Download from: Publisher's site
Zhang, Y, Hu, C, Lu, X & Li, J 2018, 'A novel illumination normalization method in face recognition based on logarithmic total variation', Tenth International Conference on Digital Image Processing (ICDIP 2018), Tenth International Conference on Digital Image Processing (ICDIP 2018), SPIE, Shanghai, China.
View/Download from: Publisher's site
View description>>
© 2018 SPIE. Varying illumination is a tricky issue in face recognition. In this paper, we improve on the logarithmic total variation (LTV) algorithm to handle varying illumination in face images. First of all, logarithmic total variation (LTV) is adopted to separate the face image into high-frequency and low-frequency features. Then, a novel illumination normalization method founded on advanced contrast limited adaptive histogram equalization (CLAHE) is proposed to handle the low-frequency feature. Furthermore, threshold-value filtering is utilized to enhance the high-frequency feature. Finally, the normalized face image takes shape through the enhanced high-frequency feature and the normalized low-frequency feature. We conduct comparative experiments on the YALE B database, covering three types of techniques. The final results show that the CLA&TH-LTV algorithm achieves excellent recognition performance compared to other state-of-the-art algorithms.
Zhang, Y, Ren, W, Zhu, T & Bi, W 2018, 'MoSa: A Modeling and Sentiment Analysis System for Mobile Application Big Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Algorithms and Architectures for Parallel Processing, Springer International Publishing, Guangzhou, China, pp. 582-595.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. A large amount of data about end users is generated through interaction with mobile applications, and it has become a valuable source for sensing human behaviors and public sentiment trends on particular topics. Existing works concentrate on traditional feedback data from websites, which usually come from desktops rather than mobile terminals. Few studies have been conducted on interactive data from mobile applications such as news aggregation and recommendation applications. In this paper, we propose a system that can model the feedback behaviors of mobile users and analyze sentiment trends in mobile feedback. The test data are authentic, dumped from Toutiao, the most frequently used mobile application in China. We propose several methods for analyzing the sentiment of comments and algorithms for modeling feedback behaviors. Using our system, called MoSa, we discover several implicit behavior models and hidden sentiment trends: during the news spreading stage, the number of comments grows linearly, with a monthly slope of 3 over 3 months; the dynamics of replying comments are positively correlated with personal daily routines over 24 h; replying to comments is far rarer than clicking agreement in mobile applications; and the standard deviation of sentiment values in comments is strongly influenced by the timing stage. Our system and modeling methods provide empirical results for guiding interaction design in the mobile Internet, social networks, and blockchain-based crowdsourcing.
Zhang, Y, Saberi, M, Chang, E & Abbasi, A 2018, 'Solution and Reference Recommendation System Using Knowledge Fusion and Ranking', 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), IEEE, Xi'an, China, pp. 31-38.
View/Download from: Publisher's site
Zhang, Y, Wang, W, Xuan, J, Lu, J, Zhang, G & Lin, H 2018, 'Map-based medical practice behavior analysis: Methodology and a case study on Australia’s medical practices', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC, pp. 1323-1330.
View/Download from: Publisher's site
Zhang, Y, Wang, X, Zhang, G & Lu, J 2018, 'Predicting the dynamics of scientific activities: A diffusion‐based network analytic methodology', Proceedings of the Association for Information Science and Technology, Annual Meeting of the Association for Information Science and Technology, Wiley, Vancouver, Canada, pp. 598-607.
View/Download from: Publisher's site
View description>>
With the rapid explosion of information and the dramatic development of bibliometric techniques in the past decades, it has become a challenge to comprehensively, extensively, and efficiently understand science maps. Aiming to explore in-depth insights from science maps and predict the dynamics of scientific activities, this paper, based on the co-occurrence statistics of terms derived from scientific documents, proposes a diffusion-based network analytic methodology to conduct the prediction study from two aspects: the research interests of scientific researchers and the evolutionary directions of scientific topics. A case study on academic articles downloaded from three leading journals in the field of bibliometrics demonstrates the feasibility of the methodology. The future directions of bibliometrics are identified, such as the application of information technologies to traditional bibliometric data, the interactions between bibliometrics and science, technology, and innovation policy issues, and individual-level bibliometrics. The results also provide recommendations as potential research interests for a set of experts. The proposed method could serve as a toolkit for forecasting studies in a given technological area or discipline, and as a recommender system to assist academic researchers in identifying potential research interests and extended areas.
Zhang, Z, Oberst, S & Lai, JCS 2018, 'Instability analysis of brake squeal with uncertain contact conditions', 25th International Congress on Sound and Vibration 2018, ICSV 2018: Hiroshima Calling, International Congress on Sound and Vibration, Hiroshima, Japan, pp. 4031-4038.
View description>>
Brake squeal, a phenomenon of friction-induced self-excited vibration, has long been a noise, vibration and harshness (NVH) problem for the automotive industry owing to warranty-related claims and customer dissatisfaction. Intensive research over the past two decades has provided insight into a number of mechanisms that trigger brake squeal. However, brake squeal is a transient and nonlinear phenomenon, and many determining factors are not known precisely, such as material properties, operating conditions (brake pad pressure and temperature, speed), contact conditions between pad and disc, and friction. As a result, reliable prediction of brake squeal propensity is difficult to achieve, and extensive noise dynamometer testing is still required to identify problematic frequencies for the development and validation of countermeasures. Here, the influence of uncertainties in friction modelling and contact conditions on friction-induced self-excited vibrations of a 3 x 3 coupled friction oscillator model is examined by combining the linear Complex Eigenvalue Analysis (CEA) method widely used in industry with a stochastic approach that incorporates these uncertainties. It has been found that unstable vibration modes with a consistently high occurrence of instability, independent of the contact area, friction modelling and sliding speed, could be identified. Such unstable modes are considered robustly unstable and are the most likely to produce squeal. An example is given to illustrate how instability countermeasures could be designed by repeating the uncertainty analysis for these robustly unstable modes. These results highlight the potential for reliable prediction of brake squeal propensity in a full brake system using a stochastic approach with the CEA.
Zhao, G, Wang, Q, Xu, C & Yu, S 2018, 'Analyzing and Modelling the Interference Impact on Energy Efficiency of WLANs', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, Kansas City, MO, USA, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. The demand for high bandwidth drives dense wireless local area network (WLAN) deployments, which may result in severe co-channel interference and increased energy consumption. To clearly quantify the effect of interference on the energy consumption of 802.11 access devices, it is crucial to measure and model that effect. This paper takes extensive measurements of five different WiFi interference types for downstream UDP transmission in a real environment. Based on these measurements, we establish a physical interference-energy efficiency (IFEE) model, reconstructing the signal to interference plus noise ratio (SINR) notion and the modulation and coding scheme (MCS) rate adaptation mechanism, to accurately predict the impact of interference. Our measurements demonstrate that interference reduces both energy efficiency and throughput. Compared with transmit power, channel-separation interference dominates. It is worth noting that the impact of interference from multiple interferers is less than in the single-interferer scenario. Simulation experiments verify that our IFEE model achieves high accuracy in modeling interference and energy efficiency.
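The qualitative relation underlying this abstract (interference lowers the SINR, which lowers achievable throughput and hence energy efficiency) can be illustrated with a toy link-level calculation. This is a generic Shannon-capacity sketch, not the paper's measurement-driven IFEE model; all function names and parameter values are illustrative.

```python
import math

def energy_efficiency(signal_mw, interference_mw, noise_mw, bandwidth_hz, power_w):
    """Bits per joule of a link, using the textbook capacity-over-power ratio."""
    sinr = signal_mw / (interference_mw + noise_mw)        # signal to interference plus noise
    throughput_bps = bandwidth_hz * math.log2(1.0 + sinr)  # Shannon capacity bound
    return throughput_bps / power_w                        # energy efficiency in bits/J

# Adding a co-channel interferer lowers the SINR and thus the energy efficiency.
ee_clean = energy_efficiency(10.0, 0.0, 1.0, 20e6, 1.0)
ee_interfered = energy_efficiency(10.0, 5.0, 1.0, 20e6, 1.0)
```

A single added interferer here drops the SINR from 10 to roughly 1.7, illustrating why the measured energy efficiency degrades under co-channel interference.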
Zhao, Y, Ma, X, Li, J, Yu, S & Li, W 2018, 'Revisiting Website Fingerprinting Attacks in Real-World Scenarios: A Case Study of Shadowsocks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Network and System Security, Springer International Publishing, Hong Kong, China, pp. 319-336.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Website fingerprinting has been recognized as a traffic analysis attack against encrypted traffic induced by anonymity networks (e.g., Tor) and encrypted proxies. Recent studies have demonstrated that, leveraging machine learning techniques and numerous side-channel traffic features, website fingerprinting is effective in inferring which website a user is visiting via anonymity networks and encrypted proxies. In this paper, we concentrate on Shadowsocks, an encrypted proxy widely used to evade Internet censorship, and we are interested in to what extent state-of-the-art website fingerprinting techniques can break the privacy of Shadowsocks users in real-world scenarios. By design, Shadowsocks does not deploy any timing-based or packet size-based defenses like Tor. Therefore, we expect that website fingerprinting could achieve better attack performance against Shadowsocks compared to Tor. However, after deploying Shadowsocks with more than 20 active users and collecting 30 GB traces during one month, our observation is counter-intuitive. That is, the attack performance against Shadowsocks is even worse than that against Tor (based on public Tor traces). Motivated by such an observation, we investigate a series of practical factors affecting website fingerprinting, such as data labeling, feature selection, and number of instances per class. Our study reveals that state-of-the-art website fingerprinting techniques may not be effective in real-world scenarios, even in the face of Shadowsocks which does not deploy typical defenses.
Zhou, Z, Liu, S, Xu, G, Xie, X, Yin, J, Li, Y & Zhang, W 2018, 'Knowledge-Based Recommendation with Hierarchical Collaborative Embedding', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Melbourne, Australia, pp. 222-234.
View/Download from: Publisher's site
View description>>
© 2018, Springer International Publishing AG, part of Springer Nature. Data sparsity is a common issue in recommendation systems, particularly in collaborative filtering. In real recommendation scenarios, user preferences are often quantitatively sparse because of the nature of the application. To address this issue, we propose a knowledge graph-based semantic information enhancement mechanism to enrich user preferences. Specifically, the proposed Hierarchical Collaborative Embedding (HCE) model leverages both the network structure and the text information embedded in knowledge bases to supplement traditional collaborative filtering. The HCE model jointly learns the latent representations from user preferences and from the linkages between items and the knowledge base, as well as the semantic representations from the knowledge base. Experimental results on a GitHub dataset demonstrate that semantic information from the knowledge base is properly captured, resulting in improved recommendation performance.
Zhu, F, Lin, A, Zhang, G & Lu, J 2018, 'Counterfactual Inference with Hidden Confounders Using Implicit Generative Models', AI 2018: Advances in Artificial Intelligence, Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Wellington, New Zealand, pp. 519-530.
View/Download from: Publisher's site
View description>>
In observational studies, a key problem is to estimate the causal effect of a treatment on some outcome. Counterfactual inference tries to handle it by directly learning the treatment exposure surfaces. One of the biggest challenges in counterfactual inference is the existence of unobserved confounders, latent variables that affect both the treatment and outcome variables. Building on recent advances in latent variable modelling and efficient Bayesian inference techniques, deep latent variable models, such as variational auto-encoders (VAEs), have been used to ease this challenge by learning the latent confounders from the observations. However, for the sake of tractability, the posterior of the latent variables used in existing methods is assumed to be Gaussian with a diagonal covariance matrix. This specification is quite restrictive, and even contradictory to the underlying truth, limiting the quality of the resulting generative models and of the causal effect estimation. In this paper, we propose to take advantage of implicit generative models to circumvent this limitation by using black-box inference models. To perform inference for an implicit generative model with intractable likelihood, we adopt recent implicit variational inference based on adversarial training to obtain a close approximation to the true posterior. Experiments on simulated and real data show that the proposed method matches the state of the art.
Zhu, F, Lin, A, Zhang, G, Lu, J & Zhu, D 2018, 'Pareto-smoothed inverse propensity weighing for causal inference', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC, Belfast, Northern Ireland, UK, pp. 413-420.
View/Download from: Publisher's site
View description>>
Causal inference has received great attention across different fields ranging from economics, statistics, biology, medicine, to machine learning. Observational causal inference is challenging because confounding variables may influence both the treatment and outcome. Propensity score based methods are theoretically able to handle this confounding bias problem. However, in practice, propensity score estimation is subject to extreme values, leading to small effective sample size and making the estimators unstable or even misleading. Two strategies — truncation and normalization — are usually adopted to address this problem. In this paper, we propose a new Pareto-smoothing strategy to tackle this problem. Simulations and a real-world example validate the effectiveness.
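The truncation-and-normalization baseline that this abstract contrasts against can be sketched in a few lines. The sketch below shows plain truncated, self-normalized (Hajek) inverse propensity weighting, not the proposed Pareto-smoothing strategy; the function name and clipping bounds are illustrative.

```python
def ipw_ate(outcomes, treatments, propensities, clip=(0.05, 0.95)):
    """Average treatment effect via inverse propensity weighting, stabilised
    by truncating extreme propensity scores and normalising the weights
    within each group (the Hajek form of the estimator)."""
    lo, hi = clip
    t_sum = t_w = c_sum = c_w = 0.0
    for y, t, e in zip(outcomes, treatments, propensities):
        e = min(max(e, lo), hi)          # truncation guards against extreme weights
        if t:
            w = 1.0 / e
            t_sum += w * y
            t_w += w
        else:
            w = 1.0 / (1.0 - e)
            c_sum += w * y
            c_w += w
    # weighted treated mean minus weighted control mean
    return t_sum / t_w - c_sum / c_w
```

With balanced propensity scores the estimator reduces to a simple difference of group means; the clipping only matters when estimated scores approach 0 or 1, which is exactly the instability the abstract describes.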
Zhu, Y, Fu, A, Yu, S, Yu, Y, Li, S & Chen, Z 2018, 'New Algorithm for Secure Outsourcing of Modular Exponentiation with Optimal Checkability Based on Single Untrusted Server', 2018 IEEE International Conference on Communications (ICC), 2018 IEEE International Conference on Communications (ICC 2018), IEEE, USA.
View/Download from: Publisher's site
View description>>
© 2018 IEEE. Cloud computing is increasingly popular, and outsourcing, as one of its important applications, has aroused great concern. Modular exponentiation is an expensive discrete-logarithm operation that is difficult for users to compute locally. Securely outsourcing modular exponentiation to the cloud is therefore a good choice for resource-limited users seeking to reduce computation overhead. In this paper, we devise a fully verifiable secure outsourcing scheme for modular exponentiation that uses a single server, eliminating the collusion attacks that can occur in algorithms based on two untrusted servers. Meanwhile, our algorithm allows outsourcers to detect any misbehavior with probability 1, meaning that its checkability is a significant improvement over other single-server-based schemes. Furthermore, to protect data privacy, we propose a new division method to hide the original outsourced data. Compared with state-of-the-art schemes, our secure outsourcing algorithm has outstanding performance in both efficiency and checkability.
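As a loose illustration of the checkability idea (a client splitting work across server queries and verifying that the returned values are mutually consistent), consider the sketch below. This is not the paper's algorithm: the proposed scheme additionally blinds the outsourced data so the server learns nothing, and its checks catch any misbehavior with probability 1, which this naive split-and-recombine check does not. All names here are hypothetical.

```python
import random

def honest_server(u, e, p):
    """Stand-in for the untrusted cloud: returns u^e mod p."""
    return pow(u, e, p)

def verified_modexp(u, a, p, server):
    """Compute u^a mod p via an untrusted server, checking that the results
    for a random additive split of the exponent recombine to the full answer."""
    a1 = random.randrange(1, a)          # random split: a = a1 + (a - a1)
    r_full = server(u, a, p)
    r1 = server(u, a1, p)
    r2 = server(u, a - a1, p)
    if (r1 * r2) % p != r_full:          # u^a1 * u^(a-a1) must equal u^a (mod p)
        raise ValueError("server result failed the consistency check")
    return r_full

result = verified_modexp(5, 117, 1009, honest_server)
```

The client performs only multiplications and a comparison; the expensive exponentiations are all delegated, which is the point of outsourcing schemes of this kind.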
Zowghi, D 2018, ''Affects' of User Involvement in Software Development', 2018 1st International Workshop on Affective Computing for Requirements Engineering (AffectRE), 2018 1st International Workshop on Affective Computing for Requirements Engineering (AffectRE), IEEE.
View/Download from: Publisher's site
Zowghi, D, Bano, M, Ferrari, A, Spoletini, P & Gnesi, S 2018, 'Interview Review: an Empirical Study on Detecting Ambiguities in Requirements Elicitation Interviews', Lecture Notes in Computer Science, International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer Verlag, Utrecht, The Netherlands, pp. 101-118.
View/Download from: Publisher's site
View description>>
[Context and Motivation] Ambiguities identified during requirements elicitation interviews can be used by the requirements analyst as triggers for additional questions and, consequently, for disclosing further – possibly tacit – knowledge. Therefore, every unidentified ambiguity may be a missed opportunity to collect additional information. [Question/problem] Ambiguities are not always easy to recognize, especially during highly interactive activities such as requirements elicitation interviews. Moreover, since different persons can perceive ambiguous situations differently, the unique perspective of the analyst in the interview might not be enough to identify all ambiguities. [Principal idea/results] To maximize the number of ambiguities recognized in interviews, this paper proposes a protocol to conduct reviews of requirements elicitation interviews. In the proposed protocol, the interviews are audio recorded and the recordings are inspected by both the analyst who performed the interview and another reviewer. The idea is to use the identified cases of ambiguity to create questions for the follow-up interviews. Our empirical evaluation of this protocol involves 42 students from Kennesaw State University and University of Technology Sydney. The study shows that, during the review, the analyst and the other reviewer identify 68% of the total number of ambiguities discovered, while 32% were identified during the interviews. Furthermore, the ambiguities identified by analysts and other reviewers during the review significantly differ from each other. [Contribution] Our results indicate that interview reviews allow the identification of a considerable number of undetected ambiguities, and can potentially be highly beneficial to discover unexpressed information in future interviews.
Zuo, H, Zhang, G & Lu, J 2018, 'Fuzzy Domain Adaptation Using Unlabeled Target Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Siem Reap, Cambodia, pp. 242-250.
View/Download from: Publisher's site
View description>>
© Springer Nature Switzerland AG 2018. Transfer learning has been emerging recently and gaining attention because of its ability to deal with the “small labeled data” issue in new markets and for new products. It addresses the problem of leveraging knowledge acquired in a previous domain (a source domain with a large amount of labeled data) to improve the accuracy of tasks in the current domain (a target domain with little labeled data). Fuzzy rule-based transfer learning methods have been developed for their ability to deal with the uncertainty in domain adaptation scenarios. Although some effort has been made to develop fuzzy methods, existing ones only apply knowledge of the labeled data in the target domain to assist model construction. This work develops a new method that explores and utilizes the information contained in unlabeled target data to improve the performance of the newly constructed model. Experiments on both synthetic and real-world datasets illustrate the effectiveness of our method and indicate its scope of application.
Zuo, H, Zhang, G & Lu, J 2018, 'Semi-supervised transfer learning in Takagi-Sugeno fuzzy models', Data Science and Knowledge Engineering for Sensing Decision Support, Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018), WORLD SCIENTIFIC.
View/Download from: Publisher's site
van den Hoven, E, Kenning, G & Van Gennip, D 2018, Materialising Memories: Design Research to Support Remembering, pp. 1-28, Sydney.
Cao, Z, Chuang, C-H, King, J-K & Lin, C-T 2018, 'Multi-channel EEG recordings during a sustained-attention driving task'.
Cao, Z, Ding, W, Wang, Y-K, Hussain, FK, Al-Jumaily, A & Lin, C-T 2018, 'Effects of Repetitive SSVEPs on EEG Complexity using Multiscale Inherent Fuzzy Entropy'.
Cao, Z, Lin, C-T, Ding, W, Chen, M-H, Li, C-T & Su, T-P 2018, 'Identifying Ketamine Responses in Treatment-Resistant Depression Using a Wearable Forehead EEG'.
Cao, Z, Lin, C-T, Lai, K-L, Ko, L-W, King, J-T, Fuh, J-L & Wang, S-J 2018, 'Extraction of SSVEPs-based Inherent Fuzzy Entropy Using a Wearable Headband EEG in Migraine Patients'.
Cao, Z, Prasad, M, Tanveer, M & Lin, C-T 2018, 'Tensor Decomposition for EEG Signal Retrieval', arXiv.
View/Download from: Publisher's site
Chen, J, Li, K, Tang, Z, Bilal, K, Yu, S, Weng, C & Li, K 2018, 'A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment'.
Chen, S, Wang, Y, Lin, C-T, Ding, W & Cao, Z 2018, 'Semi-supervised Feature Learning For Improving Writer Identification'.
Gill, AQ 2018, 'Secure Information Architecture: Security by Design'.
View description>>
ACS & DAMA Joint Professional Development event 2018
Han, B, Tsang, IW, Xiao, X, Chen, L, Fung, S-F & Yu, CP 2018, 'Privacy-preserving Stochastic Gradual Learning'.
Hu, Y, Liyanage, M, Mansoor, A, Thilakarathna, K, Jourjon, G & Seneviratne, A 2018, 'Blockchain-based Smart Contracts - Applications and Challenges'.
Hu, Y, Manzoor, A, Ekparinya, P, Liyanage, M, Thilakarathna, K, Jourjon, G, Seneviratne, A & Ylianttila, ME 2018, 'A Delay-Tolerant Payment Scheme Based on the Ethereum Blockchain'.
Lu, S, Zhang, G, Luo, Z & Oberst, S 2018, 'Order pattern recurrence plots unveiling determinism buried in noise'.
Meng, Q, Catchpoole, D, Skillicorn, D & Kennedy, PJ 2018, 'Relational Autoencoder for Feature Extraction'.
Nizami, S, Green, JR & McGregor, C 2018, 'Implementation of Artifact Detection in Critical Care: A Methodological Review'.
Oberst, S, Stender, M, Baetz, J, Campbell, G, Lampe, F, Lai, JCS, Morlock, M & Hoffmann, N 2018, 'Extracting differential equations from measured vibro-acoustic impulse responses in cavity preparation of total hip arthroplasty'.
Rozpędek, F, Schiet, T, Thinh, LP, Elkouss, D, Doherty, AC & Wehner, S 2018, 'Optimizing practical entanglement distillation'.
Sanders, YR, Low, GH, Scherer, A & Berry, DW 2018, 'Black-box quantum state preparation without arithmetic'.
Thinh, LP, Faist, P, Helsen, J, Elkouss, D & Wehner, S 2018, 'Practical and reliable error bars for quantum process tomography'.
Thinh, LP, Varvitsiotis, A & Cai, Y 2018, 'Structure of the set of quantum correlators via semidefinite programming'.
van den Hoven, E, Kenning, G & Van Gennip, D 2018, 'Festival session: Design to support remembering'.
View description>>
Session at the Design Research Innovation Festival (DRIVE), Dutch Design Week, 24-25 October 2018, Eindhoven
Verma, R, Merigó, JM & Sahni, M 2018, 'Pythagorean fuzzy graphs: Some results'.
Wu, W, Li, B, Chen, L, Gao, J & Zhang, C 2018, 'A Review for Weighted MinHash Algorithms'.
Yin, K, Laranjo, L, Tong, HL, Lau, AYS, Kocaballi, AB, Martin, P, Vagholkar, S & Coiera, E 2018, 'Context-Aware Systems for Chronic Disease Patients: Scoping Review (Preprint)', JMIR Publications Inc.
View/Download from: Publisher's site
View description>>
BACKGROUND
Context-aware systems, also known as context-sensitive systems, are computing applications designed to capture, interpret, and use contextual information and provide adaptive services according to the current context of use. Context-aware systems have the potential to support patients with chronic conditions; however, little is known about how such systems have been utilized to facilitate patient work.
OBJECTIVE
This study aimed to characterize the different tasks and contexts in which context-aware systems for patient work were used as well as to assess any existing evidence about the impact of such systems on health-related process or outcome measures.
METHODS
A total of 6 databases (MEDLINE, EMBASE, CINAHL, ACM Digital, Web of Science, and Scopus) were scanned using a predefined search strategy. Studies were included in the review if they focused on patients with chronic conditions, involved the use of a context-aware system to support patients’ health-related activities, and reported the evaluation of the systems by the users. Studies were screened by independent reviewers, and a narrative synthesis of included studies was conducted.
RESULTS
The database search retrieved 1478 citations; 6 papers were included, all published from 2009 onwards. The majority of the papers were quasi-experimental and involved pilot and usability testing with a small number of users; there were no randomized controlled trials (RCTs) to evaluate the efficacy of a context-aware system. In the included studies, context was captured ...
Zhu, L, Zheng, B, Shen, M, Yu, S, Gao, F, Li, H, Shi, K & Gai, K 2018, 'Research on the Security of Blockchain Data: A Survey'.