Jiang, H, Chang, R, Ren, L, Dong, W, Jiang, L & Yang, S 2017, An Effective Authentication for Client Application Using ARM TrustZone, Springer International Publishing.
© 2017, Springer International Publishing AG. Owing to the lack of authentication for client applications (CA), traditional protection mechanisms based on ARM TrustZone may lead to sensitive data leakage within the trusted execution environment (TEE). Furthermore, session resources can be occupied by a malicious CA due to a design flaw in the session mechanism between the CA and the trusted application (TA). Attackers can therefore initiate requests to read data stored in the secure world, or launch a DoS attack, by forging a malicious CA. To address these authentication problems, this paper proposes a CA authentication scheme using ARM TrustZone. When a CA establishes a session with a trusted application, CA authentication is executed in the TEE to prevent sensitive data from being accessed by malicious CAs. At the same time, the TA closes the session and releases the occupied resources. The proposed authentication scheme is implemented on a simulation platform built with QEMU and OP-TEE. The experimental results show that the proposed scheme can detect content changes in a CA, avoid sensitive data leakage and prevent DoS attacks.
Cetindamar, D 2017, 'The Turkish Biotechnology System' in Advances in Bioinformatics and Biomedical Engineering, IGI Global, pp. 251-268.
This chapter empirically examines the biotechnology innovation system in order to present the concerns of developing countries. Even though it is not possible to create standard prescriptions across countries, this paper aims to develop a solid understanding of how biotechnology and institutions co-evolve, which might shed light on innovation policy issues for biotechnology across developing countries. The immediate audience is Turkish policy makers, but the work will surely have policy implications for developing countries in general. By mapping innovation processes/functions over time, it is possible to develop insights into the dynamics of innovation systems. This mapping is carried out for the Turkish biotechnology system, and the findings are summarized.
Cetindamar, D & König, A-N 2017, 'Turkey' in The World Guide to Sustainable Enterprise, Routledge, pp. 198-204.
Dyson, LE, Wishart, J & Andrews, T 2017, 'Ethical Issues Surrounding the Adoption of Mobile Learning in the Asia-Pacific Region' in Education in the Asia-Pacific Region: Issues, Concerns and Prospects, Springer Singapore, pp. 45-65.
© Springer Nature Singapore Pte Ltd. 2017. Mobile technologies are increasingly part of the everyday life of people in the Asia-Pacific Region and are used to support a range of work, life and learning activities. In spite of the high penetration of mobile phones into all socio-economic groups, many educational organisations, from primary school to higher education, have been slow to adopt mobile learning. In large part, this is due to concerns about ethics and possible misuse of these devices. Examples include fears of students being distracted if they bring their mobile phones to the classroom, concerns over cheating and worries about the use of personal information. Mobile devices tend to be associated with play, not work, leading to misperceptions by others that students are not on task when seen using their mobile device in an educational setting. In addition, there are equity issues if not all students have access to the technology. However, vignettes presented in this chapter also demonstrate how mobile learning is being used to overcome major educational inequities in the region. The authors propose strategies for fostering a proactive approach, taking into account local contexts and cultures. These include student education regarding responsible behaviour, professional workshops for teachers based on ethical scenario development and the development of institutional and national guidelines.
Eggen, B, van den Hoven, E & Terken, J 2017, 'Human-Centered Design and Smart Homes: How to Study and Design for the Home Experience?' in Handbook of Smart Homes, Health Care and Well-Being, Springer International Publishing, Germany, pp. 83-92.
© Springer International Publishing Switzerland 2017. The focus of this chapter is on designing for smart homes. The perspective will be user-driven design research. The chapter starts with a context analysis of the home environment. This analysis shows that, from a user perspective, home is about emotions and not about the physical house with all its smart applications. It is this “home experience” designers have to design for. The core of the chapter consists of the description of three big challenges that modern designers (need to) face when designing or studying smart home environments. These challenges are linked to existing and future design paradigms. The following challenges are addressed: (1) What makes a worthwhile user experience? (2) How to design for user experience? (3) How to design for user experiences that can be seamlessly integrated in everyday life? The chapter concludes with a summary of the main insights that emerge from current design research practice facing these challenges.
Gao, L, Luan, TH, Liu, B, Zhou, W & Yu, S 2017, 'Fog Computing and Its Applications in 5G' in 5G Mobile Communications, Springer International Publishing, Switzerland, pp. 571-593.
With smartphones becoming our everyday companions, high-quality mobile applications have become an integral part of people's lives. The intensive and ubiquitous use of mobile applications has led to explosive growth of mobile data traffic. Accommodating the surging mobile traffic while providing guaranteed service quality to mobile users represents a key issue for 5G mobile networks. This motivates the emergence of Fog computing as a promising, practical and efficient solution tailored to serving mobile traffic. Fog computing deploys highly virtualized computing and communication facilities in the proximity of mobile users. Dedicated to serving mobile users, Fog computing exploits the predictable service demand patterns of mobile users and typically provides desirable localized services accordingly. Combining the above features, Fog computing can provide mobile users with the demanded services through low-latency, short-distance local connections. In this chapter, we introduce the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss potential research issues from the perspective of 5G networking.
Gill, AQ 2017, 'Applying Agility and Living Service Systems Thinking to Enterprise Architecture' in Decision Management, IGI Global, USA, pp. 487-502.
Adaptive enterprise architecture capability plays an important role in enabling complex enterprise transformations. One of the key challenges when establishing an adaptive enterprise architecture capability is identifying the enterprise context and the scope of the enterprise architecture. The objective of this paper is to develop and present an adaptive enterprise service system (AESS) conceptual model, which is a part of The Gill Framework for Adaptive Enterprise Service Systems. This model has been developed using a “Design Research” approach. The AESS conceptual model assimilates agility, service, and living systems thinking (following multi-agent system modelling) for describing and analyzing the enterprise context and scope when establishing an adaptive enterprise architecture capability. The target audience of this AESS model-driven approach includes both enterprise architecture researchers and practitioners.
Lance, BJ, Touryan, J, Wang, Y-K, Lu, S-W, Chuang, C-H, Khooshabeh, P, Sajda, P, Marathe, A, Jung, T-P, Lin, C-T & McDowell, K 2017, 'Towards Serious Games for Improved BCI' in Handbook of Digital Games and Entertainment Technologies, Springer Singapore, pp. 197-224.
Mathieson, L, Mendes, A, Marsden, J, Pond, J & Moscato, P 2017, 'Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques' in Methods in Molecular Biology, Springer New York, Germany, pp. 299-325.
© Springer Science+Business Media New York 2017. This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). The method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search is used to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public-domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
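The (α, β)-k-Feature Set condition described in this abstract can be stated concretely: a candidate feature set is feasible if every pair of samples from different classes differs on at least α of the chosen features, and every pair from the same class agrees on at least β of them. A minimal sketch of such a feasibility check on binary data (the toy data and function names are illustrative assumptions, not the chapter's implementation):

```python
from itertools import combinations

def is_alpha_beta_k_feature_set(samples, labels, features, alpha, beta):
    """Check the (alpha, beta)-k-feature-set condition on binary samples.

    samples: list of equal-length 0/1 tuples; labels: class label per sample;
    features: indices of the candidate feature subset.
    """
    for (xi, yi), (xj, yj) in combinations(zip(samples, labels), 2):
        differs = sum(xi[f] != xj[f] for f in features)
        agrees = len(features) - differs
        if yi != yj and differs < alpha:   # inter-class pairs must differ enough
            return False
        if yi == yj and agrees < beta:     # intra-class pairs must agree enough
            return False
    return True

# Toy example: feature 0 separates the classes, feature 1 is shared noise.
X = [(0, 1), (0, 0), (1, 1), (1, 0)]
y = ["benign", "benign", "malignant", "malignant"]
print(is_alpha_beta_k_feature_set(X, y, [0], alpha=1, beta=1))  # → True
```

The reduction rules and metaheuristic search the chapter describes would then search for the smallest `features` subset for which this check holds.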
Merigó, JM, Gil-Lafuente, AM & Kacprzyk, J 2017, 'A Bibliometric Analysis of the Publications of Ronald R. Yager' in Studies in Fuzziness and Soft Computing, Springer International Publishing, Germany, pp. 233-248.
© Springer International Publishing Switzerland 2017. This study presents a bibliometric analysis of the publications of Ronald R. Yager available in Web of Science. Currently, Professor Yager has more than 500 publications in this database. He is recognized as one of the most influential authors in the world in computer science. The bibliometric review considers a wide range of issues including a specific analysis of his publications, collaborators and citations. The VOSviewer software is used to visualize his publication and citation network through bibliographic coupling and co-citation analysis. The results clearly show his strong influence in computer science, although they also show a strong influence in engineering and applied mathematics.
Rueda-Armengot, C, Estelles-Miguel, S, Palmer Gato, ME, Albarracín Guillem, JM & Merigó Lindahl, JM 2017, 'Entrepreneurship at the Universitat Politècnica de València' in Innovation, Technology, and Knowledge Management, Springer International Publishing, pp. 239-248.
Sohaib, O & Kang, K 2017, 'E-Commerce Web Accessibility for People with Disabilities' in Goluchowski, J (ed), Lecture Notes in Information Systems and Organisation, Springer International Publishing, pp. 87-100.
© Springer International Publishing Switzerland 2017. In recent years online shopping has grown significantly worldwide. As technology has advanced, new techniques (such as HTML5, Flash-based content and JavaScript) are used in e-commerce websites to visually present information. However, these new techniques pose accessibility problems for people with disabilities when accessed using assistive technology. Therefore, it is important to adopt web accessibility standards such as the Web Content Accessibility Guidelines (WCAG 2.0) in business-to-consumer (B2C) e-commerce websites to increase the satisfaction of consumers of all ages and abilities. This study analyses 30 Australian B2C websites in accordance with WCAG 2.0 using an automated web service. The results show that B2C websites in Australia are not paying attention to web accessibility for people with disabilities. However, e-commerce will succeed in meeting WCAG 2.0 by making B2C e-commerce websites accessible to consumers of all ages and abilities. Recommendations are proposed to improve web accessibility for people with sensory (hearing and vision), motor (limited use of hands) and cognitive (language and learning) disabilities in B2C e-commerce websites.
Sohaib, O & Kang, K 2017, 'Online shopping: Consumer itrust and influencing factors at the individual level' in William D. Nelson (ed), Advances in Business and Management, Nova Science Publishers, USA, pp. 31-61.
Building consumer trust is important to business-to-consumer (B2C) e-commerce firms seeking to extend their reach to consumers globally. Based on the Stimulus-Organism-Response (S-O-R) model, this study examines the moderating role of individual consumer culture on the relationships between web design (website accessibility, visual appearance (colour and images) and social networking services), consumer behaviour (religiosity), privacy, security, emotions (fear and joy) and interpersonal trust (iTrust), i.e. cognitive and affect-based trust, concerning online purchasing intentions. The motivation of this study includes testing and comparing individual consumer cultural values (individualism and uncertainty avoidance) as moderators in the proposed multi-perspective model of online interpersonal trust (iTrust). To empirically test the research model, a survey was conducted in Australia. The results of the analysis show that the stimulus (S) towards which a reaction is made provides a signal regarding the cognitive and affect-based trust (Organism) in a B2C e-commerce website, which influences consumers' purchase intentions (Response) at the individual level. The results highlight the need to consider individual-level cultural differences when identifying the mix of e-commerce strategies to employ in B2C websites, not only at the country level but also in a culturally diverse country such as Australia.
van den Hoven, E, Broekhuijsen, M & Mols, I 2017, 'Design Applications for Social Remembering' in Meade, ML, Harris, CB, Van Bergen, P, Sutton, J & Barnier, A (eds), Collaborative Remembering, Oxford University Press, Oxford, UK, pp. 386-403.
With the increasing availability of technology, the number of digital media people create, such as digital photos, has exploded. At the same time, the number of media they organize has decreased. Many personal media are created for mnemonic reasons, but are often not used as intended or desired. We see this as a design opportunity for supporting new experiences using personal digital media. Our people-centered design perspectives start in the real world, in people’s everyday lives, in which remembering is often a social and collaborative activity. This social activity involves multiple people in different situations, and includes digital media that can serve as memory cues. In this chapter, we present six concept designs for interactive products, specifically conceived to support everyday remembering activities that vary in their degree of socialness. From these concepts, five design characteristics emerge: social situation; type of event; social effect; media process; and media interaction.
Voinov, A 2017, 'Participatory Modeling for Sustainability' in Encyclopedia of Sustainable Technologies, Elsevier, pp. 33-39.
Sustainability is a wicked problem, which is hard to define in a unique way. It cannot be definitively solved and should be treated with a participatory approach involving as many stakeholders in the process as possible. Participatory modeling is an efficient method for dealing with wicked problems. It involves stakeholders in an open-ended process of shared learning and can be essential for developing sustainable technologies. While there may be various levels of participation, the process revolves around a model of the system at stake. The model is built in interaction with the stakeholders; it provides a formalism to synchronize stakeholder thinking and knowledge about the system and to move toward consensus about possible decision making.
Voinov, A & Gaddis, EB 2017, 'Values in Participatory Modeling: Theory and Practice' in Environmental Modeling with Stakeholders, Springer International Publishing, pp. 47-63.
In this chapter, we reflect on some of our experiences as modelers engaged in participatory modeling by outlining some of the lessons we have learned. Specifically, we outline best practices for modelers seeking to engage in the process, identify trade-offs in evaluating model results, and present a call for future research to explicitly incorporate values in the process.
Voinov, AA, Glazyrina, IP, Pavoni, B & Zharova, NA 2017, 'Environmental management in uncertain economies' in Growing Pains Environmental Management in Developing Countries, pp. 148-159.
Voinov, AA, Glazyrina, IP, Pavoni, B & Zharova, NA 2017, 'Environmental management, crime and information: A Russian case study' in Growing Pains Environmental Management in Developing Countries, pp. 117-129.
Xu, G, Wu, Z, Cao, J & Tao, H 2017, 'Models for Community Dynamics' in Encyclopedia of Social Network Analysis and Mining, Springer New York, pp. 1-15.
Yao, L, Sheng, QZ, Ngu, AHH, Li, X, Benatallah, B & Wang, X 2017, 'Building Entity Graphs for the Web of Things Management' in Managing the Web of Things, Elsevier, pp. 275-303.
© 2017 Elsevier Inc. All rights reserved. With recent advances in radio-frequency identification (RFID), wireless sensor networks, and Web services, physical things are becoming an integral part of the emerging ubiquitous Web. Finding correlations among ubiquitous things is a crucial prerequisite for many important applications such as things search, discovery, classification, recommendation, and composition. This article focuses on building an entity graph for the Web of Things: we propose a novel graph-based approach for discovering underlying connections of things by mining the rich content embodied in human-thing interactions in terms of user, temporal and spatial information. We model this information using two graphs, namely a spatiotemporal graph and a social graph. Then, random walk with restart (RWR) is applied to find proximities among things, and a relational graph of things (the entity graph), indicating implicit correlations among things, is learned. The correlation analysis lays a solid foundation contributing to improved effectiveness in things management and analytics. To demonstrate the utility of the proposed approach, we present two typical applications and a systematic case study, concerning a flexible feature-based classification framework and a unified probabilistic factor-based framework, respectively. Our evaluation exhibits the strength and feasibility of the approach.
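The random walk with restart used in this abstract to compute proximities among things can be sketched in a few lines: starting from a seed node, the walker follows edges of the column-normalized adjacency matrix and teleports back to the seed with a fixed restart probability. The toy graph, parameter values and function names below are illustrative assumptions, not the chapter's implementation:

```python
import numpy as np

def rwr_scores(adj, seed, restart=0.15, tol=1e-10, max_iter=1000):
    """Random walk with restart: steady-state visiting probabilities of
    every node when the walk repeatedly teleports back to `seed`."""
    n = adj.shape[0]
    # Column-normalize so each column is a probability distribution.
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    e = np.zeros(n)
    e[seed] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy "thing" graph: a chain 0-1-2-3 of interaction links.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rwr_scores(A, seed=0)
print(scores)  # node 1 (a direct neighbour of the seed) scores higher than node 3
```

High RWR scores relative to a seed thing would then become edge weights in the learned entity graph.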
Abdul Hanan, AH, Yazid Idris, M, Kaiwartya, O, Prasad, M & Ratn Shah, R 2017, 'Real traffic-data based evaluation of vehicular traffic environment and state-of-the-art with future issues in location-centric data dissemination for VANETs', Digital Communications and Networks, vol. 3, no. 3, pp. 195-210.
© 2017 Chongqing University of Posts and Telecommunications. Extensive investigation has been performed into location-centric or geocast routing protocols for reliable and efficient dissemination of information in Vehicular Ad hoc Networks (VANETs). Various location-centric routing protocols have been suggested in the literature for road-safety ITS applications considering urban and highway traffic environments. This paper characterizes vehicular environments based on real traffic data and investigates the evolution of location-centric data dissemination. The current study is carried out with three main objectives: (i) to analyze the impact of dynamic traffic environments on the design of data dissemination techniques, (ii) to characterize location-centric data dissemination in terms of the functional and qualitative behavior of protocols, their properties, and their strengths and weaknesses, and (iii) to identify future research directions in location-based information dissemination. Vehicular traffic environments have been classified into three categories based on physical characteristics such as speed, inter-vehicular distance, neighborhood stability and traffic volume. Real traffic data is used to analyze on-road traffic environments based on the measurement of physical parameters and weather conditions. Design issues are identified in incorporating physical parameters and weather conditions into data dissemination. Functional and qualitative characteristics of location-centric techniques are explored for urban and highway environments. A comparative analysis of location-centric techniques is carried out for urban and highway environments individually, based on some unique and common characteristics of the environments. Finally, some future research directions are identified based on the detailed investigation of traffic environments and location-centric data dissemination techniques.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'An Empirical Study on Writer Identification & Verification from Intra-variable Individual Handwriting', IEEE Access, vol. 7, pp. 24738-24758.
The handwriting of an individual may vary substantially with factors such as mood, time, space, writing speed, writing medium and tool, writing topic, etc. It becomes challenging to perform automated writer verification/identification on a particular set of handwritten patterns (e.g., speedy handwriting) of a person, especially when the system is trained using a different set of writing patterns (e.g., normal speed) of that same person. However, it would be interesting to experimentally analyze if there exists any implicit characteristic of individuality which is insensitive to high intra-variable handwriting. In this paper, we study some handcrafted features and auto-derived features extracted from intra-variable writing. Here, we work on writer identification/verification from offline Bengali handwriting of high intra-variability. To this end, we use various models mainly based on handcrafted features with SVM (Support Vector Machine) and features auto-derived by the convolutional network. For experimentation, we have generated two handwritten databases from two different sets of 100 writers and enlarged the dataset by a data-augmentation technique. We have obtained some interesting results.
Aghasian, E, Garg, S, Gao, L, Yu, S & Montgomery, J 2017, 'Scoring Users’ Privacy Disclosure Across Multiple Online Social Networks', IEEE Access, vol. 5, pp. 13118-13130.
Users of online social networking sites unknowingly disclose sensitive information that aggravates their social and financial risks. Hence, to prevent information loss and privacy exposure, users need ways to quantify their privacy level based on their online social network data. Current studies that focus on measuring privacy risk and disclosure consider only a single source of data, neglecting the fact that users, in general, can have multiple social network accounts disclosing different sensitive information. In this paper, we investigate an approach that can help social media users measure their privacy disclosure score (PDS) based on the information shared across multiple social networking sites. In particular, we identify the main factors that have an impact on users' privacy, namely sensitivity and visibility, to obtain the final disclosure score for each user. By applying statistical and fuzzy systems, we can estimate the potential information loss for a user using the obtained PDS. Our evaluation results with real social media data show that our method provides a better estimation of the privacy disclosure score for users with a presence in multiple online social networks.
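The scoring idea in this abstract — combining the sensitivity of each attribute with its visibility across a user's accounts — can be illustrated with a simple weighted aggregation. The attribute list, weights and combination rule below are illustrative assumptions, not the paper's actual fuzzy model:

```python
def privacy_disclosure_score(profiles, sensitivity):
    """Toy privacy disclosure score across multiple social-network profiles.

    profiles: {network: {attribute: visibility in [0, 1]}}
    sensitivity: {attribute: weight in [0, 1]}
    An attribute's overall visibility is its maximum across networks
    (disclosed anywhere means disclosed); the score is the normalized
    sensitivity-weighted sum.
    """
    visibility = {}
    for attrs in profiles.values():
        for attr, vis in attrs.items():
            visibility[attr] = max(visibility.get(attr, 0.0), vis)
    total = sum(sensitivity[a] * visibility.get(a, 0.0) for a in sensitivity)
    return total / sum(sensitivity.values())

sens = {"birth_date": 0.8, "home_address": 1.0, "hobbies": 0.2}
accounts = {
    "network_a": {"birth_date": 1.0, "hobbies": 1.0},
    "network_b": {"home_address": 0.5},
}
print(round(privacy_disclosure_score(accounts, sens), 3))  # → 0.75
```

Taking the maximum visibility per attribute reflects the paper's core point: aggregating over all of a user's accounts reveals more than any single network does.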
Ahadi, A, Hellas, A & Lister, R 2017, 'A Contingency Table Derived Method for Analyzing Course Data', ACM Transactions on Computing Education, vol. 17, no. 3, pp. 1-19.
We describe a method for analyzing student data from online programming exercises. Our approach uses contingency tables that combine whether or not a student answered an online exercise correctly with the number of attempts that the student made on that exercise. We use this method to explore the relationship between student performance on online exercises done during semester with subsequent performance on questions in a paper-based exam at the end of semester. We found that it is useful to include data about the number of attempts a student makes on an online exercise.
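The contingency tables this abstract describes cross-tabulate two things per exercise: whether a student answered correctly, and how many attempts they made. A minimal sketch with made-up data (the attempt binning and names are illustrative assumptions, not the paper's method):

```python
from collections import Counter

def contingency_table(records, attempt_bins=(1, 2)):
    """Cross-tabulate correctness against binned attempt counts.

    records: list of (correct: bool, attempts: int), one per student,
    for a single exercise. Attempts are binned as 1, 2, or '3+' by default.
    """
    def bin_attempts(n):
        return str(n) if n in attempt_bins else f"{attempt_bins[-1] + 1}+"

    return Counter((correct, bin_attempts(attempts))
                   for correct, attempts in records)

# Five students on one exercise: (answered correctly?, number of attempts).
data = [(True, 1), (True, 1), (True, 3), (False, 2), (False, 5)]
table = contingency_table(data)
print(table[(True, "1")], table[(False, "3+")])  # → 2 1
```

Each cell count could then be compared against exam performance for the same students, as the paper does.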
Arodudu, O, Helming, K, Wiggering, H & Voinov, A 2017, 'Bioenergy from Low-Intensity Agricultural Systems: An Energy Efficiency Analysis', Energies, vol. 10, no. 1, pp. 29-29.
In light of possible future restrictions on the use of fossil fuel, due to climate change obligations and continuous depletion of global fossil fuel reserves, the search for alternative renewable energy sources is expected to be an issue of great concern for policy stakeholders. This study assessed the feasibility of bioenergy production under relatively low-intensity conservative, eco-agricultural settings (as opposed to those produced under high-intensity, fossil fuel based industrialized agriculture). Estimates of the net energy gain (NEG) and the energy return on energy invested (EROEI) obtained from a life cycle inventory of the energy inputs and outputs involved reveal that the energy efficiency of bioenergy produced in low-intensity eco-agricultural systems could be as much as 448.5–488.3 GJ·ha−1 of NEG with an EROEI of 5.4–5.9 for maize ethanol production systems, and as much as 155.0–283.9 GJ·ha−1 of NEG with an EROEI of 14.7–22.4 for maize biogas production systems. This is substantially higher than for industrialized agriculture, with a NEG of 2.8–52.5 GJ·ha−1 and an EROEI of 1.2–1.7 for maize ethanol production systems, as well as a NEG of 59.3–188.7 GJ·ha−1 and an EROEI of 2.2–10.2 for maize biogas production systems. Bioenergy produced in low-intensity eco-agricultural systems could therefore be an important source of energy with immense net benefits for local and regional end-users, provided a more efficient use of the co-products is ensured.
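The two indicators used throughout this study have simple definitions: net energy gain is the life-cycle energy output minus the energy invested, and EROEI is their ratio. A worked sketch with hypothetical input/output figures chosen to fall inside the maize-ethanol ranges reported above (the numbers and function names are illustrative, not taken from the paper's inventory):

```python
def net_energy_gain(output_gj_per_ha, input_gj_per_ha):
    """NEG: energy delivered minus life-cycle energy invested (GJ/ha)."""
    return output_gj_per_ha - input_gj_per_ha

def eroei(output_gj_per_ha, input_gj_per_ha):
    """EROEI: energy returned per unit of energy invested (dimensionless)."""
    return output_gj_per_ha / input_gj_per_ha

# Hypothetical low-intensity maize-ethanol system: 550 GJ/ha delivered,
# 100 GJ/ha invested over the life cycle.
out_gj, in_gj = 550.0, 100.0
print(net_energy_gain(out_gj, in_gj), eroei(out_gj, in_gj))  # → 450.0 5.5
```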
Arodudu, O, Helming, K, Wiggering, H & Voinov, A 2017, 'Towards a more holistic sustainability assessment framework for agro-bioenergy systems — A review', Environmental Impact Assessment Review, vol. 62, pp. 61-75.
Arodudu, OT, Helming, K, Voinov, A & Wiggering, H 2017, 'Integrating agronomic factors into energy efficiency assessment of agro-bioenergy production – A case study of ethanol and biogas production from maize feedstock', Applied Energy, vol. 198, pp. 426-439.
© 2017 Elsevier Ltd. Previous life cycle assessments for agro-bioenergy production rarely considered agronomic factors with local and regional impacts. While many studies have found the environmental and socio-economic impacts of producing bioenergy on arable land not good enough to be considered sustainable, others still consider it one of the most effective direct emission reduction and fossil fuel replacement measures. This study improved LCA methods in order to examine the individual and combined effects of often-overlooked agronomic factors (e.g. alternative farm power, seed sowing, fertilizer, tillage and irrigation options) on life-cycle energy indicators (net energy gain, NEG; energy return on energy invested, EROEI) across the three major agro-climatic zones, namely tropical, sub-tropical and temperate landscapes. From this study, we found that the individual as well as combined effects of agronomic factors may improve the energy productivity of arable bioenergy sources considerably in terms of NEG (from 6.8–32.9 GJ/ha to 99.5–246.7 GJ/ha for maize ethanol; from 39.0–118.4 GJ/ha to 127.9–257.9 GJ/ha for maize biogas) and EROEI (from 1.2–1.8 to 2.1–3.0 for maize ethanol; from 4.3–12.1 to 15.0–33.9 for maize biogas). The agronomic factors considered by this study accounted for 7.5–14.6 times more NEG from maize ethanol, 2.2–3.3 times more NEG from maize biogas, 1.7–1.8 times more EROEI from maize ethanol, and 2.8–3.5 times more EROEI from maize biogas, respectively. This underscores the need to factor local and regional agronomic conditions into energy efficiency and sustainability assessments, as well as into decision-making processes regarding the application of energy from the products of agro-bioenergy production.
Ashamalla, A, Beydoun, G & Low, G 2017, 'Model driven approach for real-time requirement analysis of multi-agent systems', Computer Languages, Systems & Structures, vol. 50, pp. 127-139.
Software systems can fail when requirement constraints are overlooked or violated. With the increased complexity of software systems, software development has become more reliant on model-driven development. This paper advocates a model-driven approach to ensure real-time requirement constraints are taken into account prior to the design of a multi-agent system (MAS). The paper presents the synthesis of a real-time metamodel to support requirements analysis of a MAS. The metamodel describes a collection of modelling units and constraints that can be used to identify the real-time requirements of a multi-agent system during the analysis phase. The paper takes the view that the earlier real-time requirements are modelled in the software development life cycle, the more reliable and robust the resultant system will be, and the more likely it is that an appropriate balance between competing time requirements will be achieved. The paper also presents a validation of the metamodel in a Call Management MAS application. This provides preliminary evidence of the coverage and validity of the metamodel presented.
Avilés-Ochoa, E, Perez-Arellano, LA, León-Castro, E & Merigó, JM 2017, 'Prioritized Induced Probabilistic Distances in Transparency and Access to Information Laws', Fuzzy Economic Review, vol. 22, no. 1, pp. 45-55.
© 2016 Int. Association for Fuzzy-Set Management and Economy. All rights reserved. In this paper, a new extension of the ordered weighted average (OWA) operator is developed using four different methods: prioritized operators, induced operators, probabilistic operators and distance techniques. This new operator is called the prioritized induced probabilistic ordered weighted average distance (PIPOWAD) operator. The primary advantage is that we include in one formulation different characteristics and information provided by a group of decision makers to compare actual and ideal situations. Finally, an example of transparency and access to information law in Mexico is presented to forecast the score based on the expectations of decision makers.
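At the core of the PIPOWAD construction described above is the ordered weighted average: the individual distances between actual and ideal values are sorted in descending order before the weights are applied. A minimal sketch of a plain OWA distance, without the prioritized, induced and probabilistic extensions the paper adds (the example values and names are illustrative assumptions):

```python
def owa_distance(actual, ideal, weights):
    """Ordered weighted average distance between an actual and an ideal
    vector: the individual |a - b| distances are sorted in descending
    order, then combined with the OWA weights (which must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    distances = sorted((abs(a - b) for a, b in zip(actual, ideal)),
                       reverse=True)
    return sum(w * d for w, d in zip(weights, distances))

# Three criteria scored 0-100 against an ideal profile; the weights
# emphasize the largest deviations.
actual = [70, 90, 55]
ideal = [80, 95, 85]
print(round(owa_distance(actual, ideal, [0.5, 0.3, 0.2]), 2))  # → 19.0
```

Because the weights attach to sorted positions rather than to particular criteria, the operator can be tuned between optimistic (weight on the smallest deviations) and pessimistic (weight on the largest) aggregation.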
Azadeh, A, Foroozan, H, Ashjari, B, Motevali Haghighi, S, Yazdanparast, R, Saberi, M & Torki Nejad, M 2017, 'Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach', Enterprise Information Systems, vol. 11, no. 9, pp. 1-15.
View/Download from: Publisher's site
Azadeh, A, Jebreili, S, Chang, E, Saberi, M & Hussain, OK 2017, 'An integrated fuzzy algorithm approach to factory floor design incorporating environmental quality and health impact', International Journal of System Assurance Engineering and Management, vol. 8, no. S4, pp. 2071-2082.
View/Download from: Publisher's site
View description>>
This paper presents an integrated algorithm based on fuzzy simulation, fuzzy linear programming (FLP), and fuzzy data envelopment analysis (FDEA) to cope with a special case of the workshop facility layout design problem with ambiguous environmental and health indicators. First, a software package is used to generate feasible layout alternatives, and quantitative performance indicators are then calculated. Linear programming is used to estimate weights from pairwise comparisons (expressed in linguistic terms) of certain qualitative performance indicators. Fuzzy simulation is then employed to model different layout alternatives with uncertain parameters. Next, the impacts of environment and health indicators are retrieved from a standard questionnaire. Finally, FDEA is used to rank the alternatives and consequently find the optimal layout design alternatives. A possibilistic programming approach is used to convert the fuzzy DEA model to an equivalent crisp one. Moreover, a fuzzy principal component analysis method is used to validate the results of the FDEA model at various α-cut levels via a Spearman correlation experiment. This is the first study to present an integrated algorithm for optimization of facility layout with environmental and health indicators.
Azadeh, A, Sadri, S, Saberi, M, Yoon, JH, Chang, E, Khadeer Hussain, O & Pourmohammad Zia, N 2017, 'An Integrated Fuzzy Trust Prediction Approach in Product Design and Engineering', International Journal of Fuzzy Systems, vol. 19, no. 4, pp. 1190-1199.
View/Download from: Publisher's site
Bakirov, R, Gabrys, B & Fay, D 2017, 'Multiple adaptive mechanisms for data-driven soft sensors', Computers & Chemical Engineering, vol. 96, pp. 42-54.
View/Download from: Publisher's site
View description>>
Recent data-driven soft sensors often use multiple adaptive mechanisms to cope with non-stationary environments. These mechanisms are usually deployed in a prescribed order which does not change. In this work we use real world data from the process industry to compare deploying adaptive mechanisms in a fixed manner to deploying them in a flexible way, which results in varying adaptation sequences. We demonstrate that flexible deployment of available adaptive methods coupled with techniques such as cross-validatory selection and retrospective model correction can benefit the predictive accuracy over time. As a vehicle for this study, we use a soft-sensor for batch processes based on an adaptive ensemble method which employs several adaptive mechanisms to react to the changes in data.
Bano, M, Zowghi, D & Rimini, FD 2017, 'User satisfaction and system success: an empirical exploration of user involvement in software development.', Empir. Softw. Eng., vol. 22, no. 5, pp. 2339-2372.
View/Download from: Publisher's site
View description>>
© 2016, Springer Science+Business Media New York. For over four decades, user involvement has been considered intuitively to lead to user satisfaction, which plays a pivotal role in the successful outcome of a software project. The objective of this paper is to explore the notion of user satisfaction within the context of the user involvement and system success relationship. We conducted a longitudinal case study of a software development project and collected qualitative data by means of interviews, observations and document analysis over a period of 3 years. The analysis of our case study data revealed that user satisfaction significantly contributes to system success even when schedule and budget goals are not met. The case study data analysis also surfaced additional factors that contribute to the evolution of user satisfaction throughout the project. Users’ satisfaction with their involvement and with the resulting system are mutually constituted, while the level of user satisfaction evolves throughout the stages of the software development process. Effective management strategies and user representation are essential elements of maintaining an acceptable level of user satisfaction throughout the software development process.
Belete, GF, Voinov, A & Laniak, GF 2017, 'An overview of the model integration process: From pre-integration assessment to testing', Environmental Modelling & Software, vol. 87, pp. 49-63.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. Integration of models requires linking models that may have been developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process and of presenting better strategies for building integrated modeling systems. We identified five phases that characterize the integration process: pre-integration assessment, preparation of models for integration, orchestration of models during simulation, data interoperability, and testing. Commonly, there is little reuse of existing frameworks beyond the development teams and not much sharing of science components across frameworks. We believe this must change to enable researchers and assessors to form complex workflows that leverage the environmental science currently available. In this paper, we characterize the model integration process and compare the integration practices of different groups. We highlight key strategies, features, standards, and practices that developers can employ to increase the reuse and interoperability of science software components and systems.
Belete, GF, Voinov, A & Morales, J 2017, 'Designing the Distributed Model Integration Framework – DMIF', Environmental Modelling & Software, vol. 94, pp. 112-126.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd. We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF), which links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of interoperability. Reusable web service wrappers were developed for technical interoperability of models created in the NetLogo and GAMS modeling languages. We investigated automated semantic mapping of text-based input-output data and component attribute names using word-overlap semantic matching algorithms and an openly available lexical database. We also incorporated automated unit conversion in semantic mediation by using openly available ontologies. DMIF helps framework developers avoid a significant amount of reinvention, and it opens up the modeling process to many stakeholders who are not prepared to deal with the technical difficulties associated with installing, configuring, and running various models. As a proof of concept, we implemented our design to integrate several climate-energy-economy models.
Beydoun, G, Hoffmann, A & Gill, A 2017, 'Constructing enhanced default theories incrementally', Complex & Intelligent Systems, vol. 3, no. 2, pp. 83-92.
View/Download from: Publisher's site
View description>>
The main difference between the various formalisms of non-monotonic reasoning is the representation of non-monotonic rules. In default logic, they are represented by special expressions called defaults, and commonsense knowledge about the world is represented as a set of named defaults. The use of defaults is popular because they reduce the complexity of the representation and are sufficient for knowledge representation in many naturally occurring contexts. This paper offers an incremental process to acquire defaults directly from human experts, and at the same time it adds semantics to defaults by assigning them priorities and creating additional relations between them. The paper uses an existing incremental framework, NRDR, to generate these defaults. This framework is chosen because it not only enables incremental, context-driven formulation of defaults but also allows experts to introduce their own domain terms. In choosing this framework, the paper also broadens the framework's utility.
Blanco-Mesa, F & Merigó, JM 2017, 'BONFERRONI DISTANCES WITH HYBRID WEIGHTED DISTANCE AND IMMEDIATE WEIGHTED DISTANCE', FUZZY ECONOMIC REVIEW, vol. 22, no. 02, pp. 2274-2274.
View/Download from: Publisher's site
View description>>
© 2017 Int. Association for Fuzzy-Set Management and Economy. All rights reserved. The aim of the paper is to develop new aggregation operators using Bonferroni means, ordered weighted averaging (OWA) operators and some measures of distance. We introduce the Bonferroni hybrid-weighted distance (BON-HWD) and Bonferroni distances with OWA operators and weighted averages (BON-IWOWAD). The main advantage of these operators is that they allow different aggregation contexts, and multiple comparisons between each argument and the distance measures, to be considered in the same formulation. We develop a mathematical application to show the versatility of the new models. Finally, this new family of distances can be used in a wide range of management and economic fields.
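For reference, the classical Bonferroni mean on which these distance operators build is, for parameters p, q ≥ 0 and arguments a_1, …, a_n (a standard formula, not reproduced from the paper itself):

```latex
B^{p,q}(a_1,\dots,a_n)
  = \left( \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^{\,p}\, a_j^{\,q} \right)^{\frac{1}{p+q}}
```

Broadly speaking, the BON-HWD and BON-IWOWAD operators replace the plain arithmetic averaging here with hybrid-weighted and induced-OWA-weighted distance aggregations, respectively.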
Blanco-Mesa, F, Merigó, JM & Gil-Lafuente, AM 2017, 'Fuzzy decision making: A bibliometric-based review', Journal of Intelligent & Fuzzy Systems, vol. 32, no. 3, pp. 2033-2050.
View/Download from: Publisher's site
View description>>
Fuzzy decision-making consists of making decisions under complex and uncertain environments where the information can be assessed with fuzzy sets and systems. The aim of this study is to review the main contributions in this field using a bibliometric approach. To do so, the article uses a wide range of bibliometric indicators, including citations and the h-index. Moreover, it uses the VOSviewer software to map the main trends in this area. The work considers the leading journals, articles, authors and institutions. The results indicate that the USA has been the traditional leader in this field, with the most significant researchers. However, in recent years the field has received growing attention from Asian authors, who are starting to lead it. This discipline has strong potential, and the expectation is that it will continue to grow.
Bluff, A & Johnston, A 2017, 'Creature:Interactions: A Social Mixed-Reality Playspace', Leonardo, vol. 50, no. 4, pp. 360-367.
View/Download from: Publisher's site
View description>>
This paper discusses Creature:Interactions (2015), a large-scale mixed-reality artwork created by the authors that incorporates immersive 360° stereoscopic visuals, interactive technology, and live actor facilitation. The work uses physical simulations to promote an expressive full-bodied interaction as children explore the landscapes and creatures of Ethel C. Pedley’s ecologically focused children’s novel, Dot and the Kangaroo. The immersive visuals provide a social playspace for up to 90 people and have produced “phantom” sensations of temperature and touch in certain participants.
Bower, M, Wood, L, Lai, J, Howe, C, Lister, R, Mason, R, Highfield, K & Veal, J 2017, 'Improving the Computational Thinking Pedagogical Capabilities of School Teachers', Australian Journal of Teacher Education, vol. 42, no. 3, pp. 53-72.
View/Download from: Publisher's site
View description>>
The idea of computational thinking as a skill and universal competence that every child should possess emerged last decade and has been gaining traction ever since. This raises a number of questions, including how to integrate computational thinking into the curriculum, whether teachers have the computational thinking pedagogical capabilities needed to teach children, and what the important professional development and training areas for teachers are. The aim of this paper is to address these strategic issues by describing a series of computational thinking workshops for Foundation to Year 8 teachers held at an Australian university. Data indicated that teachers' computational thinking understanding, pedagogical capabilities, technological know-how and confidence can be improved in a relatively short period of time through targeted professional learning.
Broekhuijsen, M, van den Hoven, E & Markopoulos, P 2017, 'From PhotoWork to PhotoUse: exploring personal digital photo activities', Behaviour & Information Technology, vol. 36, no. 7, pp. 754-767.
View/Download from: Publisher's site
CALVO, RA, MILNE, DN, HUSSAIN, MS & CHRISTENSEN, H 2017, 'Natural language processing in mental health applications using non-clinical texts', Natural Language Engineering, vol. 23, no. 5, pp. 649-685.
View/Download from: Publisher's site
View description>>
Natural language processing (NLP) techniques can be used to make inferences about people's mental states from what they write on Facebook, Twitter and other social media. These inferences can then be used to create online pathways to direct people to health information and assistance and also to generate personalized interventions. Regrettably, the computational methods used to collect, process and utilize online writing data, as well as the evaluations of these techniques, are still dispersed in the literature. This paper provides a taxonomy of data sources and techniques that have been used for mental health support and intervention. Specifically, we review how social media and other data sources have been used to detect emotions and identify people who may be in need of psychological assistance; the computational techniques used in labeling and diagnosis; and finally, we discuss ways to generate and personalize mental health interventions. The overarching aim of this scoping review is to highlight areas of research where NLP has been applied in the mental health literature and to help develop a common language that draws together the fields of mental health, human-computer interaction and NLP.
Cancino, C, Merigo, JM, Coronado, F, Dessouky, Y & Dessouky, M 2017, 'Forty years of computers and industrial engineering: A bibliometric analysis', Proceedings of International Conference on Computers and Industrial Engineering CIE, vol. 0, pp. 614-629.
View/Download from: Publisher's site
View description>>
Computers & Industrial Engineering is a leading international journal in the field of industrial engineering that published its first issue in 1976. In 2016, the journal celebrated its 40th anniversary. Motivated by this event, the aim of this study is to develop a bibliometric overview of the journal's publications between 1976 and 2015. The objective is to identify the leading trends occurring in the journal in terms of the productivity and influence of topics, authors, universities and countries. To do so, the work uses the Web of Science Core Collection database to analyse the bibliometric data. The results show the strong position of the USA in the journal, although China and other Asian countries are becoming very significant.
Cancino, C, Merigó, JM, Coronado, F, Dessouky, Y & Dessouky, M 2017, 'Forty years of Computers & Industrial Engineering: A bibliometric analysis', Computers & Industrial Engineering, vol. 113, pp. 614-629.
View/Download from: Publisher's site
View description>>
Computers & Industrial Engineering is a leading international journal in the field of industrial engineering that published its first issue in 1976. In 2016, the journal celebrated its 40th anniversary. Motivated by this event, the aim of this study is to develop a bibliometric overview of the journal's publications between 1976 and 2015. The objective is to identify the leading trends occurring in the journal in terms of the productivity and influence of topics, authors, universities and countries. To do so, the work uses the Web of Science Core Collection database to analyse the bibliometric data. The results show the strong position of the USA in the journal, although China and other Asian countries are becoming very significant.
Cancino, CA, Merigo, JM & Coronado, FC 2017, 'Big Names in Innovation Research: A Bibliometric Overview', Current Science, vol. 113, no. 08, pp. 1507-1507.
View/Download from: Publisher's site
View description>>
Over the last few years, an increasing number of scientific studies related to innovation research have been carried out. The present study analyses innovation research developed between 1989 and 2013. It uses the Web of Science database and provides several author-level bibliometric indicators, including the total number of publications and citations and the h-index. The results indicate that the most influential professors over the last 25 years, according to their h-index, are David Audretsch, Michael Hitt, Shaker Zahra, Rajshree Agarwal, Eric Von Hippel, David Teece, Will Mitchell and Robert Cooper. These authors are not necessarily the most productive, with the highest number of publications; they are, however, the most influential, with the highest number of citations. The incorporation of a larger number of journals into the Web of Science has granted different authors access to publish their work on innovation research.
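The h-index that recurs throughout these bibliometric studies is simple to compute; the sketch below uses made-up citation counts for illustration.

```python
def h_index(citations):
    """An author has index h if h of their papers have at least
    h citations each. Sort counts in descending order and find
    the largest rank r such that the r-th paper has >= r citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Illustrative citation counts for one author's papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Note how the measure rewards sustained influence rather than raw productivity: many low-cited papers cannot raise h, which is why the most productive authors above are not necessarily the most influential.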
Cancino, CA, Merigó, JM & Coronado, FC 2017, 'A bibliometric analysis of leading universities in innovation research', Journal of Innovation & Knowledge, vol. 2, no. 3, pp. 106-124.
View/Download from: Publisher's site
View description>>
© 2017 Journal of Innovation & Knowledge. The number of innovation studies with a management perspective has grown considerably over the last 25 years. This study identified the universities that are most productive and influential in innovation research. The leading innovation research journals were also studied individually to identify the most productive universities for each journal. Data from the Web of Science were analyzed. Studies that were published between 1989 and 2013 were filtered first by the keyword “innovation” and second by 18 management-related research areas. The results indicate that US universities are the most productive and influential because they account for the most publications with a high number of citations and high h-index. Following advances in the productivity of numerous European journals, however, universities from the UK and the Netherlands are the most involved in publishing in journals that specialize in innovation research.
Cetindamar, D & Ozkazanc‐Pan, B 2017, 'Assessing mission drift at venture capital impact investors', Business Ethics: A European Review, vol. 26, no. 3, pp. 257-270.
View/Download from: Publisher's site
View description>>
In this article, we consider a recent trend whereby private equity available from venture capital (VC) firms is being deployed toward mission‐driven initiatives in the form of impact investing. Acting as hybrid organizations, these impact investors aim to achieve financial results while also targeting companies and funds to achieve social impact. However, potential mission drift in these VCs, which we define as a decoupling between the investments made (means) and intended aims (ends), might become detrimental to the simultaneous financial and social goals of such firms. Based on a content analysis of mission statements, we assess mission drift and the hybridization level of VC impact investors by examining their missions (ends/goals) and their investment practices (means) through the criteria of social and financial logic. After examining eight impact‐oriented VC investors and their investments in 164 companies, we find mission drift manifested as a disparity between the means and ends in half of the VC impact investors in our sample. We discuss these findings and make suggestions for further studies.
Cetindamar, D & Rickne, A 2017, 'Using the functional analysis to understand the emergence of biomaterials within an existing biotechnology system: observations from a case study in Turkey.', Technol. Anal. Strateg. Manag., vol. 29, no. 3, pp. 313-324.
View/Download from: Publisher's site
View description>>
The paper applies a functional approach to the analysis of an emerging technology within an innovation system (IS) in a developing country. By doing so, the paper identifies the advantages and drawbacks of the approach through a dynamic analysis and highlights the life cycle of an IS within which a new technology is emerging. This is done empirically by analysing the emergence of biosimilars within the infant Turkish biotechnology system, mainly from the perspective of firms. Our analysis of the Turkish case illustrates how the functional approach could be a valuable tool for understanding the dynamics of a technology in a developing-country context. Policy suggestions and implications of the study are presented as concluding remarks.
Chaudhuri, BB & Adak, C 2017, 'An approach for detecting and cleaning of struck-out handwritten text', Pattern Recognition, vol. 61, pp. 282-294.
View/Download from: Publisher's site
View description>>
This paper deals with the identification and processing of struck-out texts in unconstrained offline handwritten document images. If run through the OCR engine, such texts produce nonsense character-string outputs. Here we present a combined (a) pattern classification and (b) graph-based method for identifying such texts. In case (a), a feature-based two-class (normal vs. struck-out text) SVM classifier is used to detect moderate-sized struck-out components. In case (b), the skeleton of the text component is treated as a graph and the strike-out stroke is identified using a constrained shortest-path algorithm. To identify zigzag or wavy struck-outs, all paths are found and some properties of zigzag and wavy lines are utilized. Some other types of strike-out stroke are also detected by modifying the above method. Large multi-word and multi-line struck-outs are segmented into smaller components and treated as above. The detected struck-out texts can then be blocked from entering the OCR engine. In another kind of application, involving historical documents, page images along with their annotated ground truth are to be generated. In this case the strike-out strokes can be deleted from the words before being fed to the OCR engine. For this purpose an inpainting-based cleaning approach is employed. We worked on 500 pages of documents and obtained an overall F-measure of 91.56% (91.06%) in English (Bengali) script for struck-out text detection. For strike-out stroke identification and deletion, the F-measures obtained were 89.65% (89.31%) and 91.16% (89.29%), respectively.
Chen, H, Zhang, G, Zhu, D & Lu, J 2017, 'Topic-based technological forecasting based on patent data: A case study of Australian patents from 2000 to 2014', Technological Forecasting and Social Change, vol. 119, no. June 2017, pp. 39-52.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. The study of technological forecasting is an important part of patent analysis. Although fitting models can provide a rough tendency for a technical area, the trend of the detailed content within the area remains hidden. It is also difficult to reveal the trend of specific topics using keyword-based text mining techniques, since it is very hard to track the temporal patterns of a single keyword that generally represents a technological concept. To overcome these limitations, this research proposes a topic-based technological forecasting approach to uncover the trends of specific topics underlying massive patent claims using topic modelling. A topic annual weight matrix and a sequence of topic-based trend coefficients are generated to quantitatively estimate the developing trends of the discovered topics and to evaluate to what degree various topics have contributed to the patenting activities of the whole area. To demonstrate the effectiveness of the approach, we present a case study using 13,910 utility patents published during the years 2000 to 2014, owned by Australian assignees, in the United States Patent and Trademark Office (USPTO). The results indicate that the proposed approach is effective for estimating the temporal patterns and forecasting the future trends of the latent topics underlying massive claims. The topic-based knowledge and the corresponding trend analysis provided by the approach can be used to facilitate further technological decisions or opportunity discovery.
Chen, J, Li, K, Tang, Z, Bilal, K, Yu, S, Weng, C & Li, K 2017, 'A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment', IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 4, pp. 919-933.
View/Download from: Publisher's site
View description>>
With the emergence of the big data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasing attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized based on a hybrid approach combining data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed to allow the training dataset to be reused and diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependence of the Resilient Distributed Dataset (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy for large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and other studies in terms of classification accuracy, performance, and scalability. As the scale of the random forest model and the Spark cluster grows, the advantage of the PRF algorithm becomes more obvious.
Chen, J, Liu, B, Zhou, H, Yu, Q, Gui, L & Shen, X 2017, 'QoS-Driven Efficient Client Association in High-Density Software-Defined WLAN', IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7372-7383.
View/Download from: Publisher's site
Chen, Z, You, X, Zhong, B, Li, J & Tao, D 2017, 'Dynamically Modulated Mask Sparse Tracking', IEEE Transactions on Cybernetics, vol. 47, no. 11, pp. 3706-3718.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although robustness to local corruptions has improved, prevailing trackers are still sensitive to large-scale corruptions, such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique that depends on a subspace learning-based appearance model. Our contributions are twofold. First, mask templates produced by frame difference are introduced into our template dictionary. Since the mask templates contain abundant structure information about corruptions, the model can encode information about the corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by adopting system dynamics, which consider the moving tendency of the object. Second, we provide a theoretical guarantee that, by adapting the modulated template dictionary system, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficiently as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially when there are challenges such as pose variation, occlusion, and illumination changes.
Cheng, Z, Zhang, X, Li, Y, Yu, S, Lin, R & He, L 2017, 'Congestion-Aware Local Reroute for Fast Failure Recovery in Software-Defined Networks', Journal of Optical Communications and Networking, vol. 9, no. 11, pp. 934-934.
View/Download from: Publisher's site
View description>>
Although a restoration approach derives a reroute path when failure occurs and greatly reduces forwarding rules in switches compared with a protection approach, software-defined networks (SDNs) induce a long failure recovery process because of frequent flow operations between the SDN controller and switches. Accordingly, it is indispensable to design a new resilience approach to balance failure recovery time and forwarding rule occupation. To this end, we leverage flexible flow aggregation in fast reroute to solve this problem. In the proposed approach, each disrupted traffic flow is reassigned to a local reroute path for the purpose of congestion avoidance. Thus, all traffic flows assigned to the same local reroute path are aggregated into a new 'big' flow, and the number of reconfigured forwarding rules in the restoration process is greatly reduced. We first formulate this problem as an integer linear programming model, then design an efficient heuristic named the 'congestion-aware local fast reroute' (CALFR). Extensive emulation results show that CALFR enables fast recovery while avoiding link congestion in the post-recovery network.
Choi, I, Milne, DN, Glozier, N, Peters, D, Harvey, SB & Calvo, RA 2017, 'Using different Facebook advertisements to recruit men for an online mental health study: Engagement and selection bias', Internet Interventions, vol. 8, pp. 27-34.
View/Download from: Publisher's site
View description>>
© 2017. A growing number of researchers are using Facebook to recruit for a range of online health, medical, and psychosocial studies. There is limited research on the representativeness of participants recruited from Facebook, and the advertisement content is rarely mentioned in study methods, despite some suggestion that it affects recruitment success. This study explores the impact of different Facebook advertisement content for the same study on recruitment rate, engagement, and participant characteristics. Five Facebook advertisement sets (“resilience”, “happiness”, “strength”, “mental fitness”, and “mental health”) were used to recruit male participants to an online mental health study which allowed them to find out about their mental health and wellbeing by completing six measures. The Facebook advertisements recruited 372 men to the study over a one-month period. The cost per participant from the advertisement sets ranged from $0.55 to $3.85 Australian dollars. The “strength” advertisements resulted in the highest recruitment rate, but participants from this group were the least engaged with the study website. The “strength” and “happiness” advertisements recruited more younger men. Participants recruited from the “mental health” advertisements had worse outcomes on the clinical measures of distress, wellbeing, strength, and stress. This study confirmed that different Facebook advertisement content leads to different recruitment rates and engagement with a study. Different advertisement content also leads to selection bias in terms of demographic and mental health characteristics. Researchers should carefully consider the content of social media advertisements to be in accordance with their target population, and should consider reporting this content to enable better assessment of generalisability.
Corio, E, Laccone, F, Pietroni, N, Cignoni, P & Froli, M 2017, 'Conception and parametric design workflow for a timber large-spanned reversible grid shell to shelter the archaeological site of the roman shipwrecks in pisa', International Journal of Computational Methods and Experimental Measurements, vol. 5, no. 4, pp. 551-561.
View/Download from: Publisher's site
Davis, JJJ, Lin, C-T, Gillett, G & Kozma, R 2017, 'An Integrative Approach to Analyze Eeg Signals and Human Brain Dynamics in Different Cognitive States', Journal of Artificial Intelligence and Soft Computing Research, vol. 7, no. 4, pp. 287-299.
View/Download from: Publisher's site
View description>>
Electroencephalograph (EEG) data provide insight into the interconnections and relationships between various cognitive states and their corresponding brain dynamics, by demonstrating dynamic connections between brain regions at different frequency bands. While sensory input tends to stimulate neural activity in different frequency bands, peaceful states of being and self-induced meditation tend to produce activity in the mid-range (Alpha). These studies were conducted with the aim of: (a) testing different equipment in order to assess two different EEG technologies together with their benefits and limitations and (b) gaining an initial impression of different brain states associated with different experimental modalities and tasks, by analyzing the spatial and temporal power spectrum and applying our movie-making methodology to engage in qualitative exploration via the art of encephalography. This study complements our previous study of measuring multichannel EEG brain dynamics using MINDO48 equipment associated with three experimental modalities measured both in the laboratory and the natural environment. Together with Hilbert analysis, we conjecture, the results will provide us with the tools to engage with more complex brain dynamics and mental states, such as meditation, mathematical audio lectures, music-induced meditation, and mental arithmetic exercises. This paper focuses on open-eye and closed-eye conditions, as well as meditation states, in laboratory conditions. We assess similarities and differences between experimental modalities and their associated brain states, as well as differences between the different tools for analysis and equipment.
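The band-wise power-spectrum comparison described above can be illustrated with a minimal band-power computation (a Python/NumPy sketch on a synthetic 10 Hz signal; the `band_power` helper and the sampling rate are illustrative assumptions, not the authors' MINDO48 pipeline):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 256                                  # Hz; one second of data
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz "alpha" tone
alpha = band_power(eeg, fs, 8, 13)        # dominates for this signal
beta = band_power(eeg, fs, 13, 30)
```

For a relaxed, eyes-closed recording one would expect alpha-band power to dominate, which is what the synthetic signal reproduces here.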
Deng, S, Huang, L, Xu, G, Wu, X & Wu, Z 2017, 'On Deep Learning for Trust-Aware Recommendations in Social Networks', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1164-1177.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the emergence of online social networks, the social network-based recommendation approach is popularly used. The major benefit of this approach is its ability to deal with cold-start users. In addition to social networks, user trust information also plays an important role in obtaining reliable recommendations. Although matrix factorization (MF) has become dominant in recommender systems, the recommendation largely relies on the initialization of the user and item latent feature vectors. Aiming to address these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we attempt to leverage deep learning to determine the initialization in MF for trust-aware social recommendations and to differentiate the community effect in users' trusted friendships. A two-phase recommendation process is proposed to utilize deep learning in initialization and to synthesize the users' interests and their trusted friends' interests together with the impact of community effect for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.
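The initialization sensitivity the abstract refers to can be seen in a bare-bones matrix factorization trained by SGD (a Python/NumPy sketch; the toy 2x2 rating matrix and hyperparameters are invented for illustration — the paper's approach would initialize the latent factors from a deep model and incorporate trust, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mf_sgd(R, mask, k=2, lr=0.05, reg=0.02, epochs=200):
    """Plain matrix factorization by SGD over observed entries."""
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))   # random init; the paper
    V = 0.1 * rng.standard_normal((n_items, k))   # replaces this with a deep model
    for _ in range(epochs):
        for u, i in zip(*np.nonzero(mask)):
            err = R[u, i] - U[u] @ V[i]           # prediction error on one rating
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

R = np.array([[5.0, 3.0], [4.0, 1.0]])            # toy user-item ratings
mask = np.ones_like(R, dtype=bool)                # all entries observed here
U, V = mf_sgd(R, mask)
```

With a poor starting point the same loop can stall in a worse local optimum, which is the motivation for a learned initialization.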
Dong, D, Wang, Y, Hou, Z, Qi, B, Pan, Y & Xiang, G-Y 2017, 'State Tomography of Qubit Systems Using Linear Regression Estimation and Adaptive Measurements', IFAC-PapersOnLine, vol. 50, no. 1, pp. 13014-13019.
View/Download from: Publisher's site
Dong, P, Zheng, T, Yu, S, Zhang, H & Yan, X 2017, 'Enhancing Vehicular Communication Using 5G-Enabled Smart Collaborative Networking', IEEE Wireless Communications, vol. 24, no. 6, pp. 72-79.
View/Download from: Publisher's site
View description>>
5G is increasingly becoming a prominent technology promoting the development of mobile networks. Meanwhile, the ever-increasing demands for vehicular networks are driven by a variety of vehicular services and application scenarios. Therefore, a new architectural design, which can harness the benefits of 5G for vehicular networks, can take a solid step toward increasing bandwidth and improving reliability for vehicular communications. In this article, we focus on the innovations of a novel and practical 5G-enabled smart collaborative vehicular network (SCVN) architecture, based on our long-term research and practice in this field. SCVN not only considers the various technical features of a 5G network, but also includes different mobile scenarios of vehicular networks. We have performed extensive experiments in various scenarios, including high-density vehicles moving at low or high speed across dense cells, to evaluate the performance of the proposed architecture. The real-world experimental results demonstrate that SCVN achieves better performance in throughput, reliability, and handover latency compared to its counterparts.
Dovey, K, Burdon, S & Simpson, R 2017, 'Creative leadership as a collective achievement: An Australian case', Management Learning, vol. 48, no. 1, pp. 23-38.
View/Download from: Publisher's site
View description>>
In this article, we examine the construct of ‘leadership’ through an analysis of the social practices that underpinned the Australian Broadcasting Corporation television production entitled The Code. Positioning the production within the neo-bureaucratic organisational form currently adopted by the global television industry, we explore new conceptualisations of the leadership phenomenon emerging within this industry in response to the increasingly complex, uncertain and interdependent nature of creative work within it. We show how the polyarchic governance regime characteristic of the neo-bureaucratic organisational form ensures broadcaster control and coordination through ‘hard power’ mechanisms embedded in the commissioning process and through ‘soft power’ relational practices that allow creative licence to those employed in the production. Furthermore, we show how both sets of practices (commissioning and creative practices) leverage and regenerate the relational resources – such as trust, commitment and resilience – gained from rich stakeholder experience of working together in the creative industries over a significant period of time. Referencing the leadership-as-practice perspective, we highlight the contingent and improvisational nature of these practices and metaphorically describe the leadership manifesting in this production as a form of ‘interstitial glue’ that binds and shapes stakeholder interests and collective agency.
Du, B, Wang, Z, Zhang, L, Zhang, L, Liu, W, Shen, J & Tao, D 2017, 'Exploring Representativeness and Informativeness for Active Learning', IEEE Transactions on Cybernetics, vol. 47, no. 1, pp. 14-26.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. How can we find a general way to choose the most suitable samples for training a classifier, even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role in constructing a refined training set to improve classification performance in a variety of applications, such as text analysis, image recognition, and social network modeling. Although combining the representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well only under certain data structures. Can we then find a way to fuse the two active sampling criteria without any assumption on the data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertainty measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures, and a modified best-versus-second-best strategy to construct the uncertainty measure. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms.
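The best-versus-second-best uncertainty measure mentioned above can be sketched directly from predicted class probabilities (Python/NumPy; the probability matrix is made up, and this covers only the informativeness criterion, not the paper's triple representativeness measures):

```python
import numpy as np

def bvsb_margin(probs):
    """Best-versus-second-best margin per sample; a smaller
    margin means a more uncertain (more informative) sample."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest class probabilities
    return top2[:, 1] - top2[:, 0]

probs = np.array([[0.50, 0.40, 0.10],       # ambiguous between two classes
                  [0.90, 0.05, 0.05]])      # confidently classified
margins = bvsb_margin(probs)
query_idx = int(np.argmin(margins))         # query the least certain sample
```

Active learning would then label the queried sample, retrain, and repeat.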
Du, J, Jiang, C, Wang, J, Ren, Y, Yu, S & Han, Z 2017, 'Resource Allocation in Space Multiaccess Systems', IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 2, pp. 598-618.
View/Download from: Publisher's site
View description>>
Currently, most Landsat satellites are deployed in low earth orbit (LEO) to obtain high-resolution data of the Earth's surface and atmosphere. However, the return channels of LEO satellites are intrinsically unstable and discontinuous, owing to the high orbital velocity, long revisit interval, and limited ranges of ground-based radar receivers. Space-based information networks, in which data can be delivered by the cooperative transmission of relay satellites, can greatly expand the spatial transport connection ranges of LEO satellites. Different types of relay satellites, deployed in orbits of different altitudes, exhibit distinctive performance when participating in forwarding. In this paper, we consider the cooperative mechanism of relay satellites deployed in geosynchronous orbit (GEO) and LEO according to their different transport performances and orbital characteristics. To take full advantage of the transmission resources of different kinds of cooperative relays, we propose a multiple access and bandwidth resource allocation strategy for the GEO relay, in which the relay can receive and transmit simultaneously according to the channel characteristics of space-based systems. Moreover, a time-slot allocation strategy based on slotted time division multiple access is introduced for the system with LEO relays. Based on a queueing-theoretic formulation, the stability of the proposed systems and protocols is analyzed and the maximum stable throughput region is derived, which provides guidance for the design of optimal system control. Simulation results exhibit multiple factors that affect the stable throughput and verify the theoretical analysis.
Erfani, SS, Abedin, B & Blount, Y 2017, 'The effect of social network site use on the psychological well‐being of cancer patients', Journal of the Association for Information Science and Technology, vol. 68, no. 5, pp. 1308-1322.
View/Download from: Publisher's site
View description>>
Social network sites (SNSs) are growing in popularity and social significance. Although researchers have attempted to explain the effect of SNS use on users' psychological well‐being, previous studies have produced inconsistent results. In addition, most previous studies relied on healthy students as participants; other cohorts of SNSs users, in particular people living with serious health conditions, have been neglected. In this study, we carried out semistructured interviews with users of the Ovarian Cancer Australia (OCA) Facebook to assess how and in what ways SNS use impacts their psychological well‐being. A theoretical model was proposed to develop a better understanding of the relationships between SNS use and the psychological well‐being of cancer patients. Analysis of data collected through a subsequent quantitative survey confirmed the theoretical model and empirically revealed the extent to which SNS use impacts the psychological well‐being of cancer patients. Findings showed the use of OCA Facebook enhances social support, enriches the experience of social connectedness, develops social presence and learning and ultimately improves the psychological well‐being of cancer patients.
Fang, XS, Sheng, QZ, Wang, X, Ngu, AHH & Zhang, Y 2017, 'GrandBase: generating actionable knowledge from Big Data', PSU Research Review, vol. 1, no. 2, pp. 105-126.
View/Download from: Publisher's site
View description>>
Purpose: This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase. Design/methodology/approach: In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query stream, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed. Findings: Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension have also been discussed. Originality/value: To revolutionize our modern society by using the wisdom of Big Data, considerable KBs have been constructed to feed the massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been contributed on both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has ...
Feng, B, Zhang, H, Zhou, H & Yu, S 2017, 'Locator/Identifier Split Networking: A Promising Future Internet Architecture', IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2927-2948.
View/Download from: Publisher's site
View description>>
The Internet has achieved unprecedented success in human history. However, its original design has encountered many challenges in the past decades due to the significant changes of context and requirements. As a result, the design of future networks has received great attention from both academia and industry, and numerous novel architectures have sprung up in recent years. Among them, the locator/identifier (Loc/ID) split networking is widely discussed for its decoupling of the overloaded IP address semantics, which satisfies several urgent needs of the current Internet such as mobility, multi-homing, routing scalability, security, and heterogeneous network convergence. Hence, in this paper, we focus on Loc/ID split network architectures, and provide a related comprehensive survey on their principles, mechanisms, and characteristics. First, we illustrate the serious problems of the Internet caused by the overloading of IP address semantics. Second, we classify the existing Loc/ID split network architectures based on their properties, abstract the general principle and framework for each classification, and demonstrate related representative architectures in detail. Finally, we summarize the fundamental features of the Loc/ID split networking, compare corresponding investigated architectures, and discuss several open issues and opportunities.
Feng, B, Zhou, H, Zhang, H, Li, G, Li, H, Yu, S & Chao, H-C 2017, 'HetNet: A Flexible Architecture for Heterogeneous Satellite-Terrestrial Networks', IEEE Network, vol. 31, no. 6, pp. 86-92.
View/Download from: Publisher's site
View description>>
As satellite networks have played an indispensable role in many fields, how to integrate them with terrestrial networks (e.g., the Internet) has attracted significant attention in academia. However, it is challenging to efficiently build such an integrated network, since terrestrial networks are facing a number of serious problems, and since they do not provide good support for heterogeneous network convergence. In this article, we propose a flexible network architecture, HetNet, for efficient integration of heterogeneous satellite-terrestrial networks. Specifically, the HetNet synthesizes Locator/ID split and Information-Centric Networking to establish a general network architecture. In this way, it is able to achieve heterogeneous network convergence, routing scalability alleviation, mobility support, traffic engineering, and efficient content delivery. Moreover, the HetNet can further improve its network elasticity by using the techniques of Software-Defined Networking and Network Functions Virtualization. In addition, to evaluate the HetNet performance, we build a proof-of-concept prototype system and conduct extensive experiments. The results confirm the feasibility of the HetNet and its advantages.
Gao, L, Luan, TH, Yu, S, Zhou, W & Liu, B 2017, 'FogRoute: DTN-based Data Dissemination Model in Fog Computing', IEEE Internet of Things Journal, vol. 4, no. 1, pp. 1-1.
View/Download from: Publisher's site
View description>>
Fog computing, known as 'cloud close to the ground,' deploys lightweight compute facilities, called Fog servers, in the proximity of mobile users. By precaching contents in the Fog servers, an important application of Fog computing is to provide high-quality, low-cost data distribution to nearby mobile users, e.g., video/live streaming and ads dissemination, using single-hop low-latency wireless links. A Fog computing system has a three-tier Mobile-Fog-Cloud structure: mobile users get service from Fog servers using local wireless connections, and Fog servers update their contents from the Cloud using cellular or wired networks. This, however, may incur high content-update cost when the bandwidth between the Fog and Cloud servers is expensive, e.g., over the cellular network, and is therefore inefficient for non-urgent, high-volume contents. How to economically utilize the Fog-Cloud bandwidth with guaranteed download performance for users thus represents a fundamental issue in Fog computing. In this paper, we address the issue by proposing a hybrid data dissemination framework which applies software-defined network and delay-tolerant network (DTN) approaches in Fog computing. Specifically, we decompose the Fog computing network into two planes, where the Cloud is a control plane that processes content update queries and organizes data flows, and the geographically distributed Fog servers form a data plane that disseminates data among Fog servers with a DTN technique. Using extensive simulations, we show that the proposed framework is efficient in terms of data-dissemination success ratio and content convergence time among Fog servers.
Gheisari, S, Charlton, A, Catchpoole, DR & Kennedy, PJ 2017, 'Computers can classify neuroblastic tumours from histopathological images using machine learning', Pathology, vol. 49, pp. S72-S73.
View/Download from: Publisher's site
Gholami, MF, Daneshgar, F, Beydoun, G & Rabhi, FA 2017, 'Challenges in migrating legacy software systems to the cloud - an empirical study.', Inf. Syst., vol. 67, pp. 100-113.
View/Download from: Publisher's site
View description>>
© 2017 Moving existing legacy systems to cloud platforms is a difficult and high-cost process that may involve technical and non-technical resources and challenges. There is evidence that the lack of understanding and preparedness of cloud computing migration underpin many migration failures in achieving organisations' goals. The main goal of this article is to identify the most challenging activities for moving legacy systems to cloud platforms from the perspective of a reengineering process. Through a combination of bottom-up and top-down analysis, a set of common activities is derived from the extant cloud computing literature. These are expressed as a model and are validated using a population of 104 shortlisted and randomly selected domain experts from different industry sectors. We used a Web-based survey questionnaire to collect data and analysed them using an SPSS sample t-test. The results of this study highlight the most important and critical challenges that should be addressed by various roles within a legacy-to-cloud migration endeavour. The study provides an overall understanding of this process including commonly occurring activities, concerns and recommendations. In addition, the findings of this study constitute a practical guide to conduct this transition. This guide is platform agnostic and independent of any specific migration scenario, cloud platform, or application domain.
Gill, AQ, Braytee, A & Hussain, FK 2017, 'Adaptive service e-contract information management reference architecture', VINE Journal of Information and Knowledge Management Systems, vol. 47, no. 3, pp. 395-410.
View/Download from: Publisher's site
View description>>
Purpose: The aim of this paper is to report on the adaptive e-contract information management reference architecture using the systematic literature review (SLR) method. Enterprises need to effectively design and implement complex adaptive e-contract information management architecture to support dynamic service interactions or transactions. Design/methodology/approach: The SLR method was adopted in three steps, as follows. First, a customized literature search with relevant selection criteria was developed, which was then applied to identify an initial set of 1,573 papers. Second, 55 of the 1,573 papers were selected for review based on an initial review of each identified paper's title and abstract. Finally, based on the second review, 24 papers relevant to this research were selected and reviewed in detail. Findings: This detailed review resulted in the adaptive e-contract information management reference architecture elements, including structure, life cycle and supporting technology. Research limitations/implications: The reference architecture elements could serve as a taxonomy for researchers and practitioners to develop context-specific service e-contract information management architecture to support dynamic service interactions for value co-creation. The results are limited to the number of selected databases and papers reviewed in this study. Originality/value: This paper offers a review of the body of knowledge and a novel e-contract information management reference architecture, ...
Glynn, PD, Voinov, AA, Shapiro, CD & White, PA 2017, 'From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments', Earth's Future, vol. 5, no. 4, pp. 356-378.
View/Download from: Publisher's site
View description>>
Our different kinds of minds and types of thinking affect the ways we decide, take action, and cooperate (or not). Derived from these types of minds, innate biases, beliefs, heuristics, and values (BBHV) influence behaviors, often beneficially, when individuals or small groups face immediate, local, acute situations that they and their ancestors faced repeatedly in the past. BBHV, though, need to be recognized and possibly countered or used when facing new, complex issues or situations especially if they need to be managed for the benefit of a wider community, for the longer‐term and the larger‐scale. Taking BBHV into account, we explain and provide a cyclic science‐infused adaptive framework for (1) gaining knowledge of complex systems and (2) improving their management. We explore how this process and framework could improve the governance of science and policy for different types of systems and issues, providing examples in the area of natural resources, hazards, and the environment. Lastly, we suggest that an “Open Traceable Accountable Policy” initiative that followed our suggested adaptive framework could beneficially complement recent Open Data/Model science initiatives. Plain Language Summary: Our review paper suggests that society can improve the management of natural resources and environments by (1) recognizing the sources of human decisions and thinking and understanding their role in the scientific progression to knowledge; (2) considering innate human needs and biases, beliefs, heuristics, and values that may need to be countered or embraced; and (3) creating science and policy governance that is inclusive, integrated, considerate of diversity, explicit, and accountab...
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2017, 'On the application of reverse vaccinology to parasitic diseases: a perspective on feature selection and ranking of vaccine candidates', International Journal for Parasitology, vol. 47, no. 12, pp. 779-790.
View/Download from: Publisher's site
View description>>
Reverse vaccinology has the potential to rapidly advance vaccine development against parasites, but it is unclear which features studied in silico will advance vaccine development. Here we consider Neospora caninum which is a globally distributed protozoan parasite causing significant economic and reproductive loss to cattle industries worldwide. The aim of this study was to use a reverse vaccinology approach to compile a worthy vaccine candidate list for N. caninum, including proteins containing pathogen-associated molecular patterns to act as vaccine carriers. The in silico approach essentially involved collecting a wide range of gene and protein features from public databases or computationally predicting those for every known Neospora protein. This data collection was then analysed using an automated high-throughput process to identify candidates. The final vaccine list compiled was judged to be the optimum within the constraints of available data, current knowledge, and existing bioinformatics programs. We consider and provide some suggestions and experience on how ranking of vaccine candidate lists can be performed. This study is therefore important in that it provides a valuable resource for establishing new directions in vaccine research against neosporosis and other parasitic diseases of economic and medical importance.
Gordic, S, Ayache, JB, Kennedy, P, Besa, C, Wagner, M, Bane, O, Ehman, RL, Kim, E & Taouli, B 2017, 'Value of tumor stiffness measured with MR elastography for assessment of response of hepatocellular carcinoma to locoregional therapy', Abdominal Radiology, vol. 42, no. 6, pp. 1685-1694.
View/Download from: Publisher's site
Grochow, JA & Qiao, Y 2017, 'Algorithms for Group Isomorphism via Group Extensions and Cohomology', SIAM Journal on Computing, vol. 46, no. 4, pp. 1153-1216.
View/Download from: Publisher's site
View description>>
© 2017 SIAM. The isomorphism problem for finite groups of order n (GpI) has long been known to be solvable in n^(log n + O(1)) time, but only recently were polynomial-time algorithms designed for several interesting group classes. Inspired by recent progress, we revisit the strategy for GpI via the extension theory of groups. The extension theory describes how a normal subgroup N is related to G/N via G, and this naturally leads to a divide-and-conquer strategy that 'splits' GpI into two subproblems: one regarding group actions on other groups, and one regarding group cohomology. When the normal subgroup N is abelian, this strategy is well known. Our first contribution is to extend this strategy to handle the case when N is not necessarily abelian. This allows us to provide a unified explanation of all recent polynomial-time algorithms for special group classes. Guided by this strategy, to make further progress on GpI, we consider central-radical groups, proposed in Babai et al. [Code equivalence and group isomorphism, in Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'11), SIAM, Philadelphia, 2011, ACM, New York, pp. 1395-1408]: the class of groups such that G modulo its center has no abelian normal subgroups. This class is a natural extension of the group class considered by Babai et al. [Polynomial-time isomorphism test for groups with no abelian normal subgroups (extended abstract), in International Colloquium on Automata, Languages, and Programming (ICALP), 2012, pp. 51-62], namely those groups with no abelian normal subgroups. Following the above strategy, we solve GpI in n^(O(log log n)) time for central-radical groups, and in polynomial time for several prominent subclasses of central-radical groups. We also solve GpI in n^(O(log log n)) time for groups whose solvable normal subgroups are elementary abelian but not necessarily central. As far as we are aware, this is the first time there have been worst-case guarantees on an n...
Guan, J, Feng, Y & Ying, M 2017, 'Super-activating Quantum Memory with Entanglement', Quantum Information and Computation, vol. 18, no. 13-14, pp. 1115-1124.
View description>>
Noiseless subsystems were proved to be an efficient and faithful approach to preserve fragile information against decoherence in quantum information processing and quantum computation. They were employed to design a general (hybrid) quantum memory cell model that can store both quantum and classical information. In this paper, we find an interesting new phenomenon that the purely classical memory cell can be super-activated to preserve quantum states, whereas the null memory cell can only be super-activated to encode classical information. Furthermore, necessary and sufficient conditions for this phenomenon are discovered so that the super-activation can be easily checked by examining certain eigenvalues of the quantum memory cell without computing the noiseless subsystems explicitly. In particular, it is found that entangled and separable stationary states are responsible for the super-activation of storing quantum and classical information, respectively.
Han, J, Lu, J & Zhang, G 2017, 'Tri-level decision-making for decentralized vendor-managed inventory', Information Sciences, vol. 421, pp. 85-103.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. Vendor-managed inventory (VMI) is a common inventory management policy which allows the vendor to manage the buyer's inventory based on the information shared in the course of supply chain management. One challenge in VMI is that both the vendor and buyer are manufacturers who try to achieve an inventory as small as possible or even a zero inventory; it is therefore difficult to manage inventory coordination between them. This paper considers a decentralized VMI problem in a three-echelon supply chain network in which multiple distributors (third-party logistics companies) are selected to balance the inventory between a vendor (manufacturer) and multiple buyers (manufacturers). To handle this issue, this paper first proposes a tri-level decision model to describe the decentralized VMI problem, which allows us to examine how decision members coordinate with each other in respect of decentralized VMI decision-making in a predetermined sequence. We then turn our attention to the geometry of the solution space and present a vertex enumeration algorithm to solve the resulting tri-level decision model. Lastly, a computational study is developed to illustrate how the proposed tri-level decision model and solution approach can handle the decentralized VMI problem. The results indicate that the proposed tri-level decision-making techniques provide a practical way to design a novel manufacturer-manufacturer (vendor-buyer) VMI system where third-party logistics are involved.
He, X, Wu, Y, Yu, D & Merigó, JM 2017, 'Exploring the Ordered Weighted Averaging Operator Knowledge Domain: A Bibliometric Analysis', International Journal of Intelligent Systems, vol. 32, no. 11, pp. 1151-1166.
View/Download from: Publisher's site
View description>>
© 2017 Wiley Periodicals, Inc. The ordered weighted averaging (OWA) operator has received increasingly widespread interest since its appearance in 1988. Recently, a topic search with the keywords “ordered weighted averaging operator” or “OWA operator” on Web of Science (WOS) found 1231 documents. As publications about the OWA operator are increasing rapidly, a scientometric analysis of this research field and discovery of its knowledge domain has become important and necessary. This paper studies the publications about the OWA operator between 1988 and 2015, based on 1213 bibliographic records obtained by using topic search from WOS. The disciplinary distribution, most cited papers, influential journals, as well as influential authors are analyzed through citation and cocitation analysis. The emerging trends in OWA operator research are explored by keywords and references burst detection analysis. The research methods and results in this paper are meaningful for researchers associated with the OWA operator field to understand the knowledge domain and establish their own future research directions.
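The OWA operator surveyed above is simple to state: the weights are applied to the arguments after sorting them in descending order, so a single weight vector can interpolate between the maximum, the arithmetic mean, and the minimum. A minimal sketch of Yager's 1988 definition follows; the function name and example values are illustrative, not taken from the paper:

```python
def owa(values, weights):
    """Ordered weighted averaging (Yager, 1988): sort the arguments in
    descending order, then take the weighted sum. The same weight vector
    can model optimistic, pessimistic, or average-like aggregation."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)  # b_1 >= b_2 >= ... >= b_n
    return sum(w * b for w, b in zip(weights, ordered))

# Weight concentrated on the largest argument acts as max;
# uniform weights recover the arithmetic mean.
print(owa([3, 7, 5], [1.0, 0.0, 0.0]))            # prints: 7.0
print(round(owa([3, 7, 5], [1/3, 1/3, 1/3]), 6))  # prints: 5.0
```

Placing weight on the smallest ordered argument instead would recover the minimum, which is the sense in which the operator spans the pessimistic-to-optimistic range discussed in this literature.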
Heng, J, Wang, J, Xiao, L & Lu, H 2017, 'Research and application of a combined model based on frequent pattern growth algorithm and multi-objective optimization for solar radiation forecasting', Applied Energy, vol. 208, pp. 845-866.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Solar radiation forecasting plays a significant role in precisely designing solar energy systems and in the efficient management of solar energy plants. Most research only focuses on accuracy improvements; however, for an effective forecasting model, considering only accuracy or stability is inadequate. To solve this problem, a combined model based on nondominated sorting-based multiobjective bat algorithm (NSMOBA) is developed for the optimization of weight coefficients of each model to achieve high accuracy and stability results simultaneously. In addition, a statistical method and data mining-based approach are used to determine the input variables for constructing the combined model. Monthly average solar radiation and meteorological variables from six datasets in the U.S. collected for case studies were used to assess the comprehensive performance (both in accuracy and stability) of the proposed combined model. The simulation in four experiments demonstrated the following: (a) the proposed combined model is suitable for providing accurate and stable solar radiation forecasting; (b) the combined model exhibits a more competitive forecasting performance than the individual models by using the advantage of each model; (c) the NSMOBA is an efficient algorithm for providing accurate forecasting results and improving the stability where the single bat algorithm is insufficient.
Herr, D, Nori, F & Devitt, SJ 2017, 'Lattice surgery translation for quantum computation', New Journal of Physics, vol. 19, no. 1, pp. 013034-013034.
View/Download from: Publisher's site
View description>>
© 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. In this paper we outline a method for a compiler to translate any non-fault-tolerant quantum circuit to the geometric representation of the lattice surgery error-correcting code using inherent merge and split operations. Since the efficiency of state distillation procedures has not yet been investigated in the lattice surgery model, their translation is given as an example using the proposed method. The resource requirements appear comparable to or better than those of the defect-based state distillation process, while modularity and eventual implementability make the lattice surgery model an interesting alternative to braiding.
Herr, D, Nori, F & Devitt, SJ 2017, 'Optimization of Lattice Surgery is NP-Hard', npj Quantum Information, vol. 3, no. 1, article no. 35, pp. 1-5.
View/Download from: Publisher's site
View description>>
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or 'defects' within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the Lattice Surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Hoermann, S, McCabe, KL, Milne, DN & Calvo, RA 2017, 'Application of Synchronous Text-Based Dialogue Systems in Mental Health Interventions: Systematic Review', Journal of Medical Internet Research, vol. 19, no. 8, pp. e267-e267.
View/Download from: Publisher's site
View description>>
© Simon Hoermann, Kathryn L McCabe, David N Milne, Rafael A Calvo. Background: Synchronous written conversations (or 'chats') are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions. Objective: The aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat. Methods: A systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure. Results: A total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling). Conclusions: Feasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.
Hollis, L, Barnhill, E, Perrins, M, Kennedy, P, Conlisk, N, Brown, C, Hoskins, PR, Pankaj, P & Roberts, N 2017, 'Finite element analysis to investigate variability of MR elastography in the human thigh', Magnetic Resonance Imaging, vol. 43, pp. 27-36.
View/Download from: Publisher's site
Hu, L, Cao, L, Cao, J, Gu, Z, Xu, G & Wang, J 2017, 'Improving the Quality of Recommendations for Users and Items in the Tail of Distribution', ACM Transactions on Information Systems, vol. 35, no. 3, pp. 1-37.
View/Download from: Publisher's site
View description>>
Short-head and long-tail distributed data are widely observed in the real world. The same is true of recommender systems (RSs), where a small number of popular items dominate the choices and feedback data while the rest only account for a small amount of feedback. As a result, most RS methods tend to learn user preferences from popular items since they account for most data. However, recent research in e-commerce and marketing has shown that future businesses will obtain greater profit from long-tail selling. Yet, although the number of long-tail items and users is much larger than that of short-head items and users, in reality, the amount of data associated with long-tail items and users is much less. As a result, user preferences tend to be popularity-biased. Furthermore, insufficient data makes long-tail items and users more vulnerable to shilling attack. To improve the quality of recommendations for items and users in the tail of distribution, we propose a coupled regularization approach that consists of two latent factor models: C-HMF, for enhancing credibility, and S-HMF, for emphasizing specialty on user choices. Specifically, the estimates learned from C-HMF and S-HMF recurrently serve as the empirical priors to regularize one another. Such coupled regularization leads to the comprehensive effects of final estimates, which produce more qualitative predictions for both tail users and tail items. To assess the effectiveness of our model, we conduct empirical evaluations on large real-world datasets with various metrics. The results prove that our approach significantly outperforms the compared methods.
Hu, L, Cao, L, Cao, J, Gu, Z, Xu, G & Yang, D 2017, 'Learning Informative Priors from Heterogeneous Domains to Improve Recommendation in Cold-Start User Domains', ACM Transactions on Information Systems, vol. 35, no. 2, pp. 1-37.
View/Download from: Publisher's site
View description>>
In the real-world environment, users have sufficient experience in their focused domains but lack experience in other domains. Recommender systems are very helpful for recommending potentially desirable items to users in unfamiliar domains, and cross-domain collaborative filtering is therefore an important emerging research topic. However, it is inevitable that the cold-start issue will be encountered in unfamiliar domains due to the lack of feedback data. The Bayesian approach shows that priors play an important role when there are insufficient data, which implies that recommendation performance can be significantly improved in cold-start domains if informative priors can be provided. Based on this idea, we propose a Weighted Irregular Tensor Factorization (WITF) model to leverage multi-domain feedback data across all users to learn the cross-domain priors w.r.t. both users and items. The features learned from WITF serve as the informative priors on the latent factors of users and items in terms of weighted matrix factorization models. Moreover, WITF is a unified framework for dealing with both explicit feedback and implicit feedback. To prove the effectiveness of our approach, we studied three typical real-world cases in which a collection of empirical evaluations were conducted on real-world datasets to compare the performance of our model and other state-of-the-art approaches. The results show the superiority of our model over comparison models.
Huang, J, Xu, L, Duan, Q, Xing, C-C, Luo, J & Yu, S 2017, 'Modeling and performance analysis for multimedia data flows scheduling in software defined networks', Journal of Network and Computer Applications, vol. 83, pp. 89-100.
View/Download from: Publisher's site
View description>>
Supporting diverse Quality of Service (QoS) performance for data flows generated by multimedia applications has been a challenging issue in Software Defined Networks (SDN). However, the currently available QoS provision mechanisms proposed for SDN have not fully considered the heterogeneity and performance diversity of multimedia data flows. To this end, this work presents a hybrid scheduling model that combines priority queueing with packet general-processor sharing to provide diverse QoS guarantees for multimedia applications in SDN. Network Calculus is applied to develop modeling and analysis techniques to evaluate the QoS performance of the proposed scheme. Performance bounds guaranteed by the proposed scheme for heterogeneous data flows, including their worst-case end-to-end delay and queueing backlog, are thus determined. Both analytical and simulation results show that the modeling and analysis techniques are general and flexible, and thus fully capable of modeling QoS for the diverse requirements of multimedia applications in SDN.
Hussain, W, Hussain, FK, Hussain, OK, Damiani, E & Chang, E 2017, 'Formulating and managing viable SLAs in cloud computing from a small to medium service provider's viewpoint: A state-of-the-art review', Information Systems, vol. 71, pp. 240-259.
View/Download from: Publisher's site
View description>>
In today's competitive world, service providers need to be customer-focused and proactive in their marketing strategies to create consumer awareness of their services. Cloud computing provides an open and ubiquitous computing feature in which a large random number of consumers can interact with providers and request services. In such an environment, there is a need for intelligent and efficient methods that increase confidence in the successful achievement of business requirements. One such method is the Service Level Agreement (SLA), which comprises service objectives, business terms, service relations, obligations and the possible action to be taken in the case of SLA violation. Most of the emphasis in the literature has, until now, been on the formation of meaningful SLAs by service consumers, through which their requirements will be met. However, in an increasingly competitive market based on the cloud environment, service providers too need a framework that will form a viable SLA, predict possible SLA violations before they occur, and generate early warning alarms that flag a potential lack of resources. This is because when a provider and a consumer commit to an SLA, the service provider is bound to reserve the agreed amount of resources for the entire period of that agreement – whether the consumer uses them or not. It is therefore very important for cloud providers to accurately predict the likely resource usage for a particular consumer and to formulate an appropriate SLA before finalizing an agreement. This problem is more important for a small to medium cloud service provider which has limited resources that must be utilized in the best possible way to generate maximum revenue. A viable SLA in cloud computing is one that intelligently helps the service provider to determine the amount of resources to offer to a requesting consumer, and there are a number of studies on SLA management in the literature. The aim of this paper is two-fold. First, it pr...
Hussain, W, Hussain, OK, Hussain, FK & Khan, MQ 2017, 'Usability Evaluation of English, Local and Plain Languages to Enhance On-Screen Text Readability: A Use Case of Pakistan', Global Journal of Flexible Systems Management, vol. 18, no. 1, pp. 33-49.
View/Download from: Publisher's site
View description>>
© 2016, Global Institute of Flexible Systems Management. In today’s digital world, information can very easily be accessed and digitally processed anywhere. Devices which are capable of processing digital data range from desktop computers to laptops, mobile phones, tablets, and personal digital assistants. For effective communication, text on a Web site should catch a reader’s attention and should be easy to both read and understand. Different constraints are associated with on-screen text readability and legibility, such as font size, color, and style, as well as foreground and background color contrast, line spacing, text congestion, vocabulary and grammar, but text recognition and comprehension are two of the major problems. In this study, we address the issue of how to enhance text readability for non-native English speakers who have a basic understanding of the English language and speak local languages which are not formally taught in academia. We select a use case in Pakistan, a country in which English and Urdu are the official languages, and a number of local languages are spoken in different parts of the country. Due to the wide variety of local languages, no Web site can support the many local language scripts or alphabets and display them on digital devices. When users with only a basic knowledge of English—particularly low-literate users from a local language background—try to read an English text, it is highly challenging for them to understand the meaning of words. In this study, we propose a plain language scheme in which a text is converted into a roman text. A roman text is formed by using the English alphabet and combining letters in such a way that when it is read, it sounds like a local language. To evaluate the applicability of our approach, we conducted a survey of users from different educational backgrounds, using a text written in English, local and plain language, with users who speak a particular local language. For each survey, we ...
Inan, DI & Beydoun, G 2017, 'Disaster Knowledge Management Analysis Framework Utilizing Agent-Based Models: Design Science Research Approach', Procedia Computer Science, vol. 124, pp. 116-124.
View/Download from: Publisher's site
Ivanyos, G, Qiao, Y & Subrahmanyam, KV 2017, 'Non-commutative Edmonds’ problem and matrix semi-invariants', computational complexity, vol. 26, no. 3, pp. 717-763.
View/Download from: Publisher's site
View description>>
© 2016, Springer International Publishing. In 1967, J. Edmonds introduced the problem of computing the rank over the rational function field of an n × n matrix T with integral homogeneous linear polynomial entries. In this paper, we consider the non-commutative version of Edmonds’ problem: compute the rank of T over the free skew field. This problem has been proposed, sometimes in disguise, from several different perspectives in the study of, for example, the free skew field itself (Cohn in J Symbol Log 38(2):309–314, 1973), matrix spaces of low rank (Fortin-Reutenauer in Sémin Lothar Comb 52:B52f, 2004), Edmonds’ original problem (Gurvits in J Comput Syst Sci 69(3):448–484, 2004), and more recently, non-commutative arithmetic circuits with divisions (Hrubeš and Wigderson in Theory Comput 11:357–393, 2015. doi:10.4086/toc.2015.v011a014). It is known that this problem relates to the following invariant ring, which we call the F-algebra of matrix semi-invariants, denoted R(n, m). For a field F, it is the ring of invariant polynomials for the action of SL(n, F) × SL(n, F) on tuples of matrices: (A, C) ∈ SL(n, F) × SL(n, F) sends (B_1, …, B_m) ∈ M(n, F)^{⊕m} to (AB_1C^T, …, AB_mC^T). Then those T with non-commutative rank < n correspond to those points in the nullcone of R(n, m). In particular, if the nullcone of R(n, m) is defined by elements of degree ≤ σ, then there follows a poly(n, σ)-time randomized algorithm to decide whether the non-commutative rank of T is full. To our knowledge, previously the best bound for σ was O(n^2·4^{n^2}) over algebraically closed fields of characteristic 0 (Derksen in Proc Am Math Soc 129(4):955–964, 2001). We now state the main contributions of this paper: We observe that by using an algorithm of Gurvits, and assuming the above bound σ for R(n, m) over Q, deciding whether or not T has non-commutative rank < n over Q can be done deterministically in time polynomial in the input size and σ. When F is large enough, we devise an algorithm ...
Jiang, J, Wen, S, Yu, S, Xiang, Y & Zhou, W 2017, 'Identifying Propagation Sources in Networks: State-of-the-Art and Comparative Studies', IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 465-481.
View/Download from: Publisher's site
View description>>
It has long been a significant but difficult problem to identify propagation sources based on limited knowledge of network structures and the varying states of network nodes. In practice, real cases include locating the sources of rumors in online social networks and finding the origins of a rolling blackout in smart grids. This paper reviews the state-of-the-art in source identification techniques and discusses the pros and cons of current methods in this field. Furthermore, in order to gain a quantitative understanding of current methods, we provide a series of experiments and comparisons based on various environment settings. In particular, our observations reveal considerable differences in performance when employing different network topologies, various propagation schemes, and diverse propagation probabilities. We therefore reach the following points for future work. First, current methods remain far from practical as their accuracy in terms of error distance (δ) is normally larger than three in most scenarios. Second, the majority of current methods are too time-consuming to quickly locate the origins of propagation. In addition, we list five open issues of current methods exposed by the analysis, from the perspectives of topology, number of sources, number of networks, temporal dynamics, and complexity and scalability. Solutions to these open issues are of great academic and practical significance.
Kaiwartya, O, Prasad, M, Prakash, S, Samadhiya, D, Abdullah, AH & Rahman, SOA 2017, 'An investigation on biometric internet security', International Journal of Network Security, vol. 19, no. 2, pp. 167-176.
View/Download from: Publisher's site
View description>>
Due to the Internet revolution in the last decade, each and every area of society is directly or indirectly dependent on computers, highly integrated computer networks and communication systems, electronic data storage and high-speed transfer devices, e-commerce, e-security, e-governance, and e-business. The Internet revolution has also emerged as a significant challenge due to the threats of hacking of systems and individual accounts, malware, fraud, and vulnerabilities of systems and networks. In this context, this paper explores e-security in terms of challenges and measurements. Biometric recognition is also investigated as a key e-security solution. E-security is precisely described to clarify the concept and its requirements. The major challenges of e-security, namely threats, attacks, and vulnerabilities, are presented in detail. Some measurements are identified and discussed for these challenges. Biometric recognition is discussed in detail, with the pros and cons of the approach as a key e-security solution. This investigation helps in a clear understanding of e-security challenges and the possible implementation of the identified measurements for these challenges in the wide area of network communications.
Kennedy, P, Macgregor, LJ, Barnhill, E, Johnson, CL, Perrins, M, Hunter, A, Brown, C, van Beek, EJR & Roberts, N 2017, 'MR elastography measurement of the effect of passive warmup prior to eccentric exercise on thigh muscle mechanical properties', Journal of Magnetic Resonance Imaging, vol. 46, no. 4, pp. 1115-1127.
View/Download from: Publisher's site
View description>>
Purpose: To investigate the effect of warmup by application of the thermal agent Deep Heat (DH) on muscle mechanical properties using magnetic resonance elastography (MRE) at 3T before and after exercise-induced muscle damage (EIMD). Materials and Methods: Twenty male participants performed an individualized protocol designed to induce EIMD in the quadriceps. DH was applied to the thigh in 50% of the participants before exercise. MRE, T2-weighted MRI, maximal voluntary contraction (MVC), creatine kinase (CK) concentration, and muscle soreness were measured before and after the protocol to assess EIMD effects. Five participants were excluded: four having not experienced EIMD and one due to incidental findings. Results: Total workload performed during the EIMD protocol was greater in the DH group than the control group (P < 0.03), despite no significant differences in baseline MVC (P = 0.23). Shear stiffness |G*| increased in the rectus femoris (RF) muscle in both groups (P < 0.03); however, DH was not a significant between-group factor (P = 0.15). MVC values returned to baseline faster in the DH group (5 days) than the control group (7 days). Participants who displayed hyperintensity on T2-weighted images had a greater stiffness increase following damage than those without: RF; 0.61 kPa vs. 0.15 kPa, P < 0.006, vastus intermedius; 0.34 kPa vs. 0.03 kPa, P = 0.06. Conclusion: EIMD produces increa...
Kieferová, M & Wiebe, N 2017, 'Tomography and generative training with quantum Boltzmann machines', Physical Review A, vol. 96, no. 6.
View/Download from: Publisher's site
Ko, L-W, Komarov, O, Hairston, WD, Jung, T-P & Lin, C-T 2017, 'Sustained Attention in Real Classroom Settings: An EEG Study', Frontiers in Human Neuroscience, vol. 11, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2017 Ko, Komarov, Hairston, Jung and Lin. Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra.
Kolamunna, H, Chauhan, J, Hu, Y, Thilakarathna, K, Perino, D, Makaroff, D & Seneviratne, A 2017, 'Are Wearables Ready for Secure and Direct Internet Communication?', GetMobile: Mobile Computing and Communications, vol. 21, no. 3, pp. 5-10.
View/Download from: Publisher's site
View description>>
Recent advances in wearable technology tend towards standalone wearables. Most of today's wearable devices and applications still rely on a paired smartphone for secure Internet communication, even though many current-generation wearables are equipped with Wi-Fi and 3G/4G network interfaces that provide direct Internet access. Yet it is not clear whether such communication can be efficiently and securely supported through existing protocols. Our findings show that secure and efficient direct communication between wearables and the Internet is possible.
Kuang, S, Dong, D & Petersen, IR 2017, 'Rapid Lyapunov control of finite-dimensional quantum systems', Automatica, vol. 81, pp. 164-175.
View/Download from: Publisher's site
Kurian, JC & John, BM 2017, 'User-generated content on the Facebook page of an emergency management agency', Online Information Review, vol. 41, no. 4, pp. 558-579.
View/Download from: Publisher's site
View description>>
Purpose: The purpose of this paper is to explore themes eventuating from the user-generated content posted by users on the Facebook page of an emergency management agency. Design/methodology/approach: An information classification framework was used to classify user-generated content posted by users, including all of the content posted during a six-month period (January to June 2015). The posts were read and analysed thematically to determine the overarching themes evident across the entire collection of user posts. Findings: The results of the analysis demonstrate that the key themes that eventuate from the user-generated content posted are “Self-preparedness”, “Emergency signalling solutions”, “Unsurpassable companion”, “Aftermath of an emergency”, and “Gratitude towards emergency management staff”. Major user-generated content identified among these themes are status-update, criticism, recommendation, and request. Research limitations/implications: This study contributes to theory on the development of key themes from user-generated content posted by users on a public social networking site. An analysis of user-generated content identified in this study implies that Facebook is primarily used for information dissemination, coordination and collaboration, and information seeking in the context of emergency management. Users may gain the benefits of identity construction and social provisions, whereas social conflict is a potential detrimental implication. Other user costs include lack of social support by stakeholders, investment in social infrastructure and additional work force required to allev...
Laengle, S, Loyola, G & Merigo, JM 2017, 'Mean-Variance Portfolio Selection With the Ordered Weighted Average', IEEE Transactions on Fuzzy Systems, vol. 25, no. 2, pp. 350-362.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Portfolio selection is the theory that studies the process of selecting the optimal proportion of different assets. The first approach was introduced by Harry Markowitz and was based on a mean-variance framework. This paper introduces the ordered weighted average (OWA) in the mean-variance model. The main idea is to replace the classical mean and variance by the OWA operator. By doing so, the new model is able to study different degrees of optimism and pessimism in the analysis, developing an approach that considers the decision maker's attitude in the selection process. This paper also suggests a new framework for dealing with the attitudinal character of the decision maker based on the numerical values of the available arguments. The main advantage of this method is its ability to adapt to many situations, offering a more complete representation of the available data from the most pessimistic situation to the most optimistic one. An illustrative example with fictitious data and a real example are studied.
Laengle, S, Merigó, JM, Miranda, J, Słowiński, R, Bomze, I, Borgonovo, E, Dyson, RG, Oliveira, JF & Teunter, R 2017, 'Forty years of the European Journal of Operational Research: A bibliometric overview', European Journal of Operational Research, vol. 262, no. 3, pp. 803-816.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. The European Journal of Operational Research (EJOR) published its first issue in 1977. This paper presents a general overview of the journal over its lifetime by using bibliometric indicators. We discuss its performance compared to other journals in the field and identify key contributing countries/institutions/authors as well as trends in research topics based on the Web of Science Core Collection database. The results indicate that EJOR is one of the leading journals in the area of operational research (OR) and management science (MS), with a wide range of authors from institutions and countries from all over the world publishing in it. Graphical visualization of similarities (VOS) provides further insights into how EJOR links to other journals and how it links researchers across the globe.
Lai, P-W, Ko, L-W, Wang, Y & Lin, C-T 2017, 'EEG-based assessment of pilot spatial navigation on an aviation simulator', Journal of Science and Medicine in Sport, vol. 20, pp. S37-S38.
View/Download from: Publisher's site
Le, M, Gabrys, B & Nauck, D 2017, 'A hybrid model for business process event and outcome prediction', Expert Systems, vol. 34, no. 5, pp. 1-11.
View/Download from: Publisher's site
View description>>
Large service companies run complex customer service processes to provide communication services to their customers. The flawless execution of these processes is essential because customer service is an important differentiator. They must also be able to predict if processes will complete successfully or run into exceptions in order to intervene at the right time, preempt problems and maintain customer service. Business process data are sequential in nature and can be very diverse. Thus, there is a need for an efficient sequential forecasting methodology that can cope with this diversity. This paper proposes two approaches, a sequential k nearest neighbour and an extension of Markov models, both with an added component based on sequence alignment. The proposed approaches exploit temporal categorical features of the data to predict the process next steps using higher order Markov models and the process outcomes using a sequence alignment technique. The diversity aspect of the data is also addressed by considering subsets of similar process sequences based on k nearest neighbours. We have shown, via a set of experiments, that our sequential k nearest neighbour offers better results when compared with the original ones; our extension of Markov models outperforms random guess, Markov models and hidden Markov models.
Lekitsch, B, Weidt, S, Fowler, AG, Mølmer, K, Devitt, SJ, Wunderlich, C & Hensinger, WK 2017, 'Blueprint for a microwave trapped ion quantum computer', Science Advances, vol. 3, no. 2, pp. 1-11.
View/Download from: Publisher's site
View description>>
Design to build a trapped ion quantum computer with modules connected by ion transport and voltage-driven quantum gate technology.
Li, J, Mei, X, Prokhorov, D & Tao, D 2017, 'Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 3, pp. 690-703.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
Li, X, Wang, L, Liu, Z & Dong, D 2017, 'Lower Bounds on the Proportion of Leaders Needed for Expected Consensus of 3-D Flocks', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 11, pp. 2555-2565.
View/Download from: Publisher's site
Li, Y & Qiao, Y 2017, 'On rank-critical matrix spaces', Differential Geometry and its Applications, vol. 55, pp. 68-77.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. A matrix space of size m×n is a linear subspace of the linear space of m×n matrices over a field F. The rank of a matrix space is defined as the maximal rank over matrices in this space. A matrix space A is called rank-critical, if any matrix space which properly contains it has rank strictly greater than that of A. In this note, we first exhibit a necessary and sufficient condition for a matrix space A to be rank-critical, when F is large enough. This immediately implies the sufficient condition for a matrix space to be rank-critical by Draisma (2006) [5], albeit requiring the field to be slightly larger. We then study rank-critical spaces in the context of compression and primitive matrix spaces. We first show that every rank-critical matrix space can be decomposed into a rank-critical compression matrix space and a rank-critical primitive matrix space. We then prove, using our necessary and sufficient condition, that the block-diagonal direct sum of two rank-critical matrix spaces is rank-critical if and only if both matrix spaces are primitive, when the field is large enough.
Lin, C-T, Chuang, C-H, Cao, Z, Singh, AK, Hung, C-S, Yu, Y-H, Nascimben, M, Liu, Y-T, King, J-T, Su, T-P & Wang, S-J 2017, 'Forehead EEG in Support of Future Feasible Personal Healthcare Solutions: Sleep Management, Headache Prevention, and Depression Treatment', IEEE Access, vol. 5, pp. 10612-10621.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. There are current limitations in the recording technologies for measuring EEG activity in clinical and experimental applications. Acquisition systems involving wet electrodes are time-consuming and uncomfortable for the user. Furthermore, dehydration of the gel affects the quality of the acquired data and reliability of long-term monitoring. As a result, dry electrodes may be used to facilitate the transition from neuroscience research or clinical practice to real-life applications. EEG signals can be easily obtained using dry electrodes on the forehead, which provides extensive information concerning various cognitive dysfunctions and disorders. This paper presents the usefulness of the forehead EEG with advanced sensing technology and signal processing algorithms to support people with healthcare needs, such as monitoring sleep, predicting headaches, and treating depression. The proposed system for evaluating sleep quality is capable of identifying five sleep stages to track nightly sleep patterns. Additionally, people with episodic migraines can be notified of an imminent migraine headache hours in advance through monitoring forehead EEG dynamics. The depression treatment screening system can predict the efficacy of rapid antidepressant agents. It is evident that frontal EEG activity is critically involved in sleep management, headache prevention, and depression treatment. The use of dry electrodes on the forehead allows for easy and rapid monitoring on an everyday basis. The advances in EEG recording and analysis ensure a promising future in support of personal healthcare solutions.
Lin, C-T, Liu, Y-T, Wu, S-L, Cao, Z, Wang, Y-K, Huang, C-S, King, J-T, Chen, S-A, Lu, S-W & Chuang, C-H 2017, 'EEG-Based Brain-Computer Interfaces: A Novel Neurotechnology and Computational Intelligence Method', IEEE Systems, Man, and Cybernetics Magazine, vol. 3, no. 4, pp. 16-26.
View/Download from: Publisher's site
View description>>
This article presents the latest BCI-related research done in our group. Our previous work applied computational intelligence technology in BCIs to inspire detailed investigations of practical issues in real-life applications. Novel EEG devices featuring dry electrodes facilitate and speed up electrode positioning before recording and allow subjects to move freely in operational environments. We also demonstrate the feasibility of applying CCA, RBFNs, effective connectivity measurements, and D-S theory to help BCIs extract informative knowledge from brain signals. Two recent trends in research in the computational and artificial intelligence community, big data and deep learning, are expected to impact the direction and development of BCIs.
Liu, C, Talaei-Khoei, A, Zowghi, D & Daniel, J 2017, 'Data Completeness in Healthcare: A Literature Survey', Pacific Asia Journal of the Association for Information Systems, vol. 9, no. 2, pp. 75-100.
Liu, F, Zhang, G & Lu, J 2017, 'Heterogeneous domain adaptation: An unsupervised approach', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 12, pp. 5588-5602.
View/Download from: Publisher's site
View description>>
Domain adaptation leverages the knowledge in one domain - the source domain - to improve learning efficiency in another domain - the target domain. Existing heterogeneous domain adaptation research is relatively well-progressed, but only in situations where the target domain contains at least a few labeled instances. In contrast, heterogeneous domain adaptation with an unlabeled target domain has not been well-studied. To contribute to the research in this emerging field, this paper presents: (1) an unsupervised knowledge transfer theorem that guarantees the correctness of transferring knowledge; and (2) a principal angle-based metric to measure the distance between two pairs of domains: one pair comprises the original source and target domains and the other pair comprises two homogeneous representations of two domains. The theorem and the metric have been implemented in an innovative transfer model, called a Grassmann-Linear monotonic maps-geodesic flow kernel (GLG), that is specifically designed for heterogeneous unsupervised domain adaptation (HeUDA). The linear monotonic maps meet the conditions of the theorem and are used to construct homogeneous representations of the heterogeneous domains. The metric shows the extent to which the homogeneous representations have preserved the information in the original source and target domains. By minimizing the proposed metric, the GLG model learns the homogeneous representations of heterogeneous domains and transfers knowledge through these learned representations via a geodesic flow kernel. To evaluate the model, five public datasets were reorganized into ten HeUDA tasks across three applications: cancer detection, credit assessment, and text classification. The experiments demonstrate that the proposed model delivers superior performance over the existing baselines.
Liu, M, Dou, W & Yu, S 2017, 'How to shutdown a cloud: a DDoS attack in a private infrastructure-as-a-service cloud', International Journal of Autonomous and Adaptive Communications Systems, vol. 10, no. 1, pp. 1-1.
View/Download from: Publisher's site
View description>>
Cloud computing has become a hot spot in both industry and academia due to its rapid elasticity and on-demand service. However, with the outsourcing of data and business applications to a third party, security and privacy issues have become a critical concern. To decrease cloud availability, which is one of the most representative security attributes, DDoS attacks can be launched. In this paper, we show how a hacker can launch a DDoS attack based on virtual machine (VM) co-residence to deny the service of a cloud data centre in a private infrastructure-as-a-service (IaaS) cloud. We first introduce how to launch this attack. Then we build a Markov-chain model to simulate this attack and analyse the performance of the cloud data centre. Finally, we also conduct several experiments to show the impact of VM co-residence on the performance of physical machines (PMs).
Liu, W, Gao, Y, Ma, H, Yu, S & Nie, J 2017, 'Online multi-objective optimization for live video forwarding across video data centers', Journal of Visual Communication and Image Representation, vol. 48, pp. 502-513.
View/Download from: Publisher's site
View description>>
The proliferation of video surveillance has led to surveillance video forwarding becoming a basic service in video data centers. End users in diverse locations require live video streams from the IP cameras through the inter-connected video data centers. Consequently, the resource scheduler, which is set up to assign the resources of the video data centers to each arriving end user, is in urgent need of achieving the global optimum in resource cost and forwarding delay. In this paper, we propose a multi-objective resource provisioning (MORP) approach to minimize the resource provisioning cost during live video forwarding. Different from existing works, MORP optimizes the resource provisioning cost in terms of both resource cost and forwarding delay. Moreover, as an approximate optimal approach, MORP adaptively assigns the proper media servers among video data centers, and connects these media servers together through network connections to provide system scalability and connectivity. Finally, we prove that the computational complexity of our online approach is only O(log(|U|)) (|U| is the number of arriving end users). The comprehensive evaluations show that our approach not only significantly reduces the resource provisioning cost, but also has a considerably shorter computational delay compared to the benchmark approaches.
Liu, Y, Huang, ML, Huang, W & Liang, J 2017, 'A physiognomy based method for facial feature extraction and recognition', Journal of Visual Languages & Computing, vol. 43, pp. 103-109.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd This paper proposes a novel calculation method of personality based on Chinese physiognomy. The proposed solution combines ancient and modern physiognomy to understand the relationship between personality and facial features and to model a baseline for shaping facial features. We compute a histogram of the image by searching for threshold values to create a binary image in an adaptive way. The two-pass connected component method indicates the feature's region. We encode the binary image to remove noise points, so that the new connected image can provide a better result. According to our analysis of contours, we can locate facial features and classify them by means of a calculation method. The number of clusters is decided by a model and the facial feature contours are classified by using the k-means method. The validity of our method was tested on a face database and demonstrated by a comparative experiment.
Liu, Y-T, Pal, NR, Marathe, AR, Wang, Y-K & Lin, C-T 2017, 'Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources', Frontiers in Neuroscience, vol. 11, no. JUN, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2017 Liu, Pal, Marathe, Wang and Lin. A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance...
Liu, Z, Wang, L, Wang, J, Dong, D & Hu, X 2017, 'Distributed sampled-data control of nonholonomic multi-robot systems with proximity networks', Automatica, vol. 77, pp. 170-179.
View/Download from: Publisher's site
Llopis-Albert, C, Merigó, JM, Xu, Y & Liao, H 2017, 'Improving Regional Climate Projections by Prioritized Aggregation via Ordered Weighted Averaging Operators', Environmental Engineering Science, vol. 34, no. 12, pp. 880-886.
View/Download from: Publisher's site
View description>>
© Copyright 2017, Mary Ann Liebert, Inc. 2017. Decision makers express a strong need for reliable information on future climate changes to develop the best mitigation and adaptation strategies to address impacts. These decisions are based on future climate projections that are simulated by using different Representative Concentration Pathways (RCPs), General Circulation Models (GCMs), and downscaling techniques to obtain high-resolution Regional Climate Models. RCPs defined by the Intergovernmental Panel on Climate Change entail a certain combination of the underlying driving forces behind climate and land use/land cover changes, which leads to different anthropogenic Greenhouse Gases concentration trajectories. Projections of global and regional climate change should also take into account relevant sources of uncertainty and stakeholders' risk attitudes when defining climate policies. The goal of this article is to improve regional climate projections by their prioritized aggregation through the ordered weighted averaging (OWA) operator. The aggregated projection is achieved by considering the similarity of the projections obtained by combining different GCMs, RCPs, and downscaling techniques. Relative weights of different projections to be aggregated by the OWA operator are obtained by regular increasing monotone fuzzy quantifiers, which enables modeling the stakeholders' risk attitudes. The methodology provides a robust decision-making tool to evaluate the performance of future climate projections and to design sustainable policies under uncertainty and risk tolerance, and it has been successfully applied to a real case study.
Lu, M, Lai, C, Ye, T, Liang, J & Yuan, X 2017, 'Visual Analysis of Multiple Route Choices Based on General GPS Trajectories', IEEE Transactions on Big Data, vol. 3, no. 2, pp. 234-247.
View/Download from: Publisher's site
View description>>
There are often multiple routes between regions. Drivers choose different routes with different considerations. Such considerations have always been a point of interest in the transportation area. Studies of route choice behaviour are usually based on small-range experiments with a group of volunteers. However, the experiment data is quite limited in its spatial and temporal scale as well as its practical reliability. In this work, we explore the possibility of studying route choice behaviour based on a general trajectory dataset, which is more realistic at a wider scale. We develop a visual analytics system to help users handle the large-scale trajectory data, compare different route choices, and explore the underlying reasons. Specifically, the system consists of: 1. the interactive trajectory filtering, which supports graphical trajectory queries; 2. the spatial visualization, which gives an overview of all feasible routes extracted from filtered trajectories; 3. the factor visual analytics, which provides the exploration and hypothesis construction of different factors' impact on route choice behaviour, and the verification with an integrated route choice model. Applying the system to a real taxi GPS dataset, we report its performance and demonstrate its effectiveness with three cases.
Lund, AP, Bremner, MJ & Ralph, TC 2017, 'Quantum Sampling Problems, BosonSampling and Quantum Supremacy', npj Quantum Information, vol. 3, no. 1, pp. 1-8.
View/Download from: Publisher's site
View description>>
There is a large body of evidence for the potential of greater computational power using information carriers that are quantum mechanical over those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a large enough quantum computation that definitively demonstrates quantum supremacy. Recently the study of computational problems that produce samples from probability distributions has added to both our understanding of the power of quantum algorithms and lowered the requirements for demonstration of fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also permit physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we will review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We will present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling.
Mahalleh, MKK, Ashjari, B, Yousefi, F & Saberi, M 2017, 'A Robust Solution to Resource-Constraint Project Scheduling Problem', International Journal of Fuzzy Logic and Intelligent Systems, vol. 17, no. 3, pp. 221-227.
View/Download from: Publisher's site
View description>>
© The Korean Institute of Intelligent Systems. This paper aims to propose a solution to the resource-constraint project scheduling problem (RCPSP). RCPSP is a significant scheduling problem in project management. Currently, there are insufficient studies dealing with the robustness of RCPSP. This paper improves the robustness of RCPSP and develops a Robust RCPSP, namely RRCSP. RRCSP is structured by relaxing a fundamental assumption, namely that 'the tasks start on time as planned'. Relaxing this assumption makes the model more realistic. The proposed solution minimizes the makespan while maximizing the robustness. Maximizing the robustness requires maximizing the floating time of activities (an NP-hard problem). This creates more stability in the project finishing time. RCPSP stands as the root cause of many other problems, such as multi-mode resource-constrained project scheduling problems (MRCPSP), multi-skill resource-constrained project scheduling problems (MSRCPSP), and similar problems, and hence proposing a solution to this problem helps pave a new line for future research in the other mentioned areas. The applicability of the proposed model is examined through a numerical example.
Mahdavi, A, Saberi, M, Jeloudar, GA & Shahroozian, E 2017, 'Effects of different levels of Aloe Vera gel on some of hematological, biochemical and immunological parameters in the chicken model', Koomesh, vol. 19, no. 1, pp. 135-143.
View description>>
Introduction: The present study was conducted to investigate the effects of subchronic administration of different levels of Aloe Vera gel on some biochemical and immunological factors in the chicken model. Material and Methods: Two hundred and forty one-day-old Ross 308 (male and female) broilers were used in a completely randomized design in 5 groups with 4 replicates, each replicate consisting of 12 broilers. The groups included one control group (basal diet) and four experimental groups with basal diet mixed with different levels of Aloe Vera gel (1%, 2% and 3%) plus virginiamycin antibiotic. Results: No significant difference between experimental groups was observed in serum albumin level and ALP activity on days 14, 28 and 42. On day 14, the variation of albumin and total protein levels was significant. On days 28 and 42, the level of blood glucose in the group receiving 3% Aloe Vera, and in the group receiving 1% Aloe Vera on day 28, was decreased significantly. On day 28, the count of lymphocytes was raised significantly in the group receiving 3% Aloe Vera. Also on this day, the highest heterophil count was found in the control and virginiamycin groups. On day 42, a significant rise in lymphocyte count was observed in all groups receiving Aloe Vera gel. On day 28, the level of antibody titers against sheep red blood cells was raised significantly. Conclusion: Daily supplementation with 1% Aloe Vera gel markedly potentiates cellular and humoral immunity in chickens and can be used as a food additive in order to prevent infections.
Mans, B & Mathieson, L 2017, 'Incremental Problems in the Parameterized Complexity Setting', Theory of Computing Systems, vol. 60, no. 1, pp. 3-19.
View/Download from: Publisher's site
View description>>
© 2016, Springer Science+Business Media New York. Dynamic systems are becoming steadily more important with the profusion of mobile and distributed computing devices. Coincidentally, incremental computation is a natural approach to deal with ongoing changes. We explore incremental computation in the parameterized complexity setting and show that incrementalization leads to non-trivial complexity classifications. Interestingly, some incremental versions of hard problems become tractable, while others remain hard. Moreover, tractability or intractability is not a simple function of the problem's static complexity: every level of the W-hierarchy exhibits complete problems with both tractable and intractable incrementalizations. For problems that are already tractable in their static form, we also show that incrementalization can lead to interesting algorithms, improving upon the trivial approach of using the static algorithm at each step.
Mao, M, Lu, J, Zhang, G & Zhang, J 2017, 'Multirelational Social Recommendations via Multigraph Ranking', IEEE Transactions on Cybernetics, vol. 47, no. 12, pp. 4049-4061.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Recommender systems aim to identify relevant items for particular users in large-scale online applications. The historical rating data of users is a valuable input resource for many recommendation models such as collaborative filtering (CF), but these models are known to suffer from the rating sparsity problem when the users or items under consideration have insufficient rating records. With the continued growth of online social networks, the increased user-to-user relationships are reported to be helpful and can alleviate the CF rating sparsity problem. Although researchers have developed a range of social network-based recommender systems, there is no unified model to handle multirelational social networks. To address this challenge, this paper represents different user relationships in a multigraph and develops a multigraph ranking model to identify and recommend the nearest neighbors of particular users in high-order environments. We conduct empirical experiments on two real-world datasets: 1) Epinions and 2) Last.fm, and the comprehensive comparison with other approaches demonstrates that our model improves recommendation performance in terms of both recommendation coverage and accuracy, especially when the rating data are sparse.
McGregor, C & Bonnis, B 2017, 'New Approaches for Integration: Integration of Haptic Garments, Big Data Analytics, and Serious Games for Extreme Environments', IEEE Consumer Electronics Magazine, vol. 6, no. 4, pp. 92-96.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. Haptic garments present new opportunities to increase realism in gaming. As Real As It Gets (ARAIG) is a new form of haptic garment that uses muscle stimulation, vibration, and 7.1 surround sound to provide a new level of realism in gaming. The integration of new haptic garments like ARAIG with big data analytics and serious games presents new opportunities for more realistic virtual training that has application in many domains. In particular, there is great potential to support repeatable virtual training for extreme environments. In 2016, the IEEE Life Sciences Technical Community worked across the IEEE Societies to demonstrate this interdisciplinary nature, with a focus on solving life science problems in extreme environments. This article is based on our keynote address at the International Conference on Consumer Electronics (ICCE)-Berlin in 2016. It provides an example of this interdisciplinary case study research in action.
Meng, Q, Catchpoole, D, Skillicorn, D & Kennedy, PJ 2017, 'DBNorm: normalizing high-density oligonucleotide microarray data based on distributions', BMC Bioinformatics, vol. 18, no. 1.
View/Download from: Publisher's site
View description>>
© 2017 The Author(s). Background: Data from patients with rare diseases is often produced using different platforms and probe sets because patients are widely distributed in space and time. Aggregating such data requires a method of normalization that makes patient records comparable. Results: This paper proposes DBNorm, an algorithm implemented as an R package that normalizes arbitrarily distributed data to a common, comparable form. Specifically, DBNorm merges data distributions by fitting functions to each of them, and using the probability of each element drawn from the fitted distribution to merge it into a global distribution. DBNorm contains state-of-the-art fitting functions including Polynomial, Fourier and Gaussian distributions, and also allows users to define their own fitting functions if required. Conclusions: The performance of DBNorm is compared with z-score, average difference, quantile normalization and ComBat on a set of datasets, including several that are publicly available. The performance of these normalization methods is compared using statistics, visualization, and classification when class labels are known, based on a number of self-generated and public microarray datasets. The experimental results show that DBNorm achieves better normalization results than conventional methods. Finally, the approach has the potential to be applicable outside bioinformatics analysis.
Merigó, JM & Yang, J 2017, 'Accounting Research: A Bibliometric Analysis', Australian Accounting Review, vol. 27, no. 1, pp. 71-100.
View/Download from: Publisher's site
View description>>
Bibliometrics is a fundamental field of information science that studies bibliographic material quantitatively. It is very useful for organising available knowledge within a specific scientific discipline. This study presents a bibliometric overview of accounting research using the Web of Science database, identifying the most relevant research in the field classified by papers, authors, journals, institutions and countries. The results show that the most influential journals are: The Journal of Accounting and Economics, Journal of Accounting Research, The Accounting Review and Accounting, Organizations and Society. It also shows that US institutions are the most influential worldwide. However, it is important to note that some very good research in this area, including a small number of papers and citations, may not show up in this study due to the specific characteristics of different subtopics.
Merigó, JM & Yang, J-B 2017, 'A bibliometric analysis of operations research and management science', Omega, vol. 73, pp. 37-48.
View/Download from: Publisher's site
View description>>
© 2016. Bibliometric analysis is the quantitative study of bibliographic material. It provides a general picture of a research field that can be classified by papers, authors and journals. This paper presents a bibliometric overview of research published in operations research and management science in recent decades. The main objective of this study is to identify some of the most relevant research in this field and some of the newest trends according to the information found in the Web of Science database. Several classifications are made, including an analysis of the most influential journals, the two hundred most cited papers of all time and the most productive and influential authors. The results obtained are in accordance with the common wisdom, although some variations are found.
Merigó, JM, Blanco-Mesa, F, Gil-Lafuente, AM & Yager, RR 2017, 'Thirty Years of the International Journal of Intelligent Systems: A Bibliometric Review', International Journal of Intelligent Systems, vol. 32, no. 5, pp. 526-554.
View/Download from: Publisher's site
View description>>
© 2016 Wiley Periodicals, Inc. The International Journal of Intelligent Systems was created in 1986. Today, the journal is 30 years old. To celebrate this anniversary, this study develops a bibliometric review of all of the papers published in the journal between 1986 and 2015. The results are largely based on the Web of Science Core Collection, which classifies leading bibliographic material by using several indicators including total number of publications and citations, the h-index, cites per paper, and citing articles. The work also uses the VOSviewer software for visualizing the main results through bibliographic coupling and co-citation. The results show a general overview of leading trends that have influenced the journal in terms of highly cited papers, authors, journals, universities and countries.
Merigó, JM, Linares-Mustarós, S & Ferrer-Comalat, JC 2017, 'Guest editorial', Kybernetes, vol. 46, no. 1, pp. 2-7.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Soto-Acosta, P 2017, 'Distance measures, weighted averages, OWA operators and Bonferroni means', Applied Soft Computing, vol. 50, pp. 356-366.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. The ordered weighted average (OWA) is an aggregation operator that provides a parameterized family of operators between the minimum and the maximum. This paper presents the OWA weighted average distance operator. The main advantage of this new approach is that it unifies the weighted Hamming distance and the OWA distance in the same formulation and considering the degree of importance that each concept has in the analysis. This operator includes a wide range of particular cases from the minimum to the maximum distance. Some further generalizations are also developed with generalized and quasi-arithmetic means. The use of Bonferroni means under this framework is also studied. The paper ends with an application of the new approach in a group decision making problem with Dempster-Shafer belief structure regarding the selection of strategies.
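The OWA aggregation described above can be sketched in a few lines. This is a minimal illustration of the basic OWA distance idea only, assuming Hamming-style absolute differences and a caller-supplied weight vector; it does not reproduce the paper's unified OWA weighted average distance operator or its Bonferroni extensions:

```python
def owa_distance(x, y, weights):
    """Ordered weighted average distance: the individual distances |x_i - y_i|
    are sorted in descending order, then aggregated with the OWA weights
    (nonnegative, summing to 1)."""
    diffs = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * d for w, d in zip(weights, diffs))

# The weight vector parameterizes the family between minimum and maximum:
d_max = owa_distance([1, 2, 3], [2, 4, 7], [1, 0, 0])          # maximum distance: 4
d_min = owa_distance([1, 2, 3], [2, 4, 7], [0, 0, 1])          # minimum distance: 1
d_avg = owa_distance([1, 2, 3], [2, 4, 7], [1/3, 1/3, 1/3])    # plain average
```

Because the distances are reordered before weighting, the weights act on rank positions rather than on fixed coordinates, which is what distinguishes the OWA distance from the ordinary weighted Hamming distance.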
Mirtalaie, MA, Hussain, OK, Chang, E & Hussain, FK 2017, 'A decision support framework for identifying novel ideas in new product development from cross-domain analysis', Information Systems, vol. 69, pp. 59-80.
View/Download from: Publisher's site
View description>>
In current competitive times, product manufacturers need not only to retain their existing customer base, but also to increase their market share. One way they can achieve this is by generating new ideas and developing novel products with new features. As highlighted in the literature, in generating new ideas to develop novel and innovative products, it is important that product designers satisfy the needs of both current customers and new customers. However, despite the large number of existing studies that identify novel features in the ideation phase, product designers do not have a systematic framework that utilises additional information relating to products from either far-field or related domains to generate such new ideas in the ideation phase. This paper presents our proposed framework FEATURE which provides just such a systemic framework for product designers in the ideation phase of new product development. FEATURE has three phases. The first phase identifies and recommends to the product designers novel features that can be added to the next version of a reference product. In order to incorporate the customer's voice into the ideation phase, the second phase ascertains the popularity of the proposed features by using social media. The third phase ranks the proposed features based on the designer's decision criteria to select those that should be considered further in the next phases of new product development. We explain the importance of each phase of FEATURE and show the working of its first module in detail.
Mukhopadhyay, P & Qiao, Y 2017, 'Sparse multivariate polynomial interpolation on the basis of Schubert polynomials', computational complexity, vol. 26, no. 4, pp. 881-909.
View/Download from: Publisher's site
View description>>
© 2016, Springer International Publishing. Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in the study of cohomology rings of flag manifolds in the 1980s. These polynomials generalize Schur polynomials and form a linear basis of multivariate polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials, which generalize skew Schur polynomials and expand in the Schubert basis with the generalized Littlewood–Richardson coefficients. In this paper, we initiate the study of these two families of polynomials from the perspective of computational complexity theory. We first observe that skew Schubert polynomials, and therefore Schubert polynomials, are in #P (when evaluating on nonnegative integral inputs) and VNP. Our main result is a deterministic algorithm that computes the expansion of a polynomial f of degree d in Z[x1, …, xn] on the basis of Schubert polynomials, assuming an oracle computing Schubert polynomials. This algorithm runs in time polynomial in n, d, and the bit size of the expansion. This generalizes, and derandomizes, the sparse interpolation algorithm of symmetric polynomials in the Schur basis by Barvinok and Fomin (Adv Appl Math 18(3):271–285, 1997). In fact, our interpolation algorithm is general enough to accommodate any linear basis satisfying certain natural properties. Applications of the above results include a new algorithm that computes the generalized Littlewood–Richardson coefficients.
Narayan, N, Morenos, L, Phipson, B, Willis, SN, Brumatti, G, Eggers, S, Lalaoui, N, Brown, LM, Kosasih, HJ, Bartolo, RC, Zhou, L, Catchpoole, D, Saffery, R, Oshlack, A, Goodall, GJ & Ekert, PG 2017, 'Functionally distinct roles for different miR-155 expression levels through contrasting effects on gene expression, in acute myeloid leukaemia', Leukemia, vol. 31, no. 4, pp. 808-820.
View/Download from: Publisher's site
View description>>
Enforced expression of microRNA-155 (miR-155) in myeloid cells has been shown to have both oncogenic and tumour-suppressor functions in acute myeloid leukaemia (AML). We sought to resolve these contrasting effects of miR-155 overexpression using murine models of AML and human paediatric AML data sets. We show that the highest miR-155 expression levels inhibited proliferation in murine AML models. Over time, enforced miR-155 expression in AML in vitro and in vivo, however, favours selection of intermediate miR-155 expression levels that result in increased tumour burden in mice, without accelerating the onset of disease. Strikingly, we show that intermediate and high miR-155 expression also regulate very different subsets of miR-155 targets and have contrasting downstream effects on the transcriptional environments of AML cells, including genes involved in haematopoiesis and leukaemia. Furthermore, we show that elevated miR-155 expression detected in paediatric AML correlates with intermediate and not high miR-155 expression identified in our experimental models. These findings collectively describe a novel dose-dependent role for miR-155 in the regulation of AML, which may have important therapeutic implications.
Nemoto, K, Devitt, S & Munro, WJ 2017, 'Noise management to achieve superiority in quantum information systems', Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 375, no. 2099, pp. 20160236-20160236.
View/Download from: Publisher's site
View description>>
Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue ‘Quantum technology for the 21st century’.
Niazi, M, Mahmood, S, Alshayeb, M, Baqais, AAB & Gill, AQ 2017, 'Motivators for adopting social computing in global software development: An empirical study', Journal of Software: Evolution and Process, vol. 29, no. 8, pp. e1872-e1872.
View/Download from: Publisher's site
View description>>
Copyright © 2017 John Wiley & Sons, Ltd. Managing communication in collaborative global software development (GSD) projects is both critical and challenging. While social computing has received much attention from practitioners, social computing adoption is still an emerging research area in GSD. This research paper provides a review of the academic research in social computing and identifies motivators for adopting social computing in the GSD context. We applied a systematic literature review (SLR) and a questionnaire survey with 35 software industry experts to address the research objective. Firstly, we implemented a formal SLR approach and identified an initial set of social computing adoption motivators. Secondly, a questionnaire survey was developed based on the SLR and was tested by means of a pilot study. The findings of this combined SLR and questionnaire survey indicate that real-time communication and coordination, knowledge acquisition, expert feedback, and information sharing are the key factors that motivate social computing adoption in GSD projects. The results of the t-test (i.e., t = .558, P = .589) show that there is no significant difference between the findings of the SLR and the questionnaire. The results of this study suggest the need for developing social computing strategies and policies to guide the strategic adoption of social computing tools in GSD projects.
Oberst, S, Bann, G, Lai, JCS & Evans, TA 2017, 'Cryptic termites avoid predatory ants by eavesdropping on vibrational cues from their footsteps', Ecology Letters, vol. 20, no. 2, pp. 212-221.
View/Download from: Publisher's site
View description>>
Eavesdropping has evolved in many predator–prey relationships. Communication signals of social species may be particularly vulnerable to eavesdropping, such as the pheromones produced by ants, which are predators of termites. Termites communicate mostly by way of substrate-borne vibrations, which suggests they may be able to eavesdrop using two possible mechanisms: ant chemicals or ant vibrations. We observed termites foraging within millimetres of ants in the field, suggesting the evolution of specialised detection behaviours. We found the termite Coptotermes acinaciformis detected their major predator, the ant Iridomyrmex purpureus, through thin wood using only vibrational cues from walking, and not chemical signals. Comparison of 16 termite and ant species found that the ants' walking signals were up to 100 times higher than those of termites. Eavesdropping on passive walking signals explains the predator detection and foraging behaviours in this ancient relationship, which may be applicable to many other predator–prey relationships.
Oberst, S, Marburg, S & Hoffmann, N 2017, 'Determining periodic orbits via nonlinear filtering and recurrence spectra in the presence of noise', Procedia Engineering, vol. 199, pp. 772-777.
View/Download from: Publisher's site
Pan, Y, Dong, D & Petersen, IR 2017, 'Dark Modes of Quantum Linear Systems', IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 4180-4186.
View/Download from: Publisher's site
Peng, S, Yang, A, Cao, L, Yu, S & Xie, D 2017, 'Social influence modeling using information theory in mobile social networks', Information Sciences, vol. 379, pp. 146-159.
View/Download from: Publisher's site
View description>>
Social influence analysis has become one of the most important technologies in modern information and service industries. Thus, how to measure the social influence of one user on other users in a mobile social network is also becoming increasingly important. It is helpful to identify the influential users in mobile social networks, and also helpful to provide important insights into the design of social platforms and applications. However, social influence modeling is an open and challenging issue, and most evaluation models are focused on online social networks but fail to characterize indirect influence. In this paper, we present a mechanism to quantitatively measure social influence in mobile social networks. We exploit graph theory to construct a social relationship graph that establishes a solid foundation for the basic understanding of social influence. We present an evaluation model to measure both direct and indirect influence based on the social relationship graph, by introducing friend entropy and interaction frequency entropy to describe the complexity and uncertainty of social influence. Based on the epidemic model, we design an algorithm to characterize the propagation dynamics of social influence, and we evaluate the performance of our solution by using a customized program on the basis of a real-world SMS/MMS-based communication data set. The real-world numerical simulations and analysis show that the proposed influence evaluation strategies can characterize the social influence of mobile social networks effectively and efficiently.
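The entropy notions mentioned above can be illustrated with ordinary Shannon entropy over a user's interaction counts. This is a generic sketch under that assumption; the paper's exact definitions of friend entropy and interaction frequency entropy may differ, and the counts are invented:

```python
import math

def interaction_entropy(message_counts):
    """Shannon entropy (in bits) of a user's interaction distribution across
    contacts -- higher entropy means interactions are spread more evenly,
    i.e., more uncertainty about whom the user will contact next."""
    total = sum(message_counts)
    probs = [c / total for c in message_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# A user spreading messages evenly over 4 contacts is maximally uncertain:
even = interaction_entropy([10, 10, 10, 10])    # 2.0 bits
# One dominant contact lowers the entropy:
skewed = interaction_entropy([37, 1, 1, 1])
```

In an influence model of this kind, such entropies serve as features capturing how concentrated or diffuse a user's social interactions are.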
Pietroni, N, Tarini, M, Vaxman, A, Panozzo, D & Cignoni, P 2017, 'Position-based tensegrity design.', ACM Trans. Graph., vol. 36, pp. 172:1-172:1.
View/Download from: Publisher's site
Pileggi, SF & Hunter, J 2017, 'An ontological approach to dynamic fine-grained Urban Indicators', Procedia Computer Science, vol. 108, pp. 2059-2068.
View/Download from: Publisher's site
View description>>
© 2017 The Authors. Published by Elsevier B.V. Urban indicators provide a unique multi-disciplinary data framework which social scientists, planners and policy makers employ to understand and analyze the complex dynamics of metropolitan regions. Indicators provide an independent, quantitative measure or benchmark of an aspect of an urban environment, by combining different metrics for a given region. While the current approach to urban indicators involves the systematic accurate collection of the raw data required to produce reliable indicators and the standardization of well-known commonly accepted or widely adopted indicators, the next generation of indicators is expected to support a more dynamic, customizable, fine-grained approach to indicators, via a context of interoperability and linked open data. Within this paper, we address these emerging requirements through an ontological approach aimed at (i) establishing interoperability among heterogeneous data sets, (ii) expressing the high-level semantics of the indicators, (iii) supporting indicator adaptability and dynamic composition for specific applications and (iv) representing properly the uncertainties of the resulting ecosystem.
Prasad, M, Lin, C-T, Li, D-L, Hong, C-T, Ding, W-P & Chang, J-Y 2017, 'Soft-Boosted Self-Constructing Neural Fuzzy Inference Network', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 3, pp. 584-588.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. This correspondence paper proposes an improved version of the self-constructing neural fuzzy inference network (SONFIN), called soft-boosted SONFIN (SB-SONFIN). The design softly boosts the learning process of the SONFIN in order to decrease the error rate and enhance the learning speed. The SB-SONFIN boosts the learning power of the SONFIN by taking into account the number of fuzzy rules and the initial weights, which are two important parameters of the SONFIN. SB-SONFIN advances the learning process by: 1) initializing the weights with the width of the fuzzy sets rather than just with random values and 2) improving the parameter learning rates with the number of learned fuzzy rules. The effectiveness of the proposed soft boosting scheme is validated on several real-world and benchmark datasets. The experimental results show that the SB-SONFIN possesses the capability to outperform other known methods on various datasets.
Prasad, M, Liu, Y-T, Li, D-L, Lin, C-T, Shah, RR & Kaiwartya, OP 2017, 'A New Mechanism for Data Visualization with Tsk-Type Preprocessed Collaborative Fuzzy Rule Based System', Journal of Artificial Intelligence and Soft Computing Research, vol. 7, no. 1, pp. 33-46.
View/Download from: Publisher's site
View description>>
A novel data knowledge representation combining the structure learning ability of preprocessed collaborative fuzzy clustering with the fuzzy expert knowledge of a Takagi-Sugeno-Kang type model is presented in this paper. The proposed method divides a huge dataset into two or more subsets. These subsets interact with each other through a collaborative mechanism in order to find similar properties within each other. The proposed method is useful in dealing with big data issues since it divides a huge dataset into subsets and finds common features among them. The salient feature of the proposed method is that it uses a small subset of the dataset and some common features instead of the entire dataset and all the features. Before the interactions among subsets, the proposed method applies a mapping technique to granules of data and the centroids of clusters. The proposed method uses information from only about half of the data patterns for the training process, yet provides an accurate and robust model, whereas the other existing methods use the entire information of the data patterns. Simulation results show the proposed method performs better than existing methods on some benchmark problems.
Pratama, M, Lu, J, Lughofer, E, Zhang, G & Er, MJ 2017, 'An Incremental Learning of Concept Drifts Using Evolving Type-2 Recurrent Fuzzy Neural Networks', IEEE Transactions on Fuzzy Systems, vol. 25, no. 5, pp. 1175-1192.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The age of online data stream and dynamic environments results in the increasing demand of advanced machine learning techniques to deal with concept drifts in large data streams. Evolving fuzzy systems (EFS) are one of recent initiatives from the fuzzy system community to resolve the issue. Existing EFSs are not robust against data uncertainty, temporal system dynamics, and the absence of system order, because a vast majority of EFSs are designed in the type-1 feedforward network architecture. This paper aims to solve the issue of data uncertainty, temporal behavior, and the absence of system order by developing a novel evolving recurrent fuzzy neural network, called evolving type-2 recurrent fuzzy neural network (eT2RFNN). eT2RFNN is constructed in a new recurrent network architecture, featuring double recurrent layers. The new recurrent network architecture evolves a generalized interval type-2 fuzzy rule, where the rule premise is built upon the interval type-2 multivariate Gaussian function, whereas the rule consequent is crafted by the nonlinear wavelet function. The eT2RFNN adopts a holistic concept of evolving systems, where the fuzzy rule can be automatically generated, pruned, merged, and recalled in the single-pass learning mode. eT2RFNN is capable of coping with the problem of high dimensionality because it is equipped with online feature selection technology. The efficacy of eT2RFNN was experimentally validated using artificial and real-world data streams and compared with prominent learning algorithms. eT2RFNN produced more reliable predictive accuracy, while retaining lower complexity than its counterparts.
Qi, M, Sun, T, Zhang, H, Zhu, M, Yang, W, Shao, D & Voinov, A 2017, 'Maintenance of salt barrens inhibited landward invasion of Spartina species in salt marshes', Ecosphere, vol. 8, no. 10, pp. e01982-e01982.
View/Download from: Publisher's site
View description>>
Spartina spp. (cordgrasses) often dominate intertidal mudflats and/or low marshes. The landward invasion of these species was typically thought to be restrained by low tidal inundation frequencies and interspecific competition. We noticed that the reported soil salinity levels in some salt marshes were much higher than those at the mean higher high water level, which might inhibit the landward invasion of cordgrass. To test this possibility, we transplanted Spartina alterniflora across an elevational gradient in an invaded salt marsh in the Yellow River Delta National Nature Reserve, where a salt accumulation zone (i.e., salt barren) was previously observed. We found that S. alterniflora was significantly inhibited by the salt barren in high marsh regions, although it performed better at upland and low marsh regions. A common garden experiment further elucidated that S. alterniflora performed best at low salinity levels and that this species is less sensitive to inundation frequency. Our results indicated that the salt barren inhibited the landward invasion of S. alterniflora in salt marshes and provided a natural barrier to protect the upland from invasion. Though field observations suggest that S. alterniflora could propagate along tidal channels, which provide low-salinity corridors for the dispersal of propagules, natural salt barrens can inhibit the landward invasion of Spartina in salt marshes. However, artificial disturbances that break the salt barren band in salt marshes (e.g., artificial ditches) might accelerate the invasion of Spartina spp. This new finding should alert salt marsh managers to pay attention to artificial ditches and/or other human activities when attempting to control Spartina...
Qiao, M, Liu, L, Yu, J, Xu, C & Tao, D 2017, 'Diversified dictionaries for multi-instance learning', Pattern Recognition, vol. 64, pp. 407-416.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd Multiple-instance learning (MIL) has been a popular topic in the study of pattern recognition for years due to its usefulness for such tasks as drug activity prediction and image/text classification. In a typical MIL setting, a bag contains a bag-level label and more than one instance/pattern. How to bridge instance-level representations to bag-level labels is a key step to achieve satisfactory classification accuracy results. In this paper, we present a supervised learning method, diversified dictionaries MIL, to address this problem. Our approach, on the one hand, exploits bag-level label information for training class-specific dictionaries. On the other hand, it introduces a diversity regularizer into the class-specific dictionaries to avoid ambiguity between them. To the best of our knowledge, this is the first time that the diversity prior is introduced to solve the MIL problems. Experiments conducted on several benchmark (drug activity and image/text annotation) datasets show that the proposed method compares favorably to state-of-the-art methods.
Qiao, M, Xu, RYD, Bian, W & Tao, D 2017, 'Fast Sampling for Time-Varying Determinantal Point Processes', ACM Transactions on Knowledge Discovery from Data, vol. 11, no. 1, pp. 1-24.
View/Download from: Publisher's site
View description>>
Determinantal Point Processes (DPPs) are stochastic models which assign each subset of a base dataset a probability proportional to the subset's degree of diversity. It has been shown that DPPs are particularly appropriate for data subset selection and summarization (e.g., news display, video summarization), since DPPs prefer diverse subsets, a property that conventional models do not offer. However, DPP inference algorithms have a polynomial time complexity, which makes it difficult to handle large and time-varying datasets, especially when real-time processing is required. To address this limitation, we developed a fast sampling algorithm for DPPs which takes advantage of the nature of some time-varying data (e.g., news corpora updating, communication networks evolving), where the data changes between time stamps are relatively small. The proposed algorithm is built upon the simplification of marginal density functions over successive time stamps and the sequential Monte Carlo (SMC) sampling technique. Evaluations on both a real-world news dataset and the Enron Corpus confirm the efficiency of the proposed algorithm.
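The diversity preference at the heart of a DPP can be sketched directly from its definition: the probability of a subset S is proportional to the determinant of the kernel's principal minor L_S. This is an illustration of that definition only, with an invented similarity kernel, and does not reproduce the paper's SMC-based sampler:

```python
def det(m):
    """Determinant via Laplace expansion (fine for the tiny kernels here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def dpp_unnormalized_prob(L, subset):
    """P(S) is proportional to det(L_S); similar items make L_S
    near-singular, so diverse subsets receive higher probability."""
    sub = [[L[i][j] for j in subset] for i in subset]
    return det(sub) if subset else 1.0

# Items 0 and 1 are highly similar (off-diagonal 0.9); item 2 is unrelated.
L = [[1.0, 0.9, 0.0],
     [0.9, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
similar_pair = dpp_unnormalized_prob(L, [0, 1])   # ≈ 0.19 (= 1 - 0.9**2)
diverse_pair = dpp_unnormalized_prob(L, [0, 2])   # 1.0
```

The diverse pair is roughly five times more probable than the similar pair, which is the behaviour that makes DPPs attractive for news display and video summarization.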
Quan, W, Liu, Y, Zhang, H & Yu, S 2017, 'Enhancing Crowd Collaborations for Software Defined Vehicular Networks', IEEE Communications Magazine, vol. 55, no. 8, pp. 80-86.
View/Download from: Publisher's site
View description>>
Vehicular networking is promising to improve traffic efficiency and driving safety, as well as travel experience. However, the traditional network employs a highly coupled design, which is quite limited in its ability to satisfy various challenging vehicular demands. Recently, new studies focus on how to design software defined vehicular networks smartly to meet various vehicular demands. In this article, we investigate a new smart identifier networking (SINET) paradigm and propose a SINET customized solution enabling crowd collaborations for software defined vehicular networks (SINET-V). In particular, through crowd sensing, network function slices are well organized with a group of function-similar components. Different function slices are further driven to serve various applications by using crowd collaborations. We clearly illustrate how SINET-V works and also analyze its potential advantages in several special vehicular instances. Experimental results show that SINET-V has great potential to promote powerful vehicular networks.
Ramezani, F, Lu, J, Taheri, J & Zomaya, AY 2017, 'A Multi-Objective Load Balancing System for Cloud Environments', The Computer Journal, vol. 60, no. 9, pp. 1316-1337.
View/Download from: Publisher's site
View description>>
© 2017 The British Computer Society. All rights reserved. Virtual machine (VM) live migration has been applied to system load balancing in cloud environments for the purpose of minimizing VM downtime and maximizing resource utilization. However, the migration process is both time- and cost-consuming as it requires the transfer of large files or memory pages and consumes a huge amount of power and memory for the origin and destination physical machines (PMs), especially for storage VM migration. This process also leads to VM downtime or slowdown. To deal with these shortcomings, we develop a Multi-objective Load Balancing (MO-LB) system that avoids VM migration and achieves system load balancing by transferring extra workload from a set of VMs allocated on an overloaded PM to other compatible VMs in the cluster with greater capacity. To reduce the time factor even more and optimize load balancing over a cloud cluster, MO-LB contains a CPU Usage Prediction (CUP) sub-system. The CUP not only predicts the performance of the VMs but also determines a set of appropriate VMs with the potential to execute the extra workload imposed on the VMs of an overloaded PM. We also design a Multi-Objective Task Scheduling optimization model using Particle Swarm Optimization to migrate the extra workload to the compatible VMs. The proposed method is evaluated using a VMware-vSphere-based private cloud in contrast to the VM migration technique applied by vMotion. The evaluation results show that the MO-LB system dramatically increases VM performance while reducing service response time, memory usage, job makespan, power consumption and the time taken for the load balancing process.
Ritter, SM & Ferguson, S 2017, 'Happy creativity: Listening to happy music facilitates divergent thinking', PLOS ONE, vol. 12, no. 9, pp. e0182210-e0182210.
View/Download from: Publisher's site
View description>>
Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition, the ability to come up with creative ideas, problem solutions and products, is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.
Romeo, M, Yepes-Baldó, M, Boria-Reverter, S & Merigó, JM 2017, 'Twenty-five years of research on work and organizational psychology: A bibliometric perspective', Anuario de Psicología, vol. 47, no. 1, pp. 32-44.
View/Download from: Publisher's site
View description>>
© 2017 Universitat de Barcelona The research aims to analyze the scientific productivity in the field of work/organizational psychology (WOP) in the last 25 years. We focus our analysis on the most influential journals and articles, generally and for 5-year periods, as well as structures of co-citation among the highest quality journals based on their h-index. We found that a high percentage of papers published each year receive between 5 and 10 cites. Secondly, we observe an exponential increase in the number of papers published, citations, and h-index. Additionally, the number of self-citations significantly increases in the last 5 years. In this sense, we consider that the most recent papers need more time to increase their level of citation and, subsequently, to correct the bias on self-citation. This research shows the status of research in the field of work/organizational psychology, analyzing the scientific journals and papers published in the Web of Science.
Saberi, M, Khadeer Hussain, O & Chang, E 2017, 'Past, present and future of contact centers: a literature review', Business Process Management Journal, vol. 23, no. 3, pp. 574-597.
View/Download from: Publisher's site
View description>>
Purpose: Contact centers (CCs) are one of the main touch points of customers in an organization. They form one of the inputs to customer relationship management (CRM) to enable an organization to efficiently resolve customer queries. CCs have an important impact on customer satisfaction and are a strategic asset for CRM systems. The purpose of this paper is to review the current literature on CCs and identify their shortcomings to be addressed in the current digital age. Design/methodology/approach: The current literature on CCs can be classified into the analytical and the managerial aspects of CCs. In the former, data mining, text mining, and voice recognition techniques are discussed, and in the latter, staff training, CC performance, and outsourced CCs are discussed. Findings: With the growth of information and communication technologies, the information that CCs must handle, both in terms of type and volume, has changed. To deal with such changes, CCs need to evolve in terms of their operation and public relations. The authors present a state-of-the-art review of the challenges, identifying the gaps that must be addressed to realize the next generation of CCs. The lack of an interactive CC and the lack of data integrity for CCs are highlighted as important issues that CCs need to deal with properly. Originality/value: As far as the authors know, this is the first paper that reviews the CC literature by providing a comprehensive survey, critical evaluation, and directions for future research.
Saberi, M, Pu, Q, Valasek, P, Norizadeh-Abbariki, T, Patel, K & Huang, R 2017, 'The hypaxial origin of the epaxially located rhomboid muscles', Annals of Anatomy - Anatomischer Anzeiger, vol. 214, pp. 15-20.
View/Download from: Publisher's site
Saletta, F, Vilain, RE, Gupta, AK, Nagabushan, S, Yuksel, A, Catchpoole, D, Scolyer, RA, Byrne, JA & McCowage, G 2017, 'Programmed Death-Ligand 1 Expression in a Large Cohort of Pediatric Patients With Solid Tumor and Association With Clinicopathologic Features in Neuroblastoma', JCO Precision Oncology, vol. 1, no. 1, pp. 1-12.
View/Download from: Publisher's site
View description>>
Purpose: Programmed death-ligand 1 (PD-L1) expression represents a potential predictive biomarker of immune checkpoint blockade response. However, literature about the prevalence of PD-L1 expression in the pediatric cancer setting is discordant. Methods: PD-L1 expression was analyzed using immunohistochemistry in 500 pediatric tumors (including neuroblastoma, sarcomas, and brain cancers). Tumors with ≥ 1% cells showing PD-L1 membrane staining of any intensity were scored as positive. Positive cases were further characterized, with cases with weak intensity PD-L1 staining reported as having low PD-L1 expression and cases with a moderate or strong intensity of staining considered to have high PD-L1 expression. Results: PD-L1–positive staining was identified in 13% of cases, whereas high PD-L1 expression was found in 3% of cases. Neuroblastoma (n = 254) showed PD-L1 expression of any intensity in 18.9% of cases and was associated with longer overall survival (P = .045). However, high PD-L1 expression in neuroblastoma (3.1%) was significantly associated with an increased risk of relapse (P = .002). Positive PD-L1 staining was observed more frequently in low- and intermediate-risk patients (P = .037) and in cases lacking MYCN amplification (P = .002). Conclusion: In summary, high PD-L1 expression in patients with neuroblastoma may represent an unfavorable prognostic factor associated with a higher risk of cancer relapse. This work proposes PD-L1 immunohistochemical assessment as a novel parameter for identifying patients with an increased likelihood of cancer recurrence.
Saxena, A, Prasad, M, Gupta, A, Bharill, N, Patel, OP, Tiwari, A, Er, MJ, Ding, W & Lin, C-T 2017, 'A review of clustering techniques and developments', Neurocomputing, vol. 267, pp. 664-681.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. This paper presents a comprehensive study on clustering: existing methods and developments made at various times. Clustering is defined as an unsupervised learning method in which objects are grouped on the basis of some similarity inherent among them. There are different methods for clustering objects, such as hierarchical, partitional, grid, density-based and model-based. The approaches used in these methods are discussed with their respective states of the art and applicability. The measures of similarity as well as the evaluation criteria, which are the central components of clustering, are also presented in the paper. The applications of clustering in some fields like image segmentation, object and character recognition and data mining are highlighted.
Seneviratne, S, Hu, Y, Nguyen, T, Lan, G, Khalifa, S, Thilakarathna, K, Hassan, M & Seneviratne, A 2017, 'A Survey of Wearable Devices and Challenges', IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2573-2620.
View/Download from: Publisher's site
View description>>
© 1998-2012 IEEE. As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices: wearable mobile devices, or simply wearables as they are often called. Wearables come in many different forms and flavors, targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing the popular wearables, followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in the wearable market and research.
Sharma, S, Puthal, D, Tazeen, S, Prasad, M & Zomaya, AY 2017, 'MSGR: A Mode-Switched Grid-Based Sustainable Routing Protocol for Wireless Sensor Networks', IEEE Access, vol. 5, pp. 19864-19875.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. A Wireless Sensor Network (WSN) consists of an enormous number of sensor nodes. These sensor nodes sense changes in physical parameters within their sensing range and forward the information to the sink nodes or the base station. Since sensor nodes are powered by batteries of limited capacity, prolonging the network lifetime is difficult and very expensive, especially in hostile locations. Therefore, routing protocols for WSNs must strategically distribute energy dissipation so as to increase the overall lifetime of the system. Current research trends in areas such as the Internet of Things and fog computing use sensors as the source of data. Therefore, energy-efficient data routing in WSNs is still a challenging task for real-time applications. Hierarchical grid-based routing is an energy-efficient method for routing data packets. This method divides the sensing area into grids and is advantageous in wireless sensor networks for enhancing network lifetime. The network is partitioned into virtual equal-sized grids. The proposed mode-switched grid-based routing protocol for WSNs selects one node per grid as the grid head. The routing path to the sink is established using grid heads. Grid heads are switched between active and sleep modes alternately. Therefore, not all grid heads take part in the routing process at the same time. This saves energy in grid heads and improves the network lifetime. The proposed method builds a routing path to the sink using each active grid head. To handle mobile sink movement, the routing path changes only for the grid head nodes nearer to the grid in which the mobile sink is currently positioned. Data packets generated at any source node are routed directly through the data-disseminating grid head nodes on the routing path to the sink.
Shehabat, A, Mitew, T & Alzoubi, Y 2017, 'Encrypted Jihad: Investigating the Role of Telegram App in Lone Wolf Attacks in the West', Journal of Strategic Security, vol. 10, no. 3, pp. 27-53.
View/Download from: Publisher's site
View description>>
The study aims to capture links between the use of the encrypted communication channel Telegram and lone wolf attacks that occurred in Europe between 2015 and 2016. To understand the threads of ISIS communication on Telegram, we used a digital ethnography approach consisting of self-observation of information flows on four of ISIS's most celebrated Telegram channels. We draw on public sphere theory and coin the term terror socio-sphere 3.0 as the theoretical background of this study. The collected data are presented as screenshots to provide visual evidence of ISIS communication threads. This study shows that ISIS Telegram channels play a critical role in personal communication with potential recruits and in the dissemination of propaganda that encourages 'lone wolves' to carry out attacks in the world at large. This study was limited to the channels that have been widely celebrated.
Shen, F, Yang, Y, Liu, L, Liu, W, Tao, D & Shen, HT 2017, 'Asymmetric Binary Coding for Image Search', IEEE Transactions on Multimedia, vol. 19, no. 9, pp. 2022-2032.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Learning to hash has attracted broad research interests in recent computer vision and machine learning studies, due to its ability to accomplish efficient approximate nearest neighbor search. However, the closely related task, maximum inner product search (MIPS), has rarely been studied in this literature. To facilitate the MIPS study, in this paper, we introduce a general binary coding framework based on asymmetric hash functions, named asymmetric inner-product binary coding (AIBC). In particular, AIBC learns two different hash functions, which can reveal the inner products between original data vectors by the generated binary vectors. Although conceptually simple, the associated optimization is very challenging due to the highly nonsmooth nature of the objective that involves sign functions. We tackle the nonsmooth optimization in an alternating manner, by which each single coding function is optimized in an efficient discrete manner. We also simplify the objective by discarding the quadratic regularization term which significantly boosts the learning efficiency. Both problems are optimized in an effective discrete way without continuous relaxations, which produces high-quality hash codes. In addition, we extend the AIBC approach to the supervised hashing scenario, where the inner products of learned binary codes are forced to fit the supervised similarities. Extensive experiments on several benchmark image retrieval databases validate the superiority of the AIBC approaches over many recently proposed hashing algorithms.
Shen, S, Ma, H, Fan, E, Hu, K, Yu, S, Liu, J & Cao, Q 2017, 'A non-cooperative non-zero-sum game-based dependability assessment of heterogeneous WSNs with malware diffusion', Journal of Network and Computer Applications, vol. 91, pp. 26-35.
View/Download from: Publisher's site
View description>>
We consider Heterogeneous Wireless Sensor Networks (HWSNs) with malware diffusion and find a solution to assess their dependability in order to guarantee dependable operations on sending sensed data from sensor nodes (SNs) to a sink node. To this end, we regard an infection as a state transition of a Markov chain and propose a heterogeneous discrete-time Susceptible-Infected-Susceptible (SIS) model to disclose the diffusion process by combining the SNs’ heterogeneity with the malware's spread probability, which is foreseen by a developed non-cooperative non-zero-sum game. We further present reliability and availability measures for a susceptible SN from the perspective of reliability theory, from which we deduce and obtain metrics of reliability and availability assessment of HWSNs with star and cluster topologies. We therefore set up a dependability assessment mechanism for HWSNs with malware diffusion. Experiments illustrate the influence of the parameters on malware's selection of the optimal spread probability and the mean time to infection of a susceptible SN. We also validate the effectiveness of the proposed mechanism. Our results can be applied to set up theoretical bases for governing the employment of reliable techniques.
Shu, C-C, Dong, D & Yuan, K-J 2017, 'Single-laser-induced quantum interference in photofragmentation reaction of D2+', Molecular Physics, vol. 115, no. 15-16, pp. 1908-1915.
View/Download from: Publisher's site
Shu, C-C, Yuan, K-J, Dong, D, Petersen, IR & Bandrauk, AD 2017, 'Identifying Strong-Field Effects in Indirect Photofragmentation Reactions', The Journal of Physical Chemistry Letters, vol. 8, no. 1, pp. 1-6.
View/Download from: Publisher's site
Sun, G, Cui, T, Beydoun, G, Chen, S, Dong, F, Xu, D & Shen, J 2017, 'Towards Massive Data and Sparse Data in Adaptive Micro Open Educational Resource Recommendation: A Study on Semantic Knowledge Base Construction and Cold Start Problem', Sustainability, vol. 9, no. 6, pp. 898-898.
View/Download from: Publisher's site
View description>>
Micro learning through open educational resources (OERs) is becoming increasingly popular. However, adaptive micro learning support from current OER platforms remains inadequate. To address this, our smart system, Micro Learning as a Service (MLaaS), aims to deliver personalized OERs with micro learning to satisfy learners' real-time needs. In this paper, we focus on constructing a knowledge base to support the decision-making process of MLaaS. MLaaS is built using a top-down approach. A conceptual graph-based ontology construction is first developed. An educational data mining and learning analytics strategy is then proposed for the data level. Learning resource adaptation still requires learners' historical information. To compensate for the initial absence of this information (the 'cold start' problem), we set up a predictive ontology-based mechanism. As the first resource is delivered at the beginning of a learner's learning journey, the micro OER recommendation is also optimized using a tailored heuristic.
Tian, F, Liu, B, Sun, X, Zhang, X, Cao, G & Gui, L 2017, 'Movement-Based Incentive for Crowdsourcing', IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7223-7233.
View/Download from: Publisher's site
Tsai, Z-R, Chang, Y-Z, Zhang, H-W & Lin, C-T 2017, 'Relax the chaos-model-based human behavior by electrical stimulation therapy design', Computers in Human Behavior, vol. 67, pp. 151-160.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd The brain's electrical activity is chaotic and unpredictable yet has a hidden order that is attracted to a certain region. There are numerous fractal strange attractors in the brain that change as thinking processes vary. Further, these thinking processes change human behaviors, especially in schizophrenia or internet addiction. The proposed chaos modelling and control theory may offer useful and relevant information for electrical stimulation therapy design, changing these thinking processes by stimulating the brain's electrical activities. Experimental results on bodily relaxation from many electrotherapy clinics suggest that patients with mental disorders can relax the thinking chaos in the mind, replacing the chaotic behaviors arising from the brain's electrical activities. This paper attempts to explain the above claim from the perspective of electrotherapy and control theory, suggesting the control signal of electrotherapy based on an assumed chaos model of the patient, with its control signals designed according to multiple stabilization solutions. In the future, the electrical stimulation therapy will be tested in the Raphael Humanistic Clinic or other electrotherapy clinics.
Tsai, Z-R, Zhang, H-W & Lin, C-T 2017, 'A study on an energy consumption model correlated to abnormal behavior by contactless method', Computers in Human Behavior, vol. 74, pp. 53-62.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Anxiety disorders, major depressive illness, substance use disorder, false beliefs, confused thinking, reduced social engagement, reduced emotional expression, a lack of motivation, and refusal to accept wearable medical devices all occur in schizophrenia patients. A methodological approach that builds a behavior model from wearable medical devices therefore suffers when a schizophrenia patient refuses to wear them. Hence, a novel real-time and robust application correlating stereo vision with abnormal behavior in schizophrenia is proposed in this paper. A robust image process is key to further exploring the behavior of a schizophrenia patient through contactless surveillance, and to predicting the patient's abnormal signs from any viewpoint. Such an abnormal sign in energy consumption may be caused by an inappropriate prescription or other medical negligence. An indicator for this abnormal sign in a single specific patient should be designed by comparison with that patient's past normal records. This study aims to provide a predictive diagnosis of the patient, obtained via this indicator, to inform hospital workers so that preventive medical treatment can be given. It enhances secure healthcare, and will be tested at the Chang Bing Show Chwan Memorial Hospital, as the first author has completed training and submitted proof to the Institutional Review Board (IRB) in Taiwan.
Valenzuela Fernández, L, Merigó, JM & Nicolas, C 2017, 'Universidades influyentes en investigación sobre orientación al mercado. Una visión general entre 1990 y 2014', Estudios Gerenciales, vol. 33, no. 144, pp. 221-227.
View/Download from: Publisher's site
View description>>
The objective of this study is to identify the most productive and influential universities in the scientific community on the topic of market orientation. This is done mainly through bibliometric indicators, such as the h-index, and the ratio of total citations to total articles for the period 1990-2014, based on information found in the Web of Science. Among the findings, the interest of the scientific community in this topic stands out, reflected in the considerable increase in contributions generated over the last 25 years. In addition, a ranking of the 30 most influential universities is determined, together with a ranking relating the universities and journals with the greatest influence on market orientation topics.
Valenzuela, LM, Merigó, JM, Johnston, WJ, Nicolas, C & Jaramillo, JF 2017, 'Thirty years of the Journal of Business & Industrial Marketing: a bibliometric analysis', Journal of Business & Industrial Marketing, vol. 32, no. 1, pp. 1-17.
View/Download from: Publisher's site
View description>>
Purpose: The aim of this study is to reveal the contribution that the Journal of Business & Industrial Marketing has made to scientific research, and its most influential thematic work in B-to-B, from its beginning in 1986 until 2015, in commemoration of its 30th anniversary. Design/methodology/approach: The paper begins with a qualitative introduction: the emergence of the journal, its origins, editorial line and positioning. Subsequently, bibliometric methodologies are used to develop a quantitative analysis. The distribution of annual publications is analyzed, along with the most cited papers, the most frequently used keywords, and the most influential authors, universities and countries by number of publications. Findings: The predominant role of the USA at all levels is highlighted, as is the presence (given their size and population) of the countries of Northern Europe. The evolution of the number of publications, which is always increasing, demonstrates the growing and sustained interest in these types of articles, with certain periods of retreat (often coinciding with economic crises). Research limitations/implications: The Scopus database gives one unit to each author, university or country involved in a paper, without distinguishing whether there were one or more authors in the study. Therefore, this may introduce some deviations in the analysis. However, the study considers some figures with fractional counting to partially address these limitations.
Vieira, GC, Chockalingam, S, Melegh, Z, Greenhough, A, Malik, S, Szemes, M, Park, JH, Kaidi, A, Zhou, L, Catchpoole, D, Morgan, R, Bates, DO, Gabb, PJ & Malik, K 2017, 'Correction: LGR5 regulates pro-survival MEK/ERK and proliferative Wnt/β-catenin signalling in neuroblastoma', Oncotarget, vol. 8, no. 19, pp. 32381-32381.
View/Download from: Publisher's site
View description>>
Present: The originally supplied Figure 5 contains duplicate total-ERK panels. Correct: The proper Figure 5 appears below. The authors sincerely apologize for this error.
Walsh, L, Bluff, A & Johnston, A 2017, 'Water, image, gesture and sound: composing and performing an interactive audiovisual work', Digital Creativity, vol. 28, no. 3, pp. 177-195.
View/Download from: Publisher's site
View description>>
© 2017 Informa UK Limited, trading as Taylor & Francis Group. Performing and composing for an interactive audiovisual system presents many challenges to the performer. Working with visual, sonic and gestural components requires new skills and new ways of thinking about performance. However, there are few studies that focus on performer experience with interactive systems. We present the work Blue Space for oboe and interactive audiovisual system, highlighting the evolving process of the collaborative development of the work. We consider how musical and technical demands interact in this process, and outline the challenges of performing with interactive systems. Using the development of Blue Space as a self-reflective case study, we examine the role of gestures in interactive audiovisual works and identify new modes of performance.
Wang, J, Merigó, JM & Jin, L 2017, 'S-H OWA Operators with Moment Measure', International Journal of Intelligent Systems, vol. 32, no. 1, pp. 51-66.
View/Download from: Publisher's site
View description>>
© 2016 Wiley Periodicals, Inc. Step-like or Hurwicz-like ordered weighted averaging (OWA) (S-H OWA) operators connect two fundamental OWA operators, step OWA operators and Hurwicz OWA operators. S-H OWA operators also generalize them and some other well-known OWA operators such as median and centered OWA operators. Generally, there are two types of determination methods for S-H OWA operators: one is motivated by some existing mathematical results; the other works from a set of "nonstrict" definitions, often via some intermediate elements. For the second type, in this study we define two sets of strict definitions for the Hurwicz/step degree, which are more effective and necessary for theoretical studies and practical usage. Both sets of definitions are useful in different situations. In addition, they are based on the same concept, the moment of OWA operators, proposed in this study, and therefore they become identical in limit forms. However, the Hurwicz/step degree (HD/SD) puts more emphasis on its numerical measure and physical meaning, whereas the relative Hurwicz/step degree (rHD/rSD), while still numerically accurate, is sometimes more intuitively reasonable and has greater potential in further studies and practical applications.
Wang, J, Zhang, X, Guo, Z & Lu, H 2017, 'Developing an early-warning system for air quality prediction and assessment of cities in China', Expert Systems with Applications, vol. 84, pp. 102-116.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Air quality has received continuous attention from both environmental managers and citizens. Accordingly, early-warning systems for air pollution are very useful tools to avoid negative health effects and develop effective prevention programs. However, developing robust early-warning systems is very challenging, as well as necessary. This paper develops a reliable and effective early-warning system that consists of air quality prediction and assessment modules. In the prediction module, a hybrid forecasting method is developed for predicting pollutant concentrations that effectively estimates future air quality conditions. In developing this proposed model, we suggest the use of a back propagation neural network algorithm, combined with a probabilistic parameter model and data preprocessing techniques, to address the uncertainties involved in future air quality prediction. Meanwhile, a pre-analysis is implemented, primarily by using optimized distribution functions to examine and analyze statistical characteristics and emission behaviors of air pollutants. The second method, which is developed as part of the second module, is based on fuzzy set theory and the Analytic Hierarchy Process, and it performs air quality assessments to provide a clear and intelligible description of air quality conditions. Using data from the Ministry of Environmental Protection of China and six stages of air quality classification levels, specifically good, moderate, lightly polluted, moderately polluted, heavily polluted and severely polluted, two cities in China, Chengdu and Hangzhou, are used as illustrative examples to verify the effectiveness of the developed early-warning system. The results demonstrate that the proposed methods are effective and reliable for use by environmental supervisors in air pollution monitoring and management.
Wang, S & Dong, D 2017, 'Fault-Tolerant Control of Linear Quantum Stochastic Systems', IEEE Transactions on Automatic Control, vol. 62, no. 6, pp. 2929-2935.
View/Download from: Publisher's site
Wang, S, Gao, Q & Dong, D 2017, 'Robust H∞ controller design for a class of linear quantum systems with time delay', International Journal of Robust and Nonlinear Control, vol. 27, no. 3, pp. 380-392.
View/Download from: Publisher's site
Wang, X, Liu, Y, Zhang, G, Xiong, F & Lu, J 2017, 'Diffusion-based recommendation with trust relations on tripartite graphs', Journal of Statistical Mechanics: Theory and Experiment, vol. 2017, no. 8, pp. 083405-083405.
View/Download from: Publisher's site
Wang, X, Liu, Y, Zhang, G, Zhang, Y, Chen, H & Lu, J 2017, 'Mixed Similarity Diffusion for Recommendation on Bipartite Networks', IEEE Access, vol. 5, pp. 21029-21038.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In recommender systems, collaborative filtering is an important technology for evaluating user preference by exploiting user feedback data, and has been widely used in industry. Diffusion-based recommendation algorithms, inspired by diffusion phenomena in physical dynamics, are a crucial branch of collaborative filtering, using a bipartite network to represent collection behaviors between users and items. However, diffusion-based recommendation algorithms calculate the similarity between users and make recommendations by considering only implicit feedback, neglecting the benefits of explicit feedback data, which would be a significant feature in recommender systems. This paper proposes a mixed similarity diffusion model to integrate both explicit and implicit feedback. First, cosine similarity between users is calculated from explicit feedback, and we integrate it with the resource-allocation index calculated from implicit feedback. We further improve the performance of the mixed similarity diffusion model by considering the degrees of users and items simultaneously in the diffusion processes. Experiments are designed to evaluate our proposed method on three real-world data sets. Experimental results indicate that recommendations given by the mixed similarity diffusion perform better in both accuracy and diversity than those of most state-of-the-art algorithms.
Wang, Y, Dong, D, Petersen, IR & Zhang, J 2017, 'An Approximate Algorithm for Quantum Hamiltonian Identification with Complexity Analysis', IFAC-PapersOnLine, vol. 50, no. 1, pp. 11744-11748.
View/Download from: Publisher's site
Wen, S, Jiang, J, Liu, B, Xiang, Y & Zhou, W 2017, 'Using epidemic betweenness to measure the influence of users in complex networks', Journal of Network and Computer Applications, vol. 78, pp. 288-299.
View/Download from: Publisher's site
Woodside, AG & Sood, S 2017, 'Vignettes in the two-step arrival of the internet of things and its reshaping of marketing management’s service-dominant logic', Journal of Marketing Management, vol. 33, no. 1-2, pp. 98-110.
View/Download from: Publisher's site
View description>>
© 2016 Westburn Publishers Ltd. This commentary offers vignettes on the introduction of the 'internet of things' (IoT) and its impacts on revising the service-dominant (S-D) logic paradigm in marketing. Except for smartphones, most consumer households are not yet participating in the IoT revolution, but most radical product-service innovations include a 20+ year low-growth start-up period. Because the benefits really are enormous and the technical advances in smart devices are now improving rapidly, expect the IoT revolution to hit hard in all areas of daily life before 2025, similar to the great impacts now occurring in business-to-business applications. This study proposes substantial revisions to the S-D logic in light of the upcoming take-off stage of adopting radically new IoT innovations.
Wu, C, Qi, B, Chen, C & Dong, D 2017, 'Robust Learning Control Design for Quantum Unitary Transformations', IEEE Transactions on Cybernetics, vol. 47, no. 12, pp. 4405-4417.
View/Download from: Publisher's site
Wu, D, Lance, BJ, Lawhern, VJ, Gordon, S, Jung, T-P & Lin, C-T 2017, 'EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 11, pp. 2157-2168.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and demonstrated superior performance. In this paper, for the first time, it is applied to BCI regression problems, an important category of BCI applications. More specifically, we propose a new feature extraction approach for electroencephalogram (EEG)-based BCI regression problems: a spatial filter is first used to increase the signal quality of the EEG trials and also to reduce the dimensionality of the covariance matrices, and then Riemannian tangent space features are extracted. We validate the performance of the proposed approach in reaction time estimation from EEG signals measured in a large-scale sustained-attention psychomotor vigilance task, and show that compared with the traditional powerband features, the tangent space features can reduce the root mean square estimation error by 4.30%-8.30%, and increase the estimation correlation coefficient by 6.59%-11.13%.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 2017, 'Driver Drowsiness Estimation From EEG Signals Using Online Weighted Adaptation Regularization for Regression (OwARR)', IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp. 1522-1535.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. One big challenge that hinders the transition of brain-computer interfaces (BCIs) from laboratory settings to real-life applications is the availability of high-performance and robust learning algorithms that can effectively handle individual differences, i.e., algorithms that can be applied to a new subject with zero or very little subject-specific calibration data. Transfer learning and domain adaptation have been extensively used for this purpose. However, most previous works focused on classification problems. This paper considers an important regression problem in BCI, namely, online driver drowsiness estimation from EEG signals. By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR. Using a simulated driving dataset with 15 subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller estimation errors than several other approaches. We also provide comprehensive analyses on the robustness of OwARR and OwARR-SDS.
Wu, S-L, Liu, Y-T, Hsieh, T-Y, Lin, Y-Y, Chen, C-Y, Chuang, C-H & Lin, C-T 2017, 'Fuzzy Integral With Particle Swarm Optimization for a Motor-Imagery-Based Brain–Computer Interface', IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 21-28.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A brain-computer interface (BCI) system using electroencephalography signals provides a convenient means of communication between the human brain and a computer. Motor imagery (MI), in which motor actions are mentally rehearsed without engaging in actual physical execution, has been widely used as a major BCI approach. One robust algorithm that can successfully cope with the individual differences in MI-related rhythmic patterns is to create diverse ensemble classifiers using the subband common spatial pattern (SBCSP) method. To aggregate outputs of ensemble members, this study uses fuzzy integral with particle swarm optimization (PSO), which can regulate subject-specific parameters for the assignment of optimal confidence levels for classifiers. The proposed system combining SBCSP, fuzzy integral, and PSO exhibits robust performance for offline single-trial classification of MI and real-time control of a robotic arm using MI. This paper represents the first attempt to utilize fuzzy fusion technique to attack the individual differences problem of MI applications in real-world noisy environments. The results of this study demonstrate the practical feasibility of implementing the proposed method for real-world applications.
Wu, T, Dou, W, Ni, Q, Yu, S & Chen, G 2017, 'Mobile Live Video Streaming Optimization via Crowdsourcing Brokerage', IEEE Transactions on Multimedia, vol. 19, no. 10, pp. 2267-2281.
View/Download from: Publisher's site
View description>>
Nowadays, people can enjoy rich, real-time sensing of whatever interests them, anytime and anywhere, by leveraging powerful mobile devices such as smartphones. As a key support for the propagation of these richer live media contents, cellular-based access technologies play a vital role in providing reliable and ubiquitous Internet access to mobile devices. However, wireless channel conditions are limited and fluctuate with weather, building shielding, congestion, and other factors, which dramatically degrades the quality of live video streaming. To address this challenge, we propose to use crowdsourcing brokerage in future networks, which can improve each mobile user's bandwidth condition and reduce the fluctuation of network conditions. Further, to serve mobile users better in this crowdsourcing style, we study the brokerage scheduling problem, which aims to maximize the users' quality-of-experience satisfaction degree cost-effectively. Both offline and online algorithms are proposed to solve this problem. The results of extensive evaluations demonstrate that, by leveraging the crowdsourcing technique, our solution can cost-effectively guarantee a higher-quality viewing experience.
Wu, Y, Liao, L-D, Pan, H-C, He, L, Lin, C-T & Tan, MC 2017, 'Fabrication and interfacial characteristics of surface modified Ag nanoparticle based conductive composites', RSC Advances, vol. 7, no. 47, pp. 29702-29712.
View/Download from: Publisher's site
View description>>
Surface modification of Ag nanoparticles with PAA–PVP complex was conducted and successfully improved the dispersion of Ag nanoparticles in PDMS.
Wu, Z, Lei, L, Li, G, Huang, H, Zheng, C, Chen, E & Xu, G 2017, 'A topic modeling based approach to novel document automatic summarization', Expert Systems with Applications, vol. 84, pp. 12-23.
View/Download from: Publisher's site
Wu, Z, Zhu, H, Li, G, Cui, Z, Huang, H, Li, J, Chen, E & Xu, G 2017, 'An efficient Wikipedia semantic matching approach to text document classification', Information Sciences, vol. 393, pp. 15-28.
View/Download from: Publisher's site
Xiang, C, Petersen, IR & Dong, D 2017, 'Coherent robust H∞ control of linear quantum systems with uncertainties in the Hamiltonian and coupling operators', Automatica, vol. 81, pp. 8-21.
View/Download from: Publisher's site
Xiang, C, Petersen, IR & Dong, D 2017, 'Performance Analysis and Coherent Guaranteed Cost Control for Uncertain Quantum Systems Using Small Gain and Popov Methods', IEEE Transactions on Automatic Control, vol. 62, no. 3, pp. 1524-1529.
View/Download from: Publisher's site
Xu, C, Han, Z, Zhao, G & Yu, S 2017, 'A Sleeping and Offloading Optimization Scheme for Energy-Efficient WLANs', IEEE Communications Letters, vol. 21, no. 4, pp. 877-880.
View/Download from: Publisher's site
View description>>
In this letter, we propose an access point (AP) sleeping and user offloading optimization scheme to improve energy efficiency in densely deployed WLANs. Through real trace analysis, we investigate AP energy efficiency to obtain the sleep-awake threshold, which is used to select sleeping or awake APs according to real-time status information monitored on the controller. Moreover, we formulate the user offloading problem as a reverse auction process to optimize the energy efficiency of the APs involved in offloading. Simulation results demonstrate that, compared to traditional methods, our scheme can achieve up to 20% energy savings while maintaining effective system coverage and throughput.
Xu, C, Jin, W, Zhao, G, Tianfield, H, Yu, S & Qu, Y 2017, 'A Novel Multipath-Transmission Supported Software Defined Wireless Network Architecture', IEEE Access, vol. 5, pp. 2111-2125.
View/Download from: Publisher's site
View description>>
The inflexible management and operation of today's wireless access networks cannot meet increasingly demanding requirements, such as high mobility and throughput, service differentiation, and high-level programmability. In this paper, we put forward a novel multipath-transmission supported software-defined wireless network architecture (MP-SDWN), with the aim of achieving seamless handover, throughput enhancement, and flow-level wireless transmission control as well as programmable interfaces. In particular, this research addresses the following issues: 1) for high mobility and throughput, a multi-connection virtual access point is proposed to enable multiple simultaneous transmission paths over a set of access points for users; and 2) wireless flow transmission rules and programmable interfaces are implemented in the mac80211 subsystem to enable service differentiation and flow-level wireless transmission control. Moreover, the efficiency and flexibility of MP-SDWN are demonstrated in performance evaluations conducted on an 802.11-based testbed, and the experimental results show that, compared to regular WiFi, our proposed MP-SDWN architecture achieves seamless handover and multifold throughput improvement, and supports flow-level wireless transmission control for different applications.
Xu, X, Liu, Z, Wang, Z, Sheng, QZ, Yu, J & Wang, X 2017, 'S-ABC: A paradigm of service domain-oriented artificial bee colony algorithms for service selection and composition', Future Generation Computer Systems, vol. 68, pp. 304-319.
View/Download from: Publisher's site
Xuan, J, Lu, J, Zhang, G & Xu, RYD 2017, 'Cooperative Hierarchical Dirichlet Processes: Superposition vs. Maximization', Artificial Intelligence, vol. 271, pp. 43-73.
View/Download from: Publisher's site
View description>>
The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as text mining (author-paper-word) and multi-label classification (label-instance-feature). Renowned Bayesian approaches for cooperative hierarchical structure modeling are mostly based on topic models. However, these approaches suffer from a serious issue in that the number of hidden topics/factors needs to be fixed in advance, and an inappropriate number may lead to overfitting or underfitting. One elegant way to resolve this issue is Bayesian nonparametric learning, but existing work in this area still cannot be applied to cooperative hierarchical structure modeling. In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP) to fill this gap. Each node in a cooperative hierarchical structure is assigned a Dirichlet process to model its weights on the infinite hidden factors/topics. Together with measure inheritance from the hierarchical Dirichlet process, two kinds of measure cooperation, i.e., superposition and maximization, are defined to capture the many-to-many relationships in the cooperative hierarchical structure. Furthermore, two constructive representations for CHDP, i.e., stick-breaking and the international restaurant process, are designed to facilitate the model inference. Experiments on synthetic and real-world data with cooperative hierarchical structures demonstrate the properties and the ability of CHDP for cooperative hierarchical structure modeling and its potential for practical application scenarios.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2017, 'A Bayesian nonparametric model for multi-label learning', Machine Learning, vol. 106, no. 11, pp. 1787-1815.
View/Download from: Publisher's site
View description>>
© 2017, The Author(s). Multi-label learning has become a significant learning paradigm in the past few years due to its broad application scenarios and the ever-increasing number of techniques developed by researchers in this area. Among existing state-of-the-art works, generative statistical models are characterized by their good generalization ability and robustness on large numbers of labels through learning a low-dimensional label embedding. However, one issue with this branch of models is that the number of dimensions needs to be fixed in advance, which is difficult and inappropriate in many real-world settings. In this paper, we propose a Bayesian nonparametric model to resolve this issue. More specifically, we extend a Gamma-negative binomial process to three levels in order to capture the label-instance-feature structure. Furthermore, a mixing strategy for Gamma processes is designed to account for the multiple labels of an instance. The mixed process also leads to a difficulty in model inference, so an efficient Gibbs sampling inference algorithm is then developed to resolve this difficulty. Experiments on several real-world datasets show the performance of the proposed model on multi-label learning tasks, compared with three state-of-the-art models from the literature.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2017, 'Bayesian Nonparametric Relational Topic Model through Dependent Gamma Processes', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 7, pp. 1357-1369.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Traditional relational topic models provide a successful way to discover the hidden topics from a document network. Many theoretical and practical tasks, such as dimensional reduction, document clustering, and link prediction, could benefit from this revealed knowledge. However, existing relational topic models are based on an assumption that the number of hidden topics is known a priori, which is impractical in many real-world applications. Therefore, in order to relax this assumption, we propose a nonparametric relational topic model using stochastic processes instead of fixed-dimensional probability distributions in this paper. Specifically, each document is assigned a Gamma process, which represents the topic interest of this document. Although this method provides an elegant solution, it brings additional challenges when mathematically modeling the inherent network structure of typical document network, i.e., two spatially closer documents tend to have more similar topics. Furthermore, we require that the topics are shared by all the documents. In order to resolve these challenges, we use a subsampling strategy to assign each document a different Gamma process from the global Gamma process, and the subsampling probabilities of documents are assigned with a Markov Random Field constraint that inherits the document network structure. Through the designed posterior inference algorithm, we can discover the hidden topics and its number simultaneously. Experimental results on both synthetic and real-world network datasets demonstrate the capabilities of learning the hidden topics and, more importantly, the number of topics.
Xuan, J, Luo, X, Lu, J & Zhang, G 2017, 'Explicitly and implicitly exploiting the hierarchical structure for mining website interests on news events', Information Sciences, vol. 420, pp. 263-277.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. After a news event, many different websites publish coverage of that event, each expressing their own unique commentary, perspectives, and viewpoints. Websites form around a specific set of interests to cater to different audiences, and discovering these interests can help audiences, especially people and organizations that are interested in news, select the most appropriate websites to use as their sources of information. This paper presents three methods for formally defining and mining a website's interests, each of which is explicitly or implicitly based on a hierarchical structure: website-webpage-keyword. The first, and most straightforward, method explicitly uses keyword-layer network communities and the mapping relations between websites and keywords. The second method expands upon the first method with an iterative algorithm that combines both the mapping relations and the network relations from the website-webpage-keyword structure to further refine the keyword-layer network communities. In the third method, a website topic model implicitly captures the mapping relations among the websites, webpages, and keywords. The performance of the three proposed methods in website interest mining is compared using a bespoke evaluation metric. The experimental results show that the iterative procedure designed in the second method is able to improve website interest mining performance, and the website topic model in the third method achieves the best performance among the three methods.
Yang, C, Zhu, D, Wang, X, Zhang, Y, Zhang, G & Lu, J 2017, 'Requirement-oriented core technological components’ identification based on SAO analysis', Scientometrics, vol. 112, no. 3, pp. 1229-1248.
View/Download from: Publisher's site
View description>>
© 2017, Akadémiai Kiadó, Budapest, Hungary. Technologies play an important role in the survival and development of enterprises. Understanding and monitoring the core technological components (e.g., technology process, operation method, function) of a technology is an important issue for researchers to develop R&D policy and manage product competitiveness. However, it is difficult to identify core technological components from a mass of terms, and we may experience some difficulties with describing complete technical details and understanding the terms-based results. This paper proposes a Subject-Action-Object (SAO)-based method, in which (1) a syntax-based approach is constructed to extract the SAO structures describing the function, relationship and operation in specified topics; (2) a systematic method is built to extract and screen technological components from SAOs; and (3) we propose a “relevance indicator” to calculate the relevance of the technological components to requirements, and finally identify core technological components based on this indicator. Based on the considerations for requirements and novelty, the core technological components identified have great market potential and can be useful in monitoring and forecasting new technologies. An empirical study of graphene is performed to demonstrate the proposed method. The resulting knowledge may hold interest for R&D management and corporate technology strategies in practice.
Yang, H, Jiang, Z & Lu, H 2017, 'A Hybrid Wind Speed Forecasting System Based on a ‘Decomposition and Ensemble’ Strategy and Fuzzy Time Series', Energies, vol. 10, no. 9, pp. 1422-1422.
View/Download from: Publisher's site
View description>>
Accurate and stable wind speed forecasting is of critical importance in the wind power industry and has measurable influence on power-system management and the stability of market economics. However, most traditional wind speed forecasting models require a large amount of historical data and face restrictions due to assumptions, such as normality postulates. Additionally, any data volatility leads to increased forecasting instability. Therefore, in this paper, a hybrid forecasting system, which combines the ‘decomposition and ensemble’ strategy and fuzzy time series forecasting algorithm, is proposed that comprises two modules—data pre-processing and forecasting. Moreover, the statistical model, artificial neural network, and Support Vector Regression model are employed to compare with the proposed hybrid system, which is proven to be very effective in forecasting wind speed data affected by noise and instability. The results of these comparisons demonstrate that the hybrid forecasting system can improve the forecasting accuracy and stability significantly, and supervised discretization methods outperform the unsupervised methods for fuzzy time series in most cases.
Yao, S-N, Lin, C-T, King, J-T, Liu, Y-C & Liang, C 2017, 'Learning in the visual association of novice and expert designers', Cognitive Systems Research, vol. 43, pp. 76-88.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Designers are adept at determining similarities between previously seen objects and new creations using visual association. However, extant research on the visual association of designers and the differences between expert and novice designers when they engage in the visual association task are scant. Using electroencephalography (EEG), this study attempted to narrow this research gap. Sixteen healthy designers—eight experts and eight novices—were recruited, and asked to perform visual association while EEG signals were acquired, subsequently analysed using independent component analysis. The results indicated that strong connectivity was observed among the prefrontal, frontal, and cingulate cortices, and the default mode network. The experts used both hemispheres and executive functions to support their association tasks, whereas the novices mainly used their right hemisphere and memory retrieval functions. The visual association of experts appeared to be more goal-directed than that of the novices. Accordingly, designing and implementing authentic and goal-directed activities for improving the executive functions of the prefrontal cortex and default mode network are critical for design educators and creativity researchers.
Ying, M, Ying, S & Wu, X 2017, 'Invariants of quantum programs: characterisations and generation', ACM SIGPLAN Notices, vol. 52, no. 1, pp. 818-832.
View/Download from: Publisher's site
View description>>
A program invariant is a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) find an appropriate definition of invariants for quantum programs; and (ii) develop an effective technique of invariant generation for verification and analysis of quantum programs. Interestingly, the notion of invariant can be defined for quantum programs in two different ways -- additive invariants and multiplicative invariants -- corresponding to two interpretations of implication in a continuous valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs. The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms -- quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify correctness of these algorithms and are helpful in optimising quantum Metropolis sampling. To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
Yu, S, Liu, M, Dou, W, Liu, X & Zhou, S 2017, 'Networking for Big Data: A Survey', IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 531-549.
View/Download from: Publisher's site
View description>>
Complementary to the fancy big data applications, networking for big data is an indispensable supporting platform for these applications in practice. This emerging research branch has gained extensive attention from both academia and industry in recent years. In this new territory, researchers are facing many unprecedented theoretical and practical challenges. We are therefore motivated to solicit the latest works in this area, aiming to pave a comprehensive and solid starting ground for interested readers. We first clarify the definition of networking for big data based on the cross-disciplinary nature and integrated needs of the domain. Second, we present the current understanding of big data from different levels, including its formation, networking features, mathematical representations, and the networking technologies. Third, we discuss the challenges and opportunities from various perspectives in this promising field. We further summarize the lessons we learned based on the survey. We humbly hope this paper will shed light for forthcoming researchers to further explore the uncharted part of this promising land.
Yu, S, Muller, P & Zomaya, A 2017, 'Editorial: special issue on “big data security and privacy”', Digital Communications and Networks, vol. 3, no. 4, pp. 211-212.
View/Download from: Publisher's site
Yuan, K-J, Shu, C-C, Dong, D & Bandrauk, AD 2017, 'Attosecond Dynamics of Molecular Electronic Ring Currents', The Journal of Physical Chemistry Letters, vol. 8, no. 10, pp. 2229-2235.
View/Download from: Publisher's site
Yusoff, B, Merigó, JM & Ceballos, D 2017, 'Owa-based aggregation operations in multi-expert mcdm model', Economic Computation and Economic Cybernetics Studies and Research, vol. 51, no. 2, pp. 211-230.
View description>>
This paper presents an analysis of a multi-expert multi-criteria decision making (ME-MCDM) model based on ordered weighted averaging (OWA) operators. Two methods of modeling the majority opinion, both based on induced OWA operators, are studied for aggregating the experts' judgments. Then, an overview of OWA with the inclusion of different degrees of importance is provided for aggregating the criteria. An alternative OWA operator with a new weighting method is proposed, termed the alternative OWAWA (AOWAWA) operator. Some extensions of the ME-MCDM model with respect to two-stage aggregation processes are developed based on the classical and alternative schemes. A comparison of the results of the different decision schemes is then conducted. Moreover, with respect to the alternative scheme, a further comparison is given for different techniques of integrating the degrees of importance. A numerical example on the selection of an investment strategy is used to exemplify the model and for analysis purposes.
Zamani, R, Brown, RBK, Beydoun, G & Tibben, W 2017, 'The architecture of an effective software application for managing enterprise projects', Journal of Modern Project Management, vol. 5, no. 1, pp. 114-122.
View/Download from: Publisher's site
View description>>
This paper presents the architecture of an effective software application for managing enterprise projects. Viewing the execution of an enterprise project as a highly complex system in which many delicate trade-offs among completion time, cost, safety, and quality are required, the architecture is designed around the fact that any action in one part of such a project can strongly affect its other parts. Highlighting the complexity of the system, and the way computational intelligence should be employed in making these trade-offs, forms the basis of the presented architecture. The architecture also reflects the fact that developing a software application for appropriately managing such trade-offs is not a trivial task, and that a robust application for this purpose must incorporate an array of sophisticated optimization techniques. A multi-agent system (MAS), a software application composed of multiple interacting modules, is used as the main component of the architecture. In this multi-agent system, modules interact with the environment online and resolve, on a daily basis, various resource conflicts that are complex and hard to resolve. Based on the proposed architecture, the paper also provides a template software application in which an array of optimization techniques shows how the necessary trade-offs can be made. The template results from the integration of several highly sophisticated recent procedures for single- and multi-mode resource-constrained project scheduling problems.
Zeng, S, Merigó, JM, Palacios-Marqués, D, Jin, H & Gu, F 2017, 'RETRACTED: Intuitionistic fuzzy induced ordered weighted averaging distance operator and its application to decision making', Journal of Intelligent & Fuzzy Systems, vol. 32, no. 1, pp. 11-22.
View/Download from: Publisher's site
View description>>
© 2017 - IOS Press and the authors. In this paper, we develop a new method for intuitionistic fuzzy decision making problems with induced aggregation operators and distance measures. Firstly, we introduce the intuitionistic fuzzy induced ordered weighted averaging distance (IFIOWAD) operator. It is an extension of the ordered weighted averaging (OWA) operator that uses the main characteristics of the induced OWA (IOWA), the distance measures and uncertain information represented by intuitionistic fuzzy numbers. The main advantage of this operator is that it is able to consider complex attitudinal characters of the decision-maker by using order-inducing variables in the aggregation of the distance measures. We further generalize the IFIOWAD by using weighted average. The result is the intuitionistic fuzzy induced ordered weighted averaging weighted average distance (IFIOWAWAD) operator. Finally, a practical example about the selection of investments is provided to illustrate the developed intuitionistic fuzzy aggregation operators.
Zhang, H, Dong, P, Yu, S & Song, J 2017, 'A Scalable and Smart Hierarchical Wireless Communication Architecture Based on Network/User Separation', IEEE Wireless Communications, vol. 24, no. 1, pp. 18-24.
View/Download from: Publisher's site
View description>>
Due to the dramatic development of mobile devices and technologies, wireless networks have become a convenient and popular means of accessing the Internet for the public. However, the existing wireless networking techniques still face several fundamental inherent problems, such as network scalability, flexibility, and interoperability. As a result, future wireless communication networking, such as 5G, is expected to address these problems. In this article, we present a hierarchical identifier network, a novel and practical hierarchical network architecture based on the idea of network/user separation and mapping. We performed extensive experiments in a real-world high-mobility scenario to evaluate the proposed wireless network architecture. The results demonstrate that the hierarchical identifier network achieves better performance on scalability, flexibility, and interoperability compared to its existing counterparts.
Zhang, Q, Wu, D, Lu, J, Liu, F & Zhang, G 2017, 'A cross-domain recommender system with consistent information transfer', Decision Support Systems, vol. 104, pp. 49-63.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Recommender systems provide users with personalized online product and service recommendations and are a ubiquitous part of today's online entertainment smorgasbord. However, many suffer from cold-start problems due to a lack of sufficient preference data, and this is hindering their development. Cross-domain recommender systems have been proposed as one possible solution. These systems transfer knowledge from one domain that has adequate preference information to another domain that does not. The outlook for cross-domain recommendation is promising, but existing methods cannot ensure the knowledge extracted from the source domain is consistent with the target domain, which may impact the accuracy of the recommendations. To address this challenging issue, we propose a cross-domain recommender system with consistent information transfer (CIT). Knowledge consistency is based on user and item latent groups, and domain adaptation techniques are used to map and adjust these groups in both domains to maintain consistency during the transfer learning process. Experiments were conducted on five real-world datasets in three categories: movies, books, and music. The results for nine cross-domain recommendation tasks show that CIT outperforms five benchmarks and increases the accuracy of recommendations in the target domain, especially with sparse data. Practically, our proposed method is applied into a telecom product recommender system and a business partner recommender system (Smart BizSeeker) to enhance personalized decision making for both businesses and individual customers.
Zhang, W, Dong, D, Petersen, IR & Rabitz, HA 2017, 'Robust control of photoassociation of slow O + H collision', Chemical Physics, vol. 483-484, pp. 149-155.
View/Download from: Publisher's site
Zhang, X, Cheng, Z, Lin, R, He, L, Yu, S & Luo, H 2017, 'Local Fast Reroute With Flow Aggregation in Software Defined Networks', IEEE Communications Letters, vol. 21, no. 4, pp. 785-788.
View/Download from: Publisher's site
View description>>
In this letter, we propose a local fast reroute (LFR) algorithm with flow aggregation in software defined networks (SDN). In LFR, if a link failure is detected, all traffic flows affected by the failure are aggregated to be a new 'big' flow. Then, a local reroute path is dynamically deployed by the SDN controller for the aggregated flow. LFR reduces the number of flow operations between the SDN controller and switches. Numerical results show that the LFR enables fast recovery while minimizing the total number of flow entries in SDN.
Zhang, Y, Chen, H, Lu, J & Zhang, G 2017, 'Detecting and predicting the topic change of Knowledge-based Systems: A topic-based bibliometric analysis from 1991 to 2016', Knowledge-Based Systems, vol. 133, pp. 255-268.
View/Download from: Publisher's site
View description>>
© 2017. The journal Knowledge-Based Systems (KnoSys) has been published for over 25 years, during which time its main foci have been extended to a broad range of studies in computer science and artificial intelligence. Answering the questions “What is the KnoSys community interested in?” and “How does such interest change over time?” is important to both the editorial board and the audience of KnoSys. This paper conducts a topic-based bibliometric study to detect and predict the topic changes of KnoSys from 1991 to 2016. A Latent Dirichlet Allocation model is used to profile the hotspots of KnoSys and predict possible future trends from a probabilistic perspective. A model of scientific evolutionary pathways applies a learning-based process to detect the topic changes of KnoSys in sequential time slices. Six main research areas of KnoSys are identified: expert systems, machine learning, data mining, decision making, optimization, and fuzzy systems. The results also indicate that the KnoSys community's interest in computational intelligence has risen, and that the ability to construct practical systems through knowledge use and accurate prediction models is highly emphasized. Such empirical insights can be used as a guide for KnoSys submissions.
Zhang, Y, Qian, Y, Huang, Y, Guo, Y, Zhang, G & Lu, J 2017, 'An entropy-based indicator system for measuring the potential of patents in technological innovation: rejecting moderation', Scientometrics, vol. 111, no. 3, pp. 1925-1946.
View/Download from: Publisher's site
View description>>
© 2017, Akadémiai Kiadó, Budapest, Hungary. How to evaluate the value of a patent in technological innovation quantitatively and systematically is a challenge for bibliometrics. Traditional indicator systems and weighting approaches mostly lead to “moderation” results; that is, top-ranked patents tend to have merely good-looking values on all indicators rather than distinctive performance on certain individual indicators. Focusing on patents granted by the United States Patent and Trademark Office (USPTO), this paper constructs an entropy-based indicator system to measure their potential in technological innovation. Shannon's entropy is introduced to weight indicators quantitatively, and a collaborative filtering technique is used to iteratively remove negative patents; what remains is a small set of positive patents with potential in technological innovation. A case study of 28,509 USPTO-granted patents with Chinese assignees, covering the period from 1976 to 2014, demonstrates the feasibility and reliability of this method.
Zhang, Y, Zhang, G, Zhu, D & Lu, J 2017, 'Scientific evolutionary pathways: Identifying and visualizing relationships for scientific topics', Journal of the Association for Information Science and Technology, vol. 68, no. 8, pp. 1925-1939.
View/Download from: Publisher's site
View description>>
Whereas traditional science maps emphasize citation statistics and static relationships, this paper presents a term-based method to identify and visualize the evolutionary pathways of scientific topics across a series of time slices. First, we create a data preprocessing model for accurate term cleaning, consolidation, and clustering. Then we construct a simulated data streaming function and introduce a learning process to train a relationship identification function that adapts to changing environments in real time, identifying relationships of topic evolution, fusion, death, and novelty. The main result of the method is a map of scientific evolutionary pathways. The visual routines indicate interactions among scientific subjects, and a version rendered over a series of time slices illustrates such evolutionary pathways in further detail. The detailed outline offers sufficient statistical information to delve into scientific topics and routines, and helps derive meaningful insights with the assistance of expert knowledge. An empirical study of scientific proposals granted by the United States National Science Foundation demonstrates the method's feasibility and reliability. Our method can be widely applied across science, technology, and innovation policy research, offering insight into the evolutionary pathways of scientific activities.
Zhou, X, Wen, Y, Goodale, UM, Zuo, H, Zhu, H, Li, X, You, Y, Yan, L, Su, Y & Huang, X 2017, 'Optimal rotation length for carbon sequestration in Eucalyptus plantations in subtropical China', New Forests, vol. 48, no. 5, pp. 609-627.
View/Download from: Publisher's site
Zhu, Z, Liu, X, Wang, Y, Lu, W, Gong, L, Yu, S & Ansari, N 2017, 'Impairment- and Splitting-Aware Cloud-Ready Multicast Provisioning in Elastic Optical Networks', IEEE/ACM Transactions on Networking, vol. 25, no. 2, pp. 1220-1234.
View/Download from: Publisher's site
View description>>
It is known that multicast provisioning is important for supporting cloud-based applications, and as traffic from these applications increases quickly, we may rely on optical networks to realize high-throughput multicast. Meanwhile, flexible-grid elastic optical networks (EONs) achieve agile access to the massive bandwidth in optical fibers, and hence can provision variable bandwidths to adapt to the dynamic demands of cloud-based applications. In this paper, we consider all-optical multicast in EONs in a practical manner and focus on designing impairment- and splitting-aware multicast provisioning schemes. We first study the procedure of adaptive modulation selection for a light-tree, and point out that the multicast scheme in EONs is fundamentally different from that in fixed-grid wavelength-division multiplexing networks. Then, we formulate the problem of impairment- and splitting-aware routing, modulation and spectrum assignment (ISa-RMSA) for all-optical multicast in EONs and analyze its hardness. Next, we analyze the advantages brought by the flexibility of routing structures and discuss ISa-RMSA schemes based on light-trees and light-forests. This paper suggests that for ISa-RMSA, the light-forest-based approach can use less bandwidth than the light-tree-based one while still satisfying the quality-of-transmission requirement. Therefore, we establish the minimum light-forest problem for optimizing a light-forest in ISa-RMSA. Finally, we design several time-efficient ISa-RMSA algorithms, and prove that one of them solves the minimum light-forest problem with a fixed approximation ratio.
Zuo, H, Zhang, G, Pedrycz, W, Behbood, V & Lu, J 2017, 'Fuzzy Regression Transfer Learning in Takagi–Sugeno Fuzzy Models', IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp. 1795-1807.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Data science is a research field concerned with processes and systems that extract knowledge from massive amounts of data. In some situations, however, data shortage renders existing data-driven methods difficult or even impossible to apply. Transfer learning has recently emerged as a way of exploiting previously acquired knowledge to solve new yet similar problems much more quickly and effectively. In contrast to classical data-driven machine learning methods, transfer learning methods exploit the knowledge accumulated from data in auxiliary domains to facilitate predictive modeling in the current domain. A significant number of transfer learning methods that address classification tasks have been proposed, but studies on transfer learning in the case of regression problems are still scarce. This study focuses on using transfer learning techniques to handle regression problems in a domain that has insufficient training data. We propose an original fuzzy regression transfer learning method, based on fuzzy rules, to address the problem of estimating the value of the target for regression. A Takagi-Sugeno fuzzy regression model is developed to transfer knowledge from a source domain to a target domain. Experimental results using synthetic data and real-world datasets demonstrate that the proposed fuzzy regression transfer learning method significantly improves the performance of existing models when tackling regression problems in the target domain.
Abedin, B, Erfani, S & Blount, Y 2017, 'Social media adoption framework for aged care service providers in Australia', 2017 International Conference on Research and Innovation in Information Systems (ICRIIS), 2017 5th International Conference on Research and Innovation in Information Systems (ICRIIS), IEEE, Langkawi, Malaysia, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The aged care sector has been a late adopter of social media platforms for communicating, collaborating, marketing and creating brand awareness. There is little research that examines the adoption of social media by aged care service providers for these purposes. This paper reviews the status of social media adoption in the Australian aged care industry, to understand in what ways social media can serve older people's needs, and to develop recommendations for aged care service providers to adopt social media applications that empower older people. Through a review of the literature and interviews with Australian experts, this paper suggests aged care providers use a three-phase framework when adopting social media. The first phase is to adopt a popular public social media platform such as Facebook, followed by Instagram and Twitter. The second phase supports interaction by encouraging posts and feedback through locally hosted member forums. The third phase is the adoption of specialised social applications for closed groups and specific functions. The paper concludes with a discussion of the implications of the framework and proposes directions for future research.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Impact of struck-out text on writer identification', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1465-1471.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The presence of struck-out text in handwritten manuscripts may affect the accuracy of automated writer identification. This paper presents a study on such effects of struck-out text. Here we consider offline English and Bengali handwritten document images. At first, the struck-out texts are detected using a hybrid classifier of a CNN (Convolutional Neural Network) and an SVM (Support Vector Machine). Then the writer identification process is activated on normal and struck-out text separately, to ascertain the impact of struck-out texts. For writer identification, we use two methods: (a) a hand-crafted feature-based SVM classifier, and (b) CNN-extracted auto-derived features with a recurrent neural model. For the experimental analysis, we have generated a database from 100 English and 100 Bengali writers. The performance of our system is very encouraging.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Legibility and Aesthetic Analysis of Handwriting', 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, Kyoto, Japan, pp. 175-182.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper deals with computer-based cognitive analysis of the legibility and aesthetics of a handwritten document. Legible text creates the human perception that the writing can be read effortlessly because of its orthographic clarity. The aesthetic property relates to the beautiful appearance of a handwritten document. In this study, we examine these properties in offline Bengali handwriting. We formulate both the legibility and aesthetic analysis tasks as machine learning problems supervised by the human cognitive system. We employ recurrent neural networks based on automatically derived features to investigate writing legibility. For aesthetics evaluation, we employ hand-crafted feature-based support vector machines (SVMs). We have collected contemporary Bengali handwriting samples, for which subjective legibility and aesthetic scores were provided by human readers. We conducted our experiments on this corpus containing legibility and aesthetic ground-truth information. The experimental results obtained on various handwritings are encouraging.
Ahadi, A, Lister, R, Lal, S, Leinonen, J & Hellas, A 2017, 'Performance and Consistency in Learning to Program', Proceedings of the Nineteenth Australasian Computing Education Conference, ACE '17: Nineteenth Australasian Computing Education Conference, ACM, Geelong, pp. 11-16.
View/Download from: Publisher's site
View description>>
Performance and consistency play a large role in learning. Decreasing the effort that one invests into course work may have short-term benefits such as reduced stress. However, as courses progress, neglected work accumulates and may cause challenges with learning the course content at hand.
In this work, we analyze students' performance and consistency with programming assignments in an introductory programming course. We study how performance, when measured through progress in course assignments, evolves throughout the course, study weekly fluctuations in students' work consistency, and contrast this with students' performance in the course final exam.
Our results indicate that whilst fluctuations in students' weekly performance do not distinguish poorly performing students from well performing students with high accuracy, more accurate results can be achieved by focusing on students' performance on individual assignments, which could be used to identify struggling students who are at risk of dropping out of their studies.
Ahmed, S, Quinn, J, Catherwood, M, Thornton, P, Bergin, S, Kennedy, P, Elsir, S, Hennessy, B & Murphy, P 2017, 'Unmutated IGHV and Double Negative CD38/CD49d Predict Good Prognosis and Long Treatment Free Survival (TFS) in Chronic Lymphocytic Leukaemia; Regardless of Genetic Mutations', BLOOD, 59th Annual Meeting of the American-Society-of-Hematology (ASH), AMER SOC HEMATOLOGY, GA, Atlanta.
Al-Doghman, F, Chaczko, Z & Jiang, J 2017, 'A Review of Aggregation Algorithms for the Internet of Things', 2017 25th International Conference on Systems Engineering (ICSEng), 2017 25th International Conference on Systems Engineering (ICSEng), IEEE, Piscataway, USA, pp. 480-487.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The Internet of Things (IoT) epitomizes the imminent transition in the world's economy and human lifestyle, in which people and various objects are connected within networks. Data aggregation is a technique that can be used to mitigate Big Data challenges within the IoT. This paper provides an overview of various approaches for aggregating data in IoT infrastructure. A new class of reliable data aggregation algorithm is discussed as well. This class of algorithm uses consensus-based aggregation with a fault-tolerance methodology in Fog Computing. The new approach promotes adaptive behavior and more efficient delivery of the aggregation outcomes to the ascendant node(s). The proposed method is fault tolerant and addresses node reliability issues.
Alexander-Floyd, JJ, Entezari, A, Ying, M, Haroon, S & Gidalevitz, T 2017, 'Natural genetic variation modifies polyglutamine aggregation via an imbalance in autophagy.', MOLECULAR BIOLOGY OF THE CELL, Annual Joint Meeting of the American-Society-for-Cell-Biology and the European-Molecular-Biology-Organization (ASCB/EMBO), AMER SOC CELL BIOLOGY, PA, Philadelphia.
Alghamdi, A, Hussain, W, Alharthi, A & Almusheqah, AB 2017, 'The Need of an Optimal QoS Repository and Assessment Framework in Forming a Trusted Relationship in Cloud: A Systematic Review', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 301-306.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Owing to its cost-effectiveness and scalability, demand for cloud services is increasing every day. Quality of Service (QoS) is one of the crucial factors in forming a viable Service Level Agreement (SLA) between a consumer and a provider, enabling them to establish and maintain a trusted relationship with each other. An SLA identifies and describes the service requirements of the user and the level of service promised by the provider. The abundance of available service solutions makes it difficult for cloud users to select the right service provider, both in terms of price and the degree of promised services. On the other end, a service provider needs a centralized and reliable QoS repository and assessment framework to help it offer an optimal amount of marginal resources to the requesting consumer. Although a number of existing studies assist the interacting parties in achieving their desired goals in some way, many gaps remain to be filled in establishing and maintaining a trusted relationship between them. In this paper we identify the gaps that must be addressed to build a trusted relationship between a service provider and a service consumer. The aim of this research is to present an overview of the existing literature and compare it against different criteria, such as QoS integration, QoS repositories, QoS filtering, trusted relationships and SLAs.
Al-Jubouri, B & Gabrys, B 2017, 'Diversity and Locality in Multi-Component, Multi-Layer Predictive Systems: A Mutual Information Based Approach', ADVANCED DATA MINING AND APPLICATIONS, ADMA 2017, International Conference on Advanced Data Mining and Applications (ADMA), Springer International Publishing, Singapore, SINGAPORE, pp. 313-325.
View/Download from: Publisher's site
Alkalbani, AM, Gadhvi, L, Patel, B, Hussain, FK, Ghamry, AM & Hussain, OK 2017, 'Analysing Cloud Services Reviews Using Opining Mining', 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), IEEE, Tamkang Univ, Taipei, TAIWAN, pp. 1124-1129.
View/Download from: Publisher's site
Anaissi, A, Khoa, NLD, Mustapha, S, Alamdari, MM, Braytee, A, Wang, Y & Chen, F 2017, 'Adaptive One-Class Support Vector Machine for Damage Detection in Structural Health Monitoring', Advances in Knowledge Discovery and Data Mining (LNAI), Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Springer International Publishing, Jeju, South Korea, pp. 42-57.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Machine learning algorithms have been employed extensively in the area of structural health monitoring to compare new measurements with baselines to detect any structural change. The one-class support vector machine (OCSVM) with a Gaussian kernel function is a promising machine learning method that can learn from data of only one class and then classify any new query samples. However, the generalization performance of OCSVM is profoundly influenced by its Gaussian model parameter σ. This paper proposes a new algorithm, named Appropriate Distance to the Enclosing Surface (ADES), for tuning the Gaussian model parameter. The central idea of this algorithm is to inspect the spatial locations of the edge and interior samples, and their distances to the enclosing surface of the OCSVM. The algorithm selects the optimal value of σ that generates a hyperplane maximally distant from the interior samples but close to the edge samples. The sets of interior and edge samples are identified using a hard-margin linear support vector machine. The algorithm was successfully validated using sensing data collected from the Sydney Harbour Bridge, in addition to five public datasets. The ADES algorithm is an appropriate choice for identifying the optimal value of σ for OCSVM, especially on high-dimensional datasets.
Asadabadi, MR, Saberi, M & Chang, E 2017, 'A fuzzy game based framework to address ambiguities in performance based contracting', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, GERMANY, pp. 1214-1217.
View/Download from: Publisher's site
Asadabadi, MR, Saberi, M & Chang, E 2017, 'Logistic informatics modelling using concept of stratification (CST)', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, ITALY, pp. 1-7.
View/Download from: Publisher's site
Azadeh, A, Pourreza, P, Saberi, M, Hussain, OK & Chang, E 2017, 'An integrated fuzzy cognitive map-Bayesian network model for improving HSEE in energy sector', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, ITALY, pp. 1-7.
View/Download from: Publisher's site
Balaji, S, Patil, M & McGregor, C 2017, 'A Cloud Based Big Data Based Online Health Analytics for Rural NICUs and PICUs in India: Opportunities and Challenges', 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Aristotle Univ Thessaloniki, Thessaloniki, GREECE, pp. 385-390.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. High-frequency physiological data has great potential to provide new insights into many conditions patients can develop in critical care when utilized by Big Data analytics-based clinical decision support systems such as Artemis. Artemis was deployed in the NICU at SickKids Hospital in Toronto in August 2009, and it exploits the full potential of big data. Both the original data and the newly generated analytics are stored in the data persistence component of Artemis, while real-time analytics is performed in the online analytics component. The knowledge extraction component of the system performs data mining, which supports clinical research into various conditions. Artemis has to date been utilized in three different implementations. However, using Artemis still holds many challenges for lower-resource settings. This research examines the challenges and opportunities of using the Artemis cloud as a cloud-computing-based Health-Analytics-as-a-Service approach for the provision of remote real-time patient monitoring in low-resource settings. We present case study research to demonstrate the implications, opportunities and challenges of utilizing Artemis in a low-resource setting for small and remote pediatric critical care units, namely NICUs/PICUs in India. Harnessing big data within pediatric intensive care units has great potential to improve healthcare in these low-resource settings.
Bano, M & Zowghi, D 2017, 'Crowd Vigilante - Detecting Sabotage in Crowdsourcing.', APRES, Springer, pp. 114-120.
View/Download from: Publisher's site
View description>>
© Springer Nature Singapore Pte Ltd. 2018. Crowdsourcing is a complex, sociotechnical problem-solving approach in which a geographically distributed volunteer crowd collaborates to contribute to the achievement of a common task. One of the major issues faced by crowdsourced projects is the trustworthiness of the crowd. This paper presents a vision for developing a framework, with supporting methods and tools, for early detection of malicious acts of sabotage in crowdsourced projects by utilizing and scaling digital forensic techniques. The idea is to utilize the crowd to build digital evidence of sabotage through systematic collection and analysis of data from the same crowdsourced project where the threat is situated. The proposed framework aims to improve the security of crowdsourced projects and their outcomes by building confidence in the trustworthiness of the workers.
Bano, M, Zowghi, D & Kearney, M 2017, 'Feature Based Sentiment Analysis for Evaluating the Mobile Pedagogical Affordances of Apps.', WCCE, IFIP TC 3 World Conference on Computers in Education, Springer, Dublin, Ireland, pp. 281-291.
View/Download from: Publisher's site
View description>>
© 2017, IFIP International Federation for Information Processing. The launch of millions of apps has made it challenging for teachers to select the most suitable educational app to support students’ learning. Several evaluation frameworks have been proposed in the research literature to assist teachers in selecting the right apps for their needs. This paper presents preliminary results of an innovative technique for evaluating educational mobile apps by analysing the feedback of past app users through the lens of a mobile pedagogical perspective. We have utilized a sentiment analysis tool to assess the opinions of the app users through the lens of the criteria offered by a rigorous mobile learning pedagogical framework highlighting the learners’ experience of Personalization, Authenticity and Collaboration (iPAC). The investigation has provided initial confirmation of the powerful utility of the feature based sentiment analysis technique for evaluating the mobile pedagogical affordances of learning apps.
Bashir, MR & Gill, AQ 2017, 'IoT enabled smart buildings: A systematic review', 2017 Intelligent Systems Conference (IntelliSys), 2017 Intelligent Systems Conference (IntelliSys), IEEE, London, UK, pp. 151-159.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. There is increasing interest in Internet of Things (IoT) enabled smart buildings. The main question is: what are the key challenges that must be addressed to effectively manage and analyze the big data of IoT enabled smart buildings? A systematic literature review is needed to understand these challenges and the solutions for overcoming them. Using the SLR approach, 22 relevant studies were identified and reviewed in this paper. Data from these selected studies were extracted to identify the challenges and relevant solutions. The findings of this paper will serve as a knowledge base for researchers and practitioners conducting further research and development in this important area.
Bei, X, Qiao, Y & Zhang, S 2017, 'Networked Fairness in Cake Cutting', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, pp. 3632-3638.
View/Download from: Publisher's site
View description>>
We introduce a graphical framework for fair division in cake cutting, where comparisons between agents are limited by an underlying network structure. We generalize the classical fairness notions of envy-freeness and proportionality to this graphical setting. An allocation is called envy-free on a graph if no agent envies any of her neighbors' shares, and is called proportional on a graph if every agent values her own share no less than the average among her neighbors, with respect to her own measure. These generalizations enable new research directions in developing simple and efficient algorithms that can produce fair allocations under specific graph structures. On the algorithmic frontier, we first propose a moving-knife algorithm that outputs an envy-free allocation on trees. The algorithm is significantly simpler than the discrete and bounded envy-free algorithm introduced in [Aziz and Mackenzie, 2016] for complete graphs. Next, we give a discrete and bounded algorithm for computing a proportional allocation on the transitive closure of trees, a class of graphs obtained by taking a rooted tree and connecting all its ancestor-descendant pairs.
Bell, J & Leong, TW 2017, 'Collaborative futures', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, Australia, pp. 397-401.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. This paper presents insights into Younger Onset Dementia (YOD), offering clear differentiation of the circumstances, needs and challenges of people with YOD from those of people with late onset dementia. We point to opportunities for the potential role of digital technology in improving the experiences of people living with YOD. This is important because, while HCI has long engaged with dementia, these efforts have been predominantly focused on designing technologies for elderly people experiencing dementia. In particular, this paper highlights concerns raised by people with YOD which have significant impact for HCI researchers when engaging people with YOD in research and in technology design. As such, this paper argues for a broadening of HCI research to include YOD and for rethinking current research and design methods in 'dementia and technology' settings.
Belovs, A, Ivanyos, G, Qiao, Y, Santha, M & Yang, S 2017, 'On the polynomial parity argument complexity of the combinatorial nullstellensatz', Leibniz International Proceedings in Informatics, LIPIcs.
View/Download from: Publisher's site
View description>>
The complexity class PPA consists of NP-search problems that are reducible to the parity principle in undirected graphs. It contains a wide variety of interesting problems from graph theory, combinatorics, algebra and number theory, but only a few of these are known to be complete for the class. Before this work, the known complete problems were all discretizations or combinatorial analogues of topological fixed point theorems. Here we prove the PPA-completeness of two problems of radically different style. They are PPA-Circuit CNSS and PPA-Circuit Chevalley, related respectively to the Combinatorial Nullstellensatz and to the Chevalley-Warning Theorem over the two-element field F2. The inputs to these problems contain PPA-circuits, which are arithmetic circuits with special symmetric properties ensuring that the polynomials they compute always have an even number of zeros. In the proof of the result we relate the multilinear degree of the polynomials to the parity of the maximal parse subcircuits that compute monomials with maximal multilinear degree, and we show that the maximal parse subcircuits of a PPA-circuit can be paired in polynomial time.
Berry, DM, Cleland-Huang, J, Ferrari, A, Maalej, W, Mylopoulos, J & Zowghi, D 2017, 'Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and Beyond', 2017 IEEE 25th International Requirements Engineering Conference (RE), 2017 IEEE 25th International Requirements Engineering Conference (RE), IEEE, Lisbon, Portugal, pp. 570-573.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Context and Motivation Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools. Problem Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used. Principal Ideas This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts. Contribution The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool’s task.
Binsawad, M, Sohaib, O & Hawryszkiewycz, IT 2017, 'Knowledge-Sharing in Technology Business Incubator.', ISD, International Conference on Information Systems Development, Association for Information Systems, Cyprus, pp. 1-12.
View description>>
Given the economic growth challenges facing countries all around the world, the importance of technology business incubator initiatives in developing countries' economies has been recognized. Technology business incubators are involved in many of the processes that support economic growth, such as job creation and the development of innovative technologies. This paper examines how knowledge-sharing aspects impact technology business incubator performance in Saudi Arabia. The findings identify key factors affecting the knowledge-sharing process and, in turn, technology incubator performance.
Bluff, A & Johnston, A 2017, 'Storytelling with Interactive Physical Theatre', Proceedings of the 4th International Conference on Movement Computing, MOCO '17: 4th International Conference on Movement Computing, ACM, London, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 ACM. This paper examines the way movement-based interactive visuals were used as a storytelling device in the physical theatre production of Creature: Dot and the Kangaroo. A number of performers and artists involved in the production were interviewed, and their perceptions of the interactive technology have been contrasted with a similar study of abstract dance. The animated backgrounds and interactive animal graphics projected onto the stage were found to reduce the density of the script by conveying the location of the action and the spirit of the characters, reducing the need for these to be spoken. Peak moments of the show were identified by the interviewees, and a scene analysis revealed that the most successful scenes featured more integrated storytelling, in which the interaction between performers and the digital projections conveyed a key narrative message.
Braytee, A, Liu, W & Kennedy, PJ 2017, 'Supervised context-aware non-negative matrix factorization to handle high-dimensional high-correlated imbalanced biomedical data', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 4512-4519.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Traditional feature selection techniques are used to identify a subset of the most useful features, and consider the rest unimportant, redundant or noisy. In the presence of highly correlated features, many variable selection methods treat correlated features as redundant and remove them. In this paper, a novel supervised feature selection algorithm, SCANMF, is proposed by jointly integrating correlation analysis and structural analysis of the balanced supervised non-negative matrix factorization (NMF). Furthermore, an ℓ2,1-norm minimization constraint is incorporated into the objective function to guarantee sparsity in the rows of the feature matrix and to reduce noisy features. Our algorithm exploits the discriminative information, feature combinations, and the original features in the context of a supervised NMF method, which can be beneficial for both classification and interpretation. An efficient iterative algorithm is designed to solve the constrained optimization problem with guaranteed convergence. Finally, a series of extensive experiments is conducted on 8 complex datasets. Promising results using multiple classifiers demonstrate the effectiveness and efficiency of our algorithm over state-of-the-art methods.
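As background for the ℓ2,1-norm constraint mentioned in this abstract: the ℓ2,1 norm of a matrix is the sum of the Euclidean norms of its rows, so minimizing it drives entire rows of the feature matrix to zero, which is what deselects features. A minimal illustrative sketch (the function name is ours, not from the paper):

```python
import math

def l21_norm(rows):
    """Sum of the Euclidean (l2) norms of the rows: the l2,1 matrix norm."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in rows)

# Two matrices with entries of the same magnitudes: the row-sparse one
# (a single active row) has the smaller l2,1 norm, so an l2,1 penalty
# prefers solutions that zero out entire feature rows.
row_sparse = [[3.0, 4.0], [0.0, 0.0]]
spread_out = [[3.0, 0.0], [0.0, 4.0]]
print(l21_norm(row_sparse))  # 5.0
print(l21_norm(spread_out))  # 7.0
```

This row-wise sparsity is what distinguishes ℓ2,1 regularization from a plain ℓ1 penalty, which would zero individual entries rather than whole feature rows.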
Braytee, A, Liu, W, Catchpoole, DR & Kennedy, PJ 2017, 'Multi-Label Feature Selection using Correlation Information', Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17: ACM Conference on Information and Knowledge Management, ACM, Singapore, Singapore, pp. 1649-1656.
View/Download from: Publisher's site
View description>>
© 2017 ACM. High-dimensional multi-labeled data contain instances, where each instance is associated with a set of class labels and has a large number of noisy and irrelevant features. Feature selection has been shown to have great benefits in improving classification performance in machine learning. In multi-label learning, selecting the discriminative features among multiple labels raises several challenges: interdependent labels, different instances sharing different label correlations, correlated features, and missing and flawed labels. This work is part of a project at The Children's Hospital at Westmead (TB-CHW), Australia to explore the genomics of childhood leukaemia. In this paper, we propose CMFS (Correlated and Multi-label Feature Selection), a method based on non-negative matrix factorization (NMF) for simultaneously performing feature selection and addressing the aforementioned challenges. Significantly, a major advantage of our research is to exploit the correlation information contained in features, labels and instances to select the relevant features among multiple labels. Furthermore, ℓ2,1-norm regularization is incorporated in the objective function to undertake feature selection by imposing sparsity on the rows of the feature matrix. We employ CMFS to decompose the data and multi-label matrices into a low-dimensional space. To solve the objective function, an efficient iterative optimization algorithm is proposed with guaranteed convergence. Finally, extensive experiments are conducted on high-dimensional multi-labeled datasets. The experimental results demonstrate that our method significantly outperforms state-of-the-art multi-label feature selection methods.
Broekhuijsen, M, van den Hoven, E & Markopoulos, P 2017, 'Design Directions for Media-Supported Collocated Remembering Practices', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, Japan, pp. 21-30.
View/Download from: Publisher's site
View description>>
Since the widespread adoption of digital photography, people create many digital photos, often with the intention to use them for shared remembering. Practices around digital photography have changed along with advances in media sharing technologies such as smartphones, social media, and mobile connectivity. Although much research was done at the start of digital photography, commercially available tools for media-supported shared remembering still have many limitations. The objective of our research is to explore spatial and material design directions to better support the use of personal photos for collocated shared remembering. In this paper, we present seven design requirements that resulted from a redesign workshop with fifteen participants, and four design concepts (two spatial, two material) that we developed based on those requirements. By reflecting on the requirements and designs we conclude with challenges for interaction designers to support collocated remembering practices.
Buchan, J, Bano, M, Zowghi, D, MacDonell, SG & Shinde, A 2017, 'Alignment of Stakeholder Expectations about User Involvement in Agile Software Development', EASE, International Conference on Evaluation and Assessment in Software Engineering, ACM, Karlskrona, Sweden, pp. 334-343.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. Context: User involvement is generally considered to contribute to user satisfaction and project success, and is central to Agile software development. In theory, the expectations about user involvement, such as the Product Owner's (PO's), are quite demanding in this Agile way of working. But what are the expectations seen in practice, and are the expectations of user involvement aligned among the development team and users? Any misalignment could contribute to conflict and miscommunication among stakeholders that may result in ineffective user involvement. Objective: Our aim is to compare and contrast the expectations of two stakeholder groups (the software development team and software users) about user involvement, in order to understand the expectations and assess their alignment. Method: We conducted an exploratory case study of expectations about user involvement in an Agile software development project. Qualitative data was collected through interviews, and a novel method for assessing the alignment of expectations about user involvement was designed by applying Repertory Grids (RG). Results: By aggregating the results from the interviews and RGs, varying degrees of expectation alignment were observed between the development team and user representatives. Conclusion: Alignment of expectations can be assessed in practice using the proposed RG instrument, which can reveal misalignment between user roles and the activities they participate in during Agile software development projects. Although we used the RG instrument retrospectively in this study, we posit that it could also be applied from the start of a project, or proactively as a diagnostic tool throughout a project, to assess and ensure that expectations are aligned.
Cao, J, Ma, M, Li, H, Fu, Y, Niu, B & Li, F 2017, 'Trajectory Prediction-based Handover Authentication Mechanism for Mobile Relays in LTE-A High-Speed Rail Networks', 2017 IEEE International Conference on Communications (ICC), IEEE International Conference on Communications (ICC), IEEE, Paris, France.
Cao, Z, Prasad, M & Lin, C-T 2017, 'Estimation of SSVEP-based EEG complexity using inherent fuzzy entropy', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This study considers the dynamic changes of the complexity feature under fuzzy entropy measurement and repetitive steady-state visual evoked potential (SSVEP) stimuli. Since brain complexity reflects the ability of the brain to adapt to changing situations, we suppose such adaptation is closely related to habituation, a form of learning in which an organism decreases its response to a stimulus after repeated presentations. Using a wearable electroencephalograph (EEG) with Fpz and Oz electrodes, EEG signals were collected from 20 healthy participants in one resting session and five 15 Hz SSVEP sessions. The EEG complexity feature was extracted by a multi-scale Inherent Fuzzy Entropy (IFE) algorithm, and relative complexity (RC) was defined as the difference between resting and SSVEP complexity. Our results showed that enhanced frontal and occipital RC accompanied increased stimulus repetitions. Compared with the 1st SSVEP session, the RC was significantly higher in the 5th SSVEP session at the frontal and occipital areas (p < 0.05). This suggests that the brain adapted to changes in stimulus influence, possibly connected with habituation. In conclusion, IFE is potentially an effective EEG signature of complexity in SSVEP-based experiments.
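For readers unfamiliar with the measure, a generic single-scale fuzzy entropy (FuzzyEn) with an exponential membership function can be sketched as below. This is a textbook formulation shown purely for illustration, not the authors' multi-scale Inherent Fuzzy Entropy implementation; the parameter names are the conventional ones (`m` embedding dimension, `r` tolerance, `n` fuzzy power).

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Single-scale fuzzy entropy (FuzzyEn) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)  # tolerance scaled by the signal's std, as is customary

    def phi(dim):
        # Mean-subtracted embedding vectors of length `dim`; the same vector
        # count is used for dim = m and dim = m + 1.
        N = len(x) - m
        X = np.array([x[i:i + dim] - x[i:i + dim].mean() for i in range(N)])
        # Chebyshev distance between all pairs of vectors.
        d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)              # fuzzy membership degree
        return (sim.sum() - N) / (N * (N - 1))   # exclude self-matches

    return -np.log(phi(m + 1) / phi(m))
```

Higher values indicate a less regular signal: a noisy recording scores well above a clean sinusoid of the same length.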
Cetindamar, D & Beyhan, B 2017, 'Social Innovation Assessment at the University Level', 2017 Portland International Conference on Management of Engineering and Technology (PICMET), 2017 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, OR, USA, pp. 1-4.
View/Download from: Publisher's site
View description>>
© 2017 PICMET. Based on a literature review, our paper points out the need for an assessment model that can account for the social aspect of technological innovations generated in universities. Rather than quantitative metrics, a case-based approach seems appropriate for evaluating social innovations at universities, as it captures the richness of social impact. We further suggest gathering information on four dimensions of social innovation to complement the case approach: (1) the categories of beneficiaries who will benefit from social innovations, (2) the geographic location of impact, (3) the type of social innovations in terms of their output, and (4) the social benefit that the innovation will bring. The paper ends with a few suggestions for further studies.
Chang, Y-C, Wang, Y-K, Wu, D & Lin, C-T 2017, 'Generating a fuzzy rule-based brain-state-drift detector by Riemann-metric-based clustering', 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Banff, AB, Canada, pp. 1220-1225.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Brain-state drifts can significantly impact the performance of machine-learning algorithms in brain-computer interfaces (BCI). However, little is understood about how brain transition states influence a model and how they can be represented in a system. Herein we are interested in the hidden information of brain-state drift occurring in both simulated and real-world human-system interaction. This research introduced the Riemann metric to categorize EEG data and visualized the clustering result so that the distribution of the data can be observed. Moreover, to handle the subjective uncertainty of electroencephalography (EEG) signals, fuzzy theory was employed. In this study, we built a fuzzy rule-based brain-state-drift detector to observe the brain state and imported data from different subjects to test its performance. The detection results are acceptable and are presented in this paper. In the future, we expect that brain-state drifting can be connected with human behaviors via the proposed fuzzy rule-based classification. We will also develop a new structure for the fuzzy rule-based brain-state-drift detector to improve the detection accuracy.
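The abstract does not state which Riemannian metric is used to categorize the EEG data; a common choice for EEG covariance matrices is the affine-invariant distance between symmetric positive-definite (SPD) matrices, sketched here purely as background rather than as the authors' method:

```python
import numpy as np

def riemann_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    the l2 norm of the log-eigenvalues of A^{-1} B."""
    eigvals = np.linalg.eigvals(np.linalg.solve(A, B))
    return float(np.sqrt(np.sum(np.log(eigvals.real) ** 2)))
```

Under this metric, distances are preserved when both covariance matrices undergo the same invertible transform of the underlying signals, which is one reason it is popular for clustering EEG trials.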
Chauhan, J, Hu, Y, Seneviratne, S, Misra, A, Seneviratne, A & Lee, Y 2017, 'BreathPrint', Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys'17: The 15th Annual International Conference on Mobile Systems, Applications, and Services, ACM, pp. 278-291.
View/Download from: Publisher's site
Chen, S, Chen, S, Lin, L, Yuan, X, Liang, J & Zhang, X 2017, 'E-Map: A Visual Analytics Approach for Exploring Significant Event Evolutions in Social Media', 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, Phoenix, Arizona, USA, pp. 36-47.
View/Download from: Publisher's site
View description>>
Significant events are often discussed and spread through social media, involving many people. Reposting activities and opinions expressed in social media offer good opportunities to understand the evolution of events. However, the dynamics of reposting activities and the diversity of user comments pose challenges to understanding event-related social media data. We propose E-Map, a visual analytics approach that uses map-like visualization tools to help multi-faceted analysis of social media data on a significant event and in-depth understanding of the development of the event. E-Map transforms extracted keywords, messages, and reposting behaviors into map features such as cities, towns, and rivers to build a structured and semantic space for users to explore. It also visualizes complex posting and reposting behaviors as simple trajectories and connections that can be easily followed. By supporting multi-level spatial-temporal exploration, E-Map helps to reveal the patterns of event development and key players in an event, disclosing the ways they shape and affect the development of the event. Two cases analysing real-world events confirm the capacities of E-Map in facilitating the analysis of event evolution with social media data.
Chen, Z, Li, J, Chen, Z & You, X 2017, 'Generic Pixel Level Object Tracker Using Bi-Channel Fully Convolutional Network', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 666-676.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2017. Whereas most object tracking algorithms predict bounding boxes to cover the target, pixel-level tracking methods provide a better description of the target. However, it remains challenging for a tracker to precisely identify detailed foreground areas of the target. In this work, we propose a novel bi-channel fully convolutional neural network to tackle the generic pixel-level object tracking problem. By capturing and fusing both low-level and high-level temporal information, our network is able to produce pixel-level foreground masks of the target accurately. In particular, our model neither updates parameters to fit the tracked target nor requires prior knowledge about the category of the target. Experimental results show that the proposed network achieves compelling performance on challenging videos in comparison with competitive tracking algorithms.
Chen, Z, You, X & Li, J 2017, 'Learning to focus for object proposals', 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), IEEE, Shenzhen, China, pp. 439-444.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Object proposal generators address the wasteful exhaustive search of the sliding-window scheme in visual object detection and have been shown to be effective. However, the number of candidate windows is still large in order to ensure full coverage of potential objects. This paper presents a complementary technique that aims to work with any proposal generating system, amending the workflow from 'propose-assess' to 'propose-adjust-assess'. The adjustment serves as an auto-focus mechanism for the system and reduces the number of object proposals to be processed. The auto-focus is realized by two learning-based transformation models, one translating and the other deforming the windows towards better alignment with the objects; both are trained to identify generic objects using image cues. Experiments on real-life image datasets show that the proposed technique can reduce the number of proposals without loss of performance.
Cheng, E-J, Prasad, M, Puthal, D, Sharma, N, Prasad, OK, Chin, P-H, Lin, C-T & Blumenstein, M 2017, 'Deep Learning Based Face Recognition with Sparse Representation Classification', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24th International Conference on Neural Information Processing (ICONIP), Springer International Publishing, Guangzhou, China, pp. 665-674.
View/Download from: Publisher's site
View description>>
Feature extraction is an essential step in solving real-world pattern recognition and classification problems. The accuracy of face recognition depends highly on the features extracted to represent a face. Traditional algorithms use geometric techniques, with feature values such as distances and angles between geometric points (eye corners, mouth extremities, and nostrils). These features are sensitive to factors such as illumination, pose variation and facial expression, to mention a few. Recently, deep learning techniques have proved very effective for feature extraction, and deep features have considerable tolerance for varied conditions and unconstrained environments. This paper proposes a two-layer deep convolutional neural network (CNN) for face feature extraction and applies sparse representation for face identification. The sparsity and selectivity of deep features can strengthen the sparseness of the sparse representation solution, which generally improves the recognition rate. The proposed method outperforms other feature extraction and classification methods in terms of recognition accuracy.
Cheng, H, Ning, Y & Yu, S 2017, 'Establishing secure and efficient encrypted connectivity for random cloud tenants', 2017 3rd IEEE International Conference on Computer and Communications (ICCC), 2017 3rd IEEE International Conference on Computer and Communications (ICCC), IEEE, pp. 2439-2444.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In cloud computing, participants join or leave the network and launch on-demand secure communication at random. In such situations, KDC and PKI infrastructures are inefficient and costly. In order to establish secure and efficient encrypted connectivity for random on-demand cloud tenants, several important problems must be solved first, such as secret agreement, secret sharing and session key establishment. In this paper, a secret sharing scheme is proposed for cloud tenants when any two of them launch on-demand encrypted connectivity; building on the proposed scheme, we construct a symmetric key application for the cloud computing environment. Analysis and simulation of the proposed scheme demonstrate that it is secure and efficient for establishing encrypted connectivity between random on-demand network participants in cloud computing.
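The paper's own secret sharing scheme is not reproduced in the abstract; as background, the classic Shamir (k, n) threshold scheme on which many such constructions build can be sketched as follows (the field modulus and function names are our illustrative choices, not the paper's):

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the finite-field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Here `make_shares(secret, k=3, n=5)` hands each tenant one `(x, y)` point; any three tenants can pool their points to recover the secret, while any two learn nothing about it.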
Chiu, C-Y, Singh, AK, Wang, Y-K, King, J-T & Lin, C-T 2017, 'A wireless steady state visually evoked potential-based BCI eating assistive system', IJCNN, International Joint Conference on Neural Networks, IEEE, Anchorage, AK, USA, pp. 3003-3007.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Brain-Computer interface (BCI) technology, which aims at enabling users to perform tasks through their brain waves, has become a feasible and worthwhile solution for the growing demands of healthcare. Currently proposed BCI systems often have low applicability and do little to reduce the burden on users, because of the time-consuming preparation required by wet sensors and the shortage of interactive functions provided. Here, by integrating a steady-state visually evoked potential (SSVEP)-based BCI system and a robotic eating assistive system, we propose a non-invasive wireless SSVEP-based BCI eating assistive system that enables users with physical disabilities to have meals independently. The analysis compared different classification methods and identified the best one. The applicability of the integrated eating assistive system was tested by an Amyotrophic Lateral Sclerosis (ALS) patient, who provided a questionnaire reply and some suggestions. Fifteen healthy subjects engaged in the experiment, achieving an average accuracy of 91.35% and an information transfer rate (ITR) of 20.69 bits per minute. For the online performance evaluation, the ALS patient gave basic affirmation and provided suggestions for further improvement. In summary, we propose a usable SSVEP-based BCI system that enables users to have meals independently. With additional adjustment of the robotic arm's movement design and the classification algorithm, the system may offer users with physical disabilities a new way to take care of themselves.
Choi, Y & McGregor, C 2017, 'A Flexible Parental Engaged Consent Model for the Secondary Use of Their Infant's Physiological Data in the Neonatal Intensive Care Context', 2017 IEEE International Conference on Healthcare Informatics (ICHI), 2017 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Park City, UT, USA, pp. 502-507.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The secondary use of health data, especially the use of physiological data for research, holds many opportunities for improving the current understanding of neonatal conditions. As a neonate is unable to provide consent to participate in research studies, a substitute decision maker (SDM) must provide parental or legal guardian consent. However, it is well documented that there are many emotional, mental and physical challenges associated with the parental consent process in the neonatal intensive care unit (NICU). It is proposed that a flexible parental engaged consent model could help alleviate some of these issues by providing parents with the ability to choose and change their clinical engagement level preference for their infant's participation in research at their convenience, at any point in time. In this paper, an extension to the Service-based Multidimensional Temporal Data Mining Framework (STDMn0) to support flexible patient or surrogate consent is presented, based on the flexible consent model initially proposed by Heath [1]. This functionality is demonstrated via an example implementation for a generic retrospective research study in the NICU setting.
Chou, K-P, Li, D-L, Prasad, M, Lin, C-T & Lin, W-C 2017, 'A method to enhance the deep learning in an aerial image', 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), IEEE, Xiamen, China, pp. 724-728.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this paper, we propose a pre-processing method that can be applied to deep learning methods to exploit the characteristics of aerial images. The method combines color and spatial information to perform quick background filtering, which not only increases execution speed but also reduces the false-positive rate.
Chou, K-P, Li, D-L, Prasad, M, Pratama, M, Su, S-Y, Lu, H, Lin, C-T & Lin, W-C 2017, 'Robust Facial Alignment for Face Recognition', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017 International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 497-504.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. This paper proposes a robust real-time face recognition system that utilizes a regression-tree-based method to locate facial feature points. The proposed system finds a face region suitable for the recognition task by geometric analysis of the facial expression in the target face image. In real-world facial recognition systems, the face is often cropped using face detection techniques. Misalignment inevitably occurs due to facial pose, noise, occlusion, and so on, and it affects the recognition rate because of the sensitive nature of the face classifier. The performance of the proposed approach is evaluated on four benchmark databases. The experimental results show the robustness of the proposed approach, with significant improvement in the facial recognition system across various sizes and resolutions of face images.
Chou, K-P, Prasad, M, Gupta, D, Sankar, S, Xu, T-W, Sundaram, S, Lin, C-T & Lin, W-C 2017, 'Block-based feature extraction model for early fire detection', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Every year, fire disasters cause many casualties and much property damage, and many researchers study the related disaster prevention. Early warning systems and stable fire detection can significantly reduce the damage caused by fire. Many existing image-based early warning systems perform well in a particular field. In this paper, we propose a general framework that can be applied in most realistic environments. The proposed system is based on a block-based feature extraction method, which analyses local information in separate regions, reducing the amount of data to be computed. Local features of a fire block are extracted from the detailed characteristics of fire objects, including fire color, fire source immobility, and disorder. Each local feature has a high detection rate and filters out different false-positive cases. Global analysis of fire texture and non-moving properties is applied to further reduce the false alarm rate. The proposed system is composed of algorithms with low computational cost. A series of experiments shows that the proposed system achieves a higher detection rate and a low false alarm rate under various environments.
Chou, K-P, Prasad, M, Li, D-L, Bharill, N, Lin, Y-F, Hussain, F, Lin, C-T & Lin, W-C 2017, 'Automatic Multi-view Action Recognition with Robust Features', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 554-563.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. This paper proposes view-invariant features to address multi-view action recognition for different actions performed in different views. The view-invariant features are obtained from clouds of varying temporal scale by extracting holistic features, which are modeled to explicitly take advantage of the global, spatial and temporal distribution of interest points. The proposed view-invariant features are highly discriminative and robust for recognizing actions as the view changes. This paper proposes a mechanism for real-world applications that can follow the actions of a person in a video based on image sequences and can separate these actions according to given training data. Using the proposed mechanism, the beginning and end of an action sequence can be labeled automatically without manual setting. The proposed approach does not need to be re-trained when the scenario changes, which means the trained database can be applied in a wide variety of environments. The experimental results show that the proposed approach outperforms existing methods on the KTH and WEIZMANN datasets.
Chou, K-P, Prasad, M, Puthal, D, Chen, P-H, Vishwakarma, DK, Sundaram, S, Lin, C-T & Lin, W-C 2017, 'Fast Deformable Model for Pedestrian Detection with Haar-like features', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper proposes a novel Fast Deformable Model for Pedestrian Detection (FDMPD) to detect pedestrians efficiently and accurately in crowded environments. Despite the availability of multiple detection methods, detection remains difficult due to the variety of human postures and perspectives. The proposed study is divided into two parts. The first part trains six Adaboost classifiers with Haar-like features for different body parts (e.g., head, shoulders, and knees) to build response feature maps. The second part uses these six response feature maps with a full-body model to produce spatial deep features. The combined deep features are used as input to an SVM to judge the existence of a pedestrian. In experiments on the INRIA person dataset, the proposed FDMPD approach shows a greater than 44.75% improvement over other state-of-the-art methods in terms of efficiency and robustness.
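As background on the Haar-like features consumed by the Adaboost classifiers above: each feature is a difference of rectangle sums, computable in constant time from an integral image. A minimal sketch (not the authors' implementation; function names are ours):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Top-half minus bottom-half response of a two-rectangle Haar feature
    (h must be even); large magnitudes indicate a horizontal edge."""
    return rect_sum(ii, r, c, h // 2, w) - rect_sum(ii, r + h // 2, c, h // 2, w)
```

Because every rectangle sum costs four lookups regardless of size, a boosted cascade can evaluate thousands of such features per window cheaply, which is what makes Haar-based detectors fast.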
Chu, C, Brownlow, J, Meng, Q, Fu, B, Culbert, B, Zhu, M, Xu, G & He, X 2017, 'Combining heterogeneous features for time series prediction', 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Krakow, Poland, pp. 1-2.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Time series prediction is a challenging task in practice, and various methods have been proposed for it. However, most existing methods exploit only the historical series of values. The resulting predictive models may therefore be ineffective in some cases because: (1) the historical series of values is usually not sufficient, and (2) features from heterogeneous sources, such as the intrinsic features of the data samples themselves, which could be very useful, are not taken into consideration. To address these issues, we propose a novel method that learns the predictive model from a combination of dynamic features extracted from the series of historical values and static features of the data samples. To evaluate the performance of our proposed method, we compare it with linear regression and boosted trees, and the experimental results validate our method's superiority.
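The combination of dynamic and static features described above can be illustrated with a toy sketch; the helper below and its feature layout are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def build_features(series, static, n_lags=3):
    """Stack lagged values (dynamic) with per-sample static features."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(np.concatenate([series[t - n_lags:t], static]))
        y.append(series[t])
    return np.array(X), np.array(y)

# Toy example: a linear trend plus one static attribute of the sample.
series = np.arange(20, dtype=float)
static = np.array([1.0])            # e.g. an intrinsic attribute of the sample
X, y = build_features(series, static)

# Fit an ordinary least-squares model over both feature kinds at once.
A = np.c_[X, np.ones(len(X))]       # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
```

Each training row concatenates the last `n_lags` observed values (dynamic) with sample-level attributes (static), so a single model can weigh both sources; any regressor, such as the boosted trees the paper compares against, can consume the same matrix.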
Coenen, MJ, Vos, HI, Groothuismink, JM, van der Graaf, WT, Flucke, U, Schreuder, BH, Hagleitner, MM, Gelderblom, H, van der Straaten, T, de Bont, ES, Kremer, LC, Bras, J, Caron, HN, Windsor, R, Whelan, JS, Patino-Garcia, A, Gonzalez-Neira, A, McCowage, G, Nagabushan, S, Catchpoole, D, van Leeuwen, FN, Guchelaar, H-J & te Loo, MD 2017, 'Pharmacogenetics of chemotherapy response in osteosarcoma: a genetic variant in SLC7A8 is associated with progressive disease', Clinical Pharmacology & Therapeutics, Annual Meeting of the American Society for Clinical Pharmacology and Therapeutics (ASCPT), Wiley-Blackwell, Washington, DC, pp. S16-S16.
Coluccia, A, Ghenescu, M, Piatrik, T, De Cubber, G, Schumann, A, Sommer, L, Klatte, J, Schuchert, T, Beyerer, J, Farhadi, M, Amandi, R, Aker, C, Kalkan, S, Saqib, M, Sharma, N, Daud, S, Makkah, K & Blumenstein, M 2017, 'Drone-vs-Bird detection challenge at IEEE AVSS2017', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the 'drone-vs-bird detection challenge' to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.
Dasgupta, A & Gill, AQ 2017, 'Fog computing challenges: A systematic review', Proceedings of the 28th Australasian Conference on Information Systems (ACIS 2017), Australasian Conference on Information Systems, Hobart, Australia.
View description>>
Internet of Things (IoT) applications continue to grow at a rapid scale. However, current cloud-centric IoT architectures are not feasible to support the mobility needs as well as latency requirements of time-critical IoT applications. This has restricted the growth of IoT in certain sectors. This paper investigates the fog-computing paradigm as an alternative for IoT applications. There is a need to systematically review and synthesize fog computing concerns or challenges for IoT applications. This paper aims to address this important research need using a well-known systematic literature review (SLR) approach. Using the SLR approach and applying customized search criteria derived from the research question, 17 relevant studies were identified and reviewed in this regard from an initial set of 439 papers. In addition, 4 papers were manually identified based on their relevance. The data was organized into four major challenge categories. The findings of this research paper can help practitioners and researchers to understand fog computing related concerns, and provide a number of useful insights for future work. The scope of this paper is limited to the number of reviewed studies from the chosen databases.
Diesner, J, Ferrari, E & Xu, G 2017, 'Welcome from the ASONAM 2017 program chairs', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2017, p. xviii.
Dong, D & Wang, Y 2017, 'Several recent developments in estimation and robust control of quantum systems', 2017 Australian and New Zealand Control Conference (ANZCC), 2017 Australian and New Zealand Control Conference (ANZCC), IEEE, pp. 190-195.
View/Download from: Publisher's site
Dong, F, Lu, J, Li, K & Zhang, G 2017, 'Concept drift region identification via competence-based discrepancy distribution estimation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Real-world data analytics often involves cumulative data. While such data contains valuable information, the pattern or concept underlying the data may change over time, which is known as concept drift. When learning under concept drift, it is essential to know when, how and where the context has evolved. Most existing drift detection methods focus only on triggering a signal when drift is detected, and little research has endeavored to explain how and where the data changes. To address this issue, we introduce kernel density estimation into a competence-based drift detection method, and develop competence-based discrepancy distribution estimation to identify the specific regions in the data feature space where drift has occurred. Two experiments demonstrate that our proposed approach, competence-based discrepancy density estimation, can quantitatively highlight drift regions in the data feature space, and produces results that are very close to the preset drift regions.
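The region-identification idea in this abstract can be illustrated with a toy density-discrepancy sketch: estimate densities for an old and a new data window and flag locations where they disagree. This uses a plain one-dimensional Gaussian KDE for illustration only; the paper's competence-based estimator is more involved, and all names and thresholds here are assumptions.

```python
import math

def gaussian_kde(sample, h):
    """Return a kernel density estimate built from `sample` with bandwidth h."""
    norm = len(sample) * h * math.sqrt(2 * math.pi)
    return lambda x: sum(math.exp(-((x - s) / h) ** 2 / 2) for s in sample) / norm

def drift_regions(old, new, grid, h=0.5, threshold=0.1):
    """Flag grid points where the estimated densities of the two windows disagree."""
    p_old, p_new = gaussian_kde(old, h), gaussian_kde(new, h)
    return [x for x in grid if abs(p_new(x) - p_old(x)) > threshold]

# A new mode appears near 5.0 in the second window, so that region (and the
# thinned-out region near 0.0) is flagged, while 2.5 is not.
regions = drift_regions(old=[-0.1, 0.0, 0.05, 0.1],
                        new=[-0.1, 0.0, 5.0, 5.1],
                        grid=[0.0, 2.5, 5.0])
```

Note that both the region where density appeared (near 5.0) and where it dropped (near 0.0) count as drift regions, matching the idea of locating "where" the distribution changed rather than merely signalling "that" it changed.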
Dou, W, Xu, X, Meng, S, Zhang, X, Hu, C, Yu, S & Yang, J 2017, 'An energy‐aware virtual machine scheduling method for service QoS enhancement in clouds over big data', Concurrency and Computation: Practice and Experience, Wiley, pp. e3909-e3909.
View/Download from: Publisher's site
View description>>
SummaryBecause of the strong demands of physical resources of big data, it is an effective and efficient way to store and process big data in clouds, as cloud computing allows on‐demand resource provisioning. With the increasing requirements for the resources provisioned by cloud platforms, the Quality of Service (QoS) of cloud services for big data management is becoming significantly important. Big data has the character of sparseness, which leads to frequent data accessing and processing, and thereby causes huge amount of energy consumption. Energy cost plays a key role in determining the price of a service and should be treated as a first‐class citizen as other QoS metrics, because energy saving services can achieve cheaper service prices and environmentally friendly solutions. However, it is still a challenge to efficiently schedule Virtual Machines (VMs) for service QoS enhancement in an energy‐aware manner. In this paper, we propose an energy‐aware dynamic VM scheduling method for QoS enhancement in clouds over big data to address the above challenge. Specifically, the method consists of two main VM migration phases where computation tasks are migrated to servers with lower energy consumption or higher performance to reduce service prices and execution time. Extensive experimental evaluation demonstrates the effectiveness and efficiency of our method. Copyright © 2016 John Wiley & Sons, Ltd.
ElShaweesh, O, Hussain, FK, Lu, H, Al-Hassan, M & Kharazmi, S 2017, 'Personalized Web Search Based on Ontological User Profile in Transportation Domain', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24th International Conference on Neural Information Processing 2017, Springer International Publishing, Guangzhou, China, pp. 239-248.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Current conventional search engines deliver similar results to all users for the same query. Because of the variety of user interests and preferences, personalized search engines, based on semantics, hold the promise of providing more efficient information that better reflects users’ needs. The main feature of building a personalized web search is to represent user interests in terms of user profiles. This paper proposes a personalized search approach using an ontology-based user profile. The aim of this approach is to build user profiles based on user browsing behavior and semantic knowledge of specific domain ontology to enhance the quality of the search results. The proposed approach utilizes a re-ranked algorithm to sort the results returned by the search engine to provide a search result that best relates to the user query. This algorithm evaluates the similarity between a user query, the retrieved search results and the ontological concepts. This similarity is computed by taking into account a user’s explicit browsing behavior, semantic knowledge of concepts, and synonyms of term-based vectors extracted from the WordNet API. A set of experiments using a case study from a transport service domain validates the effectiveness of the proposed approach and demonstrates promising results.
Erfani, SS, Lawrence, C, Abedin, B, Beydoun, G & Malimu, L 2017, 'Indigenous people living with cancer: Developing a mobile health application for improving their psychological well-being', AMCIS 2017 - America's Conference on Information Systems: A Tradition of Innovation, Americas Conference on Information Systems, AIS Electronic Library, Boston, pp. 1-5.
View description>>
Poor cancer outcomes experienced by Indigenous Australians result from advanced cancer stages at diagnosis, poorer uptake of and adherence to treatments, higher levels of co-morbidity, and poorer access to inclusive and culturally appropriate care compared with non-Indigenous Australians. Socio-economics and social support can mitigate these problems. Technology-based interventions hold considerable promise for enhancing social support. This paper asks: what are the key features of a mobile health application designed to improve the social support, and consequently the psychological well-being, of Indigenous Australians living with cancer? To answer this question, a comprehensive literature review of studies conducted in the information systems and health disciplines has been undertaken and a theoretical model is proposed. This study contributes to the existing knowledge base through the development of a new theoretical model and the introduction of features of a mobile health application that may have a positive impact on Indigenous Australian cancer patients' psychological well-being.
Fang, XS, Sheng, QZ, Wang, X & Ngu, AHH 2017, 'Value Veracity Estimation for Multi-Truth Objects via a Graph-Based Approach', Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion, the 26th International Conference, ACM Press, pp. 777-778.
View/Download from: Publisher's site
Fang, XS, Sheng, QZ, Wang, X, Barhamgi, M, Yao, L & Ngu, AHH 2017, 'SourceVote: Fusing Multi-valued Data via Inter-source Agreements', Conceptual Modeling: ER 2017, 36th International Conference on Conceptual Modeling (ER), Springer International Publishing, Valencia, Spain, pp. 164-172.
View/Download from: Publisher's site
Fei, F, Li, S, Dou, W & Yu, S 2017, 'An Evolutionary Approach for Short-Term Traffic Flow Forecasting Service in Intelligent Transportation System', Smart Computing and Communication: SmartCom 2016, 1st International Conference on Smart Computing and Communication (SmartCom), Springer International Publishing, Shenzhen, China, pp. 477-486.
View/Download from: Publisher's site
Feng, VX & Leong, TW 2017, 'Digital meaning', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, pp. 366-370.
View/Download from: Publisher's site
Ferguson, S & Bown, O 2017, 'Creative Coding for the Raspberry Pi using the HappyBrackets Platform', Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, C&C '17: Creativity and Cognition, ACM, Singapore, pp. 551-553.
View/Download from: Publisher's site
View description>>
This workshop will introduce creative coding audio for the Raspberry Pi, using the 'beads' platform for audio programming, and the 'HappyBrackets' platform for inter-device communication and sensor data acquisition. We will demonstrate methods to allow each self-contained battery-powered device to acquire sensor data about its surroundings and the way it is being interacted with, as well as methods for designing systems where groups of these devices wirelessly communicate their state, allowing new interaction possibilities and approaches.
Ferguson, S, Rowe, A, Bown, O, Birtles, L & Bennewith, C 2017, 'Networked Pixels', Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, C&C '17: Creativity and Cognition, ACM, Singapore, pp. 299-308.
View/Download from: Publisher's site
View description>>
This paper describes the development of the hardware and software for Bloom, a light installation installed at Kew Gardens, London in December of 2016. The system is made up of a set of nearly 1000 distributed pixel devices each with LEDs, GPS sensor, and sound hardware, networked together with WiFi to form a display system. Media design for this system required consideration of the distributed nature of the devices. We outline the software and hardware designed for this system, and describe two approaches to the software and media design, one whereby we employ the distributed devices themselves for computation purposes (the approach we ultimately selected), and another whereby the devices are controlled from a central server that is performing most of the computation necessary. We then review these approaches and outline possibilities for future research.
Fernando, KES, McGregor, C & James, AG 2017, 'CRISP-TDM0 for standardized knowledge discovery from physiological data streams: Retinopathy of prematurity and blood oxygen saturation case study', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 226-229.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Prior research proposed combining the CRoss Industry Standard Process for Temporal Data Mining (CRISP-TDM), which supports physiological stream temporal data mining, with CRISP-DM0, which supports null-hypothesis-driven confirmatory data mining. This combined CRISP-TDM0 is utilised as a standardised approach to managing, reporting and performing retrospective clinical research, and is designed to address the limitations of knowledge discovery from physiological data streams [1]. The temporal abstractions (TA) of high-fidelity blood oxygen saturation (SpO2) levels of nine premature neonates are analysed using data collected by the Artemis Platform, which complies with the Big Data concept [2], and correlated with Retinopathy of Prematurity (ROP) data. The hourly SpO2 TA pattern visualisation manifested three clusters, and this is further supported by mathematical review of the percentage of time spent in target, below-target and above-target oxygenation. Clustering based on ROP stage and gestational age identified probable associations within these three clusters. However, known risk factors showed no association with ROP.
Ferrari, A, Spoletini, P, Donati, B, Zowghi, D & Gnesi, S 2017, 'Interview Review: Detecting Latent Ambiguities to Improve the Requirements Elicitation Process', 2017 IEEE 25th International Requirements Engineering Conference (RE), 2017 IEEE 25th International Requirements Engineering Conference (RE), IEEE, Lisbon, pp. 400-405.
View/Download from: Publisher's site
View description>>
The review of software process artifacts, which include requirements as well as source code [1], is an effective practice to improve the quality of products [2]–[5]. In particular, the benefits of requirements reviews have been highlighted by several studies, especially for what concerns the identification of defects in requirements specifications [3], [6], [7]. Nevertheless, despite the usage of requirements reviews dating back at least 40 years [6], challenges exist for their widespread application in the software industry [8], [9]. Among the challenges, Salger highlights that “Software requirements are based on flawed ‘upstream’ requirements and reviews on requirements specifications are thus in vain” [8]. This observation places an emphasis on the need to ameliorate early requirements elicitation activities, especially to improve the completeness of the specifications, a quality attribute that is recognised to be hard to assess by means of reviews [10].
Gao, F, Musial, K & Gabrys, B 2017, 'A Community Bridge Boosting Social Network Link Prediction Model', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17: Advances in Social Networks Analysis and Mining 2017, ACM, pp. 683-689.
View/Download from: Publisher's site
View description>>
Link prediction in social networks is a very challenging research problem. The majority of existing approaches are based on the assumption that a given network evolves following a single phenomenon, e.g. "rich get richer" or "a friend of my friend is my friend". However, network dynamics change over time, and different parts of the network evolve in different ways. Because of that, we hypothesise that prediction accuracy can be improved by treating different nodes and links differently. Building on that assumption, we propose a Community Bridge Boosting Prediction Model (CBBPM) that treats certain bridge nodes differently depending on their structural position. For such bridge nodes, the similarity score obtained using traditional link-based prediction methods is boosted. Doing so increases the importance of these nodes while ensuring that the CBBPM can be used with any existing link prediction method. Our experimental results show that such a bridge node similarity boosting mechanism can improve the accuracy of traditional link prediction methods.
Ghamry, AM, Alkalbani, AM, Tran, V, Tsai, Y-C, Hoang, ML & Hussain, FK 2017, 'Towards a Public Cloud Services Registry', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 18th International Conference on Web Information Systems Engineering, Springer International Publishing, Puschino, Russia, pp. 290-295.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. A cloud services registry is a cloud services database which contains thousands of records of cloud consumers' reviews and cloud services, such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The data set is harvested from a web portal called www.serchen.com. Each record holds detailed information about the service, such as service name, service description, categories, key features, service provider link and review list. Each review contains the reviewer name, review date and review content. This work is an extension of our previous work, the Blue Pages data set [6]. The data set is valuable for future research in cloud service identification, discovery, comparison and selection.
Ghantous, GB & Gill, AQ 2017, 'DevOps: Concepts, practices, tools, benefits and challenges', Proceedings of the 21st Pacific Asia Conference on Information Systems: Societal Transformation Through IS/IT (PACIS 2017), AIS Electronic Library (AISeL), Malaysia.
View description>>
DevOps, which originated in the context of agile software development, seems an appropriate approach to enable the continuous delivery and deployment of working software in small releases. Organizations are taking significant interest in adopting DevOps ways of working. The interest is there; however, the challenge is how to adopt DevOps effectively in practice. Before embarking on the DevOps journey, there is a need to clearly understand the DevOps concepts, practices, tools, benefits and underlying challenges. Thus, in order to address the research question at hand, this paper adopts a Systematic Literature Review (SLR) approach to identify, review and synthesize the relevant studies published in the public domain between 2010 and 2016. The SLR approach was applied to initially identify a set of 450 papers. Finally, 30 of the 450 papers were selected and reviewed to identify eight key DevOps concepts, twenty practices, and twelve tool categories. The research also identified seventeen benefits of using the DevOps approach for application development and four known challenges. The results of this review will serve as a knowledge base for researchers and practitioners, which can be used to effectively understand and establish an integrated DevOps capability in the local context.
Gill, AQ, Behbood, V, Ramadan-Jradi, R & Beydoun, G 2017, 'IoT architectural concerns: a systematic review', ICC, International Conference on Internet of Things and Cloud Computing, ACM, Cambridge, United Kingdom, pp. 117:1-117:1.
View/Download from: Publisher's site
View description>>
© 2017 ACM. There is increasing interest in studying and applying the Internet of Things (IoT) within the overall context of digital-physical ecosystems. Most recently, much has been published on the benefits and applications of IoT. The main question is: what are the key IoT architectural concerns which must be addressed to effectively develop and implement an IoT architecture? There is a need to systematically review and synthesize the literature on IoT architectural challenges or concerns. Using the SLR approach and applying customised search criteria derived from the research question, 22 relevant studies were identified and reviewed in this paper. The data from these papers were extracted to identify the IoT architectural challenges and relevant solutions. These results were organised into 9 major challenge categories and 7 solution categories. The results of this research will serve as a resource for practitioners and researchers for effective adoption, and for setting future research priorities and directions, in this emerging area of IoT architecture.
Gu, L, Wang, K, Liu, X, Guo, S & Liu, B 2017, 'A reliable task assignment strategy for spatial crowdsourcing in big data environment', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. With the ubiquitous deployment of mobile devices with increasingly better communication and computation capabilities, an emerging model called spatial crowdsourcing has been proposed to solve the problem of unstructured big data by publishing location-based tasks to participating workers. However, the massive spatial data generated by spatial crowdsourcing entails a critical challenge: the system has to guarantee the quality control of crowdsourcing. This paper first studies a practical task assignment problem, namely reliability-aware spatial crowdsourcing (RA-SC), which takes constrained tasks and numerous dynamic workers into consideration. Specifically, worker confidence is introduced to reflect the completion reliability of the assigned task. Our RA-SC problem is to perform task assignments such that the reliability under budget constraints is maximized. We then reveal the typical properties of the proposed problem, and design an effective strategy to achieve high reliability of the task assignment. Besides the theoretical analysis, extensive experimental results also demonstrate that the proposed strategy is stable and effective for spatial crowdsourcing.
Guo, J, Yue, B, Xu, G, Yang, Z & Wei, J-M 2017, 'An Enhanced Convolutional Neural Network Model for Answer Selection', Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion, the 26th International Conference, ACM Press, Perth, Western Australia, pp. 789-790.
View/Download from: Publisher's site
View description>>
Answer selection is an important task in question answering (QA) from the Web. To address the intrinsic difficulty in encoding sentences with semantic meanings, we introduce a general framework, i.e., Lexical Semantic Feature based Skip Convolution Neural Network (LSF-SCNN), with several optimization strategies. The intuitive idea is that the granular representations with more semantic features of sentences are deliberately designed and estimated to capture the similarity between question-answer pairwise sentences. The experimental results demonstrate the effectiveness of the proposed strategies and our model outperforms the state-of-the-art ones by up to 3.5% on the metrics of MAP and MRR.
Gupta, D, Borah, P & Prasad, M 2017, 'A fuzzy based Lagrangian twin parametric-margin support vector machine (FLTPMSVM)', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In the spirit of the twin parametric-margin support vector machine (TPMSVM) and the support vector machine based on fuzzy membership values (FSVM), a new method termed the fuzzy based Lagrangian twin parametric-margin support vector machine (FLTPMSVM) is proposed in this paper to reduce the effect of outliers. In FLTPMSVM, we assign weights to the data samples on the basis of fuzzy membership values to reduce the effect of outliers. We also consider the square of the 2-norm of the slack variables to make the objective function strongly convex, and find the solution of the proposed FLTPMSVM by solving simple linearly convergent iterative schemes instead of a pair of quadratic programming problems, as in the case of SVM, TWSVM, FTSVM and TPMSVM. No external toolbox is required for FLTPMSVM. Numerical experiments are performed on artificial as well as well-known real-world datasets, which show that our proposed FLTPMSVM has better generalization performance and lower training cost in comparison to the support vector machine, twin support vector machine, fuzzy twin support vector machine and twin parametric-margin support vector machine.
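One common way to obtain the fuzzy membership values this abstract refers to is a distance-to-class-centre heuristic, often used in FSVM variants: samples far from their class centre (likely outliers) get small weights. This is a minimal sketch of that general heuristic, not necessarily the exact weighting used in FLTPMSVM.

```python
import math

def fuzzy_memberships(points, delta=1e-3):
    """Membership in (0, 1] per sample: distance from the class centre is
    normalised by the largest distance, so far-away samples (likely outliers)
    receive small weights and contribute less to training."""
    centre = [sum(coord) / len(points) for coord in zip(*points)]
    dists = [math.dist(p, centre) for p in points]
    dmax = max(dists)
    return [1 - d / (dmax + delta) for d in dists]

# The lone far-away point receives the smallest membership weight.
weights = fuzzy_memberships([(0, 0), (0.1, 0), (0, 0.1), (5, 5)])
```

In the SVM objective, each slack variable is then scaled by its sample's membership, so misclassifying a suspected outlier is penalised less than misclassifying a typical sample.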
Han, J, Wan, S, Lu, J & Zhang, G 2017, 'Tri-level multi-follower decision-making in a partial-cooperative situation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Tri-level decision-making addresses compromises between interactive decision entities that are distributed throughout a three-level hierarchy. These decision entities are respectively termed the top-level leader, the middle-level follower and the bottom-level follower. This paper considers a tri-level multi-follower (TLMF) decision problem where both cooperative and uncooperative relationships coexist between multiple followers at the same level. In this situation, followers share some decision variables with their counterparts and also control individual decision variables to achieve their respective goals; this is also known as partial-cooperative TLMF decision-making. To support this category of decision-making, this paper first presents a linear model to characterize the partial-cooperative TLMF decision-making process. It then develops a vertex enumeration algorithm to obtain a solution to the resulting model. Lastly, we apply the proposed TLMF decision techniques to handle an inventory management problem in applications.
Haque, MN, Mathieson, L & Moscato, P 2017, 'A memetic algorithm for community detection by maximising the connected cohesion', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, Hawaii, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Community detection is an exciting field of research which has attracted the interest of many researchers during the last decade. While many algorithms and heuristics have been proposed to scale existing approaches, a relatively small number of studies have explored different measures of the quality of the detected community. Recently, a new score called 'cohesion' was introduced in the computing literature. The cohesion score is based on comparing the number of triangles in a given group of vertices to the number of triangles only partly in that group. In this contribution, we propose a memetic algorithm that aims to find a subset of the vertices of an undirected graph that maximizes the cohesion score. The associated combinatorial optimisation problem is known to be NP-hard, and we also prove it to be W[1]-hard when parameterized by the score. We use a local search individual improvement heuristic to expand the putative solution. We then remove all vertices from the group which are not part of any triangle, and expand the neighbourhood by adding triangles which have at least two nodes already in the group. Finally, we compute the maximum connected component of this group. The highest quality solutions of the memetic algorithm have been obtained for four real-world network scenarios, and we compare our results with ground-truth information about the graphs. We also compare the results to those obtained with eight other community detection algorithms via interrater agreement measures. Our results give a new lower bound on the parameterized complexity of this problem and give novel insights on its potential usefulness as a new natural score for community detection.
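On the abstract's description, the cohesion of a vertex group compares triangles fully inside the group with triangles that are only partly inside it. Below is a minimal sketch of one plausible reading of that ratio (adjacency as a dict of neighbour sets); the paper's exact definition and normalisation may differ.

```python
from itertools import combinations

def cohesion(adj, group):
    """Ratio of triangles fully inside `group` to all triangles touching it:
    a straddling triangle is an in-group edge plus a common neighbour
    outside the group."""
    s = set(group)
    inside = sum(1 for a, b, c in combinations(sorted(s), 3)
                 if b in adj[a] and c in adj[a] and c in adj[b])
    straddling = sum(len((adj[a] & adj[b]) - s)
                     for a, b in combinations(sorted(s), 2) if b in adj[a])
    total = inside + straddling
    return inside / total if total else 0.0

# A 4-clique {a, b, c, d} with one outside vertex e closing a boundary
# triangle (a, b, e): 4 triangles inside, 1 straddling.
adj = {'a': {'b', 'c', 'd', 'e'}, 'b': {'a', 'c', 'd', 'e'},
       'c': {'a', 'b', 'd'}, 'd': {'a', 'b', 'c'}, 'e': {'a', 'b'}}
score = cohesion(adj, ['a', 'b', 'c', 'd'])
```

A tight community keeps most of its triangles internal, so its score approaches 1, which is why the memetic algorithm's pruning steps (dropping triangle-free vertices, growing along shared triangles) make sense as local moves.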
Henderson, H & Leong, TW 2017, 'Lessons learned', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, pp. 533-537.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. This paper presents a study on user difficulties with parking meters. Using known Human-Computer Interaction (HCI) concepts as a guide, we explore the reasons for these difficulties and propose recommendations for designers of parking meters to improve the usability and experience. This paper also considers the applicability of these learnings to similar technologies that are of interest to HCI.
Herron, D, Moncur, W & van den Hoven, E 2017, 'Digital Decoupling and Disentangling', Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17: Designing Interactive Systems Conference 2017, ACM, Edinburgh, United Kingdom, pp. 1175-1185.
View/Download from: Publisher's site
View description>>
Romantic relationships are often facilitated through digital technologies, such as social networking sites and communication services. They are also facilitated through "digital possessions", such as messages sent to mobile devices and photos shared through social media. When individuals break up, digitally disconnecting can be facilitated by using those digital technologies and managing or curating these digital possessions. This research explores the break-up stories of 13 individuals aged between 18 and 52. The aim of this work is to inform the design of systems focused on supporting individuals to decouple and disentangle digitally in the wake of a break-up. Four areas of interest emerged from the data: communication, using digital possessions, managing digital possessions, and experiences of technology. Opportunities for design were identified in decoupling and disentangling, and designing around guilt.
Hu, L, Cao, L, Wang, S, Xu, G, Cao, J & Gu, Z 2017, 'Diversifying Personalized Recommendation with User-session Context', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 1858-1864.
View/Download from: Publisher's site
View description>>
Recommender systems (RS) have become an integral part of our daily life. However, most current RS repeatedly recommend items to users with similar profiles. We argue that recommendation should be diversified by leveraging session contexts with personalized user profiles. For this, current session-based RS (SBRS) often assume a rigidly ordered sequence over data, which does not fit many real-world cases. Moreover, personalization is often omitted in current SBRS. Accordingly, a personalized SBRS over relaxedly ordered user-session contexts is more pragmatic. In doing so, deep-structured models tend to be too complex to serve online SBRS owing to the large number of users and items. Therefore, we design an efficient SBRS with shallow wide-in-wide-out networks, inspired by successful experience in modern language modelling. The experiments on a real-world e-commerce dataset show the superiority of our model over state-of-the-art methods.
Huan, H, Wei, Z, Liang, L & Yang, L 2017, 'Collaborative Filtering Recommendation Model based on Convolutional Denoising Auto Encoder', Proceedings of the 12th Chinese Conference on Computer Supported Cooperative Work and Social Computing, ChineseCSCW '17: Chinese Conference on Computer Supported Cooperative Work and Social Computing, ACM, pp. 64-71.
View/Download from: Publisher's site
Huang, C, Yao, L, Wang, X, Benatallah, B & Sheng, QZ 2017, 'Expert as a Service: Software Expert Recommendation via Knowledge Domain Embeddings in Stack Overflow', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 317-324.
View/Download from: Publisher's site
Hung, Y-C, Wang, Y-K, Prasad, M & Lin, C-T 2017, 'Brain dynamic states analysis based on 3D convolutional neural network', 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Banff, AB, Canada, pp. 222-227.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Drowsy driving is a major factor in traffic accidents. Monitoring changes in brain signals provides an effective and direct way for drowsiness detection. A 3D convolutional neural network (3D CNN)-based forecasting system is proposed to monitor electroencephalography (EEG) signals and predict fatigue level during driving. Limited weight sharing and channel-wise convolution were applied to extract the significant phenomena in various frequency bands of brain signals and the spatial information of EEG channel locations, respectively. The proposed 3D CNN with limited weight sharing and channel-wise convolution has been demonstrated to predict the reaction time (RT) of driving with low root mean square error (RMSE) through the brain dynamics. The proposed approach outperforms state-of-the-art algorithms, such as traditional CNN, Neural Network (NN), and support vector regression (SVR). Compared with traditional CNN and NN, the RMSE of 3D CNN-based RT prediction has been improved by 9.5% (RMSE from 0.6322 to 0.5720) and 8% (RMSE from 0.6217 to 0.5720), respectively. We envision that this study might open a new branch between deep learning applications in neuro-cognitive analysis and real-world applications.
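The quoted percentage gains follow directly from the RMSE figures in the abstract; a quick arithmetic check (a sketch using only the numbers reported above):

```python
def relative_improvement(baseline: float, improved: float) -> float:
    """Relative RMSE reduction, as a percentage of the baseline."""
    return (baseline - improved) / baseline * 100

# RMSE figures quoted in the abstract
print(round(relative_improvement(0.6322, 0.5720), 1))  # 3D CNN vs traditional CNN -> 9.5
print(round(relative_improvement(0.6217, 0.5720), 1))  # 3D CNN vs NN -> 8.0
```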
Huo, H, Liu, X, Zheng, D, Wu, Z, Yu, S & Liu, L 1970, 'Collaborative Filtering Fusing Label Features Based on SDAE', Springer International Publishing, pp. 223-236.
View/Download from: Publisher's site
Hussain, W, Hussain, FK & Hussain, OK 1970, 'Risk Management Framework to Avoid SLA Violation in Cloud from a Provider’s Perspective', ADVANCES ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING, Advances on P2P, Parallel, Grid, Cloud and Internet Computing, Springer International Publishing, Soonchunhyang Univ, Asan, SOUTH KOREA, pp. 233-241.
View/Download from: Publisher's site
View description>>
Managing risk is an important issue for a service provider seeking to avoid SLA violation in any business. The elastic nature of cloud allows consumers to use a number of resources depending on their business needs. Therefore, it is crucial for service providers, particularly SMEs, to first form viable SLAs and then manage them. When a provider and a consumer execute an agreed SLA, the next step is monitoring and, if a violation is predicted, appropriate action should be taken to manage that risk. In this paper we propose a Risk Management Framework to avoid SLA violation (RMF-SLA) that assists cloud service providers to manage the risk of service violation. Our framework uses a Fuzzy Inference System (FIS) and considers inputs such as the reliability of a consumer, the provider's attitude towards risk, and the predicted trajectory of consumer behavior to calculate the amount of risk and the appropriate action to manage it. The framework will help small-to-medium-sized service providers manage the risk of service violation in an optimal way.
Inan, DI & Beydoun, G 2017, 'Facilitating disaster knowledge management with agent-based modelling', Proceedings of the 21st Pacific Asia Conference on Information Systems: Societal Transformation Through IS/IT, PACIS 2017.
View description>>
In developed countries, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DISPLANs) that can be accessed as needs arise. Nevertheless, accessing the appropriate plan in a timely manner and sharing activities between plans often requires domain knowledge and intimate knowledge of the plans in the first place. In this paper, we introduce an Agent-Based (AB) knowledge analysis framework to convert DISPLANs into a collection of knowledge units that can be stored in a unified repository. The repository of DM actions then enables the mixing and matching of knowledge between different plans. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF) to allow free-flow access to the knowledge across the layers. We use the flood DISPLAN of the SES (State Emergency Service), an authoritative DM agency in the state of NSW (New South Wales), Australia, to illustrate and validate the developed framework.
Inan, DI, Beydoun, G & Opper, S 1970, 'Customising Agent Based Analysis Towards Analysis of Disaster Management Knowledge', Australasian Conference on Information Systems, University of Wollongong, Wollongong NSW, pp. 1-12.
View description>>
In developed countries such as Australia, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DISPLANs), and supporting doctrine and processes that are used to prepare organisations and communities for disasters. They are maintained on an ongoing cyclical basis and form a key information source for community education, engagement and awareness programmes in the preparation for and mitigation of disasters. DISPLANs, generally in semi-structured text document format, are then accessed and activated during the response and recovery to incidents to coordinate emergency service and community safety actions. However, accessing the appropriate plan and the specific knowledge within the text document from across its conceptual areas in a timely manner and sharing activities between stakeholders requires intimate domain knowledge of the plan contents and its development. This paper describes progress on an ongoing project with NSW State Emergency Service (NSW SES) to convert DISPLANs into a collection of knowledge units that can be stored in a unified repository, with the goal to form the basis of a future knowledge-sharing capability. All Australian emergency services covering a wide range of hazards develop DISPLANs of various structure and intent; in general the plans are created as instances of a template, for example those which are developed centrally by the NSW and Victorian SES State planning policies. In this paper, we illustrate how, by using selected templates as part of an elaborate agent-based process, we can apply agent-oriented analysis more efficiently to convert extant DISPLANs into a centralised repository. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). The work is illustrated using DISPLANs along the flood-prone Murrumbidgee River in central NSW.
Inibhunu, C, Schauer, A, Redwood, O, Clifford, P & McGregor, C 2017, 'Predicting hospital admissions and emergency room visits using remote home monitoring data', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, pp. 282-285.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The costs of lengthy hospital admissions (HA) and multiple emergency room visits (ER visits) from patients with conditions such as heart failure (HF) and chronic obstructive pulmonary disease (COPD) can place a significant burden on healthcare systems. Understanding the various factors contributing to hospitalization and ER visits could aid cost-effective management in the delivery of services, leading to potential improvements in quality of life for patients. This can be facilitated by collecting data using remote patient monitoring (RPM) services and using analytics to discover important factors about patients. This paper presents our research that utilizes predictive modeling to determine key factors that are significant determinants of hospitalization and multiple ER visits. The results show that gender, past medical history and vital status are key factors in hospital admissions and ER visits. Additionally, when a factor indicating the period before, during and after an ER visit was included, the resulting model showed a very high likelihood ratio and improved p values on all vital status variables. Our results show that more research is needed to fully understand the temporal patterns among variables during hospitalization or an ER visit.
Inibhunu, C, Schauer, A, Redwood, O, Clifford, P & McGregor, C 2017, 'The impact of gender, medical history and vital status on emergency visits and hospital admissions: A remote patient monitoring case study', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 278-281.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Remote patient monitoring (RPM) is considered to have potential to improve the quality of life of patients diagnosed with cardiac conditions such as heart failure (HF) and chronic obstructive pulmonary disease (COPD). Remote collection and analysis of patients' data could aid effective decision making on the care needed by monitored patients. This could lead to reductions in healthcare costs as well as improved outcomes for patients. As a component of our predictive analytics research, this paper presents the results of a remote patient monitoring study of patients from the Cardiac Clinic of Southlake Regional Health Centre who were referred to WeCare for home-based monitoring. Results indicate statistically significant evidence of the impact of gender, medical history and vital status as risk factors for subsequent hospitalization and multiple emergency room visits.
Ivanyos, G, Qiao, Y & Venkata Subrahmanyam, K 2017, 'Constructive non-commutative rank computation is in deterministic polynomial time', Leibniz International Proceedings in Informatics (LIPIcs), Innovations in Theoretical Computer Science Conference, Schloss Dagstuhl, Berkeley, CA, USA, pp. 1-18.
View/Download from: Publisher's site
View description>>
Let B be a linear space of matrices over a field F spanned by n × n matrices B1, . . . , Bm. The non-commutative rank of B is the minimum r ∈ N such that there exists a subspace U ≤ F^n satisfying dim(U) − dim(B(U)) ≥ n − r, where B(U) := span(∪_{i∈[m]} B_i(U)). Computing the non-commutative rank generalizes some well-known problems, including the bipartite graph maximum matching problem and the linear matroid intersection problem. In this paper we give a deterministic polynomial-time algorithm to compute the non-commutative rank over any field F. Prior to our work, such an algorithm was only known over the rational number field Q, a result due to Garg et al. [20]. Our algorithm is constructive and produces a witness certifying the non-commutative rank, a feature that is missing in the algorithm from [20]. Our result is built on techniques which we developed in a previous paper [24], with a new reduction procedure that helps to keep the blow-up parameter small. There are two ways to realize this reduction. The first involves constructivizing a key result of Derksen and Makam [12], which they developed in order to prove that the null cone of matrix semi-invariants is cut out by generators whose degree is polynomial in the size of the matrices involved. We also give a second, simpler method to achieve this. This gives another proof of the polynomial upper bound on the degree of the generators cutting out the null cone of matrix semi-invariants. Both the invariant-theoretic result and the algorithmic result rely crucially on the regularity lemma proved in [24]. In this paper we improve on the constructive version of the regularity lemma from [24] by removing a technical coprime condition that was assumed there.
Jayan Chirayath Kurian, J, Watkins, J & Macallum, K 1970, 'User-generated content on the Facebook page of Emergency Management Organizations: Perspectives of Emergency Management Administrators', Sydney.
Jia, Z, Xie, G, Gao, J & Yu, S 1970, 'Bike-Sharing System: A Big-Data Perspective', SMART COMPUTING AND COMMUNICATION, SMARTCOM 2016, 1st International Conference on Smart Computing and Communication (SmartCom), Springer International Publishing, Shenzhen, PEOPLES R CHINA, pp. 548-557.
View/Download from: Publisher's site
Jiang, J, Chaczko, Z, Al-Doghman, F & Narantaka, W 2017, 'New LQR Protocols with Intrusion Detection Schemes for IOT Security', 2017 25th International Conference on Systems Engineering (ICSEng), 2017 25th International Conference on Systems Engineering (ICSEng), IEEE, Las Vegas, NV, USA, pp. 466-474.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Link quality protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are vulnerable to malicious attacks that can exploit them. A malicious node may share false information with its neighboring sensor nodes to affect the computations of their estimation. Consequently, a malicious node may behave maliciously such that its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. In order to accomplish this task, the MINTROUTE and CTP routing protocols are selected and updated with intrusion detection schemes (IDSs) for further investigation with other factors. It is shown that these two routing protocols possess inherent susceptibilities that are capable of interrupting the link quality calculations. Malicious nodes that abuse such vulnerabilities can be registered through operational detection mechanisms. The overall performance of the new LQR protocol with IDS features is evaluated, validated and represented via detection rates and false alarm rates.
Jiang, J, Gao, L, Jin, J, Luan, TH, Yu, S, Yuan, D, Xiang, Y & Yuan, D 2017, 'Towards an Analysis of Traffic Shaping and Policing in Fog Networks Using Stochastic Fluid Models', Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2017: Computing, Networking and Services, ACM, Melbourne, AUSTRALIA, pp. 196-204.
View/Download from: Publisher's site
Jiang, J, Gao, L, Yu, S, Jin, J & Yuan, D 2017, 'Preferential attachment and the spreading influence of users in online social networks', 2017 IEEE/CIC International Conference on Communications in China (ICCC), 2017 IEEE/CIC International Conference on Communications in China (ICCC), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Identifying influential users who lead to large-scale spreading in online social networks (OSNs) is of theoretical and practical significance. Many methods have been proposed to measure the influence of users, but little literature studies the interplay of the global influence of users and their local connections. In this paper, we particularly focus on the assortative and disassortative preference. The results in this paper can address three main issues in this area: (i) What is the difference in spreading influence between ordinary and core users? (ii) What is the distribution of influential users? (iii) How do they evolve from a fresh user to a powerful influencer? The k-shell hierarchy is adopted to quantify the global spreading influence of users. Firstly, we find that the global influence varies dramatically among users in disassortative OSNs, but the variation is relatively small in assortative OSNs. Hence, ordinary users also possess high influence in assortative OSNs. Secondly, we empirically and theoretically prove that the global influence of users follows a power-law distribution. Moreover, many users concentrate on the core in assortative OSNs, but few users locate at the core in disassortative OSNs. Thirdly, it is found that users in assortative OSNs gain influence over time and gradually upgrade to core members. In disassortative OSNs, the core users gain much influence along with network growth but other users scatter among all hierarchical levels. The results are verified on real OSN datasets and the state-of-the-art OSN model.
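The k-shell hierarchy the authors use to quantify global spreading influence is the standard core decomposition; a minimal pure-Python peeling sketch (the toy graph and function name are illustrative, not from the paper's OSN datasets):

```python
def k_shells(adj):
    """k-shell decomposition by iterative peeling.

    adj: dict node -> set of neighbours (undirected graph).
    Returns dict node -> shell index; higher shells sit nearer the network core."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    shell, k = {}, 0
    while degree:
        k = max(k, min(degree.values()))
        peel = [u for u, d in degree.items() if d <= k]
        while peel:
            u = peel.pop()
            if u in shell:
                continue  # already peeled via a cascade
            shell[u] = k
            del degree[u]
            for v in adj.pop(u):
                if v in degree:
                    adj[v].discard(u)
                    degree[v] -= 1
                    if degree[v] <= k:
                        peel.append(v)
    return shell

# Triangle a-b-c with a pendant node d: the triangle is the 2-shell, d the 1-shell.
print(k_shells({"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}))
```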
Jiang, J, Gao, L, Yu, S, Jin, J & Yuan, D 2017, 'Preferential attachment and the spreading influence of users in online social networks', 2017 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), IEEE/CIC International Conference on Communications in China (ICCC), IEEE, PEOPLES R CHINA, Qingdao, pp. 690-695.
Jiang, J, Wen, S, Yu, S, Xiang, Y, Zhou, W & Hassan, H 1970, 'The structure of communities in scale‐free networks', Concurrency and Computation: Practice and Experience, Wiley, pp. e4040-e4040.
View/Download from: Publisher's site
View description>>
Scale-free networks are often used to model a wide range of real-world networks, such as social, technological, and biological networks. Understanding the structure of scale-free networks evolves into a big data problem for business, management, and protein function prediction. In the past decade, there has been a surge of interest in exploring the properties of scale-free networks. Two interesting properties have attracted much attention: assortative mixing and community structure. However, these two properties have been studied separately in either theoretical models or real-world networks. In this paper, we show that the structural features of communities are highly related to the assortative mixing in scale-free networks. According to the value of the assortativity coefficient, scale-free networks can be categorized into assortative, disassortative, and neutral networks, respectively. We systematically analyze the community structure in these three types of scale-free networks through six metrics: node embeddedness, link density, hub dominance, community compactness, the distribution of community sizes, and the presence of hierarchical communities. We find that the three types of scale-free networks exhibit significant differences in these six metrics of community structure. First, assortative networks present high embeddedness, meaning that many links lie within communities but few lie between them. This leads to the high link density of communities. Second, disassortative networks exhibit large hubs in communities, which results in highly compact communities in which nodes can reach each other via short paths. Third, in neutral networks, a big portion of links act as community bridges, so they display sparse and less compact communities. In addition, we find that (dis)assortative networks show hierarchical community structure with power-law-distributed community sizes, while neutral ...
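The assortativity coefficient used above to categorise networks as assortative, disassortative, or neutral is Newman's degree correlation; a small pure-Python sketch (not the authors' code; undefined for regular graphs, where the degree variance is zero):

```python
from statistics import mean

def degree_assortativity(edges):
    """Newman's degree assortativity: Pearson correlation of the degrees at the
    two ends of each edge, counting every undirected edge in both directions.
    Positive -> assortative, negative -> disassortative, near zero -> neutral."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    mx = mean(xs)  # equals mean(ys) by symmetry
    cov = sum((x - mx) * (y - mx) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)  # variance of xs == variance of ys
    return cov / var

# A star is maximally disassortative: the high-degree hub links only to leaves.
print(degree_assortativity([("hub", "a"), ("hub", "b"), ("hub", "c")]))  # -1.0
```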
Jiang, Z, Xu, C, Guan, J, Zhang, H & Yu, S 2017, 'Loss-aware adaptive scalable transmission in wireless high-speed railway networks', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Widespread deployment of High-Speed Railway (HSR) in recent years brings strong demand for high-quality onboard Internet services. However, the wireless link between the train and the Base Station (BS) suffers from numerous problems, including frequent handover, severe Doppler shift, and great penetration loss. It is still a challenge to provide HSR passengers with high-quality Internet services. In this paper, the Packet Loss Rate (PLR) of HSR networks is measured along the Beijing-Shanghai railway line. Measurement results indicate that the PLR remains at a high level for long periods and changes frequently over a large interval. To address this problem, we focus on the frame loss problem of the train-to-BS wireless link and propose a novel Loss-Aware Adaptive Scalable Transmission mechanism (LAAST) for HSR networks. In LAAST, a variable number of frame copies is transmitted according to the Frame Loss Probability (FLP) to improve the scalability and efficiency of transmission over the train-to-BS link. The optimal relationship between the frame duplication number and the FLP is derived through a nonlinear programming model. Simulations demonstrate the effectiveness and fitness of LAAST for HSR networks.
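The paper derives the optimal frame-duplication number from the FLP via a nonlinear programming model; under the simpler (assumed) model of independent losses, the intuition is that k copies all fail with probability FLP^k:

```python
import math

def copies_needed(flp: float, target_loss: float) -> int:
    """Smallest k such that flp**k <= target_loss, assuming independent losses.
    (A simplification; the paper's optimal relation comes from nonlinear programming.)"""
    if not 0 < flp < 1:
        raise ValueError("FLP must lie in (0, 1)")
    return max(1, math.ceil(math.log(target_loss) / math.log(flp)))

# e.g. a measured FLP of 0.3 and a 1% residual-loss target
print(copies_needed(0.3, 0.01))  # 4 copies, since 0.3**4 = 0.0081 <= 0.01
```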
Jiao, S, Zhang, X, Yu, S, Song, X & Xu, Z 2017, 'Joint Virtual Network Function Selection and Traffic Steering in Telecom Networks', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, Singapore, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Following the trend of Network Function Virtualization (NFV), telecom operators widely deploy diverse types of Virtual Network Functions (VNFs), such as firewalls, load balancers, and proxies, on specified software-defined middleboxes at various network locations. Traffic needs to go through the desired VNFs according to pre-defined policies, which forms a Service Function Chain (SFC). However, how to maximize traffic throughput with end-to-end latency guaranteed when steering SFC requests is still an open problem. To this end, we study the joint optimization of VNF selection and traffic steering in telecom networks. Firstly, we formulate this problem as an Integer Linear Programming (ILP) model. Then, we design an efficient heuristic algorithm based on dynamic programming. Extensive simulation results show that, compared with previous algorithms, our algorithm can increase the throughput of SFC requests by 30.86% while guaranteeing end-to-end latency requirements.
Khalifa, NH, Nguyen, QV, Simoff, S & Catchpoole, D 2017, 'Interaction Visualisation of Complex Genomic Data with Game Engines', 2017 21st International Conference Information Visualisation (IV), 2017 21st International Conference on Information Visualisation (IV), IEEE, pp. 133-139.
View/Download from: Publisher's site
View description>>
Graphic game engines have introduced ever more advanced technologies to improve the rendering, image quality, ergonomics, and user experience of their creations by providing user-friendly yet powerful tools to design and develop new games. There are thousands of genes in the human genome that contain information about specific individual patients and the biological mechanisms of their diseases. The complexity of biomedical and genomic data usually requires effective visual information processing and analytics. Unfortunately, available visualisation techniques for this domain are limited, many in static form. The open study questions here are as follows: Are there lessons to be learnt from these video games? Or could game technology help us explore new graphic ideas accessible to non-specialists? This paper presents a visual analytics model that enables the analysis of large and complex genomic data using Unity3D game technology. This includes an interactive visualisation providing an overview of the patient cohort with a detailed view of individual genes. We illustrate the effectiveness of our approach in guiding effective treatment decisions in the cohort through datasets from the childhood cancer B-Cell acute lymphoblastic leukaemia.
Kolamunna, H, Chauhan, J, Hu, Y, Thilakarathna, K, Perino, D, Makaroff, D & Seneviratne, A 2017, 'Are Wearables Ready for HTTPS? On the Potential of Direct Secure Communication on Wearables', 2017 IEEE 42nd Conference on Local Computer Networks (LCN), 2017 IEEE 42nd Conference on Local Computer Networks (LCN), IEEE, pp. 321-329.
View/Download from: Publisher's site
Kridalukmana, R, Lu, HY & Naderpour, M 2017, 'An object oriented Bayesian network approach for unsafe driving maneuvers prevention system', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. As the main contributor to traffic accidents, unsafe driving maneuvers have attracted attention from automobile industries. Although driving feedback systems have been developed in an effort to reduce dangerous driving, they do little to develop drivers' awareness and are therefore not preventive in nature. To cover this weakness, this paper presents an approach to develop drivers' awareness in order to prevent dangerous driving maneuvers. The approach uses an Object-Oriented Bayesian Network to model hazardous situations. The resulting model can truthfully reflect a driving environment based upon situation analysis, data generated from sensors, and maneuver detectors. In addition, it alerts drivers when a driving situation with a high probability of causing an unsafe maneuver is detected. This model is then used to design a system which can raise drivers' awareness and prevent unsafe driving maneuvers.
Kutay, CM & Lawrence, C 1970, 'Enduring Engineering for our Water Resources', Putting Water to Work: Australian Engineering Heritage, Mildura.
Kutay, CM & Lawrence, C 2017, 'Language Located', Information Technologies for Indigenous Communities, Melbourne September 2017.
Lawrence, C, Leong, TW, Gay, V, Woods, A & Wadley, G 2017, '#thismymob', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, pp. 646-647.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. We propose to hold a one-day workshop on developing projects relating to #thismymob: Digital Land Rights and Reconnecting Indigenous Communities at OzCHI 2017, Brisbane. See http://www.arc.gov.au/newsmedia/news/thismymob-digital-land-rights-and-reconnecting-indigenous-communities.
Li, B, Xiong, J, Liu, B, Gui, L, Qiu, M & Shi, Z 2017, 'On Services Pushing and Caching in High-speed Train by Using Converged Broadcasting and Cellular Networks', 2017 IEEE INTERNATIONAL SYMPOSIUM ON BROADBAND MULTIMEDIA SYSTEMS AND BROADCASTING (BMSB), 12th IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Cagliari, ITALY, pp. 457-463.
Li, B, Xiong, J, Liu, B, Gui, L, Qiu, M & Shi, Z 2017, 'On services pushing and caching in high-speed train by using converged broadcasting and cellular networks', 2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper proposes a services pushing and caching algorithm for high-speed trains (HST) using converged wireless broadcasting and cellular networks (CWBCN). Services pushing and caching on the HST is an efficient way to improve the capacity of the network, and it can also lead to better user experience. In the proposed services pushing and caching model, the most popular services are delivered and cached on the vehicle relay station (VRS) of the train ahead of the departure time. Then, the most popular services are broadcast and cached on the User Equipments (UEs) after all the passengers are on the train; the less popular services are transmitted to users in p2p mode by the relayed cellular network on the train. In order to maximize the network capacity in limited time slots, we transform the issue into the 0-1 Knapsack problem. A dynamic programming algorithm is adopted to solve it in polynomial time. As passengers may get on or off the train when pushing the most popular services, an information retransfer algorithm is also proposed when more intermediate stations are considered. Simulations show that the proposed algorithms can efficiently improve the capacity of the converged network.
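The 0-1 Knapsack reduction mentioned above can be solved by the textbook dynamic program; a minimal sketch (mapping services to weights/values this way is illustrative, not the paper's exact model):

```python
def knapsack_01(items, capacity):
    """0-1 knapsack by dynamic programming.

    items: list of (weight, value) pairs; e.g. weight = time slots a service
    needs, value = its popularity. Returns (best_value, chosen_indices)."""
    n = len(items)
    # dp[i][c] = best value using the first i items within capacity c
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (w, v) in enumerate(items, start=1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - w] + v)
    # backtrack to recover the chosen set
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= items[i - 1][0]
    return dp[n][capacity], sorted(chosen)

print(knapsack_01([(2, 3), (3, 4), (4, 5), (5, 8)], 7))  # (11, [0, 3])
```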
Li, H, Zhou, H, Quan, W, Feng, B, Zhang, H & Yu, S 2017, 'HCaching: High-Speed Caching for Information-Centric Networking', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, Singapore, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Information-Centric Networking (ICN) introduces ubiquitous in-network caching to reduce network load and improve Quality of Service (QoS). This peculiarity requires high-speed caching technologies to support wire-speed, large-volume data forwarding, which brings new challenges to existing routers. To promote practical ICN deployment, much emerging research focuses on how to accelerate caching. In this paper, we propose a novel two-layer High-speed Caching scheme (HCaching), which leverages the characteristics of both SRAM and DRAM to accelerate caching for ICN routers. In particular, using DRAM as a primary cache and SRAM as a secondary one, HCaching is able to: (i) reduce excessive utilization of high-cost SRAM, (ii) speed up access to DRAM, and (iii) improve total network throughput. We implement and analyze HCaching performance by comparing it with two other state-of-the-art solutions. The results show that HCaching achieves throughput 3-10 times higher than the compared solutions.
Li, T, Zhou, H, Luo, H, Quan, W & Yu, S 2017, 'Modeling software defined satellite networks using queueing theory', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Existing satellite communication has low efficiency due to the inherent defects of the traditional design, i.e., coarse-grained control and long configuration delay. In previous work, researchers developed Software Defined Satellite Networks (SDSN). We reconsidered many characteristics of satellite links and deployed SDSN in a prototype by leveraging Delay Tolerant Networking (DTN) and OpenFlow. However, it is necessary to develop a theoretical tool for this new network architecture to evaluate its performance. In this paper, we propose such an analytical model for SDSN using queueing theory. In particular, Jackson's theorem is adopted to model the communication between a controller and forwarding nodes, and the store-and-forward process. Comparisons between the numerical and experimental results indicate that the proposed model is able to accurately evaluate the performance of SDSN, and will provide great benefits for further related research.
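Jackson's theorem lets each node of an open queueing network be analysed as an independent M/M/1 queue once the traffic equations are solved; a minimal sketch of both building blocks (the rates and routing matrix below are illustrative, not the SDSN model's parameters):

```python
def mm1_metrics(lam: float, mu: float):
    """Steady-state M/M/1 metrics: the per-node building block that Jackson's
    theorem yields for open queueing networks."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu  # utilisation
    return {
        "utilisation": rho,
        "mean_in_system": rho / (1 - rho),  # L
        "mean_sojourn": 1 / (mu - lam),     # W, with L = lam * W (Little's law)
    }

def jackson_arrival_rates(gamma, R, iters=1000):
    """Solve the traffic equations lambda_j = gamma_j + sum_i lambda_i * R[i][j]
    by fixed-point iteration (converges for open networks, where traffic exits).
    gamma: external arrival rates; R: routing matrix (R[i][j] = prob. i -> j)."""
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[j] + sum(lam[i] * R[i][j] for i in range(len(lam)))
               for j in range(len(lam))]
    return lam

# Two nodes: external arrivals only at node 0; half of node 0's output goes to node 1.
lam = jackson_arrival_rates([1.0, 0.0], [[0.0, 0.5], [0.0, 0.0]])
print([round(x, 3) for x in lam])                   # [1.0, 0.5]
print(mm1_metrics(lam[0], 2.0)["mean_in_system"])   # 1.0
```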
Li, Y & Qiao, Y 2017, 'Linear Algebraic Analogues of the Graph Isomorphism Problem and the Erdős-Rényi Model', 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, Berkeley, CA, USA, pp. 463-474.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. A classical difficult isomorphism testing problem is to test isomorphism of p-groups of class 2 and exponent p in time polynomial in the group order. It is known that this problem can be reduced to solving the alternating matrix space isometry problem over a finite field in time polynomial in the underlying vector space size. We propose a venue of attack for the latter problem by viewing it as a linear algebraic analogue of the graph isomorphism problem. This viewpoint leads us to explore the possibility of transferring techniques for graph isomorphism to this long-believed bottleneck case of group isomorphism. In the 1970s, Babai, Erdős, and Selkow presented the first average-case efficient graph isomorphism testing algorithm (SIAM J Computing, 1980). Inspired by that algorithm, we devise an average-case efficient algorithm for the alternating matrix space isometry problem over a key range of parameters, in a random model of alternating matrix spaces in the vein of the Erdős-Rényi model of random graphs. For this, we develop a linear algebraic analogue of the classical individualisation technique, a technique belonging to a set of combinatorial techniques that has been critical for progress on the worst-case time complexity of graph isomorphism, but was missing in the group isomorphism context. This algorithm also enables us to improve Higman's 57-year-old lower bound on the number of p-groups (Proc. of the LMS, 1960). We finally show that Luks's dynamic programming technique for graph isomorphism (STOC 1999) can be adapted to slightly improve the worst-case time complexity of the alternating matrix space isometry problem in a certain range of parameters. Most notable progress on the worst-case time complexity of graph isomorphism, including Babai's recent breakthrough (STOC 2016) and Babai and Luks's previous record (STOC 1983), has relied on both group-theoretic and combinatorial techniques. By developing a linear algebraic analogue of the individu...
Liu, A, Song, Y, Zhang, G & Lu, J 2017, 'Regional Concept Drift Detection and Density Synchronized Drift Adaptation', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 2280-2286.
View/Download from: Publisher's site
View description>>
In data stream mining, the emergence of new patterns or a pattern ceasing to exist is called concept drift. Concept drift makes the learning process complicated because of the inconsistency between existing data and upcoming data. Since concept drift was first proposed, numerous articles have been published to address this issue in terms of distribution analysis. However, most distribution-based drift detection methods assume that a drift happens at an exact time point, and the data that arrived before that time point is considered unimportant. Thus, if a drift occurs only in a small region of the entire feature space, the other non-drifted regions may also be suspended, thereby reducing the learning efficiency of models. To retrieve non-drifted information from suspended historical data, we propose a local drift degree (LDD) measurement that can continuously monitor regional density changes. Instead of suspending all historical data after a drift, we synchronize the regional density discrepancies according to LDD. Experimental evaluations on three public data sets show that our concept drift adaptation algorithm improves accuracy compared to other methods.
Liu, A, Zhang, G & Lu, J 2017, 'Fuzzy time windowing for gradual concept drift adaptation', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The aim of machine learning is to find hidden insights in historical data, and then apply them to forecast future data or trends. Machine learning algorithms optimize learning models for the lowest error rate based on the assumption that the historical data and the data to be predicted conform to the same knowledge pattern (data distribution). However, if the historical data is not enough, or the knowledge pattern keeps changing (data uncertainty), this assumption becomes invalid. In data stream mining, this phenomenon of knowledge pattern change is called concept drift. To address this issue, we propose a novel fuzzy windowing concept drift adaptation (FW-DA) method. Compared to conventional windowing-based drift adaptation algorithms, FW-DA achieves higher accuracy by allowing the sliding windows to keep an overlapping period so that the data instances belonging to different concepts can be determined more precisely. In addition, FW-DA statistically guarantees that the upcoming data conforms to the inferred knowledge pattern with a certain confidence level. To evaluate FW-DA, four experiments were conducted using both synthetic and real-world data sets. The experimental results show that FW-DA outperforms other windowing-based methods, including state-of-the-art drift adaptation methods.
Liu, B, Zhou, W, Yu, S, Wang, K, Wang, Y, Xiang, Y & Li, J 2017, 'Home Location Protection in Mobile Social Networks: A Community Based Method (Short Paper)', INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2017, 13th International Conference on Information Security Practice and Experience (ISPEC) / 3rd International Symposium on Security and Privacy in Social Networks and Big Data (SocialSec), Springer International Publishing, Melbourne, AUSTRALIA, pp. 694-704.
View/Download from: Publisher's site
Liu, F, Zhang, G & Lu, J 2017, 'Heterogeneous unsupervised domain adaptation based on fuzzy feature fusion', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Domain adaptation is a transfer learning approach that has been widely studied in the last decade. However, existing works still have two limitations: 1) the feature spaces of the domains are homogeneous, and 2) the target domain has at least a few labeled instances. Both limitations significantly restrict the domain adaptation approach when knowledge is transferred across domains, especially in the current era of big data. To address both issues, this paper proposes a novel fuzzy-based heterogeneous unsupervised domain adaptation approach. This approach maps the feature spaces of the source and target domains onto the same latent space constructed by fuzzy features. In the new feature space, the label spaces of two domains are maintained to reduce the probability of negative transfer occurring. The proposed approach delivers superior performance over current benchmarks, and the heterogeneous unsupervised domain adaptation (HeUDA) method provides a promising means of giving a learning system the associative ability to judge unknown things using related knowledge.
Liu, Q, Huang, H, Lu, J, Gao, Y & Zhang, G 2017, 'Enhanced word embedding similarity measures using fuzzy rules for query expansion', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, ITALY, pp. 1-6.
View/Download from: Publisher's site
Liu, S, Pang, N, Xu, G & Liu, H 2017, 'Collaborative Filtering via Different Preference Structures', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Knowledge Science, Engineering and Management, Springer International Publishing, Melbourne, Australia, pp. 309-321.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2017. Recently, social network websites have started to provide third-party sign-in options via the OAuth 2.0 protocol. For example, users can log in to the Netflix website using their Facebook accounts. By using this service, accounts of the same user are linked together, and so is their information. This provides an opportunity to create more complete user profiles, leading to improved recommender systems. However, user opinions distributed over different platforms are in different preference structures, such as ratings, rankings, pairwise comparisons, voting, etc. As existing collaborative filtering techniques assume homogeneity of preference structure, how to learn from different preference structures simultaneously remains a challenging task. In this paper, we propose a fuzzy preference relation-based approach to enable collaborative filtering via different preference structures. Experimental results on public datasets demonstrate that our approach can effectively learn from different preference structures, and show strong resistance to noise and biases introduced by cross-structure preference learning.
Lu, H, Heng, J & Wang, C 2017, 'An AI-Based Hybrid Forecasting Model for Wind Speed Forecasting', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 221-230.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Forecasting of wind speed plays an important role in wind power prediction for the management of wind energy. Due to the intermittent nature of wind, accurate forecasting of wind speed has been a long-standing research challenge. Artificial neural networks (ANNs) are one of the promising approaches to predicting wind speed. However, since the results of ANN-based models strongly depend on the initial weight and threshold values, which are usually randomly generated, the stability of forecasting results is not always satisfactory. This paper presents a new hybrid model for short-term forecasting of wind speed with high accuracy and strong stability, which optimizes the parameters of a generalized regression neural network (GRNN) using a multi-objective firefly algorithm (MOFA). To evaluate the effectiveness of this hybrid algorithm, we apply it to short-term forecasting of wind speed from four wind power stations in Penglai, China, along with four typical ANN-based models: back propagation neural network (BPNN), radial basis function neural network (RBFNN), wavelet neural network (WNN) and GRNN. The comparison results clearly show that this hybrid model can significantly reduce the impact of the randomness of initialization on the forecasting results and achieve good accuracy and stability.
Lucassen, G, Dalpiaz, F, van der Werf, JMEM, Brinkkemper, S & Zowghi, D 2017, 'Behavior-Driven Requirements Traceability via Automated Acceptance Tests', 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), IEEE, Lisbon, pp. 431-434.
View/Download from: Publisher's site
View description>>
Although advances in information retrieval have significantly improved automated traceability tools, their accuracy is still far from 100% and they therefore still need human intervention. Furthermore, despite the demonstrated benefits of traceability, many practitioners find the overhead for its creation and maintenance too high. We propose the Behavior-Driven Traceability Method (BDT), which takes a different standpoint on automated traceability: we establish ubiquitous traceability between user story requirements and source code by taking advantage of the automated acceptance tests that are created as part of the Behavior-Driven Development process.
Madhisetty, S & Williams, M-A 2017, 'Framework for Privacy in Photos and Videos When using Social Media', Proceedings of the 19th International Conference on Enterprise Information Systems, 19th International Conference on Enterprise Information Systems, SCITEPRESS - Science and Technology Publications, Porto, Portugal, pp. 331-336.
View/Download from: Publisher's site
View description>>
Privacy is a social construct. Having said that, how can it be contextualised and studied scientifically? This research contributes by investigating how to manage privacy better in the context of sharing and storing photos and videos using social media. Social media such as Facebook, Twitter, WhatsApp and many more applications are becoming popular. The instant sharing of tacit information via photos and videos makes the problem of privacy even more critical. The main problem is that nobody can define the actual meaning of privacy. Though there are definitions of privacy and Acts to protect it, there is no clear consensus as to what it actually means. I asked myself: how do I manage something when I don't know what it means exactly? I then decided to do this research by asking questions about privacy in particular categories of photos so that I could arrive at a general consensus. The data has been processed using the principles of Grounded Theory (GT) to develop a framework which assists in the effective management of privacy in photos and videos.
Manzoor, S, Manzoor, M & Hussain, W 2017, 'An Analysis of Energy-Efficient Approaches Used for Virtual Machines and Data Centres', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 91-96.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The adoption of cloud computing has increased significantly, but this has given rise to the problem of efficient energy usage. The efficient use of energy by data centers and virtual machines can help to minimize costs while meeting deadlines and managing resources, utilization and execution times. There is a consequent need for approaches that can reduce energy consumption whilst still achieving the multiple objectives of cloud computing. In this study, we examine a number of approaches discussed in the recent literature with respect to energy-efficient cloud workflow management, and we compare these approaches for the energy-efficient usage of data centers and virtual machines. The results show that virtual machine scheduling and virtual machine allocation are the most commonly used approaches that achieve optimal energy consumption.
Mateos, MK, Trahair, TN, Mayoh, C, Barbaro, PM, George, C, Sutton, R, Revesz, T, Barbaric, D, Giles, J, Alvaro, F, Mechinaud, FM, Catchpoole, DR, Kotecha, RS, Quinn, MCJ, Chenevix-Trench, G, MacGregor, S, Dalla-Pozza, L & Marshall, GM 2017, 'Clinical and Germline Risk Factors for the Occurrence of Multiple Treatment Toxicities during Childhood Acute Lymphoblastic Leukemia (ALL) Therapy', BLOOD, 59th Annual Meeting of the American-Society-of-Hematology (ASH), AMER SOC HEMATOLOGY, GA, Atlanta.
McGregor, C, Bonnis, B, Stanfield, B & Stanfield, M 2017, 'Integrating Big Data analytics, virtual reality, and ARAIG to support resilience assessment and development in tactical training', 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), IEEE, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Combat tactical training activities utilising virtual reality environments are increasingly being used to create training scenarios that promote resilience against stressors, and to enable standardized training scenarios that allow trainees to learn techniques for handling various stressors. Resilience is an important component of mental health. However, assessment of trainees' responses to these training activities has either been limited to various pre- and post-training assessment metrics, or collected in parallel during experiments and analysed after collection rather than in real-time. New Big Data approaches have the potential to provide real-time analytics. We have created a Big Data analytics platform, Athena, that acquires data in real-time from a first-person shooter military combat simulation game, ArmA 3, as well as the data ArmA 3 sends to the muscle stimulation component of a multisensory garment, ARAIG, which provides on-body feedback to the wearer for communications, weapon fire and being hit. Athena integrates that data with physiological response data such as heart rate, breathing behaviour and blood oxygen saturation. We present results from our initial pilot study from an ethics-approved equipment integration study. Our approach is equally applicable to Virtual Reality Graded Exposure Therapy with physiological monitoring.
McGregor, C, Orlov, O, Baevsky, R, Chernikova, A & Rusanov, V 2017, 'A Method for the Integration of Real-time Probabilistic Approaches for Astronaut Wellness in Human in the Loop Related Missions and Situations with Big Data Analytics', 19th AIAA Non-Deterministic Approaches Conference, 19th AIAA Non-Deterministic Approaches Conference, American Institute of Aeronautics and Astronautics, Grapevine, Texas.
View/Download from: Publisher's site
View description>>
© 2017, American Institute of Aeronautics and Astronautics Inc, AIAA. All rights reserved. The man-instrumentation-equipment-vehicle-environment ecosystem is complex in aerospace missions. The health status of the individual has important implications for decision making and performance that should be factored into assessments of the probability of success and risk of failure in both offline and real-time models. To date, probabilistic models have not considered the dynamic nature of health status. Big Data analytics is enabling new forms of analytics to assess health status in real-time. There is great potential to integrate dynamic health status information with platforms assessing risk and the probability of success, enabling dynamic, individualized, real-time probabilistic predictive risk assessment. In this research we present an approach utilizing Big Data analytics to enable continuous assessment of astronaut health risk, and show its implications for integration with HITL-related aerospace missions.
Meng, Q, Catchpoole, D, Skillicorn, D & Kennedy, PJ 2017, 'Relational autoencoder for feature extraction', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 364-371.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Feature extraction becomes increasingly important as data grows high-dimensional. The autoencoder, a neural network-based feature extraction method, has achieved great success in generating abstract features of high-dimensional data. However, it fails to consider the relationships between data samples, which may affect the experimental results of using the original and new features. In this paper, we propose a Relational Autoencoder model that considers both data features and their relationships. We also extend it to work with other major autoencoder models, including the Sparse Autoencoder, Denoising Autoencoder and Variational Autoencoder. The proposed relational autoencoder models are evaluated on a set of benchmark datasets, and the experimental results show that considering data relationships can generate more robust features which achieve lower reconstruction loss and, in turn, a lower error rate in further classification compared to the other variants of autoencoders.
Meng, Q, Wu, J, Ellis, J & Kennedy, PJ 2017, 'Dynamic island model based on spectral clustering in genetic algorithm', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1724-1731.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Maintaining relatively high diversity is important to avoid premature convergence in population-based optimization methods. The island model is widely considered a major approach to achieve this because of its flexibility and high efficiency. The model maintains a group of sub-populations on different islands and allows sub-populations to interact with each other via predefined migration policies. However, the current island model has some drawbacks. One is that after a certain number of generations, different islands may retain quite similar, converged sub-populations, thereby losing diversity and decreasing efficiency. Another drawback is that determining the number of islands to maintain is very challenging. Meanwhile, initializing many sub-populations increases the randomness of the island model. To address these issues, we propose a dynamic island model (DIM-SP) which can force each island to maintain different sub-populations, control the number of islands dynamically and start with one sub-population. The proposed island model outperforms three other state-of-the-art island models on three baseline optimization problems: job shop scheduling, travelling salesman, and the quadratic multiple knapsack problem.
Mi, J, Wang, K, Liu, B, Ding, F, Sun, Y & Huang, H 2017, 'A Multiobjective Evolution Algorithm Based Rule Certainty Updating Strategy in Big Data Environment', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, pp. 1-6.
View/Download from: Publisher's site
Lu, M, Liang, J, Zhang, Y, Li, G, Chen, S, Li, Z & Yuan, X 2017, 'Interaction+: Interaction enhancement for web-based visualizations', 2017 IEEE Pacific Visualization Symposium (PacificVis), 2017 IEEE Pacific Visualization Symposium (PacificVis), IEEE, Seoul, Korea, pp. 61-70.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this work, we present Interaction+, a tool that enhances the interactive capability of existing web-based visualizations. Different from toolkits for authoring interactions during visualization construction, Interaction+ takes existing visualizations as input, analyzes the visual objects, and provides users with a suite of interactions to facilitate visual exploration, including selection, aggregation, arrangement, comparison, filtering, and annotation. Without accessing the underlying data or the process by which the visualization is constructed, Interaction+ is application-independent and can be employed in various visualizations on the web. We demonstrate its usage in two scenarios and evaluate its effectiveness with a qualitative user study.
Mols, I, van den Hoven, E & Eggen, B 2017, 'Balance, Cogito and Dott', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, Japan, pp. 427-433.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Reflection in and on everyday life can provide self-insight, increase gratitude and have a positive effect on well-being. Media technologies can support the integration of reflection into everyday life. In this paper, we explore how both media creation and use in different modalities can support reflection. We present ongoing work on designing and building Balance, Cogito, and Dott, focusing on media in audible, textual and visual form. We discuss our research-through-design process and address the differences between modalities in terms of interaction, tangibility, and integration into everyday life.
Mustapha, S, Braytee, A & Ye, L 2017, 'Detection of surface cracking in steel pipes based on vibration data using a multi-class support vector machine classifier', SPIE Proceedings, SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, SPIE, Portland, OR.
View/Download from: Publisher's site
Naik, T, McGregor, C & James, A 2017, 'Automated partial premature infant pain profile scoring using big data analytics', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 246-249.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The lack of valid and reliable pain assessment in the neonatal population has become a significant challenge in the Neonatal Intensive Care Unit (NICU). In an attempt to forgo the manual pain scoring system, this paper presents an initial framework to automate a partial pain score for newborn infants using big data analytics that automates the analysis of high-speed physiological data. An ethically approved retrospective clinical research study was performed to calculate Artemis Premature Infant Pain Profile (APIPP) scores from premature infant data collected from the Artemis platform. Using the Premature Infant Pain Profile (PIPP) as the gold-standard scale, scoring techniques were automated to create data abstractions from gestational age and the physiological streams of Heart Rate (HR) and Oxygen Saturation (SpO2). These were then brought together to compute an automated partial pain score. APIPP was retrospectively compared with the PIPP, which was manually scored by nursing staff at The Hospital for Sick Children, Toronto. Differences between the two scales were evaluated and analysed by creating a data model. Future research will focus on clinical validation by implementing this work in a clinical decision support system (CDSS) named Artemis.
Nascimben, M, Wang, YK, Singh, AK, King, JT & Lin, CT 2017, 'Influence of EEG tonic changes on Motor Imagery performance', 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), IEEE, Shanghai, PEOPLES R CHINA, pp. 46-49.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In the Motor Imagery literature, performance predictors are commonly divided into four categories: personal, psychological, anatomical and neurophysiological. However, these predictors are limited to inter-subject changes. To overcome this limitation and evaluate intra-subject performance, we tried to combine two groups of these measures: psychological and neurophysiological. As neurophysiological variables, tonic changes in resting EEG theta and alpha sub-bands were considered. As a psychological parameter, we analyzed internalized attention and its correlates in the lower alpha band. We found that when internalized attention does not decrease, the Motor Imagery performance outcome can be correctly predicted by resting EEG tonic variations.
Nassir, S & Leong, TW 2017, 'Traversing Boundaries', Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17: CHI Conference on Human Factors in Computing Systems, ACM, Denver, Colorado, USA, pp. 6386-6397.
View/Download from: Publisher's site
View description>>
© 2017 ACM. This is a methods paper that draws from our fieldwork experiences of conducting qualitative research in Saudi Arabia, where we used interviews and probes to understand ageing people's experiences. The aim of this paper is to present insights gained about conducting qualitative research in Saudi Arabia. We present a range of the cultural considerations that shaped the design of the fieldwork and highlight opportunities, challenges, and issues that we faced when conducting interviews and deploying research probes. Influences of socio-cultural practices and religion presented interesting challenges for recruitment, cross-gender communication, and how participants reported their experiences. This paper offers methodological considerations that include the influences of local culture, gender, religion, etc. We also discuss how we shaped our fieldwork tools based upon considerations of local cultural and religious contexts. In particular, we highlight the usefulness of probes in traversing cultural boundaries when conducting fieldwork in Saudi Arabia.
Nejad, MZ, Lu, J & Behbood, V 2017, 'Applying dynamic Bayesian tree in property sales price estimation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, NanJing, JiangSu, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Accurate prediction of residential property sale prices is very important in the operation of the real estate market. Property sellers and buyers/investors wish to know a fair value for their properties, particularly at the time of the sales transaction. The main reason to build an Automated Valuation Model is to be accurate enough to replace human valuation. To select the most suitable model for property sale price prediction, this paper examines seven tree-based machine learning models, including the Dynamic Bayesian Tree (an online learning method) and Random Forest, Stochastic Gradient Boosting, CART, Bagged CART, Tree Bagged Ensembles and Boosted Tree (batch learning methods), by comparing their RMSE and MAE performance. The performance of these models is tested on 1967 records of unit sales from 19 suburbs of Sydney, Australia. The main purpose of this study is to compare the performance of the batch models with the online model. The results demonstrate that the Dynamic Bayesian Tree, as an online model, stands in the middle of the batch models based on root mean square error (RMSE) and mean absolute error (MAE). This shows that using an online model to estimate property sale prices is reasonable for real-world applications.
Nguyen, H-P, Do, T-TN & Kim, J 2017, 'Exponential coordinates based rotation stabilization for panoramic videos', 2017 IEEE International Conference on Image Processing (ICIP), 2017 IEEE International Conference on Image Processing (ICIP), IEEE, Beijing, PEOPLES R CHINA, pp. 46-50.
View/Download from: Publisher's site
Nie, L, Jiang, D, Yu, S & Song, H 2017, 'Network Traffic Prediction Based on Deep Belief Network in Wireless Mesh Backbone Networks', 2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017 IEEE Wireless Communications and Networking Conference (WCNC), IEEE, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Wireless mesh networks are prevalent for providing decentralized access for users. Wireless mesh backbone networks have attracted extensive attention because of their large capacity and low cost. Network traffic prediction is important for network planning and routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep belief network and a Gaussian model. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic, which describes its long-range dependence. A prediction model is then built by training a deep belief network on the extracted low-pass component. For the remaining high-pass component, which expresses the bursty and irregular fluctuations of network traffic, a Gaussian model is used, with its parameters estimated by the maximum likelihood method; the high-pass component is then predicted by the fitted model. Combining the predictors of the two components yields a predictor of the overall network traffic. In simulations, the proposed prediction method outperforms three existing methods.
Ning, X, Yao, L, Wang, X & Benatallah, B 2017, 'Calling for Response: Automatically Distinguishing Situation-Aware Tweets During Crises', ADVANCED DATA MINING AND APPLICATIONS, ADMA 2017, International Conference on Advanced Data Mining and Applications (ADMA), Springer International Publishing, Singapore, SINGAPORE, pp. 195-208.
View/Download from: Publisher's site
Nosouhi, MR, Pham, VVH, Yu, S, Xiang, Y & Warren, M 2017, 'A Hybrid Location Privacy Protection Scheme in Big Data Environment', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, GLOBECOM 2017 - 2017 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Location privacy has become a significant challenge in big data. In particular, given the ready availability of big data handling tools, large volumes of location data can be managed and processed easily by an adversary to obtain users' private information from Location-Based Services (LBS). So far, many methods have been proposed to preserve user location privacy for these services. Among them, dummy-based methods have various advantages in terms of implementation and low computation costs. However, they suffer from the spatiotemporal correlation issue when users submit consecutive requests. To solve this problem, a practical hybrid location privacy protection scheme is presented in this paper. The proposed method filters out correlated fake location data (dummies) before submission, so the adversary cannot identify the user's real location. Evaluations and experiments show that our proposed filtering technique significantly improves the performance of existing dummy-based methods and enables them to effectively protect the user's location privacy in a big data environment.
Nosouhi, MR, Qu, Y, Yu, S, Xiang, Y & Manuel, D 2017, 'Distance-based location privacy protection in social networks', 2017 27th International Telecommunication Networks and Applications Conference (ITNAC), 2017 27th International Telecommunication Networks and Applications Conference (ITNAC), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The current privacy protection methods adopted by social network providers rely on restricting users' access rights. They force a user to rigidly divide other users into only two categories: friends and strangers. Based on this classification, they prevent non-friends from accessing the user's data while providing full access for friends, regardless of how close they are to the user. However, the level of privacy protection could be increased gradually and smoothly rather than abruptly, like a zero/one function. Moreover, the utility of social networks is reduced if we prevent data miners from computing global statistics by applying rigid privacy policies. In this paper, we present a distance-based location privacy protection system (DBLP2) to preserve the location privacy of social network users based on their friendship distance. Whenever a user wants to see another user's location (in her profile or from spatiotemporal tags on her posts), the system returns a differentially private response based on their friendship distance. In our proposed system, the location information provided to other users becomes more generalized as the friendship distance increases. In other words, family members and close friends receive a more accurate response than casual friends and strangers. Through analysis, we show that our proposed system makes the process of location privacy protection more flexible in terms of friendship distances.
Orlov, O, McGregor, C, Baevsky, R, Chernikova, A, Prysyazhnyuk, A & Rusanov, V 2017, 'Perspective use of the technologies for big data analysis in manned space flights on the international space station', Proceedings of the International Astronautical Congress, IAC, pp. 1951-1960.
View description>>
Recent technologies in the area of Big Data analytics, which provide fast and effective review of large and diverse collections of information arriving from different sources, are being developed increasingly, and various new software tools are being proposed to provide useful results in this area. Such technologies are an important stimulus of modern scientific and technical progress, in particular in the field of manned space flight. In this publication we present the prospects for the use of Big Data analytics technology in the system of medical control of crews of the International Space Station (ISS). Today there is an active accumulation of experience of manned space flights on the ISS, where international scientific and technical cooperation is actively developing. An important step in this direction is the organisation of a new joint Russian-Canadian space experiment, 'Cosmocard 2018'. It will build on the Russian experiment 'Cosmocard', which has been carried out on the ISS since September 2014. In this project we have begun work on the modernisation of the software for the onboard computer, which will enable real-time estimation of the state of health of members of the crew. The Artemis platform, a Big Data analytics platform proposed by McGregor for the analysis of large volumes of physiological and other environmental data, will be used for this purpose. We have begun to re-engineer the algorithms for determining the functional condition of an organism and the risk of development of diseases, developed previously by the Institute of Biomedical Problems of the Russian Academy of Sciences, to run in real-time within the structure of the new software for the onboard computer that is based on Artemis. These new algorithms will be tested, initially, during simulation experiments with long isolation, using the same 'Cosmocard' physiological monitoring devices currently used on the ISS as part of the current 'Cosmocard' experiments. T...
Padmanabha, AGA, Appaji, MA, Prasad, M, Lu, H & Joshi, S 2017, 'Classification of diabetic retinopathy using textural features in retinal color fundus image', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Early diagnosis is essential for diabetic patients to avoid partial or complete blindness. This work presents a new analysis method of texture features for the classification of Diabetic Retinopathy (DR). The proposed method masks the segmented blood vessels and optic disc and directly extracts the textural features from the remaining retinal region. The proposed method is much simpler than other methods, which first detect the defective regions and then extract the required features for classification. The calculated Haralick texture measures are used for the classification of DR. The proposed method is evaluated through classification of DR using both a Support Vector Machine (SVM) and an Artificial Neural Network (ANN). The SVM achieves better accuracy (87.5%) than the ANN (79%). The performance of the proposed method is also presented in terms of sensitivity and specificity.
Perry, R, Bandara, M, Kutay, C & Rabhi, F 2017, 'Visualising complex event hierarchies using relevant domain ontologies', Proceedings of the 11th ACM International Conference on Distributed and Event-based Systems, DEBS '17: The 11th ACM International Conference on Distributed and Event-based Systems, ACM, Barcelona, Spain, pp. 351-354.
View/Download from: Publisher's site
View description>>
© 2017 Copyright held by the owner/author(s). With the growth of data available for analysis, people in many sectors are looking for tools to assist them in collating and visualising patterns in that data. We have developed an event-based visualisation system which provides an interactive interface for experts to filter and analyse data. We show that, by thinking in terms of events, event hierarchies, and domain ontologies, we can provide unique results that display patterns within the data being investigated. The proposed system uses a combination of Complex Event Processing (CEP) concepts and domain knowledge via RDF-based ontologies. In this case we combine an event model and a domain model based on the Financial Industry Business Ontology (FIBO) and conduct experiments on financial data. Our experiments show that, by thinking in terms of event hierarchies and pre-existing domain ontologies, certain new relationships between events are more easily discovered.
Pickrell, M, van den Hoven, E & Bongers, B 2017, 'Exploring in-hospital rehabilitation exercises for stroke patients', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, Australia, pp. 228-237.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. Rehabilitation exercises following stroke are by necessity repetitive and consequently can be tedious for patients. Hospitals are set up with equipment such as clothes pegs, wooden blocks and mechanical hand counters, which patients use to re-learn how to manipulate objects. The aim of this study is to understand the context of stroke patient rehabilitation as well as which types of feedback are most appropriate for patients when performing their rehabilitation exercises. Over 60 hours were spent observing stroke patients undergoing rehabilitation. Fourteen stroke patients who had attended a balance class were interviewed about their experiences and the feedback they received. From this fieldwork, a set of design guidelines has been developed to guide researchers and designers developing computer-based equipment for stroke patient rehabilitation.
Pileggi, S & Hunter, J 2017, 'An Ontology-Based, Linked Open Data Framework to support the Publishing, Re-use and Dynamic Calculation of Urban Planning Indicators', 15th International Conference on Computers in Urban Planning and Urban Management, Adelaide, Australia.
Prysyazhnyuk, A, Baevsky, R, Berseneva, A, Chernikova, A, Luchitskaya, E, Rusanov, V & McGregor, C 2017, 'Big data analytics for enhanced clinical decision support systems during spaceflight', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 296-299.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Recent advancements in the field of space medicine and technology have extended the boundaries of space travel, presenting humankind with the ability to explore undiscovered habitats. As humans embark on long-range missions, adaptation mechanisms will be put to the test, challenging the provision of medical care in space. To date, a vast amount of knowledge has been accumulated through a series of experiments, both in terrestrial simulation environments and in space missions on the ISS. As a result, a functional health state algorithm has been developed and validated by the IBMP to identify transitional states between health and disease. Significant limitations on the provision of medical care in space are imposed by retrospective data processing and analysis techniques. Some of these limitations can be addressed by the proposed instantiation of the functional state algorithm within the Online Analytics component of the Artemis platform, to enhance clinical decision support systems during spaceflight.
Qi, L, Dou, W, Zhang, X & Yu, S 2017, 'Amplified Locality-Sensitive Hashing for Privacy-Preserving Distributed Service Recommendation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 280-297.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. With the ever-increasing volume of services registered in various web communities, service recommendation techniques, e.g., Collaborative Filtering (CF), have provided a promising way to alleviate the heavy burden on the service selection decisions of target users. However, traditional CF-based service recommendation approaches often assume that the recommendation bases, i.e., historical service quality data, are centralized, without considering distributed service recommendation scenarios or the resulting privacy leakage risks. In view of this shortcoming, the Locality-Sensitive Hashing (LSH) technique is employed in this paper to protect the private information of users when distributed service recommendations are made. Furthermore, LSH is essentially a probability-based search technique and hence may generate “false-positive” or “false-negative” recommended results; therefore, we amplify LSH with AND/OR operations to improve the recommendation accuracy. Finally, through a set of experiments deployed on a real distributed service quality dataset, WS-DREAM, we validate the feasibility of our proposed recommendation approach, named DistSRAmplify-LSH, in terms of recommendation accuracy and efficiency while guaranteeing privacy-preservation in the distributed environment.
Qiao, M, Yu, J, Bian, W, Li, Q & Tao, D 2017, 'Improving Stochastic Block Models by Incorporating Power-Law Degree Characteristic', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 2620-2626.
View/Download from: Publisher's site
View description>>
Stochastic block models (SBMs) provide a statistical way of modeling network data, especially for representing clusters or community structures. However, most block models do not consider complex characteristics of networks such as the scale-free feature, making them incapable of handling the degree variation of vertices, which is ubiquitous in real networks. To address this issue, we introduce degree decay variables into the SBM, termed the power-law degree SBM (PLD-SBM), to model the varying probability of connections between node pairs. The scale-free feature is approximated by a power-law degree characteristic. This property allows PLD-SBM to correct the distortion of the degree distribution in the SBM, and thus improves the performance of cluster prediction. Experiments on both simulated networks and two real-world networks, the Adolescent Health Data and the political blogs network, demonstrate the validity of the motivation of PLD-SBM and its practical superiority.
Qin, M, Jin, D, He, D, Gabrys, B & Musial, K 2017, 'Adaptive Community Detection Incorporating Topology and Content in Social Networks', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17: Advances in Social Networks Analysis and Mining 2017, ACM, pp. 675-682.
View/Download from: Publisher's site
View description>>
In social network analysis, community detection is a basic step towards understanding the structure, function and semantics of networks. Some conventional community detection methods have limited performance because they focus merely on the topological structure of networks. In addition to topology, content information is another significant aspect of social networks. Some state-of-the-art methods have started to combine these two aspects of information, but they often assume that topology and content share the same characteristics. However, in some social networks, content may mismatch the topological structure. To better cope with such situations, we introduce a novel community detection method under the framework of nonnegative matrix factorization (NMF). Our proposed method integrates the topology and content of networks, and introduces a novel adaptive parameter for controlling the contribution of content with respect to the identified mismatch degree between the topological and content information. A case study using real social networks shows that our new method can simultaneously obtain the community partition and the corresponding semantic descriptions. Experiments on both artificial networks and real social networks further indicate that our method outperforms some state-of-the-art methods while exhibiting more robust behaviour when a mismatch between topological and content information is observed.
Qu, Y, Xu, J & Yu, S 2017, 'Privacy preserving in big data sets through multiple shuffle', Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2017: Australasian Computer Science Week 2017, ACM, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Big data privacy-preserving has attracted increasing attention from researchers in recent years, but existing models are so complicated and time-consuming that they are not easy to implement. In this paper, we propose a more feasible and efficient model for privacy-preserving on big data sets by shuffling multiple attributes (M-Shuffle) to achieve a tradeoff between data utility and privacy. Our strategy first categorizes all the records into groups using the K-means algorithm according to the sensitive attributes. Then we choose the columns to be shuffled using entropy. Finally, we introduce the random shuffle algorithm to our model to break the correlation among the columns of big data sets. Experiments on real-world datasets show that our framework achieves excellent data utility and efficiency while satisfying privacy-preserving requirements.
Qu, Y, Yu, S, Gao, L & Niu, J 2017, 'Big data set privacy preserving through sensitive attribute-based grouping', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. There is a growing trend towards attacks on database privacy due to the great value of private information stored in big data sets. The public's privacy is under threat as adversaries continuously crack popular targets such as bank accounts. We observe that existing models such as K-anonymity group records based on quasi-identifiers, which harms data utility considerably. Motivated by this, we propose a sensitive attribute-based privacy model. Our model is early work on grouping records based on sensitive attributes instead of the quasi-identifiers popular in existing models. Random shuffle is used to maximize the information entropy inside a group while the marginal distribution remains the same before and after shuffling; therefore, our method maintains better data utility than existing models. We have conducted extensive experiments which confirm that our model can achieve a satisfying privacy level without sacrificing data utility while guaranteeing higher efficiency.
Qu, Y, Yu, S, Gao, L, Peng, S, Xiang, Y & Xiao, L 2017, 'FuzzyDP: Fuzzy-based big data publishing against inquiry attacks', 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2017 IEEE Conference on Computer Communications: Workshops (INFOCOM WKSHPS), IEEE, Atlanta, GA, pp. 7-12.
View/Download from: Publisher's site
Ramezani, F & Naderpour, M 2017, 'A fuzzy virtual machine workload prediction method for cloud environments', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Due to the dynamic nature of cloud environments, the workload of virtual machines (VMs) fluctuates leading to imbalanced loads and utilization of virtual and physical cloud resources. It is, therefore, essential that cloud providers accurately forecast VM performance and resource utilization so they can appropriately manage their assets to deliver better quality cloud services on demand. Current workload and resource prediction methods forecast the workload or CPU utilization pattern of the given web-based applications based on their historical data. This gives cloud providers an indication of the required number of resources (VMs or CPUs) for these applications to optimize resource allocation for software as a service (SaaS) or platform as a service (PaaS), reducing their service costs. However, historical data cannot be used as the only data source for VM workload predictions as it may not be available in every situation. Nor can historical data provide information about sudden and unexpected peaks in user demand. To solve these issues, we have developed a fuzzy workload prediction method that monitors both historical and current VM CPU utilization and workload to predict VMs that are likely to be performing poorly. This model can also predict the utilization of physical machine (PM) resources for virtual resource discovery.
Saberi, M, Hussain, OK & Chang, E 2017, 'An online statistical quality control framework for performance management in crowdsourcing', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, Germany, pp. 476-482.
View/Download from: Publisher's site
Saberi, Z, Hussain, OK, Saberi, M & Chang, E 2017, 'Online Retailer Assortment Planning and Managing under Customer and Supplier Uncertainty Effects Using Internal and External Data', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 7-14.
View/Download from: Publisher's site
Salama, U, Yao, L, Wang, X, Paik, H-Y & Beheshti, A 2017, 'Multi-Level Privacy-Preserving Access Control as a Service for Personal Healthcare Monitoring', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 878-881.
View/Download from: Publisher's site
Salvador, MM, Budka, M & Gabrys, B 2017, 'Modelling multi-component predictive systems as petri nets', 15th International Industrial Simulation Conference 2017, ISC 2017, pp. 17-23.
View description>>
Building reliable data-driven predictive systems requires a considerable amount of human effort, especially in the data preparation and cleaning phase. In many application domains, multiple preprocessing steps need to be applied in sequence, constituting a 'workflow' and facilitating reproducibility. The concatenation of such workflow with a predictive model forms a Multi-Component Predictive System (MCPS). Automatic MCPS composition can speed up this process by taking the human out of the loop, at the cost of model transparency (i.e. not being comprehensible by human experts). In this paper, we adopt and suitably re-define the Well-handled with Regular Iterations Work Flow (WRI-WF) Petri nets to represent MCPSs. The use of such WRI-WF nets helps to increase the transparency of MCPSs required in industrial applications and make it possible to automatically verify the composed workflows. We also present our experience and results of applying this representation to model soft sensors in chemical production plants.
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'A study on detecting drones using deep convolutional neural networks', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Object detection is a challenging problem in computer vision with various potential real-world applications. The objective of this study is to evaluate deep learning based object detection techniques for detecting drones. In this paper, we have conducted experiments with different Convolutional Neural Network (CNN) based network architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), etc. Due to the sparse data available for training, the networks are trained with pre-trained models using transfer learning. Snapshots of the trained models are saved at regular intervals during training. The best models, having the highest mean Average Precision (mAP) for each network architecture, are used for evaluation on the test dataset. The experimental results show that VGG16 with Faster R-CNN performs better than the other architectures on the training dataset. A visual analysis of the test dataset is also presented.
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'Extracting descriptive motion information from crowd scenes', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. An important contribution that automated analysis tools can make to the management of pedestrians and crowd safety is the detection of conflicting large pedestrian flows: this kind of movement pattern may lead to dangerous situations and potential threats to pedestrian safety. For this reason, detecting dominant motion patterns and summarizing motion information from the scene are essential for crowd management. In this paper, we develop a framework that extracts motion information from the scene by generating point trajectories using a particle advection approach. The trajectories obtained are then clustered using an unsupervised hierarchical clustering algorithm, where similarity is measured by the Longest Common Sub-sequence (LCS) metric. The motion patterns identified in the scene are summarized and represented using color-coded arrows, where the speeds of the different flows are encoded with colors, the width of an arrow represents the density (the number of people belonging to a particular motion pattern) and the arrowhead represents the direction. This novel representation of a crowded scene provides a clutter-free visualization which helps crowd managers in understanding the scene. Experimental results show that our method outperforms state-of-the-art methods.
Saqib, M, Khan, SD & Blumenstein, M 2017, 'Detecting dominant motion patterns in crowds of pedestrians', SPIE Proceedings, Eighth International Conference on Graphic and Image Processing, SPIE, Tokyo, Japan, pp. 102251L-102251L.
View/Download from: Publisher's site
View description>>
© 2017 SPIE. As the population of the world increases, urbanization generates crowding situations which pose challenges to public safety and security. Manual analysis of crowded situations is a tedious job and usually prone to errors. In this paper, we propose a novel technique for crowd analysis, the aim of which is to detect different dominant motion patterns in real-time videos. A motion field is generated by computing the dense optical flow. The motion field is then divided into blocks. For each block, we adopt an intra-clustering algorithm for detecting the different flows within the block. Later on, we employ inter-clustering for clustering the flow vectors among different blocks. We evaluate the performance of our approach on different real-time videos. The experimental results show that our proposed method is capable of detecting distinct motion patterns in crowded videos. Moreover, our algorithm outperforms state-of-the-art methods.
Scopigno, R, Cignoni, P, Pietroni, N, Callieri, M & Dellepiane, M 2017, 'Digital Fabrication Techniques for Cultural Heritage: A Survey', Computer Graphics Forum, pp. 6-21.
View/Download from: Publisher's site
Sharma, N, Sengupta, A, Sharma, R, Pal, U & Blumenstein, M 2017, 'Pincode detection using deep CNN for postal automation', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Postal automation has been a topic of research for over a decade. The challenges and complexity involved in developing a postal automation system for a multi-lingual and multi-script country like India are manifold. The characteristics of Indian postal documents include multi-lingual behaviour, unconstrained handwritten addresses, and structured/unstructured envelopes and postcards, these being among the most challenging aspects. This paper examines state-of-the-art deep CNN architectures for detecting pin-codes in both structured and unstructured postal envelopes and documents. Region-based Convolutional Neural Networks (RCNN) are used for detecting the various significant regions, namely pin-code blocks/regions, the destination address block, and seals and stamps in a postal document. Three network architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), and VGG-M, were considered for analysis and for identifying their potential. A dataset consisting of 2300 multilingual Indian postal documents of three different categories was developed and used for experiments. The VGG-M architecture with Faster-RCNN performed better than the others and promising results were obtained.
Shen, F, Mu, Y, Yang, Y, Liu, W, Liu, L, Song, J & Shen, HT 2017, 'Classification by Retrieval', Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17: The 40th International ACM SIGIR conference on research and development in Information Retrieval, ACM, Shinjuku, Tokyo, Japan, pp. 595-604.
View/Download from: Publisher's site
View description>>
© 2017 Copyright held by the owner/author(s). This paper proposes a generic formulation that significantly expedites the training and deployment of image classification models, particularly under the scenarios of many image categories and high feature dimensions. As the core idea, our method represents both the images and learned classifiers using binary hash codes, which are simultaneously learned from the training data. Classifying an image thereby reduces to retrieving its nearest class codes in the Hamming space. Specifically, we formulate multiclass image classification as an optimization problem over binary variables. The optimization alternatingly proceeds over the binary classifiers and image hash codes. Profiting from the special property of binary codes, we show that the sub-problems can be efficiently solved through either a binary quadratic program (BQP) or a linear program. In particular, for attacking the BQP problem, we propose a novel bit-flipping procedure which enjoys high efficacy and a local optimality guarantee. Our formulation supports a large family of empirical loss functions and is, in specific, instantiated by exponential and linear losses. Comprehensive evaluations are conducted on several representative image benchmarks. The experiments consistently exhibit reduced computational and memory complexities of model training and deployment, without sacrificing classification accuracy.
Singh, J, Prasad, M, Daraghmi, YA, Tiwari, P, Yadav, P, Bharill, N, Pratama, M & Saxena, A 2017, 'Fuzzy logic hybrid model with semantic filtering approach for pseudo relevance feedback-based query expansion', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Individual query expansion term selection methods have been widely investigated in an attempt to improve their performance. Each expansion term selection method has its own weaknesses and strengths. To overcome the weaknesses and utilize the strengths of individual methods, this paper combines multiple term selection methods. Initially, the possibility of improving overall performance using individual query expansion (QE) term selection methods is explored. Secondly, some well-known rank aggregation approaches are used for combining multiple QE term selection methods. Thirdly, a new fuzzy logic-based QE approach that considers the relevance scores produced by different rank aggregation approaches is proposed. The proposed fuzzy logic approach combines the different weights of each term using fuzzy rules to infer the weights of the additional query terms. Finally, the Word2vec approach is used to filter out semantically irrelevant terms obtained after applying the fuzzy logic approach. The experimental results demonstrate that the proposed approaches achieve significant improvements over each individual term selection method, the aggregated method and a related state-of-the-art method.
Sohaib, O & Naderpour, M 2017, 'Decision making on adoption of cloud computing in e-commerce using fuzzy TOPSIS', FUZZ-IEEE, IEEE International Conference on Fuzzy Systems, IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Cloud computing promises enhanced scalability, flexibility, and cost-efficiency. In practice, however, there are many uncertainties about the usage of cloud computing resources in the e-commerce context. As e-commerce depends on a reliable and secure online store, it is important for decision makers to adopt an optimal cloud computing model (such as SaaS, PaaS or IaaS). This study assesses the factors associated with cloud-based e-commerce based on the TOE (technological, organizational, and environmental) framework, using the multi-criteria decision-making technique Fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution). The results show that the Fuzzy TOPSIS approach proposes software-as-a-service (SaaS) as the best choice for e-commerce business.
Sohaib, O, Lu, H & Hussain, W 2017, 'Internet of Things (IoT) in E-commerce: For people with disabilities', 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), IEEE, Cambodia, pp. 419-423.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The Internet of Things (IoT) is an interconnection between physical objects and the digital world. As a result, many e-commerce companies seize the advantages of the IoT to grow their business. However, the world's largest minority is people with disabilities. The IoT can lower barriers for people with disabilities by offering assistance in accessing information, and increasing Internet accessibility can help to make that happen, for both social and economic benefit. This paper presents a proposed integrated framework of the IoT and cloud computing for people with disabilities such as sensory (hearing and vision), motor (limited use of hands) and cognitive (language and learning disabilities) impairments in the context of business-to-consumer e-commerce. We conclude that IoT-enabled services offer great potential for the success of people with disabilities in the context of online shopping.
Song, X, Zhang, X, Yu, S, Jiao, S & Xu, Z 2017, 'Resource-Efficient Virtual Network Function Placement in Operator Networks', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Network Function Virtualization (NFV) is an emerging network resource utilization approach which decouples network functions from proprietary hardware. To accommodate Service Function Chain (SFC) requests, service providers offer Virtual Network Function (VNF) instances in operator networks. However, how to efficiently place VNFs at various network locations while jointly optimizing computing and communication resources is still an open problem. To this end, we study the resource-efficient VNF placement problem in operator networks. We first formulate this problem as an Integer Linear Programming (ILP) model. Then we design an efficient heuristic algorithm named Resource-efficient Virtual Network Function Placement (RVNFP) based on the Hidden Markov Model (HMM). Extensive simulation results show that, compared with previous VNF placement algorithms, RVNFP saves up to 12.51% of network cost and achieves a good tradeoff between computing resource cost and communication resource cost.
Song, Y, Zhang, G, Lu, J & Lu, H 2017, 'A fuzzy kernel c-means clustering model for handling concept drift in regression', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Concept drift, given the huge volume of high-speed data streams, requires traditional machine learning models to be self-adaptive. Techniques to handle drift are especially needed in regression cases for a wide range of applications in the real world. There is, however, a shortage of research on drift adaptation for regression cases in the literature. One of the main obstacles to further research is the resulting model complexity when regression methods and drift handling techniques are combined. This paper proposes a self-adaptive algorithm, based on a fuzzy kernel c-means clustering approach and a lazy learning algorithm, called FKLL, to handle drift in regression learning. Using FKLL, drift adaptation first updates the learning set using lazy learning, then fuzzy kernel c-means clustering is used to determine the most relevant learning set. Experiments show that the FKLL algorithm is better able to respond to drift as soon as the learning sets are updated, and is also suitable for dealing with reoccurring drift, when compared to the original lazy learning algorithm and other state-of-the-art regression methods.
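FKLL's clustering step builds on fuzzy kernel c-means. A minimal sketch of plain fuzzy c-means gives the idea; the kernelized variant would replace the Euclidean distances with kernel-induced ones. Function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means. m > 1 is the fuzzifier; each point receives a
    membership degree in every cluster rather than a hard assignment."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # distance of every point to every center, (c, n) matrix
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)                    # renormalize memberships
    return centers, U
```

Run on two well-separated point masses, the centers converge to the masses while every point keeps a graded membership in both clusters, which is what lets FKLL-style methods weight a learning set softly.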
Sood, K, Yu, S & Xiang, Y 2017, 'Are current resources in SDN allocated to maximum performance and minimize costs and maintaining QoS problems?', Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2017: Australasian Computer Science Week 2017, ACM, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 ACM. In order to maintain application-specific Quality of Service (QoS) requirements, the number of resources used in a network directly impacts capital (CAPEX) and operational expenditure (OPEX). Therefore, it is vital to investigate feasible strategies that maintain QoS while minimizing resource provisioning costs. In this paper, we propose a solution in a hierarchical Software-Defined Network (SDN) architecture that provides flow-balancing (with guaranteed QoS) in proactive operations of SDN controllers and attempts to optimize instance resource provisioning costs at the controller. Furthermore, in order to validate our findings, we show results from performance evaluations using an appropriate analytical model. We believe that our solution will help to set up a network with minimum resources and affordable cost with guaranteed application QoS.
Soro, A, Brereton, M, Roe, P, Wyeth, P, Johnson, D, Ambe, AH, Morrison, A, Bardzell, S, Leong, TW, Ju, W, Lindtner, S, Rogers, Y & Buur, J 2017, 'Designing the Social Internet of Things', Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI '17: CHI Conference on Human Factors in Computing Systems, ACM, Denver, Colorado, USA, pp. 617-623.
View/Download from: Publisher's site
View description>>
Copyright © 2017 by the Association for Computing Machinery, Inc. (ACM). What role do people have in the Internet of Things? Compared to the impressive body of research that is currently tackling the technical issues of the Internet of Things, social aspects of agency, engagement, participation, and ethics are receiving less attention. The goal of this 'Designing the Social Internet of Things' workshop is to contribute by shedding light on these aspects. We invite prospective participants to take a humanistic standpoint, explore people's relations with 'things' first, and then build on such relations so as to support socially relevant goals of engagement, relatedness, participation, and creativity.
Sun, G, Cui, T, Beydoun, G, Chen, S, Xu, D & Shen, J 2017, 'Organizing Online Computation for Adaptive Micro Open Education Resource Recommendation.', ICWL, International Conference on Web-Based Learning, Springer, Cape Town, South Africa, pp. 177-182.
View description>>
Our previous work, Micro Learning as a Service (MLaaS), aimed to deliver adaptive micro open education resources (OERs). However, relying solely on offline computation, the recommendation lacks rationality and timeliness, and it is also difficult to make a first recommendation to a new learner. In this paper we introduce the organization of the online computation of MLaaS. It targets the cold start problem caused by the shortage of learner information and enables real-time updates of the learner-micro OER profile.
Sun, G, Cui, T, Shen, J, Xu, D, Beydoun, G & Chen, S 2017, 'Ontological Learner Profile Identification for Cold Start Problem in Micro Learning Resources Delivery.', ICALT, IEEE 17th International Conference on Advanced Learning Technologies, IEEE Computer Society, Timisoara, Romania, pp. 16-20.
View/Download from: Publisher's site
View description>>
Open learning is a rising trend in the educational sector, attracting millions of learners who enjoy massive, up-to-date and free open education resources (OERs). Through the use of mobile devices, open learning is often carried out in a micro learning mode, where each unit of learning activity is commonly shorter than 15 minutes. Learners are often at a loss when choosing OERs that serve their long-term objectives and short-term demands. Our pilot work, namely MLaaS, proposed a smart system to deliver personalized OERs with micro learning to satisfy learners' real-time needs, but its decision-making process is scarcely supported due to the lack of historical data. Inspired by this, MLaaS now embeds a new solution to tackle the cold start problem, by opening up a brand new profile for each learner and delivering them the first resources in their fresh start learning journey. In this paper, we also propose an ontology-based mechanism for learning prediction and recommendation.
Sun, X, Kuang, S & Dong, D 2017, 'Rapid control of two-qubit systems based on measurement feedback', 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, pp. 310-315.
View/Download from: Publisher's site
Thuy Do, QN, Hussain, FK & Nguyen, BT 2017, 'A fuzzy approach to detect spammer groups', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Cloud computing has been advancing at an impressive rate in recent years and is likely to keep growing in the near future. New services are being developed constantly, such as cloud infrastructure, security and platform as a service, to name just a few. Due to the vast pool of available services, review websites have been created to help customers make decisions for their business. This leads some reviewers to take advantage of these tools to promote the providers that hire them or to discredit competitors. These reviewers can act either individually or in cooperation with each other. When reviewers collude to promote one product or defame another, they are called spammer groups. In this paper, we present an approach to identify spammer groups. First, a network-based method is used to identify individual spam reviewers. Then, a fuzzy k-means clustering algorithm is used to find the group that they belong to. A case study that suggests which group an incorrect review belongs to is provided to further the understanding of the new method.
Tsai, W-C, Orth, D & van den Hoven, E 2017, 'Designing Memory Probes to Inform Dialogue', Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17: Designing Interactive Systems Conference 2017, ACM, Edinburgh, United Kingdom, pp. 889-901.
View/Download from: Publisher's site
View description>>
To investigate the phenomenon that occurs during interactions between used objects and autobiographical memories, which are both ever-changing and embedded with personal significance, an adapted probing method capable of managing these complex qualities is needed. This pictorial is our attempt to find a nuanced indication of how probes could go beyond common usage to facilitate complex felt experience, and how probes can be used in less prescriptive ways to instead promote reminiscent dialogues that are rich and open to interpretation for both participants and researchers. It illustrates our exploration into potential Memory Probes, and how this might be done in a way that reflects the value we see in creating restrictions or limitations in technology-mediated interactions to encourage active participation by users in social acts such as memory creation and remembrance.
Verma, S, Liu, W, Wang, C & Zhu, L 2017, 'Extracting highly effective features for supervised learning via simultaneous tensor factorization', 31st AAAI Conference on Artificial Intelligence (AAAI 2017), AAAI Conference on Artificial Intelligence, AAAI, San Francisco, USA, pp. 4995-4996.
View description>>
Real-world data is usually generated over multiple time periods and associated with multiple labels, which can be represented as multiple labeled tensor sequences. These sequences are linked together, sharing some common features while exhibiting their own unique features. Conventional tensor factorization techniques are limited to extracting either common or unique features, but not both simultaneously. However, both types of features are important in many machine learning systems as they inherently affect the systems' performance. In this paper, we propose a novel supervised tensor factorization technique which simultaneously extracts ordered common and unique features. Classification results using features extracted by our method on the CIFAR-10 database achieve significantly better performance than other factorization methods, illustrating the effectiveness of the proposed technique.
Vo, NNY & Xu, G 2017, 'The volatility of Bitcoin returns and its correlation to financial markets', 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Krakow, Poland, pp. 1-6.
View/Download from: Publisher's site
View description>>
The 2008 financial crisis spread incredulity around the globe regarding traditional financial systems, which made investors and non-financial customers turn to alternatives such as digital banking systems. The existence and development of blockchain technology have, in recent years, made cryptocurrency a believable and complete alternative to traditional currencies. Bitcoin is the world's first peer-to-peer and decentralized digital cash system, initiated by Nakamoto [1]. Though it is the most prominent cryptocurrency, Bitcoin has not been a legal trading currency in various countries. Its exchange rate has proved to be an exceptionally high-risk portfolio with extreme volatility, which requires a more detailed evaluation before making any decision. This paper utilizes knowledge of statistics for financial time series and machine learning to (i) fit the parametric distribution, (ii) model and forecast the volatility of Bitcoin returns, and (iii) analyze its correlation to other financial market indicators. The fitted parametric time series model significantly outperforms other standard models in explaining the stylized facts and statistical variances in the behavior of Bitcoin returns. The model forecast also outperforms some machine learning methodologies, which would benefit policy makers, banks and financial investors in trading activities for both long-term and short-term strategies.
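The abstract does not name the parametric volatility model; a standard baseline for such return series is GARCH(1,1), whose conditional variance recursion is easy to sketch. The parameter values below are illustrative only; in practice omega, alpha and beta are estimated by maximum likelihood:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)            # a common initialisation
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

The one-step-ahead volatility forecast is then `np.sqrt(omega + alpha * r[-1]**2 + beta * sigma2[-1])`, which is the kind of short-term forecast the paper compares against machine learning methods.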
Wang, D, Xu, G & Deng, S 2017, 'Music recommendation via heterogeneous information graph embedding', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA, pp. 596-603.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Traditional music recommendation techniques suffer from limited performance due to the sparsity of user-music interaction data, which is addressed by incorporating auxiliary information. In this paper, we study the problem of personalized music recommendation that takes different kinds of auxiliary information into consideration. To achieve this goal, a Heterogeneous Information Graph (HIG) is first constructed to encode different kinds of heterogeneous information, including the interactions between users and music pieces, music playing sequences, and the metadata of music pieces. Based on HIG, a Heterogeneous Information Graph Embedding method (HIGE) is proposed to learn the latent low-dimensional representations of music pieces. Then, we further develop a context-aware music recommendation method. Extensive experiments have been conducted on real-world datasets to compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse datasets.
Wang, G, Zhang, G, Choi, K-S, Lam, K-M & Lu, J 2017, 'An output-based knowledge transfer approach and its application in bladder cancer prediction', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 356-363.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Many medical applications face a situation in which the on-hand data cannot fully fit an existing predictive model or on-line tool, since these models or tools only use the most common predictors, while the other valuable features collected in the current scenario are not considered at all. On the other hand, the training data in the current scenario is not yet sufficient to learn a predictive model effectively. In order to overcome these problems and construct an efficient classifier for these real situations in medical fields, in this work we present an approach based on the least squares support vector machine (LS-SVM), which utilizes a transfer learning framework to make maximum use of the data and guarantee enhanced generalization capability. The proposed approach is capable of effectively learning a target domain with limited samples by relying on the probabilistic outputs of a model previously learned with a heterogeneous method in the source domain. Moreover, it autonomously and quickly decides how much output knowledge to transfer from the source domain to the target one using a fast leave-one-out cross validation strategy. This approach is applied to a real-world clinical dataset to predict the 5-year mortality of bladder cancer patients after radical cystectomy, and the experimental results indicate that the proposed method can achieve better performance than traditional machine learning methods, consistently showing its potential under circumstances with insufficient data.
Wang, J, Jiang, C, Ni, Z, Guan, S, Yu, S & Ren, Y 2017, 'Reliability of Cloud Controlled Multi-UAV Systems for On-Demand Services', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, GLOBECOM 2017 - 2017 IEEE Global Communications Conference, IEEE, Singapore, Singapore, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Unmanned Aerial Vehicle (UAV) technology has been widely applied in both military and civilian applications. With the increasing complexity of application scenarios, the coordination of multiple UAVs has become a hot topic. However, the limited capability of UAVs makes it hard to achieve stable and reliable control. Considering this practical problem, we propose a cloud-based UAV system. It extricates computing and data storage from UAVs and utilizes the cloud to process sensor data and to maintain the stable operation of multi-UAV systems. Firstly, we analyze the cloud-based system's on-demand service ability and its impact on UAVs' control procedure. Secondly, we propose a UAV cloud control system (CCS) which serves as a networked control system. Moreover, the stability condition of the UAV cloud control system is derived. It reveals the relationship between the acquisition rate of sensor data and the stability of the cloud-based UAV system. Finally, simulations are conducted to verify the effectiveness of the theoretical analysis.
Wang, Y, Dong, D & Petersen, IR 2017, 'An approximate quantum Hamiltonian identification algorithm using a Taylor expansion of the matrix exponential function', 2017 IEEE 56th Annual Conference on Decision and Control (CDC), 2017 IEEE 56th Annual Conference on Decision and Control (CDC), IEEE, pp. 5523-5528.
View/Download from: Publisher's site
Wang, Y, Yin, Q, Dong, D, Qi, B, Petersen, IR, Hou, Z, Yonezawa, H & Xiang, G-Y 2017, 'Efficient identification of unitary quantum processes', 2017 Australian and New Zealand Control Conference (ANZCC), 2017 Australian and New Zealand Control Conference (ANZCC), IEEE, pp. 196-201.
View/Download from: Publisher's site
Wegman-Ostrosky, T, Patidar, R, Sindiri, S, Shern, J, Hawkins, DS, Catchpoole, D, Wei, JS, Skapek, S, Khan, J & Stewart, DR 2017, 'Abstract 3003: Exome analysis of known hereditary cancer genes in 122 children with rhabdomyosarcoma', Cancer Research, American Association for Cancer Research (AACR), pp. 3003-3003.
View/Download from: Publisher's site
View description>>
Abstract Introduction. Rhabdomyosarcoma (RMS) accounts for 5% of all pediatric cancer and is the most prevalent soft tissue tumor in childhood and adolescence. RMS is thought to arise from primitive mesenchymal stem cells directed towards myogenesis. Between 7% and 33% of RMS cases arise from a hereditary cancer syndrome, such as LFS or NF1. We analyzed germline genetic variants in hereditary cancer genes in 122 children with RMS. Methodology. In 122 children with RMS and 1001 cancer-free adults, we examined germline exome data to determine the frequency of genetic variants in 51 cancer genes known to underlie syndromes associated with RMS. DNA was extracted from blood or buccal cells using standard methods. Exome enrichment was performed with the NimbleGen SeqCap EZ Human Exome Library v3.0+UTR, on an Illumina HiSeq. Annotation of each exome variant was performed using a custom software pipeline. We evaluated all variants that passed quality controls with a population minor allele frequency (MAF) <0.1%. The cataloging of the variants was based on the ACMG classification as pathogenic (P), likely pathogenic (LP), or variant of uncertain significance. Results. We compared the age, gender, histologic type and localization of the primary RMS of the patients with and without P/LP variants in the 51 genes. In the patients without P/LP variants, the mean age at diagnosis was 5 years and the most frequent site of diagnosis was head and neck. In the group with P/LP variants, the mean age at diagnosis was 10 years, and the most frequent site was pelvis. In the 51 genes that were analyzed we found 9 P and 12 LP variants in 15 genes: TP53, ATM, MSH6, PMS, DICER1, FANCA, RECQL4, PTEN, WRN, RB1, BUB1B, RET, APC, FANCM and TSC2; genes with 2 variants include WRN, PTEN, BUB1B, FANCA and RET. Most of the variations were stopgain, foll...
Wu, D, Sharma, N & Blumenstein, M 2017, 'Recent advances in video-based human action recognition using deep learning: A review', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA, pp. 2865-2872.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Video-based human action recognition has become one of the most popular research areas in the field of computer vision and pattern recognition in recent years. It has a wide variety of applications such as surveillance, robotics, health care, video searching and human-computer interaction. There are many challenges involved in human action recognition in videos, such as cluttered backgrounds, occlusions, viewpoint variation, execution rate, and camera motion. A large number of techniques have been proposed to address the challenges over the decades. Three different types of datasets namely, single viewpoint, multiple viewpoint and RGB-depth videos, are used for research. This paper presents a review of various state-of-the-art deep learning-based techniques proposed for human action recognition on the three types of datasets. In light of the growing popularity and the recent developments in video-based human action recognition, this review imparts details of current trends and potential directions for future work to assist researchers.
Yan, X, Dong, P, Zheng, T, Zhang, H & Yu, S 2017, 'Fuzzy Multi-Attribute Utility Based Network Selection Approach for High-Speed Railway Scenario', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Due to the complexity and fluctuation of the wireless network state in high-speed mobility scenarios, existing work on network selection faces a great challenge in selecting the right network under imprecise information and mobility. Therefore, we design a novel dynamic imprecise-aware network selection approach, named FSNS, by taking advantage of fuzzy logic and a multi-attribute utility function. Our proposed approach not only copes with imprecise network information but also dynamically adapts to high-speed mobility scenarios, which existing proposals do not address. In this paper, we compare the FSNS approach with an enhanced TOPSIS method through simulation experiments on two types of services. The results demonstrate that FSNS outperforms TOPSIS, producing decisions that remain relatively stable and reducing abnormal selections. The experimental conclusions have pragmatic value because the simulation imitates the network state of a high-speed mobile environment using real-world data from high-speed railways.
Yang, G, Dai, Y, Zhao, H, Hirota, K & Lu, H 2017, 'Intelligent web-based experiment management system using multi-agent concept', IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, IEEE, Beijing, China, pp. 8508-8514.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Many web-based online learning systems focus more on textual and/or image-based content delivery without including experiment systems, or, if included, these are usually operated under pre-defined conditions, such as fixed scenarios and pre-determined delivery orders. These limitations hinder personalized learning and collaboration between students and discourage student engagement. To circumvent these limitations, an Intelligent Web-based Experiment Management System (IWEMS) using the multi-agent concept is presented. In the system, three kinds of software agents are used: (i) Student-Agent, responsible for assessing the knowledge levels of students; a fuzzy set based algorithm is used and the results are plotted in a dynamic polar chart; (ii) Teacher-Agent, responsible for tracking the experiment progress of each student and recommending a personalized next-to-do experiment to him or her; and (iii) Co-Agent, responsible for group formation based on similar knowledge levels to facilitate collaborative learning between students. A prototype of this system is developed using the Java Agent Development Framework (JADE), with a client/server architecture and a MySQL database. The prototype demonstrates the validity of the design and the effectiveness of the system's functionality, achieving personalized recommendation of the next-to-do experiment and a collaborative learning environment.
Yi, C, Yu, D, Sun, Y & Liang, J 2017, 'McVA: A Multi-comparison Visual Analysis System for Maximum Residue Limit Standard in Food Safety', Proceedings of ChinaVis 2017, ChinaVis 2017, QingDao, China.
Yin, R, Li, K, Zhang, G & Lu, J 2017, 'Detecting overlapping protein complexes in dynamic protein-protein interaction networks by developing a fuzzy clustering algorithm', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Protein complexes play important roles in protein-protein interaction networks. Recent studies reveal that many proteins have multiple functions and belong to more than one complex. To obtain a better complex division, we need to consider time-dependent information about networks. However, only a few studies concentrate on detecting overlapping clusters in time-dependent networks. To solve this problem, we propose an integrated model of time-dependent networks (IM-TDN) to describe time-dependent networks. Based on this model, we propose a similarity-based dynamic fuzzy clustering (SDFC) algorithm to detect overlapping clusters. We apply the algorithm to synthetic data and a real-world protein-protein interaction network dataset. The results show that our algorithm, using the proposed model, achieves better results than the state-of-the-art baseline algorithms.
Ying, M, Ying, S & Wu, X 2017, 'Invariants of quantum programs: characterisations and generation.', POPL, ACM SIGPLAN Symposium on Principles of Programming Languages, ACM, Paris, France, pp. 818-832.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Program invariant is a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) find an appropriate definition of invariants for quantum programs; and (ii) develop an effective technique of invariant generation for verification and analysis of quantum programs. Interestingly, the notion of invariant can be defined for quantum programs in two different ways - additive invariants and multiplicative invariants - corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs. The problem of generating additive invariants of quantum programs is addressed by reducing it to an SDP (Semidefinite Programming) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms - quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify the correctness of these algorithms and are helpful in optimising quantum Metropolis sampling. To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
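For reference, the two implications the abstract contrasts are standard in continuous-valued logic. For truth values $a, b \in [0,1]$:

```latex
a \rightarrow_{\text{\L}} b \;=\; \min(1,\; 1 - a + b)
\qquad \text{(\L{}ukasiewicz implication)}
\\[4pt]
a \rightarrow_{\text{G}} b \;=\;
\begin{cases}
  1 & \text{if } a \le b \\
  b & \text{if } a > b
\end{cases}
\qquad \text{(G\"{o}del implication)}
```

Both collapse to classical implication on $\{0,1\}$, which is why either can serve as the basis for a notion of partial correctness.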
Yu, C, Quan, W, Yu, S & Zhang, H 2017, 'On the two time scale characteristics of wireless high speed railway networks', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Due to the severe environment along High-Speed Railway (HSR) lines, it is essential to research an efficient HSR communication system. In our previous work, we collected and analyzed a large amount of first-hand signal-intensity data from HSR networks, and first observed that the link status variation presented an obvious Two-Time-Scale characteristic. However, that work did not clearly analyze the cause of this characteristic. In this work, we focus on the fundamental cause of the periodic Two-Time-Scale characteristic and conduct in-depth studies of this interesting phenomenon. Furthermore, we rebuild the Two-Time-Scale characteristic by leveraging the relationship between the link state variation and the geographical position along HSR lines. In particular, considering the distribution of urban and rural areas along the HSR, a periodic distance based small time-scale model and a path-loss based large time-scale model are proposed respectively. Simulation results show the proposed models can perfectly explain the Two-Time-Scale characteristic and predict HSR link quality.
Yu, H, Lu, J & Zhang, G 2017, 'Learning a fuzzy decision tree from uncertain data', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Piscataway, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Uncertainty in data exists when the value of a data item is not a precise value, but rather an interval with a probability distribution function, or a probability distribution over multiple values. Since there are intrinsic differences between uncertain and certain data, it is difficult to deal with uncertain data using traditional classification algorithms. Therefore, in this paper, we propose a fuzzy decision tree algorithm based on the classical ID3 algorithm, which integrates fuzzy set theory and ID3 to overcome the uncertain data classification problem. Besides, we propose a discretization algorithm that enables our proposed Fuzzy-ID3 algorithm to handle interval data. Experimental results show that our Fuzzy-ID3 algorithm is a practical and robust solution to the problem of uncertain data classification and that it performs better than some of the existing algorithms.
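The abstract does not spell out how Fuzzy-ID3 scores splits; a typical ingredient in fuzzy decision trees is an entropy in which class probabilities are weighted by membership degrees rather than crisp counts. A minimal sketch, with illustrative names not taken from the paper:

```python
import numpy as np

def fuzzy_entropy(memberships, labels):
    """Entropy of a fuzzy tree node: each example contributes its
    membership degree instead of a crisp count, so an example can
    partially belong to several nodes at once (Fuzzy-ID3 style)."""
    total = memberships.sum()
    entropy = 0.0
    for c in np.unique(labels):
        p = memberships[labels == c].sum() / total
        if p > 0:
            entropy -= p * np.log2(p)
    return entropy
```

With crisp 0/1 memberships this reduces to the ordinary ID3 entropy, so a fuzzy tree degenerates gracefully when the data is certain.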
Yu, Q, Dong, D, Petersen, IR & Gao, Q 2017, 'Hybrid Filtering for a Class of Quantum Systems with Classical Disturbances', IFAC-PapersOnLine, Elsevier BV, pp. 11738-11743.
View/Download from: Publisher's site
Zekveld, J, Bakker, S, Zijlema, A & van den Hoven, E 2017, 'Wobble', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, pp. 31-35.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Reminders are designed to support remembering actions or intentions to be performed later in time. Most technologies that have a reminding functionality do so by demanding attention from users (e.g., by using auditory alerts or vibration patterns) at a certain point in time or location. Because of their obtrusive nature, the reminders of many (digital) prospective memory aids we use on a daily basis are hard to ignore, regardless of our ability and motivation to perform the reminded action or intention. In this paper, we present Wobble: an interactive cone-shaped artefact for reminding in the home environment. Wobble was designed to investigate peripheral reminders. Our results imply that Wobble is best suited to reminding of intentions that do not require direct action but can be carried out over a period of time, a type of reminding currently not met by most electronic memory aids.
Zhang, W, Dong, D & Petersen, IR 2017, 'Adaptive target scheme for learning control of post-field alignment', 2017 36th Chinese Control Conference (CCC), 2017 36th Chinese Control Conference (CCC), IEEE, pp. 9752-9756.
View/Download from: Publisher's site
Zhang, X, Yao, L, Huang, C, Sheng, QZ & Wang, X 2017, 'Intent Recognition in Smart Living Through Deep Recurrent Neural Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 748-758.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Electroencephalography (EEG) signal based intent recognition has recently attracted much attention in both academia and industry, owing to its potential to help elderly or motor-disabled people control smart devices and communicate with the outer world. However, the utilization of EEG signals is challenged by low accuracy and by arduous, time-consuming feature extraction. This paper proposes a 7-layer deep learning model that classifies raw EEG signals with the aim of recognizing subjects' intents, avoiding the time consumed in pre-processing and feature extraction. The hyper-parameters are selected by an Orthogonal Array experiment method for efficiency. Our model is applied to an open EEG dataset provided by PhysioNet and achieves an accuracy of 0.9553 on intent recognition. The applicability of our proposed model is further demonstrated by two use cases of smart living (assisted living with robotics and home automation).
Zhang, X, Yao, L, Zhang, D, Wang, X, Sheng, QZ & Gu, T 2017, 'Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis', Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2017: Computing, Networking and Services, ACM, pp. 28-37.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. Electroencephalography (EEG) based brain activity recognition is a fundamental field of study for a number of significant applications such as intention prediction, appliance control, and neurological disease diagnosis in smart home and smart healthcare domains. Existing techniques mostly focus on binary brain activity recognition for a single person, which limits their deployment in wider and complex practical scenarios. Therefore, multi-person and multi-class brain activity recognition has gained popularity recently. Another challenge faced by brain activity recognition is the low recognition accuracy due to the massive noise and the low signal-to-noise ratio in EEG signals. Moreover, the feature engineering in EEG processing is time-consuming and relies heavily on expert experience. In this paper, we attempt to solve the above challenges by proposing an approach with better EEG interpretation ability via raw EEG signal analysis for multi-person and multi-class brain activity recognition. Specifically, we analyze inter-class and inter-person EEG signal characteristics, which allows us to capture the discrepancy of inter-class EEG data. Then, we adopt an Autoencoder layer to automatically refine the raw EEG signals by eliminating various artifacts. We evaluate our approach on both a public and a local EEG dataset and conduct extensive experiments to explore the effect of several factors (such as normalization methods, training data size, and Autoencoder hidden neuron size) on the recognition results. The experimental results show that our approach achieves high accuracy compared to competitive state-of-the-art methods, indicating its potential in promoting future research on multi-person EEG recognition.
Zhang, X, Yu, S, Xu, Z, Li, Y, Cheng, Z & Zhou, W 2017, 'Flow Entry Sharing in Protection Design for Software Defined Networks', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In Software Defined Networks (SDN), though we can design a protection mechanism to enable fast recovery against a single link failure, doing so requires proactively installing a large number of flow entries in switches on working paths and backup paths. However, these additional flow entries may exhaust a switch's Ternary Content Addressable Memory (TCAM), which is limited in size since it is expensive and power hungry. Accordingly, it is urgent to design a new protection technology that minimizes flow entry occupation. To this end, we leverage flow entry sharing in SDN protection to solve this problem. We first formulate the problem as an ILP (Integer Linear Programming) model, and then design a greedy heuristic algorithm named Flow Entry Sharing Protection (FESP). Extensive simulation results show that, compared with previous SDN protection algorithms, FESP reduces flow entries by 28.31% and link bandwidth by 27.65% on average.
Zhang, Y, Huang, Y, Porter, AL, Zhang, G & Lu, J 2017, 'Discovering Interactions in Big Data Research: A Learning-Enhanced Bibliometric Study', 2017 Portland International Conference on Management of Engineering and Technology (PICMET), 2017 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, OR, USA, pp. 1-12.
View/Download from: Publisher's site
View description>>
© 2017 PICMET. As one of the most representative emerging technologies, big data analytics and its related applications are rapidly leading the development of information technologies and are significantly shaping thinking and behavior in today's interconnected world. Exploring the technological evolution of big data research is an effective way to enhance technology management and create value for research and development strategies for both government and industry. This paper uses a learning-enhanced bibliometric study to discover interactions in big data research by detecting and visualizing its evolutionary pathways. Concentrating on a set of 5840 articles derived from Web of Science covering the period between 2000 and 2015, text mining and bibliometric techniques are combined to profile the hotspots in big data research and its core constituents. A learning process is used to enhance the ability to identify the interactive relationships between topics in sequential time slices, revealing technological evolution and death. The outputs include a landscape of interactions within big data research from 2000 to 2015 with a detailed map of the evolutionary pathways of specific technologies. Empirical insights for related studies in science policy, innovation management, and entrepreneurship are also provided.
Zhang, Y, Saberi, M & Chang, E 2017, 'Semantic-based lightweight ontology learning framework', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, Germany, pp. 1171-1177.
View/Download from: Publisher's site
Zhang, Z, Oberst, S & Lai, JCS 2017, 'Uncertainty analysis for the prediction of disc brake squeal propensity', INTER-NOISE 2017 - 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet, Internoise 2017, Hong Kong, China.
View description>>
Since brake squeal was first investigated in the 1930s, it has been a noise, vibration and harshness (NVH) problem plaguing the automotive industry due to warranty-related claims and customer dissatisfaction. Accelerating research efforts in the last decade, represented by almost 70% of the papers published in the open literature, have improved the understanding of the generation mechanisms of brake squeal, resulting in better analysis of the problem and better development of countermeasures by combining numerical simulations with noise dynamometer tests. However, it is still a challenge to predict brake squeal propensity with any confidence. This is because of modelling difficulties that include the often transient and nonlinear nature of brake squeal, and uncertainties in material properties, operating conditions (brake pad pressure and temperature, speed), contact conditions between pad and disc, and friction. Although the conventional Complex Eigenvalue Analysis (CEA) method, widely used in industry, is a good linear analysis tool for identifying unstable vibration modes to complement noise dynamometer tests, it is not a predictive tool as it may either over-predict or under-predict the number of unstable vibration modes. In addition, there is no correlation between the magnitude of the positive real part of a complex eigenvalue and the likelihood that the unstable vibration mode will squeal. Transient nonlinear simulations are still computationally too expensive to be implemented in industry for even exploratory predictions. In this paper, a stochastic approach, incorporating uncertainties in the surface roughness of the lining, material properties and the friction coefficient, is applied to predict the squeal propensity of a full disc brake system by using CEA on a finite element model updated by experimental modal testing results. Results compared with noise dynamometer squeal tests illustrate the potential of the stochastic CEA approach ov...
Zhou, L & Ying, M 2017, 'Differential Privacy in Quantum Computation', CSF, IEEE Computer Security Foundations Symposium, IEEE Computer Society, Santa Barbara, CA, USA, pp. 249-262.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. More and more quantum algorithms have been designed for solving problems in machine learning, database search and data analytics. An important problem then arises: how can privacy be protected when these algorithms are used on private data? For classical computing, the notion of differential privacy provides a very useful conceptual framework in which a great number of mechanisms that protect privacy by introducing certain noises into algorithms have been successfully developed. This paper defines a notion of differential privacy for quantum information processing. We carefully examine how mechanisms using three important types of quantum noise, namely amplitude damping, phase damping and depolarizing noise, can protect differential privacy. A composition theorem is proved that enables us to combine multiple privacy-preserving operations in quantum information processing.
Zhu, F, Zhang, G, Lu, J & Zhu, D 2017, 'First-order causal process for causal modelling with instantaneous and cross-temporal relations', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, USA, pp. 380-387.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Motivated by the real damped simple harmonic oscillator (SHO) system, in this paper we propose a process interpretation of causality and the first-order causal process (FoCP) model for temporal causal modelling. Compared with existing causal models that are able to model feedback, such as the structural equation model (SEM) and the structural vector autoregressive (SVAR) model, the FoCP model entails novel 2-stage evolution semantics for the instantaneous and cross-temporal causal relations found in many real-world dynamic systems. Graphical representations are developed to illustrate the causal structure compactly. Useful properties of the new model are identified and used to develop a conditional independence based algorithm for learning the causal structure from a multivariate time series dataset. Experiments on both simulated and real data validate the feasibility of the method for discovering simple yet meaningful causal structures of dynamic systems.
Zijlema, AF, Van den Hoven, E & Eggen, B 2017, 'Preserving objects, preserving memories: repair professionals and object owners on the relation between memories and traces on personal possessions', Product Lifetimes and the Environment (PLATE), 2nd Conference on Product Lifetimes and the Environment (PLATE), IOS Press, Delft Univ Technol, Fac Ind Design Engn, Delft, the Netherlands, pp. 453-457.
View/Download from: Publisher's site
View description>>
Traces of ageing and use on the material of products, and memories associated with products, have been found to contribute to product attachment and can stimulate product longevity. We present findings of a qualitative study that focused on the relation between traces of ageing and use on personal possessions and memories, and the effects of repair on objects. With this research, we intended to increase our understanding of the role of traces on personal possessions and memories. We interviewed five professionals at their workplace who worked as restorers or did repairs of personal possessions, and five owners of a repaired or restored possession. The motivations for bringing an object for repair were not only related to the deteriorating condition of the object but were also triggered by situational events or circumstances, such as passing on ownership or knowing someone who could repair the object. We found five different categories of traces among the possessions of the interviewed object owners: traces of use, traces of ageing, traces of repair, traces of accidents, and alterations. We found that objects gained meaning after the repair. When object owners or repair professionals decided not to repair traces, it was often for aesthetic and reminding reasons, but also because the traces may be how the owner remembered the object. Traces can cue associations with an object's use in the past, and also with its (imagined) history. These findings indicate that repair can enhance the cueing of memories and that preservation of meaningful traces may contribute to attachment.
Zijlema, AF, Van den Hoven, E & Eggen, B 2017, 'What comes to mind when being triggered by personal items in the home? A qualitative exploration of cuing responses', Conference of Society for Applied Research in Memory and Cognition, Sydney, Australia.
View description>>
We investigated how personal holiday-items affect the retrieval of autobiographical memories. People often keep souvenirs, photos, and other acquired items from their holidays in their home. But what do the holiday-items evoke when people encounter them? We interviewed nine participants during a ‘home tour’, discussing holiday-items from one particular holiday while walking with the participant through their homes. Qualitative analysis resulted in four types of cuing responses: ‘no-memory’ responses, ‘know’ responses, ‘memory evoked think or feel’ responses, and ‘remember’ responses. For each of these responses, we discuss the item types and their characteristics, giving a peek into everyday life remembering.
Zuo, H, Zhang, G, Lu, J & Pedrycz, W 2017, 'Fuzzy rule-based transfer learning for label space adaptation', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. As the age of big data approaches, methods of massive-scale data management are rapidly evolving. Traditional machine learning methods can no longer keep pace with the exponential growth of big data; these data-driven methods share a common assumption that the distributions of the training data and testing data are equivalent. A model built using today's data will not adequately address the classification tasks of tomorrow if the distribution of the data item values has changed. Transfer learning is emerging as a solution to this issue, and many methods have been proposed. Few of the existing methods, however, explicitly address the case where the label distributions in the two domains are different. This work proposes fuzzy rule-based methods to deal with transfer learning problems where the discrepancy between the two domains appears in the label spaces. The presented methods are validated on both synthetic and real-world datasets, and the experimental results verify their effectiveness.
Berry, DW, Kieferová, M, Scherer, A, Sanders, YR, Low, GH, Wiebe, N, Gidney, C & Babbush, R 2017, 'Improved Techniques for Preparing Eigenstates of Fermionic Hamiltonians'.
Chauhan, J, Seneviratne, S, Hu, Y, Misra, A, Seneviratne, A & Lee, Y 2017, 'BreathRNNet: Breathing Based Authentication on Resource-Constrained IoT Devices using RNNs'.
Choi, I, Milne, DN, Deady, M, Calvo, RA, Harvey, SB & Glozier, N 2017, 'Impact of Mental Health Screening on Promoting Immediate Online Help-Seeking: Randomized Trial Comparing Normative Versus Humor-Driven Feedback', JMIR Publications Inc..
View/Download from: Publisher's site
Choi, I, Milne, DN, Deady, M, Calvo, RA, Harvey, SB & Glozier, N 2017, 'Impact of Mental Health Screening on Promoting Immediate Online Help-Seeking: Randomized Trial Comparing Normative Versus Humor-Driven Feedback (Preprint)', JMIR Publications Inc..
View/Download from: Publisher's site
Demazeau, Y, Gao, J, Xu, G, Kozlak, J, Müller, K, Razzak, I, Chen, H & Gu, Y 2017, 'Proceedings of 2017 International Conference on Behavioral, Economic, Socio-cultural Computing'.
Dickson, A & Gill, AQ 2017, 'Aquatic Sciences Data Reference Model'.
View description>>
Context: To inform effective decision making, aquatic ecosystem scientists are required to integrate and interpret information from a variety of sources and domains such as biology, chemistry, hydrology, geology, meteorology, climate science and geophysics. Problem: Preparation, analysis, interpretation and communication of data from disparate sources in different formats is a challenging task. Data from different domains can often be stored differently and is subject to multiple interpretations. This suggests a need to define a taxonomy of aquatic ecosystem data. Solutions: To address the problem at hand, this paper proposes the Data Reference Model (DRM) to facilitate the understanding of data entities, topics and relationships of data within the Aquatic Science domain. Research Method: A series of brainstorming exercises with experienced aquatic ecologists identified a range of data entities, and an analysis of existing data standards provided clarity to the entity selection process. Adherence to the Open Geospatial Consortium (OGC) WaterML-WQ Best Practice and the data standards of the Atlas of Living Australia provided guidance to the development of the Aquatic Sciences DRM. The DRM is documented as a tree structure diagram and presented as a poster of data entity taxonomy, to enhance communication. Finally, it is applied to a ten-year aquatic ecosystem monitoring dataset, to demonstrate implementation. Impact: The DRM is aimed to be an important tool for the facilitation of communication between practitioners of aquatic ecosystem science and information systems specialists. It establishes a vocabulary that the two, almost opposing, parties can comprehend and provides a structured and tested approach to data interpretation and governance. Conclusion: The DRM is not intended to be a static document that covers the entirety of its subject matter, but rather an evolving model that, through ongoing collaboration, will facilitate communication and u...
Diesner, J, Ferrari, E & Xu, G 2017, 'Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017'.
Gill, AQ 2017, 'Adaptive Strategy: Digital Transformation in Higher Education'.
View description>>
EduTech – International Congress & Expo 2017
Gill, AQ 2017, 'Building adaptive architecture as a strategic platform for digital transformation'.
View description>>
2nd annual Digital Government Transformation Conference 2017
Gill, AQ 2017, 'Building an Adaptive Enterprise Architecture Capability for our Rapidly Evolving Digital World'.
View description>>
CFO Edge Conference 2017
Herr, D, Paler, A, Devitt, SJ & Nori, F 2017, 'A local and scalable lattice renormalization method for ballistic quantum computation'.
Herr, D, Paler, A, Devitt, SJ & Nori, F 2017, 'Lattice Surgery on the Raussendorf Lattice'.
Liu, S, Wang, X, Zhou, L, Guan, J, Li, Y, He, Y, Duan, R & Ying, M 2017, '$Q|SI\rangle$: A Quantum Programming Environment', Symposium on Real-Time and Hybrid Systems.
View description>>
This paper describes a quantum programming environment, named $Q|SI\rangle$. It is a platform embedded in the .Net language that supports quantum programming using a quantum extension of the $\mathbf{while}$-language. The framework of the platform includes a compiler of the quantum $\mathbf{while}$-language and a suite of tools for simulating quantum computation, optimizing quantum circuits, and analyzing and verifying quantum programs. Throughout the paper, using $Q|SI\rangle$ to simulate quantum behaviors on classical platforms with a combination of components is demonstrated. The scalable framework allows the user to program customized functions on the platform. The compiler works as the core of $Q|SI\rangle$, bridging the gap from quantum hardware to quantum software. The built-in decomposition algorithms enable universal quantum computation on the present quantum hardware.
Liu, Y-H, Jung, T-P, Lin, C-T & Liao, L-D 2017, 'Editorial Message: Special Issue on Fuzzy Brain–Computer Interface Systems', Springer Science and Business Media LLC, p. 528.
View/Download from: Publisher's site
Lu, J, Herrera, F & Zhang, G 2017, 'Guest Editorial Special Section on Fuzzy Systems in Data Science', Institute of Electrical and Electronics Engineers (IEEE), pp. 1373-1375.
View/Download from: Publisher's site
Madhav, KV, Biswas, T & Ghosh, S 2017, 'Coarse-graining of measurement and quantum-to-classical transition in the bipartite scenario'.
View description>>
The connection between coarse-graining of measurement and the emergence of classicality has been investigated for some time, if not well understood. Recently, in (PRL $\textbf{112}$, 010402 (2014)), it was pointed out that coarse-graining measurements can lead to non-violation of Bell-type inequalities by a state which would violate them under sharp measurements. We study here the effects of coarse-grained measurements on bipartite cat states. We show that while it is true that coarse-graining does indeed lead to non-violation of a Bell-type inequality, this is not reflected at the state level. Under such measurements the post-measurement states can be non-classical (in the quantum optical sense), and in certain cases coarse-graining can lead to an increase in this non-classicality with respect to the coarse-graining parameter. While there is no universal way to quantify non-classicality, we do so using well understood notions in quantum optics such as the negativity of the Wigner function and the singular nature of the Glauber-Sudarshan P distribution.
Mann, RL & Bremner, MJ 2017, 'On the Complexity of Random Quantum Computations and the Jones Polynomial'.
View description>>
There is a natural relationship between Jones polynomials and quantum
computation. We use this relationship to show that the complexity of evaluating
relative-error approximations of Jones polynomials can be used to bound the
classical complexity of approximately simulating random quantum computations.
We prove that random quantum computations cannot be classically simulated up to
a constant total variation distance, under the assumption that (1) the
Polynomial Hierarchy does not collapse and (2) the average-case complexity of
relative-error approximations of the Jones polynomial matches the worst-case
complexity over a constant fraction of random links. Our results provide a
straightforward relationship between the approximation of Jones polynomials and
the complexity of random quantum computations.
Mills, PW, Rundle, RP, Samson, JH, Devitt, SJ, Tilma, T, Dwyer, VM & Everitt, MJ 2017, 'On quantum invariants and the graph isomorphism problem'.
Murphy, A, Farley, H, Dyson, LE & Jones, H 2017, 'Preface', p. xv.
Paler, A & Devitt, SJ 2017, 'A Specification Format and a Verification Method of Fault-Tolerant Quantum Circuits'.
Peris-Ortiz, M, Gómez, JA, Merigó, JM & Rueda-Armengot, C 2017, 'Preface', pp. ix-xiii.
Singh, AK, Chen, H-T, King, J-T & Lin, C-T 2017, 'Measuring Cognitive Conflict in Virtual Reality with Feedback-Related Negativity'.
View description>>
As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but also carefully ensure not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e. a tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavior experiment validated the framework in line with the findings of conflict-induced behavior adjustments like those reported in other classical psychology experiment paradigms. Our second EEG-based experiment examines the effect of the appearance of virtual hands. We found that the amplitude of FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
Wang, J, Jiang, C, Gao, L, Yu, S, Han, Z & Ren, Y 2017, 'Complex Network Theoretical Analysis on Information Dissemination over Vehicular Networks'.
Wu, D, King, J-T, Chuang, C-H, Lin, C-T & Jung, T-P 2017, 'Spatial Filtering for EEG-Based Regression Problems in Brain-Computer Interface (BCI)'.
Wu, D, Lance, BJ, Lawhern, VJ, Gordon, S, Jung, T-P & Lin, C-T 2017, 'EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features'.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 2017, 'Driver Drowsiness Estimation from EEG Signals Using Online Weighted Adaptation Regularization for Regression (OwARR)'.
Ying, S, Ying, M & Feng, Y 2017, 'Quantum Privacy-Preserving Data Analytics'.
Ying, S, Ying, M & Feng, Y 2017, 'Quantum Privacy-Preserving Perceptron'.
Zhang, J, Devitt, SJ, You, JQ & Nori, F 2017, 'Holonomic Surface Codes for Fault-Tolerant Quantum Computation'.
Zijlema, AF, Van den Hoven, E & Eggen, B 2017, 'Memory cue evolvement with personal objects and media: The development of the item-memory relation over time'.
View description>>
Personal items that remind us of our past have not always been reminders; at some point they started cuing autobiographical memories, and their function and meaning may have changed over time. We are interested in the item-memory relationship and how it evolves over time. Therefore, we set up a study with 19 participants, who filled in cards with questions about the memories an item cued and their interactions with the item, over a period of eight months, with three personal items per participant. In this poster presentation we will discuss factors that contributed to the item-memory relationship.