Ahadi, A, Brennan, S, Kennedy, PJ, Hutvagner, G & Tran, N 2016, 'Long non-coding RNAs harboring miRNA seed regions are enriched in prostate cancer exosomes', Scientific Reports, vol. 6, no. 1, pp. 1-14.
View/Download from: Publisher's site
View description>>
Long non-coding RNAs (lncRNAs) form the largest transcript class in the human transcriptome. These lncRNAs are expressed not only in cells, but are also present in cell-derived extracellular vesicles such as exosomes. The function of these lncRNAs in cancer biology is not entirely clear, but they appear to be modulators of gene expression. In this study, we characterize the expression of lncRNAs in several prostate cancer exosomes and their parental cell lines. We show that certain lncRNAs are enriched in cancer exosomes, with the overall expression signatures varying across cell lines. These exosomal lncRNAs are themselves enriched for miRNA seeds, with a preference for let-7 family members as well as miR-17, miR-18a, miR-20a, miR-93 and miR-106b. The enrichment of miRNA seed regions in exosomal lncRNAs is matched with a concomitant high expression of the same miRNAs. In addition, the exosomal lncRNAs also showed an overrepresentation of RNA-binding protein binding motifs. The two most common motifs belonged to ELAVL1 and RBMX. Given the enrichment of miRNA and RBP sites on exosomal lncRNAs, their interplay may suggest a possible function in prostate cancer carcinogenesis.
Alzoubi, YI, Gill, AQ & Al-Ani, A 2016, 'Empirical studies of geographically distributed agile development communication challenges: A systematic review', Information & Management, vol. 53, no. 1, pp. 22-37.
View/Download from: Publisher's site
View description>>
There is increasing interest in studying and applying geographically distributed agile development (GDAD). Much has been published on GDAD communication, and there is a need to systematically review and synthesize the literature on GDAD communication challenges. Using the systematic literature review (SLR) approach and applying customized search criteria derived from the research questions, 21 relevant empirical studies were identified and reviewed in this paper. The data from these papers were extracted to identify communication challenges and the techniques used to overcome them. The findings of this research serve as a resource for GDAD practitioners and researchers when setting future research priorities and directions.
Anaissi, A, Goyal, M, Catchpoole, DR, Braytee, A & Kennedy, PJ 2016, 'Ensemble Feature Learning of Genomic Data Using Support Vector Machine', PLOS ONE, vol. 11, no. 6, p. e0157330.
View/Download from: Publisher's site
View description>>
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, but mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models using randomly drawn bootstrap samples from the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that, on average, 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random-forest-based approach. The selected genes by the ESVM-RFE algo...
Argent, RM, Sojda, RS, Giupponi, C, McIntosh, B, Voinov, AA & Maier, HR 2016, 'Best practices for conceptual modelling in environmental planning and management', Environmental Modelling & Software, vol. 80, pp. 113-121.
View/Download from: Publisher's site
Azadeh, A, Aryaee, M, Zarrin, M & Saberi, M 2016, 'A novel performance measurement approach based on trust context using fuzzy T-norm and S-norm operators: The case study of energy consumption', Energy Exploration & Exploitation, vol. 34, no. 4, pp. 561-585.
View/Download from: Publisher's site
View description>>
In today's economic environment, performance and efficiency assessment is essential for organizations to survive and raise their market share. Energy-efficient consumption is a major issue in the energy planning of every country and a key concern of managers; hence, a strong approach for efficiency evaluation and assessment in the energy sector seems necessary. In this study, a novel performance assessment model is proposed based on the concept of trust, using two popular fuzzy operators called T-norm and S-norm. The developed model is applied to a real case study of energy consumption efficiency assessment for 36 countries. An adaptive network-based fuzzy inference system (ANFIS) is used to measure the efficiencies. Also, to predict efficiency rates of future time periods, a regression model is applied as a time series model. The obtained results indicate the superiority and applicability of the proposed methodology. To the best of our knowledge, this is the first study that proposes a performance measurement approach based on the trust context using fuzzy T-norm and S-norm operators.
Bano, M, Zowghi, D & Sarkissian, N 2016, 'Empirical study of communication structures and barriers in geographically distributed teams', IET Software, vol. 10, no. 5, pp. 147-153.
View/Download from: Publisher's site
View description>>
Conway's law asserts that the communication structures of organisations constrain the design of the products they develop. This law is more explicitly observable in geographically distributed contexts because distributed teams are required to share information across different time zones and barriers. The diverse business processes and functions adopted by individual teams in geographically distributed settings create challenges for effective communication. Since the publication of Conway's law, a significant body of research has emerged on its relation to communication structures. When it comes to software projects, explicit observation of Conway's law has produced mixed results. The research reported in this study explores the communication structures and corresponding challenges faced by teams within a large geographically distributed software development organisation. The data were collected from relevant documents, a questionnaire and interviews with relevant stakeholders. The findings suggest that Conway's law is observable within the communication structures of globally distributed software development teams. The authors identify the barriers and challenges to effective communication in this setting and investigate the benefits of utilising an integrated system to overcome these challenges.
Belete, GF & Voinov, A 2016, 'Exploring temporal and functional synchronization in integrating models: A sensitivity analysis', Computers & Geosciences, vol. 90, pp. 162-171.
View/Download from: Publisher's site
Benavides Espinosa, MDM & Merigó Lindahl, JM 2016, 'Organizational design as a learning enabler: A fuzzy-set approach', Journal of Business Research, vol. 69, no. 4, pp. 1340-1344.
View/Download from: Publisher's site
View description>>
In the literature on organizational learning, very few empirical studies attempt to show how organizational design can enable or hinder learning in organizations. This study uses a fuzzy-set technique (fuzzy-set qualitative comparative analysis: fsQCA) as an initial approach to analyzing different design variables and how they affect organizational learning. The results prove that the mechanical structures are suitable for organizational learning, especially in large companies. Furthermore, qualified workers should have autonomy to learn.
Beydoun, G & Low, G 2016, 'Centering ontologies in agent oriented software engineering processes', Complex & Intelligent Systems, vol. 2, no. 3, pp. 235-242.
View/Download from: Publisher's site
View description>>
A plethora of Multi Agent Systems (MAS) development methodologies exists, and all compete for prominence. This paper advocates unification of best-of-breed activities from these methodologies and examines two existing approaches for unifying access to them. It proposes an alternative approach that focusses on the use of domain knowledge through ontologies as offering the best potential for unifying access to them. The reliance on ontologies will provide flexibility in the process and work products used within the methodology. The focus on domain knowledge will reduce the number of mandatory methodological tasks and at the same time create scope for reuse with respect to both system designs and components. The paper further sketches and argues for a full software development lifecycle for MAS in which ontologies expressing domain knowledge are the central artifacts.
Blanco-Mesa, F, Merigó, JM & Kacprzyk, J 2016, 'Bonferroni means with distance measures and the adequacy coefficient in entrepreneurial group theory', Knowledge-Based Systems, vol. 111, pp. 217-227.
View/Download from: Publisher's site
View description>>
The aim of the paper is to develop new aggregation operators using Bonferroni means, OWA operators and distance measures. We introduce the BON-OWAAC and BON-OWAIMAM operators. We are able to include the adequacy coefficient and the maximum and minimum levels in the same formulation with Bonferroni means and an OWA operator. The main advantages of using these operators are that they allow consideration of continuous aggregations, multiple comparisons between each argument and distance measures in the same formulation. An application is developed using these new algorithms in combination with Moore's families and Galois lattices to solve group decision-making problems. The professional and personal interests of entrepreneurs who share co-working spaces are taken as an example for establishing relationships and groups. According to the professional and personal profile affinities of each entrepreneur, the results show dissimilarity and fuzzy relationships and the maximum similarity sub-relations used to establish relationships and groups via Moore's families and Galois lattices. Finally, this new family of distance measures can be used for applications in areas such as sports teams, marketing strategy and teamwork.
Blount, Y, Abedin, B, Vatanasakdakul, S & Erfani, S 2016, 'Integrating enterprise resource planning (SAP) in the accounting curriculum: a systematic literature review and case study', Accounting Education, vol. 25, no. 2, pp. 185-202.
View/Download from: Publisher's site
View description>>
This study investigates how the enterprise resource planning (ERP) software package SAP was integrated into the curriculum of an accounting information systems (AIS) course at an Australian university. Furthermore, the paper provides a systematic literature review of articles published between 1990 and 2013 to understand how ERP systems were integrated into the curricula of other institutions, and to inform curriculum designers about approaches for adopting SAP, its benefits and potential limitations. The experiences of integrating SAP into an AIS course, from both the student and teaching staff perspectives, are described and evaluated. The main finding was the importance of resourcing the instructors with technical and pedagogical support to achieve the learning outcomes. The paper concludes by proposing critical success factors for integrating ERP effectively into an AIS course.
Boixo, S, Isakov, SV, Smelyanskiy, VN, Babbush, R, Ding, N, Jiang, Z, Bremner, MJ, Martinis, JM & Neven, H 2016, 'Characterizing Quantum Supremacy in Near-Term Devices', Nature Physics, vol. 14, no. 6, pp. 595-600.
View/Download from: Publisher's site
View description>>
A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time in a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits - the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near-term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits which approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Achieving quantum supremacy with sparse and noisy commuting quantum computations', Quantum, vol. 1, p. 8.
View/Download from: Publisher's site
View description>>
The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Average-Case Complexity Versus Approximate Simulation of Commuting Quantum Computations', Physical Review Letters, vol. 117, no. 8.
View/Download from: Publisher's site
View description>>
We use the class of commuting quantum computations known as IQP (instantaneous quantum polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalizations of the boson sampling problem that avoid the so-called permanent anticoncentration conjecture.
Brown, RBK, Beydoun, G, Low, G, Tibben, W, Zamani, R, Garcia-Sanchez, F & Martinez-Bejar, R 2016, 'Computationally efficient ontology selection in software requirement planning', Information Systems Frontiers, vol. 18, no. 2, pp. 349-358.
View/Download from: Publisher's site
Cao, Z, Lin, C-T, Chuang, C-H, Lai, K-L, Yang, AC, Fuh, J-L & Wang, S-J 2016, 'Resting-state EEG power and coherence vary between migraine phases', The Journal of Headache and Pain, vol. 17, no. 1.
View/Download from: Publisher's site
View description>>
Background: Migraine is characterized by a series of phases (inter-ictal, pre-ictal, ictal, and post-ictal). It is of great interest whether resting-state electroencephalography (EEG) is differentiable between these phases. Methods: We compared resting-state EEG energy intensity and effective connectivity in different migraine phases using EEG power and coherence analyses in patients with migraine without aura as compared with healthy controls (HCs). EEG power and isolated effective coherence of delta (1–3.5 Hz), theta (4–7.5 Hz), alpha (8–12.5 Hz), and beta (13–30 Hz) bands were calculated in the frontal, central, temporal, parietal, and occipital regions. Results: Fifty patients with episodic migraine (1–5 headache days/month) and 20 HCs completed the study. Patients were classified into inter-ictal, pre-ictal, ictal, and post-ictal phases (n = 22, 12, 8, 8, respectively), using 36-h criteria. Compared to HCs, inter-ictal and ictal patients, but not pre- or post-ictal patients, had lower EEG power and coherence, except for a higher effective connectivity in fronto-occipital network in inter-ictal patients (p <.05). Compared to data obtained from the inter-ictal group, EEG power and coherence were increased in the pre-ictal group, with the exception of a lower effective connectivity in fronto-occipital network (p <.05). Inter-ictal and ictal patients had decreased EEG power and coherence relative to HCs, which were “normalized” in the pre-ictal or post-ictal groups. Conclusion: Resting-state EEG power density and effective connectivity differ between migraine phases and provide an insight into the complex neurophysiology of migraine.
Casanovas, M, Torres-Martínez, A & Merigó, JM 2016, 'Decision Making in Reinsurance with Induced OWA Operators and Minkowski Distances', Cybernetics and Systems, vol. 47, no. 6, pp. 460-477.
View/Download from: Publisher's site
Cetindamar, D, Phaal, R & Probert, DR 2016, 'Technology management as a profession and the challenges ahead', Journal of Engineering and Technology Management, vol. 41, pp. 1-13.
View/Download from: Publisher's site
Chen, S, Yuan, X, Wang, Z, Guo, C, Liang, J, Wang, Z, Zhang, X & Zhang, J 2016, 'Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data', IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 270-279.
View/Download from: Publisher's site
View description>>
Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.
Chen, Y, Zhen, YG, Hu, HY, Liang, J & Ma, KL 2016, 'Visualization technique for multi-attribute in hierarchical structure', Ruan Jian Xue Bao/Journal of Software, vol. 27, no. 5, pp. 1091-1102.
View/Download from: Publisher's site
View description>>
In many fields, such as food safety, stock markets, and network security, there is an increasing need to analyze complex data that are both hierarchical and multi-attribute. Visual analytics, which has emerged in recent years, provides a good solution for analyzing this kind of data. So far, many visualization methods for multi-dimensional data and hierarchical data, the typical data objects in the field of information visualization, have been presented to solve data analysis problems effectively. However, existing solutions cannot meet the requirements of visual analysis for complex data with both multi-dimensional and hierarchical attributes. This paper presents a technique named Multi-Coordinate in Treemap (MCT), which combines rectangular treemaps with multi-dimensional coordinate techniques. MCT uses a treemap created with the Squarified and Strip layout algorithms to represent the hierarchical structure and uses the four edges of a treemap's rectangular node as attribute axes; by mapping property values to the attribute axes, connecting attribute points and fitting curves, it achieves visualization of multiple attributes in a hierarchical structure. This work applies MCT to visualize pesticide residue detection data and implements the visualization of excessive pesticide residues detected in fruits and vegetables across the provinces of China. The technique provides an efficient analysis tool for field experts and can also be applied in other fields that require visual analysis of complex data with both hierarchical and multi-attribute characteristics.
Chuang, S-W, Chuang, C-H, Yu, Y-H, King, J-T & Lin, C-T 2016, 'EEG Alpha and Gamma Modulators Mediate Motion Sickness-Related Spectral Responses', International Journal of Neural Systems, vol. 26, no. 2, p. 1650007.
View/Download from: Publisher's site
View description>>
Motion sickness (MS) is a common experience of travelers. To provide insights into brain dynamics associated with MS, this study recruited 19 subjects to participate in an electroencephalogram (EEG) experiment in a virtual-reality driving environment. When riding on consecutive winding roads, subjects experienced postural instability and sensory conflict between visual and vestibular stimuli. Meanwhile, subjects rated their level of MS on a six-point scale. Independent component analysis (ICA) was used to separate the filtered EEG signals into maximally temporally independent components (ICs). Then, reduced logarithmic spectra of ICs of interest, using principal component analysis, were decomposed by ICA again to find spectrally fixed and temporally independent modulators (IMs). Results demonstrated that a higher degree of MS accompanied increased activation of alpha and gamma IMs across remote-independent brain processes, covering motor, parietal and occipital areas. This co-modulatory spectral change in the alpha and gamma bands revealed the neurophysiological demand to regulate conflicts among multi-modal sensory systems during MS.
Devitt, SJ 2016, 'Performing Quantum Computing Experiments in the Cloud', Physical Review A, vol. 94, no. 3, p. 032329.
View/Download from: Publisher's site
View description>>
Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, the first realisation of quantum computing technology would occur over the cloud, with users logging onto dedicated hardware over the classical internet. Recently IBM has released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments that have never been demonstrated before. We utilise the IBM chip to realise protocols in quantum error correction, quantum arithmetic, quantum graph theory and fault-tolerant quantum computation, by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
Devitt, SJ 2016, 'Programming quantum computers using 3-D puzzles, coffee cups, and doughnuts', XRDS, vol. 23, no. 1, pp. 45-50.
View/Download from: Publisher's site
View description>>
The task of programming a quantum computer is just as strange as quantum mechanics itself. But it now looks like a simple 3D puzzle may be the future tool of quantum software engineers.
Ding, W-P, Lin, C-T, Prasad, M, Chen, S-B & Guan, Z-J 2016, 'Attribute Equilibrium Dominance Reduction Accelerator (DCCAEDR) Based on Distributed Coevolutionary Cloud and Its Application in Medical Records', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 3, pp. 384-400.
View/Download from: Publisher's site
View description>>
Aimed at the tremendous challenge of attribute reduction for big data mining and knowledge discovery, we propose a new attribute equilibrium dominance reduction accelerator (DCCAEDR) based on the distributed coevolutionary cloud model. First, the framework of an N-population distributed coevolutionary MapReduce model is designed to divide the entire population into N subpopulations, sharing the reward of different subpopulations' solutions under a MapReduce cloud mechanism. Because the adaptive balancing between exploration and exploitation can be achieved in a better way, the reduction performance is guaranteed to be the same as that using the whole independent data set. Second, a novel Nash equilibrium dominance strategy of elitists under the N bounded rationality regions is adopted to assist the subpopulations in attaining the stable status of Nash equilibrium dominance. This further enhances the accelerator's robustness against complex noise in big data. Third, an approximation parallelism mechanism based on MapReduce is constructed to implement rule reduction by accelerating the computation of attribute equivalence classes. Consequently, the entire attribute reduction set with the equilibrium dominance solution can be achieved. Extensive simulation results illustrate the effectiveness and robustness of the proposed DCCAEDR accelerator for attribute reduction on big data. Furthermore, the DCCAEDR is applied to solve attribute reduction for traditional Chinese medical records and to segment cortical surfaces of neonatal brain 3-D MRI records, and it shows superior competitive results when compared with representative algorithms.
Erfani, SS, Blount, Y & Abedin, B 2016, 'The influence of health-specific social network site use on the psychological well-being of cancer-affected people', Journal of the American Medical Informatics Association, vol. 23, no. 3, pp. 467-476.
View/Download from: Publisher's site
Faed, A, Chang, E, Saberi, M, Hussain, OK & Azadeh, A 2016, 'Intelligent customer complaint handling utilising principal component and data envelopment analysis (PDA)', Applied Soft Computing, vol. 47, pp. 614-630.
View/Download from: Publisher's site
Farley, J & Voinov, A 2016, 'Economics, socio-ecological resilience and ecosystem services', Journal of Environmental Management, vol. 183, pp. 389-398.
View/Download from: Publisher's site
Frawley, JK, Dyson, LE, Wakefield, J & Tyler, J 2016, 'Supporting Graduate Attribute Development in Introductory Accounting with Student-Generated Screencasts', International Journal of Mobile and Blended Learning (IJMBL), vol. 8, no. 3, pp. 65-82.
View/Download from: Publisher's site
View description>>
In recent years educational, industry and government bodies have placed increasing emphasis on the need to better support the development of “soft” skills or graduate attributes within higher education. This paper details the adoption of a student-generated multimedia screencast assignment that was found to address this need. Implemented within a large introductory accounting subject, this optional assignment allowed undergraduate students to design, develop and record a screencast explaining a key accounting concept to their peers. This paper reports on the trial, evaluation and redesign of this assignment. Drawing on data from student surveys, practitioner reflections and descriptive analysis of the screencasts themselves, this paper demonstrates the ways the assignment contributed to the development and expression of a number of graduate attributes, including the students' skills in multimedia, creativity, teamwork and self-directed learning. Adopting free-to-use software and providing a fun and different way of learning accounting, this novel approach constitutes a sustainable and readily replicable way of supporting graduate attribute development. This paper offers insights relevant to both researchers and practitioners.
Garcia, JA, Schoene, D, Lord, SR, Delbaere, K, Valenzuela, T & Navarro, KF 2016, 'A Bespoke Kinect Stepping Exergame for Improving Physical and Cognitive Function in Older People: A Pilot Study', Games for Health Journal, vol. 5, no. 6, pp. 382-388.
View/Download from: Publisher's site
View description>>
Background: Systematic review evidence has shown that step training reduces the number of falls in older people by half. This study investigated the feasibility and effectiveness of a bespoke Kinect stepping exergame in an unsupervised home-based setting. Materials and Methods: An uncontrolled pilot trial was conducted in 12 community-dwelling older adults (mean age 79.3 ± 8.7 years, 10 females). The stepping game comprised rapid stepping, attention, and response inhibition. Participants were recommended to exercise unsupervised at home for a minimum of three 20-minute sessions per week over the 12-week study period. The outcome measures were choice stepping reaction time (CSRT) (main outcome measure), standing balance, gait speed, five-time sit-to-stand (STS), timed up and go (TUG) performance, and neuropsychological function (attention: letter-digit test; executive function: Stroop test), assessed at baseline, 4 weeks, 8 weeks, and trial end (12 weeks). Results: Ten participants (83%) completed the trial and reassessments. A median of 8.2 20-minute sessions were completed and no adverse events were reported. Across the trial period, participants showed significant improvements in CSRT (11%), TUG (13%), gait speed (29%), standing balance (7%), and STS (24%) performance (all P < 0.05). There were also nonsignificant, but meaningful, improvements in the letter-digit (13%) and Stroop tests (15%). Conclusions: This study found that a bespoke Kinect step training program was safe and feasible for older people to undertake unsupervised at home and led to improvements in stepping, standing balance, gait speed, and mobility. The home-based step training program could therefore be included in exercise programs designed to prevent falls.
Gholami, MF, Daneshgar, F, Low, G & Beydoun, G 2016, 'Cloud migration process-A survey, evaluation framework, and open challenges', JOURNAL OF SYSTEMS AND SOFTWARE, vol. 120, pp. 31-69.
View/Download from: Publisher's site
Gill, AQ, Phennel, N, Lane, D & Phung, VL 2016, 'IoT-enabled emergency information supply chain architecture for elderly people: The Australian context.', Inf. Syst., vol. 58, pp. 75-86.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. All rights reserved. The effective delivery of emergency information to elderly people is a challenging task. Failure to deliver appropriate information can have an adverse impact on the well-being of elderly people. This paper addresses this challenge and proposes an IoT-enabled information architecture driven approach, called 'Resalert'. Resalert offers an IoT-enabled emergency information supply chain architecture pattern, IoT device architecture and system architecture. The applicability of Resalert is evaluated by means of an example scenario, a portable Raspberry Pi based system prototype and user evaluation. The results of this research indicate that the proposed approach is useful for the effective delivery of emergency information to elderly people.
González, LO, Rodríguez Gil, LI, Martorell Cunill, O & Merigó Lindahl, JM 2016, 'The effect of financial innovation on European banks' risk', Journal of Business Research, vol. 69, no. 11, pp. 4781-4786.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. This study examines the effect of the use of securitization and credit derivatives on the risk profile of European banks. Using information from 134 listed European banks during the period of 2006–2010, the results show that securitization and trading with credit derivatives have a negative effect on financial stability. The main findings also show the dominance of trading positions over hedging positions for credit derivatives. The results of this study support the higher capital requirements of the new Basel III international banking regulations. Furthermore, accounting measures do not readily indicate market risks, and thus the results support central banks’ use of market-solvency measures to monitor financial stability.
Green, D, Naidoo, E, Olminkhof, C & Dyson, LE 2016, 'Tablets@university: The ownership and use of tablet devices by students', AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, vol. 32, no. 3, pp. 50-64.
View/Download from: Publisher's site
View description>>
Tablet devices have made a dramatic impact in the computing industry, and have been widely adopted by consumers, including tertiary students. Published research surrounding the use of tablet computers in tertiary settings appears to be largely centred on the advantages of integrating tablets into university pedagogies. However, there appears to have been very little research into the current level of ownership and use amongst students beyond university-sponsored adoption programs. This paper sets out to provide baseline data on the level of ownership and the current usage of tablets by students at an Australian university. A survey of 200 undergraduate and postgraduate students and interviews with five students showed high tablet ownership and significant engagement with educational uses. The findings of this study have implications for the incorporation of tablets into university education.
Hazber, MAG, Li, R, Gu, X & Xu, G 2016, 'Integration Mapping Rules: Transforming Relational Database to Semantic Web Ontology', Applied Mathematics & Information Sciences, vol. 10, no. 3, pp. 881-901.
View/Download from: Publisher's site
View description>>
© 2016 NSP. Semantic integration has become an attractive area of research in several disciplines, such as information integration, databases and ontologies. A huge amount of data is still stored in relational databases (RDBs) that can be used to build ontologies, yet these databases cannot be used directly by the semantic web. Therefore, one of the main challenges of the semantic web is mapping relational databases to ontologies (RDF(S)-OWL). Moreover, manual mapping of web contents to ontologies is impractical because the web contains billions of pages and most of these contents are generated from relational databases. Hence, we propose a new approach, which enables semantic web applications to access relational databases and their contents by semantic methods. Domain ontologies can be used to formulate relational database schema and data in order to simplify the mapping (transformation) of the underlying data sources. Our method consists of two main phases: building ontology from an RDB schema and the automatic generation of ontology instances from RDB data. In the first phase, we studied different cases of RDB schema to be mapped into ontology represented in RDF(S)-OWL, while in the second phase, the mapping rules are used to transform RDB data to ontological instances represented in RDF triples. Our approach is demonstrated with examples, validated by an ontology validator and implemented using Apache Jena in Java and MySQL. This approach is effective for building ontology and important for mining semantic information from huge web resources.
He, W & Xu, G 2016, 'Social media analytics: unveiling the value, impact and implications of social media analytics for the management and use of online information', Online Information Review, vol. 40, no. 1.
View/Download from: Publisher's site
Heijboer, M, van den Hoven, E, Bongers, B & Bakker, S 2016, 'Facilitating peripheral interaction: design and evaluation of peripheral interaction for a gesture-based lighting control with multimodal feedback', PERSONAL AND UBIQUITOUS COMPUTING, vol. 20, no. 1, pp. 1-22.
View/Download from: Publisher's site
View description>>
© 2015, The Author(s). Most interactions with today’s interfaces require a person’s full and focused attention. To alleviate the potential clutter of focal information, we investigated how interactions could be designed to take place in the background or periphery of attention. This paper explores whether gestural, multimodal interaction styles of an interactive light system allow for this. A study compared the performance of interactions with the light system in two conditions: the central condition in which participants interacted only with the light system, and the peripheral condition in which they interacted with the system while performing a high-attentional task simultaneously. Our study furthermore compared different feedback styles (visual, auditory, haptic, and a combination). Results indicated that especially for the combination feedback style, the interaction could take place without participants’ full visual attention, and performance did not significantly decrease in the peripheral condition. This seems to indicate that these interactions at least partly took place in their periphery of attention and that the multimodal feedback style aided this process.
Hou, S, Zhou, S, Chen, L, Feng, Y & Awudu, K 2016, 'Multi-label learning with label relevance in advertising video', Neurocomputing, vol. 171, pp. 932-948.
View/Download from: Publisher's site
View description>>
The recent proliferation of videos has brought out the need for applications such as automatic annotation and organization. These applications could greatly benefit from the respective thematic content depending on the type of video. Unlike other kinds of video, an advertising video usually conveys a specific theme in a certain time period (e.g. drawing the audience's attention to a product or emphasizing the brand). Traditional multi-label algorithms may not work effectively with advertising videos due mainly to their heterogeneous nature. In this paper, we propose a new learning paradigm to resolve the problems arising out of traditional multi-label learning in advertising videos through label relevance. Aiming to address the issue of label relevance, we firstly assign each label with a label degree (LD) to classify all the labels into three groups: first label (FL), important label (IL) and common label (CL). We then propose a Directed Probability Label Graph (DPLG) model to mine the most related labels from the multi-label data with label relevance, in which the interdependency between labels is considered. In the implementation of DPLG, the labels that appear occasionally and possess inconspicuous co-occurrences are effectively eliminated by employing λ-filtering and τ-pruning processes, respectively. Graph theory is then utilized in DPLG to acquire Correlative Label-Sets (CLSs). Lastly, the searched Correlative Label-Sets (CLSs) are utilized to enhance multi-label annotation. Experimental results on advertising videos and several publicly available datasets demonstrate the effectiveness of the proposed method for multi-label annotation with label relevance.
Huang, K-C, Huang, T-Y, Chuang, C-H, King, J-T, Wang, Y-K, Lin, C-T & Jung, T-P 2016, 'An EEG-Based Fatigue Detection and Mitigation System', International Journal of Neural Systems, vol. 26, no. 04, pp. 1650018-1650018.
View/Download from: Publisher's site
View description>>
Research has indicated that fatigue is a critical factor in cognitive lapses because it negatively affects an individual’s internal state, which is then manifested physiologically. This study explores neurophysiological changes, measured by electroencephalogram (EEG), due to fatigue. This study further demonstrates the feasibility of an online closed-loop EEG-based fatigue detection and mitigation system that detects physiological change and can thereby prevent fatigue-related cognitive lapses. More importantly, this work compares the efficacy of fatigue detection and mitigation between the EEG-based and a nonEEG-based random method. Twelve healthy subjects participated in a sustained-attention driving experiment. Each participant’s EEG signal was monitored continuously and a warning was delivered in real-time to participants once the EEG signature of fatigue was detected. Study results indicate suppression of the alpha- and theta-power of an occipital component and improved behavioral performance following a warning signal; these findings are in line with those in previous studies. However, study results also showed reduced warning efficacy (i.e. increased response times (RTs) to lane deviations) accompanied by increased alpha-power due to the fluctuation of warnings over time. Furthermore, a comparison of EEG-based and nonEEG-based random approaches clearly demonstrated the necessity of adaptive fatigue-mitigation systems, based on a subject’s cognitive level, to deliver warnings. Analytical results clearly demonstrate and validate the efficacy of this online closed-loop EEG-based fatigue detection and mitigation mechanism to identify cognitive lapses that may lead to catastrophic incidents in countless operational environments.
Hussain, W, Hussain, FK, Hussain, OK & Chang, E 2016, 'Provider-Based Optimized Personalized Viable SLA (OPV-SLA) Framework to Prevent SLA Violation', The Computer Journal, vol. 59, no. 12, pp. 1760-1783.
View/Download from: Publisher's site
View description>>
Service level agreement (SLA) is an essential agreement formed between a consumer and a provider in business activities. The SLA defines the business terms, objectives, obligations and commitment of both parties to a business activity, and in cloud computing it also defines a consumer's request for both fixed and variable resources, due to the elastic and dynamic nature of the cloud-computing environment. Providers need to thoroughly analyze such variability when forming SLAs to ensure they commit to the agreements with consumers and at the same time make the best use of available resources and obtain maximum returns. They can achieve this by entering into viable SLAs with consumers. A consumer's profile becomes a key element in determining the consumer's reliability, as a consumer who has previous service violation history is more likely to violate future service agreements; hence, a provider can avoid forming SLAs with such consumers. In this paper, we propose a novel optimal SLA formation architecture from the provider's perspective, enabling the provider to consider a consumer's reliability in committing to the SLA. We classify existing consumers into three categories based on their reliability or trustworthiness value and use that knowledge to ascertain whether to accept a consumer request for resource allocation, and then to determine the extent of the allocation. Our proposed architecture helps the service provider to monitor the behavior of service consumers in the post-interaction time phase and to use that information to form viable SLAs in the pre-interaction time phase to minimize service violations and penalties.
Jialin, H, Guangquan, Z, Yaoguang, H & Jie, L 2016, 'A solution to bi/tri-level programming problems using particle swarm optimization', INFORMATION SCIENCES, vol. 370, pp. 519-537.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. Multilevel (including bi-level and tri-level) programming aims to solve decentralized decision-making problems that feature interactive decision entities distributed throughout a hierarchical organization. Since the multilevel programming problem is strongly NP-hard and traditional exact algorithmic approaches lack efficiency, heuristics-based particle swarm optimization (PSO) algorithms have been used to generate an alternative for solving such problems. However, the existing PSO algorithms are limited to solving linear or small-scale bi-level programming problems. This paper first develops a novel bi-level PSO algorithm to solve general bi-level programs involving nonlinear and large-scale problems. It then proposes a tri-level PSO algorithm for handling tri-level programming problems that are more challenging than bi-level programs and have not been well solved by existing algorithms. For the sake of exploring the algorithms' performance, the proposed bi/tri-level PSO algorithms are applied to solve 62 benchmark problems and 810 large-scale problems which are randomly constructed. The computational results and comparison with other algorithms clearly illustrate the effectiveness of the proposed PSO algorithms in solving bi-level and tri-level programming problems.
Joshi, RG, Chelliah, J, Sood, S & Burdon, S 2016, 'Nature and spirit of exchange and interpersonal relationships fostering grassroots innovations', The Journal of Developing Areas, vol. 50, no. 6, pp. 399-409.
View/Download from: Publisher's site
View description>>
Exchange and interpersonal relationships are central to the functioning and sustainability of socio-economic activities, including innovation. Grassroots innovations (GI) are dynamic and relational phenomena that evolve with grassroots innovators’ beliefs, expectations and obligatory relationships for varied resources, and the actualization of their desire to make novel and beneficial products. In this paper, the dynamics of exchange and interpersonal relationships that underpin the GI phenomenon are explored through the lens of exchange theory and the consideration of the psychological contract. While exchange theory provides an explanation for the interdependent and dyadic socio-economic relations present in GI, the psychological contract provides a view on the perceptions and expectations that are embedded in exchange and innovation activities. These two theoretical lenses serve as a foundation for the research to engage with the subjective reality of the grassroots innovators’ experiences. In examining the subjective reality of the innovation experiences of the grassroots innovators; the research thereby discerns the dominant form of exchange and socio-economic structure that fosters GI from ideation to commercial scaling. Through the use of phenomenological exploration and detailed thematic analysis of the innovation experiences of the thirteen Indian grassroots innovators, the research determined the nature and spirit of the relational commercial exchanges that both entail and foster GI. The paper starts off with the discussion of the theoretical foundations of the research. Thereafter, the paper briefly discusses the research methodology and the exchange dynamics present in GI. In assimilating the research findings, the paper enlists the features of exchanges embedded in GI phenomenon and highlights the capacity of relational commercial exchanges in fostering GI. The paper further proposes, through this discussion, an interpretive framework for u...
Juang, C-F, Jeng, T-L & Chang, Y-C 2016, 'An Interpretable Fuzzy System Learned Through Online Rule Generation and Multiobjective ACO With a Mobile Robot Control Application', IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 2706-2718.
View/Download from: Publisher's site
Kaiwartya, O, Abdullah, AH, Cao, Y, Altameem, A, Prasad, M, Lin, C-T & Liu, X 2016, 'Internet of Vehicles: Motivation, Layered Architecture, Network Model, Challenges, and Future Aspects', IEEE Access, vol. 4, pp. 5356-5373.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Internet of Things is smartly changing various existing research areas into new themes, including smart health, smart home, smart industry, and smart transport. Relying on the basis of 'smart transport,' Internet of Vehicles (IoV) is evolving as a new theme of research and development from vehicular ad hoc networks (VANETs). This paper presents a comprehensive framework of IoV with emphasis on layered architecture, protocol stack, network model, challenges, and future aspects. Specifically, following the background on the evolution of VANETs and the motivation for IoV, an overview of IoV is presented as a heterogeneous vehicular network. The IoV includes five types of vehicular communications, namely, vehicle-to-vehicle, vehicle-to-roadside, vehicle-to-infrastructure of cellular networks, vehicle-to-personal devices, and vehicle-to-sensors. A five-layered architecture of IoV is proposed considering functionalities and representations of each layer. A protocol stack for the layered architecture is structured considering management, operational, and security planes. A network model of IoV is proposed based on the three network elements, including cloud, connection, and client. The benefits of the design and development of IoV are highlighted by performing a qualitative comparison between IoV and VANETs. Finally, the challenges ahead for realizing IoV are discussed and future aspects of IoV are envisioned.
Kamaleswaran, R & McGregor, C 2016, 'A Review of Visual Representations of Physiologic Data', JMIR Medical Informatics, vol. 4, no. 4, pp. e31-e31.
View/Download from: Publisher's site
View description>>
Background
Physiological data is derived from electrodes attached directly to patients. Modern patient monitors are capable of sampling data at frequencies in the range of several million bits every hour. Hence the potential for cognitive threat arising from information overload and diminished situational awareness becomes increasingly relevant. A systematic review was conducted to identify novel visual representations of physiologic data that address cognitive, analytic, and monitoring requirements in critical care environments.
Objective
The aims of this review were to identify knowledge pertaining to (1) support for conveying event information via tri-event parameters; (2) identification of the use of visual variables across all physiologic representations; (3) aspects of effective design principles and methodology; (4) frequency of expert consultations; (5) support for user engagement and identifying heuristics for future developments.
Methods
A review was completed of papers published as of August 2016. Titles were first collected and analyzed using an inclusion criteria. Abstracts resulting from the first pass were then analyzed to produce a final set of full papers. Each full paper was passed through a data extraction form eliciting data for comparative analysis.
Results
In total, 39 full papers met all criteria and were selected for full review. Results revealed great diversity in visual representations of physiological data. Visual representations spanned 4 groups including tabular, graph-based, object-based, and metaphoric displays. The metaphoric display was the most popular (n=19), followed by waveform displays typical to the single-sensor-single-indicator paradigm (n=18), and finally object displays (n=9) that utilized spatiotemporal elements to highlight changes in physiologic status. Results obtained from experiments and evaluations suggest specifics related to the optimal use of visual variables, such as colo...
Kamaleswaran, R, Collins, C, James, A & McGregor, C 2016, 'PhysioEx: Visual Analysis of Physiological Event Streams', Computer Graphics Forum, vol. 35, no. 3, pp. 331-340.
View/Download from: Publisher's site
View description>>
AbstractIn this work, we introduce a novel visualization technique, the Temporal Intensity Map, which visually integrates data values over time to reveal the frequency, duration, and timing of significant features in streaming data. We combine the Temporal Intensity Map with several coordinated visualizations of detected events in data streams to create PhysioEx, a visual dashboard for multiple heterogeneous data streams. We have applied PhysioEx in a design study in the field of neonatal medicine, to support clinical researchers exploring physiologic data streams. We evaluated our method through consultations with domain experts. Results show that our tool provides deep insight capabilities, supports hypothesis generation, and can be well integrated into the workflow of clinical researchers.
Kong, Y, Zhang, M & Ye, D 2016, 'An Auction-Based Approach for Group Task Allocation in an Open Network Environment', The Computer Journal, vol. 59, no. 3, pp. 403-422.
View/Download from: Publisher's site
Lancia, G, Mathieson, L & Moscato, P 2016, 'Separating Sets of Strings by Finding Matching Patterns is Almost Always Hard', Theoretical Computer Science, vol. 665, pp. 73-86.
View/Download from: Publisher's site
View description>>
We study the complexity of the problem of searching for a set of patterns that separate two given sets of strings. This problem has applications in a wide variety of areas, most notably in data mining, computational biology, and in understanding the complexity of genetic algorithms. We show that the basic problem of finding a small set of patterns that match one set of strings but do not match any string in a second set is difficult (NP-complete, W[2]-hard when parameterized by the size of the pattern set, and APX-hard). We then perform a detailed parameterized analysis of the problem, separating tractable and intractable variants. In particular we show that parameterizing by the size of the pattern set and the number of strings, and by the size of the alphabet and the number of strings, gives FPT results, amongst others.
Li, D-L, Prasad, M, Lin, C-T & Chang, J-Y 2016, 'Self-adjusting feature maps network and its applications', Neurocomputing, vol. 207, pp. 78-94.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. This paper proposes a novel artificial neural network, called self-adjusting feature map (SAM), and develops its unsupervised learning ability with a self-adjusting mechanism. The trained network structure of representative connected neurons not only displays the spatial relation of the input data distribution but also quantizes the data well. The SAM can automatically isolate a set of connected neurons, in which the number of sets may indicate the number of clusters. The idea of the self-adjusting mechanism is based on combining mathematical statistics with neurologically inspired growth and pruning. In the training process, each representative neuron has three phases: growth, adaptation and decline. The network of representative neurons first creates the necessary neurons according to the local density of the input data in the growth phase. In the adaptation phase, it constantly adjusts the connected/disconnected topology of neighborhood neuron pairs according to the statistics of the input feature data. Finally, the unnecessary neurons of the network are merged or removed in the decline phase. In this paper, we exploit the SAM to handle some peculiar cases that cannot be handled easily by classical unsupervised learning networks such as the self-organizing map (SOM) network. The remarkable characteristics of the SAM can be seen in various real-world cases in the experimental results.
Li, Y, Li, Y & Xu, G 2016, 'Protecting private geosocial networks against practical hybrid attacks with heterogeneous information', Neurocomputing, vol. 210, pp. 81-90.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. GeoSocial Networks (GSNs) are becoming increasingly popular due to their power in providing high-performance and flexible service capabilities. More and more Internet users have accepted this innovative service model. However, even though GSNs have great business value for data analysis when integrated with location information, publishing GSN data may seriously compromise users' privacy. In this paper, we study the identity disclosure problem in publishing GSN data. We first discuss the attack problem by considering both location-based and structure-based properties as background knowledge, and then formalize two general models, named (k,m)-anonymity and (k,m,l)-anonymity. We then propose a complete solution to achieve (k,m)-anonymization and (k,m,l)-anonymization to protect the released data from the above attacks. We also take data utility into consideration by defining specific information loss metrics. It is validated on real-world data that the proposed methods can protect a GSN dataset from the attacks while retaining good utility.
Li, Y, Qiao, Y, Wang, X & Duan, R 2016, 'Tripartite-to-bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces', Communications in Mathematical Physics, vol. 358, no. 2, pp. 791-814.
View/Download from: Publisher's site
View description>>
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate, which turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states, by resorting to certain results on the structure of matrix spaces, including the study of matrix semi-invariants. These formulas give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
Lin, C-T, Chuang, C-H, Kerick, S, Mullen, T, Jung, T-P, Ko, L-W, Chen, S-A, King, J-T & McDowell, K 2016, 'Mind-Wandering Tends to Occur under Low Perceptual Demands during Driving', Scientific Reports, vol. 6, no. 1.
View/Download from: Publisher's site
View description>>
AbstractFluctuations in attention behind the wheel pose a significant risk to driver safety. During transient periods of inattention, drivers may shift their attention towards internally-directed thoughts or feelings at the expense of staying focused on the road. This study examined whether increasing task difficulty, by manipulating the sensory modalities involved as the driver detected the lane-departure in a simulated driving task, would promote a shift of brain activity between different modes of processing, reflected by brain network dynamics on electroencephalographic sources. Results showed that depriving the driver of salient sensory information imposes a relatively more perceptually-demanding task, leading to a stronger activation in the task-positive network. When the vehicle motion feedback is available, the drivers may rely on vehicle motion to perceive the perturbations, which frees attentional capacity and tends to activate the default mode network. Such brain network dynamics could have major implications for understanding fluctuations in driver attention and designing advanced driver assistance systems.
Liu, B, Zhou, W, Zhu, T, Gao, L, Luan, TH & Zhou, H 2016, 'Silence is Golden: Enhancing Privacy of Location-Based Services by Content Broadcasting and Active Caching in Wireless Vehicular Networks', IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9942-9953.
View/Download from: Publisher's site
Liu, B, Zhou, W, Zhu, T, Zhou, H & Lin, X 2016, 'Invisible Hand: A Privacy Preserving Mobile Crowd Sensing Framework Based on Economic Models', IEEE Transactions on Vehicular Technology, vol. 66, no. 5, pp. 1-1.
View/Download from: Publisher's site
Llopis-Albert, C, Merigó, JM & Xu, Y 2016, 'A coupled stochastic inverse/sharp interface seawater intrusion approach for coastal aquifers under groundwater parameter uncertainty', Journal of Hydrology, vol. 540, pp. 774-783.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. This paper presents an alternative approach to deal with seawater intrusion problems that overcomes some of the limitations of previous works by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named the GC method. On the one hand, SWI2 allows vertically integrated variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems, a reduction in the number of required model cells and the elimination of the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method allows dealing with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing reliability in meeting the environmental standards. The methodology has been successfully applied to a transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal aquifer system, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also allows partially overcoming the neglected diffusion and dispersion processes after the conditioning process, since the uncertainty is reduced and results are closer to available data.
Llopis-Albert, C, Palacios-Marqués, D & Merigó, JM 2016, 'Decision making under uncertainty in environmental projects using mathematical simulation modeling', Environmental Earth Sciences, vol. 75, no. 19.
View/Download from: Publisher's site
View description>>
© 2016, Springer-Verlag Berlin Heidelberg. In decision-making processes, reliability and risk aversion play a decisive role. The aim of this study is to perform an uncertainty assessment of the effects of future scenarios of sustainable groundwater pumping strategies on the quantitative and chemical status of an aquifer. The good status of the aquifer is defined according to the terms established by the EU Water Framework Directive (WFD). A decision support system (DSS) is presented, which makes use of a stochastic inverse model (GC method) and geostatistical approaches to calibrate equally likely realizations of hydraulic conductivity (K) fields for a particular case study. These K fields are conditional to available field data, including hard and soft information. Then, different future scenarios of groundwater pumping strategies are generated, based on historical information and WFD standards, and simulated for each one of the equally likely K fields. The future scenarios lead to different environmental impacts and levels of socioeconomic development of the region and, hence, to a different degree of acceptance among stakeholders. We have identified the different stakeholders implied in the decision-making process, the objectives pursued and the alternative actions that should be considered by stakeholders in a public participation project (PPP). The Monte Carlo simulation provides a highly effective way to perform uncertainty assessment and allows the results to be presented in a simple and understandable way, even for non-expert stakeholders. The methodology has been successfully applied to a real case study and lays the foundations to perform a PPP and stakeholders’ involvement in a decision-making process as required by the WFD. The results of the methodology can help the decision-making process to come up with the best policies and regulations for a groundwater system under uncertainty in groundwater parameters and management strategies and involving stakeh...
Loke, L & Kocaballi, AB 2016, 'Choreographic Inscriptions: A Framework for Exploring Sociomaterial Influences on Qualities of Movement for HCI', Human Technology, vol. 12, no. 1, pp. 31-55.
View/Download from: Publisher's site
View description>>
© 2016 Lian Loke & A. Baki Kocaballi, and the Agora Center, University of Jyväskylä. With the rise of ubiquitous computing technologies in everyday life, the daily actions of people are becoming ever more choreographed by the interactions available through technology. By combining the notion of inscriptions from actor-network theory and the qualitative descriptors of movement from Laban movement analysis, an analytic framework is proposed for exploring how the interplay of material and social inscriptions gives rise to movement patterns and behaviors, translated into choreographic inscriptions described with Laban effort and shape. It is demonstrated through a case study of an affective gesture mobile device. The framework provides an understanding of (a) how movement qualities are shaped by social and material inscriptions, (b) how the relative strength of inscriptions on movements may change according to different settings and user appropriation over time, and (c) how transforming inscriptions by design across different mediums can generate action spaces with varying degrees of openness.
Lopez-Lorca, A, Beydoun, G, Valencia-Garcia, R & Martinez-Bejar, R 2016, 'Automating the reuse of domain knowledge to improve the modelling outcome from interactions between developers and clients', COMPUTING, vol. 98, no. 6, pp. 609-640.
View/Download from: Publisher's site
Lopez-Lorca, AA, Beydoun, G, Valencia-Garcia, R & Martinez-Bejar, R 2016, 'Supporting agent oriented requirement analysis with ontologies', INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, vol. 87, pp. 20-37.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Requirements analysis activities underpin the success of the software development lifecycle. Subsequent errors in the requirements models can propagate to models in later phases and become much costlier to fix. Errors in requirements analysis are more likely when developing complex systems. In particular, errors due to miscommunication and misinterpretation of a client's intentions are common. Ontologies relying on formal descriptions of semantics have often been used in multi-agent systems (MAS) development to support various activities and generally improve the complex systems produced. However, their use during requirements analysis to validate the match with the client's conceptualisation is largely unexplored. This article presents an ontology-driven validation process to support the requirements analysis of MAS models. This process is underpinned by an agent-based metamodel that describes commonly used informal agent requirement models. The process concurrently and incrementally validates the informal MAS requirement models produced. The synthesis of the process is first justified and illustrated in a manual tracing of the process. The paper then describes an interactive support tool to harness the formal semantics of ontologies and bypass the costly manual effort. The validation process is evaluated and illustrated using three case studies.
Lu, J, Han, J, Hu, Y & Zhang, G 2016, 'Multilevel decision-making: A survey', INFORMATION SCIENCES, vol. 346, pp. 463-487.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. All rights reserved. Multilevel decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a multiple-level hierarchy. Significant efforts have been devoted to understanding the fundamental concepts and developing diverse solution algorithms associated with multilevel decision-making by researchers in both mathematics/computer science and business. Researchers have emphasized the importance of developing a range of multilevel decision-making techniques to handle a wide variety of management and optimization problems in real-world applications, and have successfully gained experience in this area. It is thus vital that a high-quality, instructive review of current trends be conducted, covering not only theoretical research results but also practical developments in multilevel decision-making in business. This paper systematically reviews up-to-date multilevel decision-making techniques and clusters related technique developments into four main categories: bi-level decision-making (including multi-objective and multi-follower situations), tri-level decision-making, fuzzy multilevel decision-making, and the applications of these techniques in different domains. By providing state-of-the-art knowledge, this survey will directly support researchers and practical professionals in their understanding of developments in theoretical research results and applications in relation to multilevel decision-making techniques.
Lu, M, Liang, J, Wang, Z & Yuan, X 2016, 'Exploring OD patterns of interested region based on taxi trajectories', Journal of Visualization, vol. 19, no. 4, pp. 811-821.
View/Download from: Publisher's site
View description>>
© 2016, The Visualization Society of Japan. Abstract: Traffic in different regions of a city exhibits different Origin-Destination (OD) patterns, which potentially reveal the surrounding traffic context and social functions. In this work, we present a visual analysis system to explore the OD patterns of regions of interest based on taxi trajectories. The system integrates interactive trajectory filtering with visual exploration of OD patterns. Trajectories related to a region of interest are selected by a suite of graphical filtering tools, from which OD clusters are detected automatically. OD traffic patterns can be explored at two levels: an overview of OD and detailed exploration of dynamic OD patterns, including information on dynamic traffic volume and travel time. By testing on real taxi trajectory data sets, we demonstrate the effectiveness of our system.
Lu, N, Lu, J, Zhang, G & Lopez de Mantaras, R 2016, 'A concept drift-tolerant case-base editing technique', ARTIFICIAL INTELLIGENCE, vol. 230, pp. 108-133.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. All rights reserved. The evolving nature and accumulating volume of real-world data inevitably give rise to the so-called 'concept drift' issue, causing many deployed Case-Based Reasoning (CBR) systems to require additional maintenance procedures. In Case-base Maintenance (CBM), case-base editing strategies to revise the case-base have proven to be effective instance selection approaches for handling concept drift. Motivated by current issues related to CBR techniques in handling concept drift, we present a two-stage case-base editing technique. In Stage 1, we propose a Noise-Enhanced Fast Context Switch (NEFCS) algorithm, which targets the removal of noise in a dynamic environment, and in Stage 2, we develop an innovative Stepwise Redundancy Removal (SRR) algorithm, which reduces the size of the case-base by eliminating redundancies while preserving the case-base coverage. Experimental evaluations on several public real-world datasets show that our case-base editing technique significantly improves accuracy compared to other case-base editing approaches on concept drift tasks, while preserving its effectiveness on static tasks.
Luccio, F, Mans, B, Mathieson, L & Pagli, L 2016, 'Complete Balancing via Rotation', The Computer Journal, vol. 59, no. 8, pp. 1252-1263.
View/Download from: Publisher's site
Luo, S, Yu, H, Zhao, Y, Wang, S, Yu, S & Li, L 2016, 'Towards Practical and Near-Optimal Coflow Scheduling for Data Center Networks', IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 11, pp. 3366-3380.
View/Download from: Publisher's site
View description>>
In current data centers, an application (e.g., MapReduce, Dryad, search platform, etc.) usually generates a group of parallel flows to complete a job. These flows compose a coflow and only completing them all is meaningful to the application. Accordingly, minimizing the average Coflow Completion Time (CCT) becomes a critical objective of flow scheduling. However, achieving this goal in today's Data Center Networks (DCNs) is quite challenging, not only because the schedule problem is theoretically NP-hard, but also because it is tough to perform practical flow scheduling in large-scale DCNs. In this paper, we find that minimizing the average CCT of a set of coflows is equivalent to the well-known problem of minimizing the sum of completion times in a concurrent open shop. As there are abundant existing solutions for concurrent open shop, we open up a variety of techniques for coflow scheduling. Inspired by the best known result, we derive a 2-approximation algorithm for coflow scheduling, and further develop a decentralized coflow scheduling system, D-CAS, which avoids the system problems associated with current centralized proposals while addressing the performance challenges of decentralized suggestions. Trace-driven simulations indicate that D-CAS achieves a performance close to Varys, the state-of-the-art centralized method, and outperforms Baraat, the only existing decentralized method, significantly.
Luo, X, Xuan, J, Lu, J & Zhang, G 2016, 'Measuring the Semantic Uncertainty of News Events for Evolution Potential Estimation', ACM TRANSACTIONS ON INFORMATION SYSTEMS, vol. 34, no. 4.
View/Download from: Publisher's site
View description>>
© 2016 ACM. The evolution potential estimation of news events can support the decision making of both corporations and governments. For example, a corporation could manage a public relations crisis in a timely manner if a negative news event about the corporation is known in advance to have large evolution potential. However, existing state-of-the-art methods are mainly based on time series of historical data, which are not suitable for news events with limited historical data and bursty properties. In this article, we propose a purely content-based method to estimate the evolution potential of news events. The proposed method considers a news event at a given time point as a system composed of different keywords, and the uncertainty of this system is defined and measured as the Semantic Uncertainty of the news event. At the same time, an uncertainty space is constructed with two extreme states: the most uncertain state and the most certain state. We believe that the Semantic Uncertainty correlates with the content evolution of news events, so it can be used to estimate their evolution potential. To verify the proposed method, we present detailed experimental setups and results measuring the correlation of the Semantic Uncertainty with the Content Change of news events using collected news event data. The results show that the correlation does exist and is stronger than the correlation between the value from the time-series-based method and the Content Change. Therefore, we can use the Semantic Uncertainty to estimate the evolution potential of news events.
Malomo, L, Pietroni, N, Bickel, B & Cignoni, P 2016, 'FlexMolds: automatic design of flexible shells for molding.', ACM Trans. Graph., vol. 35, pp. 223:1-223:1.
View/Download from: Publisher's site
Mathieson, L 2016, 'Synergies in critical reflective practice and science: Science as reflection and reflection as science', Journal of University Teaching and Learning Practice, vol. 13, no. 2, pp. 1-13.
View description>>
The conceptions of reflective practice in education have their roots at least partly in the work of Dewey, who describes reflection as “the active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (Dewey 1933, p.9). This conception of reflection has carried on into more-focused efforts to describe critical reflection as a tool for improving professional practice (where academic and educational practice is the particular interest of this study); “… some puzzling or troubling or interesting phenomenon” allows the practitioner to access “the understandings which have been implicit in his action, understandings which he surfaces, criticizes, restructures, and embodies in further action” (Schön 1983, p. 50). Both of these descriptions embody a central idea of critical reflective practice: that the examination of practice involves the divination (in a rational, critical sense) of order and perhaps meaning from the facts at hand (which, in turn, are brought to light by the events that occur as the results of implementation of theory). As part of a lecture series, Gottlieb defined science as “an intellectual activity carried out by humans to understand the structure and functions of the world in which they live” (Gottlieb 1997). While science and critical reflective practice attempt to build models about different parts of our world – the natural world and the world of professional (educational) practice respectively – both embody certain underlying aims and methodologies. Indeed, it is striking that in these definitions the simple replacement of the terminology of reflective practice with the terminology of science (or vice versa) leads to a perfectly comprehensible definition of either. It is this confluence that this paper studies, building from two separate foundations, critical reflective practice and science. Via their models and exem...
McGahan, WT, Ernst, H & Dyson, LE 2016, 'Individual Learning Strategies and Choice in Student-Generated Multimedia', International Journal of Mobile and Blended Learning, vol. 8, no. 3, pp. 1-18.
View/Download from: Publisher's site
View description>>
There has been an increasing focus on student-generated multimedia assessment as a way of introducing the benefits of both visual literacy and peer-mediated learning into university courses. One such assessment was offered to first-year health science students but, contrary to expectations, led to poorer performance in their end-of-semester examinations. Following an analysis, the assignment was redesigned to offer students a choice of either a group-based animation task or an individual written task. Results showed improved performance on the assignment when students were offered a choice of assignments over when they were offered only the multimedia assignment. Student feedback indicated that students adopt deliberate individual learning strategies when offered choices in assessment. The study suggests that assumptions regarding the superiority of student-generated multimedia over more traditional assessments are not always correct, but that students' agency and individual preferences need to be recognized.
Merigó, JM & Núñez, A 2016, 'Influential journals in health research: a bibliometric study', Globalization and Health, vol. 12, no. 1, p. 46.
View/Download from: Publisher's site
View description>>
Background
There is a wide range of intellectual work written about health research, which has been shaped by the evolution of diseases. This study aims to identify the leading journals over the last 25 years (1990-2014) according to a wide range of bibliometric indicators.
Methods
The study develops a bibliometric overview of all the journals that are currently indexed in Web of Science (WoS) database in any of the four categories connected to health research. The work classifies health research in nine subfields: Public Health, Environmental and Occupational Health, Health Management and Economics, Health Promotion and Health Behavior, Epidemiology, Health Policy and Services, Medicine, Health Informatics, Engineering and Technology, and Primary Care.
Results
The results indicate a wide dispersion between categories, with the American Journal of Epidemiology, Environmental Health Perspectives, the American Journal of Public Health, and Social Science & Medicine being the journals that have received the highest number of citations over the last 25 years. According to other indicators, such as the h-index and citations per paper, some other journals, such as the Annual Review of Public Health and Medical Care, obtain better results, which shows the wide diversity and profiles of outlets available in the scientific community. The results are grouped and studied according to the nine subfields in order to identify the leading journals in each specific subdiscipline of health.
Conclusions
The work identifies the leading journals in health research through a bibliometric approach. The analysis provides a thorough overview of the results of health journals. It is worth noting that many journals have entered the WoS database in recent years, in many cases to fill a specific niche that has emerged in the literature, although the most popular ones have been in the database for a long time.
Merigó, JM, Cancino, CA, Coronado, F & Urbano, D 2016, 'Academic research in innovation: a country analysis', Scientometrics, vol. 108, no. 2, pp. 559-593.
View/Download from: Publisher's site
Merigó, JM, Gil-Lafuente, AM & Gil-Lafuente, J 2016, 'Business, industrial marketing and uncertainty', Journal of Business & Industrial Marketing, vol. 31, no. 3, pp. 325-327.
View/Download from: Publisher's site
View description>>
Purpose
This special issue of the Journal of Business & Industrial Marketing, entitled “Business, Industrial Marketing and Uncertainty”, presents selected extended studies that were presented at the European Academy of Management and Business Economics Conference (AEDEM 2012).
Design/methodology/approach
The main focus of this year was reflected in the slogan: “Creating new opportunities in an uncertain environment”. The objective was to show the importance that uncertainty has in our current world, strongly affected by many complexities and modern developments, especially through the new technological advances.
Findings
One fundamental reason that explains the economic crisis is that the government and companies were not well prepared for these critical situations. And the main justification for this is that they did not have enough information. Otherwise, they would have tried any possible strategy to avoid the crisis. Usually, uncertainty is defined as the situation with unknown information in the environment.
Originality/value
From a theoretical perspective, the problem here is that enterprises and governments should assess the information and the uncertainty in a more appropriate way. Usually, they have some studies in this direction, but many times, it is not enough, as it was proved in the last economic crisis.
Merigó, JM, Palacios-Marqués, D & Ribeiro-Navarrete, B 2016, 'Corrigendum to “Aggregation systems for sales forecasting” [J. Bus. Res. 68(11) (2015) 2299–2304]', Journal of Business Research, vol. 69, no. 6, pp. 2325-2325.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Zeng, S 2016, 'Subjective and objective information in linguistic multi-criteria group decision making', European Journal of Operational Research, vol. 248, no. 2, pp. 522-531.
View/Download from: Publisher's site
View description>>
Linguistic decision making systems represent situations that cannot be assessed with numerical information but can be assessed with linguistic variables. This paper introduces new linguistic aggregation operators in order to develop more efficient decision making systems. The linguistic probabilistic weighted average (LPWA) is presented. Its main advantage is that it considers subjective and objective information in the same formulation, while accounting for the degree of importance that each concept has in the aggregation. A key feature of the LPWA operator is that it encompasses a wide range of linguistic aggregation operators, including the linguistic weighted average, the linguistic probabilistic aggregation and the linguistic average. Further generalizations are presented by using quasi-arithmetic means and moving averages. An application to linguistic multi-criteria group decision making under subjective and objective risk is also presented in the context of European Union law.
Merigó, JM, Peris-Ortíz, M, Navarro-García, A & Rueda-Armengot, C 2016, 'Aggregation operators in economic growth analysis and entrepreneurial group decision-making', Applied Soft Computing, vol. 47, pp. 141-150.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. All rights reserved. An economic crisis can be measured from different perspectives. A very commonly used measure is that of a country's economic growth. When growth is lower than desired, the economy is assumed to be near stagnation or in an economic recession. This paper connects entrepreneurship and economic growth in decision-making problems assessed with modern aggregation systems. Aggregation techniques can represent information more comprehensively in uncertain and imprecise environments. This paper suggests several practical aggregation operators for this purpose, such as the ordered weighted average and the probabilistic ordered weighted averaging weighted average. Other aggregation systems based on macroeconomic theory are also introduced. The paper concludes with an application in an entrepreneurial uncertain multi-criteria multi-person decision-making problem regarding the selection of optimal markets for creating a new company. This approach is based on the use of economic growth as the fundamental variable for determining the preferred solution.
Merigó, JM, Rocafort, A & Aznar-Alarcón, JP 2016, 'Bibliometric overview of business & economics research', Journal of Business Economics and Management, vol. 17, no. 3, pp. 397-413.
View/Download from: Publisher's site
View description>>
Bibliometrics is the quantitative study of bibliographic information. It classifies the information according to different criteria including authors, journals, institutions and countries. This paper presents a general bibliometric overview of the most influential research in business & economics according to the information found in the Web of Science. It includes research from different subcategories including business, business finance, economics and management. To do so, four general lists are presented: the 50 most cited papers in business & economics of all time, the 40 most influential journals, the 40 most relevant institutions and the most influential countries. The results provide a general picture of the most significant research in business & economics. This information is very useful for identifying the leading trends in this area.
Merigó, JM, Yang, J-B & Xu, D-L 2016, 'Demand Analysis with Aggregation Systems', International Journal of Intelligent Systems, vol. 31, no. 5, pp. 425-443.
View/Download from: Publisher's site
Van Meter, R & Devitt, SJ 2016, 'Local and Distributed Quantum Computation', IEEE Computer, vol. 49, no. 9, pp. 31-42.
View/Download from: Publisher's site
View description>>
Experimental groups are now fabricating quantum processors powerful enough to execute small instances of quantum algorithms and definitively demonstrate quantum error correction that extends the lifetime of quantum data, adding urgency to architectural investigations. Although other options continue to be explored, effort is coalescing around topological coding models as the most practical implementation option for error correction on realizable microarchitectures. Scalability concerns have also motivated architects to propose distributed memory multicomputer architectures, with experimental efforts demonstrating some of the basic building blocks to make such designs possible. We compile the latest results from a variety of different systems aiming at the construction of a scalable quantum computer.
Mols, I, van den Hoven, E & Eggen, B 2016, 'Ritual Camera: Exploring Domestic Technology to Remember Everyday Life', IEEE PERVASIVE COMPUTING, vol. 15, no. 2, pp. 48-58.
View/Download from: Publisher's site
Motes, KR, Mann, RL, Olson, JP, Studer, NM, Bergeron, EA, Gilchrist, A, Dowling, JP, Berry, DW & Rohde, PP 2016, 'Efficient recycling strategies for preparing large Fock states from single-photon sources: Applications to quantum metrology', Physical Review A, vol. 94, no. 1, p. 012344.
View/Download from: Publisher's site
View description>>
Fock states are a fundamental resource for many quantum technologies such as quantum metrology. While much progress has been made in single-photon source technologies, preparing Fock states with large photon number remains challenging. We present and analyze a bootstrapped approach for non-deterministically preparing large photon-number Fock states by iteratively fusing smaller Fock states on a beamsplitter. We show that by employing state recycling we are able to exponentially improve the preparation rate over conventional schemes, allowing the efficient preparation of large Fock states. The scheme requires single-photon sources, beamsplitters, number-resolved photo-detectors, fast-feedforward, and an optical quantum memory.
Naderpour, M, Lu, J & Zhang, G 2016, 'A safety-critical decision support system evaluation using situation awareness and workload measures', RELIABILITY ENGINEERING & SYSTEM SAFETY, vol. 150, pp. 147-159.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. To ensure the safety of operations in safety-critical systems, it is necessary to maintain operators' situation awareness (SA) at a high level. A situation awareness support system (SASS) has therefore been developed to handle uncertain situations [1]. This paper aims to systematically evaluate the enhancement of SA in SASS by applying a multi-perspective approach. The approach consists of two SA metrics, SAGAT and SART, and one workload metric, NASA-TLX. The first two metrics are used for the direct objective and subjective measurement of SA, while the third is used to estimate operator workload. The approach is applied in a safety-critical environment called residue treater, located at a chemical plant in which a poor human-system interface reduced the operators' SA and caused one of the worst accidents in US history. A counterbalanced within-subjects experiment is performed using a virtual environment interface with and without the support of SASS. The results indicate that SASS improves operators' SA, and specifically has benefits for SA levels 2 and 3. In addition, it is concluded that SASS reduces operator workload, although further investigations in different environments with a larger number of participants have been suggested.
Nagayama, S, Choi, B-S, Devitt, S, Suzuki, S & Van Meter, R 2016, 'Interoperability in encoded quantum repeater networks', Physical Review A, vol. 93, no. 4.
View/Download from: Publisher's site
View description>>
The future of quantum repeater networking will require interoperability between various error-correcting codes. A few specific code conversions and even a generalized method are known; however, no detailed analysis of these techniques in the context of quantum networking has been performed. In this paper we analyze a generalized procedure to create Bell pairs encoded heterogeneously between two separate codes used often in error-corrected quantum repeater network designs. We begin with a physical Bell pair and then encode each qubit in a different error-correcting code, using entanglement purification to increase the fidelity. We investigate three separate protocols for preparing the purified encoded Bell pair. We calculate the error probability of those schemes between the Steane [[7,1,3]] code, a distance-3 surface code, and single physical qubits by Monte Carlo simulation under a standard Pauli error model and estimate the resource efficiency of the procedures. A local gate error rate of 10^-3 allows us to create high-fidelity logical Bell pairs between any of our chosen codes. We find that a postselected model, where any detected parity flips in code stabilizers result in a restart of the protocol, performs the best.
Nagayama, S, Fowler, AG, Horsman, D, Devitt, SJ & Van Meter, R 2016, 'Surface Code Error Correction on a Defective Lattice', New Journal of Physics, vol. 19, no. 2, p. 023050.
View/Download from: Publisher's site
View description>>
The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error correction schemes. In this paper we have simulated statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90% and 95%, showing practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquettes solution against dynamic losses for the surface code to handle static losses such as physically faulty qubits. The single-fault analysis shows that a static loss at the periphery of the lattice has a less negative effect than a static loss at the center. The randomly-faulty analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is ~0.3%, and a code distance of seven suppresses the residual error rate below the original error rate at p = 0.1%. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with post-correction residual error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.
Nemoto, K, Trupke, M, Devitt, SJ, Scharfenberger, B, Buczak, K, Schmiedmayer, J & Munro, WJ 2016, 'Photonic Quantum Networks formed from NV− centers', Scientific Reports, vol. 6, no. 1, p. 26284.
View/Download from: Publisher's site
View description>>
In this article we present a simple repeater scheme based on the negatively-charged nitrogen vacancy centre in diamond. Each repeater node is built from modules comprising an optical cavity containing a single NV−, with one nuclear spin from 15N as quantum memory. The module uses only deterministic processes and interactions to achieve high fidelity operations (>99%) and modules are connected by optical fiber. In the repeater node architecture, the photon-mediated processes between modules can in principle be deterministic; however, current limitations on optical components make these processes probabilistic but heralded. Our resource-modest repeater architecture contains two modules at each node, and the repeater nodes are then connected by entangled photon pairs. We discuss the performance of such a quantum repeater network with modest resources and then incorporate more resource-intensive strategies step by step. Our architecture should allow large-scale quantum information networks with existing or near-future technology.
Nguyen, Q, Khalifa, N, Alzamora, P, Gleeson, A, Catchpoole, D, Kennedy, P & Simoff, S 2016, 'Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions', Journal of Imaging, vol. 2, no. 4, p. 29.
View/Download from: Publisher's site
View description>>
© 2016 by the authors. In cancer biology, genomics represents a big data problem that needs accurate visual data processing and analytics. The human genome is very complex with thousands of genes that contain the information about the individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations from the overview of the entire patient cohort to the detail view of individual genes, our work potentially guides effective treatment decisions for childhood cancer patients. The framework consists of multiple components enabling the complete analytics supporting personalised medicines, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison and user-centric interaction and exploration based on feature selection. In addition to the traditional way to visualise data, we utilise the Unity3D platform for developing a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies with datasets from childhood cancers, B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS) patients, on how to guide the effective treatment decision in the cohort.
Oberst, S, Lai, JCS & Evans, TA 2016, 'Termites utilise clay to build structural supports and so increase foraging resources', Scientific Reports, vol. 6, no. 1.
View/Download from: Publisher's site
View description>>
Many termite species use clay to build foraging galleries and mound-nests. In some cases clay is placed within excavations of their wooden food, such as living trees or timber in buildings; however, the purpose of this clay is unclear. We tested the hypotheses that termites can identify load-bearing wood and that they use clay to provide mechanical support of the load and thus allow them to eat the wood. In field and laboratory experiments, we show that the lower termite Coptotermes acinaciformis, the most basal species to build a mound-nest, can distinguish unloaded from loaded wood and use clay differently when eating each type. The termites target unloaded wood preferentially and use thin clay sheeting to camouflage themselves while eating the unloaded wood. The termites attack loaded wood secondarily and build thick, load-bearing clay walls when they do. The termites add clay and build thicker walls as the load-bearing wood is consumed. The use of clay to support wood under load unlocks otherwise unavailable food resources. This behaviour may represent an evolutionary step from foraging behaviour to nest building in lower termites.
Oberst, S, Zhang, Z & Lai, JCS 2016, 'The Role of Nonlinearity and Uncertainty in Assessing Disc Brake Squeal Propensity', SAE International Journal of Passenger Cars - Mechanical Systems, vol. 9, no. 3, pp. 980-986.
View/Download from: Publisher's site
Othman, SH & Beydoun, G 2016, 'A metamodel-based knowledge sharing system for disaster management', EXPERT SYSTEMS WITH APPLICATIONS, vol. 63, pp. 49-65.
View/Download from: Publisher's site
Paler, A, Devitt, SJ & Fowler, AG 2016, 'Synthesis of Arbitrary Quantum Circuits to Topological Assembly', Scientific Reports, vol. 6, no. 1, p. 30600.
View/Download from: Publisher's site
View description>>
Given a quantum algorithm, it is highly nontrivial to devise an efficient sequence of physical gates implementing the algorithm on real hardware and incorporating topological quantum error correction. In this paper, we present a first step towards this goal, focusing on generating correct and simple arrangements of topological structures that correspond to a given quantum circuit and largely neglecting their efficiency. We detail the many challenges that will need to be tackled in the pursuit of efficiency. The software source code can be consulted at https://github.com/alexandrupaler/tqec.
Paler, A, Wille, R & Devitt, SJ 2016, 'Wire Recycling for Quantum Circuit Optimization', Physical Review A, vol. 94, no. 4, p. 042337.
View/Download from: Publisher's site
View description>>
Quantum information processing is expressed using quantum bits (qubits) and quantum gates which are arranged in terms of quantum circuits. Here, each qubit is associated with a quantum circuit wire which is used to conduct the desired operations. Most existing quantum circuits allocate a single quantum circuit wire for each qubit and, hence, introduce a significant overhead. In fact, qubits are usually not needed during the entire computation but only between their initialization and measurement. Before and after that, the corresponding wires may be used by other qubits. In this work, we propose a solution which exploits this fact in order to optimize the design of quantum circuits with respect to the required wires. To this end, we introduce a representation of the lifetimes of all qubits which is used to analyze the respective need for wires. Based on this analysis, a method is proposed which 'recycles' the available wires and, by this, reduces the size of the resulting circuit. Experimental evaluations based on established reversible and fault-tolerant quantum circuits confirm that the proposed solution reduces the number of wires by more than 90% compared to unoptimized quantum circuits.
Percival, J & McGregor, C 2016, 'An Evaluation of Understandability of Patient Journey Models in Mental Health', JMIR Human Factors, vol. 3, no. 2, p. e20.
View/Download from: Publisher's site
View description>>
BACKGROUND: There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. OBJECTIVES: This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. METHOD: Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. RESULTS: The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. CONCLUSIONS: The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on exte...
Pietroni, N, Puppo, E, Marcias, G, Roberto, R & Cignoni, P 2016, 'Tracing Field-Coherent Quad Layouts.', Comput. Graph. Forum, vol. 35, pp. 485-496.
View/Download from: Publisher's site
Pileggi, SF 2016, 'Is Big Data the New "God" on Earth? [Opinion]', IEEE Technology and Society Magazine, vol. 35, no. 1, pp. 18-20.
View/Download from: Publisher's site
Polhill, JG, Filatova, T, Schlüter, M & Voinov, A 2016, 'Modelling systemic change in coupled socio-environmental systems', Environmental Modelling & Software, vol. 75, pp. 318-332.
View/Download from: Publisher's site
Polhill, JG, Filatova, T, Schlüter, M & Voinov, A 2016, 'Preface to the thematic issue on modelling systemic change in coupled socio-environmental systems', Environmental Modelling & Software, vol. 75, pp. 317-317.
View/Download from: Publisher's site
Pratama, M, Lu, J & Zhang, G 2016, 'Evolving Type-2 Fuzzy Classifier', IEEE TRANSACTIONS ON FUZZY SYSTEMS, vol. 24, no. 3, pp. 574-589.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Evolving fuzzy classifiers (EFCs) have achieved immense success in dealing with nonstationary data streams because of their flexible characteristics. Nonetheless, most real-world data streams feature highly uncertain characteristics, which cannot be handled by the type-1 EFC. A novel interval type-2 fuzzy classifier, namely the evolving type-2 classifier (eT2Class), is proposed in this paper, which constructs an evolving working principle in the framework of an interval type-2 fuzzy system. The eT2Class commences its learning process from scratch with an empty or initially trained rule base, and its fuzzy rules can be automatically grown, pruned, recalled, and merged on the fly with reference to the summarization power and generalization power of data streams. In addition, the eT2Class is driven by a generalized interval type-2 fuzzy rule, where the premise part is composed of the multivariate Gaussian function with an uncertain nondiagonal covariance matrix, while employing a subset of the nonlinear Chebyshev polynomial as the rule consequents. The efficacy of the eT2Class has been rigorously assessed in numerous real-world and artificial study cases, benchmarked against state-of-the-art classifiers, and validated through various statistical tests. Our numerical results demonstrate that the eT2Class produces more reliable classification rates, while retaining a more compact and parsimonious rule base than state-of-the-art EFCs recently published in the literature.
Pratama, M, Lu, J, Lughofer, E, Zhang, G & Anavatti, S 2016, 'Scaffolding type-2 classifier for incremental learning under concept drifts', NEUROCOMPUTING, vol. 191, pp. 304-329.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. The proposal of a meta-cognitive learning machine that embodies the three pillars of human learning: what-to-learn, how-to-learn, and when-to-learn, has enriched the landscape of evolving systems. The majority of meta-cognitive learning machines in the literature have not, however, characterized a plug-and-play working principle, and thus require supplementary learning modules to be pre- or post-processed. In addition, they still rely on the type-1 neuron, which has problems handling uncertainty. This paper proposes the Scaffolding Type-2 Classifier (ST2Class). ST2Class is a novel meta-cognitive scaffolding classifier that operates completely in local and incremental learning modes. It is built upon a multivariable interval type-2 Fuzzy Neural Network (FNN) which is driven by a multivariate Gaussian function in the hidden layer and a non-linear wavelet polynomial in the output layer. The what-to-learn module is created by virtue of a novel active learning scenario termed the uncertainty measure; the how-to-learn module is based on the renowned Schema and Scaffolding theories; and the when-to-learn module uses a standard sample reserved strategy. The viability of ST2Class is numerically benchmarked against state-of-the-art classifiers on 12 data streams, and is statistically validated by thorough statistical tests, in which it achieves high accuracy while retaining low complexity.
Pratama, M, Zhang, G, Er, MJ & Anavatti, S 2016, 'An Incremental Type-2 Meta-Cognitive Extreme Learning Machine', IEEE Transactions on Cybernetics, vol. 47, no. 2, pp. 1-15.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with these four issues in this paper. The eT2ELM presents the three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of the Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging results in delivering reliable prediction, while sustaining low complexity.
Ramaprasad, A, Win, KT, Syn, T, Beydoun, G & Dawson, L 2016, 'Australia's National Health Programs: An Ontological Mapping.', Australas. J. Inf. Syst., vol. 20, pp. 1-21.
View/Download from: Publisher's site
View description>>
Australia has a large number of health program initiatives whose comprehensive assessment will help refine and redefine priorities by highlighting areas of emphasis, under-emphasis, and non-emphasis. The objectives of our research are to: (a) systematically map all the programs onto an ontological framework, and (b) systemically analyse their relative emphases at different levels of granularity. We mapped all the health program initiatives onto an ontology with five dimensions, namely: (a) Policy-scope, (b) Policy-focus, (c) Outcomes, (d) Type of care, and (e) Population served. Each dimension is expanded into a taxonomy of its constituent elements. Each combination of elements from the five dimensions is a possible policy initiative component. There are 30,030 possible components encapsulated in the ontology. It includes, for example: (a) National financial policies on accessibility of preventive care for family, and (b) Local-urban regulatory policies on cost of palliative care for individual-aged. Four of the authors mapped all of Australia's health programs and initiatives on to the ontology. Visualizations of the data are used to highlight the relative emphases in the program initiatives. The dominant emphasis of the program initiatives is: [National] [educational, personnel-physician, information] policies on [accessibility, quality] of [preventive, wellness] care for the [community]. However, although (a) information is emphasized technology is not and (b) accessibility and quality are emphasized cost, satisfaction, and quality are not. The ontology and the results of the mapping can help systematically reassess and redirect the relative emphases of the programs and initiatives from a systemic perspective.
Sanders, YR, Wallman, JJ & Sanders, BC 2016, 'Bounding quantum gate error rate based on reported average fidelity', New Journal of Physics, vol. 18, no. 1, pp. 012002-012002.
View/Download from: Publisher's site
View description>>
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
Singh, J, Prasad, M, Prasad, OK, Er, MJ, Saxena, AK & Lin, C-T 2016, 'A Novel Fuzzy Logic Model for Pseudo-Relevance Feedback-Based Query Expansion', International Journal of Fuzzy Systems, vol. 18, no. 6, pp. 980-989.
View/Download from: Publisher's site
View description>>
© 2016, Taiwan Fuzzy Systems Association and Springer-Verlag Berlin Heidelberg. In this paper, a novel fuzzy logic-based expansion approach considering the relevance scores produced by different rank aggregation approaches is proposed. It is well known that different rank aggregation approaches yield different relevance scores for each term. The proposed fuzzy logic approach combines the different weights of each term by using fuzzy rules to infer the weights of the additional query terms. Experimental results demonstrate that the proposed approach achieves significant improvement over individual expansion, aggregated and other related state-of-the-art methods.
Sun, F, Liu, B, Hou, F, Zhou, H, Chen, J, Rui, Y & Gui, L 2016, 'A QoE centric distributed caching approach for vehicular video streaming in cellular networks', Wireless Communications and Mobile Computing, vol. 16, no. 12, pp. 1612-1624.
View/Download from: Publisher's site
View description>>
Distributed caching-empowered wireless networks can greatly improve the efficiency of data storage and transmission and thereby the users' quality of experience (QoE). However, how this technology can alleviate the network access pressure while ensuring the consistency of content delivery is still an open question, especially in the case where the users are in fast motion. Therefore, in this paper, we investigate the caching issue emerging from a forthcoming scenario where vehicular video streaming is performed under cellular networks. Specifically, a QoE centric distributed caching approach is proposed to fulfill as many users' requests as possible, considering the limited caching space of base stations and basic user experience guarantees. Firstly, a QoE evaluation model is established using verified empirical data. Also, the mathematical relationship between the streaming bit rate and actual storage space is developed. Then, the distributed caching management for vehicular video streaming is formulated as a constrained optimization problem and solved with the generalized reduced gradient method. Simulation results indicate that our approach can improve the users' satisfaction ratio by up to 40%. Copyright © 2015 John Wiley & Sons, Ltd.
Sun, L, Ma, J, Zhang, Y, Dong, H & Hussain, FK 2016, 'Cloud-FuSeR: Fuzzy ontology and MCDM based cloud service selection', Future Generation Computer Systems, vol. 57, pp. 42-55.
View/Download from: Publisher's site
Tian, F, Liu, B, Cai, H, Zhou, H & Gui, L 2016, 'Practical Asynchronous Neighbor Discovery in Ad Hoc Networks With Directional Antennas', IEEE Transactions on Vehicular Technology, vol. 65, no. 5, pp. 3614-3627.
View/Download from: Publisher's site
View description>>
Neighbor discovery is a crucial step in the initialization of wireless ad hoc networks. When directional antennas are used, this process becomes more challenging since two neighboring nodes must be in transmit and receive states, respectively, pointing their antennas to each other simultaneously. Most of the proposed neighbor discovery algorithms only consider the synchronous system and cannot work efficiently in the asynchronous environment. However, asynchronous neighbor discovery algorithms are more practical and offer many potential advantages. In this paper, we first analyze a one-way handshake-based asynchronous neighbor discovery algorithm by introducing a mathematical model named 'Problem of Coloring Balls.' Then, we extend it to a hybrid asynchronous algorithm that leads to a 24.4% decrease in the expected time of neighbor discovery. Compared with the synchronous algorithms, the asynchronous algorithms require approximately twice the time to complete the neighbor discovery process. Our proposed hybrid asynchronous algorithm performs better than both the two-way synchronous algorithm and the two-way asynchronous algorithm. We validate the practicality of our proposed asynchronous algorithms by OPNET simulations.
Tian, F, Liu, B, Zhou, H, Rui, Y, Chen, J, Xiong, J & Gui, L 2016, 'Caching algorithms for broadcasting and multicasting in disruption tolerant networks', Wireless Communications and Mobile Computing, vol. 16, no. 18, pp. 3377-3390.
View/Download from: Publisher's site
View description>>
In delay and disruption tolerant networks, the contacts among nodes are intermittent. Because of the importance of data access, providing efficient data access is the ultimate aim of analyzing and exploiting disruption tolerant networks. Caching is widely proven to improve data access performance. In this paper, we consider caching schemes for broadcasting and multicasting to improve the performance of data access. First, we propose a caching algorithm for broadcasting, which selects the community central nodes as relays from both a network structure perspective and a social network perspective. Then, we adapt the caching algorithm for multicasting by considering the data query pattern. Extensive trace-driven simulations are conducted to investigate the essential difference between the caching algorithms for broadcasting and multicasting and to evaluate the performance of these algorithms. Copyright © 2016 John Wiley & Sons, Ltd.
Tonelli, D, Pietroni, N, Puppo, E, Froli, M, Cignoni, P, Amendola, G & Scopigno, R 2016, 'Stability of Statics Aware Voronoi Grid-Shells', Engineering Structures, vol. 116, pp. 70-82.
View/Download from: Publisher's site
View description>>
Grid-shells are lightweight structures used to cover long spans with little load-bearing material, as they excel in lightness, elegance and transparency. In this paper we analyze the stability of hex-dominant free-form grid-shells, generated with the Statics Aware Voronoi Remeshing scheme introduced in Pietroni et al. (2015). This is a novel hex-dominant, organic-like and non-uniform remeshing pattern that manages to take into account the statics of the underlying surface. We show how this pattern is particularly suitable for free-form grid-shells, providing good performance in terms of both aesthetics and structural behavior. To reach this goal, we select a set of four contemporary architectural surfaces and we establish a systematic comparative analysis between Statics Aware Voronoi Grid-Shells and equivalent state-of-the-art triangular and quadrilateral grid-shells. For each dataset and for each grid-shell topology, imperfection sensitivity analyses are carried out and the worst response diagrams compared. It turns out that, in spite of the intrinsic weakness of the hexagonal topology, free-form Statics Aware Voronoi Grid-Shells are much more effective than their state-of-the-art quadrilateral counterparts.
Turner, KG, Anderson, S, Gonzales-Chang, M, Costanza, R, Courville, S, Dalgaard, T, Dominati, E, Kubiszewski, I, Ogilvy, S, Porfirio, L, Ratna, N, Sandhu, H, Sutton, PC, Svenning, J-C, Turner, GM, Varennes, Y-D, Voinov, A & Wratten, S 2016, 'A review of methods, data, and models to assess changes in the value of ecosystem services from land degradation and restoration', Ecological Modelling, vol. 319, pp. 190-207.
View/Download from: Publisher's site
Valenzuela-Fernández, L, Nicolas, C, Gil-Lafuente, J & Merigó, JM 2016, 'Fuzzy indicators for customer retention', International Journal of Engineering Business Management, vol. 8, p. 184797901667052.
View/Download from: Publisher's site
View description>>
It is widely known that market orientation (MO) and customer value help companies achieve sustainable sales growth over time. Nevertheless, one cannot ignore the existence of a gap on how to measure this relationship. Following this idea, this study proposes six fuzzy key performance indicators that aim to measure customer retention and loyalty of the portfolio. The study is based on a sample of 300 sales executives. This exploratory study concludes that indicators such as MO, customer orientation (CO), degree of CO value of the sales force, innovation capability, lifetime value, and customer service quality positively influence customer retention and loyalty of the portfolio.
Voinov, A, Kolagani, N & McCall, MK 2016, 'Preface to this Virtual Thematic Issue: Modelling with Stakeholders II', Environmental Modelling & Software, vol. 79, pp. 153-155.
View/Download from: Publisher's site
Voinov, A, Kolagani, N, McCall, MK, Glynn, PD, Kragt, ME, Ostermann, FO, Pierce, SA & Ramu, P 2016, 'Modelling with stakeholders – Next generation', Environmental Modelling & Software, vol. 77, pp. 196-220.
View/Download from: Publisher's site
Wang, D, Jin, H, Zou, D, Xu, P, Zhu, T & Chen, G 2016, 'Taming transitive permission attack via bytecode rewriting on Android application', Security and Communication Networks, vol. 9, no. 13, pp. 2100-2114.
View/Download from: Publisher's site
View description>>
Google Android is popular for mobile devices in recent years. The openness and popularity of Android make it a primary target for malware. Even though Android's security mechanisms can defend against most malware, its permission model is vulnerable to the transitive permission attack, a type of privilege escalation attack. Many approaches have been proposed to detect this attack by modifying the Android OS. However, the Android fragmentation problem and the need to root the device hinder those approaches' large-scale adoption. In this paper, we present an instrumentation framework, called SEAPP, for Android applications (or "apps") to detect the transitive permission attack on unmodified Android. SEAPP automatically rewrites an app without requiring its source code and produces a security-hardened app. At runtime, call-chains are built among these apps and a detection process is executed before a privileged API is invoked. Our experimental results show that SEAPP works on a large number of benign apps from the official Android market and on malicious apps, with a repackaging success rate of over 99.8%. We also show that our framework effectively tracks call-chains among apps and detects known transitive permission attacks with low overhead. Copyright © 2016 John Wiley & Sons, Ltd.
Wang, W, Zhang, G & Lu, J 2016, 'Member contribution-based group recommender system', DECISION SUPPORT SYSTEMS, vol. 87, pp. 80-93.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. Developing group recommender systems (GRSs) is a vital requirement in many online service systems to provide recommendations in contexts in which a group of users is involved. Unfortunately, GRSs cannot be effectively supported using traditional individual recommendation techniques, because new models are needed to reach an agreement that satisfies all the members of a group, given their conflicting preferences. Our goal is to generate recommendations by taking each group member's contribution into account through weighting members according to their degrees of importance. To achieve this goal, we first propose a member contribution score (MCS) model, which employs the separable non-negative matrix factorization technique on a group rating matrix, to analyze the degree of importance of each member. A Manhattan distance-based local average rating (MLA) model is then developed to refine predictions by addressing the fat tail problem. By integrating the MCS and MLA models, a member contribution-based group recommendation (MC-GR) approach is developed. Experiments show that our MC-GR approach achieves a significant improvement in the performance of group recommendations. Lastly, using the MC-GR approach, we develop a group recommender system called GroTo that can effectively recommend activities to web-based tourist groups.
Wu, J, Wang, J, Qin, S & Lu, H 2016, 'Suitable error evaluation criteria selection in the wind energy assessment via the K-means clustering algorithm', International Journal of Green Energy, vol. 13, no. 11, pp. 1145-1162.
View/Download from: Publisher's site
View description>>
© 2016 Taylor & Francis Group, LLC. In this paper, the wind energy potential of four locations in the Xinjiang region is assessed. The Weibull distribution as well as the Logistic and the Lognormal distributions are applied to describe the distributions of the wind speed at different heights. In determining the parameters of the Weibull distribution, four intelligent parameter optimization approaches are employed, including differential evolution, particle swarm optimization, and two other approaches derived from these two algorithms that combine their advantages. Then the optimal distribution is chosen through the Chi-square error (CSE), the Kolmogorov–Smirnov test error (KSE), and the root mean square error (RMSE) criteria. However, it is found that the variation range of some criteria is quite large, thus these criteria are analyzed and evaluated both from the anomalous values and by the K-means clustering method. Anomaly observation results show that the CSE is the first criterion that should be eliminated from the subsequent optimal distribution function selection. This idea is further confirmed by the K-means clustering algorithm, by which the CSE is clustered into a different group from KSE and RMSE. Therefore, only the two reserved error evaluation criteria are utilized to evaluate the wind power potential.
Xiao, L, Shao, W, Wang, C, Zhang, K & Lu, H 2016, 'Research and application of a hybrid model based on multi-objective optimization for electrical load forecasting', Applied Energy, vol. 180, pp. 213-233.
View/Download from: Publisher's site
Xiong, P, Zhu, T, Niu, W & Li, G 2016, 'A differentially private algorithm for location data release', Knowledge and Information Systems, vol. 47, no. 3, pp. 647-669.
View/Download from: Publisher's site
View description>>
The rise of mobile technologies in recent years has led to large volumes of location information, which are valuable resources for knowledge discovery such as travel pattern mining and traffic analysis. However, location datasets have been confronted with serious privacy concerns, because adversaries may re-identify a user and his/her sensitive information from these datasets with only a little background knowledge. Recently, several privacy-preserving techniques have been proposed to address the problem, but most of them lack a strict privacy notion and can hardly resist a number of possible attacks. This paper proposes a private release algorithm that randomizes location datasets under a strict privacy notion, differential privacy, with the goal of preserving users' identities and sensitive information. The algorithm aims to mask the exact locations of each user, as well as the frequency with which the user visits those locations, within a given privacy budget. It includes three privacy-preserving operations: private location clustering shrinks the randomized domain, cluster weight perturbation hides the weights of locations, and private location selection hides the exact locations of a user. Theoretical analysis of privacy and utility confirms an improved trade-off between the privacy and utility of released location data. Extensive experiments have been carried out on four real-world datasets: GeoLife, Flickr, Div400 and Instagram. The experimental results further suggest that this private release algorithm can successfully retain the utility of the datasets while preserving users' privacy.
Xuan, J, Luo, X, Zhang, G, Lu, J & Xu, Z 2016, 'Uncertainty Analysis for the Keyword System of Web Events', IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, vol. 46, no. 6, pp. 829-842.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Webpage recommendations for hot Web events can help people easily follow the evolution of these events. At the same time, there are different levels of semantic uncertainty underlying the mass of Webpages for a Web event, such as recapitulative information and detailed information. Clearly, grasping the semantic uncertainty of Web events could improve the quality of Webpage recommendations. However, traditional hit-rate-based or clustering-based Webpage recommendation methods have overlooked these different levels of semantic uncertainty. In this paper, we propose a framework to identify the different underlying levels of semantic uncertainty of Web events, and then utilize them for Webpage recommendations. Our idea is to consider a Web event as a system composed of different keywords, where the uncertainty of this keyword system is related to the uncertainty of the particular Web event. Based on a keyword association linked network representation of Web events and Shannon entropy, we identify the different levels of semantic uncertainty and construct a semantic pyramid (SP) to express the uncertainty hierarchy of a Web event. Finally, an SP-based Webpage recommendation system is developed. Experiments show that the proposed algorithm can effectively capture the different levels of semantic uncertainty of Web events and can be applied to Webpage recommendations.
Yi, X, Paulet, R, Bertino, E & Xu, G 2016, 'Private Cell Retrieval From Data Warehouses', IEEE Transactions on Information Forensics and Security, vol. 11, no. 6, pp. 1346-1361.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Publicly accessible data warehouses are an indispensable resource for data analysis. However, they also pose a significant risk to the privacy of the clients, since a data warehouse operator may follow the client's queries and infer what the client is interested in. Private information retrieval (PIR) techniques allow the client to retrieve a cell from a data warehouse without revealing to the operator which cell is retrieved and, therefore, protects the privacy of the client's queries. However, PIR cannot be used to hide online analytical processing (OLAP) operations performed by the client, which may disclose the client's interest. This paper presents a solution for private cell retrieval from a data warehouse on the basis of the Paillier cryptosystem. By our solution, the client can privately perform OLAP operations on the data warehouse and retrieve one (or more) cell without revealing any information about which cell is selected. In addition, we propose a solution for private block download on the basis of the Paillier cryptosystem. Our private block download allows the client to download an encrypted block from a data warehouse without revealing which block in a cloaking region is downloaded and improves the feasibility of our private cell retrieval. Our solutions ensure both the server's privacy and the client's privacy. Our experiments have shown that our solutions are practical.
Yu, D, Li, D-F & Merigó, JM 2016, 'Dual hesitant fuzzy group decision making method and its application to supplier selection', International Journal of Machine Learning and Cybernetics, vol. 7, no. 5, pp. 819-831.
View/Download from: Publisher's site
View description>>
The concept of a dual hesitant fuzzy set, which arises from the hesitant fuzzy set, is generalized by including a function reflecting the decision maker's fuzziness about the non-membership degree of the information provided. This paper studies several dual hesitant fuzzy information aggregation operators for aggregating dual hesitant fuzzy elements, such as the dual hesitant fuzzy Heronian mean operator and the dual hesitant fuzzy geometric Heronian mean operator. The resulting dual hesitant fuzzy information aggregation operators play an important role in group decision making (GDM) applications: they can fuse the experts' opinions into comprehensive ones, on the basis of which an optimal decision-making scheme can be determined. The properties of the proposed operators are studied, and their application to GDM is investigated. The effectiveness of the GDM method is demonstrated in a case study on supplier selection.
Yu, D, Li, D-F, Merigó, JM & Fang, L 2016, 'Mapping development of linguistic decision making studies', Journal of Intelligent & Fuzzy Systems, vol. 30, no. 5, pp. 2727-2736.
View/Download from: Publisher's site
Yu, D, Merigó, JM & Xu, Y 2016, 'Group Decision Making in Information Systems Security Assessment Using Dual Hesitant Fuzzy Set', International Journal of Intelligent Systems, vol. 31, no. 8, pp. 786-812.
View/Download from: Publisher's site
View description>>
Network information system security has become a global issue since it relates to both economic development and national security. Information system security assessment plays an important role in the development of security solutions. To address this issue, a dual hesitant fuzzy (DHF) group decision-making (GDM) method is proposed in this paper to assist the assessment of network information system security. A systemic index containing four aspects is established, covering organization security, management security, technical security, and personnel management security. The DHF group evaluation matrix is constructed from the individual evaluation information of each expert. Several power average operator-based DHF information aggregation operators are proposed and used to fuse the performance of each criterion for information systems. The advantage of these operators is that they can quantitatively describe the relationships between the indexes. Finally, a case study on information systems security assessment is presented to verify the effectiveness of the proposed GDM method.
Yu, Y-H, Lu, S-W, Chuang, C-H, King, J-T, Chang, C-L, Chen, S-A, Chen, S-F & Lin, C-T 2016, 'An Inflatable and Wearable Wireless System for Making 32-Channel Electroencephalogram Measurements', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 24, no. 7, pp. 806-813.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Portable electroencephalography (EEG) devices have become critical for important research. They have various applications, such as in brain-computer interfaces (BCIs). Numerous recent investigations have focused on the development of dry sensors, but few concern the simultaneous attachment of high-density dry sensors to different regions of the scalp to receive quality EEG signals from hairy sites. An inflatable and wearable wireless 32-channel EEG device was designed, prototyped, and experimentally validated for making EEG signal measurements; it incorporates spring-loaded dry sensors and a novel gasbag design to solve the problem of interference by hair. The cap is ventilated and incorporates a circuit board and battery with a high-tolerance wireless (Bluetooth) protocol and low power consumption. The proposed system provides a 500/250 Hz sampling rate and 24-bit EEG data to meet the data requirements of BCI systems. Experimental results prove that the proposed EEG system is effective in measuring audio event-related potentials, measuring visual event-related potentials, and rapid serial visual presentation. The results of this work demonstrate that the proposed EEG cap system performs well in making EEG measurements and is feasible for practical applications.
Zeng, D, Gu, L, Guo, S, Cheng, Z & Yu, S 2016, 'Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System', IEEE Transactions on Computers, vol. 65, no. 12, pp. 3702-3712.
View/Download from: Publisher's site
View description>>
Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images reside on a storage server while computations can be conducted on either the embedded device or a computation server. It is important to design an efficient task scheduling and resource management strategy that minimizes task completion time to promote the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload between a client device and computation servers, i.e., task scheduling; 2) how to place task images on storage servers, i.e., resource management; and 3) how to balance the I/O interrupt requests among the storage servers. These issues are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computational complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation-based studies.
Zhang, G, Han, J & Lu, J 2016, 'Fuzzy Bi-level Decision-Making Techniques: A Survey', INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, vol. 9, pp. 25-34.
View/Download from: Publisher's site
View description>>
© 2016 the authors. Bi-level decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a bi-level hierarchy. A challenge in handling bi-level decision problems is that various uncertainties naturally appear in the decision-making process. Significant efforts have been devoted to showing that fuzzy set techniques can effectively deal with uncertain issues in bi-level decision-making, known as fuzzy bi-level decision-making techniques, and researchers have successfully gained experience in this area. It is thus vital that an instructive review of current trends in this area be conducted, covering not only the theoretical research but also the practical developments. This paper systematically reviews up-to-date fuzzy bi-level decision-making techniques, including models, approaches, algorithms and systems. It also clusters related technique developments into four main categories: basic fuzzy bi-level decision-making, fuzzy bi-level decision-making with multiple optima, fuzzy random bi-level decision-making, and the applications of bi-level decision-making techniques in different domains. By providing state-of-the-art knowledge, this survey will directly support researchers and practitioners in understanding developments in theoretical research results and applications in relation to fuzzy bi-level decision-making techniques.
Zhang, H, Quan, W, Song, J, Jiang, Z & Yu, S 2016, 'Link State Prediction-Based Reliable Transmission for High-Speed Railway Networks', IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9617-9629.
View/Download from: Publisher's site
View description>>
Due to unpredictable noise and ambient interference along high-speed railways (HSRs), it is challenging to provide reliable Internet services in severe HSR network environments. Most existing research, which requires expensive modifications to large-scale, already-in-use base stations, cannot be immediately deployed in existing HSR systems. In this paper, we propose a quite lightweight but effective solution to improve the Internet experience of HSR passengers. Different from other existing approaches, we employ a data-driven link state prediction (LSP) mechanism for reliable HSR transmission, called LSP4HSR, which operates directly in HSR on-board routers. In particular, we conduct an extensive measurement of network status on several real HSR lines and collect a first-hand dataset of round-trip time and packet loss rate. By analyzing this real dataset, we find that HSR link quality presents obvious two-time-scale variation characteristics. We conduct in-depth studies to explore potential reasons for this interesting phenomenon. Furthermore, based on a two-time-scale Markov chain, we establish an accurate HSR link prediction approach, which yields an LSP-based transmission enhancement mechanism to alleviate the impact of poor link status along HSR lines. Extensive experiments verify that the proposed solution not only improves packet transmission reliability in HSR networks but can also be deployed in existing HSR systems quite smoothly and easily.
Zhang, L, Yang, Z, Voinov, A & Gao, S 2016, 'Nature-inspired stormwater management practice: The ecological wisdom underlying the Tuanchen drainage system in Beijing, China and its contemporary relevance', Landscape and Urban Planning, vol. 155, pp. 11-20.
View/Download from: Publisher's site
Zhang, Y, Robinson, DKR, Porter, AL, Zhu, D, Zhang, G & Lu, J 2016, 'Technology roadmapping for competitive technical intelligence', TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, vol. 110, no. 2016, pp. 175-186.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Inc. Understanding the evolution and emergence of technology domains remains a challenge, particularly so for potentially breakthrough technologies. Though it is well recognized that the emergence of new fields is complex and uncertain, to make decisions amidst such uncertainty one needs to mobilize various sources of intelligence to identify known knowns and known unknowns in order to choose appropriate strategies and policies. This competitive technical intelligence cannot rely on simple trend analyses, because breakthrough technologies have little past to inform such trends, and positing the directions of evolution is challenging. Nor do qualitative tools, embracing the complexities, provide all the solutions, since transparent and repeatable techniques need to be employed to create best practices and evaluate the intelligence that comes from such exercises. In this paper, we present a hybrid roadmapping technique that draws on a number of approaches and integrates them into a multi-level approach (individual activities, industry evolutions and broader global changes) that can be applied to breakthrough technologies. We describe this approach in deeper detail through a case study on dye-sensitized solar cells. Our contribution to this special issue is to showcase the technique as part of a family of approaches that are emerging around the world to inform strategy and policy.
Zhang, Y, Shang, L, Huang, L, Porter, AL, Zhang, G, Lu, J & Zhu, D 2016, 'A hybrid similarity measure method for patent portfolio analysis', Journal of Informetrics, vol. 10, no. 4, pp. 1108-1130.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. Similarity measures are fundamental tools for identifying relationships within or across patent portfolios. Many bibliometric indicators are used to determine similarity measures; for example, bibliographic coupling, citation and co-citation, and co-word distribution. This paper aims to construct a hybrid similarity measure method based on multiple indicators to analyze patent portfolios. Two models are proposed: categorical similarity and semantic similarity. The categorical similarity model emphasizes international patent classifications (IPCs), while the semantic similarity model emphasizes textual elements. We introduce fuzzy set routines to translate the rough technical (sub-)categories of IPCs into defined numeric values, and we calculate the categorical similarities between patent portfolios using membership grade vectors. In parallel, we identify and highlight core terms in a 3-level tree structure and compute the semantic similarities by comparing the tree-based structures. A weighting model is designed to consider: 1) the bias that exists between the categorical and semantic similarities, and 2) the weighting or integrating strategy for a hybrid method. A case study measuring the technological similarities between selected firms in China's medical device industry demonstrates the reliability of our method, and the results indicate its practical value in a broad range of informetric applications.
Zhang, Y, Wu, J, Cai, Z, Zhang, P & Chen, L 2016, 'Memetic Extreme Learning Machine', Pattern Recognition, vol. 58, pp. 135-148.
View/Download from: Publisher's site
View description>>
© 2016. Extreme Learning Machine (ELM) is a promising model for training single-hidden layer feedforward networks (SLFNs) and has been widely used for classification. However, ELM faces the challenge of arbitrarily selected parameters, e.g., the network weights and hidden biases. Therefore, many efforts have been made to enhance the performance of ELM, such as using evolutionary algorithms to search the solution space. Although evolutionary algorithms can explore promising areas of the solution space, they are not able to locate the global optimum efficiently. In this paper, we present a new Memetic Algorithm (MA)-based Extreme Learning Machine (M-ELM for short). M-ELM embeds a local search strategy into the global optimization framework to obtain optimal network parameters. Experiments and comparisons on 46 UCI data sets validate the performance of M-ELM. The results demonstrate that M-ELM significantly outperforms state-of-the-art ELM algorithms.
Zhang, Y, Zhang, G, Chen, H, Porter, AL, Zhu, D & Lu, J 2016, 'Topic analysis and forecasting for science, technology and innovation: Methodology with a case study focusing on big data research', TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, vol. 105, pp. 179-191.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. The number and extent of current Science, Technology & Innovation topics are changing all the time, and their induced accumulative innovation, or even disruptive revolution, will heavily influence the whole of society in the near future. By addressing and predicting these changes, this paper proposes an analytic method to (1) cluster associated terms and phrases to constitute meaningful technological topics and their interactions, and (2) identify changing topical emphases. Our results are carried forward to present mechanisms that forecast prospective developments using Technology Roadmapping, combining qualitative and quantitative methodologies. An empirical case study of Awards data from the United States National Science Foundation, Division of Computer and Communication Foundation, is performed to demonstrate the proposed method. The resulting knowledge may hold interest for R&D management and science policy in practice.
Zhang, Z, Liu, Y, Xu, G & Chen, H 2016, 'A weighted adaptation method on learning user preference profile', Knowledge-Based Systems, vol. 112, pp. 114-126.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. Recommender systems typically store personal preference profiles. Many items in the profiles can be represented by numerical attributes. However, the initial profile of each user is incomplete and imprecise. An important problem in the development of these systems is how to learn user preferences and how to automatically update the profiles. To address this issue, this paper presents an unsupervised approach for learning user preferences over numeric attributes by analyzing the interactions between users and recommender systems. When a list of recommendations is shown to a target user and he/she selects a favorite item, the selected item and the over-ranked items are employed as valuable feedback for learning the user profile. Specifically, two contributions are offered: 1) a learning approach that measures the influence of over-ranked items through analysis of user feedback, and 2) a weighting algorithm that calculates the weights of different attributes by analyzing user selections. These two approaches are integrated into a traditional adaptation model for updating the user preference profile. Extensive simulations show that both approaches are more effective than existing approaches.
Zhang, Z, Liu, Y, Xu, G & Luo, G 2016, 'Recommendation using DMF-based fine tuning method', Journal of Intelligent Information Systems, vol. 47, no. 2, pp. 233-246.
View/Download from: Publisher's site
View description>>
© 2016 Springer Science+Business Media New York. Recommender Systems (RSs) have been comprehensively analyzed in the past decade, and the Matrix Factorization (MF)-based Collaborative Filtering (CF) method has proved to be a useful model for improving recommendation performance. Factors inferred from item rating patterns yield vectors that MF uses to characterize both items and users, and a recommendation can be concluded from a good correspondence between item and user factors. A basic MF model starts with an objective function that consists of the squared error between the original training matrix and the predicted matrix, together with a regularization term (regularization parameters). To learn the predicted matrix, recommender systems minimize this regularized squared error. However, two important details have been ignored: (1) the predicted matrix becomes more and more accurate as the iterations proceed, so a fixed value for the regularization parameters may not be the most suitable choice; and (2) the final distribution trend of the ratings in the predicted matrix is not similar to that of the original training matrix. Therefore, we propose a Dynamic-MF algorithm and a fine tuning method, which are general enough to overcome these problems. Other information, such as social relations, can be easily incorporated into this model. The experimental analysis on two large datasets demonstrates that our approaches outperform the basic MF-based method.
Zhang, Z, Oberst, S & Lai, JCS 2016, 'Instability analysis of friction oscillators with uncertainty in the friction law distribution', Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 230, no. 6, pp. 948-958.
View/Download from: Publisher's site
View description>>
Despite substantial research efforts in the past two decades, the prediction of brake squeal propensity, a significant noise, vibration and harshness (NVH) issue for automotive manufacturers, is as difficult as ever. This is due to the complexity of the interacting mechanisms (e.g. stick-slip, sprag-slip, mode coupling and the hammering effect) and the uncertain operating conditions (temperature, pressure). In particular, two major aspects of brake squeal have attracted significant attention recently: nonlinearity and uncertainty. The fugitiveness of brake squeal can be attributed to a number of factors, including the difficulty of accurately modelling friction. In this paper, the influence of uncertainty arising from the tribological aspect of brake squeal prediction is analysed. Three types of friction models, namely the Amontons-Coulomb model, the velocity-dependent model and the LuGre model, are randomly assigned to a group of interconnected oscillators which model the dynamics of a brake system. Complex eigenvalue analysis, as a standard stability analysis tool, and friction work calculations are performed to investigate the probability of instability arising from the uncertainty in the friction models. The results are discussed with a view to applying this approach to the analysis of the squeal propensity of a full brake system.
Zhang, Z, Oberst, S & Lai, JCS 2016, 'On the potential of uncertainty analysis for prediction of brake squeal propensity', Journal of Sound and Vibration, vol. 377, pp. 123-132.
View/Download from: Publisher's site
Zheng, Y, Zhang, G, Han, J & Lu, J 2016, 'Pessimistic bilevel optimization model for risk-averse production-distribution planning', INFORMATION SCIENCES, vol. 372, pp. 677-689.
View/Download from: Publisher's site
View description>>
© 2016. Production-distribution (PD) planning problems are often addressed in an organizational hierarchy in which a distribution company that utilizes several depots is the leader and the manufacturing companies are the followers. The classical objective function of the leader is to minimize the total operating cost of the distribution company, and the followers optimize their respective production costs. However, the distribution company (the leader) frequently cannot obtain complete production information from the manufacturing companies, and may thus become risk-averse. In this case, a better description of the leader's objective function is the minimization of the maximum possible operating cost (Min-Max). In this paper, this type of PD problem is called a risk-averse PD planning problem and is formulated as a pessimistic mixed-integer bilevel optimization (PMIBO) model from the worst-case point of view. To solve the risk-averse PD planning problem, not yet well solved in the literature, a penalty function-based method is presented, which transforms the PMIBO model into a series of single-level optimization problems so that the latter can be solved by available optimization software. Finally, the feasibility of the proposed model is demonstrated using a set of case-based examples of PD planning.
Zhou, L, Merigó, JM, Chen, H & Liu, J 2016, 'The optimal group continuous logarithm compatibility measure for interval multiplicative preference relations based on the COWGA operator', Information Sciences, vol. 328, pp. 250-269.
View/Download from: Publisher's site
View description>>
The calculation of compatibility measures is an important technique employed in group decision-making with interval multiplicative preference relations. In this paper, a new compatibility measure called the continuous logarithm compatibility, which considers risk attitudes in decision-making based on the continuous ordered weighted geometric averaging (COWGA) operator, is introduced. We also develop a group continuous compatibility model (GCC Model) by minimizing the group continuous logarithm compatibility measure between the synthetic interval multiplicative preference relation and the continuous characteristic preference relation. Furthermore, theoretical foundations are established for the proposed model, such as the sufficient and necessary conditions for the existence of an optimal solution, the conditions for the existence of a superior optimal solution and the conditions for the existence of redundant preference relations. In addition, we investigate certain conditions for which the optimal objective function of the GCC Model guarantees its efficiency as the number of decision-makers increases. Finally, practical illustrative examples are examined to demonstrate the model and compare it with previous methods.
Zhu, T, Li, G, Zhou, W, Xiong, P & Yuan, C 2016, 'Privacy-preserving topic model for tagging recommender systems', Knowledge and Information Systems, vol. 46, no. 1, pp. 33-58.
View/Download from: Publisher's site
View description>>
Tagging recommender systems give users the freedom to explore tags and obtain recommendations. The release and sharing of these tagging datasets will accelerate both commercial and research work on recommender systems. However, releasing the original tagging datasets usually raises serious privacy concerns, because adversaries may re-identify a user and her/his sensitive information from tagging datasets with only a little background information. Recently, several privacy techniques have been proposed to address the problem, but most of these lack a strict privacy notion and rarely prevent individuals from being re-identified in the dataset. This paper proposes a privacy-preserving tag release algorithm, PriTop. The algorithm is designed to satisfy differential privacy, a strict privacy notion, with the goal of protecting users in a tagging dataset. The proposed PriTop algorithm includes three privacy-preserving operations: private topic model generation structures the uncontrolled tags; private weight perturbation adds Laplace noise to the weights to hide the numbers of tags; and private tag selection finally finds the most suitable replacement tags for the original tags, so that the exact tags can be hidden. We present extensive experimental results on four real-world datasets: Delicious, MovieLens, Last.fm and BibSonomy. While the recommendation algorithm is successful in all cases, our results further suggest that the proposed PriTop algorithm can successfully retain the utility of the datasets while preserving privacy.
Zhu, W 2016, 'Preface', Journal of Computer Science and Technology, vol. 31, no. 6, pp. 1069-1071.
View/Download from: Publisher's site
Zowghi, D & Gervasi, V 2016, 'Introduction to the special issue of best papers from RE2015 conference', Requirements Engineering, vol. 21, no. 3, pp. 309-310.
View/Download from: Publisher's site
Abdullaev, S, McBurney, P & Musial, K 2016, 'Pricing options with portfolio-holding trading agents in direct double auction', Frontiers in Artificial Intelligence and Applications, 22nd European Conference on Artificial Intelligence (ECAI), IOS Press, The Hague, Netherlands, pp. 1754-1755.
View/Download from: Publisher's site
View description>>
Options constitute an integral part of modern financial trades and are priced according to the risk associated with buying or selling a certain asset in the future. The financial literature mostly concentrates on risk-neutral methods of pricing options, such as the Black-Scholes model. However, using trading agents with utility functions to determine an option's potential payoff for the agent is an emerging field in option pricing theory. In this paper, we use one such methodology, developed by Othman and Sandholm, to design portfolio-holding agents that are endowed with popular option portfolios, such as the bullish spread, butterfly spread and straddle, to price options. Agents use their portfolios to evaluate how buying or selling a certain option would change their current payoff structure, and form their orders based on this information. We also simulate these agents in a multi-unit direct double auction. The emerging prices are compared to risk-neutral prices under different market conditions. Through an appropriate endowment of option portfolios to agents, we can also mimic market conditions where the population of agents is bearish, bullish, neutral or non-neutral in its beliefs.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Named Entity Recognition from Unstructured Handwritten Document Images', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 375-380.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Named entity recognition is an important topic in the field of natural language processing, whereas in document image processing, such recognition is quite challenging without employing any linguistic knowledge. In this paper we propose an approach to detect named entities (NEs) directly from offline handwritten unstructured document images without explicit character/word recognition, and with very little aid from natural language and script rules. At the preprocessing stage, the document image is binarized, and then the text is segmented into words. The slant/skew/baseline corrections of the words are also performed. After preprocessing, the words are sent for NE recognition. We analyze the structural and positional characteristics of NEs and extract some relevant features from the word image. Then the BLSTM neural network is used for NE recognition. Our system also contains a post-processing stage to reduce the true NE rejection rate. The proposed approach produces encouraging results on both historical and modern document images, including those from an Australian archive, which are reported here for the very first time.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Offline Cursive Bengali Word Recognition Using CNNs with a Recurrent Model', 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Shenzhen, China, pp. 429-434.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with offline handwritten word recognition of a major Indic script: Bengali. Due to the structure of this script, the characters (mostly ortho-syllables) are frequently overlapping and hard to segment, especially when the writing is cursive. Individual character recognition and the combination of outputs can increase the likelihood of errors. Instead, a better approach can be sending the whole word to a suitable recognizer. Here we use the Convolutional Neural Network (CNN) integrated with a recurrent model for this purpose. Long short-term memory blocks are used as hidden units. Also, the CNN-derived features are employed in a recurrent model with a CTC (Connectionist Temporal Classification) layer to get the output. We have tested our method on three datasets: (a) a publicly available dataset, (b) a new dataset generated by our research group and (c) an unconstrained dataset. The dataset (a) contains 17,091 words, while our dataset (b) contains 107,550 words in total. In addition to these, the dataset (c) is comprised of 5,223 words. We have compared our results with those of some earlier work in the area and have found improved performance, which is due to the novel integration of CNNs with the recurrent model.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Writer identification by training on one script but testing on another', 2016 23rd International Conference on Pattern Recognition (ICPR), 2016 23rd International Conference on Pattern Recognition (ICPR), IEEE, Mexico, pp. 1153-1158.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with identifying a writer from his/her offline handwriting. In a multilingual country where a writer can scribe in multiple scripts, writer identification becomes challenging when we have individual handwriting data in one script while we need to verify/identify a writer from handwriting in another script. In this paper such an issue is addressed with two scripts: English and Bengali. Here we model the task as a classification problem, where training data contains only Bengali handwritten samples and testing is performed on English handwritten texts. This work is based on the understanding that a writer has some inherent stroke characteristics that are independent of the script in which (s)he writes. In this work, some implicit structural and statistical features are extracted, and multiple classifiers are employed for writer identification. Many training sessions are run on a database of 100 writers and the performances are analyzed. We have obtained encouraging results on this database, which show the effectiveness of our method.
Ahadi, A, Behbood, V, Vihavainen, A, Prior, J & Lister, R 2016, 'Students' Syntactic Mistakes in Writing Seven Different Types of SQL Queries and its Application to Predicting Students' Success', Proceedings of the 47th ACM Technical Symposium on Computing Science Education, SIGCSE '16: The 47th ACM Technical Symposium on Computing Science Education, ACM, Memphis, Tennessee, pp. 401-406.
View/Download from: Publisher's site
View description>>
© 2016 ACM. The computing education community has studied extensively the errors of novice programmers. In contrast, little attention has been given to students' mistakes in writing SQL statements. This paper presents the first large-scale quantitative analysis of students' syntactic mistakes in writing different types of SQL queries. Over 160 thousand snapshots of SQL queries were collected from over 2000 students across eight years. We describe the most common types of syntactic errors that students make. We also describe our development of an automatic classifier with an overall accuracy of 0.78 for predicting student performance in writing SQL queries.
Ahadi, A, Lister, R & Vihavainen, A 2016, 'On the Number of Attempts Students Made on Some Online Programming Exercises During Semester and their Subsequent Performance on Final Exam Questions', Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16: Innovation and Technology in Computer Science Education Conference 2016, ACM, Arequipa, Peru, pp. 218-223.
View/Download from: Publisher's site
View description>>
This paper explores the relationship between student performance on online programming exercises completed during semester and subsequent student performance on a final exam. We introduce an approach that combines whether or not a student produced a correct solution to an online exercise with information on the number of attempts at the exercise submitted by the student. We use data collected from students in an introductory Java course to assess the value of this approach. We compare the approach that utilizes the number of attempts to an approach that simply considers whether or not a student produced a correct solution to each exercise. We found that the results for the method that utilizes the number of attempts correlate better with performance on a final exam.
Ahadi, A, Prior, J, Behbood, V & Lister, R 2016, 'Students' Semantic Mistakes in Writing Seven Different Types of SQL Queries', Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16: Innovation and Technology in Computer Science Education Conference 2016, ACM, Peru.
View/Download from: Publisher's site
Al-Doghman, F, Chaczko, Z, Ajayan, AR & Klempous, R 2016, 'A review on Fog Computing technology', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Budapest, Hungary, pp. 1525-1530.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Out of the many computing and software-oriented models being adopted in computer networking, Fog Computing has captured quite a wide audience in research and industry. There is a lot of confusion about its precise definition, position, role and application. The Internet of Things (IoT), today's digitized intelligent connectivity domain, demands real-time response in many applications and services. This renders Fog Computing a suitable platform for achieving goals of autonomy and efficiency. This paper is a justification of the concepts, interest, approaches, and practices of Fog Computing. It describes the need for adopting this new model and investigates its prime features by elucidating the scenarios for implementing it, thereby outlining its significance in the IoT world.
Alfaro-Garcia, VG, Gil-Lafuente, AM & Merigo, JM 2016, 'Induced generalized ordered weighted logarithmic aggregation operators', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We present the induced generalized ordered weighted logarithmic aggregation (IGOWLA) operator. It is an extension of the generalized ordered weighted logarithmic aggregation (GOWLA) operator. The IGOWLA operator uses order-induced variables that modify the reordering mechanism of the arguments to be aggregated. The main advantage of the induced process is the consideration of the complex attitude of the decision makers. We study some properties of the IGOWLA operator, such as idempotency, commutativity, boundedness and monotonicity. Finally we present an illustrative example of a group decision-making procedure using a multi-person analysis and the IGOWLA operator in the area of innovation management.
Alkalbani, AM, Ghamry, AM, Hussain, FK & Hussain, OK 2016, 'Predicting the sentiment of SaaS online reviews using supervised machine learning techniques', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, CANADA, pp. 1547-1553.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. There has been a dramatic increase in the sharing of opinions and information across different web platforms and social media, especially online product reviews. Cloud web portals, such as getApp.com, were designed to amalgamate cloud service information and to examine how consumers evaluate their experience of using cloud computing products. The current literature shows the growing importance of online users' reviews, hence this study focuses on investigating consumers' feedback on Software-as-a-Service (SaaS) products by developing models to predict reviewers' attitudes. The goal of this paper is to develop prediction models to predict the sentiment of SaaS consumers' reviews (positive or negative). This research proposes five models based on five algorithms, the support vector machine algorithm, Naive Bayes algorithm, Naive Bayes (kernel) algorithm, k-nearest neighbors algorithm, and the decision tree algorithm, to predict the attitude of SaaS reviews. The prediction accuracy of the support vector machine algorithm (5-fold cross-validation) is 92.37%, which suggests that this algorithm is better able to determine the sentiment of online reviews compared with the other models. The results of this study provide valuable insight into online SaaS reviews and will assist in the design of SaaS review websites.
Alkalbani, AM, Ghamry, AM, Hussain, FK & Hussain, OK 2016, 'Sentiment Analysis and classification for Software as a Service Reviews', IEEE 30TH INTERNATIONAL CONFERENCE ON ADVANCED INFORMATION NETWORKING AND APPLICATIONS IEEE AINA 2016, International Conference on Advanced Information Networking and Applications (was ICOIN), IEEE, Crans-Montana, Switzerland, pp. 53-58.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the rapid growth of cloud services, there has been a significant increase in the number of online consumer reviews and opinions on these services on different social media platforms. These reviews are a source of valuable information regarding cloud market position and cloud consumer satisfaction. This study explores cloud consumers' reviews that reflect the user's experience with Software as a Service (SaaS) applications. The reviews were collected from different web portals, and around 4000 online reviews were analysed using sentiment analysis to identify the polarity of each review, that is, whether the sentiment being expressed is positive, negative, or neutral. This research also develops a model for predicting the sentiment of Software as a Service consumers' reviews using a supervised machine learning method, the support vector machine (SVM). The sentiment results show that 62% of the reviews are positive, which indicates that consumers are most likely satisfied with SaaS services. The results show that the prediction accuracy of the SVM-based binary occurrence approach (3-fold cross-validation testing) is 92.30%, indicating it performs better in determining sentiment compared with other approaches (term occurrences, TF-IDF). This work also provides valuable insight into online SaaS reviews and offers the research community the first SaaS polarity dataset.
Alkalbani, AM, Hussain, FK & IEEE 2016, 'A Comparative Study and Future Research Directions in Cloud Service Discovery', PROCEEDINGS OF THE 2016 IEEE 11TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), IEEE Conference on Industrial Electronics and Applications, IEEE, Dearborn, MI, United States, pp. 1049-1056.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Cloud computing technology is a new paradigm which provides Information Technology (IT) resources via the Internet. This new shift in the way that IT resources are offered to the user brings new challenges, such as cloud service discovery. Nowadays, cloud users are faced with a dilemma as they have an abundant choice of cloud services. Moreover, many cloud providers offer a range of services which deliver similar functionality. Locating the best and most appropriate cloud service with a suitable and capable provider is a primary concern for any consumer. In order to clearly comprehend the scope of this problem, a thorough analysis of the limitations of cloud service discovery approaches is required which, in turn, will empower researchers to deliver better solutions for consumers to make an informed decision and choose the right service. This paper presents an overview of the current cloud service discovery trends and challenges in recent studies. Additionally, the reviewed approaches are classified according to service discovery architecture and techniques. Furthermore, these approaches are compared and analysed from several perspectives including approach model/architecture, service type, ontology representation (domain, language, and reasoning), dynamic discovery model, evaluation model, user's preferences techniques, data updates, and public repositories.
Allen, G, Burdon, SW & Dovey, K 2016, 'The Socio-Political Antecedents of Technical Innovation', International Society for Professional Innovation Management, International Society for Professional Innovation Management, International Society for Professional Innovation Management, Porto, Portugal, pp. 1-10.
View description>>
The paper reports on a management initiative within an iconic global high-tech company to facilitate technical innovation within two teams (situated in different global locations of the company) that had been unable to produce any form of technical innovation over a period of several years. Experimenting with an action research strategy, this initiative had the practical goal of generating technical innovation and the research goal of gaining insight into the social dynamics that may facilitate such innovation. The two-year process delivered novel insights into the circumstances that enabled these teams to deliver four company-lauded technical innovations. The principal finding of the research, that social innovation is an antecedent of technical innovation, highlights the importance of alternative research methodologies (relative to the dominant research approach of R&D facilities) in addressing the politics of innovation within large organisations.
Alzoubi, YI & Gill, AQ 2016, 'An Agile Enterprise Architecture-Driven Model for Geographically Distributed Agile Development', International Conference on Information Systems Development, ISD 2015, International Conference on Information Systems Development, Springer International Publishing, Harbin, China, pp. 63-77.
View/Download from: Publisher's site
View description>>
Agile development is a highly collaborative environment that requires active communication (i.e. effective and efficient communication) among stakeholders. Active communication in a geographically distributed agile development (GDAD) environment is difficult to achieve due to many challenges. The literature has reported that active communication plays a critical role in enhancing GDAD performance by reducing the cost and time of a project. However, little empirical evidence is available about how to study and establish the active communication construct in GDAD in terms of its dimensions, determinants and effects on GDAD performance. To address this knowledge gap, this paper describes an enterprise architecture (EA) driven research model to identify and empirically examine the GDAD active communication construct. This model can be used by researchers and practitioners to examine the relationships among two dimensions of GDAD active communication (effectiveness and efficiency), one antecedent that can be controlled (agile EA), and four dimensions of GDAD performance (on-time completion, on-budget completion, software functionality and software quality).
Arellano, LAP, Castro, EL, Ochoa, EA & Merigo Lindahl, JM 2016, 'Prioritized induced probabilistic OWA for dispute resolution methods', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, Univ Texas El Paso, El Paso, TX.
View/Download from: Publisher's site
Awais, M & Gill, AQ 2016, 'Enterprise IT governance: Back to basics', 25th International Conference on Information Systems Development, ISD 2016, International Conference on Information Systems Development, AIS eLibrary, Katowice, Poland, pp. 188-196.
View description>>
Enterprise IT (EIT) governance is an emerging and convoluted area in Information Technology (IT). As a subset, EIT governance operates under defined boundaries and a set of rules inherited from enterprise governance. There are a number of definitions that define EIT governance concepts. These concepts are linked in an intricate web of EIT governance. These concepts and related definitions have emerged over a period of time, either through implementation models or IT events. This marks the need for a comprehensive review and synthesis of governance concepts in the modern context of an ever-changing IT landscape. This research applied the well-known Systematic Literature Review (SLR) method. Four different databases were used to find relevant research papers. Based on the available definitions, evidence and analysis, it was found that four concepts are used more than any other: decision, organization, process and goal. The results provide a consolidated set of key concepts, their relationships and trends, which can be used as a knowledge base by researchers and practitioners for further work in this important area of EIT governance.
Bashir, MR & Gill, AQ 2016, 'Towards an IoT Big Data Analytics Framework: Smart Buildings Systems.', HPCC/SmartCity/DSS, IEEE International Conference on High Performance Computing and Communications, IEEE Computer Society, Sydney, Australia, pp. 1325-1332.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. There is a growing interest in IoT-enabled smart buildings. However, the storage and analysis of a large amount of high-speed real-time smart building data is a challenging task. There are a number of contemporary Big Data management technologies and advanced analytics techniques that can be used to deal with this challenge. There is a need for an integrated IoT Big Data Analytics (IBDA) framework to fill the research gap in the Big Data Analytics domain. This paper presents one such IBDA framework for the storage and analysis of real-time data generated from IoT sensors deployed inside the smart building. The initial version of the IBDA framework has been developed by using Python and the Big Data Cloudera platform. The applicability of the framework is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. The initial results indicate that the proposed framework is fit for the purpose and seems useful for IoT-enabled Big Data Analytics for smart buildings. The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain. This framework will be further evaluated and extended through its implementation in other domains.
Blanco-Mesa, F & Merigó, JM 2016, 'Bonferroni Means with the Adequacy Coefficient and the Index of Maximum and Minimum Level', Lecture Notes in Business Information Processing, Springer International Publishing, pp. 155-166.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. The aim of the paper is to develop new aggregation operators using Bonferroni means, OWA operators and some distance and norm measures. We introduce the BON-OWAAC and BON-OWAIMAM operators. We are able to include the adequacy coefficient and the maximum and minimum level in the same formulation with Bonferroni means and the OWA operator. The main advantages of using these operators are that they allow considering continuous aggregations, multiple comparisons between each argument and distance measures in the same formulation. The numerical example is focused on an entrepreneurial example in the sport industry in Colombia.
Blanco-Mesa, F & Merigo-Lindahl, JM 2016, 'Bonferroni distances with OWA operators', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, El Paso, TX, USA.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The aim of the paper is to develop new aggregation operators using Bonferroni means, ordered weighted averaging (OWA) operators and some distance measures. We introduce the Bonferroni-Hamming weighted distance, the Bonferroni OWA distance, and Bonferroni distances with OWA operators and weighted averages. The main advantages of using these operators are that they allow considering different aggregation contexts, multiple comparisons between each argument and distance measures in the same formulation.
Blanco-Mesa, F, Merigo Lindahl, JM & Gil-Lafuente, AM 2016, 'A bibliometric analysis of fuzzy decision making research', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, El Paso, TX, USA.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Fuzzy decision-making consists in making decisions under complex and uncertain environments where the information can be assessed with fuzzy sets and systems. The aim of this study is to review the main contributions in this field by using a bibliometric approach. To do so, the article uses a wide range of bibliometric indicators, including citations and the h-index. Moreover, it also uses the VOSviewer software in order to map the main trends in this area. The work considers the leading journals, articles, authors, institutions and countries. The results indicate that Zadeh led the origins of fuzzy research and that Ronald Yager is the most prominent author in fuzzy decision-making. The USA was the traditional leader in this field with the most significant researchers. However, in recent years this field has been receiving more attention from Asian authors, who are starting to lead it. This discipline has strong potential and the expectation for the future is that it will continue to grow.
Blezinger, D & van den Hoven, E 2016, 'Storytelling with Objects to Explore Digital Archives', Proceedings of the European Conference on Cognitive Ergonomics, ECCE '16: European Conference on Cognitive Ergonomics, ACM, Nottingham, United Kingdom.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Finding media in archives is difficult, while storytelling with photos can be fun and supports memory retrieval. Could the search for media become a natural part of the storytelling experience? This study investigates spatial interactions with objects as a means to encode information for retrieval while being embedded in the story flow. An experiment is carried out in which participants watch a short video and re-tell the story using cards, each of which shows a character or object occurring in the video. Participants arrange the cards when telling the story. We analyze what information interactions with cards carry and how this information relates to the language of storytelling. Most participants align interactions with objects with the sentences of the story, while some arrange the cards to correspond to the video scene. Spatial interactions with objects can carry information on their own or complemented by language.
Brady, F & Dyson, LE 2016, 'Exploring the Contribution of Design to Mobile Technology Uptake in a Remote Region of Australia', Culture, Technology, Communication. Common World, Different Futures, International Conference on Culture, Technology, and Communication, Springer International Publishing, London, UK, pp. 55-67.
View/Download from: Publisher's site
View description>>
© IFIP International Federation for Information Processing 2016. Some of the most remote communities in Australia have participated in a technological revolution since the arrival of mobile phone networks in 2003. We follow this journey in four largely Indigenous communities in Cape York and the Torres Strait Islands, from the first 2G network, to 3G, and finally to mobile broadband and smartphones, looking at its impact on communication, Internet access, new media use and social networking. In seeking to understand this phenomenon, we conclude that aspects of the design of the mobile system have contributed, including the flexibility of the technology to adapt to the needs of varying social groups, the small portable nature of the devices which allows them to serve a traditionally mobile people and to be kept as personal devices, a billing system which serves low-income people, and the multifunctionality of the technology which provides entertainment while also supporting their use of Facebook.
Braytee, A, Catchpoole, DR, Kennedy, PJ & Liu, W 2016, 'Balanced Supervised Non-Negative Matrix Factorization for Childhood Leukaemia Patients', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, Indianapolis, Indiana, USA, pp. 2405-2408.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Supervised feature extraction methods have received considerable attention in the data mining community due to their capability to improve the classification performance of unsupervised dimensionality reduction methods. With increasing dimensionality, several methods based on supervised feature extraction have been proposed to achieve a feature ranking, especially on microarray gene expression data. This paper proposes a method with twofold objectives: it implements a balanced supervised non-negative matrix factorization (BSNMF) to handle the class imbalance problem in supervised non-negative matrix factorization techniques, and it proposes an accurate gene ranking method based on our proposed BSNMF for microarray gene expression datasets. To the best of our knowledge, this is the first work to handle the class imbalance problem in supervised feature extraction methods. This work is part of a Human Genome project at The Children's Hospital at Westmead (TB-CHW), Australia. Our experiments indicate that the factorized components using the supervised feature extraction approach have more classification capability than the unsupervised one, but it drastically fails in the presence of the class imbalance problem. Our proposed method outperforms the state-of-the-art methods and shows promise in overcoming this concern.
Braytee, A, Liu, W & Kennedy, P 2016, 'A Cost-Sensitive Learning Strategy for Feature Extraction from Imbalanced Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Kyoto, Japan, pp. 78-86.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. In this paper, novel cost-sensitive principal component analysis (CSPCA) and cost-sensitive non-negative matrix factorization (CSNMF) methods are proposed for handling the problem of feature extraction from imbalanced data. The presence of highly imbalanced data misleads existing feature extraction techniques to produce biased features, which results in poor classification performance especially for the minor class problem. To solve this problem, we propose a cost-sensitive learning strategy for feature extraction techniques that uses the imbalance ratio of classes to discount the majority samples. This strategy is adapted to the popular feature extraction methods such as PCA and NMF. The main advantage of the proposed methods is that they are able to lessen the inherent bias of the extracted features to the majority class in existing PCA and NMF algorithms. Experiments on twelve public datasets with different levels of imbalance ratios show that the proposed methods outperformed the state-of-the-art methods on multiple classifiers.
Bremner, MJ, Montanaro, A & Shepherd, D 2016, 'Average-case complexity versus approximate simulation of commuting quantum computations', 19th Conference on Quantum Information Processing, Banff, Canada.
Brereton, M & Van den Hoven, E 2016, 'Session details: Provocations and Work-in-Progress (P-WiP)', Proceedings of the 2016 ACM Conference Companion Publication on Designing Interactive Systems, DIS '16: Designing Interactive Systems Conference 2016, ACM.
View/Download from: Publisher's site
Broekhuijsen, M, Mols, I & van den Hoven, E 2016, 'A holistic design perspective on media capturing and reliving', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania.
View/Download from: Publisher's site
Carey, B & Johnston, A 2016, 'Reflection on action in NIME research: Two complementary perspectives', Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 377-382.
View description>>
This paper discusses practice-based research in the context of live performance with interactive systems. Practice-based research is outlined in depth, with key concepts and approaches contextualised with respect to research in the NIME field. We focus on two approaches, both of which are concerned with documenting, examining and reflecting on the real-world behaviours and experiences of people and artefacts involved in the creation of new works. The first approach is primarily based on reflections by an individual performer/developer (auto-ethnography) and the second on interviews and observations. The rationales for both approaches are presented along with findings from research which applied them, in order to illustrate and explore the characteristics of both. Challenges, including the difficulty of balancing rigour and relevance and the risks of negatively impacting on creative practices, are articulated, as are the potential benefits.
Castro, EL, Ochoa, EA, Merigo Lindahl, JM & Lafuente, AMG 2016, 'Heavy Moving Averages in exchange rate forecasting', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, Univ Texas El Paso, El Paso, TX.
View/Download from: Publisher's site
Cetindamar, D 2016, 'A new role for universities: Technology transfer for social innovations', 2016 Portland International Conference on Management of Engineering and Technology (PICMET), 2016 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Honolulu, HI, USA, pp. 290-295.
View/Download from: Publisher's site
View description>>
© 2016 Portland International Conference on Management of Engineering and Technology, Inc. Universities have played a significant role in stimulating technological change and innovation; the focus has been the commercialization of technical knowledge generated within science, technology and mathematics disciplines. Universities have increasingly disseminated knowledge and integrated with industry in the form of the entrepreneurial university. This transformation of the university mission has supported university-industry-government interactions in creating commercial entrepreneurial spinoffs, while neglecting interaction with a critical stakeholder of the university: society. To our knowledge, the transfer of knowledge generated within universities into social enterprises / social entrepreneurs has not been studied in the literature. This paper presents this gap in the literature as an invitation for researchers to focus on the topic.
Chen, S, Chen, S, Wang, Z, Liang, J, Yuan, X, Cao, N & Wu, Y 2016, 'D-Map: Visual analysis of ego-centric information diffusion patterns in social media', 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, Baltimore, MD, USA, pp. 41-50.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Popular social media platforms could rapidly propagate vital information over social networks among a significant number of people. In this work we present D-Map (Diffusion Map), a novel visualization method to support exploration and analysis of social behaviors during such information diffusion and propagation on typical social media through a map metaphor. In D-Map, users who participated in reposting (i.e., resending a message initially posted by others) one central user's posts (i.e., a series of original tweets) are collected and mapped to a hexagonal grid based on their behavior similarities and in chronological order of the repostings. With additional interaction and linking, D-Map is capable of providing visual portraits of the influential users and describing their social behaviors. A comprehensive visual analysis system is developed to support interactive exploration with D-Map. We evaluate our work with real world social media data and find interesting patterns among users. Key players, important information diffusion paths, and interactions among social communities can be identified.
Chen, S, Wang, Z, Liang, J & Yuan, X 2016, 'Uncertainty-aware Visual Analytics for Exploring Human Behaviors from Heterogeneous Spatial Temporal Data', Proceedings of the Third Conference of China Visualization and Visual Analytics (ChinaVis'16), the Third Conference of China Visualization and Visual Analytics (ChinaVis'16), Changsha, China.
View description>>
When analyzing human behaviors, we need to construct the behaviors from multiple sources of data, e.g. trajectory data, transaction data, identity data, etc. The problem we face is data conflicts: different resolutions, missing data and conflicting data, which together lead to uncertainty in the spatial temporal data. Such uncertainty in the data leads to difficulties, even failure, in visual analytics tasks for analyzing people's behaviors, patterns and outliers. However, traditional automatic methods cannot solve the problems in such complex scenarios, where the uncertain and conflicting patterns are not well-defined. To solve these problems, we propose a semi-automatic approach for users to resolve the conflicts and identify the uncertainties. To be general, we summarize five types of uncertainties and solutions for conducting behavior analysis tasks. Combined with the uncertainty-aware methods, we propose a visual analytics system to analyze human behaviors, detect patterns and find outliers. Case studies on the IEEE VAST Challenge 2014 dataset confirm the effectiveness of our approach.
Chinchore, A, Xu, G & Jiang, F 2016, 'Classifying Sybil in MSNs using C4.5', 2016 INTERNATIONAL CONFERENCE ON BEHAVIORAL, ECONOMIC AND SOCIO-CULTURAL COMPUTING (BESC), IEEE/ACM International Conference on Behavioral, Economic, Socio-Cultural Computing (BESC), IEEE, Durham, NC, pp. 145-150.
de Vries, NJ, Arefin, AS, Mathieson, L, Lucas, B & Moscato, P 2016, 'Relative Neighborhood Graphs Uncover the Dynamics of Social Media Engagement', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Advanced Data Mining and Applications, Springer International Publishing, Gold Coast, Queensland, Australia, pp. 283-297.
View/Download from: Publisher's site
View description>>
In this paper, we examine whether the Relative Neighborhood Graph (RNG) can reveal related dynamics of page-level social media metrics. A statistical analysis is also provided to illustrate the application of the method in two other datasets (the Indo-European Language dataset and the Shakespearean Era Text dataset). Using social media metrics on the world’s ‘top check-in locations’ Facebook pages dataset, the statistical analysis reveals coherent dynamical patterns. In the largest cluster, the categories ‘Gym’, ‘Fitness Center’, and ‘Sports and Recreation’ appear closely linked together in the RNG. Taken together, our study validates our expectation that RNGs can provide a “parameter-free” mathematical formalization of proximity. Our approach gives useful insights into user behaviour in social media page-level metrics as well as other applications.
Erfani, SS, Abedin, B & Blount, Y 2016, 'Social support, Social belongingness, and psychological well-being: Benefits of Online healthcare community membership', Pacific Asia Conference on Information Systems, PACIS 2016 - Proceedings, Pacific Asia Conference on Information Systems, PACIS, Taiwan.
View description>>
Despite an increase in users interacting using Online Social Network Sites, the value they generate for health purposes is under-researched. Previous research has mainly focused on the capacity of Online Social Network Sites for improving social and organisational value. Yet, the value of these platforms can be investigated in other contexts, such as health. This paper studies the value of membership in health-related Online Social Network Sites, and in particular investigates how participation in such communities benefits users' psychological well-being. Twenty-five qualitative semi-structured interviews were conducted with users of the Ovarian Cancer Australia Facebook page (OCA Facebook), the exemplar online community used in this study. The participants were people who were affected by ovarian cancer and were members of the OCA Facebook community, where they exchanged information and received support. Using a multi-theory perspective to interpret the data, results showed that a sense of belongingness to a community of like-minded people, as well as receiving social support through message exchange in the community, were the two main perceived benefits of OCA online community membership. Findings also showed that most interviewees used OCA Facebook on a daily basis. While some were passive users who only read/observed the content created by others, other users actively posted content and communicated with other members. The paper concludes with implications of the results and recommendations for future studies, and proposes a qualitative theoretical framework to examine the value of online communities in a more holistic way.
Fang, XS, Sheng, QZ & Wang, X 2016, 'An Ensemble Approach for Better Truth Discovery', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 298-311.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Truth discovery is a hot research topic in the Big Data era, with the goal of identifying true values from the conflicting data provided by multiple sources on the same data items. Previously, many methods have been proposed to tackle this issue. However, none of the existing methods is a clear winner that consistently outperforms the others due to the varied characteristics of different methods. In addition, in some cases, an improved method may not even beat its original version as a result of the bias introduced by limited ground truths or different features of the applied datasets. To realize an approach that achieves better and robust overall performance, we propose to fully leverage the advantages of existing methods by extracting truth from the prediction results of these existing truth discovery methods. In particular, we first distinguish between the single-truth and multi-truth discovery problems and formally define the ensemble truth discovery problem. Then, we analyze the feasibility of the ensemble approach, and derive two models, i.e., serial model and parallel model, to implement the approach, and to further tackle the above two types of truth discovery problems. Extensive experiments over three large real-world datasets and various synthetic datasets demonstrate the effectiveness of our approach.
Gu, F, Zhang, G, Lu, J & Lin, C-T 2016, 'Concept drift detection based on equal density estimation', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 24-30.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. An important problem that remains in online data mining systems is how to accurately and efficiently detect changes in the underlying distribution of large data streams. The challenge for change detection methods is to maximise the accumulative effect of changing regions with unknown distribution, while at the same time providing sufficient information to describe the nature of the changes. In this paper, we propose a novel change detection method based on the estimation of equal density regions, with the aim of overcoming the issues of instability and inefficiency that underlie methods of predefined space partitioning schemes. Our method is general, nonparametric and requires no prior knowledge of the data distribution. A series of experiments demonstrate that our method effectively detects concept drift in single dimension as well as high dimension data, and is also able to explain the change by locating the data points that contribute most to the change. The detection result is guaranteed by statistical tests.
Gao, F & Musial-Gabrys, K 2016, 'Hybrid structure-based link prediction model', 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, San Francisco, CA, pp. 1221-1228.
View/Download from: Publisher's site
Gill, AQ & Hevary, S 2016, 'Cloud Monitoring Data Challenges: A Systematic Review.', ICONIP (1), International Conference on Neural Information Processing, Springer, Kyoto, Japan, pp. 72-79.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Organizations need to continuously monitor, source and process large amounts of operational data to optimize the cloud computing environment. The research problem is: what are the cloud monitoring data challenges, in particular for virtual CPU monitoring data? This paper adopts a Systematic Literature Review (SLR) approach to identify and report cloud monitoring data challenges. The SLR approach was applied to initially identify a large set of 1861 papers. Finally, 24 of the 1861 papers were selected as relevant and reviewed to identify the five major challenges of cloud monitoring data: monitoring technology, virtualization technology, energy, availability and performance. The results of this review are expected to help researchers and practitioners understand cloud monitoring data challenges and develop innovative techniques and strategies to deal with these challenges.
Gill, AQ, Chew, EK, Kricker, D & Bird, G 2016, 'Adaptive Enterprise Resilience Management: Adaptive Action Design Research in Financial Services Case Study.', CBI (1), IEEE Conference on Business Informatics (CBI), IEEE, Paris, pp. 113-122.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Resilience is the ability of an enterprise to absorb, recover and adapt from a disruption. Being resilient is a complex undertaking for enterprises operating in a highly dynamic environment and striving for continuous efficiency and innovation. The challenge for enterprises is to offer and run a customer-centric and interdependent large portfolio of resilient services. The fundamental research question is: how to enable service resilience in the practical enterprise resilience context? This paper addresses this important research question, and reports findings from on-going (2014-2016) research on adaptive enterprise resilience management in an Australian financial services organization (FSO). This research is being conducted using the adaptive action-design research (ADR) method to iteratively research, develop and deliver the desired resilience framework in short increments. This paper presents the overall evolved adaptive enterprise resilience management framework and its 'service resilience' element details as one of the key outcomes from the second adaptive ADR increment.
Grochow, JA, Mulmuley, KD & Qiao, Y 2016, 'Boundaries of VP and VNP', Leibniz International Proceedings in Informatics, LIPIcs, International Colloquium on Automata Languages and Programming, Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, Rome, Italy.
View/Download from: Publisher's site
View description>>
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether VP = VP̄, where VP is the class of families of polynomials that can be computed by arithmetic circuits of polynomial degree and size, and VP̄ is the class of families of polynomials that can be approximated infinitesimally closely by arithmetic circuits of polynomial degree and size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that VP̄ is not contained in VP. Towards that end, we introduce three degenerations of VP (i.e., sets of points in VP̄), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP ⊆ Newton-VP ⊆ VP* ⊆ VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating VP̄ from VP. Although we do not yet construct explicit candidates for the polynomial families in VP̄ \ VP, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be nongeneric by showing that, for many finite quivers (including some wild ones), the Newton degeneration of any generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.
Guo, Y, Zhu, J, Lu, H & Lei, G 2016, 'Design considerations of electric motors with soft magnetic composite cores', 2016 IEEE 8th International Power Electronics and Motion Control Conference (IPEMC-ECCE Asia), 2016 IEEE 8th International Power Electronics and Motion Control Conference (IPEMC 2016 - ECCE Asia), IEEE, Hefei, China, pp. 3007-3011.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Soft magnetic composite (SMC) materials possess many unique properties, which are particularly suitable for the development of novel structure electric motors for various electric drive systems. The unique properties of SMC material include three-dimensional (3-D) magnetic and thermal isotropy, very low eddy current loss, and the prospect of very low cost mass production. Therefore, the application of SMC materials in electrical appliances, particularly in electric motors, has attracted great research interest. However, SMC materials also have some drawbacks, e.g. low permeability, high hysteresis loss and low mechanical strength, and hence a direct replacement of electrical steels by SMC would not necessarily lead to satisfactory or improved motor performance. To fully explore the application potential of the SMC materials, their unique properties should be fully employed and at the same time the effects of their drawbacks should be avoided or minimized. This paper aims to present some key issues on the design of SMC electric motors based on the extensive research conducted in the past two decades by various researchers, including the authors of this paper. The key design issues are discussed and some conclusions are drawn for future effort in this area.
Han, B, Tsang, IW & Chen, L 2016, 'On the Convergence of a Family of Robust Losses for Stochastic Gradient Descent', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery (ECML-PKDD), Springer International Publishing, Riva del Garda, Italy, pp. 665-680.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. The convergence of Stochastic Gradient Descent (SGD) using convex loss functions has been widely studied. However, vanilla SGD methods using convex losses cannot perform well with noisy labels, which adversely affect the update of the primal variable in SGD methods. Unfortunately, noisy labels are ubiquitous in real world applications such as crowdsourcing. To handle noisy labels, in this paper, we present a family of robust losses for SGD methods. By employing our robust losses, SGD methods successfully reduce negative effects caused by noisy labels on each update of the primal variable. We not only reveal the convergence rate of SGD methods using robust losses, but also provide the robustness analysis on two representative robust losses. Comprehensive experimental results on six real-world datasets show that SGD methods using robust losses are obviously more robust than other baseline methods in most situations with fast convergence.
Hao, P, Zhang, G & Lu, J 2016, 'Enhancing cross domain recommendation with domain dependent tags', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, CANADA, pp. 1266-1273.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. One challenge in recommender systems is dealing with data sparsity. To handle this issue, social tags are utilized to bring disjoint domains together for knowledge transfer in cross-domain recommendation. The most intuitive way is to use common tags that are present in both the source and target domains. However, it is difficult to obtain a strong domain connection by exploiting a small number of common tags, especially when the tagging data in the target domain is too scarce to share enough common tags with the source domain. In this paper we propose a novel framework, called Enhanced Tag-induced Cross Domain Collaborative Filtering (ETagiCDCF), to integrate the rich information contained in domain-dependent tags into the recommendation procedure. We perform experiments on two public datasets and compare with several single and cross domain recommendation approaches; the results demonstrate that ETagiCDCF can effectively address data sparseness and improve recommendation performance.
Hazber, MAG, Li, R, Xu, G & Alalayah, KM 2016, 'An Approach for Automatically Generating R2RML-Based Direct Mapping from Relational Databases', Communications in Computer and Information Science, International Conference of Young Computer Scientists, Engineers and Educators (ICYCSEE), Springer Singapore, Harbin, China, pp. 151-169.
View/Download from: Publisher's site
View description>>
For integrating relational databases (RDBs) into semantic web applications, the W3C RDB2RDF Working Group recommended two approaches, Direct Mapping (DM) and R2RML. The DM provides a set of mapping rules according to the RDB schema, while R2RML allows users to manually define mappings according to an existing target ontology. The major problem in using R2RML is the effort required to create R2RML mapping documents manually. This may lead to many mistakes in the R2RML documents and requires domain experts. In this paper, we propose and implement an approach to generate R2RML mapping documents automatically from an RDB schema. The R2RML mapping reflects the behavior of the DM specification and allows any R2RML parser to generate a set of RDF triples from relational data. The input of the generation approach is a DBsInfo class that is automatically generated from the relational schema. An experimental prototype is developed and shows the effectiveness of our approach's algorithms.
Herron, D, Andalibi, N, Haimson, O, Moncur, W & van den Hoven, E 2016, 'HCI and Sensitive Life Experiences', PROCEEDINGS OF THE NORDICHI '16: THE 9TH NORDIC CONFERENCE ON HUMAN-COMPUTER INTERACTION - GAME CHANGING DESIGN, Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, Gothenburg, Sweden, pp. 1-3.
View/Download from: Publisher's site
View description>>
HCI research has identified a number of life events and life transitions which see individuals in a vulnerable state, such as gender transition, domestic abuse, romantic relationship dissolution, bereavement, and even genocide. Although these life events differ across the human lifespan, considering them as a group of 'sensitive life experiences', and exploring the similarities and differences in how we approach those experiences as researchers could be invaluable in generating a better understanding of them. In this workshop, we aim to identify current opportunities for, and barriers to, the design of social computing systems that support people during sensitive life events and transitions. Participants will take part in activities centred around exploring the similarities and differences between their own and others' research methods and results, drawing on their own experiences in discussions around carrying out research in these sensitive contexts.
Herron, D, Moncur, W & van den Hoven, E 2016, 'Digital Possessions After a Romantic Break Up', PROCEEDINGS OF THE NORDICHI '16: THE 9TH NORDIC CONFERENCE ON HUMAN-COMPUTER INTERACTION - GAME CHANGING DESIGN, Nordic Conference on Human-Computer Interaction (NordiCHI), Association Computing Machinery Digital Library, Gothenburg, Sweden.
View/Download from: Publisher's site
View description>>
© 2016 ACM. With technology becoming more pervasive in everyday life, it is common for individuals to use digital media to support the enactment and maintenance of romantic relationships. Partners in a relationship may create digital possessions frequently. However, after a relationship ends, individuals typically seek to disconnect from their ex-partner. This becomes difficult due to the partners' interwoven digital presence and digital possessions. In this paper, we report on a qualitative study exploring individuals' experiences of relationship break up in a digital context, and discuss their attitudes towards digital possessions from those relationships. Five main themes emerged: digital possessions that sustain relationships, comparing before and after, tainted digital possessions, digital possessions and invasions of privacy, and involved and emotional reminiscing. Design opportunities were identified in managing attitudes towards digital possessions, disconnecting and reconnecting, and encouraging awareness of digital possessions.
Holland, S, McPherson, AP, Mackay, WE, Wanderley, MM, Gurevich, MD, Mudd, TW, O Modhrain, S, Wilkie, KL, Malloch, JW, Garcia, J & others 2016, 'Music and HCI', Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, ACM, pp. 3339-3346.
Hollmén, J, Spiliopoulou, M, Kane, B, Marshall, A, Soda, P, Antani, S & McGregor, C 2016, 'Preface', 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, pp. xiii-xiv.
View/Download from: Publisher's site
Huo, H, Liu, X, Li, J, Yang, H, Peng, D & Chen, Q 2016, 'A Weighted K-AP Query Method for RSSI Based Indoor Positioning', Springer International Publishing, pp. 150-163.
View/Download from: Publisher's site
Hussain, W, Hussain, F & Hussain, O 2016, 'Allocating optimized resources in the cloud by a viable SLA model', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 1282-1287.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A cloud business environment comprises service providers and service consumers. Services are supplied through a Service Level Agreement (SLA) which defines all deliverables, commitments, obligations, QoS, violation penalties etc. that help a service provider and a service consumer to execute their business transactions. The primary aim of a service provider is to fulfill its commitment to a consumer by forming a viable SLA that wisely assigns the appropriate amount of resources to a requesting consumer. In this paper, we propose a viable SLA model that helps a service provider form a viable agreement with a consumer, based on its previous resource usage profile. The model uses a Fuzzy Inference System and takes the reliability and the contract duration of a consumer as input to calculate the suitability of this consumer, which is also used as input along with the risk propensity of a service provider to determine the amount of resources to offer to a consumer. We evaluate our approach and find that by using an optimized viable SLA model, providers are able to allocate an appropriate amount of resources to avoid an SLA violation.
Hussain, W, Hussain, F & Hussain, O 2016, 'QoS Prediction Methods to Avoid SLA Violation in Post-Interaction Time Phase', PROCEEDINGS OF THE 2016 IEEE 11TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), IEEE Conference on Industrial Electronics and Applications, IEEE, Hefei, China, pp. 32-37.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to the dynamic nature of cloud computing, it is very important for small to medium scale service providers to optimally assign computing resources and apply accurate prediction methods that enable the best resource management. The choice of an ideal quality of service (QoS) prediction method is one of the key factors in business transactions that help a service provider manage the risk of SLA violations by taking appropriate and immediate action to reduce the occurrence of, or avoid, operations that may cause risk. In this paper we analyze ten prediction methods, including neural network methods, stochastic methods and others, to predict time series cloud data, and compare their prediction accuracy over five time intervals. We use Cascade Forward Backpropagation, Elman Backpropagation, Generalized Regression, NARX, Simple Exponential Smoothing, Simple Moving Average, Weighted Moving Average, Extrapolation, Holt-Winters Double Exponential Smoothing and ARIMA, and predict resource usage at 1, 2, 3, 4 and 5 hours into the future. We use Root Mean Square Error and Mean Absolute Deviation as benchmarks for their prediction accuracy. From the prediction results we observed that the ARIMA method provides the most optimal prediction results for all time intervals.
Hussain, W, Hussain, FK & Hussain, OK 2016, 'SLA Management Framework to Avoid Violation in Cloud', NEURAL INFORMATION PROCESSING, ICONIP 2016, PT III, International Conference on Neural Information Processing, Springer International Publishing, Kyoto, Japan, pp. 309-316.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Cloud computing is an emerging technology with broad scope to offer a wide range of services that revolutionize the existing IT infrastructure. This internet-based technology offers services such as on-demand service, shared resources, multitenant architecture, scalability, portability and elasticity, giving a consumer the illusion of having infinite resources through virtualization. Because of the elastic nature of the cloud, it is very critical for a service provider, especially a small/medium cloud provider, to form a viable SLA with a consumer to avoid any service violation. The SLA is a key agreement that needs to be intelligently formed and monitored, and if there is a chance of a service violation then the provider should be informed to take the necessary remedial action to avoid the violation. In this paper we propose our viable SLA management framework, which comprises two time phases: a pre-interaction time phase and a post-interaction time phase. Our viable SLA framework helps a service provider make a decision on a consumer request, offer the appropriate amount of resources to the consumer, predict QoS parameters, monitor run-time QoS parameters and take appropriate action to mitigate risks when there is a variation between the predicted and agreed QoS parameters.
Hussain, W, Zowghi, D, Clear, T, MacDonell, S & Blincoe, K 2016, 'Managing Requirements Change the Informal Way: When Saying ‘No’ is Not an Option', 2016 IEEE 24th International Requirements Engineering Conference (RE), 2016 IEEE 24th International Requirements Engineering Conference (RE), IEEE, Beijing, China, pp. 126-135.
View/Download from: Publisher's site
View description>>
Software has always been considered as malleable. Changes to software requirements are inevitable during the development process. Despite many software engineering advances over several decades, requirements changes are a source of project risk, particularly when businesses and technologies are evolving rapidly. Although effectively managing requirements changes is a critical aspect of software engineering, conceptions of requirements change in the literature and approaches to their management in practice still seem rudimentary. The overall goal of this study is to better understand the process of requirements change management. We present findings from an exploratory case study of requirements change management in a globally distributed setting. In this context we noted a contrast with the traditional models of requirements change. In theory, change control policies and formal processes are considered as a natural strategy to deal with requirements changes. Yet we observed that "informal requirements changes" (InfRc) were pervasive and unavoidable. Our results reveal an equally ’natural’ informal change management process that is required to handle InfRc in parallel. We present a novel model of requirements change which, we argue, better represents the phenomenon and more realistically incorporates both the informal and formal types of change.
Ijaz, K, Wang, Y, Milne, D & Calvo, RA 2016, 'Competitive vs Affiliative Design of Immersive VR Exergames', SERIOUS GAMES, JCSG 2016, 2nd International Joint Conference on Serious Games (JCSG), Springer International Publishing, Griffith Univ, Brisbane, AUSTRALIA, pp. 140-150.
View/Download from: Publisher's site
Ijaz, K, Wang, Y, Milne, D & Calvo, RA 2016, 'VR-Rides: Interactive VR Games for Health', SERIOUS GAMES, JCSG 2016, 2nd International Joint Conference on Serious Games (JCSG), Springer International Publishing, Griffith Univ, Brisbane, AUSTRALIA, pp. 289-292.
View/Download from: Publisher's site
Inan, DI, Beydoun, G & Opper, S 2015, 'Towards knowledge sharing in disaster management: An agent oriented knowledge analysis framework', Proceedings of the Australasian Conference on Information Systems 2015, Australasian Conference on Information Systems 2015.
View description>>
Disaster Management (DM) is a complex set of interrelated activities. The activities are often knowledge intensive and time sensitive. Sharing the required knowledge in a timely manner is critical for DM. In developed countries, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DMP) that can be accessed as needs arise. However, accessing the appropriate plan in a timely manner and sharing activities between plans often requires domain knowledge and intimate knowledge of the plans in the first place. In this paper, we introduce an agent-based knowledge analysis method to convert DMPs into a collection of knowledge units that can be stored in a unified repository. The repository of DM actions then enables the mixing and matching of knowledge between different plans. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). We use the flood management plans used by the SES in NSW to illustrate and give a preliminary validation of the approach. It is illustrated using DMPs along the flood-prone Murrumbidgee River in central NSW.
Inibhunu, C & McGregor, C 2016, 'Machine learning model for temporal pattern recognition', 2016 IEEE EMBS International Student Conference (ISC), 2016 IEEE EMBS International Student Conference (ISC), IEEE.
View/Download from: Publisher's site
View description>>
Temporal abstraction and data mining are two research fields that have tried to synthesize time-oriented data and expose the hidden relationships that may exist between time-oriented events. In clinical settings, the ability to uncover hidden relationships in patient data as they unfold could help save a life by aiding the detection of conditions that are not obvious to clinicians and healthcare workers. Understanding these hidden patterns is a huge challenge due to the exponential search space unique to time-series data. In this paper, we propose a temporal pattern recognition model based on dimension reduction and similarity measures, thereby maintaining the temporal nature of the raw data.
Izaddoost, A & McGregor, C 2016, 'Enhance Network Communications in a Cloud-Based Real-Time Health Analytics Platform Using SDN', 2016 IEEE International Conference on Healthcare Informatics (ICHI), 2016 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Chicago, IL, pp. 388-391.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Transferring collected physiological data from health facilities to a cloud-based health analytics platform can be seen as an efficient and cost-effective way to provide clinical support to rural and remote healthcare centres from urban-based specialists. A cloud-based healthcare platform reduces the need for patient transfers prompted by a lack of local clinical experts, and for consultative support given over the phone. However, transferring physiological data streams through a data path of insufficient quality and unsatisfactory condition may negatively impact real-time data processing. To address this issue, we study the benefit of using software-defined networking (SDN) technology. SDN, an emerging networking paradigm, can be employed to manage and control network conditions and apply desired policies. This research uses the significant features of SDN technology to transfer physiological data streams through an alternative, better-quality path rather than the congested predetermined shortest path, in order to enhance data transfer reliability and improve real-time data processing quality.
Jalali, R, Dauda, A, El-Khatib, K, McGregor, C & Surti, C 2016, 'An architecture for health data collection using off-the-shelf health sensors', 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
Nowadays many people, not only those with health problems, are becoming more health conscious. With the advent of sensor-based technologies, it has become possible to create wearable wireless biometric sensor networks, known as Body Sensor Networks (BSNs), which allow people to collect their health data and send it remotely for further analysis and storage. Research has shown that the use of BSNs enables remote wireless diagnosis of various health conditions. In this paper, we propose a novel layered architecture for a smart healthcare system in which health community service providers, patients, doctors and hospitals have access to real-time data gathered by various sensory mechanisms. An experimental case study has been implemented for evaluation. Early results show the benefits of this system in improving the quality of health care.
Juang, C-F & Chang, Y-C 2016, 'Data-driven interpretable fuzzy controller design through multi-objective genetic algorithm', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE.
View/Download from: Publisher's site
Kamaleswaran, R, James, A, Collins, C & McGregor, C 2016, 'CoRAD: Visual Analytics for Cohort Analysis', 2016 IEEE International Conference on Healthcare Informatics (ICHI), 2016 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Chicago, IL, pp. 517-526.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, we introduce a novel dynamic visual analytic tool called the Cohort Relative Aligned Dashboard (CoRAD). We present the design components of CoRAD, along with alternatives that led to the final instantiation. We also present an evaluation involving expert clinical researchers, comparing CoRAD against an existing analytics method. The results of the evaluation show CoRAD to be more usable and useful for the target user. The relative alignment of physiologic data to clinical events was found to be a highlight of the tool. Clinical experts also found the interactive selection and filter functions to be useful in reducing information overload. Moreover, CoRAD was also found to allow clinical researchers to generate alternative hypotheses and test them in vivo.
Kang, G, Li, J & Tao, D 2016, 'Shakeout: A New Regularized Deep Neural Network Training Scheme.', AAAI, AAAI Conference on Artificial Intelligence, AAAI Press, Phoenix, USA, pp. 1751-1757.
View description>>
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. The invention of effective training techniques largely contributes to this success. The so-called 'Dropout' training scheme is one of the most powerful tools for reducing over-fitting. From the statistical point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or reverse the contributions of each unit to the next layer. We show that our scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, which has been proved effective by Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on the real-life image datasets MNIST and CIFAR-10 show that Shakeout deals with over-fitting effectively.
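The enhance-or-reverse rule described in the abstract can be made concrete. The sketch below is one plausible NumPy reading of the scheme, not the authors' released code: with probability τ an input unit's outgoing weights are reversed to -c·sign(W); otherwise they are rescaled so that the expectation over the random mask recovers W exactly. Setting c = 0 reduces it to standard inverted Dropout; τ and c here are illustrative placeholders.

```python
import numpy as np

def shakeout_weights(W, tau=0.5, c=0.1, rng=None):
    """Randomly perturb a weight matrix per input unit, Shakeout-style.

    W has shape (n_in, n_out); one Bernoulli draw per input unit decides
    whether that unit's outgoing weights are 'reversed' (-c * sign(W)) or
    'enhanced' (W/(1-tau) + c*tau/(1-tau) * sign(W)). The mixture is
    unbiased: E[shakeout_weights(W)] == W.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = (rng.random(W.shape[0]) < 1.0 - tau)[:, None]  # per-unit mask
    s = np.sign(W)
    enhanced = W / (1.0 - tau) + (c * tau / (1.0 - tau)) * s
    reversed_ = -c * s
    return np.where(keep, enhanced, reversed_)
```

Averaging many draws recovers the original weights, which is the property that lets the random scheme act as an implicit L1 + L2 penalty during training.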
Khalili, SM, Babagolzadeh, M, Yazdani, M, Saberi, M & Chang, E 2016, 'A Bi-objective Model for Relief Supply Location in Post-Disaster Management', 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), IEEE, Ostrava, CZECH REPUBLIC, pp. 428-434.
View/Download from: Publisher's site
Kocaballi, AB & Yorulmaz, Y 2016, 'Performative Photography as an Ideation Method', Proceedings of the 2016 ACM Conference on Designing Interactive Systems, DIS '16: Designing Interactive Systems Conference 2016, ACM, Queensland Univ Technol, Brisbane, AUSTRALIA, pp. 1083-1095.
View/Download from: Publisher's site
Kolagani, N, Gray, S & Voinov, A 2016, 'Session D1: Tools and methods of participatory modelling', Environmental Modelling and Software for Supporting a Sustainable Future, Proceedings - 8th International Congress on Environmental Modelling and Software, iEMSs 2016, p. 804.
Kolamunna, H, Hu, Y, Perino, D, Thilakarathna, K, Makaroff, D, Guan, X & Seneviratne, A 2016, 'AFit', Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM.
View/Download from: Publisher's site
Kolamunna, H, Hu, Y, Perino, D, Thilakarathna, K, Makaroff, D, Guan, X & Seneviratne, A 2016, 'AFV', Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM.
View/Download from: Publisher's site
Kong, Y, Zhang, M & Ye, D 2016, 'A Group Task Allocation Strategy in Open and Dynamic Grid Environments', Studies in Computational Intelligence, Springer International Publishing, pp. 121-139.
View/Download from: Publisher's site
View description>>
To address the problem of group task allocation with time constraints in open and dynamic grid environments, this paper proposes a decentralised indicator-based combinatorial auction strategy for group task allocation. In the proposed strategy, both resource providers and consumers are modeled as intelligent agents. All agents are limited to communicating with their neighbour agents; the proposed strategy is therefore decentralised. In addition, the proposed strategy allows agents to enter and leave the grid environment freely, and is robust to the dynamism and openness of grid environments. Tasks in the proposed strategy have deadlines and might need the collaboration of a group of self-interested providers to be executed. The experimental results demonstrate that the proposed strategy outperforms a well-known decentralised task allocation strategy in terms of success rate, individual utility of the involved agents and the speed of task allocation.
Korhonen, JJ, Lapalme, J, McDavid, D & Gill, AQ 2016, 'Adaptive Enterprise Architecture for the Future: Towards a Reconceptualization of EA.', CBI (1), IEEE Conference on Business Informatics (CBI), IEEE, Paris, France, pp. 272-281.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In some conventional definitions, Enterprise Architecture (EA) is conceived as a descriptive overview of the enterprise, while in other views EA is seen as a prescriptive framework of principles and models that helps translate business strategy into enterprise change. The conceptualizations of EA also vary in scope. There is an increasing recognition of EA as a systemic, enterprise-wide capability encompassing all relevant facets of the organization, transcending the traditional IT-centric view. However, we argue that none of the conventional conceptualizations of EA is adaptive in the face of today's complex environment. We hold that an adaptive EA must go beyond a single organization and fully appreciate an enterprise-in-environment, ecosystemic perspective. Drawing on the heritage of Open Socio-Technical Systems Design and adopting the 'three schools of thought' as a meta-paradigmatic backdrop, the paper features four different views from long-time scholar-practitioners, who discuss what an adaptive enterprise architecture would entail. Integration of these views paints a radically reconceptualized picture of enterprise architecture for the future. With this paper, we want to lay a foundation for a debate on the need for alternative conceptualizations, manifestations and research agendas for enterprise architecture.
Kuppili Venkata, S, Keppens, J & Musial, K 2016, 'Agent Based Simulation to Evaluate Adaptive Caching in Distributed Databases', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 455-462.
View/Download from: Publisher's site
View description>>
Caching frequently used data is a common practice to improve query performance in database systems. But traditional cache management algorithms prove insufficient in distributed environments where groups of users require similar or related data from multiple databases. Repeated data transfers can become a bottleneck, leading to long query response times and high resource utilization. Our work focuses on adaptive algorithms that decide the optimal grain of data to be cached, and on cache refreshment techniques that reduce data transfers. In this paper, we present an agent-based simulation to investigate and in consequence improve cache management in the distributed database environment. Dynamic grain sizes and decisions on cache refreshment are made as a result of coordination and interaction between agents. Initial results show better response time and higher data availability compared to traditional caching techniques.
Kuznecova, T & Voinov, AA 2016, 'A conceptual framework for an agricultural agent-based model with a two-level social component: Modeling farmer groups', Environmental Modelling and Software for Supporting a Sustainable Future, Proceedings - 8th International Congress on Environmental Modelling and Software, iEMSs 2016, pp. 1045-1053.
View description>>
In the last decade, collective actions within smallholder groups and cooperatives have been promoted by various development programs and projects. However, to develop appropriate programs and policies aimed at supporting cooperation among farmers, an approach is required that can reflect the dynamics of an agricultural system resulting from decision-making and interactions between elements at different levels and scales. In this study, we focus on groups of smallholders organizing for collective crop production and/or marketing. Our aim is to provide an approach and a tool to gain a deeper insight into how cooperative groups emerge and perform under different conditions and objective functions. An agent-based model will be built as the core of such a tool. The main difference from existing agricultural models is that we consider at least two levels of social agents and corresponding decision-making categories: individual and collective. The collective level refers to a dynamic cooperative group or network emerging as a higher-level agent from the individual agents. Moreover, we seek trade-offs between simplicity and a more realistic representation of social agent behavior, compared to a purely rational economic optimization approach. We start with a conceptual model to represent the system of interest. More specifically, in this model we: i) identify system components and interactions between them at different levels; ii) explore the applicability of heuristics-based approaches, such as Consumat (Jager, 2000), for individual decision-making and agents' transition to collective actions, when enriched with various socio-economic, spatial and environmental influencing factors; iii) explore ways to represent collective activities and decision-making in groups. The conceptual model, further combined with a land use/land cover and crop productivity framework, will be used as a prototype implementation to study emergence and performance of farmer g...
Lanese, I & Devitt, S 2016, 'Preface', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
Li, Q, Qiao, M, Bian, W & Tao, D 2016, 'Conditional Graphical Lasso for Multi-label Image Classification', 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Las Vegas, Nevada, United States, pp. 2977-2986.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Multi-label image classification aims to predict multiple labels for a single image which contains diverse content. By utilizing label correlations, various techniques have been developed to improve classification performance. However, existing methods either neglect image features when exploiting label correlations or lack the ability to learn image-dependent conditional label structures. In this paper, we develop conditional graphical Lasso (CGL) to handle these challenges. CGL provides a unified Bayesian framework for structure and parameter learning conditioned on image features. We formulate multi-label prediction as a CGL inference problem, which is solved by a mean field variational approach. Meanwhile, CGL learning is efficient due to a tailored proximal gradient procedure applying the maximum a posteriori (MAP) methodology. CGL performs competitively for multi-label image classification on the benchmark datasets MULAN scene, PASCAL VOC 2007 and PASCAL VOC 2012, compared with state-of-the-art multi-label classification algorithms.
Li, X, Xiong, J, Liu, B, Gui, L & Qiu, M 2016, 'A capacity improving and energy saving scheduling scheme in push-based converged wireless broadcasting and cellular networks', 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper proposes a capacity-improving and energy-saving scheduling scheme for push-based Converged Wireless Broadcasting and Cellular Networks (CWBCN). We maximize the network capacity and alleviate request congestion by broadcasting/multicasting the most popular services and caching them locally on the user side for further requests. In the proposed scheme, we first introduce a UE (User Equipment)-based caching mechanism that considers both the frequency and recency of popular services. Simulations show that the proposed mechanism brings a significant improvement in the capacity of the CWBCN. Based on this mechanism, a sleep-awake algorithm is proposed to further reduce the energy consumption of the UEs. Simulations show that the proposed algorithm reduces energy consumption in the converged network by 20%-30% compared to the traditional one.
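A UE-side cache that weighs both access frequency and recency can be illustrated with a toy eviction policy. The sketch below is a hypothetical scoring scheme (access count discounted by an exponential age decay), not the paper's actual mechanism; the class name, `capacity` and `half_life` are all illustrative assumptions.

```python
class PopularityCache:
    """Toy service cache scoring entries by frequency blended with recency."""

    def __init__(self, capacity, half_life=100.0):
        self.capacity = capacity
        self.half_life = half_life  # ticks for a score to halve with age
        self.clock = 0
        self.entries = {}  # service id -> [access_count, last_access_tick]

    def access(self, key):
        """Record an access; return True on a cache hit, False on a miss."""
        self.clock += 1
        if key in self.entries:
            self.entries[key][0] += 1
            self.entries[key][1] = self.clock
            return True
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[key] = [1, self.clock]  # miss: fetch, then cache
        return False

    def _score(self, key):
        freq, last = self.entries[key]
        age = self.clock - last
        return freq * 0.5 ** (age / self.half_life)  # decayed frequency

    def _evict(self):
        victim = min(self.entries, key=self._score)
        del self.entries[victim]
```

A frequently and recently accessed service keeps a high score, so a one-off request is the first candidate for eviction, which is the intuition behind combining frequency with recency.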
Lister, R 2016, 'Toward a Developmental Epistemology of Computer Programming', Proceedings of the 11th Workshop in Primary and Secondary Computing Education, WiPSCE '16: 11th Workshop in Primary and Secondary Computing Education, ACM, Münster, Germany.
View/Download from: Publisher's site
View description>>
This paper was written as a companion to my keynote address at the 11th Workshop in Primary and Secondary Computing Education (WiPSCE 2016). The paper outlines my own research on how novices learn to program. Any reader whose interest has been piqued may pursue further detail in the papers cited. I begin by explaining my philosophical position. In making that explanation, I do not claim that it is the only right position; on the contrary, I allude to other philosophical positions that I regard as complementary to my own. The academic warfare between these positions is pointless and counterproductive; all the established positions have something positive to offer. Having established my position, I then go on to argue that the work of Jean Piaget, and subsequent neo-Piagetians, offers useful insight into how children learn to program computers.
Liu, A, Zhang, G, Lu, J, Lu, N & Lin, C-T 2016, 'An Online Competence-Based Concept Drift Detection Algorithm', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Hobart, TAS, Australia, pp. 416-428.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. The ability to adapt to new learning environments is a vital feature of contemporary case-based reasoning systems. It is imperative that decision makers know when and how to discard outdated cases and apply new cases to perform smart maintenance operations. Competence-based empirical distance has recently been proposed as a measurement that can estimate the difference between case sample sets without knowing the actual case distributions. It is reportedly one of the most accurate drift detection algorithms on both synthetic and real-world data sets. However, as the construction of competence models has to retain every case in memory, it is not suitable for online drift detection. In addition, the high computational complexity O(n^2) also limits its practical application, especially when dealing with large-scale data sets under time constraints. In this paper, therefore, we propose a space-based online case grouping strategy, and a new case group enhanced competence distance (CGCD), to address these issues. The experimental results show that the proposed strategy and related algorithms significantly improve the efficiency of the current leading competence-based drift detection algorithm.
Liu, B, Zhou, W, Jiang, J & Wang, K 2016, 'K-Source: Multiple source selection for traffic offloading in mobile social networks', 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), IEEE, Yangzhou, PEOPLES R CHINA.
View/Download from: Publisher's site
Liu, C & Chen, L 2016, 'Summarizing uncertain transaction databases by Probabilistic Tiles', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, pp. 4375-4382.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Transaction data mining is ubiquitous in various domains and has been researched extensively. In recent years, observing that uncertainty is inherent in many real world applications, uncertain data mining has attracted much research attention. Among the research problems, summarization is important because it produces concise and informative results, which facilitates further analysis. However, there are few works exploring how to effectively summarize uncertain transaction data. In this paper, we formulate the problem of summarizing uncertain transaction data as Minimal Probabilistic Tile Cover Mining, which aims to find a high-quality probabilistic tile set covering an uncertain database with minimal cost. We define the concept of Probabilistic Price and Probabilistic Price Order to evaluate and compare the quality of tiles, and propose a framework to discover the minimal probabilistic tile cover. The bottleneck is to check whether a tile is better than another according to the Probabilistic Price Order, which involves the computation of a joint probability. We prove that it can be decomposed into independent terms and calculated efficiently. Several optimization techniques are devised to further improve the performance. Experimental results on real world datasets demonstrate the conciseness of the produced tiles and the effectiveness and efficiency of our approach.
Liu, DYT, Richards, D, Dawson, P, Froissard, J-C & Atif, A 2016, 'Knowledge Acquisition for Learning Analytics: Comparing Teacher-Derived, Algorithm-Derived, and Hybrid Models in the Moodle Engagement Analytics Plugin', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 183-197.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. One of the promises of big data in higher education (learning analytics) is being able to accurately identify and assist students who may not be engaging as expected. These expectations, distilled into parameters for learning analytics tools, can be determined by human teacher experts or by algorithms themselves. However, there has been little work done to compare the power of knowledge models acquired from teachers and from algorithms. In the context of an open source learning analytics tool, the Moodle Engagement Analytics Plugin, we examined the ability of teacher-derived models to accurately predict student engagement and performance, compared to models derived from algorithms, as well as hybrid models. Our preliminary findings, reported here, provide evidence for the fallibility and strength of teacher- and algorithm-derived models, respectively, and highlight the benefits of a hybrid approach to model- and knowledge-generation for learning analytics. A human-in-the-loop solution is therefore suggested as a possible optimal approach.
Liu, Y, Huang, ML, Liang, J & Huang, W 2016, 'Facial Feature Extraction and Recognition for Traditional Chinese Physiognomy', 2016 20th International Conference Information Visualisation (IV), 2016 20th International Conference Information Visualisation (IV), IEEE, Lisbon, Portugal, pp. 408-412.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We propose a novel method for estimating personality based on Chinese physiognomy. The proposed solution combines ancient and modern physiognomy to summarize the corresponding relations between personality and facial features, and models a baseline to shape the facial features. We compute the histogram of the image, searching for threshold values to create a binary image in an adaptive way. The two-pass connected component method identifies the feature regions. We encode the binary image to remove noise points, so that the resulting connected image provides a better result. The method was tested on the ORL face database.
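The two-pass connected component method mentioned in the abstract is a standard labelling algorithm. Below is a minimal pure-Python version with union-find equivalence resolution; 4-connectivity is an assumption on my part, since the abstract does not specify which connectivity the authors used.

```python
def two_pass_label(binary):
    """Two-pass connected-component labelling (4-connectivity) of a
    binary image given as a list of rows of 0/1 values.
    Returns a label image with components numbered 1, 2, ..."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}  # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # Pass 1: assign provisional labels, record label equivalences.
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            neighbours = [l for l in (up, left) if l]
            if not neighbours:
                labels[y][x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[y][x] = min(neighbours)
                if len(neighbours) == 2:
                    union(up, left)
    # Pass 2: replace provisional labels with canonical component ids.
    canon = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                root = find(labels[y][x])
                canon.setdefault(root, len(canon) + 1)
                labels[y][x] = canon[root]
    return labels
```

The union-find step is what makes two passes sufficient: U-shaped regions pick up two provisional labels on the first pass, and the second pass collapses them into one component.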
Liu, Y-T, Wu, S-L, Chou, K-P, Lin, Y-Y, Lu, J, Zhang, G, Lin, W-C & Lin, C-T 2016, 'Driving fatigue prediction with pre-event electroencephalography (EEG) via a recurrent fuzzy neural network', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 2488-2494.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We propose an electroencephalography (EEG) prediction system based on a recurrent fuzzy neural network (RFNN) architecture to assess drivers' fatigue levels in a virtual-reality (VR) dynamic driving environment. Prediction of fatigue levels is a crucial and arduous biomedical issue for driving safety, which has attracted growing attention from the research community in the recent past. Meanwhile, aided by the ease of measuring EEG signals, many EEG-based brain-computer interfaces (BCIs) have been developed for use in real-time mental assessment. In the literature, EEG signals are severely blended with stochastic noise; therefore, the performance of BCIs is constrained by low resolution in recognition tasks. For this reason, independent component analysis (ICA) is usually used to find a source mapping from original data that has been blended with unrelated artificial noise. However, the mechanism of ICA cannot be used in real-time BCI design. To overcome this bottleneck, the proposed system utilizes a recurrent self-evolving fuzzy neural network (RSEFNN) to increase memory capability for adaptive noise cancellation when assessing drivers' mental states during a car driving task. The experimental results, obtained without the use of an ICA procedure, indicate that the proposed RSEFNN model retains superior performance compared with state-of-the-art models.
Lu, H, Zhang, K, Xiao, L & Wang, C 2016, 'A Hybrid Model for Short-Term Wind Speed Forecasting Based on Non-Positive Constraint Combination Theory', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), World Scientific, Roubaix, France, pp. 240-245.
View/Download from: Publisher's site
View description>>
© 2016 by World Scientific Publishing Co. Pte. Ltd. Short-term wind speed forecasting plays an irreplaceable role in the efficient management of wind energy systems, and accurate forecasting results provide effective future plans for operators of utilities and wind energy systems. Aiming at improving the accuracy of short-term wind forecasting, this paper presents a new forecasting model based on non-positive constraint combination theory. In this model, a modified optimization algorithm is used to optimize the weight coefficients of the constituent models under the non-positive constraint combination theory. The combined model is tested using three sets of 10-min wind speed data from real-world wind farms. The testing results show that the forecasting accuracy of the new model is significantly better than that of the constituent models.
Lumor, T, Chew, E & Gill, AQ 2016, 'Exploring the Role of Enterprise Architecture in IS-enabled OT: An EA Principles Perspective.', EDOC Workshops, Workshop in conjunction with the IEEE International Enterprise Distributed Object Computing Conference, IEEE Computer Society, Vienna, Austria, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Although EA principles have received considerable attention in recent years, little is still known about how EA principles can be used to govern the transformation of the Information Systems-enabled organization. In this research-in-progress paper, we communicate our initial step towards answering the sub-question: how does enforcing EA principles contribute to IS-enabled OT? Based on a comprehensive literature review, we initially propose five testable hypotheses and a research model, which is a pre-requisite to developing a data-driven theory for this important area of research. It is anticipated that the ensuing theory will provide a basis for further research studying the impact of EA on IS-enabled OT. The tested research model will also provide guidance to practitioners on how to effectively design and use EA principles in managing transformative changes caused by IS within their organizations and overall industry sectors.
Madi, BMA, Sheng, QZ, Yao, L, Qin, Y & Wang, X 2016, 'PLMwsp: Probabilistic Latent Model for Web Service QoS Prediction', 2016 IEEE International Conference on Web Services (ICWS), 2016 IEEE International Conference on Web Services (ICWS), IEEE, San Francisco, CA, pp. 623-630.
View/Download from: Publisher's site
Manongdo, R & Xu, G 2016, 'Applying client churn prediction modeling on home-based care services industry', 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Durham, NC, pp. 167-172.
View/Download from: Publisher's site
View description>>
Client churn prediction modelling is widely acknowledged as an effective way of realizing customer life-time value, especially in service-oriented industries and under a competitive business environment. A churn model allows the targeting of clients for retention campaigns and is a critical component of customer relationship management (CRM) and business intelligence systems. There are numerous statistical models and techniques applied successfully in data mining projects across various industries. While there is literature on prediction modelling for hospital health care services, none exists for home-based care services. In this study, logistic regression, random forest and C5.0 decision tree models were used to build a binary client churn classifier for a home-based care services company based in Australia. All models yielded prediction accuracies over 90%, with tree-based classifiers marginally higher, and the C5.0 model was found to be suitable for use in this industry. This study also showed that the existing client satisfaction measures currently in use by the company do not adequately contribute to churn analysis.
McGregor, C & Bonnis, B 2016, 'Big Data Analytics for Resilience Assessment and Development in Tactical Training Serious Games', 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Northern Ireland, pp. 158-162.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Training activities utilising virtual reality environments are increasingly being used to create training scenarios that promote resilience for mental and physical wellbeing and to enable repeatable scenarios that allow trainees to learn techniques for various stressors. However, assessment of the trainees' response to these training activities has either been limited to various pre- and post-training assessment metrics or collected in parallel during experiments and analysed retrospectively. We have created a Big Data analytics platform, Athena, that acquires data in real time from a first person shooter game, ArmA 3, as well as the data ArmA 3 sends to the muscle stimulation component of a multisensory garment, ARAIG, which provides on-body feedback to the wearer for communications, weapon fire and being hit, and integrates that data with physiological response data such as heart rate, breathing behaviour and blood oxygen saturation. This paper presents a method to create structured resilience training scenarios that incorporate Big Data analytics for new approaches to resilience assessment and development in tactical training serious games.
McGregor, C, Bonnis, B, Stanfield, B & Stanfield, M 2016, 'Design of the ARAIG haptic garment for enhanced resilience assessment and development in tactical training serious games', 2016 IEEE 6th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), 2016 IEEE 6th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), IEEE, pp. 214-217.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. First person shooter virtual reality games have begun to be used for serious games for military or civilian tactical training for new approaches for resilience assessment and development as part of new approaches for mental health training. However, sensory stimulation has been largely constrained to visual and auditory sensations with limited tactile feedback through haptic controllers. This paper presents a design for the ARAIG haptic garment for enhanced resilience assessment and development in tactical training serious games.
Merigo, JM, Alrajeh, N & Peris-Ortiz, M 2016, 'Induced aggregation operators in the ordered weighted average sum', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The ordered weighted average (OWA) aggregation is an extension of the classical weighted average that uses a reordering process of the arguments in a decreasing or increasing way. This article presents new averaging aggregation operators by using sums and order-inducing variables. This approach produces the induced ordered weighted average sum (IOWAS). The IOWAS operator aggregates a set of sums using a complex reordering process based on order-inducing variables. This approach includes different types of aggregation structures, including the well-known OWA families. The work presents additional generalizations by using generalized and quasi-arithmetic means. The paper ends with a simple numerical example that shows how to aggregate with this new approach.
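The reordering step at the heart of induced OWA operators can be sketched as follows. This is a minimal illustration, not the paper's IOWAS definition: it assumes each argument of the aggregation has already been reduced to a single number (in IOWAS, each argument is itself a sum), and the function name is illustrative.

```python
def iowa(pairs, weights):
    """Induced OWA: reorder arguments by their order-inducing
    variable (descending), then take the weighted sum.

    pairs   -- list of (inducing_variable, argument) tuples;
               in IOWAS each argument would itself be a sum
    weights -- weights summing to 1, one per reordered position
    """
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)
    return sum(w * a for w, (_, a) in zip(weights, ordered))

# Inducing variables 7, 2, 5 put the arguments in the order 10, 20, 30,
# so the result is 0.5*10 + 0.3*20 + 0.2*30 = 17.0
result = iowa([(7, 10), (2, 30), (5, 20)], [0.5, 0.3, 0.2])
```

Note that the weights attach to *positions* after reordering, not to the original arguments; that is what distinguishes OWA-style operators from a plain weighted average.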
Merigo, JM, Blanco-Mesa, F, Gil-Lafuente, AM & Yager, RR 2016, 'A bibliometric analysis of the first thirty years of the International Journal of Intelligent Systems', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The International Journal of Intelligent Systems was created in 1986 and is now thirty years old. In order to celebrate this anniversary, this study develops a bibliometric review of all the papers published in the journal between 1986 and 2015. The results are mainly based on the Web of Science Core Collection, which classifies the bibliographic material by using several indicators, including the total number of publications and citations, the h-index, the cites per paper and the citing articles. Moreover, the work also uses the VOSviewer software for visualizing the main results through bibliographic coupling and co-citation. The results show a general overview of the leading trends that have influenced the journal in terms of highly cited papers, authors, journals, universities and countries.
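Of the indicators listed, the h-index has the least obvious computation; a minimal sketch of the standard definition (the largest h such that at least h papers each have at least h citations) is:

```python
def h_index(citations):
    """h-index of a citation list: the largest h such that
    at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:   # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break
    return h

# e.g. [10, 8, 5, 4, 3] -> 4: four papers have >= 4 citations each
```

The cites-per-paper indicator, by contrast, is simply total citations divided by the number of publications.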
Merigó, JM, Zurita, G & Link-Chaparro, S 2016, 'Normalization of the article influence score between categories', Lecture Notes in Engineering and Computer Science, pp. 182-187.
View description>>
This study introduces a normalized article influence score. The main objective is to show that article influence scores obtained in different categories are not equivalent and that it is necessary to normalize them when comparing journals from different categories. Several methods are suggested, including a normalization that divides the article influence score by the category average and another approach that normalizes the results into [0, 1] inside the same category in order to be able to compare between different fields. The results show that each category has different results and that a normalization process is necessary in order to compare the journals. The article analyses a case study in engineering.
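The two normalizations the abstract describes can be sketched directly; function names and the input layout (journal-to-score mapping per category) are illustrative, not from the paper.

```python
def normalize_by_mean(scores):
    """Divide each journal's article influence score by the
    category average, so 1.0 marks the category mean."""
    mean = sum(scores.values()) / len(scores)
    return {journal: s / mean for journal, s in scores.items()}

def normalize_minmax(scores):
    """Rescale scores into [0, 1] within a category so that
    journals from different fields share a common range."""
    lo, hi = min(scores.values()), max(scores.values())
    return {journal: (s - lo) / (hi - lo) for journal, s in scores.items()}

# After either normalization, a score of e.g. 1.2 (mean-based) or 0.9
# (min-max) is comparable across categories with different raw averages.
```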
Merigó, JM, Zurita, G & Lobos-Ossandón, V 2016, 'Computer science research in artificial intelligence', Lecture Notes in Engineering and Computer Science, pp. 216-220.
View description>>
This paper presents a bibliometric overview of the research carried out between 1990 and 2014 in computer science with a focus on artificial intelligence. The work analyses all the journals available in the Web of Science during this period and presents their publication and citation results. The study also considers the most cited articles in this area during the last twenty-five years. IEEE journals obtain the most remarkable results, publishing more than half of the most cited papers.
Milne, DN, Pink, G, Hachey, B & Calvo, RA 2016, 'CLPsych 2016 Shared Task: Triaging content in online peer-support forums', Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, Association for Computational Linguistics.
View/Download from: Publisher's site
Mols, I, van den Hoven, E & Eggen, B 2016, 'Technologies for Everyday Life Reflection', Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Eindhoven, The Netherlands, pp. 53-61.
View/Download from: Publisher's site
View description>>
Reflection gives insight, supports action and can improve wellbeing. People might want to reflect more often for these benefits, but find it difficult to do so in everyday life. Research in HCI has shown the potential of systems to support reflection in different contexts. In this paper we present a design space for supporting everyday life reflection. We produced a workbook with a selection of conceptual design proposals, which show how systems can take different roles in the process of reflection: triggering, supporting and capturing. We describe a design space with two dimensions by combining these roles with strategies found in literature. We contribute to the extensive body of work on reflection by outlining how design for everyday life reflection requires a focus on more holistic reflection, design with openness and integration in everyday life.
Mols, I, van den Hoven, E & Eggen, B 2016, 'Informing Design for Reflection: an Overview of Current Everyday Practices', Proceedings of the NordiCHI '16: The 9th Nordic Conference on Human-Computer Interaction - Game Changing Design, Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, Gothenburg, Sweden, pp. 1-10.
View/Download from: Publisher's site
View description>>
There is an increasing interest in HCI in designing to support reflection in users. In this paper, we specifically focus on everyday life reflection, covering and connecting a broad range of topics from someone's life rather than focusing on a very specific aspect. Although many systems aim to support reflection, few are based on an overview of how people currently integrate reflection in everyday life. In this paper, we aim to address this gap through a questionnaire on everyday life reflection practices combining both qualitative and quantitative questions. Findings provide insights into the broad range of people that engage with reflection in different ways. We aim to inform design through four considerations: rumination, timing, initiative and social context.
Montgomery, J, Reid, M & Drake, BJ 2016, 'Protocols and Structures for Inference: A RESTful API for Machine Learning', Proceedings of The 2nd International Conference on Predictive APIs and Apps, 2nd International Conference on Predictive APIs and Apps, Journal of Machine Learning Research, Sydney, pp. 29-42.
View description>>
Diversity in machine learning APIs (in both software toolkits and web services) works against realising machine learning’s full potential, making it difficult to draw on individual algorithms from different products or to compose multiple algorithms to solve complex tasks. This paper introduces the Protocols and Structures for Inference (PSI) service architecture and specification, which presents inferential entities—relations, attributes, learners and predictors—as RESTful web resources that are accessible via a common but flexible and extensible interface. Resources describe the data they ingest or emit using a variant of the JSON schema language, and the API has mechanisms to support non-JSON data and future extension of service features.
Muhammad, A, Zhou, Q, Beydoun, G, Xu, D & Shen, J 2016, 'Learning Path Adaptation in Online Learning Systems', 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD), International Conference on Computer Supported Cooperative Work in Design, IEEE, Nanchang, China, pp. 421-426.
View/Download from: Publisher's site
View description>>
A learning path in online learning systems refers to a sequence of learning objects designated to help students improve their knowledge or skills in particular subjects or degree courses. In this paper, we review the recent research on learning path adaptation to pursue two goals: the first is to organize and analyze the parameters of adaptation in learning paths; the second is to discuss the challenges in implementing learning path adaptation. The survey covers the state of the art and aims at providing a comprehensive introduction to learning path adaptation for researchers and practitioners.
Naqshbandi, K, Milne, DN, Davies, B, Potter, S, Calvo, RA & Hoermann, S 2016, 'Helping young people going through tough times', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Univ Tasmania, Hobart, Australia.
View/Download from: Publisher's site
Nascimben, M, Yu, Y-H, Lin, C-T, King, J-T, Singh, AK & Chuang, C 2016, 'Effect of a cognitive involving videogame on MI task', Proceedings of the 6th International Brain-Computer Interface Meeting, organized by the BCI Society, Verlag der TU Graz, Graz University of Technology, sponsored by g.tec medical engineering GmbH, pp. 175-175.
View/Download from: Publisher's site
View description>>
Researchers and developers have to deal with performance variation in motor imagery [1] across and within subjects and its fluctuations over time. In addition, MI achievement variations within subjects are closely correlated with neurophysiological variables [2]. In our study, an MI task was administered to a group of healthy subjects before and after playing the BCIGEM videogame for 90 minutes. Several EEG features were found, suggesting a different pathway of activation inside the mu rhythm during motor imagery (MI) after a mentally challenging activity like playing a videogame.
Nejad, MZ, Lu, J, Asgari, P & Behbood, V 2016, 'The effect of Google drive distance and duration in residential property in Sydney, Australia', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), World Scientific, France.
View/Download from: Publisher's site
Nguyen, TTS & Lu, H 2016, 'Domain Ontology Construction Using Web Usage Data', Proceedings of AI 2016: Advances in Artificial Intelligence (LNCS), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Hobart, Australia, pp. 338-344.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Ontologies play an important role in conceptual model design and the development of machine-readable knowledge bases. They can be used to represent various kinds of knowledge, not only about content concepts but also about explicit and implicit relations. While ontologies exist for many application domains of websites, the implicit relations between the domain and the accessed Web pages have received less attention and remain unclear. These relations are crucial for Web-page recommendation in recommender systems. This paper presents a novel method for developing an ontology of Web pages mapped to domain knowledge. It focuses on solutions for semi-automating ontology construction using Web usage data. An experiment on Microsoft Web data is implemented and evaluated.
Ochoa, EA, Castro, EL, Lindahl, JMM & Lafuente, AMG 2016, 'Forgotten effects and heavy moving averages in exchange rate forecasting', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents the results of using experton, forgotten effects and heavy moving average operators in three traditional models based on the purchasing power parity (PPP) model to forecast exchange rates. These methods aim to improve the forecast error under scenarios of volatility and uncertainty, such as those found in financial markets and, more specifically, in exchange rates. The heavy ordered weighted moving average weighted average (HOWMAWA) operator is introduced. This new operator includes the weighted average in the usual heavy ordered weighted moving average (HOWMA) operator, considering a degree of importance for each concept that the operator includes. The experton and forgotten effects methodology represents the information of the experts in the field, from which hidden variables, or second-degree relations, were obtained. The results show that the inclusion of the forgotten effects and heavy moving average operators improves our results and reduces the forecast error.
Orth, D & van den Hoven, E 2016, ''I wouldn't choose that key ring; it's not me'', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania.
View/Download from: Publisher's site
View description>>
We each possess certain objects that are dear to us for a variety of reasons. They can be sentimental to us, bring us delight through their use or empower us. Throughout our lives, we use these cherished possessions to reaffirm who we are, who we were and who we wish to become. To explore this, we conducted a design study that asked ten participants to consider their emotional attachment towards and the identity-relevance of cherished and newly introduced possessions. Participants were then asked to elaborate on their responses in interviews. Through a thematic analysis of these responses, we found that the emotional significance of possessions was reportedly influenced by both their relevance to selfhood and position within a life story. We use these findings to discuss how the design of new products and systems can promote emotional attachment by holding a multitude of emotionally significant meanings to their owners.
Pan, C, Liu, B, Zhou, H & Gui, L 2016, 'Multi-path routing for video streaming in multi-radio multi-channel wireless mesh networks', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Multi-radio multi-channel (MRMC) is a promising approach to relieve the overload caused by the explosive growth of video streaming traffic in wireless mesh networks (WMNs). Previous studies have shown that in MRMC WMNs, network capacity can be increased significantly by proper design of channel assignment and routing algorithms. Multi-path routing can make good use of the network capacity improvement of MRMC WMNs. Multi-path routing has been applied in wired and wireless networks for load balancing or congestion control. However, it remains a challenge in MRMC WMNs. In this paper, we first discuss how to find multiple high-quality paths from source to destination while accounting for the interference between them. Then we focus on the rate allocation among multiple paths and formulate it as a max-min problem which can be transformed to a linear programming (LP) problem. Finally, we propose a joint multi-path discovery and rate allocation algorithm. We evaluate this algorithm through simulations. Results show that our algorithm not only increases the network capacity, but also keeps the average end-to-end delay over all video streaming sessions at a low level.
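The max-min fairness objective behind the paper's LP formulation can be illustrated, in a simplified single-bottleneck setting, by the classic progressive-filling algorithm. This is a generic sketch of max-min fair allocation, not the paper's joint multi-path discovery and rate allocation algorithm, and the function name is illustrative.

```python
def max_min_fair(demands, capacity):
    """Progressive filling: repeatedly offer every unsatisfied flow an
    equal share of the remaining capacity; flows whose demand fits
    under the share are satisfied and removed, and the leftover
    capacity is redistributed among the rest."""
    alloc = {f: 0.0 for f in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 0:
        share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            # every remaining flow is bottlenecked: split capacity equally
            for f in remaining:
                alloc[f] += share
            cap = 0.0
        else:
            for f, d in satisfied.items():
                alloc[f] += d
                cap -= d
                del remaining[f]
    return alloc

# Two flows demanding 4 and 8 over a capacity of 10:
# the small flow gets its full 4, the large one the remaining 6.
```

In the paper's setting the constraint set is richer (per-path capacities and inter-path interference), which is why the authors resort to an LP rather than this closed-form filling procedure.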
Pang, G, Cao, L & Chen, L 2016, 'Outlier detection in complex categorical data by modelling the feature value couplings', IJCAI International Joint Conference on Artificial Intelligence, International Joint Conference on Artificial Intelligence (IJCAI), AAAI Press, New York, pp. 1902-1908.
View description>>
This paper introduces a novel unsupervised outlier detection method, namely Coupled Biased Random Walks (CBRW), for identifying outliers in categorical data with diversified frequency distributions and many noisy features. Existing pattern-based outlier detection methods are ineffective in handling such complex scenarios, as they misfit such data. CBRW estimates outlier scores of feature values by modelling feature value level couplings, which carry intrinsic data characteristics, via biased random walks to handle this complex data. The outlier scores of feature values can either measure the outlierness of an object or facilitate the existing methods as a feature weighting and selection indicator. Substantial experiments show that CBRW can not only detect outliers in complex data significantly better than the state-of-the-art methods, but also greatly improve the performance of existing methods on data sets with many noisy features.
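For contrast with the value-level couplings CBRW models, a purely marginal baseline for categorical outlier scoring looks like the sketch below. This is a toy illustration only, not the paper's method: CBRW improves on exactly this kind of per-feature rarity score by coupling value frequencies across features via biased random walks.

```python
from collections import Counter

def rarity_scores(rows):
    """Score each row by the mean rarity (1 - relative frequency)
    of its feature values; higher means more outlying. Each row is
    a tuple of categorical values, one per feature."""
    n = len(rows)
    counts = [Counter(col) for col in zip(*rows)]  # one Counter per feature
    return [
        sum(1 - counts[i][v] / n for i, v in enumerate(row)) / len(row)
        for row in rows
    ]

# The row containing the rare value 'y' scores highest.
scores = rarity_scores([('a', 'x'), ('a', 'x'), ('a', 'y')])
```

Marginal scores like these break down under the "diversified frequency distributions and many noisy features" the abstract mentions, which is the gap the coupled random-walk scoring targets.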
Pang, G, Cao, L, Chen, L & Liu, H 2016, 'Unsupervised Feature Selection for Outlier Detection by Modelling Hierarchical Value-Feature Couplings', 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016 IEEE 16th International Conference on Data Mining (ICDM), IEEE, Barcelona, pp. 410-419.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Proper feature selection for unsupervised outlier detection can improve detection performance but is very challenging due to complex feature interactions, the mixture of relevant features with noisy/redundant features in imbalanced data, and the unavailability of class labels. Little work has been done on this challenge. This paper proposes a novel Coupled Unsupervised Feature Selection framework (CUFS for short) to filter out noisy or redundant features for subsequent outlier detection in categorical data. CUFS quantifies the outlierness (or relevance) of features by learning and integrating both the feature value couplings and feature couplings. Such value-to-feature couplings capture intrinsic data characteristics and distinguish relevant features from those noisy/redundant features. CUFS is further instantiated into a parameter-free Dense Subgraph-based Feature Selection method, called DSFS. We prove that DSFS retains a 2-approximation feature subset to the optimal subset. Extensive evaluation results on 15 real-world data sets show that DSFS obtains an average 48% feature reduction rate, and enables three different types of pattern-based outlier detection methods to achieve substantially better AUC improvements and/or perform orders of magnitude faster than on the original feature set. Compared to its feature selection contender, on average, all three DSFS-based detectors achieve more than 20% AUC improvement.
Pickrell, M, Bongers, B & van den Hoven, E 2016, 'Understanding Changes in the Motivation of Stroke Patients Undergoing Rehabilitation in Hospital', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Persuasive Technology, Springer International Publishing, Salzburg, Austria, pp. 251-262.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. Stroke patient motivation can fluctuate during rehabilitation due to a range of factors. This study reports on qualitative research, consisting of observations of stroke patients undergoing rehabilitation and interviews with patients about the changes in motivation they identified during their time completing rehabilitation in the hospital. We found a range of positive and negative factors which affect motivation. Positive factors include improvements in patient movement and support from other patients and family members. Negative factors include pain and psychological issues such as changes in mood. From this fieldwork, a set of design guidelines has been developed to act as a platform for researchers and designers developing equipment for the rehabilitation of stroke patients.
Pileggi, SF 2016, 'A Privacy-Friendly Model for an Efficient and Effective Activity Scheduling Inside Dynamic Virtual Organizations', COLLABORATIVE COMPUTING: NETWORKING, APPLICATIONS, AND WORKSHARING, COLLABORATECOM 2015, 11th EAI International Conference on Collaborative Computing - Networking, Applications and Worksharing, Springer International Publishing, Wuhan, China, pp. 303-308.
View/Download from: Publisher's site
Popov, A, Fink, W, McGregor, C & Hess, A 2016, 'PHM for astronauts: Elaborating and refining the concept', 2016 IEEE Aerospace Conference, 2016 IEEE Aerospace Conference, IEEE, pp. 1-9.
View/Download from: Publisher's site
View description>>
Clarifying and evolving the PHM for Astronauts concept, introduced in [1], this conceptual paper focuses on particular PHM-based solutions to bring Human Health and Performance (HH&P) technologies to the required technology readiness level (TRL) in order to mitigate the HH&P risks of manned space exploration missions. This paper discusses the particular PHM-based solutions for some HH&P technologies that are, namely by NASA designation, the Autonomous Medical Decision technology and the Integrated Biomedical Informatics technology. Both of the technologies are identified as essential ones in NASA's integrated technology roadmap for the Technology Area 06: Human Health, Life Support, and Habitation Systems. The proposed technology solutions are to bridge PHM, an engineering discipline, to HH&P domain in order to mitigate the risks by focusing on efforts to reduce countermeasure mass and volume and drive the risks down to an acceptable level. The Autonomous Medical Decision technology is based on wireless handheld devices and is a result of a paradigm shift from tele-medicine to that of health support autonomy. The Integrated Biomedical Informatics technology is based on Crew Electronic Health Records (CEHR) system with predictive diagnostics capability developed for crew members rather than for healthcare professionals. The paper explores the proposed PHM-based solutions on crew health maintenance in terms of predictive diagnostics providing early and actionable real-time warnings of impending health problems that otherwise would have gone undetected.
Prior, J, Ferguson, S & Leaney, J 2016, 'Reflection is hard', Proceedings of the Australasian Computer Science Week Multiconference, ACSW '16: Australasian Computer Science Week, ACM, Canberra, Australia.
View/Download from: Publisher's site
View description>>
We have observed that it is a non-trivial exercise for undergraduate students to learn how to reflect. Reflective practice is now recognised as important for software developers and has become a key part of software studios in universities, but there is limited empirical investigation into how best to teach and learn reflection. In the literature on reflection in software studios, there are many papers that claim that reflection in the studio is mandatory. However, there is inadequate guidance about teaching early stage students to reflect in that literature. The essence of the work presented in this paper is a beginning to the consideration of how the teaching of software development can best be combined with teaching reflective practice for early stage software development students. We started on a research programme to understand how to encourage students to learn to reflect. As we were unsure about teaching reflection, and we wished to change our teaching as we progressively understood better what to do, we chose action research as the most suitable approach. Within the action research cycles we used ethnography to understand what was happening with the students when they attempted to reflect. This paper reports on the first 4 semesters of research.
We have developed and tested a reflection model and process that provide scaffolding for students beginning to reflect. We have observed three patterns in how our students applied this process in writing their reflections, which we will use to further understand what will help them learn to reflect. We have also identified two themes, namely, motivation and intervention, which highlight where the challenges lie in teaching and learning reflection.
Qiao, M, Bian, W, Xu, RYD & Tao, D 2016, 'Diversified hidden Markov models for sequential labeling', 2016 IEEE 32nd International Conference on Data Engineering (ICDE), 2016 IEEE 32nd International Conference on Data Engineering (ICDE), IEEE, Helsinki, Finland, pp. 1512+.
View/Download from: Publisher's site
Ramezani, F, Naderpour, M & Lu, J 2016, 'A multi-objective optimization model for virtual machine mapping in cloud data centres', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, BC, Canada, pp. 1259-1265.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Modern cloud computing environments exploit virtualization for efficient resource management to reduce computational cost and energy budget. Virtual machine (VM) migration is a technique that enables flexible resource allocation and increases the computation power and communication capability within cloud data centers. VM migration helps cloud providers to successfully achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, the VM migration process can affect the performance of applications unless it is supported by smart optimization methods. This paper presents a multi-objective optimization model to address this issue. The objectives are to minimize power consumption, maximize resource utilization (or minimize idle resources), and minimize VM transfer time. Fuzzy particle swarm optimization (PSO), which improves the efficiency of conventional PSO by using fuzzy logic systems, is relied upon to solve the optimization problem. The model is implemented in a cloud simulator to investigate its performance, and the results verify the performance improvement of the proposed model.
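A plain global-best PSO minimizing a weighted-sum scalarization of several objectives can be sketched as follows. This is a generic illustration under stated assumptions: the paper's fuzzy PSO additionally adapts the coefficients with a fuzzy logic system (omitted here), and the objective terms and weights below are arbitrary placeholders, not the paper's power/utilization/transfer-time models.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimizer (minimization).
    Inertia w and coefficients c1, c2 are fixed constants here; the
    paper's fuzzy PSO instead tunes them with a fuzzy system."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def scalarized(x, weights=(0.4, 0.3, 0.3)):
    """Collapse three stand-in objective terms (power, idle resources,
    transfer time) into one scalar via a weighted sum; placeholder only."""
    power, idle, transfer = (v * v for v in x)
    return weights[0] * power + weights[1] * idle + weights[2] * transfer
```

Usage: `pso(scalarized, dim=3, bounds=(-5, 5))` returns a near-zero minimum for this toy objective; in the paper the decision variables would instead encode a VM-to-host mapping.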
Saberi, M, Chang, E, Hussain, OK & Saberi, Z 2016, 'Next generation of interactive contact centre for efficient customer recognition: Conceptual framework', Proceedings of the International Conference on Industrial Engineering and Operations Management, pp. 3231-3241.
View description>>
Contact centers (CCs), as the organization's touch point, have a considerable effect on customer experience and retention. It has been shown that 70% of all business interactions are handled in contact centers. A framework is proposed in this conceptual paper to build a cleaned interactive customer recognition framework (CICRF) in CCs. CICRF consists of two integrated modules: cleansing and ICRF. The first module focuses on the detection and resolution of duplicate records to improve the effectiveness and efficiency of customer recognition. The second module focuses on interactive customer recognition in a customer database when there are multiple records with the same name. The cleansing module uses a semi-automatic deduplication process incorporating three main functions in its design, namely DedupCrowd, DedupNN and DedupCSR. DedupCrowd is a function that provides training pairs of records for DedupNN, a deduplication method based on a neural network. Researchers suggest leveraging human computing power in managing duplicate data, which is scalable to the large size of contact centre data. However, completion of crowdsourcing tasks is an error-prone process that affects the overall performance of the crowd. Thus, controlling the quality of workers is an essential step for crowdsourcing systems, and for that we propose OSQC, an online statistical quality control framework, to monitor the performance of workers. DedupNN is a neural-network-based deduplication method that uses the output of DedupCrowd for training purposes. DedupNN has two features: first, it is an online deduplication method, which is essential for the purposes of customer recognition; second, its cost is much lower in comparison with DedupCrowd. The last function is designed to provide labels to pairs when DedupNN is not sure about their label. The intuition behind this function is similar to active learning, which selects appropriate data for labelling. ICRF consist...
Saberi, M, Janjua, NK, Chang, E, Hussain, OK & Pazhoheshfar, P 1970, 'In-house crowdsourcing-based entity resolution using argumentation', Proceedings of the International Conference on Industrial Engineering and Operations Management, p. 135.
View description>>
A conceptual framework is proposed in this study to improve Entity Resolution in contact centers. The paper describes how RFID produces dirty data in CC databases and how engaging customer service representatives (CSRs) via an argumentation framework deals with this issue. Leveraging the power of CSRs positions this work as a crowdsourcing technique that combines human and machine to achieve high-quality data in CC databases. © IEOM Society International.
Saberi, M, Karduck, A, Hussain, OK & Chang, E 1970, 'Challenges in Efficient Customer Recognition in Contact Centre: State-of-the-Art Survey by Focusing on Big Data Techniques Applicability', 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), IEEE, Ostrava, Czech Republic, pp. 548-554.
View/Download from: Publisher's site
Saqib, M, Daud Khan, S & Blumenstein, M 1970, 'Texture-based feature mining for crowd density estimation: A study', 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Palmerston North, New Zealand.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Texture features are important descriptors for many image analysis applications. The objectives of this research are to determine distinctive texture features for crowd density estimation and counting. In this paper, we have comprehensively reviewed different texture features and their possible combinations to evaluate their performance on pedestrian crowds. A two-stage classification- and regression-based framework has been proposed for performance evaluation of all the texture features for crowd density estimation and counting. According to the framework, input images are divided into blocks and blocks into cells of different sizes, having varying crowd density levels. Due to perspective distortion, people appearing close to the camera contribute more to the feature vector than people far away. Therefore, the extracted features are normalized using a perspective normalization map of the scene. At the first stage, image blocks are classified using a multi-class SVM into different density levels. At the second stage, Gaussian Process Regression is used to regress low-level features to counts. Various texture features and their possible combinations are evaluated on a publicly available dataset.
Shu, Q, Guo, H, Liang, J, Che, L, Liu, J & Yuan, X 1970, 'EnsembleGraph: Interactive visual analysis of spatiotemporal behaviors in ensemble simulation data', 2016 IEEE Pacific Visualization Symposium (PacificVis), 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE, Taipei, Taiwan, pp. 56-63.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents a novel visual analysis tool, EnsembleGraph, which aims at helping scientists understand spatiotemporal similarities across runs in time-varying ensemble simulation data. We abstract the input data into a graph, where each node represents a region with similar behaviors across runs and nodes in adjacent time frames are linked if their regions overlap spatially. The visualization of this graph, combined with multiple-linked views showing details, enables users to explore, select, and compare the extracted regions that have similar behaviors. The driving application of this paper is the study of regional emission influences over tropospheric ozone, based on the ensemble simulations conducted with different anthropogenic emission absences using MOZART-4. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations.
Singh, A, Wang, Y-K, Chiu, C-Y, Yu, Y-H, Nascimben, M, King, J-T, Chuang, C-H, Chen, S-A, Ko, L-W, Pal, NR & others 1970, 'Attention in Complex Environment of Brain Computer Interface', Proceedings of the 6th International Brain-Computer Interface Meeting, organized by the BCI Society, Verlag der TU Graz, Graz University of Technology, sponsored by g.tec medical engineering GmbH, pp. 165-165.
Song, K, Chen, L, Gao, W, Feng, S, Wang, D & Zhang, C 1970, 'PerSentiment', Proceedings of the 25th International Conference Companion on World Wide Web - WWW '16 Companion, the 25th International Conference Companion, ACM Press, Montreal, pp. 255-258.
View/Download from: Publisher's site
View description>>
Microblogging services are playing increasingly important roles in our daily life today. It is useful for microblog users to instantly understand the sentiment of a large number of microblogs posted by their friends and make appropriate responses. Despite considerable progress on microblog sentiment classification, most of the existing works ignore the influence of personal distinctions of different microblog users on the sentiments they convey, and none of them has provided real-world personalized sentiment classification systems. Considering personal distinctions in sentiment analysis is natural and necessary as different people have different language habits, personal characters, opinion bias and so on. In this demonstration, we present a live system based on Twitter called PerSentiment, an individuality-dependent sentiment classification system which makes the first attempt to analyze the personalized sentiment of recent tweets and retweets posted by the authenticated user and the users he/she follows. Our system consists of four steps, i.e., requesting tweets via Twitter API, preprocessing collected tweets for extracting features, building a personalized sentiment classifier based on a novel and extensible Latent Factor Model (LFM) trained on emoticon-tagged tweets, and finally visualizing the sentiment of friends' tweets to provide a guide for better sentiment understanding.
Song, K, Gao, W, Chen, L, Feng, S, Wang, D & Zhang, C 1970, 'Build Emotion Lexicon from the Mood of Crowd via Topic-Assisted Joint Non-negative Matrix Factorization', Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, SIGIR '16: The 39th International ACM SIGIR conference on research and development in Information Retrieval, ACM, Pisa, Italy.
View/Download from: Publisher's site
Sun, F, Liu, B, Hou, F, Zhou, H, Gui, L & Chen, J 1970, 'Cournot equilibrium in the mobile virtual network operator oriented oligopoly offloading market', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Cellular networks are now facing severe traffic overload problems due to the explosive growth of mobile data traffic. One of the promising solutions is to offload part of the traffic through WiFi. In this paper, we investigate an oligopoly offloading market, where several Mobile Virtual Network Operators (MVNOs) compete to serve end users using the network infrastructure leased from the host Mobile Network Operator (MNO) at the wholesale market. First, we study the competitive interactions among the MVNOs considering the overload problems of the offloading market. Specifically, we formulate the interactions as a non-cooperative inventory competition game, where each MVNO simultaneously determines the amount of cellular traffic it can provide to end users (named the traffic inventory of each MVNO in this paper). We analyze and derive the existence of the Cournot equilibrium using game theory. Furthermore, we study the impact of the MNO's wholesale price strategy on the market equilibrium. Based on these analyses, we find the optimal initial inventory strategy for these competitors according to the Cournot equilibrium. Finally, our simulations present the process of achieving the market equilibrium and illustrate the impact of the host MNO on the MVNOs.
Sun, G, Cui, T, Beydoun, G, Shen, J & Chen, S 1970, 'Profiling and Supporting Adaptive Micro Learning on Open Education Resources.', CBD, International Conference on Advanced Cloud and Big Data, IEEE Computer Society, Chengdu, China, pp. 158-163.
View/Download from: Publisher's site
View description>>
It is found that learners prefer to use a micro learning mode to conduct learning activities through open educational resources (OERs). However, adaptive micro learning is scarcely supported by current OER platforms. In this paper we focus on profiling an effective micro learning process, which is central to establishing the raw materials and setting up rules for the final adaptive process. This work consists of two parts. First, we conducted an educational data mining and learning analysis study to discover the patterns and rules in micro learning through OERs. Then, based on its findings, we profiled features of both learners and OERs to reveal the full learning story in order to support the decision-making process. Incorporating educational data mining and learning analysis, a cloud-based architecture for Micro Learning as a Service (MLaaS) was designed to integrate all necessary procedures together as a complete service for delivering micro OERs. The MLaaS also provides a platform for resource sharing and exchange in a peer-to-peer learning environment. The working principle of a key step, namely the computational decision-making of micro OER adaptation, is also introduced.
Tawk, T, Al-Kilidar, H & Bagia, R 1970, 'Skills for Managing Virtual Projects: Are they Gained Through Graduate Project Management Programs?', 27th Annual Conference of the Australasian Association for Engineering Education : AAEE 2016, AAEE - Annual Conference of Australasian Association for Engineering Education, Australasian Association for Engineering Education, Coffs Harbour, Australia.
Thuy Do, QN, Zhilin, A, Junior, CZP, Wang, G & Hussain, FK 1970, 'A network-based approach to detect spammer groups', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, BC, Canada, pp. 3642-3648.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Online reviews nowadays are an important source of information for consumers to evaluate online services and products before deciding which product and which provider to choose. Therefore, online reviews have significant power to influence consumers' purchase decisions. Being aware of this, an increasing number of companies have organized spammer review campaigns in order to promote their products and gain an advantage over their competitors by manipulating and misleading consumers. To make sure the Internet remains a reliable source of information, we propose a method to identify both individual and group spamming reviews by assigning a suspicion score to each user. The proposed method is a network-based approach combined with clustering techniques. We demonstrate the efficiency and effectiveness of our approach on a real-world and manipulated dataset that contains over 8000 restaurants and 600,000 restaurant reviews from the TripAdvisor website. We tested our method in three testing scenarios. The method was able to detect all spammers in two testing scenarios; however, it did not detect all of them in the last scenario.
Tian, F, Liu, B, Xiong, J & Gui, L 1970, 'Movement-based incentive for cellular traffic offloading through D2D communications', 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to the various applications for smartphones, mobile data traffic is growing at an unprecedented rate, and the cellular network is currently suffering from traffic overload. Offloading part of the cellular traffic through opportunistic contact between mobile devices is a promising solution to the overload problem. However, due to the uneven distribution of devices and the regular mobility of smartphone users, contacts between mobile devices are opportunistic, so this cellular traffic offloading approach can perform poorly, i.e., a relay user contacts other mobile users with only small probability. In this paper, we are the first to propose a movement-based incentive mechanism for cellular traffic offloading, in which we control the mobility of relay users to improve the performance of traffic offloading. The movement-based incentive mechanism contains a relay user selection algorithm and a payment determination algorithm. Compared with existing solutions, our proposed movement-based incentive mechanism has better performance.
Tibben, W, Brown, RBK, Beydoun, G & Zamani, R 1970, 'Is consensus a viable concept to justify use of online collaborative networks in multi-stakeholder governance?', 2016 49th Hawaii International Conference on System Sciences (HICSS), Hawaii International Conference on System Sciences (HICSS), IEEE, Kauai, USA, pp. 4665-4674.
View/Download from: Publisher's site
View description>>
The adoption of multi-stakeholder decision-making processes using online collaborative technologies for Internet governance has facilitated the participation of stakeholders from many developing countries in decision making within organizations such as ISOC and ICANN. One important underlying rationale that gives rise to such arrangements is the notion of consensus. The paper uses the work of Arrow firstly to question whether consensus is indeed a theoretically justifiable concept on which to base multi-stakeholder governance. The paper then further uses Arrow's insights to develop an analytical framework which identifies expertise and authority as two key factors in the analysis of online decision making. The paper presents a conjecture that a significant challenge in ensuring productive multi-stakeholder governance is the practices that govern the ways in which authority and expertise interact. To that end, two potential sources of leadership are defined within online collaborative networks: positional leadership and thought leadership.
van Gennip, D, Orth, D, Imtiaz, MA, van den Hoven, E & Plimmer, B 1970, 'Tangible cognition', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania, Australia.
View/Download from: Publisher's site
View description>>
This workshop will explore the relationship between HCI using tangible user interfaces (TUIs) and cognition. We see exciting opportunities for tangible interaction to address some of the cognitive challenges of concern to the HCI community, in areas such as education, healthcare, games, reminiscing and reflection, and community issues. Drawing together the Australasian community, with those from further afield, we hope to strengthen research and build a local community in this exciting and rapidly developing field. Participation is invited from researchers working in tangible user interfaces or those interested in cognition and interaction. During the workshop the majority of the time will be spent in small group discussions and brainstorming solutions.
van Gennip, D, van den Hoven, E & Markopoulos, P 1970, 'The Phenomenology of Remembered Experience', Proceedings of the European Conference on Cognitive Ergonomics, ECCE '16: European Conference on Cognitive Ergonomics, ACM, Nottingham, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
There is a growing interest in interactive technologies that support remembering by considering functional, experiential, and emotional support to their users. Design driven research benefits from an understanding of how people experience autobiographical remembering. We present a phenomenological study in which twenty-two adults were interviewed using the repertory grid technique; we aimed at soliciting personal constructs that characterize people's remembered experiences. Inductive coding revealed that 77.8% of identified constructs could be reliably coded in five categories referring to contentment, confidence/unease, social interactions, reflection, and intensity. These results align with earlier classifications of personal constructs and models of human emotion. The categorization derived from this study provides an empirically founded characterization of the design space of technologies for supporting remembering. We discuss its potential value as a tool for evaluating interactive systems in relation to personal and social memory talk, and outline future improvements.
Versteeg, M, van den Hoven, E & Hummels, C 1970, 'Interactive Jewellery', Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Eindhoven, The Netherlands, pp. 44-52.
View/Download from: Publisher's site
View description>>
Many current wearables have a technology-driven background: the focus is primarily on functionality, while their possible personal and social-cultural value is underappreciated. We think that developing wearables from a jewellery perspective can compensate for this. The personal and social cultural values embodied by traditional jewellery are often tightly connected to their function as memento. In this paper we reflect from a jewellery perspective, a memory-studies perspective and a TEI-perspective on three design proposals for interactive jewellery. We identify 1) drawing inspiration from interaction with traditional jewellery, 2) using relatively simple technology with high experiential qualities, 3) abstract and poetic data representation and 4) storing data uniquely on the digital jewel as possible design directions.
Voinov, A, Pierce, S & Barreteau, O 1970, 'Stream D sessions', Environmental Modelling and Software for Supporting a Sustainable Future, Proceedings - 8th International Congress on Environmental Modelling and Software, iEMSs 2016, p. 803.
Wakefield, J, Tyler, J, Dyson, L & Frawley, J 1970, 'Implications of Tablet Computing Annotation and Sharing Technology on Student Learning', American Accounting Association Annual Meeting, New York.
Wakefield, JA, Tyler, J, Dyson, L & Frawley, J 1970, 'Implications of tablet computing enabled sharing and annotation technology on introductory accounting student performance', AFAANZ, Gold Coast.
Wang, D, Deng, S & Xu, G 1970, 'GEMRec: A Graph-Based Emotion-Aware Music Recommendation Approach', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Web Information Systems Engineering, Springer International Publishing, Shanghai, China, pp. 92-106.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Music recommendation has gained substantial attention in recent times. As one of the most important context features, user emotion has great potential to improve recommendations, but this has not yet been sufficiently explored due to the difficulty of emotion acquisition and incorporation. This paper proposes a graph-based emotion-aware music recommendation approach (GEMRec) by simultaneously taking a user's music listening history and emotion into consideration. The proposed approach models the relations between user, music, and emotion as a three-element tuple (user, music, emotion), upon which an Emotion Aware Graph (EAG) is built, and then a relevance propagation algorithm based on random walk is devised to rank the relevance of music items for recommendation. Evaluation experiments are conducted based on a real dataset collected from a Chinese microblog service in comparison to baselines. The results show that the emotional context from a user's microblogs contributes to improving the performance of music recommendation in terms of hit rate, precision, recall, and F1 score.
Wang, D, Deng, S, Liu, S & Xu, G 1970, 'Improving Music Recommendation Using Distributed Representation', Proceedings of the 25th International Conference Companion on World Wide Web - WWW '16 Companion, the 25th International Conference Companion, ACM Press, Montreal, Canada, pp. 125-126.
View/Download from: Publisher's site
View description>>
In this paper, a music recommendation approach based on distributed representation is presented. The proposed approach firstly learns the distributed representations of music pieces and acquires users' preferences from listening records. Then, it recommends appropriate music pieces whose distributed representations are in accordance with target users' preferences. Experiments on a real world dataset demonstrate that the proposed approach outperforms the state-of-the-art methods.
Wang, D, Deng, S, Zhang, X & Xu, G 1970, 'Learning Music Embedding with Metadata for Context Aware Recommendation', Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR'16: International Conference on Multimedia Retrieval, ACM, New York, USA, pp. 249-253.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Contextual factors can benefit music recommendation and retrieval tasks remarkably. However, how to acquire and utilize contextual information still needs to be studied. In this paper, we propose a context-aware music recommendation approach, which can recommend music appropriate to users' contextual preferences. In analogy to matrix factorization methods for collaborative filtering, the proposed approach does not require songs to be described by features beforehand; instead, it learns music pieces' embeddings (vectors in a low-dimensional continuous space) from music playing records and the corresponding metadata, and infers users' general and contextual preferences for music from their playing records with the learned embeddings. Then, our approach can recommend appropriate music pieces. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods.
Wang, S, Liu, W, Wu, J, Cao, L, Meng, Q & Kennedy, PJ 1970, 'Training deep neural networks on imbalanced data sets', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 4368-4374.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Deep learning has become increasingly popular in both academic and industrial areas in recent years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in the real world and present great challenges for classification tasks. In this paper, we focus on the problem of classification using deep networks on imbalanced data sets. Specifically, a novel loss function called mean false error, together with its improved version, mean squared false error, is proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both the majority class and the minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach over conventional methods in classifying imbalanced data sets with deep neural networks.
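The class-wise averaging idea behind mean false error can be sketched as follows. This is a minimal illustration of the concept stated in the abstract, not the authors' implementation; the function name, binary labels, and squared-error base are assumptions:

```python
import numpy as np

def mean_false_error(y_true, y_pred):
    """Mean False Error (MFE) sketch: average the squared error within
    each class separately, then sum the two per-class means, so errors
    on the minority class are not swamped by the majority class."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sq_err = (y_true - y_pred) ** 2
    fpe = sq_err[y_true == 0].mean()  # mean error over the negative (majority) class
    fne = sq_err[y_true == 1].mean()  # mean error over the positive (minority) class
    return float(fpe + fne)
```

On a toy set of nine correct negatives and one misclassified positive, plain MSE reports only 0.1, while this per-class loss reports the full minority-class error of 1.0, illustrating why such a loss can help train deep networks on imbalanced data.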
Wang, W, Yin, H, Sadiq, S, Chen, L, Xie, M & Zhou, X 1970, 'SPORE: A sequential personalized spatial item recommender system', 2016 IEEE 32nd International Conference on Data Engineering (ICDE), 2016 IEEE 32nd International Conference on Data Engineering (ICDE), IEEE, Helsinki, pp. 954-965.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. Although human movement exhibits sequential patterns in LBSNs, most current studies on spatial item recommendations do not consider the sequential influence of locations. Leveraging sequential patterns in spatial item recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has a low sampling rate in both space and time, which renders existing prediction techniques on GPS trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; and 3) there is no existing framework that unifies users' personal interests and the sequential influence in a principled manner. In light of the above challenges, we propose a sequential personalized spatial item recommendation framework (SPORE) which introduces a novel latent variable topic-region to model and fuse sequential influence with personal interests in the latent and exponential space. The advantages of modeling the sequential effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity and a direct expression of the semantic meaning of users' spatial activities. Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending the traditional LSH. We evaluate the performance of SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a significant improvement in SPORE's ability to recommend spatial items, in terms of both effectiveness and efficiency, compared with the state-of-the-art methods.
Wang, X, Sheng, QZ, Yao, L, Li, X, Fang, XS, Xu, X & Benatallah, B 1970, 'Empowering Truth Discovery with Multi-Truth Prediction', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, IUPUI, Indianapolis, IN, pp. 881-890.
View/Download from: Publisher's site
Wang, X, Sheng, QZ, Yao, L, Li, X, Fang, XS, Xu, X & Benatallah, B 1970, 'Truth Discovery via Exploiting Implications from Multi-Source Data', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, IUPUI, Indianapolis, IN, pp. 861-870.
View/Download from: Publisher's site
Weilemann, E, Brune, P & Gill, AQ 1970, 'Do They Miss the Lectures? – Flipped Classroom Perception by Software Engineering Students', Proceedings of ECSEE 2016 - Flipped Classroom Perception by Software Engineering Students, European Conference of Software Engineering Education, Shaker Verlag, Seeon Monastery, Germany, pp. 245-249.
View/Download from: Publisher's site
Wu, D, Hussain, F, Zhang, G, Lu, J, Unwin, J & Rance, G 1970, 'A cloud-based comprehensive health information system framework', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), World Scientific, pp. 612-617.
View/Download from: Publisher's site
View description>>
© 2016 by World Scientific Publishing Co. Pte. Ltd. Big data in the health domain bring great opportunities for health information system development. To effectively utilize big health data, three challenges need to be dealt with: data heterogeneity; huge data volume and high velocity of data generation; and various kinds of user requirements. To address these challenges, this paper proposes a cloud-based comprehensive health information system framework, which uses cloud computing techniques to manage and process big health data, and provides several data analysis and recommendation services to explore the data and extract value from them.
Wu, S-L, Liu, Y-T, Chou, K-P, Lin, Y-Y, Lu, J, Zhang, G, Chuang, C-H, Lin, W-C & Lin, C-T 1970, 'A motor imagery based brain-computer interface system via swarm-optimized fuzzy integral and its application', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, BC, Canada, pp. 2495-2500.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A brain-computer interface (BCI) system provides a convenient means of communication between the human brain and a computer, which is applied not only to healthy people but also to people who suffer from motor neuron diseases (MNDs). Motor imagery (MI) is one well-known basis for designing Electroencephalography (EEG)-based real-life BCI systems. However, EEG signals are often contaminated with severe noise and various uncertainties, and carry imprecise and incomplete information streams. Therefore, this study proposes a spectrum ensemble based on a swarm-optimized fuzzy integral for integrating decisions from sub-band classifiers that are established by a sub-band common spatial pattern (SBCSP) method. Firstly, the SBCSP effectively extracts features from EEG signals, and thereby multiple linear discriminant analysis (MLDA) is employed during an MI classification task. Subsequently, particle swarm optimization (PSO) is used to regulate the subject-specific parameters for assigning optimal confidence levels to the classifiers used in the fuzzy integral during the fuzzy fusion stage of the proposed system. Moreover, BCI systems usually tend to have complex architectures, be bulky in size, and require time-consuming processing. To overcome this drawback, a wireless and wearable EEG measurement system is investigated in this study. Finally, in our experimental results, the proposed system is found to produce significant improvement in terms of the receiver operating characteristic (ROC) curve. Furthermore, we demonstrate that a robotic arm can be reliably controlled using the proposed BCI system. This paper presents novel insights regarding the possibility of using the proposed MI-based BCI system in real-life applications.
Wu, W, Li, B, Chen, L & Zhang, C 1970, 'Canonical Consistent Weighted Sampling for Real-Value Weighted Min-Hash', 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016 IEEE 16th International Conference on Data Mining (ICDM), IEEE, Barcelona, Spain, pp. 1287-1292.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Min-Hash, as a member of the Locality Sensitive Hashing (LSH) family for sketching sets, plays an important role in the big data era. It is widely used for efficiently estimating similarities of bag-of-words represented data and has been extended to deal with multi-sets and real-value weighted sets. Improved Consistent Weighted Sampling (ICWS) has been recognized as the state-of-the-art for real-value weighted Min-Hash. However, the algorithmic implementation of ICWS is flawed because it violates the uniformity of the Min-Hash scheme. In this paper, we propose a Canonical Consistent Weighted Sampling (CCWS) algorithm, which not only retains the same theoretical complexity as ICWS but also strictly complies with the definition of Min-Hash. The experimental results demonstrate that the proposed CCWS algorithm runs faster than the state-of-the-art methods while achieving similar classification performance on a number of real-world text data sets.
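For readers unfamiliar with real-value weighted Min-Hash, the ICWS baseline that CCWS improves upon can be sketched as below. This follows Ioffe's 2010 consistent weighted sampling construction, not the CCWS algorithm itself; the per-element seeding scheme and helper names are illustrative assumptions:

```python
import numpy as np

def icws_hash(weights, hash_index):
    """One ICWS-style hash of a real-value weighted set.

    weights: dict mapping element -> positive weight.
    Returns an (element, t) pair; two sets collide on a given hash_index
    with probability equal to their generalized Jaccard similarity.
    Randomness is keyed per (hash_index, element), so it is consistent
    across sets within one process (an illustrative seeding scheme).
    """
    best_key, best_a = None, float("inf")
    for k, w in weights.items():
        if w <= 0:
            continue
        rng = np.random.default_rng(abs(hash((hash_index, k))) % (2 ** 32))
        r = rng.gamma(2.0, 1.0)   # r_k ~ Gamma(2, 1)
        c = rng.gamma(2.0, 1.0)   # c_k ~ Gamma(2, 1)
        b = rng.uniform()         # beta_k ~ Uniform(0, 1)
        t = np.floor(np.log(w) / r + b)
        y = np.exp(r * (t - b))
        a = c / (y * np.exp(r))   # minimized over elements ("min-hash" step)
        if a < best_a:
            best_a, best_key = a, (k, int(t))
    return best_key

def estimate_similarity(w1, w2, num_hashes=500):
    """Estimate generalized Jaccard similarity from hash collisions."""
    hits = sum(icws_hash(w1, i) == icws_hash(w2, i) for i in range(num_hashes))
    return hits / num_hashes
```

For example, for the weighted sets {a: 1, b: 1} and {a: 1}, the generalized Jaccard similarity is 0.5, and the collision-rate estimate converges toward it as num_hashes grows; CCWS keeps this interface while restoring the uniformity property the abstract says ICWS violates.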
Wu, W, Li, B, Chen, L & Zhang, C 2016, 'Cross-View Feature Hashing for Image Retrieval', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Auckland, New Zealand, pp. 203-214.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. Traditional cross-view information retrieval mainly rests on correlating two sets of features in different views. However, features in different views usually have different physical interpretations. It may be inappropriate to map multiple views of data onto a shared feature space and directly compare them. In this paper, we propose a simple yet effective Cross-View Feature Hashing (CVFH) algorithm via a “partition and match” approach. The feature space for each view is bi-partitioned multiple times using B hash functions, and the resulting binary codes for all the views can thus be represented in a compatible B-bit Hamming space. To ensure that the hashed feature space is effective for supporting generic machine learning and information retrieval functionalities, the hash functions are learned to satisfy two criteria: (1) the neighbors in the original feature spaces should also be close in the Hamming space; and (2) the binary codes for multiple views of the same sample should be similar in the shared Hamming space. We apply CVFH to cross-view image retrieval. The experimental results show that CVFH can outperform the Canonical Correlation Analysis (CCA) based cross-view method.
Xiang, H, Xu, X, Zheng, H, Li, S, Wu, T, Dou, W & Yu, S 2016, 'An Adaptive Cloudlet Placement Method for Mobile Applications over GPS Big Data', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Mobile cloud computing provides powerful computing and storage capacity for managing GPS big data by offloading vast workloads to remote clouds. For mobile applications with urgent computing or communication deadlines, it is necessary to reduce the workload transmission latency between mobile devices and clouds. This can be technically achieved by deploying movable cloudlets co-located with Access Points (APs). However, it is non-trivial to place such movable cloudlets efficiently to enhance the cloud service for dynamic context-aware mobile applications. In view of this challenge, an adaptive cloudlet placement method for mobile applications over GPS big data is proposed in this paper. Specifically, the gathering regions of the mobile devices are identified based on position clustering, and the cloudlet destination locations are determined accordingly. Besides, the traces between the origin and destination locations of these mobile cloudlets are also derived. Finally, the experimental results demonstrate that the proposed method is both effective and efficient.
Xue, S, Lu, J, Wu, J, Zhang, G & Xiong, L 2016, 'Multi-instance graphical transfer clustering for traffic data learning', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 4390-4395.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In order to better model complex real-world data and to develop robust features that capture relevant information, we usually employ unsupervised feature learning to learn a layer of feature representations from unlabeled data. However, developing domain-specific features for each task is expensive and time-consuming, and requires expertise of the data. In this paper, we introduce multi-instance clustering and graphical learning to unsupervised transfer learning. To improve clustering efficiency, we propose a set of algorithms, applied to traffic data learning, for instance feature representation, distance calculation in multi-instance clustering, multi-instance graphical cluster initialisation, multi-instance multi-cluster update, and graphical multi-instance transfer clustering (GMITC). At the end of this paper, we examine the proposed algorithms on the Eastwest datasets against several baselines. The experimental results indicate that our proposed algorithms achieve higher clustering accuracy and substantially faster processing.
Yang, D, Wu, Z, Wang, X, Cao, J & Xu, G 2016, 'Predicting Replacement of Smartphones with Mobile App Usage', Web Information Systems Engineering – WISE 2016 (LNCS), International Conference on Web Information Systems Engineering, Springer International Publishing, Shanghai, China, pp. 343-351.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Identifying customers who intend to replace their smartphones can help cell phone retailers perform precision marketing and thus bring significant financial gains. In this paper, we provide a study of exploiting mobile app usage for predicting users who will change their phone in the future. We first analyze the characteristics of mobile log data and develop the temporal bag-of-apps model, which can transform the raw data into app usage vectors. We then formalize the prediction problem, present the hazard-based prediction model, and derive the inference procedure. Finally, we evaluate both the data model and the prediction model on real-world data. The experimental results show that the temporal usage data model can effectively capture the unique characteristics of mobile log data, and the hazard-based prediction model is thus much more effective than traditional classification methods. Furthermore, the hazard model is explainable, that is, it can easily show how the replacement of smartphones relates to mobile app usage over time.
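The temporal bag-of-apps idea — turning raw usage logs into one app-count vector per time period — can be sketched as follows. The event format, period length and example log are assumptions for illustration, not the paper's exact data model.

```python
from collections import Counter

def temporal_bag_of_apps(events, period_days=7):
    """Aggregate (day_index, app_name) usage events into one
    app-count vector per period of `period_days` days."""
    periods = {}
    for day, app in events:
        periods.setdefault(day // period_days, Counter())[app] += 1
    vocab = sorted({app for _, app in events})           # fixed app ordering
    vectors = [[periods[p].get(a, 0) for a in vocab]     # one row per period
               for p in sorted(periods)]
    return vectors, vocab

# Hypothetical log: (day, app) pairs over two weeks.
events = [(0, "maps"), (1, "chat"), (8, "chat"), (9, "chat")]
vectors, vocab = temporal_bag_of_apps(events)
print(vocab, vectors)  # ['chat', 'maps'] [[1, 1], [2, 0]]
```

The resulting per-period vectors would then feed a survival/hazard model of replacement time, which is beyond this sketch.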
Yao, L, Benatallah, B, Wang, X, Tran, NK & Lu, Q 2016, 'Context as a Service: Realizing Internet of Things-Aware Processes for the Independent Living of the Elderly', SERVICE-ORIENTED COMPUTING, (ICSOC 2016), 14th International Conference on Service-Oriented Computing (ICSOC), Springer International Publishing, Banff, CANADA, pp. 763-779.
View/Download from: Publisher's site
Ye, L, Cao, K, Guo, YJ, Huang, X, Beadle, P, Argha, A, Piccardi, M, Zhang, G & Su, SW 2016, 'Inertial Sensor based Post Fall Analysis for False Alarming Reduction', Telehealth and Assistive Technology / 847: Intelligent Systems and Robotics, Telehealth and Assistive Technology / 847: Intelligent Systems and Robotics, ACTAPRESS, Zurich, Switzerland, pp. 36-43.
View/Download from: Publisher's site
View description>>
One of the major public health problems among elderly people is falling injury. This study investigates fall detection and prevention using inertial sensors, for which the major existing challenge is how to significantly reduce false alarms in order to enhance acceptance by elderly users during rehabilitation and daily exercises. Different from most existing approaches in the literature, the behavior after falling is analyzed in detail, which not only greatly reduces false alarms, but also significantly improves the accuracy of the assessment of the severity of falling injuries.
Zeng, X, Lu, J, Kerre, EE, Martinez, L & Koehl, L 2016, 'Foreword', Uncertainty Modelling in Knowledge Engineering and Decision Making - Proceedings of the 12th International FLINS Conference, FLINS 2016, WORLD SCIENTIFIC PUBL CO PTE LTD, pp. v-vi.
View/Download from: Publisher's site
Zhang, Q, Wu, D, Zhang, G & Lu, J 2016, 'Fuzzy user-interest drift detection based recommender systems', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 1274-1281.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Recommender systems aim to provide personalized suggestions to users by modeling user-interests to deal with the information overload problem, which is extremely severe in the era of big data. Since user-interests drift as users' tastes in items change, recommender systems that do not take this into account suffer degraded prediction accuracy. There are two challenges in adapting to user-interest drift in recommender systems: 1) accurately modeling user-interests is not easy, since the drift of user-interests may occur in a different direction for each user; 2) item features and user-interests are often incomplete and vague, which makes it more difficult to model user-interests. To handle these two issues, this study proposes a fuzzy user-interest drift detection based recommender system that adapts to user-interest drift and improves prediction accuracy. A fuzzy user-interest consistency model is built based on fuzzy set theories, and a user-interest drift detection approach and algorithms are developed based on concept drift techniques to provide guidance for recommendation generation. Empirical experiments are conducted on synthetic and real-world MovieLens datasets. The results show that the proposed approach improves the performance of recommender systems in terms of MAE.
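A minimal form of the drift-detection idea — comparing a user's recent rating behavior against their earlier history — can be sketched as below. The windowed-mean test is a simplification standing in for the paper's fuzzy consistency model, and the window and threshold are assumed parameters.

```python
def interest_drift(ratings, window=5, threshold=1.0):
    """Flag user-interest drift when the mean of the most recent `window`
    ratings deviates from the mean of the earlier history by more than
    `threshold`. `ratings` is a chronological list of one user's ratings
    for a given item category."""
    if len(ratings) < 2 * window:
        return False  # too little history to compare
    recent = ratings[-window:]
    past = ratings[:-window]
    return abs(sum(recent) / window - sum(past) / len(past)) > threshold

stable = [4, 5, 4, 5, 4, 5, 4, 5, 4, 5]
drifted = stable + [1, 1, 2, 1, 1]      # taste suddenly drops
print(interest_drift(stable), interest_drift(drifted))  # False True
```

A fuzzy variant would replace the crisp threshold with membership degrees, so that borderline changes in taste are handled gracefully rather than as a hard yes/no.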
Zhang, Z, Huang, K, Tan, T, Yang, P & Li, J 2016, 'ReD-SFA: Relation Discovery Based Slow Feature Analysis for Trajectory Clustering', 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, USA, pp. 752-760.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. For spectral embedding/clustering, it is still an open problem how to construct a relation graph that reflects the intrinsic structures in data. In this paper, we propose an approach, named Relation Discovery based Slow Feature Analysis (ReD-SFA), for feature learning and graph construction simultaneously. Given an initial graph with only a few nearest but most reliable pairwise relations, new reliable relations are discovered under an assumption of reliability preservation, i.e., the reliable relations will preserve their reliabilities in the learnt projection subspace. We formulate the idea as a cross entropy (CE) minimization problem to reduce the discrepancy between two Bernoulli distributions parameterized by the updated distances and the existing relation graph respectively. Furthermore, to overcome the imbalanced distribution of samples, a Boosting-like strategy is proposed to balance the discovered relations over all clusters. To evaluate the proposed method, extensive experiments are performed on various trajectory clustering tasks, including motion segmentation, time series clustering and crowd detection. The results demonstrate that ReD-SFA can discover reliable intra-cluster relations with high precision, and competitive clustering performance can be achieved in comparison with the state-of-the-art.
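The cross-entropy discrepancy sketched in this abstract — between a 0/1 relation graph and Bernoulli relation probabilities implied by distances in the learnt subspace — can be written out directly. The Gaussian mapping from distance to probability below is an assumed form for illustration, not necessarily the paper's exact parameterization.

```python
import math

def relation_cross_entropy(distances, graph, sigma=1.0):
    """Cross entropy between an existing 0/1 relation graph and the
    Bernoulli relation probabilities implied by pairwise distances in
    the projection subspace (smaller distance -> higher probability).

    distances: dict (i, j) -> distance in the learnt subspace.
    graph:     dict (i, j) -> 1 if the pair is a known relation, else 0.
    """
    ce = 0.0
    for pair, g in graph.items():
        p = math.exp(-distances[pair] ** 2 / sigma)  # assumed Gaussian mapping
        ce -= g * math.log(p) + (1 - g) * math.log(1 - p)
    return ce

# Two pairs: a known relation that projects close together, and a
# non-relation that projects far apart, so the CE is small.
distances = {(0, 1): 0.5, (0, 2): 2.0}
graph = {(0, 1): 1, (0, 2): 0}
print(relation_cross_entropy(distances, graph))
```

Minimizing this quantity over the projection pulls known relations together and pushes non-relations apart, which is the intuition behind discovering new reliable relations in the learnt subspace.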
Zhang, Z, Oberst, S & Lai, JCS 2016, 'Influence of contact condition and sliding speed on friction-induced instability', ICSV 2016 - 23rd International Congress on Sound and Vibration: From Ancient to Modern Acoustics, International Congress on Sound and Vibration: From Ancient to Modern Acoustics (ICSV), International Institute of Acoustics and Vibration, Athens, Greece.
View description>>
Brake squeal, defined as audible noise above 1 kHz, is triggered by energy provided in the contact area between the pad and the disc and by friction-induced instabilities. Owing to customers' demand for reduced vehicle noise and the increasing use of light composite materials in cars, squealing brakes remain a major concern to the automotive industry because of warranty-related claims. The prediction of disc brake squeal propensity is as challenging as ever. Although friction-induced instabilities are inherently nonlinear and the brake system's operating and environmental conditions keep changing during squeal, mostly linear and steady-state methods are used for the analysis of brake squeal propensity. While many different instability mechanisms have been identified, their interactions and the resulting dynamics are not yet fully understood. Linear instability predictions suffer from over- and under-predictions and have to be complemented by extensive noise dynamometer or in-vehicle tests. Recent studies indicate that frictional contact is multi-scaled in nature, highly sensitive and inhomogeneous. Very high local pressures and partial contact separations in the contact interface further complicate its numerical modelling. By studying an analytical model of 3 × 3 friction oscillators using three different friction laws (Amontons-Coulomb, the velocity-dependent and the LuGre friction model) in point contact with a sliding rigid plate and incorporating uncertainties in the contact condition, robustly unstable vibration modes have been identified in our previous research. Here, the number and the combination of friction oscillators engaged in contact are randomised to model imperfect contact. In addition, the effect of the variation in the plate's sliding velocity on the instability analysis is investigated with a randomised friction coefficient of the Amontons-Coulomb friction model. Results of instability prediction and net work calculations are used to illust...
Zheng, D, Huo, H, Chen, S-Y, Xu, B & Liu, L 2016, 'LTMF: Local-Based Tag Integration Model for Recommendation', Springer International Publishing, pp. 296-302.
View/Download from: Publisher's site
Zijlema, A, van den Hoven, E & Eggen, B 2016, 'Companions', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Australia.
View/Download from: Publisher's site
View description>>
Cherished utilitarian objects can provide comfort and pleasure through their associations with our personal past and the time and energy we have invested in and with them. In this paper, we present a specific type of object relationship, which we call the companion. Companions are mundane objects that have accrued meaning over time and evoke tiny pleasures when we interact with them. We then draw insights from the HCI research literature on digital possessions and attachment that could be applied to enhance digital products or processes with companion qualities. We argue for the importance of designing for digital companionship in everyday use products, for example by enabling the accrual of subtle marks of the owner's past with the product. We wish to evoke thought and awareness of the role of companions, and of how this relationship can be supported in digital products.
Zuo, H, Zhang, G, Behbood, V, Lu, J, Pedrycz, W & Zhang, T 2016, 'Fuzzy transfer learning in data-shortage and rapidly changing environments', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), WORLD SCIENTIFIC, Roubaix, FRANCE, pp. 175-180.
View/Download from: Publisher's site
Zurita, G, Merigó, JM & Lobos-Ossandón, V 2016, 'A bibliometric analysis of journals in educational research', Lecture Notes in Engineering and Computer Science, pp. 403-408.
View description>>
The influence and impact of journals in the scientific community is a fundamental question for researchers worldwide because it measures the importance and quality of a publication. This study analyses all the journals that are currently ranked in any educational research category in Web of Science by using bibliometric indicators. The aim is to provide a general overview of their impact and influence between 1989 and 2013. The journals are divided into seven research categories that represent the whole field of educational research. The analysis also develops a general comparison across all the categories. The results show that many interdisciplinary journals achieve a broader impact than the core journals, although these publications are also well positioned in the field.