Ahadi, A, Brennan, S, Kennedy, PJ, Hutvagner, G & Tran, N 2016, 'Long non-coding RNAs harboring miRNA seed regions are enriched in prostate cancer exosomes', Scientific Reports, vol. 6, no. 1, pp. 1-14.
Long non-coding RNAs (lncRNAs) form the largest transcript class in the human transcriptome. These lncRNAs are expressed not only in cells but are also present in cell-derived extracellular vesicles such as exosomes. The function of these lncRNAs in cancer biology is not entirely clear, but they appear to be modulators of gene expression. In this study, we characterize the expression of lncRNAs in several prostate cancer exosomes and their parental cell lines. We show that certain lncRNAs are enriched in cancer exosomes, with the overall expression signatures varying across cell lines. These exosomal lncRNAs are themselves enriched for miRNA seeds, with a preference for let-7 family members as well as miR-17, miR-18a, miR-20a, miR-93 and miR-106b. The enrichment of miRNA seed regions in exosomal lncRNAs is matched with a concomitant high expression of the same miRNAs. In addition, the exosomal lncRNAs also showed an over-representation of RNA binding protein binding motifs. The two most common motifs belonged to ELAVL1 and RBMX. Given the enrichment of miRNA and RBP sites on exosomal lncRNAs, their interplay may suggest a possible function in prostate cancer carcinogenesis.
Al Othman, FA & Sohaib, O 2016, 'Enhancing Innovative Capability and Sustainability of Saudi Firms', Sustainability, vol. 8, no. 12, pp. 1-16.
© 2016 by the author. The Saudi Arabian government has recognised the need for an alternative path to national development in the form of a knowledge-based economy (KBE). One of the key drivers of a KBE is innovation. To achieve this aim, it is therefore important to understand the various factors affecting organisational innovation capability and sustainability. This empirical research study was conducted to provide a better understanding of the interrelationships among the key constructs (socio-technical factors, diffusion of innovation, and the knowledge-sharing process) and Saudi organisational innovation capability. The results offer a number of implications for the adoption of a knowledge-based economy, helping Saudi organisations enrich their innovation capability and sustainability.
Al-Jubouri, B & Gabrys, B 2016, 'Local Learning for Multi-layer, Multi-component Predictive System', Procedia Computer Science, vol. 96, pp. 723-732.
This study introduces a new multi-layer, multi-component ensemble. The components of this ensemble are trained locally on subsets of features for disjoint sets of data. Data instances are assigned to local regions using the pairwise squared correlation of their features as the similarity measure. Many ensemble methods encourage diversity among their base predictors by training them on different subsets of data or different subsets of features. In the proposed architecture, the local regions contain disjoint sets of data, and for each region only the most similar features are selected. The pairwise squared correlations of the features are used to weight the predictions of the ensemble's models. The proposed architecture has been tested on a number of data sets and its performance was compared to five benchmark algorithms. The results showed that the testing accuracy of the developed architecture is comparable to rotation forest and better than the other benchmark algorithms.
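The region-and-feature mechanics described above lend themselves to a short sketch. Below is a minimal, illustrative Python rendering, not the authors' implementation: regions are formed with k-means (an assumption; the paper derives regions from feature similarity), the per-region feature subset is chosen by squared correlation with the label (a stand-in for the paper's feature-pairwise measure), and each region gets its own local model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def train_local_ensemble(X, y, n_regions=4, n_features=5):
    """Illustrative local-learning ensemble: disjoint data regions,
    each with its own feature subset and base predictor."""
    regions = KMeans(n_clusters=n_regions, n_init=10).fit(X)
    models = []
    for r in range(n_regions):
        mask = regions.labels_ == r
        Xr, yr = X[mask], y[mask]
        # squared correlation of each feature with the label in this region
        corr2 = np.nan_to_num([np.corrcoef(Xr[:, j], yr)[0, 1] ** 2
                               for j in range(X.shape[1])])
        top = np.argsort(corr2)[-n_features:]   # keep the strongest features
        models.append((top, DecisionTreeClassifier().fit(Xr[:, top], yr)))
    return regions, models

def predict_local_ensemble(regions, models, X_new):
    labels = regions.predict(X_new)             # route each point to its region
    preds = np.empty(len(X_new), dtype=int)
    for r, (top, clf) in enumerate(models):
        mask = labels == r
        if mask.any():
            preds[mask] = clf.predict(X_new[mask][:, top])
    return preds
```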
Allen, G & Dovey, KA 2016, 'Action Research as a Leadership Strategy for Innovation: The Case of a Global High-Technology Organisation', International Journal of Action Research, vol. 12, no. 1, pp. 8-37.
The paper describes two sets of action research within an iconic global high-tech company. Two teams within the organisation (one in New York and one in Sydney) were selected to participate on the basis of having failed to achieve any technical innovation over the previous three years. The action research had the practical goal of generating valuable technical innovations and the research goal of gaining insight into any social (leadership) practices that may have facilitated the technical innovation. The research delivered novel insights into the nature of the leadership practices that enabled these two teams to deliver four company-lauded technical innovations. The principal finding of the research, that social innovation precedes technical innovation, highlights the role action research can play in the creation of a social environment conducive to technical innovation within enterprises.
Alzoubi, YI, Gill, AQ & Al-Ani, A 2016, 'Empirical studies of geographically distributed agile development communication challenges: A systematic review', Information & Management, vol. 53, no. 1, pp. 22-37.
© 2015 Elsevier B.V. All rights reserved. There is increasing interest in studying and applying geographically distributed agile development (GDAD). Much has been published on GDAD communication, and there is a need to systematically review and synthesize this literature. Using the systematic literature review (SLR) approach and applying customized search criteria derived from the research questions, 21 relevant empirical studies were identified and reviewed in this paper. The data from these papers were extracted to identify communication challenges and the techniques used to overcome them. The findings of this research serve as a resource for GDAD practitioners and researchers when setting future research priorities and directions.
Anaissi, A, Goyal, M, Catchpoole, DR, Braytee, A & Kennedy, PJ 2016, 'Ensemble Feature Learning of Genomic Data Using Support Vector Machine', PLOS ONE, vol. 11, no. 6, pp. e0157330-e0157330.
© 2016 Anaissi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models using randomly drawn bootstrap samples from the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the ranking of multiple SVM models instead of choosing one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing a nearly balanced bootstrap sample. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random forest based approach. The selected genes by the ESVM-RFE algo...
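The core loop of ESVM-RFE (bootstrap, rank with SVM-RFE, aggregate the ranks) is compact enough to sketch. The following is a hedged approximation in Python/scikit-learn, not the authors' code: a stratified bootstrap stands in for the paper's nearly class-balanced resampling, and summed RFE ranks stand in for its aggregation scheme.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC
from sklearn.utils import resample

def esvm_rfe_ranking(X, y, n_models=25, seed=0):
    """Aggregate SVM-RFE feature rankings over bootstrap replicates."""
    rank_sum = np.zeros(X.shape[1])
    for b in range(n_models):
        # stratified bootstrap approximates a nearly balanced resample
        Xb, yb = resample(X, y, stratify=y, random_state=seed + b)
        rfe = RFE(LinearSVC(dual=False), n_features_to_select=1, step=0.1)
        rank_sum += rfe.fit(Xb, yb).ranking_       # 1 = most important
    return np.argsort(rank_sum)                    # feature indices, best first

# e.g. top_genes = esvm_rfe_ranking(X, y)[:50] for a 50-gene signature
```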
Argent, RM, Sojda, RS, Giupponi, C, McIntosh, B, Voinov, AA & Maier, HR 2016, 'Best practices for conceptual modelling in environmental planning and management', Environmental Modelling & Software, vol. 80, pp. 113-121.
Azadeh, A, Aryaee, M, Zarrin, M & Saberi, M 2016, 'A novel performance measurement approach based on trust context using fuzzy T-norm and S-norm operators: The case study of energy consumption', Energy Exploration & Exploitation, vol. 34, no. 4, pp. 561-585.
In today’s economic environment, performance and efficiency assessment is essential for organizations to survive and raise their market share. Energy-efficient consumption is a major issue in the energy planning of every country and a significant concern for managers; hence, a strong approach for efficiency evaluation and assessment is needed in the energy sector. In this study, a novel performance assessment model is proposed based on the concept of trust, using two popular fuzzy operators called T-norm and S-norm. The developed model is applied to a real case study of energy consumption efficiency assessment for 36 countries. An adaptive network based fuzzy inference system (ANFIS) is used to measure the efficiencies. Also, to predict efficiency rates of future time periods, a regression model is applied as a time series model. The obtained results indicate the superiority and applicability of the proposed methodology. To the best of our knowledge, this is the first study that proposes a performance measurement approach based on trust context by using fuzzy T-norm and S-norm operators.
Bano, M, Zowghi, D & Sarkissian, N 2016, 'Empirical study of communication structures and barriers in geographically distributed teams', IET Software, vol. 10, no. 5, pp. 147-153.
Conway's law asserts that the communication structures of organisations constrain the design of the products they develop. This law is more readily observable in geographically distributed contexts because distributed teams are required to share information across different time zones and barriers. The diverse business processes and functions adopted by individual teams in geographically distributed settings create challenges for effective communication. Since the publication of Conway's law, a significant body of research has emerged on its relation to communication structures. When it comes to software projects, explicit observation of Conway's law has produced mixed results. The research reported in this study explores the communication structures and corresponding challenges faced by teams within a large geographically distributed software development organisation. The data were collected from relevant documents, a questionnaire and interviews with relevant stakeholders. The findings suggest that Conway's law is observable within the communication structures of globally distributed software development teams. The authors have identified the barriers and challenges of effective communication in this setting and have investigated the benefits of utilising an integrated system to overcome these challenges.
Belete, GF & Voinov, A 2016, 'Exploring temporal and functional synchronization in integrating models: A sensitivity analysis', Computers & Geosciences, vol. 90, pp. 162-171.
Benavides Espinosa, MDM & Merigó Lindahl, JM 2016, 'Organizational design as a learning enabler: A fuzzy-set approach', Journal of Business Research, vol. 69, no. 4, pp. 1340-1344.
In the literature on organizational learning, very few empirical studies attempt to show how organizational design can enable or hinder learning in organizations. This study uses a fuzzy-set technique (fuzzy-set qualitative comparative analysis: fsQCA) as an initial approach to analyzing different design variables and how they affect organizational learning. The results show that mechanical structures are suitable for organizational learning, especially in large companies. Furthermore, qualified workers should have autonomy to learn.
Beydoun, G & Low, G 2016, 'Centering ontologies in agent oriented software engineering processes', Complex & Intelligent Systems, vol. 2, no. 3, pp. 235-242.
A plethora of Multi Agent Systems (MAS) development methodologies exists, and all compete for prominence. This paper advocates unification of best-of-breed activities from these methodologies and examines two existing approaches for unifying access to them. It proposes an alternative approach, focussed on the use of domain knowledge through ontologies, as offering the best potential for unifying access to them. The reliance on ontologies will provide flexibility in the processes and work products used within the methodology. The focus on domain knowledge will reduce the number of mandatory methodological tasks and at the same time create scope for reuse with respect to both system designs and components. The paper further sketches and argues for a full software development lifecycle for MAS in which ontologies expressing domain knowledge are the central artifacts.
Blanco-Mesa, F, Merigó, JM & Kacprzyk, J 2016, 'Bonferroni means with distance measures and the adequacy coefficient in entrepreneurial group theory', Knowledge-Based Systems, vol. 111, pp. 217-227.
© 2016 The aim of the paper is to develop new aggregation operators using Bonferroni means, OWA operators and distance measures. We introduce the BON-OWAAC and BON-OWAIMAM operators. We are able to include the adequacy coefficient and the maximum and minimum levels in the same formulation with Bonferroni means and an OWA operator. The main advantages of using these operators are that they allow consideration of continuous aggregations, multiple comparisons between each argument and distance measures in the same formulation. An application is developed using these new algorithms in combination with Moore's families and Galois lattices to solve group decision-making problems. The professional and personal interests of entrepreneurs who share co-working spaces are taken as an example for establishing relationships and groups. According to the professional and personal profile affinities of each entrepreneur, the results show dissimilarity and fuzzy relationships and the maximum similarity sub-relations for establishing relationships and groups using Moore's families and Galois lattices. Finally, this new type of distance family can be used for applications in areas such as sports teams, marketing strategy and teamwork.
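For readers unfamiliar with the ingredients, a small sketch of the classical Bonferroni mean and an OWA operator, plus one way of nesting them, may help. This follows the standard textbook definitions; the paper's BON-OWAAC and BON-OWAIMAM operators additionally build in the adequacy coefficient and maximum-minimum composition, which are omitted here.

```python
import numpy as np

def bonferroni_mean(a, p=1, q=1):
    """Classical Bonferroni mean B^{p,q} of nonnegative arguments."""
    a, n = np.asarray(a, float), len(a)
    total = sum(a[i]**p * a[j]**q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

def owa(a, w):
    """Ordered weighted average: weights meet arguments sorted descending."""
    return float(np.dot(w, np.sort(np.asarray(a, float))[::-1]))

def bon_owa(a, w, p=1, q=1):
    """Bonferroni-OWA in Yager's style: the inner mean over j != i is
    replaced by an OWA; w must have length len(a) - 1."""
    a, n = np.asarray(a, float), len(a)
    inner = [owa([a[i]**p * a[j]**q for j in range(n) if j != i], w)
             for i in range(n)]
    return np.mean(inner) ** (1.0 / (p + q))

print(bonferroni_mean([0.4, 0.7, 0.9]))          # plain Bonferroni mean
print(bon_owa([0.4, 0.7, 0.9], w=[0.7, 0.3]))    # OWA-weighted variant
```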
Blount, Y, Abedin, B, Vatanasakdakul, S & Erfani, S 2016, 'Integrating enterprise resource planning (SAP) in the accounting curriculum: a systematic literature review and case study', Accounting Education, vol. 25, no. 2, pp. 185-202.
© 2016 Taylor & Francis. This study investigates how an enterprise resource planning (ERP) software package SAP was integrated into the curriculum of an accounting information systems (AIS) course in an Australian university. Furthermore, the paper provides a systematic literature review of articles published between 1990 and 2013 to understand how ERP systems were integrated into curriculums of other institutions, and to inform the curriculum designers on approaches for adopting SAP, the benefits and potential limitations. The experiences of integrating SAP into an AIS course from both the students and teaching staff perspectives are described and evaluated. The main finding was the importance of resourcing the instructors with technical and pedagogical support to achieve the learning outcomes. The paper concludes by proposing critical success factors for integrating ERP effectively into an AIS course.
Boixo, S, Isakov, SV, Smelyanskiy, VN, Babbush, R, Ding, N, Jiang, Z, Bremner, MJ, Martinis, JM & Neven, H 2016, 'Characterizing Quantum Supremacy in Near-Term Devices', Nature Physics, vol. 14, no. 6, pp. 595-600.
A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time in a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits, the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near-term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits which approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
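The cross-entropy benchmark admits a compact numerical illustration. The toy sketch below (not the authors' code) scores a sampler against the ideal output probabilities of a chaotic circuit; the constants follow the Porter-Thomas statistics invoked in the abstract, so a faithful sampler scores near 1 and an uncorrelated one near 0.

```python
import numpy as np

def xeb_score(ideal_probs, samples, n_qubits):
    """Logarithmic cross-entropy benchmark: H0 minus the measured cross
    entropy, normalized so the ideal gap is 1 under Porter-Thomas."""
    h0 = n_qubits * np.log(2) + np.euler_gamma   # uncorrelated-sampler baseline
    return h0 + np.mean(np.log(ideal_probs[samples]))

rng = np.random.default_rng(1)
n = 10
p = rng.exponential(1.0, 2**n)                   # Porter-Thomas-like weights
p /= p.sum()
faithful = rng.choice(2**n, 20000, p=p)          # samples the ideal distribution
noisy = rng.integers(0, 2**n, 20000)             # fully depolarized sampler
print(xeb_score(p, faithful, n))                 # ~1
print(xeb_score(p, noisy, n))                    # ~0
```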
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Achieving quantum supremacy with sparse and noisy commuting quantum computations', Quantum, vol. 1, pp. 8-8.
The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
Bremner, MJ, Montanaro, A & Shepherd, DJ 2016, 'Average-Case Complexity Versus Approximate Simulation of Commuting Quantum Computations', Physical Review Letters, vol. 117, no. 8.
© 2016 American Physical Society. We use the class of commuting quantum computations known as IQP (instantaneous quantum polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalizations of the boson sampling problem that avoid the so-called permanent anticoncentration conjecture.
Brown, RBK, Beydoun, G, Low, G, Tibben, W, Zamani, R, Garcia-Sanchez, F & Martinez-Bejar, R 2016, 'Computationally efficient ontology selection in software requirement planning', Information Systems Frontiers, vol. 18, no. 2, pp. 349-358.
Cao, Z, Lin, C-T, Chuang, C-H, Lai, K-L, Yang, AC, Fuh, J-L & Wang, S-J 2016, 'Resting-state EEG power and coherence vary between migraine phases', The Journal of Headache and Pain, vol. 17, no. 1.
© 2016, The Author(s). Background: Migraine is characterized by a series of phases (inter-ictal, pre-ictal, ictal, and post-ictal). It is of great interest whether resting-state electroencephalography (EEG) is differentiable between these phases. Methods: We compared resting-state EEG energy intensity and effective connectivity in different migraine phases using EEG power and coherence analyses in patients with migraine without aura as compared with healthy controls (HCs). EEG power and isolated effective coherence of delta (1–3.5 Hz), theta (4–7.5 Hz), alpha (8–12.5 Hz), and beta (13–30 Hz) bands were calculated in the frontal, central, temporal, parietal, and occipital regions. Results: Fifty patients with episodic migraine (1–5 headache days/month) and 20 HCs completed the study. Patients were classified into inter-ictal, pre-ictal, ictal, and post-ictal phases (n = 22, 12, 8, 8, respectively), using 36-h criteria. Compared to HCs, inter-ictal and ictal patients, but not pre- or post-ictal patients, had lower EEG power and coherence, except for a higher effective connectivity in the fronto-occipital network in inter-ictal patients (p < .05). Compared to data obtained from the inter-ictal group, EEG power and coherence were increased in the pre-ictal group, with the exception of a lower effective connectivity in the fronto-occipital network (p < .05). Inter-ictal and ictal patients had decreased EEG power and coherence relative to HCs, which were “normalized” in the pre-ictal or post-ictal groups. Conclusion: Resting-state EEG power density and effective connectivity differ between migraine phases and provide an insight into the complex neurophysiology of migraine.
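The band definitions above map directly onto standard spectral estimators. A minimal Python sketch follows; note the study used isolated effective coherence (a directed measure), for which ordinary magnitude-squared coherence is only a rough stand-in, and the sampling rate here is an assumption.

```python
import numpy as np
from scipy.signal import welch, coherence

BANDS = {'delta': (1, 3.5), 'theta': (4, 7.5),
         'alpha': (8, 12.5), 'beta': (13, 30)}

def band_powers(eeg, fs=256):
    """Mean Welch PSD per band for one EEG channel."""
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    return {b: psd[(f >= lo) & (f <= hi)].mean()
            for b, (lo, hi) in BANDS.items()}

def band_coherence(ch1, ch2, fs=256):
    """Mean magnitude-squared coherence per band between two channels."""
    f, cxy = coherence(ch1, ch2, fs=fs, nperseg=2 * fs)
    return {b: cxy[(f >= lo) & (f <= hi)].mean()
            for b, (lo, hi) in BANDS.items()}

# e.g. band_powers(occipital_signal) -> {'delta': ..., 'theta': ..., ...}
```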
Casanovas, M, Torres-Martínez, A & Merigó, JM 2016, 'Decision Making in Reinsurance with Induced OWA Operators and Minkowski Distances', Cybernetics and Systems, vol. 47, no. 6, pp. 460-477.
Catchpoole, D 2016, '‘Biohoarding’: treasures not seen, stories not told', Journal of Health Services Research & Policy, vol. 21, no. 2, pp. 140-142.
This article raises the concern that biobanks are failing to realize the expected research and health service outcomes. Rather than biobanking, we have been engaging in ‘biohoarding’, where building a quantifiable collection of tissue samples is the primary basis of the bio-resource. The root cause of ‘biohoarding’ is an ideological and motivational confusion as to the purpose for collecting the tissue in the first place. We have lost sight of the knowledge gain that biobanks should generate. The obligation to prevent ‘biohoarding’ lies not with researchers, funders or managers but with policy makers.
Cetindamar, D, Phaal, R & Probert, DR 2016, 'Technology management as a profession and the challenges ahead', Journal of Engineering and Technology Management, vol. 41, pp. 1-13.
Chen, S, Yuan, X, Wang, Z, Guo, C, Liang, J, Wang, Z, Zhang, X & Zhang, J 2016, 'Interactive Visual Discovering of Movement Patterns from Sparsely Sampled Geo-tagged Social Media Data', IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 270-279.
© 1995-2012 IEEE. Social media data with geotags can be used to track people's movements in their daily lives. By providing both rich text and movement information, visual analysis on social media data can be both interesting and challenging. In contrast to traditional movement data, the sparseness and irregularity of social media data increase the difficulty of extracting movement patterns. To facilitate the understanding of people's movements, we present an interactive visual analytics system to support the exploration of sparsely sampled trajectory data from social media. We propose a heuristic model to reduce the uncertainty caused by the nature of social media data. In the proposed system, users can filter and select reliable data from each derived movement category, based on the guidance of uncertainty model and interactive selection tools. By iteratively analyzing filtered movements, users can explore the semantics of movements, including the transportation methods, frequent visiting sequences and keyword descriptions. We provide two cases to demonstrate how our system can help users to explore the movement patterns.
Chen, Y, Zhen, YG, Hu, HY, Liang, J & Ma, KL 2016, 'Visualization technique for multi-attribute in hierarchical structure', Ruan Jian Xue Bao/Journal of Software, vol. 27, no. 5, pp. 1091-1102.
Nowadays, there is an increasing need to analyze complex data with both hierarchical and multi-attribute structure in many fields such as food safety, stock markets, and network security. Visual analytics, which has emerged in recent years, provides a good solution for analyzing this kind of data. So far, many visualization methods for multi-dimensional data and hierarchical data, the typical data objects in the field of information visualization, have been presented to solve data analysis problems effectively. However, the existing solutions cannot meet the requirements of visual analysis for complex data with both multi-dimensional and hierarchical attributes. This paper presents a technique named Multi-Coordinate in Treemap (MCT), which combines rectangular treemaps with multi-dimensional coordinates. MCT uses a treemap created with the Squarified and Strip layout algorithms to represent the hierarchical structure, uses the four edges of a treemap's rectangular node as attribute axes, and, by mapping attribute values onto these axes, connecting attribute points and fitting curves, achieves visualization of multiple attributes in a hierarchical structure. This work applies MCT to visualize pesticide residue detection data and implements the visualization of excessive pesticide residues in fruits and vegetables across the provinces of China. The technique provides an efficient analysis tool for field experts and can also be applied in other fields that require visual analysis of complex data with both hierarchical and multi-attribute structure.
Chuang, S-W, Chuang, C-H, Yu, Y-H, King, J-T & Lin, C-T 2016, 'EEG Alpha and Gamma Modulators Mediate Motion Sickness-Related Spectral Responses', International Journal of Neural Systems, vol. 26, no. 02, pp. 1650007-1650007.
Motion sickness (MS) is a common experience of travelers. To provide insights into brain dynamics associated with MS, this study recruited 19 subjects to participate in an electroencephalogram (EEG) experiment in a virtual-reality driving environment. When riding on consecutive winding roads, subjects experienced postural instability and sensory conflict between visual and vestibular stimuli. Meanwhile, subjects rated their level of MS on a six-point scale. Independent component analysis (ICA) was used to separate the filtered EEG signals into maximally temporally independent components (ICs). Then, reduced logarithmic spectra of ICs of interest, using principal component analysis, were decomposed by ICA again to find spectrally fixed and temporally independent modulators (IMs). Results demonstrated that a higher degree of MS accompanied increased activation of alpha and gamma IMs across remote-independent brain processes, covering motor, parietal and occipital areas. This co-modulatory spectral change in alpha and gamma bands revealed the neurophysiological demand to regulate conflicts among multi-modal sensory systems during MS.
Devitt, SJ 2016, 'Performing Quantum Computing Experiments in the Cloud', Physical Review A, vol. 94, no. 3, p. 032329.
Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, the first realisation of quantum computing technology would occur over the cloud, with users logging onto dedicated hardware over the classical internet. Recently IBM has released the Quantum Experience, which allows users to access a five qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments that have never been demonstrated before. We utilise the IBM chip to realise protocols in quantum error correction, quantum arithmetic, quantum graph theory and fault-tolerant quantum computation, by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
Devitt, SJ 2016, 'Programming quantum computers using 3-D puzzles, coffee cups, and doughnuts', XRDS, vol. 23, no. 1, pp. 45-50.
The task of programming a quantum computer is just as strange as quantum mechanics itself. But it now looks like a simple 3D puzzle may be the future tool of quantum software engineers.
Ding, W-P, Lin, C-T, Prasad, M, Chen, S-B & Guan, Z-J 2016, 'Attribute Equilibrium Dominance Reduction Accelerator (DCCAEDR) Based on Distributed Coevolutionary Cloud and Its Application in Medical Records', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 3, pp. 384-400.
© 2013 IEEE. Aimed at the tremendous challenge of attribute reduction for big data mining and knowledge discovery, we propose a new attribute equilibrium dominance reduction accelerator (DCCAEDR) based on the distributed coevolutionary cloud model. First, the framework of N-populations distributed coevolutionary MapReduce model is designed to divide the entire population into N subpopulations, sharing the reward of different subpopulations' solutions under a MapReduce cloud mechanism. Because the adaptive balancing between exploration and exploitation can be achieved in a better way, the reduction performance is guaranteed to be the same as those using the whole independent data set. Second, a novel Nash equilibrium dominance strategy of elitists under the N bounded rationality regions is adopted to assist the subpopulations necessary to attain the stable status of Nash equilibrium dominance. This further enhances the accelerator's robustness against complex noise on big data. Third, the approximation parallelism mechanism based on MapReduce is constructed to implement rule reduction by accelerating the computation of attribute equivalence classes. Consequently, the entire attribute reduction set with the equilibrium dominance solution can be achieved. Extensive simulation results have been used to illustrate the effectiveness and robustness of the proposed DCCAEDR accelerator for attribute reduction on big data. Furthermore, the DCCAEDR is applied to solve attribute reduction for traditional Chinese medical records and to segment cortical surfaces of the neonatal brain 3-D-MRI records, and the DCCAEDR shows the superior competitive results, when compared with the representative algorithms.
Dong, D, Petersen, IR, Wang, Y, Yi, X & Rabitz, H 2016, 'Sampled‐data design for robust control of open two‐level quantum systems with operator errors', IET Control Theory & Applications, vol. 10, no. 18, pp. 2415-2421.
This study proposes a sampled‐data design method for robust control of open two‐level quantum systems with operator errors. The required control performance is characterised using the concept of a sliding mode domain related to fidelity, coherence or purity. The authors have designed a control law offline and then utilise it online for a two‐level system subject to decoherence with operator errors in the system model. They analyse three cases of approximate amplitude damping decoherence, approximate phase damping decoherence and approximate depolarising decoherence. They design specific sampling periods for these cases that can guarantee the required control performance.
Dzeng, R-J, Lin, C-T & Fang, Y-C 2016, 'Using eye-tracker to compare search patterns between experienced and novice workers for site hazard identification', Safety Science, vol. 82, pp. 56-67.
© 2015 Elsevier Ltd. The construction industry accounts for a high number of accidents. Although identifying hazards prior to commencing construction is widely employed to prevent accidents, it typically fails because of insufficient safety experience. Experience helps in training novice inspectors, although extracting and describing tacit knowledge explicitly is difficult. This study created a digital building construction site, and designed a hazard-identification experiment involving four workplaces featuring obvious and unobvious hazards (e.g., falls, collapses, and electric shocks); an eye-tracker was used to compare the search patterns of the experienced and novice workers. The results indicated that experience assisted the experienced workers in assessing both obvious (p < 0.001) and unobvious hazards (p = 0.004) significantly faster than the novice workers could; however, it did not improve the accuracy with which they identified hazards, indicating that general work experience is not equivalent to safety-specific experience, and may not necessarily improve workers' accuracy in identifying hazards. Nevertheless, the experienced workers were more confident in identifying hazards, they exhibited fewer fixations, and their scan paths for assessing hazards were more consistent. The experienced workers first assessed the high-risk targets (laborers working at heights) and subsequently assessed those working on the ground, followed by the equipment or environment. Furthermore, they typically inspected openings later than novice workers did. The search strategies identified may be incorporated into training courses to improve hazard awareness for novice workers.
Erfani, SS, Blount, Y & Abedin, B 2016, 'The influence of health-specific social network site use on the psychological well-being of cancer-affected people', Journal of the American Medical Informatics Association, vol. 23, no. 3, pp. 467-476.
Faed, A, Chang, E, Saberi, M, Hussain, OK & Azadeh, A 2016, 'Intelligent customer complaint handling utilising principal component and data envelopment analysis (PDA)', Applied Soft Computing, vol. 47, pp. 614-630.
Farley, J & Voinov, A 2016, 'Economics, socio-ecological resilience and ecosystem services', Journal of Environmental Management, vol. 183, pp. 389-398.
Flynn, A, Dwight, T, Harris, J, Benn, D, Zhou, L, Hogg, A, Catchpoole, D, James, P, Duncan, EL, Trainer, A, Gill, AJ, Clifton-Bligh, R, Hicks, RJ & Tothill, RW 2016, 'Pheo-Type: A Diagnostic Gene-expression Assay for the Classification of Pheochromocytoma and Paraganglioma', The Journal of Clinical Endocrinology & Metabolism, vol. 101, no. 3, pp. 1034-1043.
Context: Pheochromocytomas and paragangliomas (PPGLs) are heritable neoplasms that can be classified into gene-expression subtypes corresponding to their underlying specific genetic drivers. Objective: This study aimed to develop a diagnostic and research tool (Pheo-type) capable of classifying PPGL tumors into gene-expression subtypes that could be used to guide and interpret genetic testing, determine surveillance programs, and aid in elucidation of PPGL biology. Design: A compendium of published microarray data representing 205 PPGL tumors was used for the selection of subtype-specific genes that were then translated to the Nanostring gene-expression platform. A support vector machine was trained on the microarray dataset and then tested on an independent Nanostring dataset representing 38 familial and sporadic cases of PPGL of known genotype (RET, NF1, TMEM127, MAX, HRAS, VHL, and SDHx). Different classifier models involving between three and six subtypes were compared for their discrimination potential. Results: A gene set of 46 genes and six endogenous controls was selected representing six known PPGL subtypes: RTK1–3 (RET, NF1, TMEM127, and HRAS), MAX-like, VHL, and SDHx. Of 38 test cases, 34 (90%) were correctly predicted to six subtypes based on the known genotype to gene-expression subtype association. Removal of the RTK2 subtype from training, characterized by an admixture of tumor and normal adrenal cortex, improved the classific...
Frawley, JK, Dyson, LE, Wakefield, J & Tyler, J 2016, 'Supporting Graduate Attribute Development in Introductory Accounting with Student-Generated Screencasts', International Journal of Mobile and Blended Learning (IJMBL), vol. 8, no. 3, pp. 65-82.
In recent years educational, industry and government bodies have placed increasing emphasis on the need to better support the development of “soft” skills or graduate attributes within higher education. This paper details the adoption of a student-generated multimedia screencast assignment that was found to address this need. Implemented within a large introductory accounting subject, this optional assignment allowed undergraduate students to design, develop and record a screencast so as to explain a key accounting concept to their peers. This paper reports on the trial, evaluation and redesign of this assignment. Drawing on data from student surveys, practitioner reflections and descriptive analysis of the screencasts themselves, this paper demonstrates the ways that the assignment contributed to the development and expression of a number of graduate attributes. These included the students' skills in multimedia, creativity, teamwork and self-directed learning. Adopting free-to-use software and providing a fun and different way of learning accounting, this novel approach constitutes a sustainable and readily replicable way of supporting graduate attribute development. This paper contributes understandings that will be relevant to both researchers and practitioners.
Gao, Q, Dong, D & Petersen, IR 2016, 'Fault tolerant quantum filtering and fault detection for quantum systems', Automatica, vol. 71, pp. 125-134.
Gao, Q, Dong, D, Petersen, IR & Rabitz, H 2016, 'Fault tolerant filtering and fault detection for quantum systems driven by fields in single photon states', Journal of Mathematical Physics, vol. 57, no. 6.
The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.
Garcia, JA, Schoene, D, Lord, SR, Delbaere, K, Valenzuela, T & Navarro, KF 2016, 'A Bespoke Kinect Stepping Exergame for Improving Physical and Cognitive Function in Older People: A Pilot Study', Games for Health Journal, vol. 5, no. 6, pp. 382-388.
© 2016 Mary Ann Liebert, Inc. Background: Systematic review evidence has shown that step training reduces the number of falls in older people by half. This study investigated the feasibility and effectiveness of a bespoke Kinect stepping exergame in an unsupervised home-based setting. Materials and Methods: An uncontrolled pilot trial was conducted in 12 community-dwelling older adults (mean age 79.3 ± 8.7 years, 10 females). The stepping game comprised rapid stepping, attention, and response inhibition. Participants were recommended to exercise unsupervised at home for a minimum of three 20-minute sessions per week over the 12-week study period. The outcome measures were choice stepping reaction time (CSRT) (main outcome measure), standing balance, gait speed, five-time sit-to-stand (STS), timed up and go (TUG) performance, and neuropsychological function (attention: letter-digit test; executive function: Stroop test) assessed at baseline, 4 weeks, 8 weeks, and trial end (12 weeks). Results: Ten participants (83%) completed the trial and reassessments. A median of 8.2 twenty-minute sessions was completed and no adverse events were reported. Across the trial period, participants showed significant improvements in CSRT (11%), TUG (13%), gait speed (29%), standing balance (7%), and STS (24%) performance (all P < 0.05). There were also nonsignificant, but meaningful, improvements for the letter-digit (13%) and Stroop tests (15%). Conclusions: This study found that a bespoke Kinect step training program was safe and feasible for older people to undertake unsupervised at home and led to improvements in stepping, standing balance, gait speed, and mobility. The home-based step training program could therefore be included in exercise programs designed to prevent falls.
Gholami, MF, Daneshgar, F, Low, G & Beydoun, G 2016, 'Cloud migration process: A survey, evaluation framework, and open challenges', Journal of Systems and Software, vol. 120, pp. 31-69.
Gill, AQ, Phennel, N, Lane, D & Phung, VL 2016, 'IoT-enabled emergency information supply chain architecture for elderly people: The Australian context', Information Systems, vol. 58, pp. 75-86.
© 2016 Elsevier Ltd. All rights reserved. The effective delivery of emergency information to elderly people is a challenging task. Failure to deliver appropriate information can have an adverse impact on the well-being of elderly people. This paper addresses this challenge and proposes an IoT-enabled, information-architecture-driven approach called 'Resalert'. Resalert offers an IoT-enabled emergency information supply chain architecture pattern, IoT device architecture and system architecture. The applicability of Resalert is evaluated by means of an example scenario, a portable Raspberry Pi based system prototype and user evaluation. The results of this research indicate that the proposed approach appears useful for the effective delivery of emergency information to elderly people.
Gong, C, Tao, D, Maybank, SJ, Liu, W, Kang, G & Yang, J 2016, 'Multi-Modal Curriculum Learning for Semi-Supervised Image Classification', IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3249-3260.
© 1992-2012 IEEE. Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
González, LO, Rodríguez Gil, LI, Martorell Cunill, O & Merigó Lindahl, JM 2016, 'The effect of financial innovation on European banks' risk', Journal of Business Research, vol. 69, no. 11, pp. 4781-4786.
© 2016 Elsevier Inc. This study examines the effect of the use of securitization and credit derivatives on the risk profile of European banks. Using information from 134 listed European banks during the period of 2006–2010, the results show that securitization and trading with credit derivatives have a negative effect on financial stability. The main findings also show the dominance of trading positions over hedging positions for credit derivatives. The results of this study support the higher capital requirements of the new Basel III international banking regulations. Furthermore, accounting measures do not readily indicate market risks, and thus the results support central banks’ use of market-solvency measures to monitor financial stability.
Green, D, Naidoo, E, Olminkhof, C & Dyson, LE 2016, 'Tablets@university: The ownership and use of tablet devices by students', Australasian Journal of Educational Technology, vol. 32, no. 3, pp. 50-64.
Tablet devices have made a dramatic impact in the computing industry, and have been widely adopted by consumers, including tertiary students. Published research surrounding the use of tablet computers in tertiary settings appears to be largely centred on the advantages of integrating tablets into university pedagogies. However, there appears to have been very little research into the current level of ownership and use amongst students beyond university-sponsored adoption programs. This paper sets out to provide baseline data on the level of ownership and the current usage of tablets by students at an Australian university. A survey of 200 undergraduate and postgraduate students and interviews with five students showed high tablet ownership and significant engagement with educational uses. The findings of this study have implications for the incorporation of tablets into university education.
Guan, J, Feng, Y & Ying, M 2016, 'Decomposition of Quantum Markov Chains and Its Applications', Journal of Computer and System Sciences, vol. 95, pp. 55-68.
Markov chains have been widely employed as a fundamental model in the studies of probabilistic and stochastic communicating and concurrent systems. It is well understood that decomposition techniques play a key role in reachability analysis and model-checking of Markov chains. (Discrete-time) quantum Markov chains have been introduced as a model of quantum communicating systems [1] and also as a semantic model of quantum programs [2]. The BSCC (Bottom Strongly Connected Component) and stationary coherence decompositions of quantum Markov chains were introduced in [3, 4, 5]. This paper presents a new decomposition technique, namely periodic decomposition, for quantum Markov chains. We further establish a limit theorem for them. As an application, an algorithm to find a maximum dimensional noiseless subsystem of a quantum communicating system is given using decomposition techniques of quantum Markov chains.
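The BSCC notion has a simple classical analogue worth sketching: in the reachability graph of a Markov chain, BSCCs are the strongly connected components with no outgoing edges. The snippet below computes them for a classical transition graph; the cited papers work with subspaces of a Hilbert space, so this is only the combinatorial skeleton.

```python
import networkx as nx

def bottom_sccs(transitions):
    """BSCCs of a transition graph: SCCs with no outgoing edges.
    `transitions` is an iterable of (i, j) pairs with nonzero transfer."""
    cond = nx.condensation(nx.DiGraph(transitions))   # DAG of SCCs
    return [cond.nodes[n]['members']
            for n in cond.nodes if cond.out_degree(n) == 0]

# toy chain: {0, 1} is recurrent, 2 is transient, {3} is absorbing
print(bottom_sccs([(0, 1), (1, 0), (2, 0), (2, 3), (3, 3)]))
# -> [{0, 1}, {3}] (component order may vary)
```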
Guo, S, Yu, S, Li, J & Ansari, N 2016, 'Big data for networking [Guest Editorial]', IEEE Network, vol. 30, no. 1, pp. 4-5.
Gupta, P, Lin, C-T, Mehlawat, MK & Grover, N 2016, 'A New Method for Intuitionistic Fuzzy Multiattribute Decision Making', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 9, pp. 1167-1179.
© 2013 IEEE. In this paper, we study the multiattribute decision-making (MADM) problem with intuitionistic fuzzy values that represent information regarding alternatives on the attributes. Assuming that the weight information of the attributes is not known completely, we use an approach that utilizes the relative comparisons based on the advantage and disadvantage scores of the alternatives obtained on each attribute. The relative comparison of the intuitionistic fuzzy values in this research use all the three parameters, namely membership degree ('the more the better'), nonmembership degree ('the less the better'), and hesitancy degree ('the less the better'), thereby leading to the tradeoff values of all the three parameters. The score functions (advantage and disadvantage scores) used for this purpose are based on the positive contributions of these parameters, wherever applicable. Furthermore, these scores are used to obtain the strength and weakness scores leading to the satisfaction degrees of the alternatives. The optimal weights of the attributes are determined using a multiobjective optimization model that simultaneously maximizes the satisfaction degree of each alternative. The optimal solution is used for ranking and selecting the best alternative on the basis of the overall attribute values. To validate the proposed methodology, we present a numerical illustration of a real-world case. The methodology is further extended to treat MADM problem with interval-valued intuitionistic fuzzy information. Finally, a thorough comparison is done to demonstrate the advantages of the solution methodology over the existing methods used for the intuitionistic fuzzy MADM problems.
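To make the three-parameter comparison concrete, here is a hedged sketch of pairwise advantage scoring for intuitionistic fuzzy values. The exact score functions in the paper differ; this stand-in simply rewards higher membership and lower nonmembership/hesitancy, clipping negative contributions at zero.

```python
import numpy as np

def hesitancy(mu, nu):
    """Hesitancy degree of an intuitionistic fuzzy value (mu, nu)."""
    return 1.0 - mu - nu

def advantage(a, b):
    """Illustrative advantage of IFV a over IFV b on one attribute."""
    (mu_a, nu_a), (mu_b, nu_b) = a, b
    return (max(mu_a - mu_b, 0.0)               # membership: more is better
            + max(nu_b - nu_a, 0.0)             # nonmembership: less is better
            + max(hesitancy(mu_b, nu_b) - hesitancy(mu_a, nu_a), 0.0))

def strength_scores(ifvs):
    """Row sums of the pairwise advantage matrix: one strength per alternative."""
    n = len(ifvs)
    return np.array([[advantage(ifvs[i], ifvs[j]) if i != j else 0.0
                      for j in range(n)] for i in range(n)]).sum(axis=1)

# three alternatives rated (membership, nonmembership) on a single attribute
print(strength_scores([(0.6, 0.2), (0.5, 0.4), (0.3, 0.3)]))
```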
Hajinoroozi, M, Mao, Z, Jung, T-P, Lin, C-T & Huang, Y 2016, 'EEG-based prediction of driver's cognitive performance by deep convolutional neural network', Signal Processing: Image Communication, vol. 47, pp. 549-555.
© 2016 Elsevier B.V. We considered the prediction of driver's cognitive states related to driving performance using EEG signals. We proposed a novel channel-wise convolutional neural network (CCNN) whose architecture considers the unique characteristics of EEG data. We also discussed CCNN-R, a CCNN variation that uses Restricted Boltzmann Machine to replace the convolutional filter, and derived the detailed algorithm. To test the performance of CCNN and CCNN-R, we assembled a large EEG dataset from 3 studies of driver fatigue that includes samples from 37 subjects. Using this dataset, we investigated the new CCNN and CCNN-R on raw EEG data and also Independent Component Analysis (ICA) decomposition. We tested both within-subject and cross-subject predictions and the results showed CCNN and CCNN-R achieved robust and improved performance over conventional DNN and CNN as well as other non-DL algorithms.
Hazber, MAG, Li, R, Gu, X & Xu, G 2016, 'Integration Mapping Rules: Transforming Relational Database to Semantic Web Ontology', Applied Mathematics & Information Sciences, vol. 10, no. 3, pp. 881-901.
© 2016 NSP. Semantic integration has become an attractive area of research in several disciplines, such as information integration, databases and ontologies. A huge amount of data is still stored in relational databases (RDBs) that can be used to build ontologies, yet these databases cannot be used directly by the semantic web. Therefore, one of the main challenges of the semantic web is mapping relational databases to ontologies (RDF(S)-OWL). Moreover, manual mapping of web contents to ontologies is impractical because the web contains billions of pages and most of this content is generated from relational databases. Hence, we propose a new approach, which enables semantic web applications to access relational databases and their contents by semantic methods. Domain ontologies can be used to formulate relational database schema and data in order to simplify the mapping (transformation) of the underlying data sources. Our method consists of two main phases: building ontology from an RDB schema and the generation of ontology instances from RDB data automatically. In the first phase, we studied different cases of RDB schema to be mapped into ontology represented in RDF(S)-OWL, while in the second phase, the mapping rules are used to transform RDB data to ontological instances represented in RDF triples. Our approach is demonstrated with examples, validated by an ontology validator and implemented using Apache Jena in Java and MySQL. This approach is effective for building ontologies and important for mining semantic information from huge web resources.
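The second (data-mapping) phase has a well-known minimal form: each table row becomes an individual typed by a class named after the table, and each column becomes a datatype property. The sketch below uses Python's rdflib instead of the paper's Apache Jena, with an invented example namespace; foreign-key-to-object-property rules and the schema phase are omitted.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace('http://example.org/onto#')     # illustrative namespace

def rows_to_triples(table, pkey, rows):
    """Map RDB rows (dicts of column -> value) to RDF instance triples."""
    g = Graph()
    g.bind('ex', EX)
    for row in rows:
        subj = URIRef(EX[f'{table}/{row[pkey]}'])
        g.add((subj, RDF.type, EX[table]))      # row becomes a class instance
        for col, val in row.items():
            if col != pkey and val is not None:
                g.add((subj, EX[col], Literal(val)))   # column -> property
    return g

g = rows_to_triples('Person', 'id', [{'id': 1, 'name': 'Alice', 'age': 30}])
print(g.serialize(format='turtle'))
```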
He, H, Lin, C-T, Tan, KC, Kendall, G & Cangelosi, A 2016, 'CIS Publication Spotlight [Publication Spotlight]', IEEE Computational Intelligence Magazine, vol. 11, no. 1, pp. 15-17.
Heijboer, M, van den Hoven, E, Bongers, B & Bakker, S 2016, 'Facilitating peripheral interaction: design and evaluation of peripheral interaction for a gesture-based lighting control with multimodal feedback', Personal and Ubiquitous Computing, vol. 20, no. 1, pp. 1-22.
© 2015, The Author(s). Most interactions with today’s interfaces require a person’s full and focused attention. To alleviate the potential clutter of focal information, we investigated how interactions could be designed to take place in the background or periphery of attention. This paper explores whether gestural, multimodal interaction styles of an interactive light system allow for this. A study compared the performance of interactions with the light system in two conditions: the central condition in which participants interacted only with the light system, and the peripheral condition in which they interacted with the system while performing a high-attentional task simultaneously. Our study furthermore compared different feedback styles (visual, auditory, haptic, and a combination). Results indicated that especially for the combination feedback style, the interaction could take place without participants’ full visual attention, and performance did not significantly decrease in the peripheral condition. This seems to indicate that these interactions at least partly took place in their periphery of attention and that the multimodal feedback style aided this process.
Hou, Z, Zhong, H-S, Tian, Y, Dong, D, Qi, B, Li, L, Wang, Y, Nori, F, Xiang, G-Y, Li, C-F & Guo, G-C 2016, 'Full reconstruction of a 14-qubit state within four hours', New Journal of Physics, vol. 18, p. 083036.
Full quantum state tomography (FQST) plays a unique role in the estimation of the state of a quantum system without a priori knowledge or assumptions. Unfortunately, since FQST requires informationally (over)complete measurements, both the number of measurement bases and the computational complexity of data processing suffer an exponential growth with the size of the quantum system. A 14-qubit entangled state has already been experimentally prepared in an ion trap, and the data processing capability for FQST of a 14-qubit state seems to be far away from practical applications. In this paper, the computational capability of FQST is pushed forward to reconstruct a 14-qubit state with a run time of only 3.35 hours using the linear regression estimation (LRE) algorithm, even when informationally overcomplete Pauli measurements are employed. The computational complexity of the LRE algorithm is first reduced from O(10^19) to O(10^15) for a 14-qubit state by dropping all the zero elements, and its computational efficiency is further sped up by fully exploiting the parallelism of the LRE algorithm with parallel Graphic Processing Unit (GPU) programming. Our result can play an important role in quantum information technologies with large quantum systems.
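On a toy scale the LRE idea reduces to a closed form: with a complete Pauli basis the design matrix is orthogonal, so least squares collapses to rho = (1/2^n) sum_k <P_k> P_k. The sketch below reconstructs a 2-qubit state this way; the paper's contribution is making the noisy, overcomplete 14-qubit case tractable with sparsity and GPU parallelism, none of which appears here.

```python
import numpy as np
from functools import reduce
from itertools import product

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

def pauli_basis(n):
    """All 4^n n-qubit Pauli strings."""
    return [reduce(np.kron, ops) for ops in product([I, X, Y, Z], repeat=n)]

def reconstruct(expectations, n):
    """Noiseless LRE limit: rho = (1/2^n) * sum_k <P_k> P_k."""
    return sum(e * P for e, P in zip(expectations, pauli_basis(n))) / 2**n

# round trip on a random 2-qubit pure state
n = 2
v = np.random.randn(2**n) + 1j * np.random.randn(2**n)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())
exps = [np.real(np.trace(rho @ P)) for P in pauli_basis(n)]
print(np.allclose(reconstruct(exps, n), rho))   # True
```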
Huang, J, Yin, Y, Zhao, Y, Duan, Q, Wang, W & Yu, S 2016, 'A Game-Theoretic Resource Allocation Approach for Intercell Device-to-Device Communications in Cellular Networks', IEEE Transactions on Emerging Topics in Computing, vol. 4, no. 4, pp. 475-486.
View/Download from: Publisher's site
View description>>
Device-to-device (D2D) communication is a recently emerged disruptive technology for enhancing the performance of current cellular systems. To successfully implement D2D communications underlaying cellular networks, resource allocation to D2D links is a critical issue, which is far from trivial due to the mutual interference between D2D users and cellular users. Most of the existing resource allocation research for D2D communications has primarily focused on the intracell scenario while leaving intercell settings unconsidered. In this paper, we investigate the resource allocation issue for intercell scenarios where a D2D link is located in the overlapping area of two neighboring cells. Furthermore, we present three intercell D2D scenarios regarding the resource allocation problem. To address the problem, we develop a repeated game model under these scenarios. Distinct from existing works, we characterize the communication infrastructure, namely, base stations, as players competing for resource allocation quotas from D2D demand, and we define the utility of each player as the payoff from both cellular and D2D communications using radio resources. We also propose a resource allocation algorithm and protocol based on the Nash equilibrium derivations. Numerical results indicate that the developed model not only significantly enhances the system performance, including sum rate and sum rate gain, but also sheds light on resource configurations for intercell D2D scenarios.
Huang, K-C, Huang, T-Y, Chuang, C-H, King, J-T, Wang, Y-K, Lin, C-T & Jung, T-P 2016, 'An EEG-Based Fatigue Detection and Mitigation System', International Journal of Neural Systems, vol. 26, no. 04, pp. 1650018-1650018.
View/Download from: Publisher's site
View description>>
Research has indicated that fatigue is a critical factor in cognitive lapses because it negatively affects an individual’s internal state, which is then manifested physiologically. This study explores neurophysiological changes, measured by electroencephalogram (EEG), due to fatigue. This study further demonstrates the feasibility of an online closed-loop EEG-based fatigue detection and mitigation system that detects physiological change and can thereby prevent fatigue-related cognitive lapses. More importantly, this work compares the efficacy of fatigue detection and mitigation between the EEG-based method and a non-EEG-based random method. Twelve healthy subjects participated in a sustained-attention driving experiment. Each participant’s EEG signal was monitored continuously and a warning was delivered in real time to participants once the EEG signature of fatigue was detected. Study results indicate suppression of the alpha- and theta-power of an occipital component and improved behavioral performance following a warning signal; these findings are in line with those in previous studies. However, study results also showed reduced warning efficacy (i.e. increased response times (RTs) to lane deviations) accompanied by increased alpha-power due to the fluctuation of warnings over time. Furthermore, a comparison of the EEG-based and non-EEG-based random approaches clearly demonstrated the necessity of adaptive fatigue-mitigation systems, based on a subject’s cognitive level, to deliver warnings. Analytical results clearly demonstrate and validate the efficacy of this online closed-loop EEG-based fatigue detection and mitigation mechanism to identify cognitive lapses that may lead to catastrophic incidents in countless operational environments.
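A rough sketch of the kind of spectral feature such a system monitors, assuming synthetic data, a 250 Hz sampling rate and SciPy's Welch estimator; the study's actual detection pipeline (source components, trained thresholds) is far more elaborate:

```python
# Alpha- and theta-band power from a synthetic EEG trace via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz "alpha"

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # power spectral density

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f <= hi)
    return psd[mask].sum() * (f[1] - f[0])  # approximate band integral

alpha = band_power(f, psd, 8, 12)
theta = band_power(f, psd, 4, 7)
print(f"alpha={alpha:.3f}  theta={theta:.3f}")
```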
Hussain, W, Hussain, FK, Hussain, OK & Chang, E 2016, 'Provider-Based Optimized Personalized Viable SLA (OPV-SLA) Framework to Prevent SLA Violation', The Computer Journal, vol. 59, no. 12, pp. 1760-1783.
View/Download from: Publisher's site
View description>>
A service level agreement (SLA) is an essential agreement formed between a consumer and a provider in business activities. The SLA defines the business terms, objectives, obligations and commitment of both parties to a business activity, and in cloud computing it also defines a consumer's request for both fixed and variable resources, due to the elastic and dynamic nature of the cloud-computing environment. Providers need to thoroughly analyze such variability when forming SLAs to ensure they commit to the agreements with consumers and at the same time make the best use of available resources and obtain maximum returns. They can achieve this by entering into viable SLAs with consumers. A consumer's profile becomes a key element in determining the consumer's reliability, as a consumer who has a history of previous service violations is more likely to violate future service agreements; hence, a provider can avoid forming SLAs with such consumers. In this paper, we propose a novel optimal SLA formation architecture from the provider's perspective, enabling the provider to consider a consumer's reliability in committing to the SLA. We classify existing consumers into three categories based on their reliability or trustworthiness value and use that knowledge to ascertain whether to accept a consumer request for resource allocation, and then to determine the extent of the allocation. Our proposed architecture helps the service provider to monitor the behavior of service consumers in the post-interaction time phase and to use that information to form viable SLAs in the pre-interaction time phase to minimize service violations and penalties.
Jayakodi, K, Bandara, M, Perera, I & Meedeniya, D 2016, 'WordNet and Cosine Similarity based Classifier of Exam Questions using Bloom’s Taxonomy', International Journal of Emerging Technologies in Learning (iJET), vol. 11, no. 04, pp. 142-142.
View/Download from: Publisher's site
View description>>
Assessment usually plays an indispensable role in education, and it is the prime indicator of student learning achievement. Exam questions are the main form of assessment used in learning. Setting appropriate exam questions to achieve the desired outcomes of a course is a challenging task for the examiner. Therefore, this research focuses on automatically categorizing exam questions into learning levels using Bloom’s taxonomy. Natural Language Processing (NLP) techniques such as tokenization, stop-word removal, lemmatization and tagging were used before generating the rule set for this classification. WordNet similarity algorithms with NLTK and a cosine similarity algorithm were developed to generate a unique set of rules to identify the question category and the weight of each exam question according to Bloom’s taxonomy. These derived rules make it easy to analyze exam questions. Evaluators can redesign their exam papers based on the outcome of the evaluation process. A sample of examination questions from the Department of Computing and Information Systems, Wayamba University, Sri Lanka was used for the evaluation; weight assignment was done based on the total value generated from both the WordNet algorithm and the cosine algorithm. Identified question categories were confirmed by a domain expert. The generated rule set achieved over 70% accuracy.
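To make the cosine-similarity side of the approach concrete, here is a minimal sketch that scores a question against abridged, hypothetical verb lists for a few Bloom levels; the paper's rule set also incorporates WordNet similarity, which is omitted here:

```python
# Cosine similarity between an exam question and per-level Bloom verb lists.
import math
from collections import Counter

BLOOM = {  # sample action verbs per level (abridged, illustrative only)
    "remember":   "define list recall name state",
    "understand": "explain describe summarise classify",
    "apply":      "use demonstrate compute solve implement",
    "analyse":    "compare contrast differentiate examine",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(question: str) -> str:
    q = Counter(question.lower().split())
    return max(BLOOM, key=lambda lvl: cosine(q, Counter(BLOOM[lvl].split())))

print(classify("Explain and describe the role of assessment in learning"))
```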
Han, J, Zhang, G, Hu, Y & Lu, J 2016, 'A solution to bi/tri-level programming problems using particle swarm optimization', INFORMATION SCIENCES, vol. 370, pp. 519-537.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. Multilevel (including bi-level and tri-level) programming aims to solve decentralized decision-making problems that feature interactive decision entities distributed throughout a hierarchical organization. Since the multilevel programming problem is strongly NP-hard and traditional exact algorithmic approaches lack efficiency, heuristics-based particle swarm optimization (PSO) algorithms have been used to generate an alternative for solving such problems. However, the existing PSO algorithms are limited to solving linear or small-scale bi-level programming problems. This paper first develops a novel bi-level PSO algorithm to solve general bi-level programs involving nonlinear and large-scale problems. It then proposes a tri-level PSO algorithm for handling tri-level programming problems that are more challenging than bi-level programs and have not been well solved by existing algorithms. For the sake of exploring the algorithms' performance, the proposed bi/tri-level PSO algorithms are applied to solve 62 benchmark problems and 810 large-scale problems which are randomly constructed. The computational results and comparison with other algorithms clearly illustrate the effectiveness of the proposed PSO algorithms in solving bi-level and tri-level programming problems.
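The nested structure behind a bi-level PSO can be sketched compactly: an outer swarm searches the leader's variable, and every evaluation runs an inner swarm to approximate the follower's rational response. The toy objectives below are invented, and the paper's algorithm includes far more machinery for constraints and scale:

```python
# Nested (bi-level) PSO sketch on a toy problem.
import numpy as np

rng = np.random.default_rng(1)

def pso(obj, dim, iters=40, n=15, lo=-5.0, hi=5.0):
    """Bare-bones global-best PSO minimising obj over [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        v = (0.7 * v
             + 1.5 * rng.random((n, dim)) * (pbest - x)
             + 1.5 * rng.random((n, dim)) * (g - x))
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

def follower_best(xl):
    """Inner swarm: approximate the follower's rational reaction to xl."""
    y_star, _ = pso(lambda y: (y[0] - xl[0]) ** 2, dim=1, iters=20, n=10)
    return y_star

def leader_obj(xl):
    """Leader's cost, evaluated at the follower's (approximate) response."""
    y_star = follower_best(xl)
    return (xl[0] - 2.0) ** 2 + (y_star[0] - 1.0) ** 2

best, val = pso(leader_obj, dim=1, iters=15, n=8)
print("leader x* ~", best[0], " objective ~", round(float(val), 4))
```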
Johnston, A 2016, 'Opportunities for Practice-Based Research in Musical Instrument Design', Leonardo, vol. 49, no. 1, pp. 82-83.
View/Download from: Publisher's site
View description>>
This paper considers the relationship between design, practice and research in the area of New Interfaces for Musical Expression (NIME). The author argues that NIME practitioner-researchers should embrace the instability and dynamism inherent in digital musical interactions in order to explore and document the evolving processes of musical expression.
Johnston, A & Ferguson, S 2016, 'Practice-Based Research and New Interfaces for Musical Expression', Leonardo, vol. 49, no. 1, pp. 71-71.
View/Download from: Publisher's site
Joshi, RG, Chelliah, J, Sood, S & Burdon, S 2016, 'Nature and spirit of exchange and interpersonal relationships fostering grassroots innovations', The Journal of Developing Areas, vol. 50, no. 6, pp. 399-409.
View/Download from: Publisher's site
View description>>
Exchange and interpersonal relationships are central to the functioning and sustainability of socio-economic activities, including innovation. Grassroots innovations (GI) are dynamic and relational phenomena that evolve with grassroots innovators’ beliefs, expectations and obligatory relationships for varied resources, and the actualization of their desire to make novel and beneficial products. In this paper, the dynamics of exchange and interpersonal relationships that underpin the GI phenomenon are explored through the lens of exchange theory and the consideration of the psychological contract. While exchange theory provides an explanation for the interdependent and dyadic socio-economic relations present in GI, the psychological contract provides a view on the perceptions and expectations that are embedded in exchange and innovation activities. These two theoretical lenses serve as a foundation for the research to engage with the subjective reality of the grassroots innovators’ experiences. In examining the subjective reality of the innovation experiences of the grassroots innovators, the research discerns the dominant form of exchange and socio-economic structure that fosters GI from ideation to commercial scaling. Through the use of phenomenological exploration and detailed thematic analysis of the innovation experiences of thirteen Indian grassroots innovators, the research determined the nature and spirit of the relational commercial exchanges that both entail and foster GI. The paper begins with a discussion of the theoretical foundations of the research. Thereafter, the paper briefly discusses the research methodology and the exchange dynamics present in GI. In assimilating the research findings, the paper lists the features of exchanges embedded in the GI phenomenon and highlights the capacity of relational commercial exchanges in fostering GI. The paper further proposes, through this discussion, an interpretive framework for u...
Juang, C-F, Jeng, T-L & Chang, Y-C 2016, 'An Interpretable Fuzzy System Learned Through Online Rule Generation and Multiobjective ACO With a Mobile Robot Control Application', IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 2706-2718.
View/Download from: Publisher's site
Kaiwartya, O, Abdullah, AH, Cao, Y, Altameem, A, Prasad, M, Lin, C-T & Liu, X 2016, 'Internet of Vehicles: Motivation, Layered Architecture, Network Model, Challenges, and Future Aspects', IEEE Access, vol. 4, pp. 5356-5373.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. The Internet of Things is smartly changing various existing research areas into new themes, including smart health, smart home, smart industry, and smart transport. Building on 'smart transport', the Internet of Vehicles (IoV) is evolving as a new theme of research and development out of vehicular ad hoc networks (VANETs). This paper presents a comprehensive framework for IoV with emphasis on layered architecture, protocol stack, network model, challenges, and future aspects. Specifically, following background on the evolution of VANETs and the motivation for IoV, an overview of IoV is presented as a heterogeneous vehicular network. The IoV includes five types of vehicular communications, namely, vehicle-to-vehicle, vehicle-to-roadside, vehicle-to-infrastructure of cellular networks, vehicle-to-personal devices, and vehicle-to-sensors. A five-layered architecture of IoV is proposed considering the functionalities and representations of each layer. A protocol stack for the layered architecture is structured considering management, operational, and security planes. A network model of IoV is proposed based on three network elements: cloud, connection, and client. The benefits of the design and development of IoV are highlighted by a qualitative comparison between IoV and VANETs. Finally, the challenges ahead for realizing IoV are discussed and future aspects of IoV are envisioned.
Kamaleswaran, R & McGregor, C 2016, 'A Review of Visual Representations of Physiologic Data', JMIR Medical Informatics, vol. 4, no. 4, pp. e31-e31.
View/Download from: Publisher's site
View description>>
Background
Physiological data is derived from electrodes attached directly to patients. Modern patient monitors are capable of sampling data at frequencies in the range of several million bits every hour. Hence the potential for cognitive threat arising from information overload and diminished situational awareness becomes increasingly relevant. A systematic review was conducted to identify novel visual representations of physiologic data that address cognitive, analytic, and monitoring requirements in critical care environments.
Objective
The aims of this review were to identify knowledge pertaining to (1) support for conveying event information via tri-event parameters; (2) identification of the use of visual variables across all physiologic representations; (3) aspects of effective design principles and methodology; (4) frequency of expert consultations; (5) support for user engagement and identifying heuristics for future developments.
Methods
A review was completed of papers published as of August 2016. Titles were first collected and analyzed using inclusion criteria. Abstracts resulting from the first pass were then analyzed to produce a final set of full papers. Each full paper was passed through a data extraction form eliciting data for comparative analysis.
Results
In total, 39 full papers met all criteria and were selected for full review. Results revealed great diversity in visual representations of physiological data. Visual representations spanned 4 groups including tabular, graph-based, object-based, and metaphoric displays. The metaphoric display was the most popular (n=19), followed by waveform displays typical to the single-sensor-single-indicator paradigm (n=18), and finally object displays (n=9) that utilized spatiotemporal elements to highlight changes in physiologic status. Results obtained from experiments and evaluations suggest specifics related to the optimal use of visual variables, such as colo...
Kamaleswaran, R, Collins, C, James, A & McGregor, C 2016, 'PhysioEx: Visual Analysis of Physiological Event Streams', Computer Graphics Forum, vol. 35, no. 3, pp. 331-340.
View/Download from: Publisher's site
View description>>
Abstract: In this work, we introduce a novel visualization technique, the Temporal Intensity Map, which visually integrates data values over time to reveal the frequency, duration, and timing of significant features in streaming data. We combine the Temporal Intensity Map with several coordinated visualizations of detected events in data streams to create PhysioEx, a visual dashboard for multiple heterogeneous data streams. We have applied PhysioEx in a design study in the field of neonatal medicine, to support clinical researchers exploring physiologic data streams. We evaluated our method through consultations with domain experts. Results show that our tool provides deep insight capabilities, supports hypothesis generation, and can be well integrated into the workflow of clinical researchers.
Kang, K & Sohaib, O 2016, 'Individualists vs. Collectivists in B2C E-Business Purchase Intention', Journal of Internet and e-business Studies, vol. 2016, pp. 1-11.
View/Download from: Publisher's site
View description>>
The purpose of this study is to propose an interpersonal trust (iTrust) model to better understand online consumers' cognitive and affective reactions to a B2C website. This study offers propositions on how culture (individualistic vs. collectivistic) influences the relationships among cognitive-based trust, web design, affect-based trust and buyer behavior towards purchase intention in a B2C e-business website. It is important to understand online purchasing perceptions across the two cultural groups, because individualistic online consumers' trust may be higher than collectivistic consumers' trust, and vice versa.
Lancia, G, Mathieson, L & Moscato, P 2016, 'Separating Sets of Strings by Finding Matching Patterns is Almost Always Hard', Theoretical Computer Science, vol. 665, pp. 73-86.
View/Download from: Publisher's site
View description>>
We study the complexity of the problem of searching for a set of patterns that separate two given sets of strings. This problem has applications in a wide variety of areas, most notably in data mining, computational biology, and in understanding the complexity of genetic algorithms. We show that the basic problem of finding a small set of patterns that match one set of strings but do not match any string in a second set is difficult (NP-complete, W[2]-hard when parameterized by the size of the pattern set, and APX-hard). We then perform a detailed parameterized analysis of the problem, separating tractable and intractable variants. In particular we show that parameterizing by the size of the pattern set and the number of strings, and by the size of the alphabet and the number of strings, gives FPT results, amongst others.
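To fix intuition about the problem itself (not the hardness proofs), the following brute-force sketch checks whether a set of single-character-wildcard patterns matches every string in one set and none in the other, and exhaustively searches for one separating pattern on a tiny instance:

```python
# Naive separation check for wildcard patterns ('?' matches any one character).
# The hardness results concern finding such pattern sets, which this
# exponential search cannot do at scale.
from itertools import product

def matches(pattern: str, s: str) -> bool:
    return len(pattern) == len(s) and all(p in ("?", c) for p, c in zip(pattern, s))

def separates(patterns, pos, neg) -> bool:
    covered = all(any(matches(p, s) for p in patterns) for s in pos)
    clean = not any(matches(p, s) for p in patterns for s in neg)
    return covered and clean

pos, neg = ["abc", "abd"], ["xbc", "bbd"]
alphabet = "abcdx?"
found = ["".join(p) for p in product(alphabet, repeat=3)
         if separates(["".join(p)], pos, neg)]
print(found)  # e.g. 'ab?' matches both positives and neither negative
```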
Li, D-L, Prasad, M, Lin, C-T & Chang, J-Y 2016, 'Self-adjusting feature maps network and its applications', Neurocomputing, vol. 207, pp. 78-94.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. This paper proposes a novel artificial neural network, called the self-adjusting feature map (SAM), and develops its unsupervised learning ability with a self-adjusting mechanism. The trained network structure of representative connected neurons not only displays the spatial relations of the input data distribution but also quantizes the data well. The SAM can automatically isolate sets of connected neurons, where the number of such sets may indicate the number of clusters. The idea of the self-adjusting mechanism is based on combining mathematical statistics with the neurological principle that useful connections are reinforced while unused ones atrophy. In the training process, each representative neuron passes through three phases: growth, adaptation and decline. In the growth phase, the network of representative neurons first creates the necessary neurons according to the local density of the input data. In the adaptation phase, it continually adjusts the connected/disconnected topology of neighboring neuron pairs according to the statistics of the input feature data. Finally, the unnecessary neurons of the network are merged or removed in the decline phase. In this paper, we exploit the SAM to handle some peculiar cases that cannot be handled easily by classical unsupervised learning networks such as the self-organizing map (SOM) network. The remarkable characteristics of the SAM can be seen in various real-world cases in the experimental results.
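For orientation, here is the classical SOM-style update that the SAM family extends; the SAM's growth, adaptation and decline phases and its topology self-adjustment are not reproduced in this toy sketch:

```python
# Classical SOM update: move the best-matching unit and its map neighbours
# toward each input sample, with decaying learning rate.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 2))        # toy 2-D input distribution
weights = rng.random((5, 5, 2))    # 5x5 map of neuron codebook vectors

for t, x in enumerate(data):
    d = np.linalg.norm(weights - x, axis=2)          # distances to all neurons
    bi, bj = np.unravel_index(d.argmin(), d.shape)   # best matching unit
    lr = 0.5 * np.exp(-t / 200)                      # decaying learning rate
    for i in range(5):
        for j in range(5):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / 2.0)  # neighbourhood
            weights[i, j] += lr * h * (x - weights[i, j])

print(weights.reshape(-1, 2)[:3])  # a few trained codebook vectors
```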
Li, F, Xu, G & Cao, L 2016, 'Two-level matrix factorization for recommender systems', Neural Computing and Applications, vol. 27, no. 8, pp. 2267-2278.
View/Download from: Publisher's site
View description>>
© 2015, The Natural Computing Applications Forum. Many existing recommendation methods such as matrix factorization (MF) mainly rely on user–item rating matrix, which sometimes is not informative enough, often suffering from the cold-start problem. To solve this challenge, complementary textual relations between items are incorporated into recommender systems (RS) in this paper. Specifically, we first apply a novel weighted textual matrix factorization (WTMF) approach to compute the semantic similarities between items, then integrate the inferred item semantic relations into MF and propose a two-level matrix factorization (TLMF) model for RS. Experimental results on two open data sets not only demonstrate the superiority of TLMF model over benchmark methods, but also show the effectiveness of TLMF for solving the cold-start problem.
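As context, a bare-bones sketch of the single-level matrix-factorization baseline that TLMF builds on, trained by SGD on a handful of invented ratings; the item-semantic (WTMF) level is not included:

```python
# Plain matrix factorization: fit user/item factor vectors to observed ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2
R = {(0, 1): 5.0, (0, 2): 3.0, (1, 1): 4.0, (2, 3): 2.0, (3, 0): 1.0}

P = 0.1 * rng.standard_normal((n_users, k))   # user factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
lr, reg = 0.05, 0.02

for epoch in range(200):
    for (u, i), r in R.items():
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])   # regularized gradient steps
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(round(float(P[0] @ Q[1]), 2), "vs observed", R[(0, 1)])
```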
Li, Y, Li, Y & Xu, G 2016, 'Protecting private geosocial networks against practical hybrid attacks with heterogeneous information', Neurocomputing, vol. 210, pp. 81-90.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. GeoSocial Networks (GSNs) are becoming increasingly popular due to their power in providing high-performance and flexible service capabilities, and more and more Internet users have accepted this innovative service model. However, even though GSNs have great business value for data analysis when integrated with location information, publishing GSN data may seriously compromise users' privacy. In this paper, we study the identity disclosure problem in publishing GSN data. We first discuss the attack problem by considering both location-based and structure-based properties as background knowledge, and then formalize two general models, named (k,m)-anonymity and (k,m,l)-anonymity. We then propose a complete solution to achieve (k,m)-anonymization and (k,m,l)-anonymization to protect the released data from the above attacks. We also take data utility into consideration by defining specific information loss metrics. It is validated on real-world data that the proposed methods can protect a GSN dataset from the attacks while retaining good utility.
Li, Y, Qiao, Y, Wang, X & Duan, R 2016, 'Tripartite-to-bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces', Communications in Mathematical Physics, vol. 358, no. 2, pp. 791-814.
View/Download from: Publisher's site
View description>>
We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking the tensor product with itself. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate, which turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states, by resorting to certain results on the structure of matrix spaces, including the study of matrix semi-invariants. These formulas give a necessary and sufficient condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
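The bipartite Schmidt rank that underlies these results is simply the matrix rank of the reshaped state vector, as in this small numpy sketch (the paper's maximal Schmidt rank additionally maximizes over the support of a reduced density operator, which is not done here):

```python
# Schmidt rank of a bipartite pure state via SVD of its coefficient matrix.
import numpy as np

def schmidt_rank(psi, dim_a, dim_b, tol=1e-10):
    """Rank of the coefficient matrix of |psi> in C^{dim_a} (x) C^{dim_b}."""
    M = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(schmidt_rank(bell, 2, 2))              # -> 2 (maximally entangled)
print(schmidt_rank([1, 0, 0, 0], 2, 2))      # -> 1 (product state)
```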
Liang, J, Huang, ML & Nguyen, QV 2016, 'Navigation in large hierarchical graph through chain-context views', Journal of Visualization, vol. 19, no. 3, pp. 543-559.
View/Download from: Publisher's site
View description>>
© 2016, The Visualization Society of Japan. Abstract: The most commonly used interaction techniques in space-filling visualization are drilling-down + semantic-zooming and focus + context methods. However, under these schemes, users often have insufficient contextual information to guide them in exploring very large and deep hierarchical structures. This paper proposes an efficient interaction method called the “chain-context view” (CCV) for navigation in space-filling visualizations. Instead of displaying no context view or only one, we provide users with a progressive sequence of context views, which maximizes the display area for contextual information. The rich contextual information provided along the exploration path can greatly increase the accuracy of users’ decisions and reduce “unsuccessful trips” and “unnecessary views” while locating a target object by browsing deep levels of hierarchical structures with CCVs. The new method allows users to trace each step of their interactions and makes it easy to jump or return to any level of the hierarchy they have previously visited. A usability study was conducted to evaluate the effectiveness of the CCV by measuring user performance and satisfaction in the navigation of deeply levelled relational structures.
Light, A, Pedell, S, Robertson, T, Waycott, J, Bell, J, Durick, J & Leong, TW 2016, 'What's special about aging', Interactions, vol. 23, no. 2, pp. 66-69.
View/Download from: Publisher's site
View description>>
Community + Culture features practitioner perspectives on designing technologies for and with communities. We highlight compelling projects and provocative points of view that speak to both community technology practice and the interaction design field as a whole. --- Christopher A. Le Dantec, Editor
Lin, CT & Garibaldi, JJ 2016, 'Editorial', IEEE Transactions on Fuzzy Systems, vol. 24, no. 6, pp. 1257-1258.
View/Download from: Publisher's site
Lin, C-T, Chuang, C-H, Kerick, S, Mullen, T, Jung, T-P, Ko, L-W, Chen, S-A, King, J-T & McDowell, K 2016, 'Mind-Wandering Tends to Occur under Low Perceptual Demands during Driving', Scientific Reports, vol. 6, no. 1.
View/Download from: Publisher's site
View description>>
Abstract: Fluctuations in attention behind the wheel pose a significant risk to driver safety. During transient periods of inattention, drivers may shift their attention towards internally directed thoughts or feelings at the expense of staying focused on the road. This study examined whether increasing task difficulty, by manipulating the sensory modalities involved as the driver detected lane departures in a simulated driving task, would promote a shift of brain activity between different modes of processing, reflected by brain network dynamics on electroencephalographic sources. Results showed that depriving the driver of salient sensory information imposes a relatively more perceptually demanding task, leading to a stronger activation in the task-positive network. When vehicle motion feedback is available, drivers may rely on vehicle motion to perceive the perturbations, which frees attentional capacity and tends to activate the default mode network. Such brain network dynamics could have major implications for understanding fluctuations in driver attention and designing advanced driver assistance systems.
Liu, B, Zhou, W, Zhu, T, Gao, L, Luan, TH & Zhou, H 2016, 'Silence is Golden: Enhancing Privacy of Location-Based Services by Content Broadcasting and Active Caching in Wireless Vehicular Networks', IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9942-9953.
View/Download from: Publisher's site
Liu, B, Zhou, W, Zhu, T, Zhou, H & Lin, X 2016, 'Invisible Hand: A Privacy Preserving Mobile Crowd Sensing Framework Based on Economic Models', IEEE Transactions on Vehicular Technology, vol. 66, no. 5, pp. 1-1.
View/Download from: Publisher's site
Liu, X, Iftikhar, N, Huo, H & Nielsen, PS 2016, 'Optimizing ETL by a Two-Level Data Staging Method', International Journal of Data Warehousing and Mining, vol. 12, no. 3, pp. 32-50.
View/Download from: Publisher's site
View description>>
In data warehousing, the data from source systems are populated into a central data warehouse (DW) through extraction, transformation and loading (ETL). The standard ETL approach usually uses sequential jobs to process the data with dependencies, such as dimension and fact data. It is a non-trivial task to process the so-called early-/late-arriving data, which arrive out of order. This paper proposes a two-level data staging area method to optimize ETL. The proposed method is an all-in-one solution that supports processing different types of data from operational systems, including early-/late-arriving data, and fast-/slowly-changing data. The introduced additional staging area decouples loading process from data extraction and transformation, which improves ETL flexibility and minimizes intervention to the data warehouse. This paper evaluates the proposed method empirically, which shows that it is more efficient and less intrusive than the standard ETL method.
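The role of the second staging level can be sketched in a few lines: early-arriving fact rows whose dimension member has not been loaded yet are parked and retried later, decoupling loading from extraction and transformation. All table and key names below are invented:

```python
# Conceptual sketch of a second staging level for early-arriving facts.
dimension = {}   # loaded dimension members: natural key -> surrogate id
staged = []      # second-level staging area for early-arriving facts
warehouse = []   # target fact table

def load_fact(row):
    if row["cust_key"] in dimension:
        warehouse.append({**row, "cust_id": dimension[row["cust_key"]]})
    else:
        staged.append(row)               # park until the dimension arrives

def load_dimension(key):
    dimension[key] = len(dimension) + 1  # assign surrogate id
    for row in [r for r in staged if r["cust_key"] == key]:
        staged.remove(row)
        load_fact(row)                   # retry parked facts

load_fact({"cust_key": "C42", "amount": 9.5})   # early-arriving fact
print(len(warehouse), len(staged))              # -> 0 1
load_dimension("C42")                           # late-arriving dimension
print(len(warehouse), len(staged))              # -> 1 0
```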
Liu, Y-T, Lin, Y-Y, Wu, S-L, Chuang, C-H & Lin, C-T 2016, 'Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network', IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 347-360.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.
Llopis-Albert, C, Merigó, JM & Xu, Y 2016, 'A coupled stochastic inverse/sharp interface seawater intrusion approach for coastal aquifers under groundwater parameter uncertainty', Journal of Hydrology, vol. 540, pp. 774-783.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. This paper presents an alternative approach to deal with seawater intrusion problems, that overcomes some of the limitations of previous works, by coupling the well-known SWI2 package for MODFLOW with a stochastic inverse model named GC method. On the one hand, the SWI2 allows a vertically integrated variable-density groundwater flow and seawater intrusion in coastal multi-aquifer systems, and a reduction in number of required model cells and the elimination of the need to solve the advective-dispersive transport equation, which leads to substantial model run-time savings. On the other hand, the GC method allows dealing with groundwater parameter uncertainty by constraining stochastic simulations to flow and mass transport data (i.e., hydraulic conductivity, freshwater heads, saltwater concentrations and travel times) and also to secondary information obtained from expert judgment or geophysical surveys, thus reducing uncertainty and increasing reliability in meeting the environmental standards. The methodology has been successfully applied to a transient movement of the freshwater-seawater interface in response to changing freshwater inflow in a two-aquifer coastal aquifer system, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques. The approach also allows partially overcoming the neglected diffusion and dispersion processes after the conditioning process since the uncertainty is reduced and results are closer to available data.
Llopis-Albert, C, Palacios-Marqués, D & Merigó, JM 2016, 'Decision making under uncertainty in environmental projects using mathematical simulation modeling', Environmental Earth Sciences, vol. 75, no. 19.
View/Download from: Publisher's site
View description>>
© 2016, Springer-Verlag Berlin Heidelberg. In decision-making processes, reliability and risk aversion play a decisive role. The aim of this study is to perform an uncertainty assessment of the effects of future scenarios of sustainable groundwater pumping strategies on the quantitative and chemical status of an aquifer. The good status of the aquifer is defined according to the terms established by the EU Water Framework Directive (WFD). A decision support system (DSS) is presented, which makes use of a stochastic inverse model (GC method) and geostatistical approaches to calibrate equally likely realizations of hydraulic conductivity (K) fields for a particular case study. These K fields are conditional to available field data, including hard and soft information. Then, different future scenarios of groundwater pumping strategies are generated, based on historical information and WFD standards, and simulated for each one of the equally likely K fields. The future scenarios lead to different environmental impacts and levels of socioeconomic development of the region and, hence, to a different degree of acceptance among stakeholders. We have identified the different stakeholders implied in the decision-making process, the objectives pursued and the alternative actions that should be considered by stakeholders in a public participation project (PPP). The Monte Carlo simulation provides a highly effective way for uncertainty assessment and allows presenting the results in a simple and understandable way, even for non-expert stakeholders. The methodology has been successfully applied to a real case study and lays the foundations to perform a PPP and stakeholders’ involvement in a decision-making process as required by the WFD. The results of the methodology can help the decision-making process to come up with the best policies and regulations for a groundwater system under uncertainty in groundwater parameters and management strategies and involving stakeh...
Loke, L & Kocaballi, AB 2016, 'Choreographic Inscriptions: A Framework for Exploring Sociomaterial Influences on Qualities of Movement for HCI', Human Technology, vol. 12, no. 1, pp. 31-55.
View/Download from: Publisher's site
View description>>
© 2016 Lian Loke & A. Baki Kocaballi, and the Agora Center, University of Jyväskylä. With the rise of ubiquitous computing technologies in everyday life, the daily actions of people are becoming ever more choreographed by the interactions available through technology. By combining the notion of inscriptions from actor-network theory and the qualitative descriptors of movement from Laban movement analysis, an analytic framework is proposed for exploring how the interplay of material and social inscriptions gives rise to movement patterns and behaviors, translated into choreographic inscriptions described with Laban effort and shape. It is demonstrated through a case study of an affective gesture mobile device. The framework provides an understanding of (a) how movement qualities are shaped by social and material inscriptions, (b) how the relative strength of inscriptions on movements may change according to different settings and user appropriation over time, and (c) how transforming inscriptions by design across different mediums can generate action spaces with varying degrees of openness.
Lopez-Lorca, A, Beydoun, G, Valencia-Garcia, R & Martinez-Bejar, R 2016, 'Automating the reuse of domain knowledge to improve the modelling outcome from interactions between developers and clients', COMPUTING, vol. 98, no. 6, pp. 609-640.
View/Download from: Publisher's site
Lopez-Lorca, AA, Beydoun, G, Valencia-Garcia, R & Martinez-Bejar, R 2016, 'Supporting agent oriented requirement analysis with ontologies', INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, vol. 87, pp. 20-37.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Requirements analysis activities underpin the success of the software development lifecycle. Subsequent errors in the requirements models can propagate to models in later phases and become much costlier to fix. Errors in requirement analysis are more likely in developing complex systems. Particularly, errors due to miscommunication and misinterpretation of a client's intentions are common. Ontologies relying on formal descriptions of semantics have often been used in multi agent systems (MAS) development to support various activities and generally improve the complex systems produced. However, their use during requirements analysis to validate the match with the client's conceptualisation is largely unexplored. This article presents an ontology-driven validation process to support requirement analysis of MAS models. This process is underpinned by an agent-based metamodel that describes commonly used informal agent requirement models. The process concurrently and incrementally validates the informal MAS requirement models produced. The synthesis of the process is first justified and illustrated in a manual tracing of the process. The paper then describes an interactive support tool to harness the formal semantics of ontologies and bypass the costly manual effort. The validation process is evaluated and illustrated using three case studies.
Lu, J, Han, J, Hu, Y & Zhang, G 2016, 'Multilevel decision-making: A survey', INFORMATION SCIENCES, vol. 346, pp. 463-487.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. All rights reserved. Multilevel decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a multiple-level hierarchy. Significant efforts have been devoted to understanding the fundamental concepts and developing diverse solution algorithms associated with multilevel decision-making by researchers in both mathematics/computer science and business areas. Researchers have emphasized the importance of developing a range of multilevel decision-making techniques to handle a wide variety of management and optimization problems in real-world applications, and have successfully gained experience in this area. It is thus vital that a high-quality, instructive review of current trends should be conducted, not only of the theoretical research results but also the practical developments in multilevel decision-making in business. This paper systematically reviews up-to-date multilevel decision-making techniques and clusters related technique developments into four main categories: bi-level decision-making (including multi-objective and multi-follower situations), tri-level decision-making, fuzzy multilevel decision-making, and the applications of these techniques in different domains. By providing state-of-the-art knowledge, this survey will directly support researchers and practical professionals in their understanding of developments in theoretical research results and applications in relation to multilevel decision-making techniques.
Lu, M, Liang, J, Wang, Z & Yuan, X 2016, 'Exploring OD patterns of interested region based on taxi trajectories', Journal of Visualization, vol. 19, no. 4, pp. 811-821.
View/Download from: Publisher's site
View description>>
© 2016, The Visualization Society of Japan. Abstract: Traffic in different regions of a city has different Origin-Destination (OD) patterns, which potentially reveal the surrounding traffic context and social functions. In this work, we present a visual analysis system to explore the OD patterns of a region of interest based on taxi trajectories. The system integrates interactive trajectory filtering with visual OD pattern exploration. Trajectories related to the region of interest are selected with a suite of graphical filtering tools, from which OD clusters are detected automatically. OD traffic patterns can be explored at two levels: an overview of ODs and detailed exploration of dynamic OD patterns, including information on dynamic traffic volume and travel time. By testing on real taxi trajectory data sets, we demonstrate the effectiveness of our system.
Lu, N, Lu, J, Zhang, G & Lopez de Mantaras, R 2016, 'A concept drift-tolerant case-base editing technique', ARTIFICIAL INTELLIGENCE, vol. 230, pp. 108-133.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. All rights reserved. The evolving nature and accumulating volume of real-world data inevitably give rise to the so-called 'concept drift' issue, causing many deployed Case-Based Reasoning (CBR) systems to require additional maintenance procedures. In Case-base Maintenance (CBM), case-base editing strategies to revise the case-base have proven to be effective instance selection approaches for handling concept drift. Motivated by current issues related to CBR techniques in handling concept drift, we present a two-stage case-base editing technique. In Stage 1, we propose a Noise-Enhanced Fast Context Switch (NEFCS) algorithm, which targets the removal of noise in a dynamic environment, and in Stage 2, we develop an innovative Stepwise Redundancy Removal (SRR) algorithm, which reduces the size of the case-base by eliminating redundancies while preserving the case-base coverage. Experimental evaluations on several public real-world datasets show that our case-base editing technique significantly improves accuracy compared to other case-base editing approaches on concept drift tasks, while preserving its effectiveness on static tasks.
Luccio, F, Mans, B, Mathieson, L & Pagli, L 2016, 'Complete Balancing via Rotation', The Computer Journal, vol. 59, no. 8, pp. 1252-1263.
View/Download from: Publisher's site
Luo, S, Yu, H, Zhao, Y, Wang, S, Yu, S & Li, L 2016, 'Towards Practical and Near-Optimal Coflow Scheduling for Data Center Networks', IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 11, pp. 3366-3380.
View/Download from: Publisher's site
View description>>
In current data centers, an application (e.g., MapReduce, Dryad, a search platform, etc.) usually generates a group of parallel flows to complete a job. These flows compose a coflow, and only completing them all is meaningful to the application. Accordingly, minimizing the average Coflow Completion Time (CCT) becomes a critical objective of flow scheduling. However, achieving this goal in today's Data Center Networks (DCNs) is quite challenging, not only because the scheduling problem is theoretically NP-hard, but also because it is tough to perform practical flow scheduling in large-scale DCNs. In this paper, we find that minimizing the average CCT of a set of coflows is equivalent to the well-known problem of minimizing the sum of completion times in a concurrent open shop. As there are abundant existing solutions for concurrent open shop, we open up a variety of techniques for coflow scheduling. Inspired by the best known result, we derive a 2-approximation algorithm for coflow scheduling, and further develop a decentralized coflow scheduling system, D-CAS, which avoids the system problems associated with current centralized proposals while addressing the performance challenges of decentralized suggestions. Trace-driven simulations indicate that D-CAS achieves performance close to Varys, the state-of-the-art centralized method, and significantly outperforms Baraat, the only existing decentralized method.
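The coflow abstraction itself is easy to demonstrate: a coflow's completion time is set by its bottleneck port, so the order in which coflows are served changes the average CCT. The sketch below serializes two invented coflows under a naive smallest-bottleneck-first comparison, which is not the paper's 2-approximation:

```python
# Average CCT for a serialized coflow order; each coflow finishes when its
# most-loaded (bottleneck) port finishes.
def avg_cct(order, loads):
    finish = {}                       # accumulated work per port
    ccts = []
    for c in order:
        for port, w in loads[c].items():
            finish[port] = finish.get(port, 0) + w
        ccts.append(max(finish[p] for p in loads[c]))   # bottleneck port
    return sum(ccts) / len(ccts)

loads = {                             # coflow -> {port: units of data}
    "A": {"p1": 10, "p2": 2},
    "B": {"p1": 1, "p3": 1},
}
print(avg_cct(["A", "B"], loads))     # big coflow first -> 10.5
print(avg_cct(["B", "A"], loads))     # small bottleneck first -> 6.0
```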
Luo, X, Xuan, J, Lu, J & Zhang, G 2016, 'Measuring the Semantic Uncertainty of News Events for Evolution Potential Estimation', ACM TRANSACTIONS ON INFORMATION SYSTEMS, vol. 34, no. 4.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Estimating the evolution potential of news events can support the decision making of both corporations and governments. For example, a corporation could manage a public relations crisis in a timely manner if a negative news event about the corporation is known in advance to have large evolution potential. However, existing state-of-the-art methods are mainly based on time-series historical data, which are not suitable for news events with limited historical data and bursty properties. In this article, we propose a purely content-based method to estimate the evolution potential of news events. The proposed method considers a news event at a given time point as a system composed of different keywords, and the uncertainty of this system is defined and measured as the Semantic Uncertainty of the news event. At the same time, an uncertainty space is constructed with two extreme states: the most uncertain state and the most certain state. We believe that the Semantic Uncertainty correlates with the content evolution of news events, so it can be used to estimate their evolution potential. In order to verify the proposed method, we present detailed experimental setups and results measuring the correlation of the Semantic Uncertainty with the Content Change of news events using collected news event data. The results show that the correlation does exist and is stronger than the correlation between the value from the time-series-based method and the Content Change. Therefore, we can use the Semantic Uncertainty to estimate the evolution potential of news events.
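One plausible way to make the idea concrete (an illustrative proxy, not the paper's exact definition) is to measure the normalized Shannon entropy of an event's keyword distribution, where the uniform distribution plays the role of the most uncertain state:

```python
# Normalized keyword entropy as a proxy for "semantic uncertainty":
# 1.0 = most uncertain (uniform keywords), 0.0 = most certain (one keyword).
import math
from collections import Counter

def semantic_uncertainty(keywords):
    counts = Counter(keywords)
    n = sum(counts.values())
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / h_max

early = ["merger", "bank", "probe", "fraud", "court"]   # diffuse, evolving topic
late = ["verdict"] * 8 + ["court", "fraud"]             # settled topic
print(semantic_uncertainty(early), semantic_uncertainty(late))
```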
Ma, Q, Zhang, S, Zhou, W, Yu, S & Wang, C 2016, 'When Will You Have a New Mobile Phone? An Empirical Answer From Big Data', IEEE Access, vol. 4, pp. 10147-10157.
View/Download from: Publisher's site
View description>>
When and why people change their mobile phones are important issues in the mobile communications industry, because they greatly impact the marketing strategy and revenue estimation of both mobile operators and manufacturers. Making use of big data to analyze and predict phone-changing events is a promising approach. In this paper, based on mobile user big data, we first find through statistical analysis that three important probability distributions, i.e., power-law, log-normal, and geometric distributions, play an important role in user behaviors. Second, relationships between eight selected attributes and phone changing are built; for example, young people have a greater intention to change their phones if they are using low-occupancy phones or feature phones. Third, we verified the performance of four prediction models of the phone-changing event under three scenarios. Information gain ratio was used for attribute selection, and then sampling methods and cost-sensitive learning together with standard classifiers were used to address the imbalanced phone-changing event. Experimental results show that our proposed enhanced backpropagation neural network in the undersampling scenario can attain better prediction performance.
Malomo, L, Pietroni, N, Bickel, B & Cignoni, P 2016, 'FlexMolds: automatic design of flexible shells for molding.', ACM Trans. Graph., vol. 35, pp. 223:1-223:1.
View/Download from: Publisher's site
Mathieson, L 2016, 'Synergies in critical reflective practice and science: Science as reflection and reflection as science', Journal of University Teaching and Learning Practice, vol. 13, no. 2, pp. 1-13.
View description>>
The conceptions of reflective practice in education have their roots at least partly in the work of Dewey, who describes reflection as “the active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (Dewey 1933, p.9). This conception of reflection has carried on into more-focused efforts to describe critical reflection as a tool for improving professional practice (where academic and educational practice is the particular interest of this study); “… some puzzling or troubling or interesting phenomenon” allows the practitioner to access “the understandings which have been implicit in his action, understandings which he surfaces, criticizes, restructures, and embodies in further action” (Schön 1983, p. 50). Both of these descriptions embody a central idea of critical reflective practice: that the examination of practice involves the divination (in a rational, critical sense) of order and perhaps meaning from the facts at hand (which, in turn, are brought to light by the events that occur as the results of implementation of theory). As part of a lecture series, Gottlieb defined science as “an intellectual activity carried out by humans to understand the structure and functions of the world in which they live” (Gottlieb 1997). While science and critical reflective practice attempt to build models about different parts of our world – the natural world and the world of professional (educational) practice respectively – both embody certain underlying aims and methodologies. Indeed, it is striking that in these definitions the simple replacement of the terminology of reflective practice with the terminology of science (or vice versa) leads to a perfectly comprehensible definition of either.It is this confluence that this paper studies, building from two separate foundations, critical reflective practice and science. Via their models and exem...
McGahan, WT, Ernst, H & Dyson, LE 2016, 'Individual Learning Strategies and Choice in Student-Generated Multimedia', International Journal of Mobile and Blended Learning, vol. 8, no. 3, pp. 1-18.
View/Download from: Publisher's site
View description>>
There has been an increasing focus on student-generated multimedia assessment as a way of introducing the benefits of both visual literacy and peer-mediated learning into university courses. One such assessment was offered to first-year health science students but, contrary to expectations, led to poorer performance in their end-of-semester examinations. Following an analysis, the assignment was redesigned to offer students a choice of either a group-based animation task or an individual written task. Results showed improved performance on the assignment when students were offered a choice of assignments over when they were offered only the multimedia assignment. Student feedback indicated that students adopt deliberate individual learning strategies when offered choices in assessment. The study suggests that assumptions regarding the superiority of student-generated multimedia over more traditional assessments are not always correct, but that students' agency and individual preferences need to be recognized.
Merigó, JM & Núñez, A 2016, 'Influential journals in health research: a bibliometric study', Globalization and Health, vol. 12, no. 1, p. 46.
View/Download from: Publisher's site
View description>>
Background
There is a wide range of intellectual work written about health research, which has been shaped by the evolution of diseases. This study aims to identify the leading journals over the last 25 years (1990-2014) according to a wide range of bibliometric indicators.
Methods
The study develops a bibliometric overview of all the journals that are currently indexed in Web of Science (WoS) database in any of the four categories connected to health research. The work classifies health research in nine subfields: Public Health, Environmental and Occupational Health, Health Management and Economics, Health Promotion and Health Behavior, Epidemiology, Health Policy and Services, Medicine, Health Informatics, Engineering and Technology, and Primary Care.
Results
The results indicate a wide dispersion between categories, with the American Journal of Epidemiology, Environmental Health Perspectives, the American Journal of Public Health, and Social Science & Medicine being the journals that have received the highest number of citations over the last 25 years. According to other indicators such as the h-index and citations per paper, other journals, such as the Annual Review of Public Health and Medical Care, obtain better results, which shows the wide diversity of profiles of outlets available in the scientific community. The results are grouped and studied according to the nine subfields in order to identify the leading journals in each specific subdiscipline of health.
Conclusions
The work identifies the leading journals in health research through a bibliometric approach. The analysis provides a deep overview of the results of health journals. It is worth noting that many journals have entered the WoS database in recent years, in many cases to fill a specific niche that has emerged in the literature, although the most popular ones have been in the database for a long time.
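For reference, the h-index used among the indicators above follows directly from its definition, the largest h such that h papers have at least h citations each:

```python
# h-index: with citations sorted in descending order, count positions i
# where the i-th paper still has at least i citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```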
Merigó, JM, Cancino, CA, Coronado, F & Urbano, D 2016, 'Academic research in innovation: a country analysis', Scientometrics, vol. 108, no. 2, pp. 559-593.
View/Download from: Publisher's site
Merigó, JM, Gil-Lafuente, AM & Gil-Lafuente, J 2016, 'Business, industrial marketing and uncertainty', Journal of Business & Industrial Marketing, vol. 31, no. 3, pp. 325-327.
View/Download from: Publisher's site
View description>>
Purpose
This special issue of the Journal of Business & Industrial Marketing, entitled “Business, Industrial Marketing and Uncertainty”, presents selected extended studies that were presented at the European Academy of Management and Business Economics Conference (AEDEM 2012).
Design/methodology/approach
The main focus of this year's conference was reflected in the slogan: “Creating new opportunities in an uncertain environment”. The objective was to show the importance that uncertainty has in our current world, strongly affected by many complexities and modern developments, especially new technological advances.
Findings
One fundamental reason that explains the economic crisis is that governments and companies were not well prepared for these critical situations, and the main justification for this is that they did not have enough information. Otherwise, they would have tried any possible strategy to avoid the crisis. Usually, uncertainty is defined as a situation with unknown information in the environment.
Originality/value
From a theoretical perspective, the problem here is that enterprises and governments should assess information and uncertainty in a more appropriate way. They usually have some studies in this direction, but often these are not enough, as was proved by the last economic crisis.
Merigó, JM, Palacios-Marqués, D & Ribeiro-Navarrete, B 2016, 'Corrigendum to “Aggregation systems for sales forecasting” [J. Bus. Res. 68(11) (2015) 2299–2304]', Journal of Business Research, vol. 69, no. 6, pp. 2325-2325.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Zeng, S 2016, 'Subjective and objective information in linguistic multi-criteria group decision making', European Journal of Operational Research, vol. 248, no. 2, pp. 522-531.
View/Download from: Publisher's site
View description>>
Linguistic decision making systems represent situations that cannot be assessed with numerical information but can be described with linguistic variables. This paper introduces new linguistic aggregation operators in order to develop more efficient decision making systems. The linguistic probabilistic weighted average (LPWA) is presented. Its main advantage is that it considers subjective and objective information in the same formulation, while accounting for the degree of importance of each concept in the aggregation. A key feature of the LPWA operator is that it covers a wide range of linguistic aggregation operators, including the linguistic weighted average, the linguistic probabilistic aggregation and the linguistic average. Further generalizations are presented by using quasi-arithmetic means and moving averages. An application in linguistic multi-criteria group decision making under subjective and objective risk is also presented in the context of European Union law.
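As a rough illustration of how an operator of this kind blends objective and subjective information (a simplification, not the paper's formal LPWA definition), the sketch below maps linguistic labels to indices, mixes a probabilistic average with an importance-weighted average through a single parameter beta, and rounds back to a label. The label set, beta and the example weights are all assumptions.

```python
import numpy as np

labels = ["very_low", "low", "medium", "high", "very_high"]

def lpwa(indices, probs, weights, beta=0.5):
    """Blend of a probabilistic average (objective) and an importance-
    weighted average (subjective) over linguistic label indices."""
    indices = np.asarray(indices, dtype=float)
    probs = np.asarray(probs, dtype=float)    # objective information
    weights = np.asarray(weights, dtype=float)  # subjective importance
    value = beta * indices @ probs + (1 - beta) * indices @ weights
    return labels[int(round(value))], value

# Three alternatives rated "low", "high", "very_high"
print(lpwa([1, 3, 4], probs=[0.3, 0.5, 0.2],
           weights=[0.2, 0.3, 0.5], beta=0.4))  # -> ('high', ~2.9)
```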
Merigó, JM, Peris-Ortíz, M, Navarro-García, A & Rueda-Armengot, C 2016, 'Aggregation operators in economic growth analysis and entrepreneurial group decision-making', Applied Soft Computing, vol. 47, pp. 141-150.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. All rights reserved. An economic crisis can be measured from different perspectives. A very commonly used measure is that of a country's economic growth. When growth is lower than desired, the economy is assumed to be near stagnation or in an economic recession. This paper connects entrepreneurship and economic growth in decision-making problems assessed with modern aggregation systems. Aggregation techniques can represent information more comprehensively in uncertain and imprecise environments. This paper suggests several practical aggregation operators for this purpose, such as the ordered weighted average and the probabilistic ordered weighted averaging weighted average. Other aggregation systems based on macroeconomic theory are also introduced. The paper concludes with an application in an entrepreneurial uncertain multi-criteria multi-person decision-making problem regarding the selection of optimal markets for creating a new company. This approach is based on the use of economic growth as the fundamental variable for determining the preferred solution.
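The ordered weighted average mentioned above is straightforward to state: weights are applied to the arguments after sorting them, so the weight vector encodes an attitude between optimism (weight on the best outcomes) and pessimism (weight on the worst). A minimal sketch with invented growth figures:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights attach to sorted positions,
    not to particular criteria."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # descending
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0)
    return float(v @ w)

# w = (1,0,...,0) recovers the maximum; w = (1/n,...,1/n) the plain mean
growth_scenarios = [2.1, -0.4, 1.3]  # hypothetical GDP growth forecasts (%)
print(owa(growth_scenarios, [0.2, 0.5, 0.3]))
```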
Merigó, JM, Rocafort, A & Aznar-Alarcón, JP 2016, 'Bibliometric overview of business & economics research', Journal of Business Economics and Management, vol. 17, no. 3, pp. 397-413.
View/Download from: Publisher's site
View description>>
Bibliometrics is the quantitative study of bibliographic information. It classifies the information according to different criteria including authors, journals, institutions and countries. This paper presents a general bibliometric overview of the most influential research in business & economics according to the information found in the Web of Science. It includes research from different subcategories including business, business finance, economics and management. To do so, four general lists are presented: the 50 most cited papers in business & economics of all time, the 40 most influential journals, the 40 most relevant institutions and the most influential countries. The results provide a general picture of the most significant research in business & economics. This information is very useful for identifying the leading trends in this area.
Merigó, JM, Yang, J-B & Xu, D-L 2016, 'Demand Analysis with Aggregation Systems', International Journal of Intelligent Systems, vol. 31, no. 5, pp. 425-443.
View/Download from: Publisher's site
Meter, RV & Devitt, SJ 2016, 'Local and Distributed Quantum Computation', IEEE Computer, vol. 49, no. 9, pp. 31-42.
View/Download from: Publisher's site
View description>>
Experimental groups are now fabricating quantum processors powerful enough to execute small instances of quantum algorithms and definitively demonstrate quantum error correction that extends the lifetime of quantum data, adding urgency to architectural investigations. Although other options continue to be explored, effort is coalescing around topological coding models as the most practical implementation option for error correction on realizable microarchitectures. Scalability concerns have also motivated architects to propose distributed memory multicomputer architectures, with experimental efforts demonstrating some of the basic building blocks to make such designs possible. We compile the latest results from a variety of different systems aiming at the construction of a scalable quantum computer.
Motes, KR, Mann, RL, Olson, JP, Studer, NM, Bergeron, EA, Gilchrist, A, Dowling, JP, Berry, DW & Rohde, PP 2016, 'Efficient recycling strategies for preparing large Fock states from single-photon sources: applications to quantum metrology', Physical Review A, vol. 94, no. 1, p. 012344.
View/Download from: Publisher's site
View description>>
Fock states are a fundamental resource for many quantum technologies such as quantum metrology. While much progress has been made in single-photon source technologies, preparing Fock states with large photon number remains challenging. We present and analyze a bootstrapped approach for non-deterministically preparing large photon-number Fock states by iteratively fusing smaller Fock states on a beamsplitter. We show that by employing state recycling we are able to exponentially improve the preparation rate over conventional schemes, allowing the efficient preparation of large Fock states. The scheme requires single-photon sources, beamsplitters, number-resolved photo-detectors, fast-feedforward, and an optical quantum memory.
Mueller, P, Huang, C-T, Yu, S, Tari, Z & Lin, Y-D 2016, 'Cloud Security', IEEE Cloud Computing, vol. 3, no. 5, pp. 22-24.
View/Download from: Publisher's site
Naderpour, M, Lu, J & Zhang, G 2016, 'A safety-critical decision support system evaluation using situation awareness and workload measures', Reliability Engineering & System Safety, vol. 150, pp. 147-159.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. To ensure the safety of operations in safety-critical systems, it is necessary to maintain operators' situation awareness (SA) at a high level. A situation awareness support system (SASS) has therefore been developed to handle uncertain situations [1]. This paper aims to systematically evaluate the enhancement of SA in SASS by applying a multi-perspective approach. The approach consists of two SA metrics, SAGAT and SART, and one workload metric, NASA-TLX. The first two metrics are used for the direct objective and subjective measurement of SA, while the third is used to estimate operator workload. The approach is applied in a safety-critical environment called residue treater, located at a chemical plant in which a poor human-system interface reduced the operators' SA and caused one of the worst accidents in US history. A counterbalanced within-subjects experiment is performed using a virtual environment interface with and without the support of SASS. The results indicate that SASS improves operators' SA, and specifically has benefits for SA levels 2 and 3. In addition, it is concluded that SASS reduces operator workload, although further investigations in different environments with a larger number of participants have been suggested.
Nagayama, S, Choi, B-S, Devitt, S, Suzuki, S & Van Meter, R 2016, 'Interoperability in encoded quantum repeater networks', Physical Review A, vol. 93, no. 4.
View/Download from: Publisher's site
View description>>
The future of quantum repeater networking will require interoperability between various error-correcting codes. A few specific code conversions and even a generalized method are known, however, no detailed analysis of these techniques in the context of quantum networking has been performed. In this paper we analyze a generalized procedure to create Bell pairs encoded heterogeneously between two separate codes used often in error-corrected quantum repeater network designs. We begin with a physical Bell pair and then encode each qubit in a different error-correcting code, using entanglement purification to increase the fidelity. We investigate three separate protocols for preparing the purified encoded Bell pair. We calculate the error probability of those schemes between the Steane [[7,1,3]] code, a distance-3 surface code, and single physical qubits by Monte Carlo simulation under a standard Pauli error model and estimate the resource efficiency of the procedures. A local gate error rate of $10^{-3}$ allows us to create high-fidelity logical Bell pairs between any of our chosen codes. We find that a postselected model, where any detected parity flips in code stabilizers result in a restart of the protocol, performs the best.
Nagayama, S, Fowler, AG, Horsman, D, Devitt, SJ & Meter, RV 2016, 'Surface Code Error Correction on a Defective Lattice', New Journal of Physics, vol. 19, no. 2, pp. 1-29.
View/Download from: Publisher's site
View description>>
The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error correction schemes. In this paper we simulate statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90% and 95%, showing the practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquettes solution against dynamic losses for the surface code to handle static losses such as physically faulty qubits. The single-fault analysis shows that a static loss at the periphery of the lattice has a less negative effect than a static loss at the center. The randomly-faulty analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is $\sim 0.3\%$, and a code distance of seven suppresses the residual error rate below the original error rate at $p=0.1\%$. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with post-correction residual error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.
Nemoto, K, Trupke, M, Devitt, SJ, Scharfenberger, B, Buczak, K, Schmiedmayer, J & Munro, WJ 2016, 'Photonic Quantum Networks formed from NV− centers', Scientific Reports, vol. 6, no. 1, p. 26284.
View/Download from: Publisher's site
View description>>
In this article we present a simple repeater scheme based on the negatively-charged nitrogen vacancy centre in diamond. Each repeater node is built from modules comprising an optical cavity containing a single NV−, with one nuclear spin from 15N as a quantum memory. The module uses only deterministic processes and interactions to achieve high fidelity operations (>99%), and modules are connected by optical fiber. In the repeater node architecture, the photon-mediated processes between modules can in principle be deterministic; however, current limitations on optical components make these processes probabilistic but heralded. Our resource-modest repeater architecture contains two modules at each node, and the repeater nodes are then connected by entangled photon pairs. We discuss the performance of such a quantum repeater network with modest resources and then incorporate more resource-intense strategies step by step. Our architecture should allow large-scale quantum information networks with existing or near-future technology.
Nguyen, Q, Khalifa, N, Alzamora, P, Gleeson, A, Catchpoole, D, Kennedy, P & Simoff, S 2016, 'Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions', Journal of Imaging, vol. 2, no. 4, pp. 29-29.
View/Download from: Publisher's site
View description>>
In cancer biology, genomics represents a big data problem that needs accurate visual data processing and analytics. The human genome is very complex with thousands of genes that contain the information about the individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations from the overview of the entire patient cohort to the detail view of individual genes, our work potentially guides effective treatment decisions for childhood cancer patients. The framework consists of multiple components enabling the complete analytics supporting personalised medicines, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison and user-centric interaction and exploration based on feature selection. In addition to the traditional way to visualise data, we utilise the Unity3D platform for developing a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies with datasets from childhood cancers, B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS) patients, on how to guide the effective treatment decision in the cohort.
Nie, L, Jiang, D, Guo, L & Yu, S 2016, 'Traffic matrix prediction and estimation based on deep learning in large-scale IP backbone networks', Journal of Network and Computer Applications, vol. 76, pp. 16-22.
View/Download from: Publisher's site
View description>>
Network traffic analysis has been one of the most crucial techniques for maintaining a large-scale IP backbone network. Despite its importance, large-scale network traffic monitoring techniques suffer from technical and commercial issues that make it difficult to obtain precise network traffic data. Though network traffic estimation has been the most prevalent technique for acquiring network traffic, it still has a great number of problems that need solving. As networks grow in scale, the ill-posed nature of the network traffic estimation problem becomes more severe. Besides, the statistical features of network traffic have changed greatly under current network architectures and applications. Motivated by this, in this paper we propose a network traffic prediction method and a network traffic estimation method. We first use a deep learning architecture to explore the dynamic properties of network traffic, and then propose a novel network traffic prediction approach based on a deep belief network. We further propose a network traffic estimation method utilizing the deep belief network via link counts and routing information. We validate the effectiveness of our methodologies using real data sets from the Abilene and GÉANT backbone networks.
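The paper's deep-belief-network architecture is not reproduced here, but the general shape of the approach, unsupervised feature learning feeding a regressor that predicts the next traffic value from a sliding window, can be sketched with a single restricted Boltzmann machine from scikit-learn. The window length, layer size and synthetic series are all placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Hypothetical traffic series: one value per time slot for one OD pair.
rng = np.random.default_rng(0)
traffic = np.abs(rng.normal(100, 20, size=2000))

window = 12  # predict the next slot from the previous 12
X = np.array([traffic[i:i + window] for i in range(len(traffic) - window)])
y = traffic[window:]

model = Pipeline([
    ("scale", MinMaxScaler()),              # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("reg", Ridge(alpha=1.0)),              # regressor on learned features
])
model.fit(X[:1500], y[:1500])
print("test MAE:", np.abs(model.predict(X[1500:]) - y[1500:]).mean())
```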
Niu, J, Wang, L, Liu, X & Yu, S 2016, 'FUIR: Fusing user and item information to deal with data sparsity by using side information in recommendation systems', Journal of Network and Computer Applications, vol. 70, pp. 41-50.
View/Download from: Publisher's site
View description>>
Recommendation systems adopt various techniques to recommend ranked lists of items to help users in identifying items that fit their personal tastes best. Among various recommendation algorithms, user and item-based collaborative filtering methods have been very successful in both industry and academia. More recently, the rapid growth of the Internet and E-commerce applications results in great challenges for recommendation systems as the number of users and the amount of available online information have been growing too fast. These challenges include performing high quality recommendations per second for millions of users and items, achieving high coverage under the circumstance of data sparsity and increasing the scalability of recommendation systems. To obtain higher quality recommendations under the circumstance of data sparsity, in this paper, we propose a novel method to compute the similarity of different users based on the side information which is beyond user-item rating information from various online recommendation and review sites. Furthermore, we take the special interests of users into consideration and combine three types of information (users, items, user-items) to predict the ratings of items. Then FUIR, a novel recommendation algorithm which fuses user and item information, is proposed to generate recommendation results for target users. We evaluate our proposed FUIR algorithm on three data sets and the experimental results demonstrate that our FUIR algorithm is effective against sparse rating data and can produce higher quality recommendations.
Oberst, S, Lai, JCS & Evans, TA 2016, 'Termites utilise clay to build structural supports and so increase foraging resources', Scientific Reports, vol. 6, no. 1.
View/Download from: Publisher's site
View description>>
Many termite species use clay to build foraging galleries and mound-nests. In some cases clay is placed within excavations of their wooden food, such as living trees or timber in buildings; however the purpose for this clay is unclear. We tested the hypotheses that termites can identify load bearing wood and that they use clay to provide mechanical support of the load and thus allow them to eat the wood. In field and laboratory experiments, we show that the lower termite Coptotermes acinaciformis, the most basal species to build a mound-nest, can distinguish unloaded from loaded wood and use clay differently when eating each type. The termites target unloaded wood preferentially and use thin clay sheeting to camouflage themselves while eating the unloaded wood. The termites attack loaded wood secondarily and build thick, load-bearing clay walls when they do. The termites add clay and build thicker walls as the load-bearing wood is consumed. The use of clay to support wood under load unlocks otherwise unavailable food resources. This behaviour may represent an evolutionary step from foraging behaviour to nest building in lower termites.
Oberst, S, Zhang, Z & Lai, JCS 2016, 'The Role of Nonlinearity and Uncertainty in Assessing Disc Brake Squeal Propensity', SAE International Journal of Passenger Cars - Mechanical Systems, vol. 9, no. 3, pp. 980-986.
View/Download from: Publisher's site
Othman, SH & Beydoun, G 2016, 'A metamodel-based knowledge sharing system for disaster management', Expert Systems with Applications, vol. 63, pp. 49-65.
View/Download from: Publisher's site
Paler, A, Devitt, SJ & Fowler, AG 2016, 'Synthesis of Arbitrary Quantum Circuits to Topological Assembly', Scientific Reports, vol. 6, no. 1, p. 30600.
View/Download from: Publisher's site
View description>>
Given a quantum algorithm, it is highly nontrivial to devise an efficient sequence of physical gates implementing the algorithm on real hardware and incorporating topological quantum error correction. In this paper, we present a first step towards this goal, focusing on generating correct and simple arrangements of topological structures that correspond to a given quantum circuit and largely neglecting their efficiency. We detail the many challenges that will need to be tackled in the pursuit of efficiency. The software source code can be consulted at https://github.com/alexandrupaler/tqec.
Paler, A, Wille, R & Devitt, SJ 2016, 'Wire Recycling for Quantum Circuit Optimization', Physical Review A, vol. 94, no. 4, p. 042337.
View/Download from: Publisher's site
View description>>
Quantum information processing is expressed using quantum bits (qubits) and quantum gates which are arranged in terms of quantum circuits. Here, each qubit is associated with a quantum circuit wire which is used to conduct the desired operations. Most of the existing quantum circuits allocate a single quantum circuit wire for each qubit and, hence, introduce a significant overhead. In fact, qubits are usually not needed during the entire computation but only between their initialization and measurement. Before and after that, the corresponding wires may be used by other qubits. In this work, we propose a solution which exploits this fact in order to optimize the design of quantum circuits with respect to the required wires. To this end, we introduce a representation of the lifetimes of all qubits which is used to analyze the respective need for wires. Based on this analysis, a method is proposed which 'recycles' the available wires and, by this, reduces the size of the resulting circuit. Experimental evaluations based on established reversible and fault-tolerant quantum circuits confirm that the proposed solution reduces the number of wires by more than 90% compared to unoptimized quantum circuits.
Percival, J & McGregor, C 2016, 'An Evaluation of Understandability of Patient Journey Models in Mental Health', JMIR Human Factors, vol. 3, no. 2, pp. e20-e20.
View/Download from: Publisher's site
View description>>
BACKGROUND: There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. OBJECTIVES: This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. METHOD: Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. RESULTS: The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. CONCLUSIONS: The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on exte...
Pietroni, N, Puppo, E, Marcias, G, Roberto, R & Cignoni, P 2016, 'Tracing Field-Coherent Quad Layouts', Computer Graphics Forum, vol. 35, pp. 485-496.
View/Download from: Publisher's site
Pileggi, SF 2016, 'Is Big Data the New "God" on Earth? [Opinion]', IEEE Technology and Society Magazine, vol. 35, no. 1, pp. 18-20.
View/Download from: Publisher's site
Polhill, JG, Filatova, T, Schlüter, M & Voinov, A 2016, 'Modelling systemic change in coupled socio-environmental systems', Environmental Modelling & Software, vol. 75, pp. 318-332.
View/Download from: Publisher's site
Polhill, JG, Filatova, T, Schlüter, M & Voinov, A 2016, 'Preface to the thematic issue on modelling systemic change in coupled socio-environmental systems', Environmental Modelling & Software, vol. 75, pp. 317-317.
View/Download from: Publisher's site
Pratama, M, Lu, J & Zhang, G 2016, 'Evolving Type-2 Fuzzy Classifier', IEEE Transactions on Fuzzy Systems, vol. 24, no. 3, pp. 574-589.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Evolving fuzzy classifiers (EFCs) have achieved immense success in dealing with nonstationary data streams because of their flexible characteristics. Nonetheless, most real-world data streams feature highly uncertain characteristics, which cannot be handled by the type-1 EFC. A novel interval type-2 fuzzy classifier, namely evolving type-2 classifier (eT2Class), is proposed in this paper, which constructs an evolving working principle in the framework of interval type-2 fuzzy system. The eT2Class commences its learning process from scratch with an empty or initially trained rule base, and its fuzzy rules can be automatically grown, pruned, recalled, and merged on the fly referring to summarization power and generalization power of data streams. In addition, the eT2Class is driven by a generalized interval type-2 fuzzy rule, where the premise part is composed of the multivariate Gaussian function with an uncertain nondiagonal covariance matrix, while employing a subset of the nonlinear Chebyshev polynomial as the rule consequents. The efficacy of the eT2Class has been rigorously assessed by numerous real-world and artificial study cases, benchmarked against state-of-the-art classifiers, and validated through various statistical tests. Our numerical results demonstrate that the eT2Class produces more reliable classification rates, while retaining more compact and parsimonious rule base than state-of-the-art EFCs recently published in the literature.
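The interval type-2 machinery above rests on membership functions whose parameters are themselves uncertain. As a much-reduced stand-in for the paper's multivariate Gaussian with an uncertain non-diagonal covariance, the sketch below shows a univariate Gaussian with an uncertain width, returning the lower and upper membership grades that bound the footprint of uncertainty; all parameter values are assumptions.

```python
import numpy as np

def it2_gaussian(x, mean, sigma_lower, sigma_upper):
    """Interval type-2 Gaussian membership with an uncertain width:
    returns (lower, upper) membership grades bounding the footprint
    of uncertainty. Requires sigma_lower < sigma_upper."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lower) ** 2)
    upper = np.exp(-0.5 * ((x - mean) / sigma_upper) ** 2)
    return lower, upper  # narrower sigma => lower grade off-center

lo, up = it2_gaussian(np.linspace(-3, 3, 7), mean=0.0,
                      sigma_lower=0.8, sigma_upper=1.2)
print(np.all(lo <= up))  # True: the interval is well-formed
```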
Pratama, M, Lu, J, Lughofer, E, Zhang, G & Anavatti, S 2016, 'Scaffolding type-2 classifier for incremental learning under concept drifts', Neurocomputing, vol. 191, pp. 304-329.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. The proposal of a meta-cognitive learning machine that embodies the three pillars of human learning: what-to-learn, how-to-learn, and when-to-learn, has enriched the landscape of evolving systems. The majority of meta-cognitive learning machines in the literature do not, however, exhibit a plug-and-play working principle, and thus require supplementary learning modules for pre- or post-processing. In addition, they still rely on the type-1 neuron, which has problems handling uncertainty. This paper proposes the Scaffolding Type-2 Classifier (ST2Class). ST2Class is a novel meta-cognitive scaffolding classifier that operates completely in local and incremental learning modes. It is built upon a multivariable interval type-2 Fuzzy Neural Network (FNN) which is driven by a multivariate Gaussian function in the hidden layer and a non-linear wavelet polynomial in the output layer. The what-to-learn module is created by virtue of a novel active learning scenario termed the uncertainty measure; the how-to-learn module is based on the renowned Schema and Scaffolding theories; and the when-to-learn module uses a standard sample reserved strategy. The viability of ST2Class is numerically benchmarked against state-of-the-art classifiers in 12 data streams, and is statistically validated by thorough statistical tests, in which it achieves high accuracy while retaining low complexity.
Pratama, M, Zhang, G, Er, MJ & Anavatti, S 2016, 'An Incremental Type-2 Meta-Cognitive Extreme Learning Machine', IEEE Transactions on Cybernetics, vol. 47, no. 2, pp. 1-15.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with these four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.
Ramaprasad, A, Win, KT, Syn, T, Beydoun, G & Dawson, L 2016, 'Australia's National Health Programs: An Ontological Mapping', Australasian Journal of Information Systems, vol. 20, pp. 1-21.
View/Download from: Publisher's site
View description>>
Australia has a large number of health program initiatives whose comprehensive assessment will help refine and redefine priorities by highlighting areas of emphasis, under-emphasis, and non-emphasis. The objectives of our research are to: (a) systematically map all the programs onto an ontological framework, and (b) systemically analyse their relative emphases at different levels of granularity. We mapped all the health program initiatives onto an ontology with five dimensions, namely: (a) Policy-scope, (b) Policy-focus, (c) Outcomes, (d) Type of care, and (e) Population served. Each dimension is expanded into a taxonomy of its constituent elements. Each combination of elements from the five dimensions is a possible policy initiative component. There are 30,030 possible components encapsulated in the ontology. It includes, for example: (a) National financial policies on accessibility of preventive care for family, and (b) Local-urban regulatory policies on cost of palliative care for individual-aged. Four of the authors mapped all of Australia's health programs and initiatives on to the ontology. Visualizations of the data are used to highlight the relative emphases in the program initiatives. The dominant emphasis of the program initiatives is: [National] [educational, personnel-physician, information] policies on [accessibility, quality] of [preventive, wellness] care for the [community]. However, although (a) information is emphasized, technology is not, and (b) while accessibility and quality are emphasized, cost and satisfaction are not. The ontology and the results of the mapping can help systematically reassess and redirect the relative emphases of the programs and initiatives from a systemic perspective.
Salvador, MM, Budka, M & Gabrys, B 2016, 'Effects of Change Propagation Resulting from Adaptive Preprocessing in Multicomponent Predictive Systems', Procedia Computer Science, vol. 96, pp. 713-722.
View/Download from: Publisher's site
View description>>
Predictive modelling is a complex process that requires a number of steps to transform raw data into predictions. Preprocessing of the input data is a key step in such a process, and the selection of proper preprocessing methods is often a labour-intensive task. Such methods are usually trained offline and their parameters remain fixed during the whole model deployment lifetime. However, preprocessing of non-stationary data streams is more challenging, since the lack of adaptation of such preprocessing methods may degrade system performance. In addition, dependencies between different predictive system components make the adaptation process more challenging. In this paper we discuss the effects of change propagation resulting from using adaptive preprocessing in a Multicomponent Predictive System (MCPS). To highlight various issues we present four scenarios with different levels of adaptation. A number of experiments have been performed with a range of datasets to compare the prediction error in all four scenarios. Results show that well managed adaptation considerably improves the prediction performance. However, the model can become inconsistent if adaptation in one component is not correctly propagated throughout the rest of the system components. Sometimes, such inconsistency may not cause an obvious deterioration in the system performance, and is therefore difficult to detect. In some other cases it may even lead to a system failure, as was observed in our experiments.
Sanders, YR, Wallman, JJ & Sanders, BC 2016, 'Bounding quantum gate error rate based on reported average fidelity', New Journal of Physics, vol. 18, no. 1, pp. 012002-012002.
View/Download from: Publisher's site
View description>>
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
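The paper's tight worst-case bound is not reproduced here, but the first step of any such analysis, converting a reported average gate fidelity into a process infidelity via the standard relation F_avg = (d·F_pro + 1)/(d + 1), is easy to sketch:

```python
def process_infidelity(avg_fidelity, n_qubits):
    """Convert average gate fidelity to process infidelity using the
    standard relation F_avg = (d * F_pro + 1) / (d + 1), d = 2**n.
    This is not the paper's worst-case error-rate bound, only the
    first conversion step such an analysis relies on."""
    d = 2 ** n_qubits
    f_pro = ((d + 1) * avg_fidelity - 1) / d
    return 1 - f_pro

# The fidelities quoted in the abstract above
print(process_infidelity(0.999, 1))  # single-qubit gate, ~1.5e-3
print(process_infidelity(0.99, 2))   # two-qubit gate,   ~1.25e-2
```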
Shen, S, Huang, L, Liu, J, Champion, A, Yu, S & Cao, Q 2016, 'Reliability Evaluation for Clustered WSNs under Malware Propagation', Sensors, vol. 16, no. 6, pp. 855-855.
View/Download from: Publisher's site
View description>>
We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.
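The parallel-serial-parallel composition mentioned above follows directly from classical reliability theory: a cluster survives if any of its nodes survives, a route survives only if every cluster on it survives, and the WSN survives if any route to the sink survives. A small sketch with invented node reliabilities (which in the paper would be derived from each node's MTTF):

```python
import numpy as np

def cluster_reliability(node_r):
    """Nodes in a cluster are in parallel: it fails only if all fail."""
    return 1 - np.prod(1 - np.asarray(node_r))

def route_reliability(cluster_rs):
    """Clusters along a route are in series: all must work."""
    return float(np.prod(cluster_rs))

def wsn_reliability(route_rs):
    """Routes to the sink are in parallel."""
    return 1 - np.prod(1 - np.asarray(route_rs))

# Two routes, each a chain of clusters with a few sensor nodes each
routes = [[[0.9, 0.8], [0.85, 0.9, 0.7], [0.95]],
          [[0.8, 0.8], [0.9], [0.85, 0.75]]]
r = wsn_reliability([route_reliability([cluster_reliability(c) for c in rt])
                     for rt in routes])
print(round(r, 4))
```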
Singh, J, Prasad, M, Prasad, OK, Er, MJ, Saxena, AK & Lin, C-T 2016, 'A Novel Fuzzy Logic Model for Pseudo-Relevance Feedback-Based Query Expansion', International Journal of Fuzzy Systems, vol. 18, no. 6, pp. 980-989.
View/Download from: Publisher's site
View description>>
© 2016, Taiwan Fuzzy Systems Association and Springer-Verlag Berlin Heidelberg. In this paper, a novel fuzzy logic-based expansion approach considering the relevance scores produced by different rank aggregation approaches is proposed. It is well known that different rank aggregation approaches yield different relevance scores for each term. The proposed fuzzy logic approach combines the different weights of each term by using fuzzy rules to infer the weights of the additional query terms. Experimental results demonstrate that the proposed approach achieves significant improvement over individual expansion, aggregated and other related state-of-the-art methods.
Sood, K, Yu, S & Xiang, Y 2016, 'Performance Analysis of Software-Defined Network Switch Using M/Geo/1 Model', IEEE Communications Letters, vol. 20, no. 12, pp. 2522-2525.
View/Download from: Publisher's site
View description>>
The aim of this letter is to propose an analytical model to study the performance of software-defined network (SDN) switches. Here, SDN switch performance is defined as the time that an SDN switch needs to process a packet without the interaction of the controller. We exploit the capabilities of a queueing theory-based M/Geo/1 model to analyze the key factors: flow-table size, packet arrival rate, number of rules, and position of rules. The analytical model is validated using extensive simulations. This letter reveals that these factors have a significant influence on the performance of an SDN switch.
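The letter's closed-form M/Geo/1 analysis is not reproduced here; instead, a slot-based simulation conveys the same model: Poisson packet arrivals per slot and geometrically distributed service times, with the mean sojourn time recovered through Little's law. The arrival rate and per-slot completion probability are placeholders (in the letter, flow-table size and rule position would determine the service behaviour).

```python
import numpy as np

def mgeo1_mean_sojourn(lam, p_done, slots=200_000, seed=1):
    """Slot-based M/Geo/1 simulation: Poisson(lam) arrivals per slot;
    the head-of-line packet finishes each slot with probability p_done.
    Mean sojourn time via Little's law: E[T] = E[N] / lam."""
    rng = np.random.default_rng(seed)
    queue, area = 0, 0.0
    for _ in range(slots):
        queue += rng.poisson(lam)
        if queue > 0 and rng.random() < p_done:
            queue -= 1          # one service completion this slot
        area += queue           # accumulate queue length for E[N]
    return (area / slots) / lam

# A longer flow table, or a rule deeper in it, would lower p_done.
print(mgeo1_mean_sojourn(lam=0.3, p_done=0.5))  # utilisation = 0.6
```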
Sood, K, Yu, S & Xiang, Y 2016, 'Software-Defined Wireless Networking Opportunities and Challenges for Internet-of-Things: A Review', IEEE Internet of Things Journal, vol. 3, no. 4, pp. 453-463.
View/Download from: Publisher's site
View description>>
With the emergence of the Internet-of-Things (IoT), there is now growing interest in simplifying wireless network controls. This is a very challenging task, comprising information acquisition, information analysis, decision-making, and action implementation on large-scale IoT networks. This has resulted in research exploring the integration of software-defined networking (SDN) and IoT for simpler, easier, and less strained network control. SDN is a promising novel paradigm shift which has the capability to enable a simplified and robust programmable wireless network serving an array of physical objects and applications. This paper starts with the emergence of SDN and then highlights recent significant developments in the wireless and optical domains with the aim of integrating SDN and IoT. Challenges in SDN and IoT integration are also discussed from both security and scalability perspectives.
Sood, K, Yu, S, Xiang, Y & Cheng, H 2016, 'A General QoS Aware Flow-Balancing and Resource Management Scheme in Distributed Software-Defined Networks', IEEE Access, vol. 4, pp. 7176-7185.
View/Download from: Publisher's site
View description>>
Due to the limited service capabilities of centralized controllers, it is difficult to process a high volume of flows within a reasonable time. This particularly degrades the strict quality of service (QoS) requirements of interactive media applications, which is a non-negligible factor. To alleviate this concern, distributed deployments of software-defined network (SDN) controllers are inevitable and have gained a predominant position. However, to maintain application-specific QoS requirements, the number of resources used in the network directly impacts capital and operational expenditure. Hence, in distributed SDN architectures, issues such as flow arrival rate, required resources, and operational cost have significant mutual dependencies on each other. Therefore, it is essential to research feasible methods to maintain QoS and minimize resource provisioning costs. Motivated by this, we propose a solution for distributed SDN architectures that provides flow-balancing (with guaranteed QoS) in the pro-active operations of SDN controllers and attempts to optimize instance resource provisioning costs. We validate our solution using the tools of queuing theory. Our studies indicate that with our solution, a network with minimum resources and affordable cost, with guaranteed application QoS, can be set up.
Sun, F, Liu, B, Hou, F, Zhou, H, Chen, J, Rui, Y & Gui, L 2016, 'A QoE centric distributed caching approach for vehicular video streaming in cellular networks', Wireless Communications and Mobile Computing, vol. 16, no. 12, pp. 1612-1624.
View/Download from: Publisher's site
View description>>
Distributed caching-empowered wireless networks can greatly improve the efficiency of data storage and transmission and thereby the users' quality of experience (QoE). However, how this technology can alleviate the network access pressure while ensuring the consistency of content delivery is still an open question, especially in the case where the users are in fast motion. Therefore, in this paper, we investigate the caching issue emerging from a forthcoming scenario where vehicular video streaming is performed under cellular networks. Specifically, a QoE centric distributed caching approach is proposed to fulfill as many users' requests as possible, considering the limited caching space of base stations and basic user experience guarantee. Firstly, a QoE evaluation model is established using verified empirical data. Also, the mathematic relationship between the streaming bit rate and actual storage space is developed. Then, the distributed caching management for vehicular video streaming is formulated as a constrained optimization problem and solved with the generalized-reduced gradient method. Simulation results indicate that our approach can improve the users' satisfaction ratio by up to 40%. Copyright © 2015 John Wiley & Sons, Ltd.
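The paper formulates cache management as a constrained optimization problem solved with the generalized reduced gradient method. As a loose stand-in, the sketch below uses SciPy's SLSQP solver to pick per-video cached bitrates that maximize a demand-weighted logarithmic QoE proxy under a storage budget; the demand figures, the log QoE curve and the bounds are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

demand = np.array([120.0, 80.0, 40.0])  # hypothetical requests/hour per video
capacity = 12.0                         # total cache budget, in bitrate units

def neg_total_qoe(rates):
    # Demand-weighted logarithmic bitrate->QoE proxy (an assumption)
    return -np.sum(demand * np.log1p(rates))

res = minimize(
    neg_total_qoe,
    x0=np.full(3, 1.0),
    method="SLSQP",                      # gradient-based constrained solver
    bounds=[(0.5, 8.0)] * 3,             # 0.5 = basic-experience floor
    constraints=[{"type": "ineq", "fun": lambda r: capacity - r.sum()}],
)
print(res.x.round(2), "total QoE:", round(-res.fun, 1))
```

As expected for a concave objective, the solver allocates higher bitrates to the videos with higher demand until the budget binds.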
Sun, L, Ma, J, Zhang, Y, Dong, H & Hussain, FK 2016, 'Cloud-FuSeR: Fuzzy ontology and MCDM based cloud service selection', Future Generation Computer Systems, vol. 57, pp. 42-55.
View/Download from: Publisher's site
Tian, F, Liu, B, Cai, H, Zhou, H & Gui, L 2016, 'Practical Asynchronous Neighbor Discovery in Ad Hoc Networks With Directional Antennas', IEEE Transactions on Vehicular Technology, vol. 65, no. 5, pp. 3614-3627.
View/Download from: Publisher's site
View description>>
Neighbor discovery is a crucial step in the initialization of wireless ad hoc networks. When directional antennas are used, this process becomes more challenging since two neighboring nodes must be in transmit and receive states, respectively, pointing their antennas to each other simultaneously. Most of the proposed neighbor discovery algorithms only consider the synchronous system and cannot work efficiently in the asynchronous environment. However, asynchronous neighbor discovery algorithms are more practical and offer many potential advantages. In this paper, we first analyze a one-way handshake-based asynchronous neighbor discovery algorithm by introducing a mathematical model named 'Problem of Coloring Balls.' Then, we extend it to a hybrid asynchronous algorithm that leads to a 24.4% decrease in the expected time of neighbor discovery. Compared with the synchronous algorithms, the asynchronous algorithms require approximately twice the time to complete the neighbor discovery process. Our proposed hybrid asynchronous algorithm performs better than both the two-way synchronous algorithm and the two-way asynchronous algorithm. We validate the practicality of our proposed asynchronous algorithms by OPNET simulations.
Tian, F, Liu, B, Zhou, H, Rui, Y, Chen, J, Xiong, J & Gui, L 2016, 'Caching algorithms for broadcasting and multicasting in disruption tolerant networks', Wireless Communications and Mobile Computing, vol. 16, no. 18, pp. 3377-3390.
View/Download from: Publisher's site
View description>>
In delay and disruption tolerant networks, the contacts among nodes are intermittent. Because of the importance of data access, providing efficient data access is the ultimate aim of analyzing and exploiting disruption tolerant networks. Caching is widely proved to be able to improve data access performance. In this paper, we consider caching schemes for broadcasting and multicasting to improve the performance of data access. First, we propose a caching algorithm for broadcasting, which selects the community central nodes as relays from both network structure and social network perspectives. Then, we accommodate the caching algorithm for multicasting by considering the data query pattern. Extensive trace-driven simulations are conducted to investigate the essential difference between the caching algorithms for broadcasting and multicasting and evaluate the performance of these algorithms. Copyright © 2016 John Wiley & Sons, Ltd.
Tonelli, D, Pietroni, N, Puppo, E, Froli, M, Cignoni, P, Amendola, G & Scopigno, R 2016, 'Stability of Statics Aware Voronoi Grid-Shells', Engineering Structures, vol. 116, pp. 70-82.
View/Download from: Publisher's site
View description>>
Grid-shells are lightweight structures used to cover long spans with little load-bearing material, as they excel in lightness, elegance and transparency. In this paper we analyze the stability of hex-dominant free-form grid-shells, generated with the Statics Aware Voronoi Remeshing scheme introduced in Pietroni et al. (2015). This is a novel hex-dominant, organic-like and non-uniform remeshing pattern that manages to take into account the statics of the underlying surface. We show how this pattern is particularly suitable for free-form grid-shells, providing good performance in terms of both aesthetics and structural behavior. To reach this goal, we select a set of four contemporary architectural surfaces and we establish a systematic comparative analysis between Statics Aware Voronoi Grid-Shells and equivalent state-of-the-art triangular and quadrilateral grid-shells. For each dataset and for each grid-shell topology, imperfection sensitivity analyses are carried out and the worst response diagrams compared. It turns out that, in spite of the intrinsic weakness of the hexagonal topology, free-form Statics Aware Voronoi Grid-Shells are much more effective than their state-of-the-art quadrilateral counterparts.
Torchia, J, Golbourn, B, Feng, S, Ho, KC, Sin-Chan, P, Vasiljevic, A, Norman, JD, Guilhamon, P, Garzia, L, Agamez, NR, Lu, M, Chan, TS, Picard, D, de Antonellis, P, Khuong-Quang, D-A, Planello, AC, Zeller, C, Barsyte-Lovejoy, D, Lafay-Cousin, L, Letourneau, L, Bourgey, M, Yu, M, Gendoo, DMA, Dzamba, M, Barszczyk, M, Medina, T, Riemenschneider, AN, Morrissy, AS, Ra, Y-S, Ramaswamy, V, Remke, M, Dunham, CP, Yip, S, Ng, H-K, Lu, J-Q, Mehta, V, Albrecht, S, Pimentel, J, Chan, JA, Somers, GR, Faria, CC, Roque, L, Fouladi, M, Hoffman, LM, Moore, AS, Wang, Y, Choi, SA, Hansford, JR, Catchpoole, D, Birks, DK, Foreman, NK, Strother, D, Klekner, A, Bognár, L, Garami, M, Hauser, P, Hortobágyi, T, Wilson, B, Hukin, J, Carret, A-S, Van Meter, TE, Hwang, EI, Gajjar, A, Chiou, S-H, Nakamura, H, Toledano, H, Fried, I, Fults, D, Wataya, T, Fryer, C, Eisenstat, DD, Scheinemann, K, Fleming, AJ, Johnston, DL, Michaud, J, Zelcer, S, Hammond, R, Afzal, S, Ramsay, DA, Sirachainan, N, Hongeng, S, Larbcharoensub, N, Grundy, RG, Lulla, RR, Fangusaro, JR, Druker, H, Bartels, U, Grant, R, Malkin, D, McGlade, CJ, Nicolaides, T, Tihan, T, Phillips, J, Majewski, J, Montpetit, A, Bourque, G, Bader, GD, Reddy, AT, Gillespie, GY, Warmuth-Metz, M, Rutkowski, S, Tabori, U, Lupien, M, Brudno, M, Schüller, U, Pietsch, T, Judkins, AR, Hawkins, CE, Bouffet, E, Kim, S-K, Dirks, PB, Taylor, MD, Erdreich-Epstein, A, Arrowsmith, CH, De Carvalho, DD, Rutka, JT, Jabado, N & Huang, A 2016, 'Integrated (epi)-Genomic Analyses Identify Subgroup-Specific Therapeutic Targets in CNS Rhabdoid Tumors', Cancer Cell, vol. 30, no. 6, pp. 891-908.
View/Download from: Publisher's site
View description>>
We recently reported that atypical teratoid rhabdoid tumors (ATRTs) comprise at least two transcriptional subtypes with different clinical outcomes; however, the mechanisms underlying therapeutic heterogeneity remained unclear. In this study, we analyzed 191 primary ATRTs and 10 ATRT cell lines to define the genomic and epigenomic landscape of ATRTs and identify subgroup-specific therapeutic targets. We found ATRTs segregated into three epigenetic subgroups with distinct genomic profiles, SMARCB1 genotypes, and chromatin landscape that correlated with differential cellular responses to a panel of signaling and epigenetic inhibitors. Significantly, we discovered that differential methylation of a PDGFRB-associated enhancer confers specific sensitivity of group 2 ATRT cells to dasatinib and nilotinib, and suggest that these are promising therapies for this highly lethal ATRT subtype.
Turner, KG, Anderson, S, Gonzales-Chang, M, Costanza, R, Courville, S, Dalgaard, T, Dominati, E, Kubiszewski, I, Ogilvy, S, Porfirio, L, Ratna, N, Sandhu, H, Sutton, PC, Svenning, J-C, Turner, GM, Varennes, Y-D, Voinov, A & Wratten, S 2016, 'A review of methods, data, and models to assess changes in the value of ecosystem services from land degradation and restoration', Ecological Modelling, vol. 319, pp. 190-207.
View/Download from: Publisher's site
Valenzuela-Fernández, L, Nicolas, C, Gil-Lafuente, J & Merigó, JM 2016, 'Fuzzy indicators for customer retention', International Journal of Engineering Business Management, vol. 8, pp. 184797901667052-184797901667052.
View/Download from: Publisher's site
View description>>
It is widely known that market orientation (MO) and customer value help companies achieve sustainable sales growth over time. Nevertheless, one cannot ignore the existence of a gap in how to measure this relationship. Following this idea, this study proposes six fuzzy key performance indicators that aim to measure the retention and loyalty of the customer portfolio. The study draws on a sample of 300 sales executives. This exploratory study concludes that indicators such as MO, customer orientation (CO), the degree of CO value of the sales force, innovation capability, lifetime value, and customer service quality positively influence customer retention and portfolio loyalty.
Vaughan, N & Gabrys, B 2016, 'Comparing and Combining Time Series Trajectories Using Dynamic Time Warping', Procedia Computer Science, vol. 96, pp. 465-474.
View/Download from: Publisher's site
View description>>
This research proposes the application of the dynamic time warping (DTW) algorithm to analyse multivariate data from virtual reality training simulators, to assess the skill level of trainees. We present results of the DTW algorithm applied to trajectory data from a virtual reality haptic training simulator for epidural needle insertion. The proposed application of the DTW algorithm serves two purposes: it enables (i) two trajectories to be compared via a similarity measure, and (ii) two or more trajectories to be combined to produce a typical or representative average trajectory using a novel hierarchical DTW process. Our experiments included 100 expert and 100 novice simulator recordings. The data consist of multivariate time series data-streams including multi-dimensional trajectories combined with force and pressure measurements. Our results show that our proposed application of DTW provides a useful time-independent method for (i) comparing two trajectories by providing a similarity measure and (ii) combining two or more trajectories into one, showing higher performance compared to conventional methods such as the linear mean. These results demonstrate that DTW can provide a useful component of an automated scoring and assessment feedback system within virtual reality training simulators.
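For reference, the core DTW recurrence used as the similarity measure in (i) is compact enough to state directly. This is a textbook implementation, not the authors' code; it accepts multivariate trajectories with rows as time steps, and the two toy trajectories are invented.

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two trajectories
    (rows = time steps, columns = coordinates/force/pressure)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Cheapest way to reach (i, j): match, insert, or delete
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

expert = np.array([[0, 0], [1, 2], [2, 3], [3, 3]], dtype=float)
novice = np.array([[0, 0], [0.5, 1], [1, 2], [2, 2.5], [3, 3]], dtype=float)
print(dtw(expert, novice))  # smaller = more similar to the expert path
```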
Vaughan, N, Gabrys, B & Dubey, VN 2016, 'An overview of self-adaptive technologies within virtual reality training', Computer Science Review, vol. 22, pp. 65-87.
View/Download from: Publisher's site
View description>>
This overview presents the current state-of-the-art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robotic assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and virtual artefact tactile interaction from either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific individual adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels to match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One of the conclusions of this research is that an enhanced portable framework, which does not yet exist, is needed: it would be beneficial to combine the automation of the core technologies into a reusable automation framework for VR training.
Voinov, A, Kolagani, N & McCall, MK 2016, 'Preface to this Virtual Thematic Issue: Modelling with Stakeholders II', Environmental Modelling & Software, vol. 79, pp. 153-155.
View/Download from: Publisher's site
Voinov, A, Kolagani, N, McCall, MK, Glynn, PD, Kragt, ME, Ostermann, FO, Pierce, SA & Ramu, P 2016, 'Modelling with stakeholders – Next generation', Environmental Modelling & Software, vol. 77, pp. 196-220.
View/Download from: Publisher's site
Wang, W, Jiao, P, He, D, Jin, D, Pan, L & Gabrys, B 2016, 'Autonomous overlapping community detection in temporal networks: A dynamic Bayesian nonnegative matrix factorization approach', Knowledge-Based Systems, vol. 110, pp. 121-134.
View/Download from: Publisher's site
View description>>
A wide variety of natural or artificial systems can be modeled as time-varying or temporal networks. To understand the structural and functional properties of these time-varying networked systems, it is desirable to detect and analyze the evolving community structure. In temporal networks, the identified communities should reflect the current snapshot network, and at the same time be similar to the communities identified in history or say the previous snapshot networks. Most of the existing approaches assume that the number of communities is known or can be obtained by some heuristic methods. This is unsuitable and complicated for most real world networks, especially temporal networks. In this paper, we propose a Bayesian probabilistic model, named Dynamic Bayesian Nonnegative Matrix Factorization (DBNMF), for automatic detection of overlapping communities in temporal networks. Our model can not only give the overlapping community structure based on the probabilistic memberships of nodes in each snapshot network but also automatically determines the number of communities in each snapshot network based on automatic relevance determination. Thereafter, a gradient descent algorithm is proposed to optimize the objective function of our DBNMF model. The experimental results using both synthetic datasets and real-world temporal networks demonstrate that the DBNMF model has superior performance compared with two widely used methods, especially when the number of communities is unknown and when the network is highly sparse.
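The full DBNMF model, with its dynamic prior and automatic relevance determination, is beyond a short sketch, but its static core, factorizing a snapshot adjacency matrix and reading soft community memberships off the factor matrix, can be illustrated with scikit-learn. The toy graph, the fixed number of communities and the 0.7 overlap threshold are all assumptions here.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy snapshot: two 4-node communities sharing node 3 as an overlap.
A = np.array([[0, 1, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0, 0],
              [1, 1, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 0]], dtype=float)

# Factorize A ~ W H; rows of W act as community affinity vectors.
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(A)
membership = W / W.sum(axis=1, keepdims=True)   # soft memberships per node
overlapping = np.where(membership.max(axis=1) < 0.7)[0]
print(membership.round(2), "overlapping nodes:", overlapping)
```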
Wang, W, Zhang, G & Lu, J 2016, 'Member contribution-based group recommender system', Decision Support Systems, vol. 87, pp. 80-93.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. Developing group recommender systems (GRSs) is a vital requirement in many online service systems to provide recommendations in contexts in which a group of users are involved. Unfortunately, GRSs cannot be effectively supported using traditional individual recommendation techniques because it needs new models to reach an agreement to satisfy all the members of this group, given their conflicting preferences. Our goal is to generate recommendations by taking each group member's contribution into account through weighting members according to their degrees of importance. To achieve this goal, we first propose a member contribution score (MCS) model, which employs the separable non-negative matrix factorization technique on a group rating matrix, to analyze the degree of importance of each member. A Manhattan distance-based local average rating (MLA) model is then developed to refine predictions by addressing the fat tail problem. By integrating the MCS and MLA models, a member contribution-based group recommendation (MC-GR) approach is developed. Experiments show that our MC-GR approach achieves a significant improvement in the performance of group recommendations. Lastly, using the MC-GR approach, we develop a group recommender system called GroTo that can effectively recommend activities to web-based tourist groups.
Wu, J, Wang, J, Qin, S & Lu, H 2016, 'Suitable error evaluation criteria selection in the wind energy assessment via the K-means clustering algorithm', International Journal of Green Energy, vol. 13, no. 11, pp. 1145-1162.
View/Download from: Publisher's site
View description>>
© 2016 Taylor & Francis Group, LLC. In this paper, the wind energy potential of four locations in the Xinjiang region is assessed. The Weibull, Logistic, and Lognormal distributions are applied to describe the distributions of wind speed at different heights. To determine the parameters of the Weibull distribution, four intelligent parameter optimization approaches are employed: differential evolution, particle swarm optimization, and two hybrid approaches derived from, and combining the advantages of, these two algorithms. The optimal distribution is then chosen using the Chi-square error (CSE), the Kolmogorov-Smirnov test error (KSE), and the root mean square error (RMSE) criteria. However, the variation range of some criteria is quite large, so the criteria are analyzed and evaluated both through their anomalous values and by the K-means clustering method. Observation of the anomalies shows that the CSE is the first criterion that should be eliminated from the subsequent selection of the optimal distribution function. This finding is further confirmed by the K-means clustering algorithm, which places the CSE in a different cluster from the KSE and RMSE. Therefore, only the two remaining error evaluation criteria are used to evaluate the wind power potential.
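A hedged sketch of the workflow the abstract outlines, using scipy and scikit-learn: fit a Weibull distribution to (synthetic) wind speeds, compute CSE-, KSE- and RMSE-style criteria, and cluster them with K-means. The data and binning choices are assumptions for illustration:

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

wind = stats.weibull_min.rvs(c=2.0, scale=6.0, size=2000, random_state=1)

c, loc, scale = stats.weibull_min.fit(wind, floc=0)   # shape/scale estimate
dist = stats.weibull_min(c, loc=loc, scale=scale)

# Empirical vs fitted histogram for RMSE- and chi-square-style errors.
counts, edges = np.histogram(wind, bins=20, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
rmse = np.sqrt(np.mean((counts - dist.pdf(mids)) ** 2))
cse = np.sum((counts - dist.pdf(mids)) ** 2 / (dist.pdf(mids) + 1e-12))
kse = stats.kstest(wind, dist.cdf).statistic

# Cluster the criteria values; with real data an outlying criterion such
# as CSE would land in its own cluster, as the paper reports.
X = np.array([[cse], [kse], [rmse]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(["CSE", "KSE", "RMSE"], labels)))
```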
Wu, X, Liu, M, Dou, W & Yu, S 2016, 'DDoS attacks on data plane of software‐defined network: are they possible?', Security and Communication Networks, vol. 9, no. 18, pp. 5444-5459.
View/Download from: Publisher's site
View description>>
With software-defined networking (SDN) becoming the leading technology for large-scale networks, SDN is expected to suffer various types of distributed denial-of-service (DDoS) attacks because of its centralized control logic. However, almost all existing work concentrates on controller-overloading DDoS attacks, while the vulnerabilities that the SDN data plane exposes to DDoS attacks are largely ignored. In this paper, we first investigate a flow rule flooding DDoS attack. By thoroughly analyzing the flow table size and miss rate, we find that attackers are able to inflict significant performance degradation on the system with a limited volume of attack resources. We then prove that it is possible for attackers to maximize the performance degradation and minimize the attack rate at the same time. Besides the flooding DDoS attack, we also study a novel DDoS attack targeting the data plane of SDN. By exploiting the entry lifetime management mechanism of flow tables, this attack almost never exhibits intensive controller access behavior. It flies under the radar by inflicting no notable performance impact on the system, while creating a heavy long-term financial burden on the target application. Finally, we present a potential countermeasure for this stealthy DDoS attack. Through extensive experiments, we conclude that DDoS attacks targeting the data plane are possible. Copyright © 2016 John Wiley & Sons, Ltd.
Wu, X, Liu, M, Dou, W, Gao, L & Yu, S 2016, 'A scalable and automatic mechanism for resource allocation in self-organizing cloud', Peer-to-Peer Networking and Applications, vol. 9, no. 1, pp. 28-41.
View/Download from: Publisher's site
View description>>
Taking advantage of the huge potential of consumers' untapped computing power, self-organizing cloud is a novel computing paradigm in which consumers are able to contribute or sell their computing resources. Meanwhile, the host machines held by consumers are connected by a peer-to-peer (P2P) overlay network on the Internet. In this new architecture, due to the large and varying multitude of resources and prices, it is inefficient and tedious for consumers to select the proper resource manually. Thus, there is high demand for a scalable and automatic mechanism to accomplish resource allocation. In view of this challenge, this paper proposes two novel economic strategies based on mechanism design. Concretely, we apply the Modified Vickrey Auction (MVA) mechanism to the case where the resource is sufficient, and the Continuous Double Auction (CDA) mechanism when the resource is insufficient. We also prove that the aforementioned mechanisms are dominant-strategy incentive compatible. Finally, extensive experiments are conducted to verify the performance of the proposed strategies in terms of procurement cost and execution efficiency.
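For readers unfamiliar with the auction machinery, here is a minimal second-price (Vickrey) allocation, the building block behind an MVA-style mechanism for the sufficient-resource case; the bids and tie-breaking are invented for illustration:

```python
# Second-price auction: the highest bidder wins but pays the second-highest
# bid, which makes truthful bidding a dominant strategy.
def vickrey_winner(bids):
    """bids: dict bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

print(vickrey_winner({"consumer_a": 7.0, "consumer_b": 5.5, "consumer_c": 6.2}))
# -> ('consumer_a', 6.2): the winner pays the second-highest bid
```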
Xiao, L, Shao, W, Wang, C, Zhang, K & Lu, H 2016, 'Research and application of a hybrid model based on multi-objective optimization for electrical load forecasting', Applied Energy, vol. 180, pp. 213-233.
View/Download from: Publisher's site
Xu, G, Fu, B & Gu, Y 2016, 'Point-of-Interest Recommendations via a Supervised Random Walk Algorithm', IEEE Intelligent Systems, vol. 31, no. 1, pp. 15-23.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Recently, location-based social networks (LBSNs) such as Foursquare and Whrrl have emerged as a new application that lets users establish personal social networks and review various points of interest (POIs), triggering a new recommendation service aimed at helping users locate preferred POIs. Although users' check-in activities can be treated as explicit user ratings and used directly for collaborative filtering-based recommendations, such solutions do not differentiate the sentiment of the reviews accompanying check-ins, resulting in unsatisfactory recommendations. This article proposes a new POI recommendation framework that simultaneously incorporates user check-ins and reviews, along with side information, into a tripartite graph and predicts personalized POI recommendations via a sentiment-supervised random walk algorithm. Experiments conducted on real data demonstrate the superiority of this approach in comparison with state-of-the-art techniques.
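A rough sketch of a random walk with restart over a small tripartite (user, POI, term) graph; the adjacency weights are toy values, and the paper's algorithm additionally supervises the walk with review sentiment:

```python
import numpy as np

# Nodes 0-1: users, 2-4: POIs, 5-6: review terms; edges from check-ins/reviews.
A = np.array([
    [0,0, 1,1,0, 0,0],
    [0,0, 0,1,1, 0,0],
    [1,0, 0,0,0, 1,0],
    [1,1, 0,0,0, 1,1],
    [0,1, 0,0,0, 0,1],
    [0,0, 1,1,0, 0,0],
    [0,0, 0,1,1, 0,0],
], float)
P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transitions

def rwr(P, seed, alpha=0.15, iters=100):
    r = np.zeros(P.shape[0]); r[seed] = 1.0
    e = r.copy()
    for _ in range(iters):
        r = (1 - alpha) * (P.T @ r) + alpha * e   # restart at the query user
    return r

scores = rwr(P, seed=0)
poi_scores = scores[2:5]
print("POI ranking for user 0:", np.argsort(-poi_scores) + 2)
```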
Xuan, J, Luo, X, Zhang, G, Lu, J & Xu, Z 2016, 'Uncertainty Analysis for the Keyword System of Web Events', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 6, pp. 829-842.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Webpage recommendations for hot Web events can help people easily follow the evolution of these events. At the same time, there are different levels of semantic uncertainty underlying the mass of Webpages for a Web event, such as recapitulative information and detailed information. Grasping the semantic uncertainty of Web events could therefore improve satisfaction with Webpage recommendations. However, traditional hit-rate-based or clustering-based Webpage recommendation methods have overlooked these different levels of semantic uncertainty. In this paper, we propose a framework to identify the different underlying levels of semantic uncertainty of Web events and then utilize them for Webpage recommendations. Our idea is to consider a Web event as a system composed of different keywords, whose uncertainty is related to the uncertainty of the Web event itself. Based on a keyword association linked network representation of Web events and Shannon entropy, we identify the different levels of semantic uncertainty and construct a semantic pyramid (SP) to express the uncertainty hierarchy of a Web event. Finally, an SP-based Webpage recommendation system is developed. Experiments show that the proposed algorithm can capture the different levels of semantic uncertainty of Web events and can be applied to Webpage recommendations.
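The entropy computation at the heart of the framework is straightforward; a toy example with invented keyword frequencies (the paper derives them from a keyword association linked network):

```python
import math

def keyword_entropy(freqs):
    """Shannon entropy (in bits) of a keyword frequency distribution."""
    total = sum(freqs.values())
    probs = [f / total for f in freqs.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

recapitulative = {"earthquake": 40, "japan": 35, "tsunami": 25}
detailed = {"earthquake": 12, "japan": 10, "tsunami": 9, "magnitude": 8,
            "fukushima": 8, "evacuation": 7, "aftershock": 6}
print(f"coarse level entropy:   {keyword_entropy(recapitulative):.3f} bits")
print(f"detailed level entropy: {keyword_entropy(detailed):.3f} bits")
# Higher entropy suggests a more detailed (more uncertain) semantic level.
```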
Yi, X, Paulet, R, Bertino, E & Xu, G 2016, 'Private Cell Retrieval From Data Warehouses', IEEE Transactions on Information Forensics and Security, vol. 11, no. 6, pp. 1346-1361.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Publicly accessible data warehouses are an indispensable resource for data analysis. However, they also pose a significant risk to the privacy of clients, since a data warehouse operator may follow a client's queries and infer what the client is interested in. Private information retrieval (PIR) techniques allow a client to retrieve a cell from a data warehouse without revealing to the operator which cell is retrieved, and therefore protect the privacy of the client's queries. However, PIR cannot be used to hide the online analytical processing (OLAP) operations performed by the client, which may disclose the client's interest. This paper presents a solution for private cell retrieval from a data warehouse on the basis of the Paillier cryptosystem. With our solution, the client can privately perform OLAP operations on the data warehouse and retrieve one (or more) cells without revealing any information about which cells are selected. In addition, we propose a solution for private block download, also based on the Paillier cryptosystem, which allows the client to download an encrypted block from a data warehouse without revealing which block in a cloaking region is downloaded, improving the feasibility of our private cell retrieval. Our solutions ensure both the server's privacy and the client's privacy. Our experiments have shown that our solutions are practical.
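A toy Paillier implementation showing the additive homomorphism that such private retrieval protocols rely on; the primes are far too small for real use, and the paper's OLAP protocols are built on top of, not shown by, this property:

```python
import math

p, q = 61, 53                      # toy primes (never use sizes like this!)
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m, r):
    # Encrypt message m with randomness r coprime to n.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(7, 17), enc(35, 23)
assert dec((c1 * c2) % n2) == 42   # E(a) * E(b) decrypts to a + b (mod n)
print("homomorphic sum decrypts to:", dec((c1 * c2) % n2))
```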
Ying, M 2016, 'Introduction', Asian Women, vol. 28, no. 4, pp. 3-9.
View/Download from: Publisher's site
Yu, D, Li, D-F & Merigó, JM 2016, 'Dual hesitant fuzzy group decision making method and its application to supplier selection', International Journal of Machine Learning and Cybernetics, vol. 7, no. 5, pp. 819-831.
View/Download from: Publisher's site
View description>>
The concept of a dual hesitant fuzzy set, which arises from the hesitant fuzzy set, is generalized by including a function reflecting the decision maker's fuzziness about the non-membership degree of the information provided. This paper studies dual hesitant fuzzy information aggregation operators for aggregating dual hesitant fuzzy elements, such as the dual hesitant fuzzy Heronian mean operator and the dual hesitant fuzzy geometric Heronian mean operator. The resulting aggregation operators play an important role in group decision making (GDM) applications: they fuse the experts' opinions into a comprehensive one, on the basis of which an optimal decision-making scheme can be determined. The properties of the proposed operators are studied and their application to GDM is investigated. The effectiveness of the GDM method is demonstrated in a case study on supplier selection.
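A worked sketch of the (generalized) Heronian mean that underlies these operators, applied here to plain membership degrees; extending it to dual hesitant fuzzy elements aggregates sets of membership and non-membership values with the same pattern, and the scores and parameters below are assumptions:

```python
# Generalized Heronian mean: (2/(n(n+1)) * sum_{i<=j} a_i^p a_j^q)^(1/(p+q)).
def heronian_mean(values, p=1.0, q=1.0):
    n = len(values)
    total = sum(values[i] ** p * values[j] ** q
                for i in range(n) for j in range(i, n))
    return (2.0 * total / (n * (n + 1))) ** (1.0 / (p + q))

scores = [0.6, 0.8, 0.7]           # assumed expert membership degrees
print(f"aggregated score: {heronian_mean(scores):.4f}")
# The pairwise products over i <= j let the operator capture interrelations
# between the aggregated arguments, unlike a simple weighted average.
```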
Yu, D, Li, D-F, Merigó, JM & Fang, L 2016, 'Mapping development of linguistic decision making studies', Journal of Intelligent & Fuzzy Systems, vol. 30, no. 5, pp. 2727-2736.
View/Download from: Publisher's site
Yu, D, Merigó, JM & Xu, Y 2016, 'Group Decision Making in Information Systems Security Assessment Using Dual Hesitant Fuzzy Set', International Journal of Intelligent Systems, vol. 31, no. 8, pp. 786-812.
View/Download from: Publisher's site
View description>>
Network information system security has become a global issue, since it relates to economic development and national security. Information system security assessment plays an important role in the development of security solutions. To address this issue, a dual hesitant fuzzy (DHF) group decision-making (GDM) method is proposed in this paper to assist the assessment of network information system security. A systemic index covering four aspects is established: organization security, management security, technical security, and personnel management security. The DHF group evaluation matrix is constructed from the individual evaluation information of each expert. Several power average operator-based DHF information aggregation operators are proposed and used to fuse the performance of each criterion for information systems. The advantage of these operators is that they can quantitatively describe the relationships between the indexes. Finally, a case study on information systems security assessment is presented to verify the effectiveness of the proposed GDM method.
Yu, S 2016, 'Big Privacy: Challenges and Opportunities of Privacy Study in the Age of Big Data', IEEE Access, vol. 4, pp. 2751-2763.
View/Download from: Publisher's site
View description>>
One of the biggest concerns about big data is privacy. However, the study of big data privacy is still at a very early stage. We believe that forthcoming solutions and theories of big data privacy will be rooted in the existing research output of the privacy discipline. Motivated by these factors, we extensively survey the existing research outputs and achievements of the privacy field from both applied and theoretical angles, aiming to pave a solid starting ground for interested readers to address the challenges of the big data case. We first present an overview of the battleground by defining the roles and operations of privacy systems. Second, we review the milestones of the current two major research categories of privacy: data clustering and privacy frameworks. Third, we discuss privacy research from the perspectives of different disciplines. Fourth, the mathematical description, measurement, and modeling of privacy are presented. We summarize the challenges and opportunities of this promising topic at the end of the paper, hoping to shed light on this exciting and almost uncharted territory.
Yu, S & Liu, K 2016, 'Special Issue on Big Data from networking perspective', Big Data Research, vol. 3, pp. 1-1.
View/Download from: Publisher's site
Yu, S, Wang, C, Liu, K & Zomaya, AY 2016, 'Editorial for IEEE Access Special Section on Theoretical Foundations for Big Data Applications: Challenges and Opportunities', IEEE Access, vol. 4, pp. 5730-5732.
View/Download from: Publisher's site
Yu, S, Zhou, W, Guo, S & Guo, M 2016, 'A Feasible IP Traceback Framework through Dynamic Deterministic Packet Marking', IEEE Transactions on Computers, vol. 65, no. 5, pp. 1418-1427.
View/Download from: Publisher's site
View description>>
DDoS attack source traceback is an open and challenging problem. Deterministic packet marking (DPM) is a simple and effective traceback mechanism, but current DPM-based traceback schemes are not practical due to their scalability constraints. We observe that only a limited number of computers and routers are involved in an attack session, so we only need to mark these involved nodes for traceback purposes, rather than marking every node of the Internet as existing schemes do. Based on this finding, we propose a novel marking-on-demand (MOD) traceback scheme built on the DPM mechanism. To trace back to the involved attack sources, we only need to mark the involved ingress routers using the traditional DPM strategy. Similar to existing schemes, we require participating routers to install a traffic monitor. When a monitor notices a surge of suspicious network flows, it requests unique marks from a globally shared MOD server and marks the suspicious flows with them. At the same time, the MOD server records the marks and their related requesting IP addresses. Once a DDoS attack is confirmed, the victim can obtain the attack sources by querying the MOD server with the marks extracted from attack packets. Moreover, we use the marking space in a round-robin style, which essentially addresses the scalability problem of existing DPM-based traceback schemes. We establish a mathematical model for the proposed traceback scheme and thoroughly analyze the system. Theoretical analysis and extensive real-world data experiments demonstrate that the proposed traceback method is feasible and effective.
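A mock-up of the marking-on-demand bookkeeping described above: a shared server hands out marks round-robin from a finite space and records the requesting router, so a victim can later map a mark back to an ingress point. The class and field names, and the 16-bit mark space, are illustrative assumptions:

```python
from collections import deque

class ModServer:
    def __init__(self, space_size=2**16):
        self.free = deque(range(space_size))   # reusable mark space
        self.ledger = {}                       # mark -> requesting router IP

    def request_mark(self, router_ip):
        mark = self.free.popleft()             # round-robin reuse of marks
        self.ledger[mark] = router_ip
        return mark

    def release_mark(self, mark):
        self.ledger.pop(mark, None)
        self.free.append(mark)                 # back of the queue

    def trace(self, mark):
        return self.ledger.get(mark, "unknown")

server = ModServer()
m = server.request_mark("203.0.113.7")         # monitor flags suspicious flow
print("victim traces mark", m, "to ingress router", server.trace(m))
```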
Yu, Y-H, Chen, S-H, Chang, C-L, Lin, C-T, Hairston, W & Mrozek, R 2016, 'New Flexible Silicone-Based EEG Dry Sensor Material Compositions Exhibiting Improvements in Lifespan, Conductivity, and Reliability', Sensors, vol. 16, no. 11, pp. 1826-1826.
View/Download from: Publisher's site
View description>>
This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve their performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors' construction, including particle pull-out and ablation of the silver coating on the silica filler. The newly developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions, demonstrating improved environmental stability. Fabricated flat (forehead) and acicular (hairy-site) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications.
Yu, Y-H, Lu, S-W, Chuang, C-H, King, J-T, Chang, C-L, Chen, S-A, Chen, S-F & Lin, C-T 2016, 'An Inflatable and Wearable Wireless System for Making 32-Channel Electroencephalogram Measurements', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 24, no. 7, pp. 806-813.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Portable electroencephalography (EEG) devices have become critical for important research and have various applications, such as brain-computer interfaces (BCIs). Numerous recent investigations have focused on the development of dry sensors, but few concern the simultaneous attachment of high-density dry sensors to different regions of the scalp to receive quality EEG signals from hairy sites. An inflatable and wearable wireless 32-channel EEG device was designed, prototyped, and experimentally validated for making EEG measurements; it incorporates spring-loaded dry sensors and a novel gasbag design to solve the problem of interference by hair. The cap is ventilated and incorporates a circuit board and battery with a high-tolerance wireless (Bluetooth) protocol and low power consumption. The proposed system provides a 500/250 Hz sampling rate and 24-bit EEG data to meet BCI system data requirements. Experimental results prove that the proposed EEG system is effective in measuring audio event-related potentials, measuring visual event-related potentials, and rapid serial visual presentation. The results of this work demonstrate that the proposed EEG cap system performs well in making EEG measurements and is feasible for practical applications.
Zeng, D, Gu, L, Guo, S, Cheng, Z & Yu, S 2016, 'Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System', IEEE Transactions on Computers, vol. 65, no. 12, pp. 3702-3712.
View/Download from: Publisher's site
View description>>
Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog-computing-supported software-defined embedded system, where task images reside on a storage server while computations can be conducted on either the embedded device or a computation server. It is important to design an efficient task scheduling and resource management strategy that minimizes task completion time to promote the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload between a client device and computation servers, i.e., task scheduling; 2) how to place task images on storage servers, i.e., resource management; and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computational complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation-based studies.
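The paper formulates the joint problem as a mixed-integer nonlinear program; as a rough intuition pump only, here is a greedy assignment that charges each task its compute time plus an image-fetch cost, under an invented cost model that is not the paper's formulation:

```python
# Greedy stand-in for joint scheduling: assign each task to the option
# (local device or a computation server) with the smallest estimated finish
# time, counting compute time plus image-fetch I/O.
tasks = [{"id": 0, "cycles": 8.0}, {"id": 1, "cycles": 3.0}, {"id": 2, "cycles": 5.0}]
options = {
    "device":   {"speed": 1.0, "fetch": 0.0},   # image already local
    "server_a": {"speed": 4.0, "fetch": 1.5},   # must fetch image from storage
    "server_b": {"speed": 3.0, "fetch": 0.5},
}
load = {name: 0.0 for name in options}          # accumulated busy time

for t in sorted(tasks, key=lambda t: -t["cycles"]):
    def finish(name):
        o = options[name]
        return load[name] + o["fetch"] + t["cycles"] / o["speed"]
    best = min(options, key=finish)
    load[best] = finish(best)
    print(f"task {t['id']} -> {best} (done at {load[best]:.2f})")
```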
Zhang, A, Chen, J, Zhou, L & Yu, S 2016, 'Graph Theory-Based QoE-Driven Cooperation Stimulation for Content Dissemination in Device-to-Device Communication', IEEE Transactions on Emerging Topics in Computing, vol. 4, no. 4, pp. 556-567.
View/Download from: Publisher's site
View description>>
With multimedia dominating digital content, device-to-device communication has been proposed as a promising data offloading solution in the big data era. As quality of experience (QoE) is a major determining factor in the success of new multimedia applications, we propose a QoE-driven cooperative content dissemination (QeCS) scheme in this paper. In particular, all users predict the QoE of potential connections, characterized by the mean opinion score (MOS), and send the results to the content provider (CP). The CP then formulates a weighted directed graph according to the network topology and the MOS of each potential connection. To stimulate cooperation among users, the content dissemination mechanism is designed by seeking a one-factor of the weighted directed graph with maximum weight, thus achieving the maximum total user MOS. In addition, a debt mechanism is adopted to combat cheating attacks. Furthermore, we extend the proposed QeCS scheme by adding a constraint to the optimization problem to improve fairness. Extensive simulation results demonstrate that the proposed QeCS scheme achieves both efficiency and fairness, especially in large-scale, dense networks.
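Seeking a maximum-weight one-factor of the MOS-weighted directed graph can be approximated, for intuition, as a maximum-weight assignment problem; a sketch with scipy on an invented MOS matrix (not the paper's exact graph algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# mos[i][j]: predicted mean opinion score if user i transmits to user j.
mos = np.array([
    [0.0, 3.9, 2.1, 3.2],
    [3.5, 0.0, 4.2, 2.8],
    [2.2, 4.0, 0.0, 3.6],
    [3.0, 2.5, 3.7, 0.0],
])
# Each user gets exactly one outgoing and one incoming link: a one-factor.
rows, cols = linear_sum_assignment(mos, maximize=True)
pairs = [(i, j) for i, j in zip(rows, cols) if i != j]
print("sender -> receiver pairs:", pairs)
print("total MOS:", mos[rows, cols].sum())
```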
Zhang, G, Han, J & Lu, J 2016, 'Fuzzy Bi-level Decision-Making Techniques: A Survey', International Journal of Computational Intelligence Systems, vol. 9, pp. 25-34.
View/Download from: Publisher's site
View description>>
© 2016 the authors. Bi-level decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a bi-level hierarchy. A challenge in handling bi-level decision problems is that various uncertainties naturally appear in the decision-making process. Significant efforts have been devoted to showing that fuzzy set techniques can effectively deal with uncertain issues in bi-level decision-making, known as fuzzy bi-level decision-making techniques, and researchers have successfully gained experience in this area. It is thus vital that an instructive review of current trends in this area be conducted, covering not only theoretical research but also practical developments. This paper systematically reviews up-to-date fuzzy bi-level decision-making techniques, including models, approaches, algorithms and systems. It also clusters related technique developments into four main categories: basic fuzzy bi-level decision-making, fuzzy bi-level decision-making with multiple optima, fuzzy random bi-level decision-making, and the applications of bi-level decision-making techniques in different domains. By providing state-of-the-art knowledge, this survey will directly support researchers and practitioners in their understanding of theoretical research results and applications relating to fuzzy bi-level decision-making techniques.
Zhang, H, Quan, W, Song, J, Jiang, Z & Yu, S 2016, 'Link State Prediction-Based Reliable Transmission for High-Speed Railway Networks', IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9617-9629.
View/Download from: Publisher's site
View description>>
Due to unpredictable noise and ambient interference along high-speed railways (HSRs), it is challenging to provide reliable Internet services in severe HSR network environments. Most existing research requires expensive modifications to the large number of base stations already in use and therefore cannot be immediately deployed in existing HSR systems. In this paper, we propose a lightweight but effective solution to improve the Internet experience of HSR passengers. Different from existing approaches, we employ a data-driven link state prediction (LSP) mechanism for reliable HSR transmission, called LSP4HSR, which operates directly in the HSR's on-board routers. In particular, we conduct an extensive measurement of network status on several real HSR lines and collect a first-hand dataset of round-trip times and packet loss rates. By analyzing this dataset, we find that HSR link quality exhibits clear two-time-scale variation characteristics, and we conduct in-depth studies to explore potential reasons for this phenomenon. Furthermore, based on a two-time-scale Markov chain, we establish an accurate HSR link prediction approach, which yields an LSP-based transmission enhancement mechanism that alleviates the impact of poor link status along HSR lines. Extensive experiments verify that the proposed solution not only improves packet transmission reliability in HSR networks but can also be deployed in existing HSR systems smoothly and easily.
Zhang, L, Yang, Z, Voinov, A & Gao, S 2016, 'Nature-inspired stormwater management practice: The ecological wisdom underlying the Tuanchen drainage system in Beijing, China and its contemporary relevance', Landscape and Urban Planning, vol. 155, pp. 11-20.
View/Download from: Publisher's site
Zhang, Y, Jiang, C, Han, Z, Yu, S & Yuan, J 2016, 'Interference-Aware Coordinated Power Allocation in Autonomous Wi-Fi Environment', IEEE Access, vol. 4, pp. 3489-3500.
View/Download from: Publisher's site
View description>>
Self-managed access points (APs) with growing intelligence can optimize their own performance, but without coordination they may negatively affect others and sacrifice energy efficiency. In this paper, we focus on modeling the coordinated interaction among interest-independent and self-configured APs, and conduct a power allocation case study in an autonomous Wi-Fi scenario. Specifically, we build a coordination Wi-Fi platform (CWP), a public platform for APs to interact with each other. OpenWrt-based APs in the physical world are mapped to virtual agents (VAs) in the CWP, which communicate with each other through a standard request-reply process defined as the AP talk protocol (ATP). With ATP, an active interference measurement methodology is proposed that reflects both in-range interference and hidden-terminal interference, and Nash bargaining-based power control is further formulated for interference reduction. The CWP is deployed in a real office environment, where coordinated interactions between VAs bring a maximum 40-Mb/s throughput improvement with the Nash bargaining-based power control in the multi-AP experiments.
Zhang, Y, Robinson, DKR, Porter, AL, Zhu, D, Zhang, G & Lu, J 2016, 'Technology roadmapping for competitive technical intelligence', Technological Forecasting and Social Change, vol. 110, pp. 175-186.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Inc. Understanding the evolution and emergence of technology domains remains a challenge, particularly for potentially breakthrough technologies. Though it is well recognized that the emergence of new fields is complex and uncertain, to make decisions amidst such uncertainty one needs to mobilize various sources of intelligence to identify known-knowns and known-unknowns in order to choose appropriate strategies and policies. This competitive technical intelligence cannot rely on simple trend analyses, because breakthrough technologies have little past to inform such trends, and positing the directions of evolution is challenging. Neither do qualitative tools, which embrace the complexities, provide all the solutions, since transparent and repeatable techniques need to be employed to create best practices and evaluate the intelligence that comes from such exercises. In this paper, we present a hybrid roadmapping technique that draws on a number of approaches and integrates them into a multi-level approach (individual activities, industry evolutions and broader global changes) that can be applied to breakthrough technologies. We describe this approach in deeper detail through a case study on dye-sensitized solar cells. Our contribution to this special issue is to showcase the technique as part of a family of approaches that are emerging around the world to inform strategy and policy.
Zhang, Y, Shang, L, Huang, L, Porter, AL, Zhang, G, Lu, J & Zhu, D 2016, 'A hybrid similarity measure method for patent portfolio analysis', Journal of Informetrics, vol. 10, no. 4, pp. 1108-1130.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd. Similarity measures are fundamental tools for identifying relationships within or across patent portfolios. Many bibliometric indicators are used to determine similarity measures; for example, bibliographic coupling, citation and co-citation, and co-word distribution. This paper constructs a hybrid similarity measure method based on multiple indicators to analyze patent portfolios. Two models are proposed: categorical similarity and semantic similarity. The categorical similarity model emphasizes international patent classifications (IPCs), while the semantic similarity model emphasizes textual elements. We introduce fuzzy set routines to translate the rough technical (sub-)categories of IPCs into defined numeric values, and we calculate the categorical similarities between patent portfolios using membership-grade vectors. In parallel, we identify and highlight core terms in a three-level tree structure and compute the semantic similarities by comparing the tree-based structures. A weighting model is designed to consider 1) the bias that exists between the categorical and semantic similarities, and 2) the weighting or integrating strategy for a hybrid method. A case study measuring the technological similarities between selected firms in China's medical device industry demonstrates the reliability of our method, and the results indicate its practical value in a broad range of informetric applications.
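A compact sketch of the categorical half of the measure (cosine similarity of fuzzy IPC membership-grade vectors) blended with a stubbed semantic score; the grades and the blending weight are assumptions:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Fuzzy membership grades of two firms over five IPC subcategories.
firm_a = np.array([0.9, 0.4, 0.0, 0.7, 0.1])
firm_b = np.array([0.8, 0.1, 0.2, 0.6, 0.0])

categorical = cosine(firm_a, firm_b)
semantic = 0.55                      # placeholder for the tree-based term score
w = 0.6                              # assumed weight balancing the two models
hybrid = w * categorical + (1 - w) * semantic
print(f"categorical={categorical:.3f}, hybrid={hybrid:.3f}")
```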
Zhang, Y, Zhang, G, Chen, H, Porter, AL, Zhu, D & Lu, J 2016, 'Topic analysis and forecasting for science, technology and innovation: Methodology with a case study focusing on big data research', Technological Forecasting and Social Change, vol. 105, pp. 179-191.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Inc. The number and extent of current Science, Technology & Innovation topics are changing all the time, and their induced accumulative innovation, or even disruptive revolution, will heavily influence the whole of society in the near future. By addressing and predicting these changes, this paper proposes an analytic method to (1) cluster associated terms and phrases to constitute meaningful technological topics and their interactions, and (2) identify changing topical emphases. Our results are carried forward to present mechanisms that forecast prospective developments using Technology Roadmapping, combining qualitative and quantitative methodologies. An empirical case study of Awards data from the United States National Science Foundation, Division of Computer and Communication Foundation, is performed to demonstrate the proposed method. The resulting knowledge may hold interest for R&D management and science policy in practice.
Zhang, Z, Liu, Y, Xu, G & Luo, G 2016, 'Recommendation using DMF-based fine tuning method', Journal of Intelligent Information Systems, vol. 47, no. 2, pp. 233-246.
View/Download from: Publisher's site
View description>>
© 2016 Springer Science+Business Media New York. Recommender systems (RS) have been analyzed comprehensively in the past decade, and matrix factorization (MF)-based collaborative filtering (CF) has proven to be a useful model for improving recommendation performance. Factors inferred from item rating patterns yield the vectors that MF uses to characterize both items and users, and a recommendation follows from a good correspondence between item and user factors. A basic MF model starts with an objective function consisting of the squared error between the original training matrix and the predicted matrix, plus a regularization term (regularization parameters). To learn the predicted matrix, recommender systems minimize this regularized squared error. However, two important details have been ignored: (1) the predicted matrix becomes more and more accurate as the iterations proceed, so a fixed value for the regularization parameters may not be the most suitable choice; and (2) the final distribution of ratings in the predicted matrix is not similar to that of the original training matrix. We therefore propose a Dynamic-MF algorithm and a fine-tuning method, both general enough to overcome these problems; other information, such as social relations, can easily be incorporated into the model. Experimental analysis on two large datasets demonstrates that our approaches outperform the basic MF-based method.
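A minimal SGD matrix factorization in which the regularization weight decays across iterations, gesturing at the "dynamic" idea; the decay schedule and hyperparameters are assumptions, not the paper's Dynamic-MF algorithm:

```python
import numpy as np

R = np.array([[5,3,0,1],[4,0,0,1],[1,1,0,5],[0,1,5,4]], float)  # 0 = unrated
mask = R > 0
k, lr, epochs = 2, 0.01, 300
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for epoch in range(epochs):
    reg = 0.1 / (1 + 0.01 * epoch)          # shrink regularization over time
    for u, i in zip(*mask.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print("predicted ratings:\n", np.round(P @ Q.T, 2))
```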
Zhang, Z, Oberst, S & Lai, JCS 2016, 'Instability analysis of friction oscillators with uncertainty in the friction law distribution', Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 230, no. 6, pp. 948-958.
View/Download from: Publisher's site
View description>>
Despite substantial research efforts in the past two decades, the prediction of brake squeal propensity, a significant noise, vibration and harshness (NVH) issue for automotive manufacturers, is as difficult as ever. This is due to the complexity of the interacting mechanisms (e.g. stick-slip, sprag-slip, mode coupling and the hammering effect) and the uncertain operating conditions (temperature, pressure). In particular, two major aspects of brake squeal have attracted significant attention recently: nonlinearity and uncertainty. The fugitiveness of brake squeal can be attributed to a number of factors, including the difficulty of accurately modelling friction. In this paper, the influence of uncertainty arising from the tribological aspects of brake squeal prediction is analysed. Three types of friction models, namely the Amontons-Coulomb model, the velocity-dependent model and the LuGre model, are randomly assigned to a group of interconnected oscillators which model the dynamics of a brake system. Complex eigenvalue analysis, the standard stability analysis tool, and friction work calculations are performed to investigate the probability of instability arising from the uncertainty in the friction models. The results are discussed with a view to applying this approach to the analysis of squeal propensity for a full brake system.
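A classic two-degree-of-freedom mode-coupling illustration of complex eigenvalue analysis: friction makes the stiffness matrix asymmetric, and an eigenvalue with positive real part flags a squeal-prone mode. The parameter values are illustrative, not drawn from the paper:

```python
import numpy as np

m = 1.0
k1, k2, kc = 100.0, 80.0, 40.0
mu, kf = 0.6, 100.0                   # friction coefficient, contact stiffness

M = np.diag([m, m])
Ks = np.array([[k1 + kc, -kc], [-kc, k2 + kc]])   # symmetric structural part
Kf = np.array([[0.0, mu * kf], [0.0, 0.0]])       # friction follower-force term
K = Ks + Kf                                       # asymmetry enables coupling

# First-order companion form of M x'' + K x = 0.
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), np.zeros((n, n))]])
eigvals = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eigvals, 3))
print("flutter-unstable:", bool((eigvals.real > 1e-9).any()))
```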
Zhang, Z, Oberst, S & Lai, JCS 2016, 'On the potential of uncertainty analysis for prediction of brake squeal propensity', Journal of Sound and Vibration, vol. 377, pp. 123-132.
View/Download from: Publisher's site
Zheng, Y, Zhang, G, Han, J & Lu, J 2016, 'Pessimistic bilevel optimization model for risk-averse production-distribution planning', Information Sciences, vol. 372, pp. 677-689.
View/Download from: Publisher's site
View description>>
© 2016. Production-distribution (PD) planning problems are often addressed in an organizational hierarchy in which a distribution company that utilizes several depots is the leader and the manufacturing companies are the followers. The classical objective function of the leader is to minimize the total operating cost of the distribution company, while the followers optimize their respective production costs. However, the distribution company (the leader) frequently cannot obtain complete production information from the manufacturing companies, and may thus become risk-averse. In this case, a better description of the leader's objective function is the minimization of the maximum possible operating cost (min-max). In this paper, this type of PD problem is called a risk-averse PD planning problem and is formulated as a pessimistic mixed-integer bilevel optimization (PMIBO) model from the worst-case point of view. To solve the risk-averse PD planning problem, which is not yet well solved in the literature, a penalty function-based method is presented that transforms the PMIBO model into a series of single-level optimization problems, so that the latter can be solved by available optimization software. Finally, the feasibility of the proposed model is demonstrated using a set of case-based examples of PD planning.
Zhou, L, Merigó, JM, Chen, H & Liu, J 2016, 'The optimal group continuous logarithm compatibility measure for interval multiplicative preference relations based on the COWGA operator', Information Sciences, vol. 328, pp. 250-269.
View/Download from: Publisher's site
View description>>
The calculation of compatibility measures is an important technique employed in group decision-making with interval multiplicative preference relations. In this paper, a new compatibility measure called the continuous logarithm compatibility, which considers risk attitudes in decision-making based on the continuous ordered weighted geometric averaging (COWGA) operator, is introduced. We also develop a group continuous compatibility model (GCC Model) by minimizing the group continuous logarithm compatibility measure between the synthetic interval multiplicative preference relation and the continuous characteristic preference relation. Furthermore, theoretical foundations are established for the proposed model, such as the sufficient and necessary conditions for the existence of an optimal solution, the conditions for the existence of a superior optimal solution and the conditions for the existence of redundant preference relations. In addition, we investigate certain conditions for which the optimal objective function of the GCC Model guarantees its efficiency as the number of decision-makers increases. Finally, practical illustrative examples are examined to demonstrate the model and compare it with previous methods.
Zhu, W 2016, 'Preface', Journal of Computer Science and Technology, vol. 31, no. 6, pp. 1069-1071.
View/Download from: Publisher's site
Abeyrathna, MPAR, Abeygunawrdane, DA, Wijesundara, RAAV, Mudalige, VB, Bandara, M, Perera, S, Maldeniya, D, Madhawa, K & Locknathan, S 2016, 'Dengue propagation prediction using human mobility', 2016 Moratuwa Engineering Research Conference (MERCon), 2016 Moratuwa Engineering Research Conference (MERCon), IEEE, pp. 156-161.
View/Download from: Publisher's site
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Named Entity Recognition from Unstructured Handwritten Document Images', 2016 12th IAPR Workshop on Document Analysis Systems (DAS), 2016 12th IAPR Workshop on Document Analysis Systems (DAS), IEEE, Santorini, Greece, pp. 375-380.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Named entity recognition is an important topic in the field of natural language processing, whereas in document image processing such recognition is quite challenging without employing any linguistic knowledge. In this paper we propose an approach to detect named entities (NEs) directly from offline handwritten unstructured document images, without explicit character/word recognition and with very little aid from natural language and script rules. At the preprocessing stage, the document image is binarized and the text is segmented into words; slant/skew/baseline corrections of the words are also performed. After preprocessing, the words are sent for NE recognition. We analyze the structural and positional characteristics of NEs and extract relevant features from the word image. The BLSTM neural network is then used for NE recognition. Our system also contains a post-processing stage to reduce the true-NE rejection rate. The proposed approach produces encouraging results on both historical and modern document images, including those from an Australian archive, which are reported here for the very first time.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Offline Cursive Bengali Word Recognition Using CNNs with a Recurrent Model', 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), IEEE, Shenzhen, China, pp. 429-434.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with offline handwritten word recognition for a major Indic script: Bengali. Due to the structure of this script, the characters (mostly ortho-syllables) frequently overlap and are hard to segment, especially when the writing is cursive. Individual character recognition and the combination of outputs can increase the likelihood of errors; a better approach is to send the whole word to a suitable recognizer. Here we use a Convolutional Neural Network (CNN) integrated with a recurrent model for this purpose. Long short-term memory blocks are used as hidden units, and the CNN-derived features are employed in a recurrent model with a CTC (Connectionist Temporal Classification) layer to obtain the output. We have tested our method on three datasets: (a) a publicly available dataset, (b) a new dataset generated by our research group, and (c) an unconstrained dataset. Dataset (a) contains 17,091 words, our dataset (b) contains 107,550 words in total, and dataset (c) comprises 5,223 words. We have compared our results with those of earlier work in the area and found improved performance, which is due to the novel integration of CNNs with the recurrent model.
Adak, C, Chaudhuri, BB & Blumenstein, M 2016, 'Writer identification by training on one script but testing on another', 2016 23rd International Conference on Pattern Recognition (ICPR), 2016 23rd International Conference on Pattern Recognition (ICPR), IEEE, Mexico, pp. 1153-1158.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper deals with identifying a writer from his/her offline handwriting. In a multilingual country where a writer may write in multiple scripts, writer identification becomes challenging when we have an individual's handwriting data in one script but need to verify/identify the writer from handwriting in another script. In this paper, this issue is addressed with two scripts: English and Bengali. We model the task as a classification problem, where the training data contains only Bengali handwritten samples and testing is performed on English handwritten texts. This work is based on the understanding that a writer has some inherent stroke characteristics that are independent of the script in which (s)he writes. Some implicit structural and statistical features are extracted, and multiple classifiers are employed for writer identification. Many training sessions were run on a database of 100 writers and the performances analyzed. We have obtained encouraging results on this database, which show the effectiveness of our method.
Ahadi, A, Behbood, V, Vihavainen, A, Prior, J & Lister, R 2016, 'Students' Syntactic Mistakes in Writing Seven Different Types of SQL Queries and its Application to Predicting Students' Success', Proceedings of the 47th ACM Technical Symposium on Computing Science Education, SIGCSE '16: The 47th ACM Technical Symposium on Computing Science Education, ACM, Memphis, Tennessee, pp. 401-406.
View/Download from: Publisher's site
View description>>
© 2016 ACM. The computing education community has studied the errors of novice programmers extensively. In contrast, little attention has been given to students' mistakes in writing SQL statements. This paper presents the first large-scale quantitative analysis of students' syntactic mistakes in writing different types of SQL queries. Over 160 thousand snapshots of SQL queries were collected from over 2000 students across eight years. We describe the most common types of syntactic errors that students make, and we describe our development of an automatic classifier with an overall accuracy of 0.78 for predicting student performance in writing SQL queries.
Ahadi, A, Lister, R & Vihavainen, A 2016, 'On the Number of Attempts Students Made on Some Online Programming Exercises During Semester and their Subsequent Performance on Final Exam Questions', Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16: Innovation and Technology in Computer Science Education Conference 2016, ACM, Arequipa, Peru, pp. 218-223.
View/Download from: Publisher's site
View description>>
This paper explores the relationship between student performance on online programming exercises completed during the semester and subsequent student performance on a final exam. We introduce an approach that combines whether or not a student produced a correct solution to an online exercise with information on the number of attempts the student submitted. We use data collected from students in an introductory Java course to assess the value of this approach, comparing it to an approach that simply considers whether or not a student produced a correct solution to each exercise. We found that the method that utilizes the number of attempts correlates better with performance on a final exam.
Ahadi, A, Prior, J, Behbood, V & Lister, R 2016, 'Students' Semantic Mistakes in Writing Seven Different Types of SQL Queries', Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16: Innovation and Technology in Computer Science Education Conference 2016, ACM, Peru, pp. 272-277.
View/Download from: Publisher's site
Al-Doghman, F, Chaczko, Z, Ajayan, AR & Klempous, R 2016, 'A review on Fog Computing technology', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Budapest, Hungary, pp. 001525-001530.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Of the many computing and software-oriented models being adopted in computer networking, Fog Computing has captured a wide audience in research and industry, yet there is much confusion about its precise definition, position, role and application. The Internet of Things (IoT), today's digitized intelligent connectivity domain, demands real-time responses in many applications and services, which makes Fog Computing a suitable platform for achieving goals of autonomy and efficiency. This paper is a justification of the concepts, interest, approaches, and practices of Fog Computing. It describes the need for adopting this new model and investigates its prime features by elucidating scenarios for implementing it, thereby outlining its significance in the IoT world.
Alfaro-Garcia, VG, Gil-Lafuente, AM & Merigo, JM 2016, 'Induced generalized ordered weighted logarithmic aggregation operators', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We present the induced generalized ordered weighted logarithmic aggregation (IGOWLA) operator. It is an extension of the generalized ordered weighted logarithmic aggregation (GOWLA) operator. The IGOWLA operator uses order-induced variables that modify the reordering mechanism of the arguments to be aggregated. The main advantage of the induced process is the consideration of the complex attitude of the decision makers. We study some properties of the IGOWLA operator, such as idempotency, commutativity, boundedness and monotonicity. Finally we present an illustrative example of a group decision-making procedure using a multi-person analysis and the IGOWLA operator in the area of innovation management.
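A hedged sketch of the aggregation itself, following the GOWLA pattern exp{[sum_j w_j (ln b_j)^lambda]^(1/lambda)} with the reordering induced by separate variables; the inducing values, weights and arguments below are invented:

```python
import math

def igowla(pairs, weights, lam=1.0):
    """pairs: list of (inducing_variable, argument > 1); weights sum to 1.
    Arguments are reordered by the inducing variables, not by their own size."""
    ordered = [a for _, a in sorted(pairs, key=lambda p: p[0], reverse=True)]
    inner = sum(w * math.log(b) ** lam for w, b in zip(weights, ordered))
    return math.exp(inner ** (1.0 / lam))

# Three experts' scores; inducing variables express, e.g., expert reliability.
pairs = [(0.9, 60.0), (0.4, 75.0), (0.7, 50.0)]
weights = [0.5, 0.3, 0.2]
print(f"IGOWLA(lam=1): {igowla(pairs, weights, 1.0):.2f}")
print(f"IGOWLA(lam=2): {igowla(pairs, weights, 2.0):.2f}")
```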
Alkalbani, AM, Ghamry, AM, Hussain, FK & Hussain, OK 2016, 'Harvesting Multiple Resources for Software as a Service Offers: A Big Data Study', Neural Information Processing, ICONIP 2016, Part I, International Conference on Neural Information Processing, Springer, Kyoto, Japan, pp. 61-71.
View/Download from: Publisher's site
View description>>
Currently, the World Wide Web (WWW) is the primary resource for information on cloud services, including offers and providers. Cloud applications (Software as a Service), such as Google Apps, are one of the most popular and commonly used types of cloud services. Access to a large amount of information on SaaS offers is critical for the potential cloud client in selecting and purchasing an appropriate service. Web harvesting has become a primary tool for discovering knowledge from Web sources. This paper describes the design and development of a Web scraper to collect information on SaaS offers from target digital cloud service advertisement portals, namely www.getApp.com and www.cloudreviews.com. The collected data were used to establish two datasets: a SaaS providers dataset and a SaaS reviews/feedback dataset. Further, we applied sentiment analysis to the reviews dataset to establish a third dataset called the SaaS sentiment polarity dataset. The significance of this study is that it is the first work to focus on Web harvesting for the cloud computing domain, and it establishes the first SaaS services datasets. Furthermore, we present statistical data that can help determine the current status of SaaS services and the number of services offered on the Web. In our conclusion, we provide further insight into improving Web scraping for SaaS service information. Our datasets are available online through www.bluepagesdataset.com
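A hedged sketch of the harvesting step with requests and BeautifulSoup; the URL and the CSS selector are placeholders, not the portals' real markup, and a production scraper must respect robots.txt and rate limits:

```python
import requests
from bs4 import BeautifulSoup

def scrape_reviews(url):
    """Fetch a listing page and pull review snippets out of the HTML."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # 'review-text' is a hypothetical class name, not the portals' real markup.
    return [div.get_text(strip=True) for div in soup.select(".review-text")]

if __name__ == "__main__":
    for review in scrape_reviews("https://example.com/saas-reviews"):
        print(review)
```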
Alkalbani, AM, Ghamry, AM, Hussain, FK & Hussain, OK 2016, 'Predicting the sentiment of SaaS online reviews using supervised machine learning techniques', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 1547-1553.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. There has been a dramatic increase in the sharing of opinions and information across different web platforms and social media, especially online product reviews. Cloud web portals, such as getApp.com, were designed to amalgamate cloud service information and to examine how consumers evaluate their experience of using cloud computing products. The current literature shows the growing importance of online users' reviews, hence this study investigates consumers' feedback on Software-as-a-Service (SaaS) products by developing models to predict reviewers' attitudes. The goal of this paper is to develop models that predict the sentiment of SaaS consumers' reviews (positive or negative). This research proposes five models based on five algorithms: the support vector machine algorithm, the Naive Bayes algorithm, the Naive Bayes (Kernel) algorithm, the k-nearest neighbors algorithm, and the decision tree algorithm. The prediction accuracy of the support vector machine model (5-fold cross-validation) is 92.37%, which suggests that this algorithm is better able to determine the sentiment of online reviews than the other models. The results of this study provide valuable insight into online SaaS reviews and will assist in the design of SaaS review websites.
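A minimal scikit-learn pipeline of the kind evaluated here: TF-IDF features feeding a linear SVM, scored by cross-validation; the six toy reviews stand in for the harvested SaaS dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "great app, easy to deploy and support is fast",
    "terrible uptime, lost our data twice",
    "intuitive dashboard and fair pricing",
    "slow, buggy and the billing is confusing",
    "reliable service, our team loves it",
    "constant crashes, would not recommend",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
scores = cross_val_score(clf, reviews, labels, cv=3)
print("fold accuracies:", scores)
```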
Alkalbani, AM, Ghamry, AM, Hussain, FK & Hussain, OK 2016, 'Sentiment Analysis and Classification for Software as a Service Reviews', IEEE 30th International Conference on Advanced Information Networking and Applications (AINA 2016), International Conference on Advanced Information Networking and Applications, IEEE, Crans-Montana, Switzerland, pp. 53-58.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the rapid growth of cloud services, there has been a significant increase in the number of online consumer reviews and opinions of these services on different social media platforms. These reviews are a source of valuable information about cloud market position and cloud consumer satisfaction. This study explores cloud consumers' reviews reflecting the user's experience with Software as a Service (SaaS) applications. The reviews were collected from different web portals, and around 4000 online reviews were analysed using sentiment analysis to identify the polarity of each review, that is, whether the sentiment being expressed is positive, negative, or neutral. This research also develops a model for predicting the sentiment of SaaS consumers' reviews using a supervised machine learning technique, the support vector machine (SVM). The sentiment results show that 62% of the reviews are positive, which indicates that consumers are most likely satisfied with SaaS services. The results show that the prediction accuracy of the SVM-based binary occurrence approach (3-fold cross-validation) is 92.30%, indicating that it performs better in determining sentiment than the other approaches (term occurrences, TF-IDF). This work also provides valuable insight into online SaaS reviews and offers the research community the first SaaS polarity dataset.
Alkalbani, AM & Hussain, FK 2016, 'A Comparative Study and Future Research Directions in Cloud Service Discovery', Proceedings of the 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA), IEEE Conference on Industrial Electronics and Applications, IEEE, Dearborn, MI, United States, pp. 1049-1056.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Cloud computing technology is a new paradigm that provides Information Technology (IT) resources via the Internet. This new way of offering IT resources to users brings new challenges, such as cloud service discovery. Cloud users now face a dilemma, as they have an abundant choice of cloud services, and many cloud providers offer services that deliver similar functionality. Locating the best and most appropriate cloud service from a suitable and capable provider is a primary concern for any consumer. To clearly comprehend the scope of this problem, a thorough analysis of the limitations of cloud service discovery approaches is required, which in turn will empower researchers to deliver better solutions that help consumers make an informed decision and choose the right service. This paper presents an overview of current cloud service discovery trends and challenges in recent studies. The reviewed approaches are classified according to service discovery architecture and techniques, and are compared and analysed from several perspectives, including approach model/architecture, service type, ontology representation (domain, language, and reasoning), dynamic discovery model, evaluation model, users' preference techniques, data updates, and public repositories.
Allen, G, Burdon, SW & Dovey, K 2016, 'The Socio-Political Antecedents of Technical Innovation', International Society for Professional Innovation Management Conference, International Society for Professional Innovation Management, Porto, Portugal, pp. 1-10.
View description>>
The paper reports on a management initiative within an iconic global high-tech company to facilitate technical innovation within two teams (situated in different global locations of the company) that had been unable to produce any form of technical innovation over a period of several years. Experimenting with an action research strategy, this initiative had the practical goal of generating technical innovation and the research goal of gaining insight into the social dynamics that may facilitate such innovation. The two-year process delivered novel insights into the circumstances that enabled these teams to deliver four company-lauded technical innovations. The principal finding of the research, that social innovation is an antecedent of technical innovation, highlights the importance of alternative research methodologies (to the dominant research approach used in R&D facilities) in addressing the politics of innovation within large organisations.
Alzoubi, YI & Gill, AQ 2016, 'An Agile Enterprise Architecture-Driven Model for Geographically Distributed Agile Development', International Conference on Information Systems Development, ISD 2015, International Conference on Information Systems Development, Springer International Publishing, Harbin, China, pp. 63-77.
View/Download from: Publisher's site
View description>>
Agile development is a highly collaborative environment which requires active communication (i.e. effective and efficient communication) among stakeholders. Active communication in a geographically distributed agile development (GDAD) environment is difficult to achieve due to many challenges. The literature has reported that active communication plays a critical role in enhancing GDAD performance by reducing the cost and time of a project. However, little empirical evidence exists about how to study and establish the active communication construct in GDAD in terms of its dimensions, determinants and effects on GDAD performance. To address this knowledge gap, this paper describes an enterprise architecture (EA) driven research model to identify and empirically examine the GDAD active communication construct. This model can be used by researchers and practitioners to examine the relationships among two dimensions of GDAD active communication (effectiveness and efficiency), one antecedent that can be controlled (agile EA), and four dimensions of GDAD performance (on-time completion, on-budget completion, software functionality and software quality).
Arellano, LAP, Castro, EL, Ochoa, EA & Merigo-Lindahl, JM 2016, 'Prioritized induced probabilistic OWA for dispute resolution methods', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, Univ Texas El Paso, El Paso, TX, pp. 1-6.
View/Download from: Publisher's site
Awais, M & Gill, AQ 2016, 'Enterprise IT governance: Back to basics', 25th International Conference on Information Systems Development, ISD 2016, International Conference on Information Systems Development, AIS eLibrary, Katowice, Poland, pp. 188-196.
View description>>
Enterprise IT (EIT) governance is an emerging and convoluted area in Information Technology (IT). As a subset, EIT governance operates under defined boundaries and a set of rules inherited from enterprise governance. There are a number of definitions that describe EIT governance concepts. These concepts are linked in an intricate web of EIT governance, and they and their related definitions have emerged over a period of time, either through implementation models or IT events. This marks the need for a comprehensive review and synthesis of governance concepts in the modern context of an ever-changing IT landscape. This research applied the well-known Systematic Literature Review (SLR) method; four different databases were used to find relevant research papers. Based on the available definitions, evidence and analysis, it is found that four concepts are used more than any others: decision, organization, process and goal. The result of this study is a consolidated set of key concepts, their relationships and trends, which can be used as a knowledge base by researchers and practitioners for further work in this important area of EIT governance.
Bakirov, R, Gabrys, B & Fay, D 2016, 'Augmenting adaptation with retrospective model correction for non-stationary regression problems', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 771-779.
View/Download from: Publisher's site
View description>>
Existing adaptive predictive methods often use multiple adaptive mechanisms as part of their coping strategy in non-stationary environments. We address a scenario where selective deployment of these adaptive mechanisms is possible. In this case, deploying each adaptive mechanism results in a different candidate model, and only one of these candidates is chosen to make predictions on the subsequent data. After observing the error of each candidate, it is possible to revert the current model to the one which had the least error. We call this strategy retrospective model correction. In this work we investigate the benefits of such an approach. As a vehicle for the investigation we use an adaptive ensemble method for regression in batch learning mode which employs several adaptive mechanisms to react to changes in the data. Using real-world data from the process industry we show empirically that retrospective model correction is indeed beneficial for predictive accuracy, especially for the weaker adaptive mechanisms.
Bashir, MR & Gill, AQ 2016, 'Towards an IoT Big Data Analytics Framework: Smart Buildings Systems.', HPCC/SmartCity/DSS, IEEE International Conference on High Performance Computing and Communications, IEEE Computer Society, Sydney, Australia, pp. 1325-1332.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. There is a growing interest in IoT-enabled smart buildings. However, the storage and analysis of large amounts of high-speed, real-time smart building data is a challenging task. There are a number of contemporary Big Data management technologies and advanced analytics techniques that can be used to deal with this challenge. There is a need for an integrated IoT Big Data Analytics (IBDA) framework to fill the research gap in the Big Data Analytics domain. This paper presents one such IBDA framework for the storage and analysis of real-time data generated from IoT sensors deployed inside a smart building. The initial version of the IBDA framework has been developed using Python and the Cloudera Big Data platform. The applicability of the framework is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. The initial results indicate that the proposed framework is fit for purpose and seems useful for IoT-enabled Big Data Analytics for smart buildings. The key contribution of this paper is the integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain. This framework will be further evaluated and extended through its implementation in other domains.
Blanco-Mesa, F & Merigó, JM 2016, 'Bonferroni Means with the Adequacy Coefficient and the Index of Maximum and Minimum Level', Lecture Notes in Business Information Processing, Springer International Publishing, pp. 155-166.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. The aim of the paper is to develop new aggregation operators using Bonferroni means, OWA operators and some distance and norm measures. We introduce the BON-OWAAC and BON-OWAIMAM operators, which include the adequacy coefficient and the index of maximum and minimum level in the same formulation as Bonferroni means and the OWA operator. The main advantages of using these operators are that they allow considering continuous aggregations, multiple comparisons between each argument and distance measures in the same formulation. The numerical example focuses on an entrepreneurial case in the sport industry in Colombia.
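For readers unfamiliar with the OWA building block that the BON-OWAAC and BON-OWAIMAM operators extend, here is a minimal sketch of plain OWA aggregation, not the authors' full formulation: arguments are reordered in descending order and then combined with position weights.

```python
# Minimal OWA sketch: weights attach to positions in the sorted argument
# vector, so the same operator can act max-like, mean-like or min-like.
def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to one
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

print(owa([0.3, 0.9, 0.6], [0.6, 0.3, 0.1]))    # emphasises larger arguments
print(owa([0.3, 0.9, 0.6], [1/3, 1/3, 1/3]))    # reduces to the plain mean
```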
Blanco-Mesa, F & Merigo-Lindahl, JM 2016, 'Bonferroni distances with OWA operators', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, El Paso, TX, USA, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The aim of the paper is to develop new aggregation operators using Bonferroni means, ordered weighted averaging (OWA) operators and some distance measures. We introduce the Bonferroni-Hamming weighted distance, the Bonferroni OWA distance, and Bonferroni distances with OWA operators and weighted averages. The main advantages of using these operators are that they allow considering different aggregation contexts, multiple comparisons between each argument and distance measures in the same formulation.
Blanco-Mesa, F, Merigo-Lindahl, JM & Gil-Lafuente, AM 2016, 'A bibliometric analysis of fuzzy decision making research', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, El Paso, TX, USA, pp. 1-4.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Fuzzy decision-making consists in making decisions under complex and uncertain environments where the information can be assessed with fuzzy sets and systems. The aim of this study is to review the main contributions in this field by using a bibliometric approach. To do so, the article uses a wide range of bibliometric indicators, including citations and the h-index. Moreover, it uses the VOSviewer software to map the main trends in this area. The work considers the leading journals, articles, authors, institutions and countries. The results indicate that Lotfi A. Zadeh led the origins of fuzzy research and that Ronald Yager is the most prominent author in fuzzy decision-making. The USA was the traditional leader in this field with the most significant researchers. In recent years, however, the field has received growing attention from Asian authors, who are starting to lead it. This discipline has strong potential and the expectation is that it will continue to grow.
Brady, F & Dyson, LE 2016, 'Exploring the Contribution of Design to Mobile Technology Uptake in a Remote Region of Australia', Culture, Technology, Communication. Common World, Different Futures, International Conference on Culture, Technology, and Communication, Springer International Publishing, London, UK, pp. 55-67.
View/Download from: Publisher's site
View description>>
© IFIP International Federation for Information Processing 2016. Some of the most remote communities in Australia have participated in a technological revolution since the arrival of mobile phone networks in 2003. We follow this journey in four largely Indigenous communities in Cape York and the Torres Strait Islands, from the first 2G network, to 3G, and finally to mobile broadband and smartphones, looking at its impact on communication, Internet access, new media use and social networking. In seeking to understand this phenomenon, we conclude that aspects of the design of the mobile system have contributed: the flexibility of the technology to adapt to the needs of varying social groups; the small, portable nature of the devices, which allows them to serve a traditionally mobile people and to be kept as personal devices; a billing system which serves low-income people; and the multifunctionality of the technology, which provides entertainment while also supporting the use of Facebook.
Braytee, A, Catchpoole, DR, Kennedy, PJ & Liu, W 2016, 'Balanced Supervised Non-Negative Matrix Factorization for Childhood Leukaemia Patients', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, Indianapolis, Indiana, USA, pp. 2405-2408.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Supervised feature extraction methods have received considerable attention in the data mining community due to their capability to improve the classification performance of unsupervised dimensionality reduction methods. With increasing dimensionality, several supervised feature extraction methods have been proposed to achieve a feature ranking, especially on microarray gene expression data. This paper proposes a method with twofold objectives: it implements a balanced supervised non-negative matrix factorization (BSNMF) to handle the class imbalance problem in supervised non-negative matrix factorization techniques, and it proposes an accurate gene ranking method based on the proposed BSNMF for microarray gene expression datasets. To the best of our knowledge, this is the first work to handle the class imbalance problem in supervised feature extraction methods. This work is part of a Human Genome project at The Children's Hospital at Westmead (TB-CHW), Australia. Our experiments indicate that components factorized using a supervised feature extraction approach have more classification capability than unsupervised ones, but that this drastically fails in the presence of class imbalance. Our proposed method outperforms the state-of-the-art methods and shows promise in overcoming this concern.
Braytee, A, Liu, W & Kennedy, P 2016, 'A Cost-Sensitive Learning Strategy for Feature Extraction from Imbalanced Data', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Kyoto, Japan, pp. 78-86.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. In this paper, novel cost-sensitive principal component analysis (CSPCA) and cost-sensitive non-negative matrix factorization (CSNMF) methods are proposed for handling the problem of feature extraction from imbalanced data. The presence of highly imbalanced data misleads existing feature extraction techniques into producing biased features, which results in poor classification performance, especially for the minor class. To solve this problem, we propose a cost-sensitive learning strategy for feature extraction techniques that uses the imbalance ratio of classes to discount the majority samples. This strategy is adapted to popular feature extraction methods such as PCA and NMF. The main advantage of the proposed methods is that they are able to lessen the inherent bias of the extracted features towards the majority class in existing PCA and NMF algorithms. Experiments on twelve public datasets with different levels of imbalance ratio show that the proposed methods outperformed the state-of-the-art methods on multiple classifiers.
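The discounting idea can be sketched generically; the snippet below is not the authors' exact CSPCA but shows how majority samples might be down-weighted by the imbalance ratio before a PCA fit (assuming scikit-learn and NumPy; a full weighted PCA would also use the weighted mean for centering).

```python
import numpy as np
from sklearn.decomposition import PCA

def cost_sensitive_pca(X, y, n_components=2):
    """Down-weight majority-class rows by the imbalance ratio, then fit PCA."""
    classes, counts = np.unique(y, return_counts=True)
    weight = {c: counts.min() / n for c, n in zip(classes, counts)}
    sample_w = np.array([weight[c] for c in y])
    Xw = X * np.sqrt(sample_w)[:, None]   # scale rows before decomposition
    return PCA(n_components=n_components).fit(Xw)

X = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [5.0, 5.0]])
y = np.array([0, 0, 0, 1])               # 3:1 class imbalance
print(cost_sensitive_pca(X, y, n_components=1).components_)
```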
Bremner, MJ, Montanaro, A & Shepherd, D 2016, 'Average-case complexity versus approximate simulation of commuting quantum computations', 19th Conference on Quantum Information Processing, Banff, Canada.
Brereton, M & van den Hoven, E 2016, 'Session details: Provocations and Work-in-Progress (P-WiP)', Proceedings of the 2016 ACM Conference Companion Publication on Designing Interactive Systems, DIS '16: Designing Interactive Systems Conference 2016, ACM.
View/Download from: Publisher's site
Broekhuijsen, M, Mols, I & van den Hoven, E 2016, 'A holistic design perspective on media capturing and reliving', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania, pp. 180-184.
View/Download from: Publisher's site
Carey, B & Johnston, A 2016, 'Reflection on action in NIME research: Two complementary perspectives', Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 377-382.
View description>>
This paper discusses practice-based research in the context of live performance with interactive systems. Practice-based research is outlined in depth, with key concepts and approaches contextualised with respect to research in the NIME field. We focus on two approaches, both of which are concerned with documenting, examining and reflecting on the real-world behaviours and experiences of people and artefacts involved in the creation of new works. The first approach is primarily based on reflections by an individual performer/developer (auto-ethnography) and the second on interviews and observations. The rationales for both approaches are presented along with findings from research which applied them, in order to illustrate and explore the characteristics of both. Challenges, including the difficulty of balancing rigour and relevance and the risks of negatively impacting creative practices, are articulated, as are the potential benefits.
Castro, EL, Ochoa, EA, Merigo-Lindahl, JM & Lafuente, AMG 2016, 'Heavy Moving Averages in exchange rate forecasting', 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), IEEE, Univ Texas El Paso, El Paso, TX, pp. 1-4.
View/Download from: Publisher's site
Cetindamar, D 2016, 'A new role for universities: Technology transfer for social innovations', 2016 Portland International Conference on Management of Engineering and Technology (PICMET), 2016 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Honolulu, HI, USA, pp. 290-295.
View/Download from: Publisher's site
View description>>
© 2016 Portland International Conference on Management of Engineering and Technology, Inc. Universities have played a significant role in stimulating technological change and innovation; the focus has been the commercialization of technical knowledge generated within science, technology and mathematics disciplines. Universities have increasingly disseminated knowledge and integrated with industry in the form of the entrepreneurial university. This transformation of the university mission has supported university-industry-government interactions in creating commercial entrepreneurial spin-offs, while neglecting interaction with a critical stakeholder of the university: society. To our knowledge, the transfer of knowledge generated within universities into social enterprises and social entrepreneurs has not been studied in the literature. This paper presents this gap in the literature as an invitation for researchers to focus on the topic.
Chang, CL, Huang, CS, Lu, SW & Lin, C 2016, 'Apply Artifact Rejection on Multi-Channel Dry EEG System under Motion'.
Chang, CL, Huang, CS, Lu, SW & Lin, C 2016, 'Real-Time Unsupervised Artifact Removal Algorithm Using Wearable Dry EEG System'.
Chen, S, Chen, S, Wang, Z, Liang, J, Yuan, X, Cao, N & Wu, Y 2016, 'D-Map: Visual analysis of ego-centric information diffusion patterns in social media', 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, Baltimore, MD, USA, pp. 41-50.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Popular social media platforms could rapidly propagate vital information over social networks among a significant number of people. In this work we present D-Map (Diffusion Map), a novel visualization method to support exploration and analysis of social behaviors during such information diffusion and propagation on typical social media through a map metaphor. In D-Map, users who participated in reposting (i.e., resending a message initially posted by others) one central user's posts (i.e., a series of original tweets) are collected and mapped to a hexagonal grid based on their behavior similarities and in chronological order of the repostings. With additional interaction and linking, D-Map is capable of providing visual portraits of the influential users and describing their social behaviors. A comprehensive visual analysis system is developed to support interactive exploration with D-Map. We evaluate our work with real world social media data and find interesting patterns among users. Key players, important information diffusion paths, and interactions among social communities can be identified.
Chen, S, Wang, Z, Liang, J & Yuan, X 2016, 'Uncertainty-aware Visual Analytics for Exploring Human Behaviors from Heterogeneous Spatial Temporal Data', Proceedings of the Third Conference of China Visualization and Visual Analytics (ChinaVis'16), the Third Conference of China Visualization and Visual Analytics (ChinaVis'16), Changsha, China.
View description>>
When analyzing human behaviors, we need to construct the behaviors from multiple sources of data, e.g. trajectory data, transaction data, identity data, etc. The problem we face is data conflicts: different resolutions and missing or contradictory records, which together lead to uncertainty in the spatial temporal data. Such uncertainty in the data leads to difficulties, and even failure, in visual analytics tasks for analyzing people's behavior, patterns and outliers. Traditional automatic methods cannot solve the problems in such a complex scenario, where the uncertain and conflicting patterns are not well defined. To solve the problems, we propose a semi-automatic approach for users to resolve the conflicts and identify the uncertainties. We summarize five types of uncertainties and their solutions for conducting behavior analysis tasks. Combined with these uncertainty-aware methods, we propose a visual analytics system to analyze human behaviors, detect patterns and find outliers. Case studies from the IEEE VAST Challenge 2014 dataset confirm the effectiveness of our approach.
Chinchore, A, Xu, G & Jiang, F 2016, 'Classifying sybil in MSNs using C4.5', 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Durham, USA, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Sybil detection is an important task in cyber security research. Over the past years, many data mining algorithms have been adopted to fulfil this task, yet using classification and regression for sybil detection remains very challenging. Despite existing research on modelling classification for sybil detection and prediction, this research proposes a new solution for how sybil activity can be tracked. Prediction of sybil behaviour is demonstrated by analysing graph-based classification and regression techniques using decision trees, and by describing dependencies across the different methods. The calculated gain and maxGain helped to trace sybil users in the datasets.
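As a rough sketch of gain-based classification on graph-derived attributes: scikit-learn's tree with criterion="entropy" approximates C4.5's information-gain splitting (true C4.5 also adds gain ratio and native categorical handling), and the features and data below are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-account features: [friend requests/day,
# mutual-friend ratio, communities joined]
X = [[120, 0.1, 40], [8, 0.8, 2], [150, 0.2, 35], [12, 0.9, 3]]
y = ["sybil", "genuine", "sybil", "genuine"]

# entropy criterion = information-gain splits, as in C4.5-style learning
clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(clf.predict([[100, 0.15, 30]]))    # -> ['sybil'] on this toy data
```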
Chu, C, Xu, G, Brownlow, J & Fu, B 2016, 'Deployment of churn prediction model in financial services industry', 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Durham, NC, pp. 1-2.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Nowadays, data analytics techniques are playing an increasingly crucial role in financial services due to the huge benefits they bring. To ensure the successful implementation of an analytics project, various factors and procedures need to be considered besides technical issues. This paper introduces some practical lessons from our deployment of a data analytics project in a leading wealth management company in Australia. Specifically, the process of building a customer churn prediction model is described. Besides the common steps of data analysis, we also introduce how to deal with other practical issues, such as data privacy and change management, that are encountered by many financial companies.
Davis, JJ, Kozma, R, Lin, C-T & Freeman, WJ 2016, 'Spatio-temporal EEG pattern extraction using high-density scalp arrays', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, BC, Canada, pp. 889-896.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Previous experimental studies on rabbits using electrocorticograms (ECoGs) over the cortical surface indicate spatio-temporal dynamics in the form of amplitude modulation (AM) patterns, which intermittently collapse at theta rates and give rise to rapidly propagating phase modulated (PM) patterns. The observed dynamics have been shown to be of cognitive relevance carrying useful information on the meaning of sensory information perceived by the subject. We have extended these studies to human scalp EEG measurements, which show evidence that cognitively relevant AM and PM patterns are observable by non-intrusive experimental techniques as well. The present work develops experimental techniques for studying cognitively relevant spatio-temporal neural dynamics using a high-density EEG array. Theoretical considerations indicate that the required spatial resolution to detect and categorize amplitude and phase patterns should be in the range of 3-5 mm. A prototype 1-dimensional array (MINDO-48S) has been developed, which has 48 electrodes in a flexible linear array of 5 mm spacing. The present work focuses on the extraction of broadly distributed spatio-temporal patterns, which carry cognitively relevant information. Preliminary analysis of the signal-to-noise ratio indicates that the sensitivity of the experiment allows the predicted AM patterns to be measured.
de Vries, NJ, Arefin, AS, Mathieson, L, Lucas, B & Moscato, P 2016, 'Relative Neighborhood Graphs Uncover the Dynamics of Social Media Engagement', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Advanced Data Mining and Applications, Springer International Publishing, Gold Coast, Queensland, Australia, pp. 283-297.
View/Download from: Publisher's site
View description>>
In this paper, we examine whether the Relative Neighborhood Graph (RNG) can reveal related dynamics of page-level social media metrics. A statistical analysis is also provided to illustrate the application of the method on two other datasets (the Indo-European Language dataset and the Shakespearean Era Text dataset). Using social media metrics on the world's 'top check-in locations' Facebook pages dataset, the statistical analysis reveals coherent dynamical patterns. In the largest cluster, the categories 'Gym', 'Fitness Center', and 'Sports and Recreation' appear closely linked together in the RNG. Taken together, our study validates our expectation that RNGs can provide a "parameter-free" mathematical formalization of proximity. Our approach gives useful insights into user behaviour in social media page-level metrics as well as other applications.
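The RNG itself has a compact definition: two points are connected unless some third point is closer to both than they are to each other. Below is a brute-force sketch of that rule (O(n^3), adequate for small page-metric datasets); the points are illustrative only.

```python
from math import dist

def rng_edges(points):
    """Link p and q unless some r satisfies max(d(p,r), d(q,r)) < d(p,q)."""
    edges = []
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            d_pq = dist(p, q)
            if not any(dist(p, r) < d_pq and dist(q, r) < d_pq
                       for k, r in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges

print(rng_edges([(0, 0), (1, 0), (0.5, 2.0)]))   # all pairs linked here
```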
Du, J, Jiang, C, Wang, J, Yu, S & Ren, Y 2016, 'Trustable service rating in social networks: A peer prediction method', 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, pp. 415-419.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the development of social network based applications, different service approaches to deliver these applications have emerged. Users' reporting and sharing of their consumption experience can be utilized to rate the quality of different online services. How to ensure the authenticity of users' reports and how to identify malicious users with cheating reports are important issues in achieving an accurate service rating. In this paper, we provide a private-prior peer prediction mechanism based service rating system with a fusion center, which evaluates users' trustworthiness from their reports by applying a strictly proper scoring rule. In addition, to identify malicious users and bad-functioning/unreliable users with a high error rate of quality judgement, an unreliability index is proposed to evaluate the uncertainty of reports. By combining trustworthiness and unreliability, malicious users cannot receive high trustworthiness and low unreliability at the same time when they report falsified feedback. Simulation results indicate that the proposed peer prediction based service rating can identify malicious and unreliable users effectively, motivate users to report truthfully, and achieve high service rating accuracy.
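The strictly proper scoring idea can be illustrated with the quadratic (Brier) rule, one member of the family such mechanisms build on; the actual private-prior peer prediction mechanism is richer, so treat this purely as a sketch of why honest reporting maximises expected score.

```python
# Quadratic (Brier-type) scoring rule for a binary outcome: 1 = good service.
def quadratic_score(report_p, outcome):
    return 2 * (report_p if outcome else 1 - report_p) \
           - (report_p ** 2 + (1 - report_p) ** 2)

true_belief = 0.8
for report in (0.5, 0.8, 1.0):
    expected = (true_belief * quadratic_score(report, 1)
                + (1 - true_belief) * quadratic_score(report, 0))
    print(report, round(expected, 3))    # expected score peaks at report = 0.8
```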
Du, J, Jiang, C, Yu, S & Ren, Y 2016, 'Time cumulative complexity modeling and analysis for space-based networks', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, the notion of the cumulative time varying graph (C-TVG) is proposed to model the high dynamics and relationships between ordered static graph sequences for space-based information networks (SBINs). In order to improve the performance of management and control of the SBIN, the complexity and social properties of the SBIN's highly dynamic topology over a period of time are investigated based on the proposed C-TVG. Moreover, a cumulative topology generation algorithm is designed to establish the topology evolution of the SBIN, which supports the C-TVG based complexity analysis and reduces the network congestion and collisions resulting from traditional link establishment mechanisms between satellites. Simulations test the social properties of the SBIN cumulative topology generated through the proposed C-TVG algorithm. Results indicate that the C-TVG based analysis reveals more complexity properties of the SBIN than topology analysis without time cumulation. In addition, an attack on the SBIN is simulated, and the results indicate the validity and effectiveness of the proposed C-TVG and the C-TVG based complexity analysis for the SBIN.
Du, J, Jiang, C, Yu, S, Chen, K-C & Ren, Y 2016, 'Privacy protection: A community-structured evolutionary game approach', 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, pp. 1007-1011.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Users of social networks can be connected with each other through different communities according to professions, living locations and personal interests. As each user on a social network platform stores and shows a large amount of personal data, privacy protection arises as a major concern. This paper establishes a game theoretic framework to model how users' interactions influence their strategies on whether or not to adopt privacy protection. To model the relationship of user communities, we introduce community-structured evolutionary dynamics, where users' interactions can only happen among those who have at least one common community. We then analyze the dynamics of users' privacy protection behavior based on the proposed community-structured evolutionary game theoretic framework. Results show that social network managers need to provide an appropriate security service b and payment mechanism c to ensure that the cost performance b/c is larger than the critical cost performance, which can promote the spread of privacy-protective behavior over the network. Moreover, the results can help to design an appropriate structure for the social network and to control the speed at which all users converge to adopting privacy protection.
Erfani, SS, Abedin, B & Blount, Y 2016, 'Social support, social belongingness, and psychological well-being: Benefits of online healthcare community membership', Pacific Asia Conference on Information Systems PACIS 2016 Proceedings, Pacific Asia Conference on Information Systems, PACIS, Taiwan.
View description>>
Despite an increase in users interacting via Online Social Network Sites, the value they generate for health purposes is under-researched. Previous research has mainly focused on the capacity of Online Social Network Sites to create social and organisational value, yet the value of these platforms can also be investigated in other contexts, such as health. This paper studies the value of membership of health-related Online Social Network Sites, and in particular investigates how participation in such communities benefits users' psychological well-being. Twenty-five qualitative semi-structured interviews were conducted with users of the Ovarian Cancer Australia Facebook page (OCA Facebook), the exemplar online community used in this study. The participants were people who were affected by ovarian cancer and were members of the OCA Facebook community, where they exchanged information and received support. Using a multi-theory perspective to interpret the data, the results showed that a sense of belonging to a community of like-minded people, as well as receiving social support through message exchange in the community, were the two main perceived benefits of OCA online community membership. The findings also showed that most interviewees used OCA Facebook on a daily basis. While some were passive users who only read/observed the content created by others, other users actively posted content and communicated with other members. The paper concludes with implications of the results, recommendations for future studies, and a qualitative theoretical framework to examine the value of online communities in a more holistic way.
Fang, XS, Sheng, QZ & Wang, X 2016, 'An Ensemble Approach for Better Truth Discovery', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 298-311.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Truth discovery is a hot research topic in the Big Data era, with the goal of identifying true values from the conflicting data provided by multiple sources on the same data items. Many methods have previously been proposed to tackle this issue. However, none of the existing methods is a clear winner that consistently outperforms the others, due to the varied characteristics of different methods. In addition, in some cases an improved method may not even beat its original version, as a result of the bias introduced by limited ground truths or the different features of the applied datasets. To realize an approach that achieves better and more robust overall performance, we propose to fully leverage the advantages of existing methods by extracting truth from the prediction results of these existing truth discovery methods. In particular, we first distinguish between the single-truth and multi-truth discovery problems and formally define the ensemble truth discovery problem. Then, we analyze the feasibility of the ensemble approach and derive two models, i.e., a serial model and a parallel model, to implement the approach and to tackle the above two types of truth discovery problems. Extensive experiments over three large real-world datasets and various synthetic datasets demonstrate the effectiveness of our approach.
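In the spirit of the parallel model, combining finished base methods can be as simple as voting over their claimed values; the sketch below is a deliberately simplified illustration (the method names and claims are invented, and the paper's models also weight the base methods rather than treating them equally).

```python
from collections import Counter

def ensemble_truth(predictions):
    """predictions: {method_name: {item: claimed_value}} -> {item: value}."""
    items = {i for claims in predictions.values() for i in claims}
    truths = {}
    for item in items:
        votes = Counter(c[item] for c in predictions.values() if item in c)
        truths[item] = votes.most_common(1)[0][0]  # keep best-supported value
    return truths

preds = {"method_a": {"city_of_birth": "Honolulu"},
         "method_b": {"city_of_birth": "Honolulu"},
         "method_c": {"city_of_birth": "Nairobi"}}
print(ensemble_truth(preds))             # {'city_of_birth': 'Honolulu'}
```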
Gu, F, Zhang, G, Lu, J & Lin, C-T 2016, 'Concept drift detection based on equal density estimation', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 24-30.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. An important problem that remains in online data mining systems is how to accurately and efficiently detect changes in the underlying distribution of large data streams. The challenge for change detection methods is to maximise the accumulative effect of changing regions with unknown distribution, while at the same time providing sufficient information to describe the nature of the changes. In this paper, we propose a novel change detection method based on the estimation of equal density regions, with the aim of overcoming the instability and inefficiency that underlie methods with predefined space partitioning schemes. Our method is general, nonparametric and requires no prior knowledge of the data distribution. A series of experiments demonstrate that our method effectively detects concept drift in single-dimensional as well as high-dimensional data, and is also able to explain the change by locating the data points that contribute most to it. The detection result is guaranteed by statistical tests.
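For contrast, a conventional two-window check makes the baseline concrete; the sketch below uses a Kolmogorov-Smirnov test on a reference versus current window (this is a generic detector, not the paper's equal-density method, and the synthetic streams are invented).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 500)    # window drawn from the old concept
current = rng.normal(0.8, 1.0, 500)      # window after a mean shift

stat, p = ks_2samp(reference, current)   # two-sample distribution comparison
print(f"drift detected: {p < 0.01} (p = {p:.2e})")
```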
Feng, B, Zhou, H, Li, G, Li, H & Yu, S 2016, 'SAT-GRD: An ID/Loc split network architecture interconnecting satellite and ground networks', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Since the satellite network plays an irreplaceable role in many fields, how to interconnect it with the ground network has received unprecedented attention. However, with many more requirements imposed on the current terrestrial network, serious problems caused by the dual role of IP addresses have been exposed. In this context, direct interconnection does not seem to be the most appropriate way. Thus, in this paper, SAT-GRD, an incrementally deployable ID/Loc split network architecture, is proposed, aiming to integrate the satellite and ground networks efficiently. Specifically, SAT-GRD separates the identity of both the host and the network from the location. It then isolates the host from the network, and further divides the whole network into core and edge networks. These make SAT-GRD much more flexible and scalable, achieving heterogeneous network convergence while avoiding the problems resulting from the overloaded semantics of IP addresses. In addition, much work has been done to implement a proof-of-concept prototype of SAT-GRD, and experimental results prove its feasibility.
Feng, B, Zhou, H, Zhang, H, Jiang, J & Yu, S 2016, 'A Popularity-Based Cache Consistency Mechanism for Information-Centric Networking', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Information-Centric Networking (ICN) has emerged as a promising way to achieve efficient content delivery over the Internet, and it can be seen as a super large-scale distributed caching system. However, as one of the most important problems, the cache consistency issue, which concerns whether cached contents in routers are outdated, has still not been investigated thoroughly in ICN. Thus, in this paper, we propose a cost-effective Popularity-based Cache Consistency (PCC) mechanism to guarantee the freshness of cached contents in ICN routers. PCC is able to balance the trade-off between consistency strength and related costs since it maintains strong consistency only for popular contents, and weak consistency for unpopular ones. Besides, we improve two other cache consistency mechanisms used in web caching, namely Polling-Every-Time (PET) and Time-To-Live (TTL), to suit ICN, and use them as benchmarks for comparison with PCC. To evaluate their performance, we first analyse the costs of these mechanisms, including user latency in terms of hop counts and the corresponding signaling overheads, and then conduct extensive simulations using a real topology. The simulation results show the high efficiency of PCC compared with the improved PET and TTL.
Gao, Y, Ma, H, Liu, W & Yu, S 2016, 'Cost Optimal Resource Provisioning for Live Video Forwarding Across Video Data Centers', Big Data Computing and Communications (BIGCOM 2016), 2nd International Conference on Big Data Computing and Communication (BigCom), Springer International Publishing, Shenyang, China, pp. 27-38.
View/Download from: Publisher's site
Gao, Y, Yu, H, Luo, S & Yu, S 2016, 'Efficient and Low-Delay Task Scheduling for Big Data Clusters in a Theoretical Perspective', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In big data clusters, task dispatchers assign arriving tasks to one of many workers (servers) for load balancing, and workers schedule task executions so as to rapidly complete queueing tasks. Both dispatchers and workers are important for optimizing task/job completion time (TCT/JCT). Current dispatchers probe the loads on workers before assigning every task/job, which incurs expensive message overheads and significant delays. Besides, they use simple First-In-First-Out (FIFO) scheduling on workers, which further harms their TCT/JCT performance due to head-of-line blocking. In our TASCO scheduler, workers report their loads to dispatchers so that dispatchers avoid probing them, which significantly reduces the expensive overheads and delays of current dispatchers. Motivated by recent observations that more than 60% of tasks in big data clusters are recurring with predictable task service time, we also use delay-optimal smallest-task-first (STF) scheduling to improve on the simple FIFO scheduling currently used on workers. We also derive the average TCT of TASCO based on its equivalence to an M/G/1/STF queue and the insight that workers reporting loads to dispatchers follow a Poisson process in the large-system limit. Our theories and simulation results demonstrate that the average TCT/JCT of TASCO outperforms state-of-the-art schedulers by 5.3% to 55.9%.
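For orientation, the FIFO baseline that STF improves on is governed by the classical Pollaczek-Khinchine formula for the mean waiting time in an M/G/1 queue, where lambda is the arrival rate and S the service time; STF reorders service to cut this average, which is the effect the paper's analysis quantifies.

```latex
% Mean FIFO waiting time in an M/G/1 queue (Pollaczek-Khinchine):
W_q = \frac{\lambda\,\mathbb{E}[S^2]}{2\,(1-\rho)}, \qquad \rho = \lambda\,\mathbb{E}[S] < 1
```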
Gao, Y, Yu, H, Luo, S & Yu, S 2016, 'Information-agnostic coflow scheduling with optimal demotion thresholds', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Previous coflow scheduling proposals improve the coflow completion time (CCT) over per-flow scheduling based on prior information about coflows, which makes them hard to apply in practice. The state-of-the-art information-agnostic coflow scheduling solution Aalo adopts Discretized Coflow-aware Least-Attained-Service (D-CLAS) to gradually demote coflows from the highest priority class into several lower priority classes when their sent-bytes-count exceeds predefined demotion thresholds. However, the current design standards for these demotion thresholds are crude because they do not analyze the impact of different demotion thresholds on the average coflow delay. In this paper, we model the D-CLAS system as an M/G/1 queue and formulate the average coflow delay as a function of the demotion thresholds. In addition, we prove the valley-like shape of the function and design the Down-hill Searching (DHS) algorithm, which locates the set of optimal demotion thresholds that minimizes the average coflow delay in the system. Real-data-center-trace driven simulations indicate that DHS improves average CCT by up to 6.20× over Aalo.
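The demotion mechanics are easy to sketch: a coflow starts in the highest class and drops a class each time its sent-bytes-count crosses a threshold. The threshold values below are invented placeholders; choosing them well is exactly what the paper's DHS algorithm does.

```python
def priority_class(sent_bytes, thresholds=(10e6, 100e6, 1e9)):
    """D-CLAS-style demotion: lower class number = higher priority."""
    for cls, limit in enumerate(thresholds):
        if sent_bytes < limit:
            return cls
    return len(thresholds)                # lowest class past all thresholds

for sent in (1e6, 50e6, 2e9):
    print(f"{int(sent):>10} bytes sent -> class {priority_class(sent)}")
```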
Gill, AQ & Hevary, S 2016, 'Cloud Monitoring Data Challenges: A Systematic Review.', ICONIP (1), International Conference on Neural Information Processing, Springer, Kyoto, Japan, pp. 72-79.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Organizations need to continuously monitor, source and process large amounts of operational data to optimize the cloud computing environment. The research problem is: what are the cloud monitoring data challenges, in particular for virtual CPU monitoring data? This paper adopts a Systematic Literature Review (SLR) approach to identify and report cloud monitoring data challenges. The SLR approach initially identified a large set of 1861 papers, of which 24 relevant papers were finally selected and reviewed to identify the five major challenges of cloud monitoring data: monitoring technology, virtualization technology, energy, availability and performance. The results of this review are expected to help researchers and practitioners understand cloud monitoring data challenges and develop innovative techniques and strategies to deal with them.
Gill, AQ, Chew, EK, Kricker, D & Bird, G 2016, 'Adaptive Enterprise Resilience Management: Adaptive Action Design Research in Financial Services Case Study.', CBI (1), IEEE Conference on Business Informatics (CBI), IEEE, Paris, pp. 113-122.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Resilience is the ability of an enterprise to absorb, recover from and adapt to a disruption. Being resilient is a complex undertaking for enterprises operating in a highly dynamic environment and striving for continuous efficiency and innovation. The challenge for enterprises is to offer and run a customer-centric and interdependent large portfolio of resilient services. The fundamental research question is: how to enable service resilience in the practical enterprise resilience context? This paper addresses this important research question, and reports findings from on-going (2014-2016) research on adaptive enterprise resilience management in an Australian financial services organization (FSO). This research is being conducted using the adaptive action-design research (ADR) method to iteratively research, develop and deliver the desired resilience framework in short increments. This paper presents the overall evolved adaptive enterprise resilience management framework and the details of its 'service resilience' element as one of the key outcomes from the second adaptive ADR increment.
Grochow, JA, Mulmuley, KD & Qiao, Y 2016, 'Boundaries of VP and VNP', Leibniz International Proceedings in Informatics (LIPIcs), International Colloquium on Automata, Languages and Programming, Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, Rome, Italy.
View/Download from: Publisher's site
View description>>
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether VP = VP-bar, where VP is the class of families of polynomials that can be computed by arithmetic circuits of polynomial degree and size, and VP-bar (the closure of VP) is the class of families of polynomials that can be approximated infinitesimally closely by arithmetic circuits of polynomial degree and size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that VP-bar is not contained in VP. Towards that end, we introduce three degenerations of VP (i.e., sets of points in VP-bar), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP ⊆ Newton-VP ⊆ VP* ⊆ VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating VP-bar from VP. Although we do not yet construct explicit candidates for the polynomial families in VP-bar \ VP, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be nongeneric by showing that, for many finite quivers (including some wild ones), the Newton degeneration of any generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.
Gu, Y-L, Zhu, X-Y, Zhang, G & He, Y 2016, 'Pareto Optimal Scheduling for Synchronous Data Flow Graphs on Heterogeneous Multiprocessor', 2016 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016), 21st International Conference on Engineering of Complex Computer Systems (ICECCS), IEEE, Dubai, United Arab Emirates, pp. 91-100.
View/Download from: Publisher's site
Guo, Y, Zhu, J, Lu, H & Lei, G 2016, 'Design considerations of electric motors with soft magnetic composite cores', 2016 IEEE 8th International Power Electronics and Motion Control Conference (IPEMC-ECCE Asia), 2016 IEEE 8th International Power Electronics and Motion Control Conference (IPEMC 2016 - ECCE Asia), IEEE, Hefei, China, pp. 3007-3011.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Soft magnetic composite (SMC) materials possess many unique properties that are particularly suitable for the development of electric motors with novel structures for various electric drive systems. The unique properties of SMC materials include three-dimensional (3-D) magnetic and thermal isotropy, very low eddy current loss, and the prospect of very low-cost mass production. Therefore, the application of SMC materials in electrical appliances, particularly in electric motors, has attracted great research interest. However, SMC materials also have some drawbacks, e.g. low permeability, high hysteresis loss and low mechanical strength, and hence a direct replacement of electrical steels by SMC would not necessarily lead to satisfactory or improved motor performance. To fully explore the application potential of SMC materials, their unique properties should be fully employed while the effects of their drawbacks are avoided or minimized. This paper presents some key issues in the design of SMC electric motors, based on the extensive research of the past two decades by various researchers, including the authors. The key design issues are discussed and some conclusions are drawn to guide future effort in this area.
Hao, P, Zhang, G & Lu, J 2016, 'Enhancing cross domain recommendation with domain dependent tags', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 1266-1273.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. One challenge in recommender systems is dealing with data sparsity. To handle this issue, social tags are utilized to bring disjoint domains together for knowledge transfer in cross-domain recommendation. The most intuitive way is to use common tags that are present in both the source and target domains. However, it is difficult to obtain a strong domain connection by exploiting a small number of common tags, especially when the tagging data in the target domain is too scarce to share enough common tags with the source domain. In this paper we propose a novel framework, called Enhanced Tag-induced Cross Domain Collaborative Filtering (ETagiCDCF), to integrate the rich information contained in domain-dependent tags into the recommendation procedure. We perform experiments on two public datasets and compare with several single and cross domain recommendation approaches; the results demonstrate that ETagiCDCF can effectively address data sparseness and improve recommendation performance.
Hazber, MAG, Li, R, Xu, G & Alalayah, KM 2016, 'An Approach for Automatically Generating R2RML-Based Direct Mapping from Relational Databases', Communications in Computer and Information Science, International Conference of Young Computer Scientists, Engineers and Educators (ICYCSEE), Springer Singapore, Harbin, China, pp. 151-169.
View/Download from: Publisher's site
View description>>
For integrating relational databases (RDBs) into semantic web applications, the W3C RDB2RDF Working Group recommended two approaches, Direct Mapping (DM) and R2RML. The DM provides a set of mapping rules according to the RDB schema, while R2RML allows users to manually define mappings according to an existing target ontology. The major problem in using R2RML is the effort of creating R2RML mapping documents manually, which can introduce many mistakes into the R2RML documents and requires domain experts. In this paper, we propose and implement an approach to generate R2RML mapping documents automatically from an RDB schema. The R2RML mapping reflects the behavior of the DM specification and allows any R2RML parser to generate a set of RDF triples from relational data. The input to the generation approach is a DBsInfo class that is automatically generated from the relational schema. An experimental prototype is developed and shows the effectiveness of our approach's algorithms.
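The flavour of a direct mapping is easy to sketch: each row becomes a subject IRI and each column a predicate-object pair. The snippet below is a simplified illustration with a hypothetical base IRI and table, not the generated R2RML itself, which would express the same rules declaratively in Turtle.

```python
BASE = "http://example.com/db/"           # hypothetical base IRI

def direct_map(table, pkey, rows):
    """Emit one RDF triple per (row, column), W3C Direct Mapping style."""
    triples = []
    for row in rows:
        subject = f"<{BASE}{table}/{pkey}={row[pkey]}>"
        for column, value in row.items():
            triples.append((subject, f"<{BASE}{table}#{column}>", f'"{value}"'))
    return triples

rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
for t in direct_map("Student", "id", rows):
    print(" ".join(t), ".")
```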
He, H, Maple, C, Watson, T, Tiwari, A, Mehnen, J, Jin, Y & Gabrys, B 2016, 'The security challenges in the IoT enabled cyber-physical systems and opportunities for evolutionary computing & other computational intelligence', 2016 IEEE Congress on Evolutionary Computation (CEC), 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE, Vancouver, Canada, pp. 1015-1021.
View/Download from: Publisher's site
View description>>
The Internet of Things (IoT) has given rise to the fourth industrial revolution (Industrie 4.0), and it brings great benefits by connecting people, processes and data. However, cybersecurity has become a critical challenge in IoT enabled cyber-physical systems, from the connected supply chain and the Big Data produced by huge numbers of IoT devices, to industry control systems. Evolutionary computation, combined with other computational intelligence, will play an important role in cybersecurity, for example through artificial immune mechanisms for IoT security architecture, data mining/fusion in IoT enabled cyber-physical systems, and data-driven cybersecurity. This paper provides an overview of the security challenges in IoT enabled cyber-physical systems and of what evolutionary computation and other computational intelligence technologies could contribute to these challenges. The overview provides clues and guidance for research in IoT security with computational intelligence.
He, H, Tiwari, A, Mehnen, J, Watson, T, Maple, C, Jin, Y & Gabrys, B 2016, 'Incremental information gain analysis of input attribute impact on RBF-kernel SVM spam detection', 2016 IEEE Congress on Evolutionary Computation (CEC), 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE, Vancouver, Canada, pp. 1022-1029.
View/Download from: Publisher's site
View description>>
The massive increase in spam is posing a very serious threat to email and SMS, which have become important means of communication. Not only do spams annoy users, but they also pose a security threat. Machine learning techniques have been widely used for spam detection. Email spam can be detected by examining senders' behaviour, the contents of an email, its subject and source address, etc., while SMS spam detection is usually based on the tokens or features of messages due to their short content. However, a comprehensive analysis of email/SMS content may help users become aware of email/SMS spam; we cannot completely depend on automatic tools to identify all spam. In this paper, we propose an analysis approach based on information entropy and incremental learning to see how various features affect the performance of an RBF-kernel SVM spam detector, so as to increase awareness of spam through its features. The experiments were carried out on the spambase and SMSSpamCollection databases in the UCI machine learning repository. The results show that some features have significant impacts on spam detection, of which users should be aware, and that there exists a feature space that achieves Pareto efficiency in True Positive Rate and True Negative Rate.
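The entropy-based ranking at the heart of this analysis can be sketched directly: information gain is the entropy of the class label minus its expected entropy after splitting on one (binarised) attribute. The six-message sample below is invented, not taken from spambase or SMSSpamCollection.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """H(label) minus expected H(label) after splitting on the feature."""
    split = {}
    for f, y in zip(feature, labels):
        split.setdefault(f, []).append(y)
    rem = sum(len(ys) / len(labels) * entropy(ys) for ys in split.values())
    return entropy(labels) - rem

contains_free = [1, 1, 0, 0, 1, 0]        # attribute: message contains "free"
is_spam       = [1, 1, 0, 0, 1, 1]
print(round(info_gain(contains_free, is_spam), 3))   # ~0.459 bits
```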
Herron, D, Andalibi, N, Haimson, O, Moncur, W & van den Hoven, E 2016, 'HCI and Sensitive Life Experiences', Proceedings of the NordiCHI '16: The 9th Nordic Conference on Human-Computer Interaction - Game Changing Design, Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, Gothenburg, Sweden, pp. 1-3.
View/Download from: Publisher's site
View description>>
HCI research has identified a number of life events and life transitions which see individuals in a vulnerable state, such as gender transition, domestic abuse, romantic relationship dissolution, bereavement, and even genocide. Although these life events differ across the human lifespan, considering them as a group of 'sensitive life experiences', and exploring the similarities and differences in how we approach those experiences as researchers could be invaluable in generating a better understanding of them. In this workshop, we aim to identify current opportunities for, and barriers to, the design of social computing systems that support people during sensitive life events and transitions. Participants will take part in activities centred around exploring the similarities and differences between their own and others' research methods and results, drawing on their own experiences in discussions around carrying out research in these sensitive contexts.
Herron, D, Moncur, W & van den Hoven, E 2016, 'Digital Possessions After a Romantic Break Up', Proceedings of the NordiCHI '16: The 9th Nordic Conference on Human-Computer Interaction - Game Changing Design, Nordic Conference on Human-Computer Interaction (NordiCHI), ACM Digital Library, Gothenburg, Sweden.
View/Download from: Publisher's site
View description>>
© 2016 ACM. With technology becoming more pervasive in everyday life, it is common for individuals to use digital media to support the enactment and maintenance of romantic relationships. Partners in a relationship may create digital possessions frequently. However, after a relationship ends, individuals typically seek to disconnect from their ex-partner. This becomes difficult due to the partners' interwoven digital presence and digital possessions. In this paper, we report on a qualitative study exploring individuals' experiences of relationship break up in a digital context, and discuss their attitudes towards digital possessions from those relationships. Five main themes emerged: digital possessions that sustain relationships, comparing before and after, tainted digital possessions, digital possessions and invasions of privacy, involved and emotional reminiscing. Design opportunities were identified in managing attitudes towards digital possessions, disconnecting and reconnecting, and encouraging awareness of digital possessions.
Holland, S, McPherson, AP, Mackay, WE, Wanderley, MM, Gurevich, MD, Mudd, TW, O'Modhrain, S, Wilkie, KL, Malloch, JW, Garcia, J & Johnston, A 2016, 'Music and HCI', Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI'16: CHI Conference on Human Factors in Computing Systems, ACM, pp. 3339-3346.
View/Download from: Publisher's site
View description>>
Music is an evolutionarily deep-rooted, abstract, real-time, complex, non-verbal, social activity. Consequently, interaction design in music can be a valuable source of challenges and new ideas for HCI. This workshop will reflect on the latest research in Music and HCI (Music Interaction for short), with the aim of strengthening the dialogue between the Music Interaction community and the wider HCI community. We will explore recent ideas from Music Interaction that may contribute new perspectives to general HCI practice, and conversely, recent HCI research in non-musical domains with implications for Music Interaction. We will also identify any concerns of Music Interaction that may require unique approaches. Contributors engaged in research in any area of Music Interaction or HCI who would like to contribute to a sustained widening of the dialogue between the distinctive concerns of the Music Interaction community and the wider HCI community will be welcome.
Hollmén, J, Spiliopoulou, M, Kane, B, Marshall, A, Soda, P, Antani, S & McGregor, C 2016, 'Preface', 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, pp. xiii-xiv.
View/Download from: Publisher's site
Howlett, RJ, Jain, LC, Gabrys, B, Toro, C & Lim, CP 2016, 'Preface', Procedia Computer Science, Elsevier BV, pp. 1-6.
View/Download from: Publisher's site
Huang, J, Li, S, Duan, Q, Yu, R & Yu, S 2016, 'QoS Correlation-Aware Service Composition for Unified Network-Cloud Service Provisioning', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Recent developments in Cloud and networking technologies have stimulated the unification of network and Cloud service provisioning, in which service composition plays a crucial role. While encouraging progress has been made toward network-Cloud service composition, the impact of correlated network and Cloud services on the QoS of composite services has not been sufficiently studied. In this paper, we address the challenging problem of QoS correlation-aware network and Cloud service composition. Specifically, we formulate the problem as a multi-constraint optimal path problem and propose a novel algorithm to solve it. We also evaluate the performance of the proposed algorithm with extensive simulations. The experimental results show that the proposed algorithm is effective and efficient, and that it yields service composition solutions with better QoS guarantees by considering QoS correlations among different services.
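As a toy illustration of the multi-constraint optimal path (MCOP) formulation mentioned above: among candidate paths whose end-to-end delay stays within a QoS bound, choose the cheapest. Exhaustive enumeration is only feasible on very small graphs; the paper proposes a dedicated algorithm, which this sketch does not reproduce.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge('src', 'a', cost=2, delay=1); G.add_edge('a', 'dst', cost=2, delay=4)
G.add_edge('src', 'b', cost=5, delay=1); G.add_edge('b', 'dst', cost=1, delay=1)

def mcop(G, s, t, delay_bound):
    """Cheapest s-t path whose total delay respects the bound."""
    best, best_cost = None, float('inf')
    for path in nx.all_simple_paths(G, s, t):
        edges = list(zip(path, path[1:]))
        delay = sum(G[u][v]['delay'] for u, v in edges)
        cost = sum(G[u][v]['cost'] for u, v in edges)
        if delay <= delay_bound and cost < best_cost:
            best, best_cost = path, cost
    return best, best_cost

print(mcop(G, 'src', 'dst', delay_bound=3))   # -> (['src', 'b', 'dst'], 6)
```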
Huo, H, Liu, X, Li, J, Yang, H, Peng, D & Chen, Q 2016, 'A Weighted K-AP Query Method for RSSI Based Indoor Positioning', Springer International Publishing, pp. 150-163.
View/Download from: Publisher's site
Hussain, W, Hussain, F & Hussain, O 2016, 'Allocating optimized resources in the cloud by a viable SLA model', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 1282-1287.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A cloud business environment comprises service providers and service consumers. Services are supplied through a Service Level Agreement (SLA) which defines all the deliverables, commitments, obligations, QoS, violation penalties, etc. that help a service provider and a service consumer execute their business transactions. The primary aim of a service provider is to fulfil its commitment to a consumer by forming a viable SLA that wisely assigns the appropriate amount of resources to a requesting consumer. In this paper, we propose a viable SLA model that helps a service provider form a viable agreement with a consumer based on the consumer's previous resource usage profile. The model uses a Fuzzy Inference System and takes the reliability and contract duration of a consumer as input to calculate the suitability of that consumer, which is then used as input, along with the risk propensity of the service provider, to determine the amount of resources to offer. We evaluate our approach and find that by using an optimized viable SLA model, providers are able to allocate an appropriate amount of resources and avoid SLA violations.
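A toy Mamdani-style sketch of the idea: consumer reliability and contract duration are fuzzified, rules map them to a suitability score, and suitability is combined with provider risk propensity to scale the offered resources. The membership functions, rules and scaling below are invented for illustration and do not reproduce the paper's actual Fuzzy Inference System.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(reliability, duration):
    """Both inputs normalised to [0, 1]; weighted-average defuzzification."""
    low_r, high_r = tri(reliability, -0.5, 0.0, 0.6), tri(reliability, 0.4, 1.0, 1.5)
    short_d, long_d = tri(duration, -0.5, 0.0, 0.6), tri(duration, 0.4, 1.0, 1.5)
    rules = [(min(high_r, long_d), 0.9),    # reliable, long-term: very suitable
             (min(high_r, short_d), 0.6),
             (min(low_r, long_d), 0.4),
             (min(low_r, short_d), 0.1)]    # unreliable, short-term: unsuitable
    strength = sum(w for w, _ in rules) or 1.0
    return sum(w * out for w, out in rules) / strength

def resources_to_offer(requested, reliability, duration, risk_propensity):
    """A risk-taking provider offers more than suitability alone suggests."""
    s = suitability(reliability, duration)
    return requested * min(1.0, s + 0.3 * risk_propensity)
```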
Hussain, W, Hussain, F, Hussain, O & IEEE 2016, 'QoS Prediction Methods to Avoid SLA Violation in Post-Interaction Time Phase', PROCEEDINGS OF THE 2016 IEEE 11TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), IEEE Conference on Industrial Electronics and Applications, IEEE, Hefei, China, pp. 32-37.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to the dynamic nature of cloud computing, it is very important for small to medium scale service providers to optimally assign computing resources and apply accurate prediction methods that enable the best resource management. The choice of an ideal quality of service (QoS) prediction method is one of the key factors in business transactions that help a service provider manage the risk of SLA violations by taking appropriate and immediate action to reduce the occurrence of, or avoid, operations that may cause risk. In this paper we analyze ten prediction methods, including neural network and stochastic methods, to predict time series cloud data and compare their prediction accuracy over five time intervals. We use Cascade Forward Backpropagation, Elman Backpropagation, Generalized Regression, NARX, Simple Exponential Smoothing, Simple Moving Average, Weighted Moving Average, Extrapolation, Holt-Winters Double Exponential Smoothing and ARIMA to predict resource usage at 1, 2, 3, 4 and 5 hours into the future, using Root Mean Square Error and Mean Absolute Deviation as benchmarks of prediction accuracy. From the prediction results we observed that the ARIMA method provides the most accurate predictions for all time intervals.
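A minimal sketch of the evaluation loop for two of the ten methods listed above (ARIMA via statsmodels and a simple moving average), scored with RMSE and MAD as in the paper. The usage series, ARIMA order and horizon are placeholders.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def evaluate(history, actual_future, horizon=5):
    """Return RMSE and MAD for ARIMA and simple-moving-average forecasts."""
    arima_pred = ARIMA(history, order=(2, 1, 2)).fit().forecast(steps=horizon)
    sma_pred = np.repeat(np.mean(history[-horizon:]), horizon)
    scores = {}
    for name, pred in (('ARIMA', arima_pred), ('SMA', sma_pred)):
        err = np.asarray(actual_future[:horizon]) - np.asarray(pred)
        scores[name] = {'RMSE': float(np.sqrt(np.mean(err ** 2))),
                        'MAD': float(np.mean(np.abs(err)))}
    return scores
```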
Hussain, W, Hussain, FK & Hussain, OK 2016, 'SLA Management Framework to Avoid Violation in Cloud', NEURAL INFORMATION PROCESSING, ICONIP 2016, PT III, International Conference on Neural Information Processing, Springer International Publishing, Kyoto, Japan, pp. 309-316.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Cloud computing is an emerging technology with broad scope to offer a wide range of services that are revolutionizing existing IT infrastructure. This Internet-based technology offers services such as on-demand service, shared resources, multi-tenant architecture, scalability, portability and elasticity, giving consumers the illusion of infinite resources through virtualization. Because of the elastic nature of the cloud, it is critical for a service provider, especially a small or medium cloud provider, to form a viable SLA with a consumer to avoid service violations. The SLA is a key agreement that needs to be intelligently formed and monitored; if there is a chance of a service violation, the provider should be informed so that it can take remedial action to avoid the violation. In this paper we propose a viable SLA management framework that comprises two time phases: a pre-interaction time phase and a post-interaction time phase. Our viable SLA framework helps a service provider decide on a consumer's request, offer an amount of resources to the consumer, predict QoS parameters, monitor run-time QoS parameters and take appropriate action to mitigate risk when there is a variation between the predicted and agreed QoS parameters.
Hussain, W, Zowghi, D, Clear, T, MacDonell, S & Blincoe, K 2016, 'Managing Requirements Change the Informal Way: When Saying ‘No’ is Not an Option', 2016 IEEE 24th International Requirements Engineering Conference (RE), 2016 IEEE 24th International Requirements Engineering Conference (RE), IEEE, Beijing, China, pp. 126-135.
View/Download from: Publisher's site
View description>>
Software has always been considered as malleable. Changes to software requirements are inevitable during the development process. Despite many software engineering advances over several decades, requirements changes are a source of project risk, particularly when businesses and technologies are evolving rapidly. Although effectively managing requirements changes is a critical aspect of software engineering, conceptions of requirements change in the literature and approaches to their management in practice still seem rudimentary. The overall goal of this study is to better understand the process of requirements change management. We present findings from an exploratory case study of requirements change management in a globally distributed setting. In this context we noted a contrast with the traditional models of requirements change. In theory, change control policies and formal processes are considered as a natural strategy to deal with requirements changes. Yet we observed that "informal requirements changes" (InfRc) were pervasive and unavoidable. Our results reveal an equally 'natural' informal change management process that is required to handle InfRc in parallel. We present a novel model of requirements change which, we argue, better represents the phenomenon and more realistically incorporates both the informal and formal types of change.
Ijaz, K, Wang, Y, Milne, D & Calvo, RA 2016, 'Competitive vs Affiliative Design of Immersive VR Exergames', SERIOUS GAMES, JCSG 2016, 2nd International Joint Conference on Serious Games (JCSG), Springer International Publishing, Griffith Univ, Brisbane, AUSTRALIA, pp. 140-150.
View/Download from: Publisher's site
Ijaz, K, Wang, Y, Milne, D & Calvo, RA 2016, 'VR-Rides: Interactive VR Games for Health', SERIOUS GAMES, JCSG 2016, 2nd International Joint Conference on Serious Games (JCSG), Springer International Publishing, Griffith Univ, Brisbane, AUSTRALIA, pp. 289-292.
View/Download from: Publisher's site
Ikeda, M, Pop, F & Hussain, F 2016, 'Message from CISIS 2016 Program Co-Chairs', 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS), 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS), IEEE, pp. xviii-xviii.
View/Download from: Publisher's site
Inan, DI, Beydoun, G & Opper, S 2015, 'Towards knowledge sharing in disaster management: An agent oriented knowledge analysis framework', Proceedings of the Australasian Conference on Information Systems 2015, Australasian Conference on Information Systems 2015.
View description>>
Disaster Management (DM) is a complex set of interrelated activities. The activities are often knowledge intensive and time sensitive. Sharing the required knowledge in a timely manner is critical for DM. In developed countries, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DMP) that can be accessed as needs arise. However, accessing the appropriate plan in a timely manner and sharing activities between plans often requires domain knowledge and intimate knowledge of the plans in the first place. In this paper, we introduce an agent-based knowledge analysis method to convert DMPs into a collection of knowledge units that can be stored in a unified repository. The repository of DM actions then enables the mixing and matching of knowledge between different plans. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). We use the flood management plans used by the SES in NSW to illustrate and give a preliminary validation of the approach. It is illustrated using DMPs along the flood-prone Murrumbidgee River in central NSW.
Inibhunu, C & McGregor, C 2016, 'Machine learning model for temporal pattern recognition', 2016 IEEE EMBS International Student Conference (ISC), 2016 IEEE EMBS International Student Conference (ISC), IEEE, pp. 1-4.
View/Download from: Publisher's site
View description>>
Temporal abstraction and data mining are two research fields that have tried to synthesise time-oriented data and bring out an understanding of the hidden relationships that may exist between time-oriented events. In clinical settings, the ability to know the hidden relationships in patient data as they unfold could help save a life by aiding the detection of conditions that are not obvious to clinicians and healthcare workers. Understanding the hidden patterns is a huge challenge due to the exponential search space unique to time-series data. In this paper, we propose a temporal pattern recognition model based on dimension reduction and similarity measures that maintains the temporal nature of the raw data.
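A small sketch of the dimension-reduction-plus-similarity idea: compress each window of a physiological stream with piecewise aggregate approximation (PAA), then flag windows whose compressed form lies close to a known pattern. PAA is one common choice of reduction; the paper's exact measures are not reproduced here.

```python
import numpy as np

def paa(window, segments=8):
    """Piecewise aggregate approximation: mean of each of `segments` chunks."""
    window = np.asarray(window, dtype=float)
    return np.array([chunk.mean() for chunk in np.array_split(window, segments)])

def matches(window, pattern, threshold=1.0):
    """True when the reduced window is within `threshold` of the pattern."""
    return float(np.linalg.norm(paa(window) - paa(pattern))) < threshold
```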
Izaddoost, A & McGregor, C 2016, 'Enhance Network Communications in a Cloud-Based Real-Time Health Analytics Platform Using SDN', 2016 IEEE International Conference on Healthcare Informatics (ICHI), 2016 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Chicago, IL, pp. 388-391.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Transferring collected physiological data from health facilities to a cloud-based health analytics platform is an efficient and cost-effective way for urban-based specialists to provide clinical support to rural and remote healthcare centres. A cloud-based healthcare platform reduces the need to transfer patients because of a lack of local clinical experts, or to provide consultative support over the phone. However, transferring physiological data streams through a data path of insufficient quality and unsatisfactory conditions may negatively affect real-time data processing. To address this issue, we study the benefit of using software-defined networking (SDN) technology. SDN, an emerging networking paradigm, can be employed to manage and control network conditions and apply desired policies. This research uses key features of SDN technology to transfer physiological data streams through an alternative, better-quality path rather than the congested predetermined shortest path, in order to enhance data transfer reliability and improve the quality of real-time data processing.
Jalali, R, Dauda, A, El-Khatib, K, McGregor, C & Surti, C 2016, 'An architecture for health data collection using off-the-shelf health sensors', 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
Nowadays many people, not only those with health problems, are becoming more health conscious. With the advent of sensor-based technologies, it has become possible to create wearable wireless biometric sensor networks, known as Body Sensor Networks (BSNs), which allow people to collect their health data and send it remotely for further analysis and storage. Research has shown that the use of BSNs enables remote wireless diagnosis of various health conditions. In this paper, we propose a novel layered architecture for a smart healthcare system in which health community service providers, patients, doctors and hospitals have access to real-time data gathered using various sensory mechanisms. An experimental case study has been implemented for evaluation. Early results show the benefits of this system in improving the quality of health care.
Jayakodi, K, Bandara, M & Meedeniya, D 2016, 'An automatic classifier for exam questions with WordNet and Cosine similarity', 2016 Moratuwa Engineering Research Conference (MERCon), 2016 Moratuwa Engineering Research Conference (MERCon), IEEE, pp. 12-17.
View/Download from: Publisher's site
Jiang, F, Gan, J, Xu, Y & Xu, G 2016, 'Coupled behavioral analysis for user preference-based email spamming', 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Durham, NC, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, we develop and implement a new email spam-filtering system leveraging coupled text similarity analysis of user preferences and a virtual meta-layer user-based email network; we take social networks or campus LAN networks as the spam social network scenario. Few current practices exploit social networking initiatives to assist in spam filtering. A social network has a large number of account features and attributes to be considered. Instead of considering this large number of account features, we construct a new model, called the meta-layer email network, which reduces these features by considering only individual users' actions as indicators of user preference; these common user actions are used to construct a social behaviour-based email network. With further analytic results from text similarity measurements of individual email contents, the behaviour-based virtual email network can be improved to a much higher accuracy on user preferences. Furthermore, a coupled selection model is developed for this email network, so that all relevant factors/features can be considered as a whole and emails can be recommended to each user individually. The experimental results show that the new approach achieves higher precision and accuracy, with better email ranking in favour of personalised preferences.
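A sketch of the text-similarity ingredient only: scoring how close a new email is to messages a user previously acted on (read, replied to, saved), as a proxy for preference. TF-IDF with cosine similarity stands in for the paper's coupled analysis; the corpus is a placeholder.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def preference_score(new_email, preferred_emails):
    """Highest cosine similarity between the new email and preferred ones."""
    matrix = TfidfVectorizer(stop_words='english').fit_transform(
        preferred_emails + [new_email])
    sims = cosine_similarity(matrix[-1], matrix[:-1])   # new vs. each preferred
    return float(sims.max())
```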
Jiang, J, Qu, Y, Yu, S, Zhou, W & Wu, W 2016, 'Studying the Global Spreading Influence and Local Connections of Users in Online Social Networks', 2016 IEEE International Conference on Computer and Information Technology (CIT), 2016 IEEE International Conference on Computer and Information Technology (CIT), IEEE, Nadi, FIJI, pp. 431-435.
View/Download from: Publisher's site
Jiang, J, Wen, S, Yu, S, Zhou, W & Qian, Y 2016, 'Analysis of the Spreading Influence Variations for Online Social Users under Attacks', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Identifying influential spreaders in online social networks (OSNs) has long been an important but difficult problem. Distinguished from previous works that mainly focused on the stationary features of users' influence, we systematically study the variations in users' spreading capability, given that influential spreaders are more likely to be the targets of various cyber attacks in real OSNs. In order to rank users' spreading capability, we adopt the k-shell structure, which assigns a coreness index, ks, to each user. We find that users' spreading capability can change considerably when attacks occur in specific structures of OSNs. Generally, if the OSN structure is assortative (i.e., large-degree nodes preferably connect to nodes with large degree), users' spreading capability is resilient to attacks. However, if the OSN structure is disassortative (i.e., large-degree nodes preferably connect to nodes with small degree), users' spreading capability decreases significantly under attacks. We further carried out a series of empirical studies on real OSN datasets to disclose the causes of the variations induced by attacks. The research presented in this paper benefits decision makers who wish to protect propagation, as in product promotion, or to prevent diffusion, as with rumours.
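A minimal sketch of the coreness index used above: networkx's k-core decomposition assigns each user a ks value, and the degree assortativity coefficient indicates whether the network is assortative (r > 0) or disassortative (r < 0). The graph is a stand-in for a real OSN.

```python
import networkx as nx

G = nx.karate_club_graph()                       # placeholder social graph
ks = nx.core_number(G)                           # coreness index ks per user
top_spreaders = sorted(ks, key=ks.get, reverse=True)[:5]
r = nx.degree_assortativity_coefficient(G)       # r > 0: resilient to attacks
print(top_spreaders, round(r, 3))
```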
Johnston, A & Pickrell, M 2016, 'Designing for technicians working in the field', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania, Australia, pp. 494-498.
View/Download from: Publisher's site
View description>>
Copyright © 2016 ACM. Mobile applications are frequently used by technicians and logistics personnel to access documentation and communicate and log information about the work they do in the field. Currently, however, there are no context-specific usability heuristics for use by designers who are building mobile applications for this sector. By conducting contextual inquiries with technicians and logistics personnel who use mobile applications for their day to day work, we identified specific usability issues affecting the use of these applications. From this research, we propose a set of eight heuristics for use by designers and developers creating mobile applications for users in this area.
Juang, C-F & Chang, Y-C 2016, 'Data-driven interpretable fuzzy controller design through multi-objective genetic algorithm', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, pp. 002403-002408.
View/Download from: Publisher's site
Kamaleswaran, R, James, A, Collins, C & McGregor, C 2016, 'CoRAD: Visual Analytics for Cohort Analysis', 2016 IEEE International Conference on Healthcare Informatics (ICHI), 2016 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Chicago, IL, pp. 517-526.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In this paper, we introduce a novel dynamic visual analytic tool called the Cohort Relative Aligned Dashboard (CoRAD). We present the design components of CoRAD, along with the alternatives that led to the final instantiation. We also present an evaluation involving expert clinical researchers, comparing CoRAD against an existing analytics method. The results of the evaluation show CoRAD to be more usable and useful for the target user. The relative alignment of physiologic data to clinical events was found to be a highlight of the tool. Clinical experts also found the interactive selection and filter functions useful in reducing information overload. Moreover, CoRAD was also found to allow clinical researchers to generate alternative hypotheses and test them in vivo.
Kang, G, Li, J & Tao, D 2016, 'Shakeout: A New Regularized Deep Neural Network Training Scheme.', AAAI, AAAI Conference on Artificial Intelligence, AAAI Press, Phoenix, USA, pp. 1751-1757.
View description>>
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. The invention of effective training techniques has contributed largely to this success. The so-called 'Dropout' training scheme is one of the most powerful tools for reducing over-fitting. From the statistical point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or inverse the contribution of each unit to the next layer. We show that our scheme leads to a combination of L1 and L2 regularization imposed on the weights, a combination proved effective by Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on the real-life image datasets MNIST and CIFAR-10 show that Shakeout deals with over-fitting effectively.
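An illustrative toy of the behaviour the abstract describes: at training time each unit's contribution is randomly either enhanced or inverted, rather than zeroed as in Dropout. The constants below are invented for clarity; they do not reproduce the paper's exact scaling, which is what yields the combined L1/L2 penalty.

```python
import numpy as np

def shakeout_like_mask(n_units, tau=0.5, c=0.1, rng=np.random.default_rng(0)):
    """With probability tau invert (and scale) a unit, otherwise enhance it."""
    keep = rng.random(n_units) > tau
    return np.where(keep, 1.0 + c, -c)

def forward_train(x, W, tau=0.5, c=0.1):
    """Training-time forward pass with randomly perturbed unit contributions."""
    mask = shakeout_like_mask(x.shape[-1], tau, c)
    return (x * mask) @ W
```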
Khalili, SM, Babagolzadeh, M, Yazdani, M, Saberi, M & Chang, E 2016, 'A Bi-objective Model for Relief Supply Location in Post-Disaster Management', 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), IEEE, Ostrava, CZECH REPUBLIC, pp. 428-434.
View/Download from: Publisher's site
Khan, M, Xu, X, Dou, W & Yu, S 2016, 'OSaaS: Online Shopping as a Service to Escalate E-Commerce in Developing Countries', 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), IEEE, pp. 1402-1409.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Service computing, particularly Everything as a Service (XaaS), has brought immense change to cloud computing and boosts business strategies by introducing online platforms and technologies. It creates a new horizon of opportunities in business process modeling, management and online shopping. This paper addresses a set of problems that hold back the growth of E-Commerce in developing countries, especially in rural areas: low literacy, communication language, limited Internet access, few Internet users, and the non-availability of credit or debit cards are the core obstacles to online shopping. To overcome these challenges, an Online Shopping as a Service model with a Cloud Service Center is proposed; the model escalates online shopping usage to enhance E-Commerce. The cloud service center introduced in this model plays a third-party role between consumers and online vendors. Consumers can place an order with the cloud service center in their local language via phone, or can visit the facility center for the desired product. Experimental analysis showed that the proposed model builds the confidence of online vendors for large-scale implementation in the least developed regions.
Ko, LW, Komarov, SH, Liu, SH, Hsu, WC, König, P, Goeke, P, David Hairston, W, Lin, C & Jung, TP 1970, 'Investigation of brain activity patterns related to the effect of classroom fatigue'.
Ko, LW, Yang, BJ, Singanamalla, SKR, Lin, C, King, JT & Jung, TP 1970, 'A practical neurogaming design based on SSVEP brain computer interface'.
Kocaballi, AB & Yorulmaz, Y 2016, 'Performative Photography as an Ideation Method', Proceedings of the 2016 ACM Conference on Designing Interactive Systems, DIS '16: Designing Interactive Systems Conference 2016, ACM, Queensland Univ Technol, Brisbane, AUSTRALIA, pp. 1083-1095.
View/Download from: Publisher's site
Kolagani, N, Gray, S & Voinov, A 2016, 'Session D1: Tools and methods of participatory modelling', Environmental Modelling and Software for Supporting a Sustainable Future: Proceedings of the 8th International Congress on Environmental Modelling and Software (iEMSs 2016), p. 804.
Kolamunna, H, Hu, Y, Perino, D, Thilakarathna, K, Makaroff, D, Guan, X & Seneviratne, A 2016, 'AFit', Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, pp. 309-312.
View/Download from: Publisher's site
Kolamunna, H, Hu, Y, Perino, D, Thilakarathna, K, Makaroff, D, Guan, X & Seneviratne, A 2016, 'AFV', Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, pp. 981-991.
View/Download from: Publisher's site
Korhonen, JJ, Lapalme, J, McDavid, D & Gill, AQ 2016, 'Adaptive Enterprise Architecture for the Future: Towards a Reconceptualization of EA.', CBI (1), IEEE Conference on Business Informatics (CBI), IEEE, Paris, France, pp. 272-281.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In some conventional definitions, Enterprise Architecture (EA) is conceived as a descriptive overview of the enterprise, while in other views EA is seen as a prescriptive framework of principles and models that helps translate business strategy to enterprise change. The conceptualizations of EA also vary in scope. There is an increasing recognition of EA as a systemic, enterprise-wide capability encompassing all relevant facets of the organization, transcending the traditional IT-centric view. However, we argue that none of the conventional conceptualizations of EA are adaptive in the face of today's complex environment. We view that an adaptive EA must go beyond a single organization and fully appreciate enterprise-in-environment ecosystemic perspective. Drawing on the heritage of Open Socio-Technical Systems Design and adopting the 'three schools of thought' as a meta-paradigmatic backdrop, the paper features four different views of long-time scholar-practitioners, who discuss what an adaptive enterprise architecture would entail. Integration of these views paints a radically reconceptualized picture of enterprise architecture for the future. With this paper, we want to lay a foundation for a debate on the need for alternative conceptualizations, manifestations and research agenda for enterprise architecture.
Kuznecova, T & Voinov, AA 2016, 'A conceptual framework for an agricultural agent-based model with a two-level social component: Modeling farmer groups', Environmental Modelling and Software for Supporting a Sustainable Future: Proceedings of the 8th International Congress on Environmental Modelling and Software (iEMSs 2016), pp. 1045-1053.
View description>>
In the last decade, collective actions within smallholder groups and cooperatives have been promoted by various development programs and projects. However, to develop appropriate programs and policies aimed at supporting cooperation among farmers, an approach is required that can reflect the dynamics of an agricultural system resulting from decision-making and interactions between elements at different levels and scales. In this study, we focus on groups of smallholders organizing for collective crop production and/or marketing. Our aim is to provide an approach and a tool to gain deeper insight into how cooperative groups emerge and perform under different conditions and objective functions. An agent-based model will be built as the core of such a tool. The main difference from existing agricultural models is that we consider at least two levels of social agents and corresponding decision-making categories: individual and collective. The collective level refers to a dynamic cooperative group or network emerging as a higher-level agent from the individual agents. Moreover, we seek trade-offs between simplicity and a more realistic representation of social agent behavior, compared to a purely rational economic optimization approach. We start with a conceptual model to represent the system of interest. More specifically, in this model we: i) identify system components and the interactions between them at different levels; ii) explore the applicability of heuristics-based approaches, such as Consumat (Jager, 2000), for individual decision-making and an agent's transition to collective actions, when enriched with various socio-economic, spatial and environmental influencing factors; iii) explore ways to represent collective activities and decision-making in groups. The conceptual model, further combined with a land use/land cover and crop productivity framework, will be used as a prototype implementation to study emergence and performance o...
Lanese, I & Devitt, S 2016, 'Preface', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
Leong, TW & Johnston, B 2016, 'Co-design and Robots: A Case Study of a Robot Dog for Aging People', SOCIAL ROBOTICS, (ICSR 2016), International Conference on Social Robotics (ICSR), Springer, Kansas City, Missouri, United States, pp. 702-711.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. The day-to-day experiences of aging citizens differ significantly from young, technologically savvy engineers. Yet, well-meaning engineers continue to design technologies for aging citizens, informed by skewed stereotypes of aging without deep engagements from these users. This paper describes a co-design project based on the principles of Participatory Design that sought to provide aging people with the capacity to co-design technologies that suit their needs. The project combined the design intuitions of both participants and designers, on equal footing, to produce a companion robot in the form of a networked robotic dog. Besides evaluating a productive approach that empowers aging people in the process of co-designing and evaluating technologies for themselves, this paper presents a viable solution that is playful and meaningful to these elderly people; capable of enhancing their independence, social agency and well-being.
Leong, TW & Robertson, T 2016, 'Voicing values', Proceedings of the 14th Participatory Design Conference: Full papers - Volume 1, PDC '16: The 14th Participatory Design Conference, ACM, Aarhus, Denmark, pp. 31-40.
View/Download from: Publisher's site
View description>>
© 2016 ACM. This paper discusses Participatory Design workshops that sought to enable ageing people to articulate their core values in relation to their experiences of ageing. Our motivations were to better understand how ageing people decide whether or not to adopt and use particular technologies, and to gain insights into the kinds of technologies that might support their aspirations as they age. We contribute to current understandings of ageing people's values, including a range of values that were most important to our participants, insights into how these values are expressed and experienced in everyday lives, the interrelatedness of values in action, and how the three social dimensions of self, friends and family, as well as community influence the expression of values. The workshops demonstrated how engaged ageing people are with others and the broader communities they inhabit. We reflect on the processes, methods and tools that were useful when supporting people to voice their values and how this approach can support the participation of ageing people in design.
Li, Q, Qiao, M, Bian, W & Tao, D 2016, 'Conditional Graphical Lasso for Multi-label Image Classification', 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Las Vegas, Nevada, United States, pp. 2977-2986.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Multi-label image classification aims to predict multiple labels for a single image which contains diverse content. By utilizing label correlations, various techniques have been developed to improve classification performance. However, current existing methods either neglect image features when exploiting label correlations or lack the ability to learn image-dependent conditional label structures. In this paper, we develop conditional graphical Lasso (CGL) to handle these challenges. CGL provides a unified Bayesian framework for structure and parameter learning conditioned on image features. We formulate the multi-label prediction as CGL inference problem, which is solved by a mean field variational approach. Meanwhile, CGL learning is efficient due to a tailored proximal gradient procedure by applying the maximum a posterior (MAP) methodology. CGL performs competitively for multi-label image classification on benchmark datasets MULAN scene, PASCAL VOC 2007 and PASCAL VOC 2012, compared with the state-of-the-art multi-label classification algorithms.
Li, S, Fei, F, Ruihan, D, Yu, S & Dou, W 2016, 'A Dynamic Pricing Method for Carpooling Service Based on Coalitional Game Analysis', 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), IEEE, pp. 78-85.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In recent years, carpooling services provided by corporations like Uber (UberPool), Didi (DidiPool) and Lyft (Lyft Line) have become more and more popular. Carpooling helps alleviate urban traffic congestion by decreasing the empty-seat rate. To balance the supply and demand of the taxi service, a dynamic pricing method is needed. More specifically, passengers taking the same vehicle may be charged differently, even though they shared most of a trip. A standing challenge for current dynamic pricing policies is how to balance the service and the pricing among different passengers who share a certain route within their personal trips. In view of this challenge, we propose a new dynamic pricing method and divide the payoff according to the contribution of each passenger. Concretely, we deploy the framework of coalitional games to analyze the spatial-temporal constraints that guarantee individual benefits from the carpooling coalition. We then use the Nash product to maximize the utility of the passengers as a whole, reducing our problem to a geometric-programming problem. Finally, we use the Shapley value method to measure the specific contribution of each passenger. We conducted a simulated experiment and the results show the effectiveness of our method.
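A minimal Shapley-value sketch for the fare-splitting step: each passenger pays their average marginal contribution to the trip cost over all join orders. Here `trip_cost` is a placeholder for a real routing-cost function, and the exhaustive permutation loop only suits small groups.

```python
from itertools import permutations

def shapley_shares(passengers, trip_cost):
    """Average marginal cost of each passenger over all join orders."""
    shares = {p: 0.0 for p in passengers}
    orders = list(permutations(passengers))
    for order in orders:
        served = set()
        for p in order:
            before = trip_cost(served)
            served.add(p)
            shares[p] += (trip_cost(served) - before) / len(orders)
    return shares

# Toy cost: base fee plus a per-passenger detour charge
cost = lambda group: 0.0 if not group else 10.0 + 2.0 * len(group)
print(shapley_shares(['a', 'b', 'c'], cost))   # shares sum to the full fare 16.0
```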
Li, X, Xiong, J, Liu, B, Gui, L & Qiu, M 2016, 'A capacity improving and energy saving scheduling scheme in push-based converged wireless broadcasting and cellular networks', 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper proposes a capacity-improving and energy-saving scheduling scheme for push-based Converged Wireless Broadcasting and Cellular Networks (CWBCN). We maximize the network capacity and alleviate request congestion by broadcasting/multicasting the most popular services and caching them locally on the user side for future requests. In the proposed scheme, we first introduce a UE (User Equipment)-based caching mechanism that considers both the frequency and recency of the popular services. Simulations show that the proposed mechanism brings a significant improvement in the capacity of the CWBCN. Based on this mechanism, a sleep-awake algorithm is proposed to further reduce the energy consumption of the UEs. Simulations show that the proposed algorithm can reduce the converged network's energy consumption by 20%-30% compared to the traditional one.
Lister, R 2016, 'Toward a Developmental Epistemology of Computer Programming', Proceedings of the 11th Workshop in Primary and Secondary Computing Education, WiPSCE '16: 11th Workshop in Primary and Secondary Computing Education, ACM, Münster, Germany, pp. 5-16.
View/Download from: Publisher's site
View description>>
This paper was written as a companion to my keynote address at the 11th Workshop in Primary and Secondary Computing Education (WiPSCE 2016). The paper outlines my own research on how novices learn to program. Any reader whose interest has been piqued may pursue further detail in the papers cited. I begin by explaining my philosophical position. In making that explanation, I do not claim that it is the only right position; on the contrary, I allude to other philosophical positions that I regard as complementary to my own. The academic warfare between these positions is pointless and counterproductive --- all the established positions have something positive to offer. Having established my position, I then go on to argue that the work of Jean Piaget, and subsequent neo-Piagetians, offers useful insight into how children learn to program computers.
Liu, A, Zhang, G, Lu, J, Lu, N & Lin, C-T 2016, 'An Online Competence-Based Concept Drift Detection Algorithm', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Hobart, TAS, Australia, pp. 416-428.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. The ability to adapt to new learning environments is a vital feature of contemporary case-based reasoning systems. It is imperative that decision makers know when and how to discard outdated cases and apply new cases to perform smart maintenance operations. Competence-based empirical distance has recently been proposed as a measurement that can estimate the difference between case sample sets without knowing the actual case distributions. It is reportedly one of the most accurate drift detection algorithms on both synthetic and real-world data sets. However, as the construction of competence models has to retain every case in memory, it is not suitable for online drift detection. In addition, the high computational complexity O(n²) also limits its practical application, especially when dealing with large-scale data sets under time constraints. In this paper, therefore, we propose a space-based online case grouping strategy, and a new case group enhanced competence distance (CGCD), to address these issues. The experimental results show that the proposed strategy and related algorithms significantly improve the efficiency of the current leading competence-based drift detection algorithm.
Liu, B, Zhou, W, Jiang, J & Wang, K 2016, 'K-Source: Multiple source selection for traffic offloading in mobile social networks', 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP), IEEE, Yangzhou, PEOPLES R CHINA, pp. 1-5.
View/Download from: Publisher's site
Liu, DYT, Richards, D, Dawson, P, Froissard, J-C & Atif, A 2016, 'Knowledge Acquisition for Learning Analytics: Comparing Teacher-Derived, Algorithm-Derived, and Hybrid Models in the Moodle Engagement Analytics Plugin', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 183-197.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. One of the promises of big data in higher education (learning analytics) is being able to accurately identify and assist students who may not be engaging as expected. These expectations, distilled into parameters for learning analytics tools, can be determined by human teacher experts or by algorithms themselves. However, little work has been done to compare the power of knowledge models acquired from teachers and from algorithms. In the context of an open source learning analytics tool, the Moodle Engagement Analytics Plugin, we examined the ability of teacher-derived models to accurately predict student engagement and performance, compared to models derived from algorithms, as well as hybrid models. Our preliminary findings, reported here, provide evidence of the fallibility and strength of teacher- and algorithm-derived models, respectively, and highlight the benefits of a hybrid approach to model- and knowledge-generation for learning analytics. A human-in-the-loop solution is therefore suggested as a possible optimal approach.
Liu, Y, Huang, ML, Liang, J & Huang, W 2016, 'Facial Feature Extraction and Recognition for Traditional Chinese Physiognomy', 2016 20th International Conference Information Visualisation (IV), 2016 20th International Conference Information Visualisation (IV), IEEE, Lisbon, Portugal, pp. 408-412.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We propose a novel method of computing personality based on Chinese physiognomy. The proposed solution combines ancient and modern physiognomy to summarize the correspondence between personality and facial features, and models a baseline for the shape of each facial feature. We compute the image histogram and search for threshold values to create a binary image adaptively. A two-pass connected component method identifies the feature regions. We encode the binary image to remove noise points, so that the resulting connected image provides a better result. The method was tested on the ORL face database.
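A sketch of the described pipeline using OpenCV: adaptive thresholding produces the binary image, a median filter removes noise points, and connected-component labelling (OpenCV's routine standing in for a manual two-pass implementation) locates candidate feature regions. The file path and parameters are placeholders.

```python
import cv2

img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)           # placeholder image
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 15, 5)  # adaptive threshold
binary = cv2.medianBlur(binary, 3)                            # remove noise points
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# Keep sizeable components as candidate facial feature regions
regions = [stats[i] for i in range(1, n) if stats[i][cv2.CC_STAT_AREA] > 50]
```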
Liu, Y-T, Pal, NR, Wu, S-L, Hsieh, T-Y & Lin, C-T 2016, 'Adaptive subspace sampling for class imbalance processing', 2016 International Conference on Fuzzy Theory and Its Applications (iFuzzy), 2016 International Conference on Fuzzy Theory and Its Applications (iFuzzy), IEEE, Taichung, Taiwan, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents a novel oversampling technique that addresses highly imbalanced data distributions. At present, imbalanced data with anomalous class distributions and underrepresented data are difficult to deal with using conventional machine learning technologies. In order to balance class distributions, an adaptive subspace self-organizing map (ASSOM) that combines a local mapping scheme and a globally competitive rule is proposed to artificially generate synthetic samples focusing on minority class samples. The ASSOM provides feature-invariant characteristics, including translation, scaling and rotation, and retains the independence of the basis vectors in each module. Specifically, the basis vectors generated by each ASSOM module avoid producing repeated representative features that offer nothing but a heavy computational load. Several experimental results demonstrate that the proposed ASSOM method, trained in a supervised manner, is superior to other existing oversampling techniques.
Liu, Y-T, Wu, S-L, Chou, K-P, Lin, Y-Y, Lu, J, Zhang, G, Lin, W-C & Lin, C-T 2016, 'Driving fatigue prediction with pre-event electroencephalography (EEG) via a recurrent fuzzy neural network', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, Canada, pp. 2488-2494.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. We propose an electroencephalography (EEG) prediction system based on a recurrent fuzzy neural network (RFNN) architecture to assess drivers' fatigue levels in a virtual-reality (VR) dynamic driving environment. Prediction of fatigue levels is a crucial and arduous biomedical problem for driving safety, which has attracted growing attention from the research community in the recent past. Meanwhile, the practicality of measuring EEG signals has led to many EEG-based brain-computer interfaces (BCIs) being developed for real-time mental assessment. In the literature, EEG signals are severely blended with stochastic noise; therefore, the performance of BCIs is constrained by low resolution in recognition tasks. For this reason, independent component analysis (ICA) is usually used to find a source mapping from original data that has been blended with unrelated artificial noise. However, the mechanism of ICA cannot be used in real-time BCI designs. To overcome this bottleneck, the proposed system utilizes a recurrent self-evolving fuzzy neural network (RSEFNN) to increase memory capability for adaptive noise cancellation when assessing drivers' mental states during a car driving task. The experimental results, obtained without the ICA procedure, indicate that the proposed RSEFNN model retains superior performance compared with state-of-the-art models.
Lu, H, Zhang, K, Xiao, L & Wang, C 2016, 'A hybrid model for short-term wind speed forecasting based on non-positive constraint combination theory', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), World Scientific, Roubaix, France, pp. 240-245.
View/Download from: Publisher's site
View description>>
© 2016 by World Scientific Publishing Co. Pte. Ltd. Short-term wind speed forecasting plays an irreplaceable role in the efficient management of wind energy systems, and accurate forecasting results provide effective future plans for the operators of utilities and wind energy systems. Aiming at improving the accuracy of short-term wind forecasting, this paper presents a new forecasting model based on non-positive constraint combination theory. In this model, a modified optimization algorithm is used to optimize the weight coefficients of the constituent models under the non-positive constraint combination theory. The combined model is tested using three sets of 10-min wind speed data from real-world wind farms. The testing results show that the forecasting accuracy of the new model is significantly better than that of the constituent models.
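A small numerical sketch of what a combination without a positivity constraint means in practice: the weights on the constituent forecasts are fitted by least squares with no sign restriction, so some may legitimately be negative, unlike a classical convex combination. The data below are synthetic placeholders, and this does not reproduce the paper's modified optimization algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(48)                                   # observed 10-min wind speeds
# Three constituent models: the truth plus noise of differing strength
F = np.column_stack([y + rng.normal(0, s, 48) for s in (0.1, 0.2, 0.3)])
w, *_ = np.linalg.lstsq(F, y, rcond=None)            # weights may be negative
combined = F @ w                                     # combined forecast
print(w, float(np.sqrt(np.mean((combined - y) ** 2))))
```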
Lumor, T, Chew, E & Gill, AQ 2016, 'Exploring the Role of Enterprise Architecture in IS-enabled OT: An EA Principles Perspective.', EDOC Workshops, Workshop in conjunction with the IEEE International Enterprise Distributed Object Computing Conference, IEEE Computer Society, Vienna, Austria, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Although EA principles have received considerable attention in recent years, little is still known about how EA principles can be used to govern the transformation of the Information Systems-enabled organization. In this research-in-progress paper, we communicate our initial step towards answering the sub-question: how does enforcing EA principles contribute to IS-enabled OT? Based on a comprehensive literature review, we initially propose five testable hypotheses and a research model, which is a pre-requisite to developing a data-driven theory for this important area of research. It is anticipated that the ensuing theory will provide a basis for further research studying the impact of EA on IS-enabled OT. The tested research model will also provide guidance to practitioners on how to effectively design and use EA principles in managing transformative changes caused by IS within their organizations and overall industry sectors.
Luo, L, Yu, H, Luo, S, Zhang, M & Yu, S 2016, 'Achieving Fast and Lightweight SDN Updates with Segment Routing', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. In SDN, forwarding rules are frequently updated to adapt to network dynamics. During this procedure, path consistency needs to be preserved; otherwise, in-flight packets might meet with forwarding errors such as loops and black holes. Although a large number of approaches have been proposed, they either take a long time or incur high rule-space overheads, and thus fail to be practical for large-scale, highly dynamic networks. In this paper, we propose FLUS, a Segment Routing (SR) based mechanism, to achieve fast and lightweight path updates. Basically, when a route needs to change, FLUS instantly employs SR to construct the desired new path by concatenating fragments of already existing paths. After the actual paths are established, FLUS then shifts incoming packets to them and disables the transitional ones. Such a design lets packets enjoy their new paths immediately without introducing rule-space overheads. This paper presents FLUS's segment allocation, path construction, and the corresponding optimal algorithms in detail. Our evaluation on real and synthesized networks shows that FLUS can handle up to 92-100% of updates using SR in real time and saves 72-88% of the rule overhead compared to prior methods.
Madi, BMA, Sheng, QZ, Yao, L, Qin, Y & Wang, X 2016, 'PLMwsp: Probabilistic Latent Model for Web Service QoS Prediction', 2016 IEEE International Conference on Web Services (ICWS), 2016 IEEE International Conference on Web Services (ICWS), IEEE, San Francisco, CA, pp. 623-630.
View/Download from: Publisher's site
Manongdo, R & Xu, G 2016, 'Applying client churn prediction modeling on home-based care services industry', 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2016 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Durham, NC, pp. 1-6.
View/Download from: Publisher's site
View description>>
A client churn prediction model is widely acknowledged as an effective way of realizing customer lifetime value, especially in service-oriented industries and in a competitive business environment. A churn model allows the targeting of clients for retention campaigns and is a critical component of customer relationship management (CRM) and business intelligence systems. There are numerous statistical models and techniques applied successfully in data mining projects across various industries. While there is literature on prediction modeling for hospital health care services, none exists for home-based care services. In this study, logistic regression, random forest and C5.0 decision tree models were used to build a binary client churn classifier for a home-based care services company based in Australia. All models yielded prediction accuracies over 90%, with the tree-based classifiers marginally higher, and the C5.0 model was found to be suitable for use in this industry. This study also showed that the existing client satisfaction measures currently in use by the company do not adequately contribute to churn analysis.
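A sketch of the three-way comparison with scikit-learn, using a plain decision tree as a stand-in for C5.0 (which has no scikit-learn implementation) and synthetic data in place of the company's records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy, mildly imbalanced churn data standing in for real client records
X, y = make_classification(n_samples=500, weights=[0.8], random_state=0)
models = {'logistic': LogisticRegression(max_iter=1000),
          'random_forest': RandomForestClassifier(n_estimators=200, random_state=0),
          'tree_c50_standin': DecisionTreeClassifier(random_state=0)}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring='accuracy').mean()
    print(f'{name}: {acc:.3f}')
```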
Mao, Z, Jung, T-P, Lin, C-T & Huang, Y 2016, 'Predicting EEG Sample Size Required for Classification Calibration', Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience (LNCS), International Conference on Augmented Cognition, Springer International Publishing, Toronto, Canada, pp. 57-68.
View/Download from: Publisher's site
View description>>
This study considers an important problem of predicting required calibration sample size for electroencephalogram (EEG)-based classification in brain computer interaction (BCI). We propose an adaptive algorithm based on learning curve fitting to learn the relationship between sample size and classification performance for each individual subject. The algorithm can always provide the predicted result in advance of reaching the baseline performance with an average error of 17.4 %. By comparing the learning curve of different classifiers, the algorithm can also recommend the best classifier for a BCI application. The algorithm also learns a sample size upper bound from the prior datasets and uses it to detect subject outliers that potentially need excessive amount of calibration data. The algorithm is applied to three EEG-based BCI datasets to demonstrate its utility and efficacy. A Matlab package with GUI is also developed and available for downloading at https://github.com/ZijingMao/LearningCurveFittingForSampleSizePrediction. Since few algorithms are yet available to predict performance for BCIs, our algorithm will be an important tool for real-life BCI applications.
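A minimal sketch of learning-curve fitting for calibration-size prediction: fit an inverse power law acc(n) = a - b * n^(-c) to accuracies measured at small sample sizes, then invert it to estimate the n that reaches a target accuracy. The power-law form is a common choice, not necessarily the paper's exact model, and the numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a - b * np.power(n, -c)

def predict_required_n(sizes, accs, target):
    """Invert the fitted curve; None if the target exceeds the asymptote a."""
    (a, b, c), _ = curve_fit(power_law, sizes, accs, p0=(0.9, 1.0, 0.5), maxfev=10000)
    if target >= a:
        return None
    return int(np.ceil((b / (a - target)) ** (1.0 / c)))

sizes = np.array([20, 40, 80, 160, 320])
accs = np.array([0.55, 0.62, 0.68, 0.72, 0.75])   # illustrative calibration runs
print(predict_required_n(sizes, accs, target=0.78))
```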
Martin Salvador, M, Budka, M & Gabrys, B 2016, 'Towards Automatic Composition of Multicomponent Predictive Systems', Hybrid Artificial Intelligent Systems, International Conference on Hybrid Artificial Intelligence Systems, Springer International Publishing, Seville, Spain, pp. 27-39.
View/Download from: Publisher's site
View description>>
Automatic composition and parametrisation of multicomponent predictive systems (MCPSs) consisting of chains of data transformation steps is a challenging task. In this paper we propose and describe an extension to the Auto-WEKA software which allows such flexible MCPSs to be composed and optimised using a sequence of WEKA methods. In the experimental analysis we focus on examining how significantly extending the search space, by incorporating additional hyperparameters of the models, affects the quality of the solutions found. In a range of extensive experiments, three different optimisation strategies are used to automatically compose MCPSs on 21 publicly available datasets. A comparison with previous work indicates that extending the search space improves classification accuracy in the majority of cases. The diversity of the MCPSs found is also an indication that fully and automatically exploiting different combinations of data cleaning and preprocessing techniques is possible and highly beneficial for different predictive models. This can have a big impact on the development, maintenance and scalability of high-quality predictive models in modern application and deployment scenarios.
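The essential idea, searching jointly over preprocessing components and model hyperparameters, can be sketched outside WEKA as well. The sketch below uses scikit-learn rather than the Auto-WEKA extension the paper describes; the pipeline steps and search ranges are illustrative assumptions.

```python
# Joint search over a small "MCPS": imputer + scaler + classifier.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

mcps = Pipeline([
    ("impute", SimpleImputer()),
    ("scale", StandardScaler()),
    ("model", SVC()),
])
search_space = {  # hyperparameters of *every* component, not just the model
    "impute__strategy": ["mean", "median"],
    "scale__with_mean": [True, False],
    "model__C": loguniform(1e-2, 1e2),
    "model__gamma": loguniform(1e-3, 1e1),
}
search = RandomizedSearchCV(mcps, search_space, n_iter=25, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```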
Mateos, MK, Trahair, TN, Mayoh, C, Barbaro, PM, Sutton, R, Revesz, T, Barbaric, D, Giles, J, Alvaro, F, Mechinaud, F, Catchpoole, D, Kotecha, RS, Dalla-Pozza, L & Marshall, GM 2016, 'Clinical Predictors of Venous Thromboembolism during Therapy for Childhood Acute Lymphoblastic Leukemia', Blood, American Society of Hematology, pp. 1182-1182.
View/Download from: Publisher's site
View description>>
Abstract Venous thromboembolism (VTE) is an unpredictable and life-threatening toxicity that occurs early in acute lymphoblastic leukemia (ALL) therapy. The incidence is approximately 5% in children diagnosed with ALL [Caruso et al. Blood. 2006;108(7):2216-22], which is higher than in other pediatric cancer types [Athale et al. Pediatric Blood & Cancer. 2008;51(6):792-7]. Clinical risk factors for VTE in children during ALL therapy include older age and the use of asparaginase. We hypothesized that there may be additional risk factors that can modify VTE risk, beyond those previously reported [Mitchell et al. Blood. 2010;115(24):4999-5004]. We sought to define early predictive clinical factors that could select a group of children at highest risk of VTE, with possible utility in an interventional trial of prophylactic anticoagulation. We conducted a retrospective study of 1021 Australian children, aged 1-18 years, treated between 1998 and 2013 on successive BFM-based ALL therapies. Patient records were reviewed to ascertain the incidence of VTE and to systematically document clinical variables present at diagnosis and during the induction/consolidation phases of therapy. The CTCAE v4.03 system was used for grading of VTE events. Multivariate logistic and Cox regression were used to determine significant clinical risk factors associated with VTE (SPSS v23.0). All P values were 2-tailed, significance level <.05. The incidence of on-treatment VTE was 5.09% [96% ≥Grade 2 (CTCAE v4.0)]. Age ≥10 years [P=.048, HR 1.96 (95% confidence interval = 1.01-3.82)], positive blood culture in induction/consolidation [P=.009, HR 2.35 (1.24-4.46)], extreme weight at diagnosis <5th or >95th centile [P=.028, HR 2.14 (1.09-4.20)] and elevated peak gamma-glutamyl transferase (GGT) >5 x upper limit of normal in induction/consolidation [P=.018, HR 2.24 (1.15-...
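For readers unfamiliar with the statistics reported here, the sketch below shows the kind of Cox proportional-hazards fit that produces hazard ratios (HRs) and confidence intervals like those above, using the lifelines library. The cohort, column names and data are invented for illustration; this is not the study's dataset or analysis script.

```python
# Illustrative Cox regression on a synthetic cohort (not the study data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_ge_10": rng.integers(0, 2, n),          # age >= 10 years at diagnosis
    "pos_blood_culture": rng.integers(0, 2, n),  # positive culture in induction
    "extreme_weight": rng.integers(0, 2, n),     # weight <5th or >95th centile
    "time_to_event": rng.exponential(365, n),    # days on therapy
    "vte": rng.integers(0, 2, n),                # 1 = VTE occurred
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="vte")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```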
McGregor, C & Bonnis, B 2016, 'Big Data Analytics for Resilience Assessment and Development in Tactical Training Serious Games', 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Northern Ireland, pp. 158-162.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Training activities utilising virtual reality environments are being used increasingly to create training scenarios to promote resilience for mental and physical wellbeing and to enable repeatable scenarios that allow trainees to learn techniques for various stressors. However, assessment of the trainees' response to these training activities has either been limited to various pre- and post-training assessment metrics or collected in parallel during experiments and analysed retrospectively. We have created a Big Data analytics platform, Athena, that in real time acquires data from a first-person shooter game, ArmA 3, as well as the data ArmA 3 sends to the muscle stimulation component of a multisensory garment, ARAIG, which provides on-the-body feedback to the wearer for communications, weapon fire and being hit, and integrates that data with physiological response data such as heart rate, breathing behaviour and blood oxygen saturation. This paper presents a method to create structured resilience training scenarios that incorporate Big Data analytics for new approaches to resilience assessment and development in tactical training serious games.
McGregor, C, Bonnis, B, Stanfield, B & Stanfield, M 2016, 'Design of the ARAIG haptic garment for enhanced resilience assessment and development in tactical training serious games', 2016 IEEE 6th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), 2016 IEEE 6th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), IEEE, pp. 214-217.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. First-person shooter virtual reality games have begun to be used as serious games for military and civilian tactical training, as part of new approaches to resilience assessment and development for mental health training. However, sensory stimulation has been largely constrained to visual and auditory sensations, with limited tactile feedback through haptic controllers. This paper presents a design for the ARAIG haptic garment for enhanced resilience assessment and development in tactical training serious games.
Merigo, JM, Alrajeh, N & Peris-Ortiz, M 2016, 'Induced aggregation operators in the ordered weighted average sum', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The ordered weighted average (OWA) aggregation is an extension of the classical weighted average that uses a reordering process of the arguments in decreasing or increasing order. This article presents new averaging aggregation operators that use sums and order-inducing variables. This approach produces the induced ordered weighted average sum (IOWAS). The IOWAS operator aggregates a set of sums using a complex reordering process based on order-inducing variables. This approach includes different types of aggregation structures, including the well-known OWA families. The work presents additional generalizations using generalized and quasi-arithmetic means. The paper ends with a simple numerical example that shows how to aggregate with this new approach.
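For context, the textbook induced OWA (IOWA) operator underlying this construction can be written as follows; the IOWAS extension applies the same induced reordering to sums of arguments rather than to single values.

```latex
% Textbook induced OWA; the IOWAS operator applies the same reordering
% to sums of arguments rather than to single argument values.
\[
  \mathrm{IOWA}\bigl(\langle u_1, a_1\rangle, \dots, \langle u_n, a_n\rangle\bigr)
  = \sum_{j=1}^{n} w_j \, b_j ,
\]
% where $w_j \in [0,1]$ with $\sum_{j=1}^{n} w_j = 1$, and $b_j$ is the
% argument value $a_i$ paired with the $j$-th largest order-inducing
% variable $u_i$. With $u_i = a_i$ this reduces to the ordinary OWA; with
% sums $a_i = \sum_k a_{ik}$ as arguments one obtains the IOWAS operator.
```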
Merigo, JM, Blanco-Mesa, F, Gil-Lafuente, AM & Yager, RR 2016, 'A bibliometric analysis of the first thirty years of the International Journal of Intelligent Systems', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The International Journal of Intelligent Systems was created in 1986. Today, the journal has turned thirty years old. To celebrate this anniversary, this study develops a bibliometric review of all the papers published in the journal between 1986 and 2015. The results are mainly based on the Web of Science Core Collection, which classifies the bibliographic material using several indicators including the total number of publications and citations, the h-index, citations per paper and citing articles. Moreover, the work also uses the VOSviewer software to visualize the main results through bibliographic coupling and co-citation. The results show a general overview of the leading trends that have influenced the journal in terms of highly cited papers, authors, journals, universities and countries.
Merigó, JM, Zurita, G & Link-Chaparro, S 2016, 'Normalization of the article influence score between categories', Lecture Notes in Engineering and Computer Science, pp. 182-187.
View description>>
This study introduces a normalized article influence score. The main objective is to show that article influence scores obtained in different categories are not equivalent, making normalization necessary when comparing journals from different categories. Several methods are suggested, including a normalization that divides the article influence score by the category average and another approach that normalizes the results to [0, 1] inside the same category so that different fields can be compared. The results show that each category produces different results and that a normalization process is necessary in order to compare journals. The article analyses a case study in engineering.
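As we read the description, the two normalisations can be written out compactly as follows, with AIS_j the article influence score of journal j and C its category.

```latex
% The two normalisations described above. AIS_j is the article influence
% score of journal j and C its Web of Science category.
\[
  \mathrm{NAIS}_j
  = \frac{\mathrm{AIS}_j}{\tfrac{1}{|C|}\sum_{k \in C} \mathrm{AIS}_k}
  \qquad \text{(division by the category average),}
\]
\[
  \widehat{\mathrm{AIS}}_j
  = \frac{\mathrm{AIS}_j - \min_{k \in C} \mathrm{AIS}_k}
         {\max_{k \in C} \mathrm{AIS}_k - \min_{k \in C} \mathrm{AIS}_k}
  \in [0,1]
  \qquad \text{(min--max scaling within the category).}
\]
```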
Merigó, JM, Zurita, G & Lobos-Ossandón, V 2016, 'Computer science research in artificial intelligence', Lecture Notes in Engineering and Computer Science, pp. 216-220.
View description>>
This paper presents a bibliometric overview of the research carried out between 1990 and 2014 in computer science, with a focus on artificial intelligence. The work analyses all the journals available in Web of Science during this period and presents their publication and citation results. The study also considers the most cited articles in this area during the last twenty-five years. IEEE journals obtain the most remarkable results, publishing more than half of the most cited papers.
Milne, DN, Pink, G, Hachey, B & Calvo, RA 2016, 'CLPsych 2016 Shared Task: Triaging content in online peer-support forums', Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, Association for Computational Linguistics, pp. 118-127.
View/Download from: Publisher's site
Mols, I, van den Hoven, E & Eggen, B 2016, 'Technologies for Everyday Life Reflection', Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Eindhoven, The Netherlands, pp. 53-61.
View/Download from: Publisher's site
View description>>
Reflection gives insight, supports action and can improve wellbeing. People might want to reflect more often for these benefits, but find it difficult to do so in everyday life. Research in HCI has shown the potential of systems to support reflection in different contexts. In this paper we present a design space for supporting everyday life reflection. We produced a workbook with a selection of conceptual design proposals, which show how systems can take different roles in the process of reflection: triggering, supporting and capturing. We describe a design space with two dimensions by combining these roles with strategies found in literature. We contribute to the extensive body of work on reflection by outlining how design for everyday life reflection requires a focus on more holistic reflection, design with openness and integration in everyday life.
Mols, I, van den Hoven, E & Eggen, B 2016, 'Informing Design for Reflection: an Overview of Current Everyday Practices', Proceedings of the NordiCHI '16: the 9th Nordic Conference on Human-Computer Interaction - Game Changing Design, Nordic Conference on Human-Computer Interaction (NordiCHI), ACM, Gothenburg, Sweden, pp. 1-10.
View/Download from: Publisher's site
View description>>
There is an increasing interest in HCI in designing to support reflection in users. In this paper, we specifically focus on everyday life reflection, covering and connecting a broad range of topics from someone's life rather than focusing on one very specific aspect. Although many systems aim to support reflection, few are based on an overview of how people currently integrate reflection into everyday life. In this paper, we aim to address this gap through a questionnaire on everyday life reflection practices combining both qualitative and quantitative questions. The findings provide insights into the broad range of people that engage with reflection in different ways. We aim to inform design through four considerations: rumination, timing, initiative and social context.
Montgomery, J, Reid, M & Drake, BJ 2016, 'Protocols and Structures for Inference: A RESTful API for Machine Learning', Proceedings of The 2nd International Conference on Predictive APIs and Apps, 2nd International Conference on Predictive APIs and Apps, Journal of Machine Learning Research, Sydney, pp. 29-42.
View description>>
Diversity in machine learning APIs (in both software toolkits and web services) works against realising machine learning’s full potential, making it difficult to draw on individual algorithms from different products or to compose multiple algorithms to solve complex tasks. This paper introduces the Protocols and Structures for Inference (PSI) service architecture and specification, which presents inferential entities—relations, attributes, learners and predictors—as RESTful web resources that are accessible via a common but flexible and extensible interface. Resources describe the data they ingest or emit using a variant of the JSON schema language, and the API has mechanisms to support non-JSON data and future extension of service features.
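A hypothetical client session against a PSI-style service might look like the sketch below. The host and concrete resource paths are invented for illustration; PSI specifies the resource kinds (relations, attributes, learners, predictors) and their JSON descriptions, not these exact URLs or fields.

```python
# Hypothetical interaction with a PSI-style RESTful ML service.
import requests

BASE = "https://psi.example.org"  # invented service root

# Inspect a learner resource; its JSON description advertises the schema
# of the data it can ingest (field names here are assumptions).
learner = requests.get(f"{BASE}/learners/linear-regression").json()
print(learner.get("description"))

# Train: POST a task referencing a relation (dataset) resource; assume the
# service replies with the URL of a newly created predictor resource.
task = {"relation": f"{BASE}/relations/housing", "target": "price"}
predictor_url = requests.post(f"{BASE}/learners/linear-regression",
                              json=task).json()["resource"]

# Predict: POST an instance to the predictor resource.
prediction = requests.post(predictor_url, json={"value": {"rooms": 3}}).json()
print(prediction)
```

The point of the design is that every inferential entity is just another web resource, so generic HTTP tooling composes learners and predictors from different providers.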
Muhammad, A, Zhou, Q, Beydoun, G, Xu, D & Shen, J 2016, 'Learning Path Adaptation in Online Learning Systems', 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD), International Conference on Computer Supported Cooperative Work in Design, IEEE, Nanchang, China, pp. 421-426.
View/Download from: Publisher's site
View description>>
A learning path in online learning systems refers to a sequence of learning objects designed to help students improve their knowledge or skills in particular subjects or degree courses. In this paper, we review recent research on learning path adaptation with two goals: first, to organize and analyze the parameters of adaptation in learning paths; second, to discuss the challenges in implementing learning path adaptation. The survey covers the state of the art and aims to provide a comprehensive introduction to learning path adaptation for researchers and practitioners.
Murphy, PT, Lynch, G, Bergin, S, Quinn, J, Glavey, S, Murphy, PW & Kennedy, P 2016, 'Strong Correlation Between CTLA-4 and LEF1 Gene Expression Levels in CLL: Targeting of the Wnt/β-Catenin Pathway May Adversely Affect CTLA-4 Expression and Function', Blood, American Society of Hematology, pp. 5571-5571.
View/Download from: Publisher's site
View description>>
Abstract Recently published clinical trials have confirmed the effectiveness of anti-CD38 monoclonal antibody therapy in myeloma. Furthermore, in vitro studies of chronic lymphocytic leukaemia (CLL) cells suggest that CD38 expression can be enhanced by treatment with retinoid derivatives and thus may enhance the cytotoxic effects of anti-CD38 therapy. However, retinoids have been shown to have diverse effects on cellular function and we have previously shown that the retinoid drug acitretin upregulates CD38 expression while also reducing cell homing to the chemokine CXCL12 in primary CLL cells. To investigate possible key mechanisms for these effects, we purified CD20+ B cells from the peripheral blood of 20 CLL patients (9 previously treated, 11 untreated) and, using flow cytometry, measured percentage cell surface expression of CD38 and cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4, CD152). We also measured gene expression levels of the key retinoid receptor, stimulated by retinoic acid 6 (STRA6) and its agonist, retinol-binding protein 4 (RBP4), as well as CTLA-4, cyclin D1 (CCND1) and the transcription factors, lymphoid enhancer factor 1 (LEF1) and signal transducer and activator of transcription 3 (STAT3) using RT-PCR. GAPDH was used as a reference gene. Mean percentage surface expression of CD38 and CTLA-4 was 21.96% and 45.25% respectively. Mean ∆CT gene expression levels of CCND1, CTLA-4, LEF1 and STAT3 were 12.03, 5.57, 5.99 and 8.98 respectively. RBP4 and STRA6 gene expression levels were undetectable in all 20 patients. Gene expression of LEF1 showed significant correlations with CTLA-4 (rs=0.572, p=0.008), CCND1 (rs=0.61, p=0.004) and STAT3 (rs=0.587, p=0.006). There was also a significant correlation between gene expression of CCND1 and of STAT3 (r=0.499, p=0.025). No significant correlations were found between percentage...
Naqshbandi, K, Milne, DN, Davies, B, Potter, S, Calvo, RA & Hoermann, S 2016, 'Helping young people going through tough times', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, University of Tasmania, Hobart, Australia, pp. 640-642.
View/Download from: Publisher's site
Nascimben, M, King, JT & Lin, CT 2016, 'Resting Upper Alpha Can Predict Motor Imagery Performance?'.
Nejad, MZ, Lu, J, Asgari, P & Behbood, V 2016, 'The Effect of Google Drive Distance and Duration in Residential Property in Sydney, Australia', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), World Scientific, France, pp. 646-655.
View/Download from: Publisher's site
Nguyen, TTS & Lu, H 2016, 'Domain Ontology Construction Using Web Usage Data', Proceedings of AI 2016: Advances in Artificial Intelligence (LNCS), Australasian Joint Conference on Artificial Intelligence, Springer International Publishing, Hobart, Australia, pp. 338-344.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Ontologies play an important role in conceptual model design and the development of machine-readable knowledge bases. They can be used to represent various kinds of knowledge, covering not only content concepts but also explicit and implicit relations. While ontologies exist for many website application domains, the implicit relations between the domain and accessed Web pages have received less attention and remain unclear. These relations are crucial for Web-page recommendation in recommender systems. This paper presents a novel method for developing an ontology of Web pages mapped to domain knowledge. It focuses on semi-automating ontology construction using Web usage data. An experiment on Microsoft Web data is implemented and evaluated.
Nie, L, Jiang, D, Guo, L, Yu, S & Song, H 2016, 'Traffic Matrix Prediction and Estimation Based on Deep Learning for Data Center Networks', 2016 IEEE Globecom Workshops (GC Wkshps), 2016 IEEE Globecom Workshops (GC Wkshps), IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Network traffic analysis is a crucial technique for systematically operating a data center network. Many network management functions rely on exact network traffic information. Although a great number of works to obtain network traffic have been carried out in traditional ISP networks, they cannot be employed effectively in data center networks. Motivated by this, we focus on the problem of network traffic prediction and estimation in data center networks. We bring deep learning techniques into the network traffic prediction and estimation fields, and propose two deep architectures for network traffic prediction and estimation, respectively. We first use a deep architecture to explore the time-varying property of network traffic in a data center network, and then propose a novel network traffic prediction approach based on a deep belief network and a logistic regression model. Meanwhile, to deal with the highly ill-posed nature of network traffic estimation, we further propose a network traffic estimation method using a deep belief network trained on link counts. We validate the effectiveness of our methodologies with real traffic data.
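The "deep belief network plus logistic regression" pairing named above can be approximated in scikit-learn by stacking restricted Boltzmann machines in front of a logistic-regression layer. The sketch below uses synthetic data and an assumed topology; it illustrates the architecture family, not the paper's exact model.

```python
# Stacked-RBM ("DBN-style") feature extractor feeding logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((1000, 64))               # traffic-matrix snapshots in [0, 1]
y = (X.mean(axis=1) > 0.5).astype(int)   # toy target: is next-step load high?

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                          random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20,
                          random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```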
Niu, J & Yu, S 2016, 'Message from the Program Chairs of CSCloud 2016', 2016 IEEE 3rd International Conference on Cyber Security and Cloud Computing (CSCloud), 2016 IEEE 3rd International Conference on Cyber Security and Cloud Computing (CSCloud), IEEE, pp. xi-xi.
View/Download from: Publisher's site
Norman, J, Torchia, J, De Jay, N, Picard, D, Rakopoulos, P, Adamek, D, Catchpoole, D, Clifford, S, Fan, X, Fangusaro, J, Forest, F, Fouladi, M, Garjjar, A, Gillespie, Y, Hansford, J, Hayden, J, Hoffman, L, Hongeng, S, Jones, C, Jouvet, A, Kaorshunov, A, Lau, C, Miller, S, Muraszko, K, Ng, H-K, Pfister, S, Phillips, J, Pomeroy, S, Reddy, A, Rogers, H, Toledano, H, Van Meter, T, Wang, Y, Ho, CY, Young-Shin, R, Taylor, M, Birks, D, Hawkins, C, Bouffet, E, Grundy, R, Jabado, N, Kleinman, C & Huang, A 2016, 'Distinct Gene Fusions Segregate Sub-Classes of CNS-PNETs', Neuro-Oncology, 17th International Symposium on Pediatric Neuro-Oncology (ISPNO), Oxford University Press, Liverpool, England, pp. 15-15.
Oberst, S, Zhang, Z, Campbell, G, Morlock, M, Lai, JCS & Hoffmann, N 2016, 'Towards the understanding of hip squeak in total hip arthroplasty using analytical contact models with uncertainty', Proceedings of the Inter Noise 2016 45th International Congress and Exposition on Noise Control Engineering Towards A Quieter Future, Internoise Congress, http://pub.dega-akustik.de/IN2016/data/index.html, Hamburg, Germany, pp. 5539-5549.
View description>>
Osteoarthritis in hip joints affects patients' quality of life such that often only costly orthopaedic surgery, i.e. total hip arthroplasty (THA), provides relief. Common implant materials are metal alloys, steel or titanium-based, plastics such as ultra-high molecular weight polyethylene, or biocompatible alumina and composite ceramics. Hard-on-hard (HoH) bearing articulations, i.e. ceramic-on-ceramic, or hard-on-soft combinations are used. HoH implants have been known to suffer from squeaking, a phenomenon commonly encountered in friction-induced self-excited vibrations. However, the frictional contact mechanics, its dynamics related to impingement, the effect of socket position, stem configuration, bearing size and patient characteristics are poorly understood. This study gives an overview of the state-of-the-art biomechanical research related to squeaking in THA, with a focus on the effects of friction, stability, related wear and lubrication. An analytical model is proposed to study the onset of friction-induced vibrations in a simplified hemispherical hip stem rubbing in its bearing by varying the contact area. Preliminary results of the complex eigenvalue analysis and stick-slip motion analysis indicate that increased contact fosters the development of instabilities, even at very small values of the friction coefficient, owing to large local contact pressures.
Ochoa, EA, Castro, EL, Lindahl, JMM & Lafuente, AMG 2016, 'Forgotten effects and heavy moving averages in exchange rate forecasting', 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 2016 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Athens, Greece, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents the results of using expertons, forgotten effects and heavy moving average operators in three traditional models based on the purchasing power parity (PPP) model to forecast exchange rates. The purpose of these methods is to improve the forecast error under scenarios of volatility and uncertainty, such as financial markets and, more precisely, exchange rates. The heavy ordered weighted moving average weighted average (HOWMAWA) operator is introduced. This new operator includes the weighted average in the usual heavy ordered weighted moving average (HOWMA) operator, considering a degree of importance for each concept included in the operator. The use of the experton and forgotten effects methodology represents the information of experts in the field, and from that information hidden variables, or second-degree relations, were obtained. The results show that including the forgotten effects and heavy moving average operators improves our results and reduces the forecast error.
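For orientation, the heavy OWA family relaxes the usual weight normalisation, and one natural reading of "including the weighted average" in the HOWMAWA is a convex mix of the two operators; the exact definition in the paper may differ, so the second formula below is an assumption.

```latex
% Heavy OWA over a moving window (HOWMA): weights need only sum to a
% value in [1, n] rather than exactly 1.
\[
  \mathrm{HOWMA}(a_1,\dots,a_n) = \sum_{j=1}^{n} w_j\, b_j ,
  \qquad w_j \ge 0,\; 1 \le \sum_{j=1}^{n} w_j \le n,
\]
% Assumed reading of the HOWMAWA: a convex combination with an ordinary
% weighted average (weights $v_i$, $\sum_i v_i = 1$, $\beta \in [0,1]$):
\[
  \mathrm{HOWMAWA}(a_1,\dots,a_n)
  = \beta\,\mathrm{HOWMA}(a_1,\dots,a_n)
  + (1-\beta)\sum_{i=1}^{n} v_i\, a_i ,
\]
% where $b_j$ denotes the $j$-th largest of the $a_i$.
```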
Orth, D & van den Hoven, E 2016, ''I wouldn't choose that key ring; it's not me'', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania, pp. 316-325.
View/Download from: Publisher's site
View description>>
We each possess certain objects that are dear to us for a variety of reasons. They can be sentimental to us, bring us delight through their use or empower us. Throughout our lives, we use these cherished possessions to reaffirm who we are, who we were and who we wish to become. To explore this, we conducted a design study that asked ten participants to consider their emotional attachment towards and the identity-relevance of cherished and newly introduced possessions. Participants were then asked to elaborate on their responses in interviews. Through a thematic analysis of these responses, we found that the emotional significance of possessions was reportedly influenced by both their relevance to selfhood and position within a life story. We use these findings to discuss how the design of new products and systems can promote emotional attachment by holding a multitude of emotionally significant meanings to their owners.
Pan, C, Liu, B, Zhou, H & Gui, L 2016, 'Multi-path routing for video streaming in multi-radio multi-channel wireless mesh networks', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Multi-radio multi-channel (MRMC) is a promising approach to relieve the overload caused by the explosive growth of video streaming traffic in wireless mesh networks (WMNs). Previous studies have shown that in MRMC WMNs, network capacity can be increased significantly by proper design of the channel assignment and routing algorithm. Multi-path routing can make good use of the capacity improvement of MRMC WMNs and has been applied in wired and wireless networks for load balancing and congestion control. However, it remains a challenge in MRMC WMNs. In this paper, we first discuss how to find multiple high-quality paths from source to destination while taking the interference between them into account. Then we focus on the rate allocation among multiple paths and formulate it as a max-min problem, which can be transformed into a linear programming (LP) problem. Finally, we propose a joint multi-path discovery and rate allocation algorithm. We evaluate this algorithm through simulations. Results show that our algorithm not only increases the network capacity, but also keeps the average end-to-end delay over all video streaming sessions at a low level.
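The standard way to cast max-min rate allocation as an LP is the epigraph trick: introduce a variable t that lower-bounds every path rate and maximise t subject to link capacities. The two-path, two-link topology below is invented for illustration; it is not the paper's network model.

```python
# Max-min rate allocation as an LP via the epigraph trick.
import numpy as np
from scipy.optimize import linprog

# Variables: x = [r1, r2, t] (per-path rates and the common lower bound t).
# Maximise t  <=>  minimise -t.
c = np.array([0.0, 0.0, -1.0])

A_ub = np.array([
    [1.0, 0.0, 0.0],    # r1           <= cap(link A) = 10
    [1.0, 1.0, 0.0],    # r1 + r2      <= cap(link B) = 12 (shared link)
    [-1.0, 0.0, 1.0],   # t - r1 <= 0  (t lower-bounds every rate)
    [0.0, -1.0, 1.0],   # t - r2 <= 0
])
b_ub = np.array([10.0, 12.0, 0.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
r1, r2, t = res.x
print(f"rates: r1={r1:.1f}, r2={r2:.1f}, guaranteed minimum t={t:.1f}")
```

On this toy instance the solver balances the shared link, giving r1 = r2 = 6, which is exactly the max-min fair split.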
Perera, P, Bandara, M & Perera, I 2016, 'Evaluating the impact of DevOps practice in Sri Lankan software development organizations', 2016 Sixteenth International Conference on Advances in ICT for Emerging Regions (ICTer), 2016 Sixteenth International Conference on Advances in ICT for Emerging Regions (ICTer), IEEE.
View/Download from: Publisher's site
Pickrell, M, Bongers, B & van den Hoven, E 2016, 'Understanding Changes in the Motivation of Stroke Patients Undergoing Rehabilitation in Hospital', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Persuasive Technology, Springer International Publishing, Salzburg, Austria, pp. 251-262.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2016. Stroke patient motivation can fluctuate during rehabilitation due to a range of factors. This study reports on qualitative research, consisting of observations of stroke patients undergoing rehabilitation and interviews with patients about the changes in motivation they identified during their time completing rehabilitation in the hospital. We found a range of positive and negative factors which affect motivation. Positive factors include improvements in patient movement and support from other patients and family members. Negative factors include pain and psychological issues such as changes in mood. From this fieldwork, a set of design guidelines has been developed to act as a platform for researchers and designers developing equipment for the rehabilitation of stroke patients.
Pileggi, SF 2016, 'A Privacy-Friendly Model for an Efficient and Effective Activity Scheduling Inside Dynamic Virtual Organizations', Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom 2015), 11th EAI International Conference on Collaborative Computing - Networking, Applications and Worksharing, Springer International Publishing, Wuhan, China, pp. 303-308.
View/Download from: Publisher's site
Pileggi, SF 2016, 'Probabilistic Semantics', Procedia Computer Science, 16th Annual International Conference on Computational Science (ICCS), Elsevier BV, Univ Calif, San Diego Supercomputer Ctr, San Diego, CA, pp. 1834-1845.
View/Download from: Publisher's site
Popov, A, Fink, W, McGregor, C & Hess, A 2016, 'PHM for astronauts: Elaborating and refining the concept', 2016 IEEE Aerospace Conference, 2016 IEEE Aerospace Conference, IEEE, pp. 1-9.
View/Download from: Publisher's site
View description>>
Clarifying and evolving the PHM for Astronauts concept, introduced in [1], this conceptual paper focuses on particular PHM-based solutions to bring Human Health and Performance (HH&P) technologies to the required technology readiness level (TRL) in order to mitigate the HH&P risks of manned space exploration missions. This paper discusses the particular PHM-based solutions for some HH&P technologies that are, namely by NASA designation, the Autonomous Medical Decision technology and the Integrated Biomedical Informatics technology. Both of the technologies are identified as essential ones in NASA's integrated technology roadmap for the Technology Area 06: Human Health, Life Support, and Habitation Systems. The proposed technology solutions are to bridge PHM, an engineering discipline, to HH&P domain in order to mitigate the risks by focusing on efforts to reduce countermeasure mass and volume and drive the risks down to an acceptable level. The Autonomous Medical Decision technology is based on wireless handheld devices and is a result of a paradigm shift from tele-medicine to that of health support autonomy. The Integrated Biomedical Informatics technology is based on Crew Electronic Health Records (CEHR) system with predictive diagnostics capability developed for crew members rather than for healthcare professionals. The paper explores the proposed PHM-based solutions on crew health maintenance in terms of predictive diagnostics providing early and actionable real-time warnings of impending health problems that otherwise would have gone undetected.
Prior, J, Ferguson, S & Leaney, J 2016, 'Reflection is hard', Proceedings of the Australasian Computer Science Week Multiconference, ACSW '16: Australasian Computer Science Week, ACM, Canberra, Australia, pp. 1-8.
View/Download from: Publisher's site
View description>>
We have observed that it is a non-trivial exercise for undergraduate students to learn how to reflect. Reflective practice is now recognised as important for software developers and has become a key part of software studios in universities, but there has been limited empirical investigation into how best to teach and learn reflection. In the literature on reflection in software studios, many papers claim that reflection in the studio is mandatory; however, that literature offers inadequate guidance on teaching early-stage students to reflect. The work presented in this paper begins to consider how the teaching of software development can best be combined with teaching reflective practice to early-stage software development students. We started a research programme to understand how to encourage students to learn to reflect. As we were unsure about teaching reflection, and we wished to change our teaching as we progressively understood better what to do, we chose action research as the most suitable approach. Within the action research cycles we used ethnography to understand what was happening with the students when they attempted to reflect. This paper reports on the first four semesters of research.
We have developed and tested a reflection model and process that provide scaffolding for students beginning to reflect. We have observed three patterns in how our students applied this process in writing their reflections, which we will use to further understand what will help them learn to reflect. We have also identified two themes, namely, motivation and intervention, which highlight where the challenges lie in teaching and learning reflection.
Ramezani, F, Naderpour, M & Lu, J 2016, 'A multi-objective optimization model for virtual machine mapping in cloud data centres', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, BC, Canada, pp. 1259-1265.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Modern cloud computing environments exploit virtualization for efficient resource management to reduce computational cost and energy budget. Virtual machine (VM) migration is a technique that enables flexible resource allocation and increases the computation power and communication capability within cloud data centers. VM migration helps cloud providers to successfully achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, the VM migration process can affect the performance of applications unless it is supported by smart optimization methods. This paper presents a multi-objective optimization model to address this issue. The objectives are to minimize power consumption, maximize resource utilization (or minimize idle resources), and minimize VM transfer time. Fuzzy particle swarm optimization (PSO), which improves the efficiency of conventional PSO by using fuzzy logic systems, is relied upon to solve the optimization problem. The model is implemented in a cloud simulator to investigate its performance, and the results verify the performance improvement of the proposed model.
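The multi-objective PSO skeleton behind this kind of model can be sketched as below. The paper enhances PSO with fuzzy logic systems, which this sketch omits; the three objective functions are toy stand-ins for power consumption, idle resources and VM transfer time, and the weights and constants are assumptions.

```python
# Plain global-best PSO minimising a weighted sum of three toy objectives.
import numpy as np

rng = np.random.default_rng(1)
DIM, SWARM, ITERS = 8, 30, 200
W = np.array([0.5, 0.3, 0.2])  # assumed weights: power, idle, transfer

def objectives(x):
    power = np.sum(x ** 2)               # toy stand-ins for the three
    idle = np.sum(np.abs(x - 1.0))       # objectives named in the paper
    transfer = np.sum(np.abs(np.diff(x)))
    return np.array([power, idle, transfer])

def fitness(x):
    return float(W @ objectives(x))      # scalarised multi-objective cost

pos = rng.uniform(-2, 2, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best scalarised cost:", fitness(gbest))
```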
Rapp, A, Cena, F, Kay, J, Kummerfeld, B, Hopfgartner, F, Larsen, JE & Van Den Hoven, E 2016, 'FuturePD. The future of personal data: Envisioning new personalized services enabled by Quantified Self technologies', CEUR Workshop Proceedings.
View description>>
Quantified Self is raising new challenges for user modeling and personalization. In this workshop we aim to explore the future of personalized services enabled by Quantified Self technologies.
Sá, AME, Rodriguez-Echavarria, K, Pietroni, N & Cignoni, P 2016, 'State of The Art on Functional Fabrication', GraDiFab@Eurographics, Eurographics Association, pp. 1-9.
Saberi, M, Chang, E, Hussain, OK & Saberi, Z 2016, 'Next generation of interactive contact centre for efficient customer recognition: Conceptual framework', Proceedings of the International Conference on Industrial Engineering and Operations Management, pp. 3231-3241.
View description>>
Contact centres (CCs), as the organization's touch point, have a considerable effect on customer experience and retention. It has been shown that 70% of all business interactions are handled in contact centres. A framework is proposed in this conceptual paper to build a cleaned interactive customer recognition framework (CICRF) in CCs. CICRF consists of two integrated modules: cleansing and ICRF. The first module focuses on the detection and resolution of duplicate records to improve the effectiveness and efficiency of customer recognition. The second module focuses on interactive customer recognition in a customer database when there are multiple records with the same name. The cleansing module uses a semi-automatic deduplication process that incorporates three main functions in its design, namely DedupCrowd, DedupNN and DedupCSR. DedupCrowd is a function that provides training pairs of records for DedupNN, a neural-network-based deduplication method. Researchers suggest leveraging human computing power to manage duplicate data, which scales to the large size of contact centre data. However, completing crowdsourcing tasks is an error-prone process that affects the overall performance of the crowd. Thus, controlling the quality of workers is an essential step for crowdsourcing systems, and for that purpose I propose OSQC, an online statistical quality control framework, to monitor the performance of workers. DedupNN uses the output of DedupCrowd for training purposes. DedupNN has two features: first, it is an online deduplication method, which is essential for the purposes of customer recognition; second, its cost is much lower than that of DedupCrowd. The last function is designed to provide labels for pairs when DedupNN is unsure about their label. The intuition behind this function is similar to active learning, which selects appropriate data for labeling. ICRF consist...
Saberi, M, Janjua, NK, Chang, E, Hussain, OK & Pazhoheshfar, P 2016, 'In-house crowdsourcing-based entity resolution using argumentation', Proceedings of the International Conference on Industrial Engineering and Operations Management, p. 135.
View description>>
A conceptual framework is proposed in this study to improve entity resolution in contact centres. The paper describes how RFID produces dirty data in CC databases and how customer service representatives (CSRs), engaged via an argumentation framework, can deal with this issue. Leveraging the power of CSRs makes this work a crowdsourcing technique that combines humans and machines to reach high-quality data in CC databases. © IEOM Society International.
Saberi, M, Karduck, A, Hussain, OK & Chang, E 2016, 'Challenges in Efficient Customer Recognition in Contact Centre: State-of-the-Art Survey by Focusing on Big Data Techniques Applicability', 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), IEEE, Ostrava, Czech Republic, pp. 548-554.
View/Download from: Publisher's site
Saqib, M, Daud Khan, S & Blumenstein, M 2016, 'Texture-based feature mining for crowd density estimation: A study', 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Palmerston North, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Texture is an important feature descriptor for many image analysis applications. The objective of this research is to determine distinctive texture features for crowd density estimation and counting. In this paper, we comprehensively review different texture features and their possible combinations to evaluate their performance on pedestrian crowds. A two-stage classification and regression based framework has been proposed for evaluating the performance of all the texture features for crowd density estimation and counting. According to the framework, input images are divided into blocks, and blocks into cells of different sizes, having varying crowd density levels. Due to perspective distortion, people appearing close to the camera contribute more to the feature vector than people far away. Therefore, the extracted features are normalized using a perspective normalization map of the scene. At the first stage, image blocks are classified into different density levels using a multi-class SVM. At the second stage, Gaussian process regression is used to regress low-level features to counts. Various texture features and their possible combinations are evaluated on a publicly available dataset.
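A minimal sketch of the two-stage classify-then-regress framework, with random stand-ins for the real texture descriptors (GLCM, LBP and the like) and without perspective normalisation; the thresholds and data are invented.

```python
# Stage 1: multi-class SVM assigns each image block a coarse density level.
# Stage 2: one Gaussian-process regressor per level maps features to counts.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((600, 16))                   # texture features per image block
counts = (20 * X.mean(axis=1) + rng.normal(0, 1, 600)).clip(0)
levels = np.digitize(counts, [5, 10, 15])   # coarse density level per block

svm = SVC().fit(X, levels)

# Fit a regressor only for levels that actually occur in the data.
gprs = {lv: GaussianProcessRegressor().fit(X[levels == lv],
                                           counts[levels == lv])
        for lv in np.unique(levels)}

block = X[:1]
lv = svm.predict(block)[0]
print("density level:", lv,
      "estimated count:", float(gprs[lv].predict(block)[0]))
```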
Shu, L, Fan, F, Dou, R, Shui, Y & Dou, W 2016, 'A Dynamic Pricing Method for Carpooling Service Based on Coalitional Game Analysis', Proceedings of 2016 IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 18th IEEE International Conference on High Performance Computing and Communications (HPCC) / 14th IEEE International Conference on Smart City (Smart City) / 2nd IEEE International Conference on Data Science and Systems (DSS), IEEE, Sydney, Australia, pp. 78-85.
View/Download from: Publisher's site
Shu, Q, Guo, H, Liang, J, Che, L, Liu, J & Yuan, X 2016, 'EnsembleGraph: Interactive visual analysis of spatiotemporal behaviors in ensemble simulation data', 2016 IEEE Pacific Visualization Symposium (PacificVis), 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE, Taipei, Taiwan, pp. 56-63.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper presents a novel visual analysis tool, EnsembleGraph, which aims at helping scientists understand spatiotemporal similarities across runs in time-varying ensemble simulation data. We abstract the input data into a graph, where each node represents a region with similar behaviors across runs and nodes in adjacent time frames are linked if their regions overlap spatially. The visualization of this graph, combined with multiple-linked views showing details, enables users to explore, select, and compare the extracted regions that have similar behaviors. The driving application of this paper is the study of regional emission influences over tropospheric ozone, based on the ensemble simulations conducted with different anthropogenic emission absences using MOZART-4. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations.
Sohaib, O & Kang, K 2016, 'Assessing Web Content Accessibility of E-Commerce Websites for People with Disabilities', ISD, International Conference on Information Systems Development, University of Economics in Katowice / Association for Information Systems, Katowice, Poland, pp. 466-475.
View description>>
In recent years online shopping has grown significantly. Due to the rapid growth of technology, companies are also continuing to extend the functionality and design of their Business-to-Consumer (B2C) e-business websites. However, it is also important to adopt web accessibility standards, such as the Web Content Accessibility Guidelines, in B2C websites to increase the satisfaction of consumers of all ages and those with disabilities. This study analyses 30 Australian B2C websites against the Web Content Accessibility Guidelines (WCAG 2.0) using an automated web service. The results show that B2C websites in Australia are not paying attention to web accessibility for people with disabilities. However, e-commerce can succeed in meeting WCAG 2.0 by making B2C e-commerce websites accessible to consumers of all ages and those with disabilities. Recommendations are proposed to improve web accessibility for people with sensory (hearing and vision), motor (limited use of hands) and cognitive (language and learning) disabilities in B2C e-commerce websites.
Sohaib, O & Kang, K 2015, 'Individual Level Culture Effects on Multi-Perspective iTrust in B2C E-commerce', ACIS 2015 Proceedings - 26th Australasian Conference on Information Systems, Australasian Conference on Information Systems, ACIS, Adelaide, pp. 1-11.
View description>>
Consumer trust is one of the key obstacles to online vendors seeking to extend their consumer base across cultures. This research identifies culture at the individual consumer level. Based on the Stimulus-Organism-Response (SOR) model, this study focuses on the moderating role of the uncertainty avoidance culture value on privacy and security as cognitive influences, joy and fear as emotional influences (Stimuli), and individualism-collectivism on social networking services as a social influence, and subsequently on interpersonal trust (cognitive and affect-based trust) (Organism) towards purchase intention (Response). Data were collected in Australia and the Partial Least Squares (PLS) approach was used to test the research model. The findings confirmed the moderating role of individual-level culture on consumers' cognitive and affect-based trust in B2C e-commerce websites with diverse degrees of uncertainty avoidance and individualism.
Sood, K, Yu, S, Xiang, Y & Peng, S 2016, 'Control layer resource management in SDN-IoT networks using multi-objective constraint', 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA), 2016 IEEE 11th Conference on Industrial Electronics and Applications (ICIEA), IEEE, Hefei, China, pp. 71-76.
View/Download from: Publisher's site
Sun, F, Liu, B, Hou, F, Zhou, H, Gui, L & Chen, J 2016, 'Cournot equilibrium in the mobile virtual network operator oriented oligopoly offloading market', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Cellular networks are now facing severe traffic overload problems due to the explosive growth of mobile data traffic. One promising solution is to offload part of the traffic through WiFi. In this paper, we investigate an oligopoly offloading market, where several Mobile Virtual Network Operators (MVNOs) compete to serve end users using network infrastructure leased from the host Mobile Network Operator (MNO) on the wholesale market. First, we study the competitive interactions among the MVNOs, considering the overload problems of the offloading market. Specifically, we formulate the interactions as a non-cooperative inventory competition game, where each MVNO simultaneously determines the amount of cellular traffic it can provide to end users (named the traffic inventory of each MVNO in this paper). We analyze and derive the existence of the Cournot equilibrium using game theory. Furthermore, we study the impact of the MNO's wholesale price strategy on the market equilibrium. Based on this analysis, we find the optimal initial inventory strategy for these competitors according to the Cournot equilibrium. Finally, our simulations present the process of achieving the market equilibrium and illustrate the impact of the host MNO on the MVNOs.
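The equilibrium concept the paper builds on can be computed in closed form in the textbook setting of linear inverse demand P(Q) = a - b*Q and constant marginal costs. The sketch below solves the first-order conditions for that textbook game; the paper's inventory-competition game is richer, so this only illustrates the underlying concept with invented parameters.

```python
# Textbook n-firm Cournot equilibrium from the first-order conditions.
import numpy as np

def cournot_equilibrium(a, b, costs):
    """Profit_i = (a - b*Q)*q_i - c_i*q_i; FOC: b*(Q + q_i) = a - c_i."""
    n = len(costs)
    A = b * (np.ones((n, n)) + np.eye(n))   # coefficient matrix of the FOCs
    rhs = a - np.asarray(costs, dtype=float)
    return np.linalg.solve(A, rhs)

q = cournot_equilibrium(a=100.0, b=1.0, costs=[10.0, 10.0, 20.0])
P = 100.0 - q.sum()
print("equilibrium quantities:", q.round(2), "market price:", round(P, 2))
```

With these parameters the two low-cost firms each produce 25 units, the high-cost firm 15, and the market clears at a price of 35, so each firm's FOC b(Q + q_i) = a - c_i holds exactly.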
Sun, G, Cui, T, Beydoun, G, Shen, J & Chen, S 2016, 'Profiling and Supporting Adaptive Micro Learning on Open Education Resources', CBD, International Conference on Advanced Cloud and Big Data, IEEE Computer Society, Chengdu, China, pp. 158-163.
View/Download from: Publisher's site
View description>>
It has been found that learners prefer to use a micro learning mode to conduct learning activities through open educational resources (OERs). However, adaptive micro learning is scarcely supported by current OER platforms. In this paper we focus on profiling an effective micro learning process, which is central to establishing the raw materials and setting up rules for the final adaptive process. This work consists of two parts. First, we conducted an educational data mining and learning analysis study to discover the patterns and rules in micro learning through OERs. Then, based on its findings, we profiled features of both learners and OERs to reveal the full learning story in order to support the decision-making process. Incorporating educational data mining and learning analysis, a cloud-based architecture for Micro Learning as a Service (MLaaS) was designed to integrate all necessary procedures into a complete service for delivering micro OERs. The MLaaS also provides a platform for resource sharing and exchange in a peer-to-peer learning environment. The working principle of a key step, namely the computational decision-making of micro OER adaptation, is also introduced.
Tawk, T, Al-Kilidar, H & Bagia, R 2016, 'Skills for Managing Virtual Projects: Are they Gained Through Graduate Project Management Programs?', 27th Annual Conference of the Australasian Association for Engineering Education: AAEE 2016, AAEE - Annual Conference of Australasian Association for Engineering Education, Australasian Association for Engineering Education, Coffs Harbour, Australia.
Thuy Do, QN, Zhilin, A, Junior, CZP, Wang, G & Hussain, FK 2016, 'A network-based approach to detect spammer groups', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, BC, Canada, pp. 3642-3648.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Online reviews are nowadays an important source of information for consumers evaluating online services and products before deciding which product and which provider to choose. Online reviews therefore have significant power to influence consumers' purchase decisions. Aware of this, an increasing number of companies have organized spammer review campaigns in order to promote their products and gain an advantage over their competitors by manipulating and misleading consumers. To make sure the Internet remains a reliable source of information, we propose a method to identify both individual and group spamming reviews by assigning a suspicion score to each user. The proposed method is a network-based approach combined with clustering techniques. We demonstrate the efficiency and effectiveness of our approach on a real-world, manipulated dataset that contains over 8000 restaurants and 600,000 restaurant reviews from the TripAdvisor website. We tested our method in three scenarios. The method was able to detect all spammers in two of the scenarios; however, it did not detect all of them in the last scenario.
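In the spirit of the network-based approach above, the sketch below links reviewers who rate many of the same restaurants and scores the resulting clusters. The co-review threshold and suspicion score are invented for illustration; they are not the paper's scoring rule.

```python
# Toy reviewer graph: edges connect users with overlapping review sets;
# connected components become candidate spammer groups.
import itertools
import networkx as nx

reviews = {  # reviewer -> set of reviewed restaurants (toy data)
    "u1": {"r1", "r2", "r3"}, "u2": {"r1", "r2", "r3"},
    "u3": {"r1", "r2"},       "u4": {"r7"},  "u5": {"r8", "r9"},
}

G = nx.Graph()
G.add_nodes_from(reviews)
for a, b in itertools.combinations(reviews, 2):
    overlap = len(reviews[a] & reviews[b])
    if overlap >= 2:                       # assumed co-reviewing threshold
        G.add_edge(a, b, weight=overlap)

for group in nx.connected_components(G):
    members = sorted(group)
    pairs = list(itertools.combinations(members, 2))
    # invented suspicion score: average pairwise overlap within the group
    score = (sum(G[a][b]["weight"] for a, b in pairs if G.has_edge(a, b))
             / max(len(pairs), 1))
    print(members, "suspicion:", round(score, 2))
```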
Tian, F, Liu, B, Xiong, J & Gui, L 2016, 'Movement-based incentive for cellular traffic offloading through D2D communications', 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2016 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to the wide variety of applications for smartphones, mobile data traffic is growing at an unprecedented rate, and cellular networks are currently suffering from traffic overload. Offloading part of the cellular traffic through opportunistic contact between mobile devices is a promising solution to the overload problem. However, due to the uneven distribution of devices and the regular mobility of smartphone users, contacts between mobile devices are opportunistic, and cellular traffic offloading can therefore perform poorly, i.e., a relay user may contact other mobile users with only small probability. In this paper, we are the first to propose a movement-based incentive mechanism for cellular traffic offloading, in which we control the mobility of relay users to improve offloading performance. The movement-based incentive mechanism contains a relay user selection algorithm and a payment determination algorithm. Compared with existing solutions, our proposed movement-based incentive mechanism achieves better performance.
Tibben, W, Brown, RBK, Beydoun, G & Zamani, R 2016, 'Is consensus a viable concept to justify use of online collaborative networks in multi-stakeholder governance?', 2016 49th Hawaii International Conference on System Sciences (HICSS), Hawaii International Conference on System Sciences (HICSS), IEEE, Kauai, USA, pp. 4665-4674.
View/Download from: Publisher's site
View description>>
The adoption of multi-stakeholder decision-making processes using online collaborative technologies for Internet governance has facilitated the participation of stakeholders from many developing countries in decision making within organizations such as ISOC and ICANN. One important underlying rationale for such arrangements is the notion of consensus. The paper first uses the work of Arrow to question whether consensus is a theoretically justifiable concept on which to base multi-stakeholder governance. It then draws on Arrow's insights to develop an analytical framework that identifies expertise and authority as two key factors in the analysis of online decision making. The paper presents a conjecture that a significant challenge in ensuring productive multi-stakeholder governance is the set of practices that govern the ways in which authority and expertise interact. To that end, two potential sources of leadership are defined within online collaborative networks: positional leadership and thought leadership.
Tonelli, D, Pietroni, N, Cignoni, P & Scopigno, R 2016, 'Design and Fabrication of Grid-shells Mockups', STAG, Eurographics Association, pp. 21-27.
van Gennip, D, Orth, D, Imtiaz, MA, van den Hoven, E & Plimmer, B 2016, 'Tangible cognition', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Tasmania, Australia, pp. 662-665.
View/Download from: Publisher's site
View description>>
This workshop will explore the relationship between HCI using tangible user interfaces (TUIs) and cognition. We see exciting opportunities for tangible interaction to address some of the cognitive challenges of concern to the HCI community, in areas such as education, healthcare, games, reminiscing and reflection, and community issues. Drawing together the Australasian community, with those from further afield, we hope to strengthen research and build a local community in this exciting and rapidly developing field. Participation is invited from researchers working in tangible user interfaces or those interested in cognition and interaction. During the workshop the majority of the time will be spent in small group discussions and brainstorming solutions.
van Gennip, D, van den Hoven, E & Markopoulos, P 2016, 'The Phenomenology of Remembered Experience', Proceedings of the European Conference on Cognitive Ergonomics, ECCE '16: European Conference on Cognitive Ergonomics, ACM, Nottingham, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
There is a growing interest in interactive technologies that support remembering by considering functional, experiential, and emotional support to their users. Design-driven research benefits from an understanding of how people experience autobiographical remembering. We present a phenomenological study in which twenty-two adults were interviewed using the repertory grid technique; we aimed at soliciting personal constructs that characterize people's remembered experiences. Inductive coding revealed that 77.8% of identified constructs could be reliably coded in five categories referring to contentment, confidence/unease, social interactions, reflection, and intensity. These results align with earlier classifications of personal constructs and models of human emotion. The categorization derived from this study provides an empirically founded characterization of the design space of technologies for supporting remembering. We discuss its potential value as a tool for evaluating interactive systems in relation to personal and social memory talk, and outline future improvements.
Versteeg, M, van den Hoven, E & Hummels, C 2016, 'Interactive Jewellery', Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Eindhoven, The Netherlands, pp. 44-52.
View/Download from: Publisher's site
View description>>
Many current wearables have a technology-driven background: the focus is primarily on functionality, while their possible personal and social-cultural value is underappreciated. We think that developing wearables from a jewellery perspective can compensate for this. The personal and social-cultural values embodied by traditional jewellery are often tightly connected to their function as memento. In this paper we reflect from a jewellery perspective, a memory-studies perspective and a TEI perspective on three design proposals for interactive jewellery. We identify 1) drawing inspiration from interaction with traditional jewellery, 2) using relatively simple technology with high experiential qualities, 3) representing data abstractly and poetically, and 4) storing data uniquely on the digital jewel as possible design directions.
Voinov, A, Pierce, S & Barreteau, O 1970, 'Stream D sessions', Environmental Modelling and Software for Supporting a Sustainable Future: Proceedings of the 8th International Congress on Environmental Modelling and Software (iEMSs 2016), p. 803.
Wakefield, J, Tyler, J, Dyson, L & Frawley, J 1970, 'Implications of Tablet Computing Annotation and Sharing Technology on Student Learning', American Accounting Association Annual Meeting, New York.
Wang, C, Zhang, W, Shu, C-C & Dong, D 1970, 'Learning a control field for simultaneous state transformation in CO molecules', 2016 12th World Congress on Intelligent Control and Automation (WCICA), 2016 12th World Congress on Intelligent Control and Automation (WCICA), IEEE, pp. 1180-1184.
View/Download from: Publisher's site
Wang, D, Deng, S, Zhang, X & Xu, G 1970, 'Learning Music Embedding with Metadata for Context Aware Recommendation', Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR'16: International Conference on Multimedia Retrieval, ACM, New York, USA, pp. 249-253.
View/Download from: Publisher's site
View description>>
© 2016 ACM. Contextual factors can remarkably benefit music recommendation and retrieval tasks. However, how to acquire and utilize contextual information still needs to be studied. In this paper, we propose a context-aware music recommendation approach that can recommend music appropriate to users' contextual preferences. In analogy to matrix factorization methods for collaborative filtering, the proposed approach does not require songs to be described by features beforehand; instead, it learns embeddings of music pieces (vectors in a low-dimensional continuous space) from music playing records and the corresponding metadata, and infers users' general and contextual preferences for music from their playing records with the learned embeddings. Our approach can then recommend appropriate music pieces. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods.
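As an aside, the embedding idea described above can be approximated with off-the-shelf tools. The following is a minimal sketch, not the authors' implementation: it learns song vectors word2vec-style from playing sessions, with metadata and context appended as extra tokens; all identifiers below are hypothetical.

```python
from gensim.models import Word2Vec

# Toy playing records: each session is a sequence of song IDs plus
# metadata/context tokens (all identifiers here are hypothetical).
sessions = [
    ['song_12', 'song_7', 'artist_3', 'ctx_morning'],
    ['song_7', 'song_99', 'artist_8', 'ctx_running'],
    ['song_99', 'song_12', 'artist_3', 'ctx_morning'],
]
# Skip-gram embeddings in a low-dimensional continuous space.
model = Word2Vec(sessions, vector_size=32, window=5, min_count=1, sg=1, seed=0)
print(model.wv.most_similar('song_7'))  # songs/contexts close in embedding space
```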
Wang, F, He, Y, Qu, J, Xie, Q, Lin, Q, Ni, X, Chen, Y, Yu, R, Lin, C-T & Li, Y 1970, 'An audiovisual BCI system for assisting clinical communication assessment in patients with disorders of consciousness: A case study', 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Orlando, FL, USA, pp. 1536-1539.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. The JFK Coma Recovery Scale-Revised (JFK CRS-R), a behavioral scale, is often used for clinical assessment of patients with disorders of consciousness (DOC), such as patients in a vegetative state. However, there has been a high rate of clinical misdiagnosis with the JFK CRS-R because patients with severe brain injuries cannot provide sufficient behavioral responses. It is particularly difficult to evaluate communication function in DOC patients using the JFK CRS-R because a higher level of behavioral response is needed for communication assessments than for many other assessments, such as an auditory startle assessment. Brain-computer interfaces (BCIs), which provide control and communication by detecting changes in brain signals, can be used to evaluate patients with DOC without the need for behavioral expression. In this paper, we propose an audiovisual BCI system to supplement the JFK CRS-R in assessing the communication ability of patients with DOC. In the graphical user interface of the BCI system, two word buttons ('Yes' and 'No' in Chinese) were randomly displayed on the left and right sides and flashed in an alternating manner. When a word button flashed, its corresponding spoken word was broadcast from the ipsilateral headphone. The use of semantically congruent audiovisual stimuli improves the detection performance of the BCI system. Similar to the JFK CRS-R, several situation-orientation questions were presented one by one to patients with DOC. For each question, the patient was required to provide his/her answer by selectively focusing on an audiovisual stimulus (audiovisual 'Yes' or 'No'). As a case study, we applied our BCI system to a patient with DOC who was clinically diagnosed as being in a minimally conscious state (MCS). According to the JFK CRS-R assessment, this patient was unable to communicate consistently. However, he achieved a high accuracy of 86.5% in our BCI experiment. This result indicates his reliable com...
Wang, J, Jiang, C, Gao, L, Yu, S, Han, Z & Ren, Y 1970, 'Complex network theoretical analysis on information dissemination over vehicular networks', 2016 IEEE International Conference on Communications (ICC), ICC 2016 - 2016 IEEE International Conference on Communications, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. How to enhance communication efficiency and quality in vehicular networks is a critically important issue. As vehicular networks in dense cities grow ever larger, real-world datasets show that they essentially follow the complex network model. Meanwhile, extensive research on complex networks has shown that complex network theory can provide an accurate network model and make great contributions to network design, optimization and management. In this paper, we start by analyzing the characteristics of a taxi GPS dataset and then establish vehicle-to-infrastructure, vehicle-to-vehicle and hybrid communication models, respectively. Moreover, we propose a clustering algorithm for station selection, a traffic allocation optimization model and an information source selection model based on communication performance and complex network theory.
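A minimal sketch of the kind of contact-graph analysis such studies rest on, using networkx; the vehicle positions and radio range below are invented for illustration and are not from the paper's taxi GPS dataset.

```python
import networkx as nx

# Hypothetical vehicle positions in metres and an assumed V2V radio range.
positions = {0: (0, 0), 1: (50, 20), 2: (300, 40), 3: (70, 60)}
R = 100.0

G = nx.Graph()
G.add_nodes_from(positions)
for u in positions:
    for v in positions:
        dx = positions[u][0] - positions[v][0]
        dy = positions[u][1] - positions[v][1]
        if u < v and (dx * dx + dy * dy) ** 0.5 <= R:
            G.add_edge(u, v)  # vehicles within radio range can communicate

# Complex-network statistics of the V2V contact graph.
print(nx.degree_histogram(G), nx.average_clustering(G))
```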
Wang, S, Liu, W, Wu, J, Cao, L, Meng, Q & Kennedy, PJ 1970, 'Training deep neural networks on imbalanced data sets', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 4368-4374.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Deep learning has become increasingly popular in both academia and industry in recent years. Domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, and performance on imbalanced data is not well examined. Imbalanced data sets are widespread in the real world and pose great challenges for classification tasks. In this paper, we focus on classification with deep networks on imbalanced data sets. Specifically, a novel loss function called mean false error, together with its improved version mean squared false error, is proposed for training deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from the majority class and the minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach over conventional methods in classifying imbalanced data sets with deep neural networks.
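A minimal PyTorch sketch of a class-wise loss in the spirit of the mean false error described above: the error is averaged within each class before summing over classes, so the minority class is not swamped by the majority. The per-sample error term and normalisation here are assumptions; the paper's exact definition may differ.

```python
import torch
import torch.nn.functional as F

def mean_false_error(probs, targets, squared=False):
    """Sum over classes of the mean per-sample error within that class.
    squared=True gives a mean-squared-false-error-style variant."""
    onehot = F.one_hot(targets, num_classes=probs.shape[1]).float()
    per_sample = ((probs - onehot) ** 2).mean(dim=1)   # squared error per sample
    class_errors = [per_sample[targets == c].mean() for c in targets.unique()]
    fe = torch.stack(class_errors)
    return (fe ** 2).sum() if squared else fe.sum()

# Usage on an imbalanced toy batch (90% class 0, 10% class 1).
probs = torch.softmax(torch.randn(100, 2), dim=1)
targets = torch.cat([torch.zeros(90, dtype=torch.long),
                     torch.ones(10, dtype=torch.long)])
loss = mean_false_error(probs, targets, squared=True)
```

Because each class contributes one averaged term regardless of its size, gradients from the minority class carry the same weight as those from the majority class.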
Wang, X, Sheng, QZ, Yao, L, Li, X, Fang, XS, Xu, X & Benatallah, B 1970, 'Empowering Truth Discovery with Multi-Truth Prediction', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, IUPUI, Indianapolis, IN, pp. 881-890.
View/Download from: Publisher's site
Wang, X, Sheng, QZ, Yao, L, Li, X, Fang, XS, Xu, X & Benatallah, B 1970, 'Truth Discovery via Exploiting Implications from Multi-Source Data', Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM'16: ACM Conference on Information and Knowledge Management, ACM, IUPUI, Indianapolis, IN, pp. 861-870.
View/Download from: Publisher's site
Wang, Y, Qi, B, Dong, D & Petersen, IR 1970, 'An iterative algorithm for Hamiltonian identification of quantum systems', 2016 IEEE 55th Conference on Decision and Control (CDC), 2016 IEEE 55th Conference on Decision and Control (CDC), IEEE, pp. 2523-2528.
View/Download from: Publisher's site
Wang, Z, Chen, C, Li, H-X, Dong, D & Tarn, T-J 1970, 'A novel incremental learning scheme for reinforcement learning in dynamic environments', 2016 12th World Congress on Intelligent Control and Automation (WCICA), 2016 12th World Congress on Intelligent Control and Automation (WCICA), IEEE, pp. 2426-2431.
View/Download from: Publisher's site
Wei, C-S, Lin, Y-P, Wang, Y-T, Lin, C-T & Jung, T-P 1970, 'Transfer learning with large-scale data in brain-computer interfaces', 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Orlando, FL, USA, pp. 4666-4669.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Human variability in electroencephalogram (EEG) poses significant challenges for developing practical real-world applications of brain-computer interfaces (BCIs). The intuitive solution of collecting sufficient user-specific training/calibration data can be very labor-intensive and time-consuming, hindering the practicability of BCIs. To address this problem, transfer learning (TL), which leverages existing data from other sessions or subjects, has recently been adopted by the BCI community to build a BCI for a new user with limited calibration data. However, current TL approaches still require training/calibration data from each condition, which might be difficult or expensive to obtain. This study proposed a novel TL framework that could nearly eliminate the requirement for subject-specific calibration data by leveraging large-scale data from other subjects. The efficacy of this method was validated in a passive BCI that was designed to detect neurocognitive lapses during driving. With the help of large-scale data, the proposed TL approach outperformed the within-subject approach while considerably reducing the amount of calibration data required for each individual (∼1.5 min of data from each individual as opposed to a 90 min pilot session used in a standard within-subject approach). This demonstration might considerably facilitate the real-world applications of BCIs.
Wertheim, G, Luskin, M, Smith, C, Zhou, L, Harrison, J, Figueroa, M, Catchpoole, D, Aplenc, R, Tasian, S, Carroll, M & Master, SR 1970, 'Multi-Locus DNA Methylation Measured by xMELP Predicts Survival in Pediatric Patients with AML', MODERN PATHOLOGY, 105th Annual Meeting of the United-States-and-Canadian-Academy-of-Pathology, NATURE PUBLISHING GROUP, WA, Seattle, pp. 384A-384A.
Wertheim, G, Luskin, M, Smith, C, Zhou, L, Harrison, J, Figueroa, M, Catchpoole, D, Aplenc, R, Tasian, S, Carroll, M & Master, SR 1970, 'Multi-Locus DNA Methylation Measured by xMELP Predicts Survival in Pediatric Patients with AML', LABORATORY INVESTIGATION, 105th Annual Meeting of the United-States-and-Canadian-Academy-of-Pathology, NATURE PUBLISHING GROUP, WA, Seattle, pp. 384A-384A.
Wu, D, Hussain, F, Zhang, G, Lu, J, Unwin, J & Rance, G 1970, 'A Cloud-Based Comprehensive Health Information System Framework', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), WORLD SCIENTIFIC, pp. 612-617.
View/Download from: Publisher's site
View description>>
© 2016 by World Scientific Publishing Co. Pte. Ltd. Big data in the health domain bring great opportunities for health information system development. To effectively utilize big health data, three challenges need to be addressed: data heterogeneity; huge data volume and high velocity of data generation; and diverse user requirements. To this end, this paper proposes a cloud-based comprehensive health information system framework, which uses cloud computing techniques to manage and process big health data, and provides several data analysis and recommendation services to explore the data and extract value from them.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 1970, 'Agreement rate initialized maximum likelihood estimator for ensemble classifier aggregation and its application in brain-computer interface', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Budapest, Hungary, pp. 000724-000729.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Ensemble learning is a powerful approach to construct a strong learner from multiple base learners. The most popular way to aggregate an ensemble of classifiers is majority voting, which assigns a sample to the class that most base classifiers vote for. However, improved performance can be obtained by assigning weights to the base classifiers according to their accuracy. This paper proposes an agreement rate initialized maximum likelihood estimator (ARIMLE) to optimally fuse the base classifiers. ARIMLE first uses a simplified agreement rate method to estimate the classification accuracy of each base classifier from the unlabeled samples, then employs the accuracies to initialize a maximum likelihood estimator (MLE), and finally uses the expectation-maximization algorithm to refine the MLE. Extensive experiments on visually evoked potential classification in a brain-computer interface application show that ARIMLE outperforms majority voting, and also achieves better or comparable performance with several other state-of-the-art classifier combination approaches.
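A simplified sketch of the agreement-rate initialisation step for three binary classifiers, assuming conditional independence and class-symmetric accuracies (with d_i = 2p_i − 1, the pairwise agreement satisfies 2a_ij − 1 = d_i·d_j). The EM refinement the paper adds on top of this initialisation is omitted, so treat this only as the starting point, not ARIMLE itself.

```python
import numpy as np

def agreement_rate_accuracies(preds):
    """Estimate accuracies of three binary classifiers from pairwise
    agreement rates on unlabeled data (conditional independence assumed)."""
    a = lambda i, j: np.mean(preds[i] == preds[j])          # pairwise agreement
    d01, d02, d12 = 2*a(0, 1) - 1, 2*a(0, 2) - 1, 2*a(1, 2) - 1
    d = np.sqrt(np.array([d01*d02/d12, d01*d12/d02, d02*d12/d01]))
    return (1 + d) / 2                                      # p_i = (1 + d_i) / 2

def weighted_vote(preds, acc):
    """Log-odds weighted vote, the optimal rule for independent voters."""
    w = np.log(acc / (1 - acc))
    scores = w @ (2 * preds.astype(float) - 1)              # map {0,1} -> {-1,+1}
    return (scores > 0).astype(int)

preds = np.random.default_rng(0).integers(0, 2, size=(3, 500))  # toy predictions
acc = agreement_rate_accuracies(preds)
fused = weighted_vote(preds, acc)
```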
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 1970, 'Agreement Rate Initialized Maximum Likelihood Estimator for Ensemble Classifier Aggregation and Its Application in Brain-Computer Interface', 2016 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, HUNGARY, Budapest, pp. 724-729.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 1970, 'Offline EEG-based driver drowsiness estimation using enhanced batch-mode active learning (EBMAL) for regression', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Budapest, Hungary, pp. 000730-000736.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals. This paper considers offline analysis: given a pool of unlabeled EEG epochs recorded during driving, how do we optimally select a small number of them to label so that an accurate regression model can be built from them to label the rest? Active learning is a promising solution to this problem, but interestingly, to the best of our knowledge, it has not been used for regression problems in BCI so far. This paper proposes a novel enhanced batch-mode active learning (EBMAL) approach for regression, which improves upon a baseline active learning algorithm by increasing the reliability, representativeness and diversity of the selected samples to achieve better regression performance. We validate its effectiveness using driver drowsiness estimation from EEG signals. However, EBMAL is a general approach that can also be applied to many other offline regression problems beyond BCI.
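EBMAL itself is not reproduced here, but the representativeness/diversity part of batch-mode selection can be sketched with a cluster-then-pick heuristic: clustering spreads the batch across the pool (diversity) and choosing the point nearest each centroid keeps each pick typical (representativeness). This is an illustrative assumption, not the paper's algorithm, which also scores reliability.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_batch(X_pool, batch_size, random_state=0):
    """Return indices of one representative unlabeled sample per cluster."""
    km = KMeans(n_clusters=batch_size, n_init=10,
                random_state=random_state).fit(X_pool)
    # Distance of every pool point to every centroid, shape (N, batch_size).
    dists = np.linalg.norm(
        X_pool[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    return np.unique(dists.argmin(axis=0))   # nearest pool point per centroid

X_pool = np.random.default_rng(0).random((1000, 16))   # toy EEG features
to_label = select_batch(X_pool, batch_size=10)          # indices to hand to labeler
```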
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 1970, 'Spectral meta-learner for regression (SMLR) model aggregation: Towards calibrationless brain-computer interface (BCI)', 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Budapest, Hungary, pp. 000743-000749.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. To facilitate the transition of brain-computer interface (BCI) systems from laboratory settings to real-world application, it is very important to minimize or even completely eliminate the subject-specific calibration requirement. There has been active research on calibrationless BCI systems for classification applications, e.g., P300 speller. To our knowledge, there is no literature on calibrationless BCI systems for regression applications, e.g., estimating the continuous drowsiness level of a driver from EEG signals. This paper proposes a novel spectral meta-learner for regression (SMLR) approach, which optimally combines base regression models built from labeled data from auxiliary subjects to label offline EEG data from a new subject. Experiments on driver drowsiness estimation from EEG signals demonstrate that SMLR significantly outperforms three state-of-the-art regression model fusion approaches. Although we introduce SMLR as a regression model fusion in the BCI domain, we believe its applicability is far beyond that.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 1970, 'Spectral Meta-Learner for Regression (SMLR) Model Aggregation: Towards Calibrationless Brain-Computer Interface (BCI)', 2016 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, HUNGARY, Budapest, pp. 743-749.
Wu, S-L, Liu, Y-T, Chou, K-P, Lin, Y-Y, Lu, J, Zhang, G, Chuang, C-H, Lin, W-C & Lin, C-T 1970, 'A motor imagery based brain-computer interface system via swarm-optimized fuzzy integral and its application', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, BC, Canada, pp. 2495-2500.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A brain-computer interface (BCI) system provides a convenient means of communication between the human brain and a computer, applicable not only to healthy people but also to people who suffer from motor neuron diseases (MNDs). Motor imagery (MI) is one well-known basis for designing electroencephalography (EEG)-based real-life BCI systems. However, EEG signals are often contaminated with severe noise and various uncertain, imprecise and incomplete information streams. Therefore, this study proposes a spectrum ensemble based on a swarm-optimized fuzzy integral for integrating decisions from sub-band classifiers that are established by a sub-band common spatial pattern (SBCSP) method. First, SBCSP effectively extracts features from EEG signals, and multiple linear discriminant analysis (MLDA) is then employed for the MI classification task. Subsequently, particle swarm optimization (PSO) is used to regulate the subject-specific parameters, assigning optimal confidence levels to the classifiers used in the fuzzy integral during the fuzzy fusion stage of the proposed system. Moreover, BCI systems usually tend to have complex architectures, be bulky, and require time-consuming processing. To overcome this drawback, a wireless and wearable EEG measurement system is investigated in this study. Finally, our experimental results show that the proposed system produces significant improvement in terms of the receiver operating characteristic (ROC) curve. Furthermore, we demonstrate that a robotic arm can be reliably controlled using the proposed BCI system. This paper presents novel insights regarding the possibility of using the proposed MI-based BCI system in real-life applications.
Xiang, H, Xu, X, Zheng, H, Li, S, Wu, T, Dou, W & Yu, S 1970, 'An Adaptive Cloudlet Placement Method for Mobile Applications over GPS Big Data', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Mobile cloud computing provides powerful computing and storage capacity for managing GPS big data by offloading vast workloads to remote clouds. For mobile applications with urgent computing or communication deadlines, it is necessary to reduce the workload transmission latency between mobile devices and clouds. This can be technically achieved by deploying movable cloudlets co-located with Access Points (APs). However, it is non-trivial to place such movable cloudlets efficiently to enhance the cloud service for dynamic context-aware mobile applications. In view of this challenge, an adaptive cloudlet placement method for mobile applications over GPS big data is proposed in this paper. Specifically, the gathering regions of the mobile devices are identified based on position clustering, and the cloudlet destination locations are determined accordingly. In addition, the traces between the origin and destination locations of these mobile cloudlets are derived. Finally, the experimental results demonstrate that the proposed method is both effective and efficient.
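A minimal sketch of the position-clustering step described above, using DBSCAN to find gathering regions of devices and taking region centroids as candidate cloudlet destinations; the coordinates, eps and min_samples values below are invented for illustration and are not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical GPS fixes of mobile devices (latitude, longitude).
pts = np.array([[31.230, 121.470],
                [31.231, 121.471],
                [31.400, 121.600],
                [31.232, 121.469]])

# eps is in degrees here purely for illustration; real pipelines would
# typically project to metres first.
db = DBSCAN(eps=0.005, min_samples=2).fit(pts)

# Centroid of each gathering region (label -1 marks noise points).
centroids = [pts[db.labels_ == c].mean(axis=0)
             for c in set(db.labels_) if c != -1]
print(centroids)   # candidate cloudlet destination locations
```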
Xing, P, Zhang, C & Yu, S 1970, 'Service Quality Decision in Service Supply Chain Considering Supervision Behavior Based on Quantum Game', PROCEEDINGS OF 2016 INTERNATIONAL CONFERENCE ON MODELING, SIMULATION AND OPTIMIZATION TECHNOLOGIES AND APPLICATIONS (MSOTA2016), International Conference on Modeling, Simulation and Optimization Technologies and Applications (MSOTA), ATLANTIS PRESS, PEOPLES R CHINA, Xiamen, pp. 155-160.
Xu, X, Wang, W, Wu, T, Dou, W & Yu, S 1970, 'A Virtual Machine Scheduling Method for Trade-offs Between Energy and Performance in Cloud Environment', 2016 FOURTH INTERNATIONAL CONFERENCE ON ADVANCED CLOUD AND BIG DATA (CBD 2016), 4th International Conference on Advanced Cloud and Big Data (CBD), IEEE, PEOPLES R CHINA, Chengdu, pp. 246-251.
View/Download from: Publisher's site
Xu, X, Wang, W, Wu, T, Dou, W & Yu, S 1970, 'A Virtual Machine Scheduling Method for Trade-Offs Between Energy and Performance in Cloud Environment', 2016 International Conference on Advanced Cloud and Big Data (CBD), 2016 International Conference on Advanced Cloud and Big Data (CBD), IEEE, pp. 246-251.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Cloud computing promises on-demand resource provisioning for customers and has drawn great attention from academia and industry, which accommodate their applications in cloud platforms. Currently, cloud datacenters consume a huge amount of power, which has become a big concern worldwide. Live virtual machine (VM) migration provides potential opportunities to achieve energy savings. However, it remains a challenge to conduct VM scheduling in an energy-efficient and performance-guaranteed manner, since VM migrations bring about both energy conservation and VM performance degradation. In this paper, a VM scheduling method for trade-offs between energy and performance in the cloud environment is proposed to address this challenge. Specifically, a joint optimization model is designed to formalize the problem, and a corresponding energy- and performance-aware VM scheduling method is proposed to determine which VMs should be migrated and where they should be migrated, aiming at reducing energy consumption and mitigating performance degradation. Simulation results demonstrate that the proposed method is both effective and efficient.
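As a toy illustration of such an energy/performance trade-off (not the paper's joint optimization model), a linear scalarisation can rank candidate migrations and keep only net-positive moves; the weight alpha and all estimates below are hypothetical.

```python
def pick_migrations(candidates, alpha=0.7):
    """Rank candidate VM migrations by a weighted sum of estimated energy
    saving and performance penalty, keeping only net-positive moves.
    candidates: iterable of (vm, target_host, energy_saving, perf_penalty),
    with both estimates normalised to [0, 1]."""
    scored = sorted(
        ((alpha * e - (1 - alpha) * p, vm, host)
         for vm, host, e, p in candidates),
        reverse=True,
    )
    return [(vm, host) for score, vm, host in scored if score > 0]

# Usage with hypothetical estimates: vm1 is worth moving, vm2 is not.
plan = pick_migrations([('vm1', 'hostB', 0.8, 0.2),
                        ('vm2', 'hostC', 0.1, 0.6)])
print(plan)   # [('vm1', 'hostB')]
```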
Xue, S, Lu, J, Wu, J, Zhang, G & Xiong, L 1970, 'Multi-instance graphical transfer clustering for traffic data learning', 2016 International Joint Conference on Neural Networks (IJCNN), 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, Canada, pp. 4390-4395.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. To better model complex real-world data and develop robust features that capture relevant information, we usually employ unsupervised feature learning to learn a layer of feature representations from unlabeled data. However, developing domain-specific features for each task is expensive, time-consuming and requires expertise in the data. In this paper, we introduce multi-instance clustering and graphical learning to unsupervised transfer learning. For better clustering efficiency, we propose a set of algorithms for traffic data learning: instance feature representation, distance calculation for multi-instance clustering, multi-instance graphical cluster initialisation, multi-instance multi-cluster updating, and graphical multi-instance transfer clustering (GMITC). Finally, we evaluate the proposed algorithms on the Eastwest datasets against several baselines. The experimental results indicate that our proposed algorithms achieve higher clustering accuracy and much faster processing.
Yang, C, Zhu, F & Zhang, G 1970, 'SAO-Based Topic Modeling for Competitive Technical Intelligence: A Case Study in Graphene', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), WORLD SCIENTIFIC, pp. 155-161.
View/Download from: Publisher's site
View description>>
Competitive technical intelligence (CTI) seeks to identify key technologies, current R&D emphases, and key players for intellectual and policy reasons in academia and industry. Many studies apply Latent Dirichlet Allocation (LDA) to CTI mining based on the “bag-of-words”, “bag-of-n-grams” or “bag-of-phrases” assumptions, which produce topics at the word/phrase level. However, technological words/phrases are not enough to explore the problem-and-solution patterns hidden in technological documents, which are the most important technology intelligence for solution-oriented CTI mining. In this paper, we propose a Subject-Action-Object (SAO)-based LDA model to identify underlying topics represented by related SAOs and to explore the problem-and-solution patterns embodied in SAO structures. The SAO-based LDA model is built on the “bag-of-SAO” assumption and performs technology analysis at the concept level. The validity and feasibility of the proposed method are tested by a case study in graphene technology.
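The “bag-of-SAO” assumption can be sketched with a standard LDA implementation by treating each extracted subject-action-object triple as a single token; the triples below are invented for illustration and the paper's SAO extraction step is assumed to have already run.

```python
from gensim import corpora, models

# Toy documents, each a "bag of SAOs": every subject-action-object triple
# is joined into one token (all triples here are hypothetical).
docs = [
    ['graphene|improves|conductivity', 'oxide|reduces|defects'],
    ['graphene|improves|conductivity', 'laser|patterns|film'],
    ['oxide|reduces|defects', 'laser|patterns|film'],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Standard LDA over SAO tokens yields topics at the concept level.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
print(lda.print_topics())
```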
Yang, D, Wu, Z, Wang, X, Cao, J & Xu, G 1970, 'Predicting Replacement of Smartphones with Mobile App Usage', Web Information Systems Engineering – WISE 2016 (LNCS), International Conference on Web Information Systems Engineering, Springer International Publishing, Shanghai, China, pp. 343-351.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2016. Identifying customers who intend to replace their smartphone can help with precision marketing and thus bring significant financial gains to cell phone retailers. In this paper, we provide a study of exploiting mobile app usage for predicting users who will change their phone in the future. We first analyze the characteristics of mobile log data and develop the temporal bag-of-apps model, which can transform the raw data into app usage vectors. We then formalize the prediction problem, present the hazard-based prediction model, and derive the inference procedure. Finally, we evaluate both the data model and the prediction model on real-world data. The experimental results show that the temporal usage data model can effectively capture the unique characteristics of mobile log data, and the hazard-based prediction model is thus much more effective than traditional classification methods. Furthermore, the hazard model is explainable; that is, it can easily show how the replacement of smartphones relates to mobile app usage over time.
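A minimal survival-analysis sketch in the spirit of the hazard-based model above, using the lifelines library: time-to-replacement is the duration and a replacement event is the "death". All feature names and figures are hypothetical, and the authors' actual model may differ.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-user table: app-usage features, observed weeks until
# replacement (or censoring), and an event flag (1 = phone was replaced).
df = pd.DataFrame({
    'games_hours':   [5.0, 0.5, 3.2, 8.1, 1.0],
    'shopping_apps': [2,   0,   1,   4,   0],
    'weeks':         [10,  52,  30,  8,   45],
    'replaced':      [1,   0,   1,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col='weeks', event_col='replaced')
risk = cph.predict_partial_hazard(df)   # higher = more likely to replace soon
cph.print_summary()                     # coefficients show how usage relates
                                        # to replacement hazard over time
```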
Yao, L, Benatallah, B, Wang, X, Tran, NK & Lu, Q 1970, 'Context as a Service: Realizing Internet of Things-Aware Processes for the Independent Living of the Elderly', SERVICE-ORIENTED COMPUTING, (ICSOC 2016), 14th International Conference on Service-Oriented Computing (ICSOC), Springer International Publishing, Banff, CANADA, pp. 763-779.
View/Download from: Publisher's site
Ye, T, Hao, Y, Wang, Z, Lai, C, Chen, S, Li, Z, Liang, J & Yuan, X 1970, 'Behavior Analysis through Collaborative Visual Exploration on Trajectory Data', ChinaVis2016, ChinaVis2016, Changsha, China.
Pan, Y, Dong, D & Petersen, IR 1970, 'A direct method for analysis and synthesis of a decoherence-free mode in quantum linear systems', 2016 American Control Conference (ACC), 2016 American Control Conference (ACC), IEEE, pp. 4760-4764.
View/Download from: Publisher's site
Zeng, X, Lu, J, Kerre, EE, Martinez, L & Koehl, L 1970, 'Foreword', Uncertainty Modelling in Knowledge Engineering and Decision Making - Proceedings of the 12th International FLINS Conference, FLINS 2016, WORLD SCIENTIFIC PUBL CO PTE LTD, pp. v-vi.
View/Download from: Publisher's site
Zhan, Y, Xu, D, Yu, H & Yu, S 1970, 'Breaking the Split Incentive Hurdle via Time-Varying Monetary Rewards', 2016 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2016 - 2016 IEEE Global Communications Conference, IEEE, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Demand response is widely employed by today's data centers in response to rising electricity costs. To incentivize users of data centers to participate in demand response programs, i.e., to break the split-incentive hurdle, some prior research has proposed market-based mechanisms such as dynamic pricing and static monetary rewards. However, these mechanisms are either intrusive or unfair. In this paper, we use time-varying rewards to incentivize users of data centers to grant time-shifting of their requests. Within a game-theoretic framework, we model and analyze the game between a single data center and its users. Further, we extend our design by integrating it with other emerging practical demand response strategies: server shutdown and local renewable energy generation. Using real-world data traces, we show that a data center with our design can effectively reduce its peak electricity load and overall electricity cost without reducing its profit, compared with the current practice where no incentive mechanism is established.
Zhang, T, Gu, M, Zhang, G & Lu, J 1970, 'A fast load pattern extraction approach based on dimension reduction and sampling', 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Vancouver, BC, Canada, pp. 1253-1258.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. This paper proposes a fast load pattern extraction approach to address the time-consuming nature of traditional k-means clustering for large volumes of load curves. The approach, based on dimension reduction and sampling, segments each load curve and averages sampled characteristic points to reduce its dimensions, then reduces the overall size of the data set using representative random sampling. The k-means clustering algorithm is used to extract load patterns from the representative data set, which are then used to classify the full data set. Reducing the size and dimension of the data set allows the use of a less complex algorithm and thus greatly improves clustering speed. The validity of the approach is proven by experiments designed to evaluate the trade-off between complexity and consistency.
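A minimal sketch of the pipeline described above: segment-and-average dimension reduction, random sampling, k-means on the sample, then classification of the full set. The segment count, sample size and number of clusters below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_average(curves, n_segments):
    """Reduce dimension by averaging each curve within equal-width segments."""
    segments = np.array_split(curves, n_segments, axis=1)
    return np.column_stack([s.mean(axis=1) for s in segments])

rng = np.random.default_rng(0)
curves = rng.random((100_000, 96))        # e.g. 96 half-hourly readings per day
reduced = segment_average(curves, 12)     # 96 -> 12 dimensions

# Cluster a representative random sample, then label the full data set.
idx = rng.choice(len(reduced), 5_000, replace=False)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(reduced[idx])
labels = km.predict(reduced)              # load pattern of every curve
```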
Zhang, W, Dong, D & Petersen, IR 1970, 'Learning control for maximizing the purity at a fixed time in an open quantum system', 2016 Australian Control Conference (AuCC), 2016 Australian Control Conference (AuCC), IEEE, pp. 387-390.
View/Download from: Publisher's site
Zhang, Z, Huang, K, Tan, T, Yang, P & Li, J 1970, 'ReD-SFA: Relation Discovery Based Slow Feature Analysis for Trajectory Clustering', 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, USA, pp. 752-760.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. For spectral embedding/clustering, how to construct a relation graph that reflects the intrinsic structure of the data remains an open problem. In this paper, we propose an approach, named Relation Discovery based Slow Feature Analysis (ReD-SFA), for simultaneous feature learning and graph construction. Given an initial graph with only a few nearest but most reliable pairwise relations, new reliable relations are discovered under an assumption of reliability preservation, i.e., reliable relations preserve their reliability in the learnt projection subspace. We formulate the idea as a cross entropy (CE) minimization problem that reduces the discrepancy between two Bernoulli distributions parameterized by the updated distances and the existing relation graph, respectively. Furthermore, to overcome imbalanced distributions of samples, a Boosting-like strategy is proposed to balance the discovered relations over all clusters. To evaluate the proposed method, extensive experiments are performed on various trajectory clustering tasks, including motion segmentation, time series clustering and crowd detection. The results demonstrate that ReD-SFA can discover reliable intra-cluster relations with high precision, and competitive clustering performance is achieved in comparison with the state-of-the-art.
Zhang, Z, Oberst, S & Lai, JCS 1970, 'Influence of contact condition and sliding speed on friction-induced instability', Icsv 2016 23rd International Congress on Sound and Vibration from Ancient to Modern Acoustics, International Congress on Sound and Vibration: From Ancient to Modern Acoustics (ICSV), International Institute of Acoustics and Vibration, Athens, Greece.
View description>>
Brake squeal, defined as audible noise above 1 kHz, is triggered by energy provided in the contact area between the pad and the disc and by friction-induced instabilities. Owing to customers' demand for reduced vehicle noise and the increasing use of light composite materials in cars, squealing brakes remain a major concern to the automotive industry because of warranty-related claims. The prediction of disc brake squeal propensity is as challenging as ever. Although friction-induced instabilities are inherently nonlinear and the brake system's operating and environmental conditions keep changing during squeal, mostly linear and steady-state methods are used for the analysis of brake squeal propensity. While many different instability mechanisms have been identified, their interactions and the resulting dynamics are not yet fully understood. Linear instability predictions suffer from over- and under-predictions and have to be complemented by extensive noise dynamometer or in-vehicle tests. Recent studies indicate that frictional contact is multi-scaled in nature, highly sensitive and inhomogeneous. Very high local pressures and partial contact separations in the contact interface further complicate its numerical modelling. By studying an analytical model of 3 × 3 friction oscillators using three different friction laws (Amontons-Coulomb, the velocity-dependent and the LuGre friction model) in point contact with a sliding rigid plate and incorporating uncertainties in the contact condition, robustly unstable vibration modes have been identified in our previous research. Here, the number and the combination of friction oscillators engaged in contact are randomised to model imperfect contact. In addition, the effect of variation in the plate's sliding velocity on the instability analysis is investigated with a randomised friction coefficient of the Amontons-Coulomb friction model. Results of instability prediction and net work calculations are used to illust...
Zheng, D, Huo, H, Chen, S-Y, Xu, B & Liu, L 1970, 'LTMF: Local-Based Tag Integration Model for Recommendation', Springer International Publishing, pp. 296-302.
View/Download from: Publisher's site
Zhu, X & Xu, G 1970, 'Applying Visual Analytics on Traditional Data Mining Process: Quick Prototype, Simple Expertise Transformation, and Better Interpretation', 2016 4th International Conference on Enterprise Systems (ES), 2016 4th International Conference on Enterprise Systems (ES), IEEE, Melbourne, Australia, pp. 208-213.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Due to a lack of experience, businesses might not be confident about the completeness of their proposed data mining (DM) project objectives at an early stage. Moreover, business domain expertise usually shrinks when handed over to data analysts, although this expertise ought to contribute throughout the whole project. In addition, the outcomes of a DM project might fail to translate into actionable advice because their interpretation is hard to understand and, as a result, unconvincing to apply in practice. To fill these three gaps, Visual Analytics (VA) tools are applied at different stages to optimize the traditional data analytics process. In my practice, VA tools have offered both easy access to quick insights for evaluating a project objective's viability, and a bidirectional channel between data analysts and stakeholders that breaks the background barrier. Consequently, more applicable outcomes and better client satisfaction are gained.
Zijlema, A, van den Hoven, E & Eggen, B 1970, 'Companions', Proceedings of the 28th Australian Conference on Computer-Human Interaction - OzCHI '16, the 28th Australian Conference, ACM Press, Launceston, Australia, pp. 170-174.
View/Download from: Publisher's site
View description>>
Cherished utilitarian objects can provide comfort and pleasure through their associations with our personal past and the time and energy we have invested in and with them. In this paper, we present a specific type of object relationship, which we call the companion. Companions are mundane objects that have accrued meaning over time and evoke tiny pleasures when we interact with them. We then draw insights from the HCI research literature on digital possessions and attachment that could be applied to enhance digital products or processes with companion qualities. We argue for the importance of designing for digital companionship in everyday-use products, for example by enabling the accrual of subtle marks of the owner's past with the product. We wish to evoke thought and awareness of the role of companions, and of how this relationship can be supported in digital products.
Zuo, H, Zhang, G, Behbood, V, Lu, J, Pedrycz, W & Zhang, T 1970, 'Fuzzy Transfer Learning in Data-Shortage and Rapidly Changing Environments', Uncertainty Modelling in Knowledge Engineering and Decision Making, Conference on Uncertainty Modelling in Knowledge Engineering and Decision Making (FLINS 2016), WORLD SCIENTIFIC, Roubaix, FRANCE, pp. 175-180.
View/Download from: Publisher's site
Zurita, G, Merigó, JM & Lobos-Ossandón, V 1970, 'A bibliometric analysis of journals in educational research', Lecture Notes in Engineering and Computer Science, pp. 403-408.
View description>>
The influence and impact of journals in the scientific community are a fundamental question for researchers worldwide because they measure the importance and quality of a publication. This study analyses all the journals currently ranked in any educational research category in Web of Science by using bibliometric indicators. The aim is to provide a general overview of their impact and influence between 1989 and 2013. The journals are divided into seven research categories that represent the whole field of educational research. The analysis also develops a general comparison across all the categories. The results show that many interdisciplinary journals obtain a broader impact than the core journals, although the latter are also well positioned in the field.