Abdul Hanan, AH, Yazid Idris, M, Kaiwartya, O, Prasad, M & Ratn Shah, R 2017, 'Real traffic-data based evaluation of vehicular traffic environment and state-of-the-art with future issues in location-centric data dissemination for VANETs', Digital Communications and Networks, vol. 3, no. 3, pp. 195-210.
© 2017 Chongqing University of Posts and Telecommunications. Extensive investigation has been performed in location-centric or geocast routing protocols for reliable and efficient dissemination of information in Vehicular Ad hoc Networks (VANETs). Various location-centric routing protocols have been suggested in the literature for road safety ITS applications considering urban and highway traffic environments. This paper characterizes vehicular environments based on real traffic data and investigates the evolution of location-centric data dissemination. The current study is carried out with three main objectives: (i) to analyze the impact of dynamic traffic environments on the design of data dissemination techniques, (ii) to characterize location-centric data dissemination in terms of the functional and qualitative behavior of protocols, their properties, and their strengths and weaknesses, and (iii) to identify some future research directions in location-based information dissemination. Vehicular traffic environments have been classified into three categories based on physical characteristics such as speed, inter-vehicular distance, neighborhood stability, and traffic volume. Real traffic data is considered to analyze on-road traffic environments based on the measurement of physical parameters and weather conditions. Design issues are identified in incorporating physical parameters and weather conditions into data dissemination. Functional and qualitative characteristics of location-centric techniques are explored considering urban and highway environments. Comparative analysis of location-centric techniques is carried out for urban and highway environments individually, based on some unique and common characteristics of the environments. Finally, some future research directions are identified in the area based on the detailed investigation of traffic environments and location-centric data dissemination techniques.
Ahadi, A, Hellas, A & Lister, R 2017, 'A Contingency Table Derived Method for Analyzing Course Data', ACM Transactions on Computing Education, vol. 17, no. 3, pp. 1-19.
We describe a method for analyzing student data from online programming exercises. Our approach uses contingency tables that combine whether or not a student answered an online exercise correctly with the number of attempts that the student made on that exercise. We use this method to explore the relationship between student performance on online exercises done during semester with subsequent performance on questions in a paper-based exam at the end of semester. We found that it is useful to include data about the number of attempts a student makes on an online exercise.
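The contingency-table idea described above can be sketched in a few lines; the student records and the cut-off of two attempts below are illustrative assumptions for the sketch, not the authors' actual data or thresholds.

```python
from collections import Counter

# Hypothetical per-student records for one online exercise:
# (answered_correctly, number_of_attempts). Illustrative data only.
records = [
    (True, 1), (True, 1), (True, 3), (True, 4),
    (False, 2), (False, 5), (True, 2), (False, 6),
]

# 2x2 contingency table: correctness vs. few/many attempts.
# The cut-off of 2 attempts is an assumption for illustration.
table = Counter()
for correct, attempts in records:
    row = "correct" if correct else "incorrect"
    col = "few" if attempts <= 2 else "many"
    table[(row, col)] += 1

print(table[("correct", "few")], table[("correct", "many")],
      table[("incorrect", "few")], table[("incorrect", "many")])  # 3 2 1 2
```

Combining correctness with attempt counts in one table is what lets the method separate students who succeed quickly from those who succeed only after many tries.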
Arodudu, O, Helming, K, Wiggering, H & Voinov, A 2017, 'Bioenergy from Low-Intensity Agricultural Systems: An Energy Efficiency Analysis', Energies, vol. 10, no. 1, pp. 29-29.
In light of possible future restrictions on the use of fossil fuel, due to climate change obligations and the continuous depletion of global fossil fuel reserves, the search for alternative renewable energy sources is expected to be an issue of great concern for policy stakeholders. This study assessed the feasibility of bioenergy production under relatively low-intensity conservative, eco-agricultural settings (as opposed to those produced under high-intensity, fossil-fuel-based industrialized agriculture). Estimates of the net energy gain (NEG) and the energy return on energy invested (EROEI) obtained from a life cycle inventory of the energy inputs and outputs involved reveal that the energy efficiency of bioenergy produced in low-intensity eco-agricultural systems could be as much as 448.5-488.3 GJ·ha⁻¹ of NEG and an EROEI of 5.4-5.9 for maize ethanol production systems, and as much as 155.0-283.9 GJ·ha⁻¹ of NEG and an EROEI of 14.7-22.4 for maize biogas production systems. This is substantially higher than for industrialized agriculture, with a NEG of 2.8-52.5 GJ·ha⁻¹ and an EROEI of 1.2-1.7 for maize ethanol production systems, as well as a NEG of 59.3-188.7 GJ·ha⁻¹ and an EROEI of 2.2-10.2 for maize biogas production systems. Bioenergy produced in low-intensity eco-agricultural systems could therefore be an important source of energy with immense net benefits for local and regional end-users, provided a more efficient use of the co-products is ensured.
Arodudu, O, Helming, K, Wiggering, H & Voinov, A 2017, 'Towards a more holistic sustainability assessment framework for agro-bioenergy systems — A review', Environmental Impact Assessment Review, vol. 62, pp. 61-75.
Arodudu, OT, Helming, K, Voinov, A & Wiggering, H 2017, 'Integrating agronomic factors into energy efficiency assessment of agro-bioenergy production – A case study of ethanol and biogas production from maize feedstock', Applied Energy, vol. 198, pp. 426-439.
© 2017 Elsevier Ltd. Previous life cycle assessments for agro-bioenergy production rarely considered some agronomic factors with local and regional impacts. While many studies have found the environmental and socio-economic impacts of producing bioenergy on arable land not good enough to be considered sustainable, others consider it still one of the most effective direct emission reduction and fossil fuel replacement measures. This study improved LCA methods in order to examine the individual and combined effects of often overlooked agronomic factors (e.g. alternative farm power, seed sowing, fertilizer, tillage and irrigation options) on life-cycle energy indicators (net energy gain, NEG; energy return on energy invested, EROEI), across the three major agro-climatic zones, namely tropical, sub-tropical and temperate landscapes. From this study, we found that the individual as well as combined effects of agronomic factors may improve the energy productivity of arable bioenergy sources considerably in terms of the NEG (from between 6.8 and 32.9 GJ/ha to between 99.5 and 246.7 GJ/ha for maize ethanol; from between 39.0 and 118.4 GJ/ha to between 127.9 and 257.9 GJ/ha for maize biogas) and the EROEI (from between 1.2 and 1.8 to between 2.1 and 3.0 for maize ethanol; from between 4.3 and 12.1 to between 15.0 and 33.9 for maize biogas). The agronomic factors considered by this study accounted for an extra 7.5-14.6 times more NEG from maize ethanol, an extra 2.2-3.3 times more NEG from maize biogas, an extra 1.7-1.8 times more EROEI from maize ethanol, and an extra 2.8-3.5 times more EROEI from maize biogas, respectively. This underscores the need to factor local and regional agronomic factors into energy efficiency and sustainability assessments, as well as decision-making processes regarding the application of energy from products of agro-bioenergy production.
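The two life-cycle indicators that recur in these bioenergy abstracts have simple definitions: NEG is energy output minus energy invested, and EROEI is their ratio. A minimal sketch, where the input/output figures are made-up placeholders rather than values from either study:

```python
def neg(output_gj_per_ha: float, input_gj_per_ha: float) -> float:
    """Net energy gain: energy output minus energy invested (GJ/ha)."""
    return output_gj_per_ha - input_gj_per_ha

def eroei(output_gj_per_ha: float, input_gj_per_ha: float) -> float:
    """Energy return on energy invested: output divided by input."""
    return output_gj_per_ha / input_gj_per_ha

# Placeholder figures for illustration only (not from the studies).
out_e, in_e = 300.0, 100.0
print(neg(out_e, in_e), eroei(out_e, in_e))  # 200.0 3.0
```

The same output can thus yield a high NEG but a modest EROEI (or vice versa), which is why the papers report both indicators side by side.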
Ashamalla, A, Beydoun, G & Low, G 2017, 'Model driven approach for real-time requirement analysis of multi-agent systems.', Comput. Lang. Syst. Struct., vol. 50, pp. 127-139.
Software systems can fail when requirement constraints are overlooked or violated. With the increased complexity of software systems, software development has become more reliant on model driven development. The paper advocates a model driven approach to ensure real-time requirement constraints are taken into account prior to the design of a multi-agent system (MAS). The paper presents the synthesis of a real-time metamodel to support requirements analysis of a MAS. The metamodel describes a collection of modelling units and constraints that can be used to identify the real-time requirements of a multi-agent system during the analysis phase. The paper takes the view that the earlier real-time requirements are modelled in the software development life cycle, the more reliable and robust the resultant system will be, and the more likely it is that an appropriate balance between competing time requirements will be achieved. The paper also presents a validation of the metamodel in a Call Management MAS application. This provides preliminary evidence of the coverage and validity of the metamodel presented.
Avilés-Ochoa, E, Perez-Arellano, LA, León-Castro, E & Merigó, JM 2017, 'PRIORITIZED INDUCED PROBABILISTIC DISTANCES IN TRANSPARENCY AND ACCESS TO INFORMATION LAWS', FUZZY ECONOMIC REVIEW, vol. 22, no. 01, pp. 45-55.
© 2016 Int. Association for Fuzzy-Set Management and Economy. All rights reserved. In this paper, a new extension of the ordered weighted average (OWA) operator is developed using four different methods: prioritized operators, induced operators, probabilistic operators and distance techniques. This new operator is called the prioritized induced probabilistic ordered weighted average distance (PIPOWAD) operator. The primary advantage is that we include in one formulation different characteristics and information provided by a group of decision makers to compare actual and ideal situations. Finally, an example of transparency and access to information law in Mexico is presented to forecast the score based on the expectations of decision makers.
Azadeh, A, Foroozan, H, Ashjari, B, Motevali Haghighi, S, Yazdanparast, R, Saberi, M & Torki Nejad, M 2017, 'Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach', Enterprise Information Systems, vol. 11, no. 9, pp. 1-15.
Azadeh, A, Jebreili, S, Chang, E, Saberi, M & Hussain, OK 2017, 'An integrated fuzzy algorithm approach to factory floor design incorporating environmental quality and health impact', International Journal of System Assurance Engineering and Management, vol. 8, no. S4, pp. 2071-2082.
This paper presents an integrated algorithm based on fuzzy simulation, fuzzy linear programming (FLP), and fuzzy data envelopment analysis (FDEA) to cope with a special case of workshop facility layout design problem with ambiguous environmental and health indicators. First a software package is used for generating feasible layout alternatives and then quantitative performance indicators are calculated. Weights are estimated by LP for pairwise comparisons (by linguistic terms) in evaluating certain qualitative performance indicators. Fuzzy simulation is then employed for modeling different layout alternatives with uncertain parameters. Next, the impacts of environment and health indicators are retrieved from a standard questionnaire. Finally, FDEA is used for ranking the alternatives and consequently finding the optimal layout design alternatives. A possibilistic programming approach is used to modify the fuzzy DEA model to an equivalent crisp one. Moreover, fuzzy principal component analysis method is used to validate the results of FDEA model at various α-cut levels by Spearman correlation experiment. This is the first study that presents an integrated algorithm for optimization of facility layout with environmental and health indicators.
Azadeh, A, Sadri, S, Saberi, M, Yoon, JH, Chang, E, Khadeer Hussain, O & Pourmohammad Zia, N 2017, 'An Integrated Fuzzy Trust Prediction Approach in Product Design and Engineering', International Journal of Fuzzy Systems, vol. 19, no. 4, pp. 1190-1199.
Bano, M, Zowghi, D & Rimini, FD 2017, 'User satisfaction and system success: an empirical exploration of user involvement in software development.', Empir. Softw. Eng., vol. 22, no. 5, pp. 2339-2372.
© 2016, Springer Science+Business Media New York. For over four decades user involvement has been considered intuitively to lead to user satisfaction, which plays a pivotal role in the successful outcome of a software project. The objective of this paper is to explore the notion of user satisfaction within the context of the user involvement and system success relationship. We have conducted a longitudinal case study of a software development project and collected qualitative data by means of interviews, observations and document analysis over a period of 3 years. The analysis of our case study data revealed that user satisfaction significantly contributes to system success even when schedule and budget goals are not met. The case study data analysis also identified additional factors that contribute to the evolution of user satisfaction throughout the project. Users' satisfaction with their involvement and the resulting system are mutually constituted, while the level of user satisfaction evolves throughout the stages of the software development process. Effective management strategies and user representation are essential elements of maintaining an acceptable level of user satisfaction throughout the software development process.
Belete, GF, Voinov, A & Laniak, GF 2017, 'An overview of the model integration process: From pre-integration assessment to testing', Environmental Modelling & Software, vol. 87, pp. 49-63.
© 2016 Elsevier Ltd. Integration of models requires linking models which can be developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process and of presenting better strategies for building integrated modeling systems. We identified five phases that characterize the integration process: pre-integration assessment, preparation of models for integration, orchestration of models during simulation, data interoperability, and testing. Commonly, there is little reuse of existing frameworks beyond the development teams and not much sharing of science components across frameworks. We believe this must change to enable researchers and assessors to form complex workflows that leverage the current environmental science available. In this paper, we characterize the model integration process and compare the integration practices of different groups. We highlight key strategies, features, standards, and practices that can be employed by developers to increase reuse and interoperability of science software components and systems.
Belete, GF, Voinov, A & Morales, J 2017, 'Designing the Distributed Model Integration Framework – DMIF', Environmental Modelling & Software, vol. 94, pp. 112-126.
© 2017 Elsevier Ltd. We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF), which links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of interoperability. Reusable web service wrappers were developed for technical interoperability of models created in the NetLogo and GAMS modeling languages. We investigated automated semantic mapping of text-based input-output data and attribute names of components using word-overlap semantic matching algorithms and an openly available lexical database. We also incorporated automated unit conversion in semantic mediation by using openly available ontologies. DMIF helps to avoid a significant amount of reinvention by framework developers, and opens up the modeling process for many stakeholders who are not prepared to deal with the technical difficulties associated with installing, configuring, and running various models. As a proof of concept, we implemented our design to integrate several climate-energy-economy models.
Beydoun, G, Hoffmann, A & Gill, A 2017, 'Constructing enhanced default theories incrementally', Complex & Intelligent Systems, vol. 3, no. 2, pp. 83-92.
The main difference between various formalisms of non-monotonic reasoning is the representation of non-monotonic rules. In default logic, they are represented by special expressions called defaults. In default logic, commonsense knowledge about the world is represented as a set of named defaults. The use of defaults is popular because they reduce the complexity of the representation, and they are sufficient for knowledge representation in many naturally occurring contexts. This paper offers an incremental process to acquire defaults from human experts directly and at the same time it provides added semantics to defaults by adding priorities to defaults and creating additional relations between them. The paper uses an existing incremental framework, NRDR, to generate these defaults. This framework is chosen as it not only enables incremental context driven formulation of defaults, but also allows experts to introduce their own domain terms. In choosing this framework, the paper broadens its utility.
Blanco-Mesa, F & Merigó, JM 2017, 'BONFERRONI DISTANCES WITH HYBRID WEIGHTED DISTANCE AND IMMEDIATE WEIGHTED DISTANCE', FUZZY ECONOMIC REVIEW, vol. 22, no. 02, pp. 2274-2274.
© 2017 Int. Association for Fuzzy-Set Management and Economy. All rights reserved. The aim of the paper is to develop new aggregation operators using Bonferroni means, ordered weighted averaging (OWA) operators and some measures of distance. We introduce the Bonferroni hybrid weighted distance (BON-HWD) and Bonferroni distances with OWA operators and weighted averages (BON-IWOWAD). The main advantage of these operators is that they allow different aggregation contexts, and multiple comparisons between each argument and the distance measures, to be considered in the same formulation. We develop a mathematical application to show the versatility of the new models. Finally, this new family of distances can be used in a wide range of management and economic fields.
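The building block behind these extensions is the standard OWA operator, which applies its weights to the arguments sorted in descending order rather than to fixed positions. A minimal sketch of plain OWA only (the weights are arbitrary examples; this is not the BON-HWD or BON-IWOWAD operators themselves):

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the values
    sorted in descending order, not to fixed argument positions."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

# Max-leaning weights emphasise the larger arguments:
# 0.5*9 + 0.3*6 + 0.2*3 = 6.9, regardless of input order.
print(owa([3.0, 9.0, 6.0], [0.5, 0.3, 0.2]))
```

Because the reordering step decouples weights from argument positions, OWA interpolates between the minimum and maximum operators depending on how the weight vector is chosen.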
Blanco-Mesa, F, Merigó, JM & Gil-Lafuente, AM 2017, 'Fuzzy decision making: A bibliometric-based review', Journal of Intelligent & Fuzzy Systems, vol. 32, no. 3, pp. 2033-2050.
© 2017 IOS Press and the authors. All rights reserved. Fuzzy decision-making consists in making decisions under complex and uncertain environments where the information can be assessed with fuzzy sets and systems. The aim of this study is to review the main contributions in this field by using a bibliometric approach. For doing so, the article uses a wide range of bibliometric indicators, including citations and the h-index. Moreover, it also uses the VOSviewer software in order to map the main trends in this area. The work considers the leading journals, articles, authors and institutions. The results indicate that the USA was the traditional leader in this field with the most significant researchers. However, during the last years, this field has been receiving more attention from Asian authors, who are starting to lead the field. This discipline has strong potential, and the expectation for the future is that it will continue to grow.
Bluff, A & Johnston, A 2017, 'Creature:Interactions: A Social Mixed-Reality Playspace', Leonardo, vol. 50, no. 4, pp. 360-367.
This paper discusses Creature:Interactions (2015), a large-scale mixed-reality artwork created by the authors that incorporates immersive 360° stereoscopic visuals, interactive technology, and live actor facilitation. The work uses physical simulations to promote an expressive full-bodied interaction as children explore the landscapes and creatures of Ethel C. Pedley’s ecologically focused children’s novel, Dot and the Kangaroo. The immersive visuals provide a social playspace for up to 90 people and have produced “phantom” sensations of temperature and touch in certain participants.
Bower, M, Wood, L, Lai, J, Howe, C, Lister, R, Mason, R, Highfield, K & Veal, J 2017, 'Improving the Computational Thinking Pedagogical Capabilities of School Teachers', Australian Journal of Teacher Education, vol. 42, no. 3, pp. 53-72.
The idea of computational thinking as a set of skills and a universal competence that every child should possess emerged in the last decade and has been gaining traction ever since. This raises a number of questions, including how to integrate computational thinking into the curriculum, whether teachers have the computational thinking pedagogical capabilities to teach children, and what the important professional development and training areas for teachers are. The aim of this paper is to address these strategic issues by illustrating a series of computational thinking workshops for Foundation to Year 8 teachers held at an Australian university. Data indicated that teachers' computational thinking understanding, pedagogical capabilities, technological know-how and confidence can be improved in a relatively short period of time through targeted professional learning.
Broekhuijsen, M, van den Hoven, E & Markopoulos, P 2017, 'From PhotoWork to PhotoUse: exploring personal digital photo activities', Behaviour & Information Technology, vol. 36, no. 7, pp. 754-767.
CALVO, RA, MILNE, DN, HUSSAIN, MS & CHRISTENSEN, H 2017, 'Natural language processing in mental health applications using non-clinical texts', Natural Language Engineering, vol. 23, no. 5, pp. 649-685.
Natural language processing (NLP) techniques can be used to make inferences about peoples’ mental states from what they write on Facebook, Twitter and other social media. These inferences can then be used to create online pathways to direct people to health information and assistance and also to generate personalized interventions. Regrettably, the computational methods used to collect, process and utilize online writing data, as well as the evaluations of these techniques, are still dispersed in the literature. This paper provides a taxonomy of data sources and techniques that have been used for mental health support and intervention. Specifically, we review how social media and other data sources have been used to detect emotions and identify people who may be in need of psychological assistance; the computational techniques used in labeling and diagnosis; and finally, we discuss ways to generate and personalize mental health interventions. The overarching aim of this scoping review is to highlight areas of research where NLP has been applied in the mental health literature and to help develop a common language that draws together the fields of mental health, human-computer interaction and NLP.
Cancino, C, Merigó, JM, Coronado, F, Dessouky, Y & Dessouky, M 2017, 'Forty years of Computers & Industrial Engineering: A bibliometric analysis', Computers & Industrial Engineering, vol. 113, pp. 614-629.
Computers & Industrial Engineering is a leading international journal in the field of industrial engineering that published its first issue in 1976. In 2016, the journal celebrated its 40th anniversary. Motivated by this event, the aim of this study is to develop a bibliometric overview of the publications of the journal between 1976 and 2015. The objective is to identify the leading trends that are occurring in the journal in terms of productivity and influence of topics, authors, universities and countries. For doing so, the work uses the Web of Science Core Collection database to analyse the bibliometric data. The results show the strong position of the USA in the journal, although China and other Asian countries are becoming very significant.
Cancino, CA, Merigo, JM & Coronado, FC 2017, 'Big Names in Innovation Research: A Bibliometric Overview', Current Science, vol. 113, no. 08, pp. 1507-1507.
Over the last few years an increasing number of scientific studies related to innovation research have been carried out. The present study analyses innovation research developed between 1989 and 2013. It uses the Web of Science database and provides several author-level bibliometric indicators, including the total number of publications and citations, and the h-index. The results indicate that the most influential professors over the last 25 years, according to their highest h-index, are David Audretsch, Michael Hitt, Shaker Zahra, Rajshree Agarwal, Eric Von Hippel, David Teece, Will Mitchell and Robert Cooper. These authors are not necessarily the most productive, with the highest number of publications; however, they are the most influential, with the highest number of citations. The incorporation of a larger number of journals into the Web of Science has granted different authors access to publish their work on innovation research.
Cancino, CA, Merigó, JM & Coronado, FC 2017, 'A bibliometric analysis of leading universities in innovation research', Journal of Innovation & Knowledge, vol. 2, no. 3, pp. 106-124.
© 2017 Journal of Innovation & Knowledge. The number of innovation studies with a management perspective has grown considerably over the last 25 years. This study identified the universities that are most productive and influential in innovation research. The leading innovation research journals were also studied individually to identify the most productive universities for each journal. Data from the Web of Science were analyzed. Studies that were published between 1989 and 2013 were filtered first by the keyword “innovation” and second by 18 management-related research areas. The results indicate that US universities are the most productive and influential because they account for the most publications with a high number of citations and high h-index. Following advances in the productivity of numerous European journals, however, universities from the UK and the Netherlands are the most involved in publishing in journals that specialize in innovation research.
Cetindamar, D & Ozkazanc‐Pan, B 2017, 'Assessing mission drift at venture capital impact investors', Business Ethics: A European Review, vol. 26, no. 3, pp. 257-270.
In this article, we consider a recent trend whereby private equity available from venture capital (VC) firms is being deployed toward mission‐driven initiatives in the form of impact investing. Acting as hybrid organizations, these impact investors aim to achieve financial results while also targeting companies and funds to achieve social impact. However, potential mission drift in these VCs, which we define as a decoupling between the investments made (means) and intended aims (ends), might become detrimental to the simultaneous financial and social goals of such firms. Based on a content analysis of mission statements, we assess mission drift and the hybridization level of VC impact investors by examining their missions (ends/goals) and their investment practices (means) through the criteria of social and financial logic. After examining eight impact‐oriented VC investors and their investments in 164 companies, we find mission drift manifest as a disparity between the means and ends in half of the VC impact investors in our sample. We discuss these findings and make suggestions for further studies.
Cetindamar, D & Rickne, A 2017, 'Using the functional analysis to understand the emergence of biomaterials within an existing biotechnology system: observations from a case study in Turkey.', Technol. Anal. Strateg. Manag., vol. 29, no. 3, pp. 313-324.
The paper applies a functional approach to the analysis of an emerging technology within an innovation system (IS) in a developing country. By doing so, the paper identifies the advantages and drawbacks of the approach through a dynamic analysis and highlights the life cycle of an IS within which a new technology is emerging. This is done empirically by analysing the emergence of biosimilars within the infant Turkish biotechnology system, mainly from the perspective of firms. Our analysis of the Turkish case illustrates how the functional approach could be valuable in understanding the dynamics of a technology in a developing-country context. Policy suggestions and implications of the study are presented as concluding remarks.
Chaudhuri, BB & Adak, C 2017, 'An approach for detecting and cleaning of struck-out handwritten text', Pattern Recognition, vol. 61, pp. 282-294.
This paper deals with the identification and processing of struck-out texts in unconstrained offline handwritten document images. If run on the OCR engine, such texts will produce nonsense character-string outputs. Here we present a combined (a) pattern classification and (b) graph-based method for identifying such texts. In case of (a), a feature-based two-class (normal vs. struck-out text) SVM classifier is used to detect moderate-sized struck-out components. In case of (b), skeleton of the text component is considered as a graph and the strike-out stroke is identified using a constrained shortest path algorithm. To identify zigzag or wavy struck-outs, all paths are found and some properties of zigzag and wavy line are utilized. Some other types of strike-out stroke are also detected by modifying the above method. The large sized multi-word and multi-line struck-outs are segmented into smaller components and treated as above. The detected struck-out texts can then be blocked from entering the OCR engine. In another kind of application involving historical documents, page images along with their annotated ground-truth are to be generated. In this case the strike-out strokes can be deleted from the words and then fed to the OCR engine. For this purpose an inpainting-based cleaning approach is employed. We worked on 500 pages of documents and obtained an overall F-Measure of 91.56% (91.06%) in English (Bengali) script for struck-out text detection. Also, for strike-out stroke identification and deletion, the F-Measures obtained were 89.65% (89.31%) and 91.16% (89.29%), respectively.
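The graph-based step above treats the text skeleton as a graph and looks for a strike-out stroke as a path across it. As a loose illustration of that idea only, the sketch below runs a plain breadth-first shortest path between the left-most and right-most skeleton pixels of a toy component; the grid and the unconstrained BFS are assumptions for illustration, whereas the paper uses a constrained shortest-path formulation.

```python
from collections import deque

# Toy skeleton: 1 = skeleton pixel. A strike-out stroke shows up as a
# short left-to-right path across the component. Illustrative data only.
grid = [
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],   # candidate strike-out stroke
    [0, 1, 0, 1, 0],
]

def shortest_path(grid, start, goal):
    """BFS over 4-connected skeleton pixels; returns the pixel path."""
    q = deque([(start, [start])])
    seen = {start}
    while q:
        (r, c), path = q.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), path + [(nr, nc)]))
    return None

path = shortest_path(grid, (1, 0), (1, 4))
print(len(path))  # 5 pixels: straight across the middle row
```

A nearly straight, component-spanning path like this is the kind of evidence the classifier-plus-graph pipeline uses to flag a component as struck out.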
Chen, H, Zhang, G, Zhu, D & Lu, J 2017, 'Topic-based technological forecasting based on patent data: A case study of Australian patents from 2000 to 2014', Technological Forecasting and Social Change, vol. 119, no. June 2017, pp. 39-52.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. The study of technological forecasting is an important part of patent analysis. Although fitting models can provide a rough tendency for a technical area, the trend of the detailed content within the area remains hidden. It is also difficult to reveal the trend of specific topics using keyword-based text mining techniques, since it is very hard to track the temporal patterns of a single keyword that generally represents a technological concept. To overcome these limitations, this research proposes a topic-based technological forecasting approach to uncover the trends of specific topics underlying massive patent claims using topic modelling. A topic annual weight matrix and a sequence of topic-based trend coefficients are generated to quantitatively estimate the developing trends of the discovered topics and to evaluate to what degree various topics have contributed to the patenting activities of the whole area. To demonstrate the effectiveness of the approach, we present a case study using 13,910 utility patents published during the years 2000 to 2014, owned by Australian assignees, in the United States Patent and Trademark Office (USPTO). The results indicate that the proposed approach is effective for estimating the temporal patterns and forecasting the future trends of the latent topics underlying massive claims. The topic-based knowledge and the corresponding trend analysis provided by the approach can be used to facilitate further technological decisions or opportunity discovery.
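The trend-coefficient idea in this abstract can be illustrated with a toy sketch: given each topic's annual weight (its share of patenting activity per year), a least-squares slope over the years acts as a simple rising/fading indicator. This is an illustration only, not the paper's actual model; the function name `trend_coefficient` and the sample topics are invented for the example.

```python
def trend_coefficient(weights):
    """Least-squares slope of a topic's annual weight series.

    A positive slope marks a topic whose share of patenting
    activity grows over the years; a negative slope marks a
    fading topic. Illustrative proxy only, not the paper's model.
    """
    n = len(weights)
    xs = range(n)
    mean_x = (n - 1) / 2.0
    mean_y = sum(weights) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weights))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical topic annual weight matrix: each row is a topic's
# yearly share of the patent claims in the whole area.
topic_year = {
    "topic A (rising)": [0.02, 0.04, 0.07, 0.11, 0.15],
    "topic B (fading)": [0.12, 0.09, 0.06, 0.04, 0.02],
}
trends = {t: trend_coefficient(w) for t, w in topic_year.items()}
```

Here the rising topic gets a positive coefficient and the fading one a negative coefficient; in the paper, topics and their annual weights are derived by topic modelling over patent claims rather than written by hand.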
Chen, J, Li, K, Tang, Z, Bilal, K, Yu, S, Weng, C & Li, K 2017, 'A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment', IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 4, pp. 919-933.
View/Download from: Publisher's site
View description>>
With the emergence of the big data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasing attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized based on a hybrid approach combining data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed to allow the training dataset to be reused and diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependence of the Resilient Distributed Datasets (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy for large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and other studies in terms of classification accuracy, performance, and scalability. With the expansion of the scale of the random forest model and the Spark cluster, the advantage of the PRF algorithm becomes more obvious.
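Why vertical partitioning helps here: in decision-tree training, the quality of a candidate split depends only on one feature column plus the labels, so each column can be scored independently on a separate worker with no cross-column communication. The single-machine sketch below illustrates that property with a Gini-gain scorer; it is not the paper's Spark implementation, and `best_split_gain` and the toy dataset are invented for the example.

```python
def gini(labels):
    """Gini impurity of a label multiset (0.0 for an empty or pure set)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split_gain(feature_column, labels):
    """Best Gini gain over thresholds of a single feature.

    Each (feature column, labels) vertical partition can be scored
    on a separate worker: no other columns are needed.
    """
    base = gini(labels)
    n = len(labels)
    best = 0.0
    for t in sorted(set(feature_column)):
        left = [l for v, l in zip(feature_column, labels) if v <= t]
        right = [l for v, l in zip(feature_column, labels) if v > t]
        gain = base - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        best = max(best, gain)
    return best

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[1.0, 5.0], [2.0, 1.0], [8.0, 5.0], [9.0, 1.0]]
y = [0, 0, 1, 1]
columns = list(zip(*X))  # vertical partitions: one tuple per feature
gains = [best_split_gain(col, y) for col in columns]
```

The informative feature scores the maximum possible gain (0.5 against a 0.5 base impurity) while the noise feature scores 0, and each score was computed from its column alone, which is the communication-saving property the vertical data-partitioning method exploits.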
Chen, J, Liu, B, Zhou, H, Yu, Q, Gui, L & Shen, X 2017, 'QoS-Driven Efficient Client Association in High-Density Software-Defined WLAN', IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7372-7383.
View/Download from: Publisher's site
Chen, Z, You, X, Zhong, B, Li, J & Tao, D 2017, 'Dynamically Modulated Mask Sparse Tracking', IEEE Transactions on Cybernetics, vol. 47, no. 11, pp. 3706-3718.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although robustness to local corruptions has been improved, prevailing trackers are still sensitive to large-scale corruptions, such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique that depends on a subspace learning-based appearance model. Our contributions are twofold. First, mask templates produced by frame difference are introduced into our template dictionary. Since the mask templates contain abundant structure information about corruptions, the model can encode information about the corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by adopting system dynamics, which consider the moving tendency of the object. Second, we provide the theoretical guarantee that, by adapting the modulated template dictionary system, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficiently as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially when there are challenges such as pose variation, occlusion, and illumination changes.
Choi, I, Milne, DN, Glozier, N, Peters, D, Harvey, SB & Calvo, RA 2017, 'Using different Facebook advertisements to recruit men for an online mental health study: Engagement and selection bias', Internet Interventions, vol. 8, pp. 27-34.
View/Download from: Publisher's site
View description>>
© 2017 A growing number of researchers are using Facebook to recruit for a range of online health, medical, and psychosocial studies. There is limited research on the representativeness of participants recruited from Facebook, and advertisement content is rarely mentioned in the methods, despite some suggestion that it affects recruitment success. This study explores the impact of different Facebook advertisement content for the same study on recruitment rate, engagement, and participant characteristics. Five Facebook advertisement sets (“resilience”, “happiness”, “strength”, “mental fitness”, and “mental health”) were used to recruit male participants to an online mental health study which allowed them to find out about their mental health and wellbeing through completing six measures. The Facebook advertisements recruited 372 men to the study over a one-month period. The cost per participant from the advertisement sets ranged from $0.55 to $3.85 Australian dollars. The “strength” advertisements resulted in the highest recruitment rate, but participants from this group were least engaged in the study website. The “strength” and “happiness” advertisements recruited more younger men. Participants recruited from the “mental health” advertisements had worse outcomes on the clinical measures of distress, wellbeing, strength, and stress. This study confirmed that different Facebook advertisement content leads to different recruitment rates and engagement with a study. Different advertisement content also leads to selection bias in terms of demographic and mental health characteristics. Researchers should carefully consider the content of social media advertisements to be in accordance with their target population, and consider reporting this content to enable better assessment of generalisability.
Davis, JJJ, Lin, C-T, Gillett, G & Kozma, R 2017, 'An Integrative Approach to Analyze Eeg Signals and Human Brain Dynamics in Different Cognitive States', Journal of Artificial Intelligence and Soft Computing Research, vol. 7, no. 4, pp. 287-299.
View/Download from: Publisher's site
View description>>
Electroencephalograph (EEG) data provide insight into the interconnections and relationships between various cognitive states and their corresponding brain dynamics, by demonstrating dynamic connections between brain regions at different frequency bands. While sensory input tends to stimulate neural activity in different frequency bands, peaceful states of being and self-induced meditation tend to produce activity in the mid-range (Alpha). These studies were conducted with the aim of: (a) testing different equipment in order to assess two (2) different EEG technologies together with their benefits and limitations and (b) having an initial impression of different brain states associated with different experimental modalities and tasks, by analyzing the spatial and temporal power spectrum and applying our movie making methodology to engage in qualitative exploration via the art of encephalography. This study complements our previous study of measuring multichannel EEG brain dynamics using MINDO48 equipment associated with three experimental modalities measured both in the laboratory and the natural environment. Together with Hilbert analysis, we conjecture, the results will provide us with the tools to engage in more complex brain dynamics and mental states, such as Meditation, Mathematical Audio Lectures, Music Induced Meditation, and Mental Arithmetic Exercises. This paper focuses on open eye and closed eye conditions, as well as meditation states in laboratory conditions. We assess similarities and differences between experimental modalities and their associated brain states as well as differences between the different tools for analysis and equipment.
Deng, S, Huang, L, Xu, G, Wu, X & Wu, Z 2017, 'On Deep Learning for Trust-Aware Recommendations in Social Networks', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1164-1177.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. With the emergence of online social networks, the social network-based recommendation approach is widely used. The major benefit of this approach is the ability to deal with the problems of cold-start users. In addition to social networks, user trust information also plays an important role in obtaining reliable recommendations. Although matrix factorization (MF) has become dominant in recommender systems, the recommendation largely relies on the initialization of the user and item latent feature vectors. Aiming to address these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we attempt to leverage deep learning to determine the initialization in MF for trust-aware social recommendations and to differentiate the community effect in users' trusted friendships. A two-phase recommendation process is proposed to utilize deep learning in initialization and to synthesize the users' interests and their trusted friends' interests together with the impact of community effect for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.
Dovey, K, Burdon, S & Simpson, R 2017, 'Creative leadership as a collective achievement: An Australian case', Management Learning, vol. 48, no. 1, pp. 23-38.
View/Download from: Publisher's site
View description>>
In this article, we examine the construct of ‘leadership’ through an analysis of the social practices that underpinned the Australian Broadcasting Corporation television production entitled The Code. Positioning the production within the neo-bureaucratic organisational form currently adopted by the global television industry, we explore new conceptualisations of the leadership phenomenon emerging within this industry in response to the increasingly complex, uncertain and interdependent nature of creative work within it. We show how the polyarchic governance regime characteristic of the neo-bureaucratic organisational form ensures broadcaster control and coordination through ‘hard power’ mechanisms embedded in the commissioning process and through ‘soft power’ relational practices that allow creative licence to those employed in the production. Furthermore, we show how both sets of practices (commissioning and creative practices) leverage and regenerate the relational resources – such as trust, commitment and resilience – gained from rich stakeholder experience of working together in the creative industries over a significant period of time. Referencing the leadership-as-practice perspective, we highlight the contingent and improvisational nature of these practices and metaphorically describe the leadership manifesting in this production as a form of ‘interstitial glue’ that binds and shapes stakeholder interests and collective agency.
Erfani, SS, Abedin, B & Blount, Y 2017, 'The effect of social network site use on the psychological well‐being of cancer patients', Journal of the Association for Information Science and Technology, vol. 68, no. 5, pp. 1308-1322.
View/Download from: Publisher's site
View description>>
Social network sites (SNSs) are growing in popularity and social significance. Although researchers have attempted to explain the effect of SNS use on users' psychological well‐being, previous studies have produced inconsistent results. In addition, most previous studies relied on healthy students as participants; other cohorts of SNSs users, in particular people living with serious health conditions, have been neglected. In this study, we carried out semistructured interviews with users of the Ovarian Cancer Australia (OCA) Facebook to assess how and in what ways SNS use impacts their psychological well‐being. A theoretical model was proposed to develop a better understanding of the relationships between SNS use and the psychological well‐being of cancer patients. Analysis of data collected through a subsequent quantitative survey confirmed the theoretical model and empirically revealed the extent to which SNS use impacts the psychological well‐being of cancer patients. Findings showed the use of OCA Facebook enhances social support, enriches the experience of social connectedness, develops social presence and learning and ultimately improves the psychological well‐being of cancer patients.
Fang, XS, Sheng, QZ, Wang, X, Ngu, AHH & Zhang, Y 2017, 'GrandBase: generating actionable knowledge from Big Data', PSU Research Review, vol. 1, no. 2, pp. 105-126.
View/Download from: Publisher's site
View description>>
Purpose: This paper aims to propose a system for generating actionable knowledge from Big Data and to use this system to construct a comprehensive knowledge base (KB), called GrandBase. Design/methodology/approach: In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query streams, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed. Findings: Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed. Originality/value: To revolutionize our modern society by using the wisdom of Big Data, considerable KBs have been constructed to feed the massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been contributed on both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has ...
Feng, B, Zhang, H, Zhou, H & Yu, S 2017, 'Locator/Identifier Split Networking: A Promising Future Internet Architecture', IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2927-2948.
View/Download from: Publisher's site
View description>>
The Internet has achieved unprecedented success in human history. However, its original design has encountered many challenges in the past decades due to significant changes in context and requirements. As a result, the design of future networks has received great attention from both academia and industry, and numerous novel architectures have sprung up in recent years. Among them, the locator/identifier (Loc/ID) split networking is widely discussed for its decoupling of the overloaded IP address semantics, which satisfies several urgent needs of the current Internet such as mobility, multi-homing, routing scalability, security, and heterogeneous network convergence. Hence, in this paper, we focus on Loc/ID split network architectures, and provide a comprehensive survey of their principles, mechanisms, and characteristics. First, we illustrate the major problems of the Internet caused by the overloading of IP address semantics. Second, we classify the existing Loc/ID split network architectures based on their properties, abstract the general principle and framework for each classification, and demonstrate related representative architectures in detail. Finally, we summarize the fundamental features of the Loc/ID split networking, compare corresponding investigated architectures, and discuss several open issues and opportunities.
Feng, B, Zhou, H, Zhang, H, Li, G, Li, H, Yu, S & Chao, H-C 2017, 'HetNet: A Flexible Architecture for Heterogeneous Satellite-Terrestrial Networks', IEEE Network, vol. 31, no. 6, pp. 86-92.
View/Download from: Publisher's site
View description>>
As satellite networks have played an indispensable role in many fields, how to integrate them with terrestrial networks (e.g., the Internet) has attracted significant attention in academia. However, it is challenging to efficiently build such an integrated network, since terrestrial networks are facing a number of serious problems, and since they do not provide good support for heterogeneous network convergence. In this article, we propose a flexible network architecture, HetNet, for efficient integration of heterogeneous satellite-terrestrial networks. Specifically, the HetNet synthesizes Locator/ID split and Information-Centric Networking to establish a general network architecture. In this way, it is able to achieve heterogeneous network convergence, routing scalability alleviation, mobility support, traffic engineering, and efficient content delivery. Moreover, the HetNet can further improve its network elasticity by using the techniques of Software-Defined Networking and Network Functions Virtualization. In addition, to evaluate the HetNet performance, we build a proof-of-concept prototype system and conduct extensive experiments. The results confirm the feasibility of the HetNet and its advantages.
Gao, L, Luan, TH, Yu, S, Zhou, W & Liu, B 2017, 'FogRoute: DTN-based Data Dissemination Model in Fog Computing', IEEE Internet of Things Journal, vol. 4, no. 1, pp. 1-1.
View/Download from: Publisher's site
View description>>
Fog computing, known as 'cloud close to the ground', deploys lightweight compute facilities, called Fog servers, in the proximity of mobile users. By precaching contents in the Fog servers, an important application of Fog computing is to provide high-quality, low-cost data distribution to nearby mobile users, e.g., video/live streaming and ads dissemination, using single-hop low-latency wireless links. A Fog computing system has a three-tier Mobile-Fog-Cloud structure; mobile users get service from Fog servers over local wireless connections, and Fog servers update their contents from the Cloud over cellular or wired networks. This, however, may incur a high content update cost when the bandwidth between the Fog and Cloud servers is expensive, e.g., over the cellular network, and is therefore inefficient for non-urgent, high-volume contents. How to economically utilize the Fog-Cloud bandwidth with guaranteed download performance for users thus represents a fundamental issue in Fog computing. In this paper, we address the issue by proposing a hybrid data dissemination framework which applies software-defined network and delay-tolerant network (DTN) approaches in Fog computing. Specifically, we decompose the Fog computing network into two planes, where the cloud is a control plane that processes content update queries and organizes data flows, and the geographically distributed Fog servers form a data plane that disseminates data among Fog servers with a DTN technique. Using extensive simulations, we show that the proposed framework is efficient in terms of data-dissemination success ratio and content convergence time among Fog servers.
Gheisari, S, Charlton, A, Catchpoole, DR & Kennedy, PJ 2017, 'Computers can classify neuroblastic tumours from histopathological images using machine learning', Pathology, vol. 49, pp. S72-S73.
View/Download from: Publisher's site
Gholami, MF, Daneshgar, F, Beydoun, G & Rabhi, FA 2017, 'Challenges in migrating legacy software systems to the cloud - an empirical study', Information Systems, vol. 67, pp. 100-113.
View/Download from: Publisher's site
View description>>
© 2017 Moving existing legacy systems to cloud platforms is a difficult and high-cost process that may involve technical and non-technical resources and challenges. There is evidence that a lack of understanding and preparedness for cloud computing migration underpins many migration failures in achieving organisations' goals. The main goal of this article is to identify the most challenging activities for moving legacy systems to cloud platforms from the perspective of the reengineering process. Through a combination of a bottom-up and a top-down analysis, a set of common activities is derived from the extant cloud computing literature. These are expressed as a model and are validated using a population of 104 shortlisted and randomly selected domain experts from different industry sectors. We used a Web-based survey questionnaire to collect data and analysed them using a sample t-test in SPSS. The results of this study highlight the most important and critical challenges that should be addressed by various roles within a legacy-to-cloud migration endeavour. The study provides an overall understanding of this process, including commonly occurring activities, concerns and recommendations. In addition, the findings of this study constitute a practical guide for conducting this transition. This guide is platform agnostic and independent of any specific migration scenario, cloud platform, or application domain.
Gill, AQ, Braytee, A & Hussain, FK 2017, 'Adaptive service e-contract information management reference architecture', VINE Journal of Information and Knowledge Management Systems, vol. 47, no. 3, pp. 395-410.
View/Download from: Publisher's site
View description>>
Purpose: The aim of this paper is to report on the adaptive e-contract information management reference architecture using the systematic literature review (SLR) method. Enterprises need to effectively design and implement complex adaptive e-contract information management architecture to support dynamic service interactions or transactions. Design/methodology/approach: The SLR method is three-fold and was adopted as follows. First, a customized literature search with relevant selection criteria was developed, which was then applied to initially identify a set of 1,573 papers. Second, 55 of the 1,573 papers were selected for review based on an initial review of each identified paper's title and abstract. Finally, based on the second review, 24 papers relevant to this research were selected and reviewed in detail. Findings: This detailed review resulted in the adaptive e-contract information management reference architecture elements, including structure, life cycle and supporting technology. Research limitations/implications: The reference architecture elements could serve as a taxonomy for researchers and practitioners to develop context-specific service e-contract information management architecture to support dynamic service interactions for value co-creation. The results are limited to the number of selected databases and papers reviewed in this study. Originality/value: This paper offers a review of the body of knowledge and novel e-contract information management reference architecture, ...
Glynn, PD, Voinov, AA, Shapiro, CD & White, PA 2017, 'From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments', Earth's Future, vol. 5, no. 4, pp. 356-378.
View/Download from: Publisher's site
View description>>
Our different kinds of minds and types of thinking affect the ways we decide, take action, and cooperate (or not). Derived from these types of minds, innate biases, beliefs, heuristics, and values (BBHV) influence behaviors, often beneficially, when individuals or small groups face immediate, local, acute situations that they and their ancestors faced repeatedly in the past. BBHV, though, need to be recognized and possibly countered or used when facing new, complex issues or situations, especially if they need to be managed for the benefit of a wider community, for the longer term and the larger scale. Taking BBHV into account, we explain and provide a cyclic science-infused adaptive framework for (1) gaining knowledge of complex systems and (2) improving their management. We explore how this process and framework could improve the governance of science and policy for different types of systems and issues, providing examples in the area of natural resources, hazards, and the environment. Lastly, we suggest that an “Open Traceable Accountable Policy” initiative that followed our suggested adaptive framework could beneficially complement recent Open Data/Model science initiatives. Plain Language Summary: Our review paper suggests that society can improve the management of natural resources and environments by (1) recognizing the sources of human decisions and thinking and understanding their role in the scientific progression to knowledge; (2) considering innate human needs and biases, beliefs, heuristics, and values that may need to be countered or embraced; and (3) creating science and policy governance that is inclusive, integrated, considerate of diversity, explicit, and accounta...
Goodswen, SJ, Kennedy, PJ & Ellis, JT 2017, 'On the application of reverse vaccinology to parasitic diseases: a perspective on feature selection and ranking of vaccine candidates', International Journal for Parasitology, vol. 47, no. 12, pp. 779-790.
View/Download from: Publisher's site
View description>>
Reverse vaccinology has the potential to rapidly advance vaccine development against parasites, but it is unclear which features studied in silico will advance vaccine development. Here we consider Neospora caninum which is a globally distributed protozoan parasite causing significant economic and reproductive loss to cattle industries worldwide. The aim of this study was to use a reverse vaccinology approach to compile a worthy vaccine candidate list for N. caninum, including proteins containing pathogen-associated molecular patterns to act as vaccine carriers. The in silico approach essentially involved collecting a wide range of gene and protein features from public databases or computationally predicting those for every known Neospora protein. This data collection was then analysed using an automated high-throughput process to identify candidates. The final vaccine list compiled was judged to be the optimum within the constraints of available data, current knowledge, and existing bioinformatics programs. We consider and provide some suggestions and experience on how ranking of vaccine candidate lists can be performed. This study is therefore important in that it provides a valuable resource for establishing new directions in vaccine research against neosporosis and other parasitic diseases of economic and medical importance.
Grochow, JA & Qiao, Y 2017, 'Algorithms for Group Isomorphism via Group Extensions and Cohomology', SIAM Journal on Computing, vol. 46, no. 4, pp. 1153-1216.
View/Download from: Publisher's site
View description>>
© 2017 SIAM. The isomorphism problem for finite groups of order n (GpI) has long been known to be solvable in n^(log n+O(1)) time, but only recently were polynomial-time algorithms designed for several interesting group classes. Inspired by recent progress, we revisit the strategy for GpI via the extension theory of groups. The extension theory describes how a normal subgroup N is related to G/N via G, and this naturally leads to a divide-and-conquer strategy that 'splits' GpI into two subproblems: one regarding group actions on other groups, and one regarding group cohomology. When the normal subgroup N is abelian, this strategy is well known. Our first contribution is to extend this strategy to handle the case when N is not necessarily abelian. This allows us to provide a unified explanation of all recent polynomial-time algorithms for special group classes. Guided by this strategy, to make further progress on GpI, we consider central-radical groups, proposed in Babai et al. [Code equivalence and group isomorphism, in Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'11), SIAM, Philadelphia, 2011, ACM, New York, pp. 1395-1408]: the class of groups such that G modulo its center has no abelian normal subgroups. This class is a natural extension of the group class considered by Babai et al. [Polynomial-time isomorphism test for groups with no abelian normal subgroups (extended abstract), in International Colloquium on Automata, Languages, and Programming (ICALP), 2012, pp. 51-62], namely those groups with no abelian normal subgroups. Following the above strategy, we solve GpI in n^(O(log log n)) time for central-radical groups, and in polynomial time for several prominent subclasses of central-radical groups. We also solve GpI in n^(O(log log n)) time for groups whose solvable normal subgroups are elementary abelian but not necessarily central. As far as we are aware, this is the first time there have been worst-case guarantees on an n...
Han, J, Lu, J & Zhang, G 2017, 'Tri-level decision-making for decentralized vendor-managed inventory', Information Sciences, vol. 421, pp. 85-103.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. Vendor-managed inventory (VMI) is a common inventory management policy which allows the vendor to manage the buyer's inventory based on the information shared in the course of supply chain management. One challenge in VMI is that both the vendor and buyer are manufacturers who try to achieve an inventory as small as possible or even a zero inventory; it is therefore difficult to manage inventory coordination between them. This paper considers a decentralized VMI problem in a three-echelon supply chain network in which multiple distributors (third-party logistics companies) are selected to balance the inventory between a vendor (manufacturer) and multiple buyers (manufacturers). To handle this issue, this paper first proposes a tri-level decision model to describe the decentralized VMI problem, which allows us to examine how decision members coordinate with each other in respect of decentralized VMI decision-making in a predetermined sequence. We then turn our attention to the geometry of the solution space and present a vertex enumeration algorithm to solve the resulting tri-level decision model. Lastly, a computational study is developed to illustrate how the proposed tri-level decision model and solution approach can handle the decentralized VMI problem. The results indicate that the proposed tri-level decision-making techniques provide a practical way to design a novel manufacturer-manufacturer (vendor-buyer) VMI system where third-party logistics are involved.
He, Q, Xie, X, Wang, Y, Ye, D, Chen, F, Jin, H & Yang, Y 2017, 'Localizing Runtime Anomalies in Service-Oriented Systems', IEEE Transactions on Services Computing, vol. 10, no. 1, pp. 94-106.
View/Download from: Publisher's site
He, Q, Zhou, R, Zhang, X, Wang, Y, Ye, D, Chen, F, Grundy, JC & Yang, Y 2017, 'Keyword Search for Building Service-Based Systems', IEEE Transactions on Software Engineering, vol. 43, no. 7, pp. 658-674.
View/Download from: Publisher's site
He, X, Wu, Y, Yu, D & Merigó, JM 2017, 'Exploring the Ordered Weighted Averaging Operator Knowledge Domain: A Bibliometric Analysis', International Journal of Intelligent Systems, vol. 32, no. 11, pp. 1151-1166.
View/Download from: Publisher's site
View description>>
© 2017 Wiley Periodicals, Inc. The ordered weighted averaging (OWA) operator has received increasingly widespread interest since its appearance in 1988. Recently, a topic search with the keywords “ordered weighted averaging operator” or “OWA operator” on Web of Science (WOS) found 1231 documents. As publications about the OWA operator are increasing rapidly, a scientometric analysis of this research field and discovery of its knowledge domain have become important and necessary. This paper studies the publications about the OWA operator between 1988 and 2015, based on 1213 bibliographic records obtained by topic search from WOS. The disciplinary distribution, most cited papers, influential journals, and influential authors are analyzed through citation and cocitation analysis. The emerging trends in OWA operator research are explored by keyword and reference burst detection analysis. The research methods and results in this paper are meaningful for researchers in the OWA operator field to understand the knowledge domain and establish their own future research directions.
Heng, J, Wang, J, Xiao, L & Lu, H 2017, 'Research and application of a combined model based on frequent pattern growth algorithm and multi-objective optimization for solar radiation forecasting', Applied Energy, vol. 208, pp. 845-866.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Solar radiation forecasting plays a significant role in precisely designing solar energy systems and in the efficient management of solar energy plants. Most research only focuses on accuracy improvements; however, for an effective forecasting model, considering only accuracy or stability is inadequate. To solve this problem, a combined model based on nondominated sorting-based multiobjective bat algorithm (NSMOBA) is developed for the optimization of weight coefficients of each model to achieve high accuracy and stability results simultaneously. In addition, a statistical method and data mining-based approach are used to determine the input variables for constructing the combined model. Monthly average solar radiation and meteorological variables from six datasets in the U.S. collected for case studies were used to assess the comprehensive performance (both in accuracy and stability) of the proposed combined model. The simulation in four experiments demonstrated the following: (a) the proposed combined model is suitable for providing accurate and stable solar radiation forecasting; (b) the combined model exhibits a more competitive forecasting performance than the individual models by using the advantage of each model; (c) the NSMOBA is an efficient algorithm for providing accurate forecasting results and improving the stability where the single bat algorithm is insufficient.
Herr, D, Nori, F & Devitt, SJ 2017, 'Lattice surgery translation for quantum computation', New Journal of Physics, vol. 19, no. 1, pp. 013034-013034.
View/Download from: Publisher's site
View description>>
© 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. In this paper we outline a method for a compiler to translate any non-fault-tolerant quantum circuit to the geometric representation of the lattice surgery error-correcting code using inherent merge and split operations. Since the efficiency of state distillation procedures has not yet been investigated in the lattice surgery model, their translation is given as an example using the proposed method. The resource requirements seem comparable to or better than those of the defect-based state distillation process, while modularity and eventual implementability make the lattice surgery model an interesting alternative to braiding.
Herr, D, Nori, F & Devitt, SJ 2017, 'Optimization of Lattice Surgery is NP-Hard', npj Quantum Information 3, Article number: 35 (2017), vol. 3, no. 1, pp. 1-5.
View/Download from: Publisher's site
View description>>
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or 'defects' within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the Lattice Surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Hoermann, S, McCabe, KL, Milne, DN & Calvo, RA 2017, 'Application of Synchronous Text-Based Dialogue Systems in Mental Health Interventions: Systematic Review', Journal of Medical Internet Research, vol. 19, no. 8, pp. e267-e267.
View/Download from: Publisher's site
View description>>
© Simon Hoermann, Kathryn L McCabe, David N Milne, Rafael A Calvo. Background: Synchronous written conversations (or 'chats') are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions. Objective: The aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat. Methods: A systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure. Results: A total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling). Conclusions: Feasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.
Hou, S, Chen, L, Tao, D, Zhou, S, Liu, W & Zheng, Y 2017, 'Multi-layer multi-view topic model for classifying advertising video', Pattern Recognition, vol. 68, pp. 66-81.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd The recent proliferation of advertising (ad) videos has driven research in multiple applications, ranging from video analysis to video indexing and retrieval. Among them, classifying ad videos is a key task because it allows automatic organization of videos according to categories or genres, and this further enables ad video indexing and retrieval. However, classifying ad videos is challenging compared to other types of video classification because of their unconstrained content. While many studies focus on embedding ads relevant to videos, to our knowledge, few focus on ad video classification. In order to classify ad videos, this paper proposes a novel ad video representation that aims to sufficiently capture the latent semantics of video content from multiple views in an unsupervised manner. In particular, we represent ad videos from four views, including bag-of-feature (BOF), vector of locally aggregated descriptors (VLAD), fisher vector (FV) and object bank (OB). We then devise a multi-layer multi-view topic model, mlmv_LDA, which models the topics of videos from different views. A topical representation for video, supporting category-related tasks, is finally achieved by the proposed method. Our empirical classification results on 10,111 real-world ad videos demonstrate that the proposed approach effectively differentiates ad videos.
Hu, L, Cao, L, Cao, J, Gu, Z, Xu, G & Wang, J 2017, 'Improving the Quality of Recommendations for Users and Items in the Tail of Distribution', ACM Transactions on Information Systems, vol. 35, no. 3, pp. 1-37.
View/Download from: Publisher's site
View description>>
Short-head and long-tail distributed data are widely observed in the real world. The same is true of recommender systems (RSs), where a small number of popular items dominate the choices and feedback data while the rest only account for a small amount of feedback. As a result, most RS methods tend to learn user preferences from popular items since they account for most data. However, recent research in e-commerce and marketing has shown that future businesses will obtain greater profit from long-tail selling. Yet, although the number of long-tail items and users is much larger than that of short-head items and users, in reality, the amount of data associated with long-tail items and users is much less. As a result, user preferences tend to be popularity-biased. Furthermore, insufficient data makes long-tail items and users more vulnerable to shilling attack. To improve the quality of recommendations for items and users in the tail of distribution, we propose a coupled regularization approach that consists of two latent factor models: C-HMF, for enhancing credibility, and S-HMF, for emphasizing specialty on user choices. Specifically, the estimates learned from C-HMF and S-HMF recurrently serve as the empirical priors to regularize one another. Such coupled regularization leads to the comprehensive effects of final estimates, which produce more qualitative predictions for both tail users and tail items. To assess the effectiveness of our model, we conduct empirical evaluations on large real-world datasets with various metrics. The results prove that our approach significantly outperforms the compared methods.
Hu, L, Cao, L, Cao, J, Gu, Z, Xu, G & Yang, D 2017, 'Learning Informative Priors from Heterogeneous Domains to Improve Recommendation in Cold-Start User Domains', ACM Transactions on Information Systems, vol. 35, no. 2, pp. 1-37.
View/Download from: Publisher's site
View description>>
In the real-world environment, users have sufficient experience in their focused domains but lack experience in other domains. Recommender systems are very helpful for recommending potentially desirable items to users in unfamiliar domains, and cross-domain collaborative filtering is therefore an important emerging research topic. However, it is inevitable that the cold-start issue will be encountered in unfamiliar domains due to the lack of feedback data. The Bayesian approach shows that priors play an important role when there are insufficient data, which implies that recommendation performance can be significantly improved in cold-start domains if informative priors can be provided. Based on this idea, we propose a Weighted Irregular Tensor Factorization (WITF) model to leverage multi-domain feedback data across all users to learn the cross-domain priors w.r.t. both users and items. The features learned from WITF serve as the informative priors on the latent factors of users and items in terms of weighted matrix factorization models. Moreover, WITF is a unified framework for dealing with both explicit feedback and implicit feedback. To prove the effectiveness of our approach, we studied three typical real-world cases in which a collection of empirical evaluations were conducted on real-world datasets to compare the performance of our model and other state-of-the-art approaches. The results show the superiority of our model over comparison models.
Hussain, W, Hussain, FK, Hussain, OK, Damiani, E & Chang, E 2017, 'Formulating and managing viable SLAs in cloud computing from a small to medium service provider's viewpoint: A state-of-the-art review', Information Systems, vol. 71, pp. 240-259.
View/Download from: Publisher's site
View description>>
In today's competitive world, service providers need to be customer-focused and proactive in their marketing strategies to create consumer awareness of their services. Cloud computing provides an open and ubiquitous computing feature in which a large random number of consumers can interact with providers and request services. In such an environment, there is a need for intelligent and efficient methods that increase confidence in the successful achievement of business requirements. One such method is the Service Level Agreement (SLA), which is comprised of service objectives, business terms, service relations, obligations and the possible actions to be taken in the case of SLA violation. Most of the emphasis in the literature has, until now, been on the formation of meaningful SLAs by service consumers, through which their requirements will be met. However, in an increasingly competitive market based on the cloud environment, service providers too need a framework that will form a viable SLA, predict possible SLA violations before they occur, and generate early warning alarms that flag a potential lack of resources. This is because when a provider and a consumer commit to an SLA, the service provider is bound to reserve the agreed amount of resources for the entire period of that agreement – whether the consumer uses them or not. It is therefore very important for cloud providers to accurately predict the likely resource usage for a particular consumer and to formulate an appropriate SLA before finalizing an agreement. This problem is more important for a small to medium cloud service provider which has limited resources that must be utilized in the best possible way to generate maximum revenue. A viable SLA in cloud computing is one that intelligently helps the service provider to determine the amount of resources to offer to a requesting consumer, and there are a number of studies on SLA management in the literature. The aim of this paper is two-fold. First, it pr...
Hussain, W, Hussain, OK, Hussain, FK & Khan, MQ 2017, 'Usability Evaluation of English, Local and Plain Languages to Enhance On-Screen Text Readability: A Use Case of Pakistan', Global Journal of Flexible Systems Management, vol. 18, no. 1, pp. 33-49.
View/Download from: Publisher's site
View description>>
© 2016, Global Institute of Flexible Systems Management. In today’s digital world, information can very easily be accessed and digitally processed anywhere. Devices which are capable of processing digital data range from desktop computers to laptops, mobile phones, tablets, and personal digital assistants. For effective communication, text on a Web site should catch a reader’s attention and should be easy to both read and understand. Different constraints are associated with on-screen text readability and legibility, such as font size, color, and style, as well as foreground and background color contrast, line spacing, text congestion, vocabulary and grammar, but text recognition and comprehension are two of the major problems. In this study, we address the issue of how to enhance text readability for non-native English speakers who have a basic understanding of English and speak local languages which are not formally taught in academia. We select a use case in Pakistan, a country in which English and Urdu are the official languages, and a number of local languages are spoken in different parts of the country. Due to the wide variety of local languages, no Web site can support the many local language scripts or alphabets and display them on digital devices. When users with only a basic knowledge of English—particularly low-literate users from a local language background—try to read an English text, it is highly challenging for them to understand the meaning of words. In this study, we propose a plain language scheme in which a text is converted into a roman text. A roman text is formed by using the English alphabet and combining letters in such a way that when it is read, it sounds like a local language. To evaluate the applicability of our approach, we conducted a survey of users from different educational backgrounds, using a text written in English, the local language and plain language, with users who speak the particular local language. For each survey, we ...
Inan, DI & Beydoun, G 2017, 'Disaster Knowledge Management Analysis Framework Utilizing Agent-Based Models: Design Science Research Approach', Procedia Computer Science, vol. 124, pp. 116-124.
View/Download from: Publisher's site
Ivanyos, G, Qiao, Y & Subrahmanyam, KV 2017, 'Non-commutative Edmonds’ problem and matrix semi-invariants', computational complexity, vol. 26, no. 3, pp. 717-763.
View/Download from: Publisher's site
View description>>
© 2016, Springer International Publishing. In 1967, J. Edmonds introduced the problem of computing the rank over the rational function field of an n × n matrix T with integral homogeneous linear polynomials. In this paper, we consider the non-commutative version of Edmonds’ problem: compute the rank of T over the free skew field. This problem has been proposed, sometimes in disguise, from several different perspectives in the study of, for example, the free skew field itself (Cohn in J Symbol Log 38(2):309–314, 1973), matrix spaces of low rank (Fortin-Reutenauer in Sémin Lothar Comb 52:B52f 2004), Edmonds’ original problem (Gurvits in J Comput Syst Sci 69(3):448–484, 2004), and more recently, non-commutative arithmetic circuits with divisions (Hrubeš and Wigderson in Theory Comput 11:357-393, 2015. doi:10.4086/toc.2015.v011a014). It is known that this problem relates to the following invariant ring, which we call the F-algebra of matrix semi-invariants, denoted as R(n, m). For a field F, it is the ring of invariant polynomials for the action of SL(n, F) × SL(n, F) on tuples of matrices—(A, C) ∈ SL(n, F) × SL(n, F) sends (B_1, …, B_m) ∈ M(n, F)^⊕m to (A B_1 C^T, …, A B_m C^T). Then those T with non-commutative rank < n correspond to those points in the nullcone of R(n, m). In particular, if the nullcone of R(n, m) is defined by elements of degree ≤ σ, then there follows a poly(n, σ)-time randomized algorithm to decide whether the non-commutative rank of T is full. To our knowledge, previously the best bound for σ was O(n^2 · 4^(n^2)) over algebraically closed fields of characteristic 0 (Derksen in Proc Am Math Soc 129(4):955–964, 2001). We now state the main contributions of this paper: We observe that by using an algorithm of Gurvits, and assuming the above bound σ for R(n, m) over Q, deciding whether or not T has non-commutative rank < n over Q can be done deterministically in time polynomial in the input size and σ. When F is large enough, we devise an algorithm ...
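To make the group action in the abstract above concrete, the following sketch (illustrative only, not code from the paper) applies (A, C) to a tuple of matrices by computing A B_i C^T for each B_i, using small 2×2 integer matrices with determinant 1 as stand-ins for elements of SL(n, F):

```python
# Sketch of the SL(n, F) x SL(n, F) action on matrix tuples:
# (A, C) sends (B_1, ..., B_m) to (A B_1 C^T, ..., A B_m C^T).
# Matrices are plain nested lists; entries here are integers for clarity.

def matmul(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    """Return the transpose of a matrix."""
    return [list(row) for row in zip(*X)]

def act(A, C, Bs):
    """Apply the action (A, C) . (B_1, ..., B_m) = (A B_i C^T)_i."""
    Ct = transpose(C)
    return [matmul(matmul(A, B), Ct) for B in Bs]

# A and C are unimodular (determinant 1), so they lie in SL(2, Z).
A = [[1, 1], [0, 1]]
C = [[1, 0], [2, 1]]
Bs = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
orbit_point = act(A, C, Bs)  # a new tuple in the same orbit
```

The invariants R(n, m) discussed in the paper are exactly the polynomials in the entries of (B_1, …, B_m) that are unchanged under every such move.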
Jiang, J, Wen, S, Yu, S, Xiang, Y & Zhou, W 2017, 'Identifying Propagation Sources in Networks: State-of-the-Art and Comparative Studies', IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 465-481.
View/Download from: Publisher's site
View description>>
It has long been a significant but difficult problem to identify propagation sources based on limited knowledge of network structures and the varying states of network nodes. In practice, real cases include locating the sources of rumors in online social networks and finding the origins of a rolling blackout in smart grids. This paper reviews the state-of-the-art in source identification techniques and discusses the pros and cons of current methods in this field. Furthermore, in order to gain a quantitative understanding of current methods, we provide a series of experiments and comparisons based on various environment settings. In particular, our observations reveal considerable differences in performance when employing different network topologies, various propagation schemes, and diverse propagation probabilities. We therefore reach the following points for future work. First, current methods remain far from practice as their accuracy in terms of error distance (δ) is normally larger than three in most scenarios. Second, the majority of current methods are too time consuming to quickly locate the origins of propagation. In addition, we list five open issues of current methods exposed by the analysis, from the perspectives of topology, number of sources, number of networks, temporal dynamics, and complexity and scalability. Solutions to these open issues are of great academic and practical significance.
Kaiwartya, O, Prasad, M, Prakash, S, Samadhiya, D, Abdullah, AH & Rahman, SOA 2017, 'An investigation on biometric internet security', International Journal of Network Security, vol. 19, no. 2, pp. 167-176.
View/Download from: Publisher's site
View description>>
Due to the Internet revolution in the last decade, every sector of society directly or indirectly depends on computers, highly integrated computer networks and communication systems, electronic data storage and high-speed transfer devices, e-commerce, e-security, e-governance, and e-business. The Internet revolution has also emerged as a significant challenge due to the threats of hacking of systems and individual accounts, malware, fraud, and vulnerabilities of systems and networks. In this context, this paper explores e-security in terms of challenges and measures. Biometric recognition is also investigated as a key e-security solution. E-security is precisely described to clarify the concept and its requirements. The major challenges of e-security, namely threats, attacks, and vulnerabilities, are presented in detail. Some measures are identified and discussed for these challenges. Biometric recognition is discussed in detail, with the pros and cons of the approach as a key e-security solution. This investigation helps in a clear understanding of e-security challenges and the possible implementation of the identified measures in the wide area of network communications.
Kieferová, M & Wiebe, N 2017, 'Tomography and generative training with quantum Boltzmann machines', Physical Review A, vol. 96, no. 6.
View/Download from: Publisher's site
Ko, L-W, Komarov, O, Hairston, WD, Jung, T-P & Lin, C-T 2017, 'Sustained Attention in Real Classroom Settings: An EEG Study', Frontiers in Human Neuroscience, vol. 11, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2017 Ko, Komarov, Hairston, Jung and Lin. Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra.
Kolamunna, H, Chauhan, J, Hu, Y, Thilakarathna, K, Perino, D, Makaroff, D & Seneviratne, A 2017, 'Are Wearables Ready for Secure and Direct Internet Communication?', GetMobile: Mobile Computing and Communications, vol. 21, no. 3, pp. 5-10.
View/Download from: Publisher's site
View description>>
Recent advances in wearable technology tend towards standalone wearables. Most of today's wearable devices and applications still rely on a paired smartphone for secure Internet communication, even though many current-generation wearables are equipped with Wi-Fi and 3G/4G network interfaces that provide direct Internet access. Yet it is not clear if such communication can be efficiently and securely supported through existing protocols. Our findings show that it is possible to support secure and efficient direct communication between wearables and the Internet.
Kong, Y, Zhang, M & Ye, D 2017, 'A belief propagation-based method for task allocation in open and dynamic cloud environments', Knowledge-Based Systems, vol. 115, pp. 123-132.
View/Download from: Publisher's site
Kurian, JC & John, BM 2017, 'User-generated content on the Facebook page of an emergency management agency', Online Information Review, vol. 41, no. 4, pp. 558-579.
View/Download from: Publisher's site
View description>>
Purpose: The purpose of this paper is to explore themes eventuating from the user-generated content posted by users on the Facebook page of an emergency management agency. Design/methodology/approach: An information classification framework was used to classify user-generated content posted by users, including all of the content posted during a six month period (January to June 2015). The posts were read and analysed thematically to determine the overarching themes evident across the entire collection of user posts. Findings: The results of the analysis demonstrate that the key themes that eventuate from the user-generated content posted are “Self-preparedness”, “Emergency signalling solutions”, “Unsurpassable companion”, “Aftermath of an emergency”, and “Gratitude towards emergency management staff”. Major user-generated content identified among these themes is status-update, criticism, recommendation, and request. Research limitations/implications: This study contributes to theory on the development of key themes from user-generated content posted by users on a public social networking site. An analysis of user-generated content identified in this study implies that Facebook is primarily used for information dissemination, coordination and collaboration, and information seeking in the context of emergency management. Users may gain the benefits of identity construction and social provisions, whereas social conflict is a potential detrimental implication. Other user costs include lack of social support by stakeholders, investment in social infrastructure and additional work force req...
Laengle, S, Loyola, G & Merigo, JM 2017, 'Mean-Variance Portfolio Selection With the Ordered Weighted Average', IEEE Transactions on Fuzzy Systems, vol. 25, no. 2, pp. 350-362.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Portfolio selection is the theory that studies the process of selecting the optimal proportion of different assets. The first approach was introduced by Harry Markowitz and was based on a mean-variance framework. This paper introduces the ordered weighted average (OWA) in the mean-variance model. The main idea is to replace the classical mean and variance by the OWA operator. By doing so, the new model is able to study different degrees of optimism and pessimism in the analysis, developing an approach that considers the decision maker's attitude in the selection process. This paper also suggests a new framework for dealing with the attitudinal character of the decision maker based on the numerical values of the available arguments. The main advantage of this method is the ability to adapt to many situations, offering a more complete representation of the available data, from the most pessimistic situation to the most optimistic one. An illustrative example with fictitious data and a real example are studied.
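The OWA operator at the core of the abstract above has a simple closed form: OWA(a_1, …, a_n) = Σ_j w_j b_j, where b_j is the j-th largest argument and the weights w_j sum to one. A minimal sketch (illustrative values, not the paper's data) showing how the weight vector encodes optimism or pessimism:

```python
# Sketch of the ordered weighted averaging (OWA) operator.
# OWA(a_1..a_n) = sum_j w_j * b_j, where b_j is the j-th largest argument.

def owa(values, weights):
    """Apply the OWA operator: reorder values descending, then take
    the weighted sum against the (position-based) weight vector."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have equal length")
    ordered = sorted(values, reverse=True)  # the reordering step
    return sum(w * b for w, b in zip(weights, ordered))

returns = [0.04, -0.01, 0.07, 0.02]  # hypothetical asset returns

# Optimistic attitude: weight concentrated on the largest arguments.
optimistic = owa(returns, [0.7, 0.2, 0.1, 0.0])
# Pessimistic attitude: weight concentrated on the smallest arguments.
pessimistic = owa(returns, [0.0, 0.1, 0.2, 0.7])
# Equal weights recover the classical arithmetic mean.
neutral = owa(returns, [0.25, 0.25, 0.25, 0.25])
```

Because only the weight vector changes, the same data yields a spectrum of aggregates from max (weights [1, 0, …, 0]) down to min (weights [0, …, 0, 1]), which is what lets the mean-variance model above interpolate between optimistic and pessimistic attitudes.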
Laengle, S, Merigó, JM, Miranda, J, Słowiński, R, Bomze, I, Borgonovo, E, Dyson, RG, Oliveira, JF & Teunter, R 2017, 'Forty years of the European Journal of Operational Research: A bibliometric overview', European Journal of Operational Research, vol. 262, no. 3, pp. 803-816.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. The European Journal of Operational Research (EJOR) published its first issue in 1977. This paper presents a general overview of the journal over its lifetime by using bibliometric indicators. We discuss its performance compared to other journals in the field and identify key contributing countries/institutions/authors as well as trends in research topics based on the Web of Science Core Collection database. The results indicate that EJOR is one of the leading journals in the area of operational research (OR) and management science (MS), with a wide range of authors from institutions and countries from all over the world publishing in it. Graphical visualization of similarities (VOS) provides further insights into how EJOR links to other journals and how it links researchers across the globe.
Lai, P-W, Ko, L-W, Wang, Y & Lin, C-T 2017, 'EEG-based assessment of pilot spatial navigation on an aviation simulator', Journal of Science and Medicine in Sport, vol. 20, pp. S37-S38.
View/Download from: Publisher's site
Lekitsch, B, Weidt, S, Fowler, AG, Mølmer, K, Devitt, SJ, Wunderlich, C & Hensinger, WK 2017, 'Blueprint for a microwave trapped ion quantum computer', Science Advances, vol. 3, no. 2, pp. 1-11.
View/Download from: Publisher's site
View description>>
Design to build a trapped ion quantum computer with modules connected by ion transport and voltage-driven quantum gate technology.
Li, J, Mei, X, Prokhorov, D & Tao, D 2017, 'Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 3, pp. 690-703.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
Li, M, Fu, C, Liu, X-Y, Yang, J, Zhu, T & Han, L 2017, 'Evolutionary virus immune strategy for temporal networks based on community vitality', Future Generation Computer Systems, vol. 74, pp. 276-290.
View/Download from: Publisher's site
View description>>
Preventing viruses from spreading in networks is a hot topic. Existing immune strategies are mainly designed for static networks, which become ineffective for temporal networks. In this paper, we propose an evolutionary virus immune strategy for temporal networks, which takes into account the community evolution. First, we define a new metric, community vitality (CV), to quantize the evolution characteristics of communities. Second, based on the community vitality, we propose an immune strategy which selects an optimized number of initial nodes according to node influence (NI). Third, a theoretical analysis is proposed to measure the immune effect of the evolutionary immune strategy. Compared with the random immunization, the targeted immunization and the acquaintance immune strategy, we show that the proposed strategy has a much larger coverage, i.e., more nodes will have immune ability given the same number of initial immune nodes.
Liang, C, Lin, C-T, Yao, S-N, Chang, W-S, Liu, Y-C & Chen, S-A 2017, 'Visual attention and association: An electroencephalography study in expert designers', Design Studies, vol. 48, pp. 76-95.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd Extant research on the visual attention and association of designers is limited, and scientific evidence differentiating among the effects of diverse visual stimuli on design thinking is insufficient. The current study invited 12 healthy expert designers and analysed their experiences of visual attention and association in addition to exploring the differences caused by three types of pictorial representation. The results of this electroencephalography (EEG) experiment indicated that the frontoparietal region was particularly activated when the designers engaged in visual attention tasks, whereas the brainwaves were particularly activated in the distributed prefrontal, frontocentral, and parietooccipital regions during the visual association tasks. In addition, there were no significant differences in the brainwave energy resulting from the three types of pictorial representation applied in this study. The research outcomes linking design studies to cognitive neuroscience establish a concrete foundation for developing future applied research and diverse educational practices.
Lin, C-T, Chuang, C-H, Cao, Z, Singh, AK, Hung, C-S, Yu, Y-H, Nascimben, M, Liu, Y-T, King, J-T, Su, T-P & Wang, S-J 2017, 'Forehead EEG in Support of Future Feasible Personal Healthcare Solutions: Sleep Management, Headache Prevention, and Depression Treatment', IEEE Access, vol. 5, pp. 10612-10621.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. There are current limitations in the recording technologies for measuring EEG activity in clinical and experimental applications. Acquisition systems involving wet electrodes are time-consuming and uncomfortable for the user. Furthermore, dehydration of the gel affects the quality of the acquired data and reliability of long-term monitoring. As a result, dry electrodes may be used to facilitate the transition from neuroscience research or clinical practice to real-life applications. EEG signals can be easily obtained using dry electrodes on the forehead, which provides extensive information concerning various cognitive dysfunctions and disorders. This paper presents the usefulness of the forehead EEG with advanced sensing technology and signal processing algorithms to support people with healthcare needs, such as monitoring sleep, predicting headaches, and treating depression. The proposed system for evaluating sleep quality is capable of identifying five sleep stages to track nightly sleep patterns. Additionally, people with episodic migraines can be notified of an imminent migraine headache hours in advance through monitoring forehead EEG dynamics. The depression treatment screening system can predict the efficacy of rapid antidepressant agents. It is evident that frontal EEG activity is critically involved in sleep management, headache prevention, and depression treatment. The use of dry electrodes on the forehead allows for easy and rapid monitoring on an everyday basis. The advances in EEG recording and analysis ensure a promising future in support of personal healthcare solutions.
Lin, C-T, Liu, Y-T, Wu, S-L, Cao, Z, Wang, Y-K, Huang, C-S, King, J-T, Chen, S-A, Lu, S-W & Chuang, C-H 2017, 'EEG-Based Brain-Computer Interfaces: A Novel Neurotechnology and Computational Intelligence Method', IEEE Systems, Man, and Cybernetics Magazine, vol. 3, no. 4, pp. 16-26.
View/Download from: Publisher's site
View description>>
This article presents the latest BCI-related research done in our group. Our previous work applied computational intelligence technology in BCIs to inspire detailed investigations of practical issues in real-life applications. Novel EEG devices featuring dry electrodes facilitate and speed up electrode positioning before recording and allow subjects to move freely in operational environments. We also demonstrate the feasibility of applying CCA, RBFNs, effective connectivity measurements, and D-S theory to help BCIs extract informative knowledge from brain signals. Two recent trends in research in the computational and artificial intelligence community, big data and deep learning, are expected to impact the direction and development of BCIs.
Liu, C, Talaei-Khoei, A, Zowghi, D & Daniel, J 2017, 'Data Completeness in Healthcare: A Literature Survey', Pacific Asia Journal of the Association for Information Systems, vol. 9, no. 2, pp. 75-100.
Liu, F, Zhang, G & Lu, J 2017, 'Heterogeneous domain adaptation: An unsupervised approach', IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 12, pp. 5588-5602.
View/Download from: Publisher's site
View description>>
Domain adaptation leverages the knowledge in one domain - the source domain - to improve learning efficiency in another domain - the target domain. Existing heterogeneous domain adaptation research is relatively well-progressed, but only in situations where the target domain contains at least a few labeled instances. In contrast, heterogeneous domain adaptation with an unlabeled target domain has not been well-studied. To contribute to the research in this emerging field, this paper presents: (1) an unsupervised knowledge transfer theorem that guarantees the correctness of transferring knowledge; and (2) a principal angle-based metric to measure the distance between two pairs of domains: one pair comprises the original source and target domains and the other pair comprises two homogeneous representations of two domains. The theorem and the metric have been implemented in an innovative transfer model, called a Grassmann-Linear monotonic maps-geodesic flow kernel (GLG), that is specifically designed for heterogeneous unsupervised domain adaptation (HeUDA). The linear monotonic maps meet the conditions of the theorem and are used to construct homogeneous representations of the heterogeneous domains. The metric shows the extent to which the homogeneous representations have preserved the information in the original source and target domains. By minimizing the proposed metric, the GLG model learns the homogeneous representations of heterogeneous domains and transfers knowledge through these learned representations via a geodesic flow kernel. To evaluate the model, five public datasets were reorganized into ten HeUDA tasks across three applications: cancer detection, credit assessment, and text classification. The experiments demonstrate that the proposed model delivers superior performance over the existing baselines.
Liu, Y, Huang, ML, Huang, W & Liang, J 2017, 'A physiognomy based method for facial feature extraction and recognition', Journal of Visual Languages & Computing, vol. 43, pp. 103-109.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd This paper proposes a novel method for calculating personality based on Chinese physiognomy. The proposed solution combines ancient and modern physiognomy to understand the relationship between personality and facial features and to model a baseline for shaping facial features. We compute a histogram of the image, searching for threshold values to create a binary image in an adaptive way. The two-pass connected component method identifies each feature's region. We encode the binary image to remove noise points, so that the new connected image provides a better result. According to our analysis of contours, we can locate facial features and classify them by means of a calculation method. The number of clusters is decided by a model and the facial feature contours are classified using the k-means method. The validity of our method was tested on a face database and demonstrated by a comparative experiment.
Liu, Y-T, Pal, NR, Marathe, AR, Wang, Y-K & Lin, C-T 2017, 'Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources', Frontiers in Neuroscience, vol. 11, no. JUN, pp. 1-10.
View/Download from: Publisher's site
View description>>
© 2017 Liu, Pal, Marathe, Wang and Lin. A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance...
Llopis-Albert, C, Merigó, JM, Xu, Y & Liao, H 2017, 'Improving Regional Climate Projections by Prioritized Aggregation via Ordered Weighted Averaging Operators', Environmental Engineering Science, vol. 34, no. 12, pp. 880-886.
View/Download from: Publisher's site
View description>>
© 2017 Mary Ann Liebert, Inc. Decision makers express a strong need for reliable information on future climate changes in order to develop the best mitigation and adaptation strategies to address impacts. These decisions are based on future climate projections that are simulated using different Representative Concentration Pathways (RCPs), General Circulation Models (GCMs), and downscaling techniques to obtain high-resolution Regional Climate Models. RCPs defined by the Intergovernmental Panel on Climate Change entail a certain combination of the underlying driving forces behind climate and land use/land cover changes, which leads to different anthropogenic greenhouse gas concentration trajectories. Projections of global and regional climate change should also take into account relevant sources of uncertainty and stakeholders' risk attitudes when defining climate policies. The goal of this article is to improve regional climate projections by their prioritized aggregation through the ordered weighted averaging (OWA) operator. The aggregated projection is achieved by considering the similarity of the projections obtained by combining different GCMs, RCPs, and downscaling techniques. The relative weights of the different projections to be aggregated by the OWA operator are obtained by regular increasing monotone fuzzy quantifiers, which enables modeling of the stakeholders' risk attitudes. The methodology provides a robust decision-making tool to evaluate the performance of future climate projections and to design sustainable policies under uncertainty and risk tolerance, and it has been successfully applied to a real case study.
Lu, J, Herrera, F & Zhang, G 2017, 'Guest Editorial Special Section on Fuzzy Systems in Data Science', IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp. 1373-1375.
View/Download from: Publisher's site
Lu, M, Lai, C, Ye, T, Liang, J & Yuan, X 2017, 'Visual Analysis of Multiple Route Choices Based on General GPS Trajectories', IEEE Transactions on Big Data, vol. 3, no. 2, pp. 234-247.
View/Download from: Publisher's site
View description>>
There are often multiple routes between regions. Drivers choose different routes with different considerations. Such considerations have always been a point of interest in the transportation area. Studies of route choice behaviour are usually based on small-range experiments with a group of volunteers. However, the experimental data is quite limited in its spatial and temporal scale as well as in its practical reliability. In this work, we explore the possibility of studying route choice behaviour based on a general trajectory dataset, which is more realistic at a wider scale. We develop a visual analytics system to help users handle large-scale trajectory data, compare different route choices, and explore the underlying reasons. Specifically, the system consists of: 1. interactive trajectory filtering, which supports graphical trajectory queries; 2. spatial visualization, which gives an overview of all feasible routes extracted from the filtered trajectories; 3. factor visual analytics, which supports the exploration and construction of hypotheses about different factors' impact on route choice behaviour, and their verification with an integrated route choice model. Applying the system to a real taxi GPS dataset, we report its performance and demonstrate its effectiveness with three cases.
Lund, AP, Bremner, MJ & Ralph, TC 2017, 'Quantum Sampling Problems, BosonSampling and Quantum Supremacy', npj Quantum Information, vol. 3, no. 1, pp. 1-8.
View/Download from: Publisher's site
View description>>
There is a large body of evidence for the potential of greater computational power using information carriers that are quantum mechanical over those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a large enough quantum computation that definitively demonstrates quantum supremacy. Recently the study of computational problems that produce samples from probability distributions has added to both our understanding of the power of quantum algorithms and lowered the requirements for demonstration of fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also permit physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we will review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We will present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling.
Mahalleh, MKK, Ashjari, B, Yousefi, F & Saberi, M 2017, 'A Robust Solution to Resource-Constraint Project Scheduling Problem', International Journal of Fuzzy Logic and Intelligent Systems, vol. 17, no. 3, pp. 221-227.
View/Download from: Publisher's site
View description>>
© The Korean Institute of Intelligent Systems. This paper aims to propose a solution to the resource-constraint project scheduling problem (RCPSP). RCPSP is a significant scheduling problem in project management. Currently, there are insufficient studies dealing with the robustness of RCPSP. This paper improves the robustness of RCPSP and develops a Robust RCPSP, namely RRCSP. RRCSP is structured by relaxing a fundamental assumption, namely that 'the tasks start on time as planned'. Relaxing this assumption makes the model more realistic. The proposed solution minimizes the makespan while maximizing the robustness. Maximizing the robustness requires maximizing the floating time of activities, which is NP-hard. This creates more stability in the project finishing time. RCPSP stands as the root cause of many other problems such as the multi-mode resource-constrained project scheduling problem (MRCPSP), the multi-skill resource-constrained project scheduling problem (MSRCPSP), and similar problems, and hence proposing a solution to this problem helps pave a new line for future research in the other mentioned areas. The applicability of the proposed model is examined through a numerical example.
Mans, B & Mathieson, L 2017, 'Incremental Problems in the Parameterized Complexity Setting', Theory of Computing Systems, vol. 60, no. 1, pp. 3-19.
View/Download from: Publisher's site
View description>>
© 2016, Springer Science+Business Media New York. Dynamic systems are becoming steadily more important with the profusion of mobile and distributed computing devices. Coincidentally, incremental computation is a natural approach to dealing with ongoing changes. We explore incremental computation in the parameterized complexity setting and show that incrementalization leads to non-trivial complexity classifications. Interestingly, some incremental versions of hard problems become tractable, while others remain hard. Moreover, tractability or intractability is not a simple function of the problem's static complexity: every level of the W-hierarchy exhibits complete problems with both tractable and intractable incrementalizations. For problems that are already tractable in their static form, we also show that incrementalization can lead to interesting algorithms, improving upon the trivial approach of using the static algorithm at each step.
Mao, M, Lu, J, Zhang, G & Zhang, J 2017, 'Multirelational Social Recommendations via Multigraph Ranking', IEEE Transactions on Cybernetics, vol. 47, no. 12, pp. 4049-4061.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Recommender systems aim to identify relevant items for particular users in large-scale online applications. The historical rating data of users is a valuable input resource for many recommendation models such as collaborative filtering (CF), but these models are known to suffer from the rating sparsity problem when the users or items under consideration have insufficient rating records. With the continued growth of online social networks, the increased user-to-user relationships are reported to be helpful and can alleviate the CF rating sparsity problem. Although researchers have developed a range of social network-based recommender systems, there is no unified model to handle multirelational social networks. To address this challenge, this paper represents different user relationships in a multigraph and develops a multigraph ranking model to identify and recommend the nearest neighbors of particular users in high-order environments. We conduct empirical experiments on two real-world datasets: 1) Epinions and 2) Last.fm, and the comprehensive comparison with other approaches demonstrates that our model improves recommendation performance in terms of both recommendation coverage and accuracy, especially when the rating data are sparse.
McGregor, C & Bonnis, B 2017, 'New Approaches for Integration: Integration of Haptic Garments, Big Data Analytics, and Serious Games for Extreme Environments', IEEE Consumer Electronics Magazine, vol. 6, no. 4, pp. 92-96.
View/Download from: Publisher's site
View description>>
© 2012 IEEE. Haptic garments present new opportunities to increase realism in gaming. As Real As It Gets (ARAIG) is a new form of haptic garment that uses muscle stimulation, vibration, and 7.1 surround sound to provide a new level of realism in gaming. The integration of new haptic garments like ARAIG with big data analytics and serious games presents new opportunities for more realistic virtual training that has application in many domains. In particular, there is great potential to support repeatable virtual training for extreme environments. In 2016, the IEEE Life Sciences Technical Community worked across the IEEE Societies to demonstrate this interdisciplinary nature, with a focus on solving life science problems in extreme environments. This article is based on our keynote address at the International Conference on Consumer Electronics (ICCE)-Berlin in 2016. It provides an example of this interdisciplinary case study research in action.
Meng, Q, Catchpoole, D, Skillicorn, D & Kennedy, PJ 2017, 'DBNorm: normalizing high-density oligonucleotide microarray data based on distributions', BMC Bioinformatics, vol. 18, no. 1.
View/Download from: Publisher's site
View description>>
© 2017 The Author(s). Background: Data from patients with rare diseases is often produced using different platforms and probe sets because patients are widely distributed in space and time. Aggregating such data requires a method of normalization that makes patient records comparable. Results: This paper proposes DBNorm, an algorithm implemented as an R package that normalizes arbitrarily distributed data to a common, comparable form. Specifically, DBNorm merges data distributions by fitting functions to each of them and using the probability of each element drawn from the fitted distribution to merge it into a global distribution. DBNorm contains state-of-the-art fitting functions including Polynomial, Fourier and Gaussian distributions, and also allows users to define their own fitting functions if required. Conclusions: The performance of DBNorm is compared with z-score, average difference, quantile normalization and ComBat on a set of datasets, including several that are publicly available. The performance of these normalization methods is compared using statistics, visualization, and classification when class labels are known, based on a number of self-generated and public microarray datasets. The experimental results show that DBNorm achieves better normalization results than conventional methods. Finally, the approach has the potential to be applicable outside bioinformatics analysis.
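The distribution-merging idea in this abstract can be sketched minimally. The Python fragment below is an illustrative assumption, not DBNorm itself (which is an R package with several fitting functions): it fits a Gaussian to each batch, maps each value through its batch CDF, and re-expresses it via the inverse CDF of a Gaussian fitted to the pooled data.

```python
from statistics import NormalDist, mean, stdev

def dbnorm_gaussian(batches):
    """Hypothetical sketch of distribution-based normalization:
    per-batch Gaussian fit -> CDF probability -> inverse CDF of a
    global Gaussian fitted to the pooled data."""
    pooled = [x for b in batches for x in b]
    global_dist = NormalDist(mean(pooled), stdev(pooled))
    out = []
    for b in batches:
        local = NormalDist(mean(b), stdev(b))
        # clamp probabilities away from 0/1 so inv_cdf stays finite
        out.append([global_dist.inv_cdf(min(max(local.cdf(x), 1e-9), 1 - 1e-9))
                    for x in b])
    return out
```

For example, two batches with the same shape but shifted means are mapped onto identical normalized values, which is the comparability property the paper is after.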
Meo, PD, Musial-Gabrys, K, Rosaci, D, Sarnè, GML & Aroyo, L 2017, 'Using Centrality Measures to Predict Helpfulness-Based Reputation in Trust Networks', ACM Transactions on Internet Technology, vol. 17, no. 1, pp. 1-20.
View/Download from: Publisher's site
View description>>
In collaborative Web-based platforms, user reputation scores are generally computed according to two orthogonal perspectives: (a) helpfulness-based reputation (HBR) scores and (b) centrality-based reputation (CBR) scores. In HBR approaches, the most reputable users are those who post the most helpful reviews according to the opinion of the members of their community. In CBR approaches, a "who-trusts-whom" network, known as a trust network, is available, and the most reputable users occupy the most central positions in the trust network, according to some definition of centrality. The identification of users featuring large HBR scores is one of the most important research issues in the field of Social Networks, and it is a critical success factor of many Web-based platforms like e-marketplaces, product review Web sites, and question-and-answering systems. Unfortunately, user reviews/ratings are often sparse, and this makes the calculation of HBR scores inaccurate. In contrast, CBR scores are relatively easy to calculate provided that the topology of the trust network is known. In this article, we investigate whether CBR scores are effective in predicting HBR ones. To perform our study, we used real-life datasets extracted from CIAO and Epinions (two product review Web sites) and Wikipedia, and applied five popular centrality measures (Degree Centrality, Closeness Centrality, Betweenness Centrality, PageRank and Eigenvector Centrality) to calculate CBR scores. Our analysis provides a positive answer to our research question: CBR scores allow for predicting HBR ones, and Eigenvector Centrality was found to be the most important predictor. Our findings prove that we can leverage trust relationships to spot those users producing the most helpful reviews for the whole community.
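Eigenvector Centrality, reported in this abstract as the strongest predictor, can be computed by power iteration on the adjacency structure. The sketch below is a textbook formulation written from scratch for illustration (the graph and function name are hypothetical, not taken from the article):

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration for eigenvector centrality.
    adj maps each node to its list of neighbours (undirected graph)."""
    nodes = list(adj)
    score = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # each node's new score is the sum of its neighbours' scores
        new = {n: sum(score[m] for m in adj[n]) for n in nodes}
        norm = max(new.values()) or 1.0  # normalize by the max component
        score = {n: v / norm for n, v in new.items()}
    return score
```

On a triangle every node is equally central; adding a pendant node to one vertex makes that vertex the most central and the pendant the least. Note that plain power iteration can oscillate on bipartite graphs, which real implementations handle with damping or a shift.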
Merigó, JM & Yang, J 2017, 'Accounting Research: A Bibliometric Analysis', Australian Accounting Review, vol. 27, no. 1, pp. 71-100.
View/Download from: Publisher's site
View description>>
Bibliometrics is a fundamental field of information science that studies bibliographic material quantitatively. It is very useful for organising available knowledge within a specific scientific discipline. This study presents a bibliometric overview of accounting research using the Web of Science database, identifying the most relevant research in the field classified by papers, authors, journals, institutions and countries. The results show that the most influential journals are: The Journal of Accounting and Economics, Journal of Accounting Research, The Accounting Review and Accounting, Organizations and Society. It also shows that US institutions are the most influential worldwide. However, it is important to note that some very good research in this area, including a small number of papers and citations, may not show up in this study due to the specific characteristics of different subtopics.
Merigó, JM & Yang, J-B 2017, 'A bibliometric analysis of operations research and management science', Omega, vol. 73, pp. 37-48.
View/Download from: Publisher's site
View description>>
© 2016 Bibliometric analysis is the quantitative study of bibliographic material. It provides a general picture of a research field that can be classified by papers, authors and journals. This paper presents a bibliometric overview of research published in operations research and management science in recent decades. The main objective of this study is to identify some of the most relevant research in this field and some of the newest trends according to the information found in the Web of Science database. Several classifications are made, including an analysis of the most influential journals, the two hundred most cited papers of all time and the most productive and influential authors. The results obtained are in accordance with the common wisdom, although some variations are found.
Merigó, JM, Blanco-Mesa, F, Gil-Lafuente, AM & Yager, RR 2017, 'Thirty Years of the International Journal of Intelligent Systems: A Bibliometric Review', International Journal of Intelligent Systems, vol. 32, no. 5, pp. 526-554.
View/Download from: Publisher's site
View description>>
© 2016 Wiley Periodicals, Inc. The International Journal of Intelligent Systems was created in 1986. Today, the journal is 30 years old. To celebrate this anniversary, this study develops a bibliometric review of all of the papers published in the journal between 1986 and 2015. The results are largely based on the Web of Science Core Collection, which classifies leading bibliographic material by using several indicators including total number of publications and citations, the h-index, cites per paper, and citing articles. The work also uses the VOS viewer software for visualizing the main results through bibliographic coupling and co-citation. The results show a general overview of leading trends that have influenced the journal in terms of highly cited papers, authors, journals, universities and countries.
Merigó, JM, Linares-Mustarós, S & Ferrer-Comalat, JC 2017, 'Guest editorial', Kybernetes, vol. 46, no. 1, pp. 2-7.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Soto-Acosta, P 2017, 'Distance measures, weighted averages, OWA operators and Bonferroni means', Applied Soft Computing, vol. 50, pp. 356-366.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier B.V. The ordered weighted average (OWA) is an aggregation operator that provides a parameterized family of operators between the minimum and the maximum. This paper presents the OWA weighted average distance operator. The main advantage of this new approach is that it unifies the weighted Hamming distance and the OWA distance in the same formulation and considering the degree of importance that each concept has in the analysis. This operator includes a wide range of particular cases from the minimum to the maximum distance. Some further generalizations are also developed with generalized and quasi-arithmetic means. The use of Bonferroni means under this framework is also studied. The paper ends with an application of the new approach in a group decision making problem with Dempster-Shafer belief structure regarding the selection of strategies.
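The OWA aggregation and OWA distance described in this abstract can be illustrated with a minimal Python sketch. This is the standard textbook formulation (weights applied to arguments re-ordered descending), not the paper's unified operator, and the function names are my own:

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the
    values sorted in descending order."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def owa_distance(x, y, weights):
    """OWA distance: OWA aggregation of component-wise absolute differences."""
    return owa([abs(a - b) for a, b in zip(x, y)], weights)
```

Choosing weights (1, 0, ..., 0) recovers the maximum and (0, ..., 0, 1) the minimum, which is the parameterized min-max family the abstract refers to.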
Mirtalaie, MA, Hussain, OK, Chang, E & Hussain, FK 2017, 'A decision support framework for identifying novel ideas in new product development from cross-domain analysis', Information Systems, vol. 69, pp. 59-80.
View/Download from: Publisher's site
View description>>
In current competitive times, product manufacturers need not only to retain their existing customer base, but also to increase their market share. One way they can achieve this is by generating new ideas and developing novel products with new features. As highlighted in the literature, in generating new ideas to develop novel and innovative products, it is important that product designers satisfy the needs of both current customers and new customers. However, despite the large number of existing studies that identify novel features in the ideation phase, product designers do not have a systematic framework that utilises additional information relating to products from either far-field or related domains to generate such new ideas in the ideation phase. This paper presents our proposed framework FEATURE, which provides just such a systematic framework for product designers in the ideation phase of new product development. FEATURE has three phases. The first phase identifies and recommends to the product designers novel features that can be added to the next version of a reference product. In order to incorporate the customer's voice into the ideation phase, the second phase ascertains the popularity of the proposed features by using social media. The third phase ranks the proposed features based on the designer's decision criteria to select those that should be considered further in the next phases of new product development. We explain the importance of each phase of FEATURE and show the working of its first module in detail.
Mukhopadhyay, P & Qiao, Y 2017, 'Sparse multivariate polynomial interpolation on the basis of Schubert polynomials', computational complexity, vol. 26, no. 4, pp. 881-909.
View/Download from: Publisher's site
View description>>
© 2016, Springer International Publishing. Schubert polynomials were discovered by A. Lascoux and M. Schützenberger in the study of cohomology rings of flag manifolds in the 1980s. These polynomials generalize Schur polynomials and form a linear basis of multivariate polynomials. In 2003, Lenart and Sottile introduced skew Schubert polynomials, which generalize skew Schur polynomials and expand in the Schubert basis with the generalized Littlewood–Richardson coefficients. In this paper, we initiate the study of these two families of polynomials from the perspective of computational complexity theory. We first observe that skew Schubert polynomials, and therefore Schubert polynomials, are in #P (when evaluated on nonnegative integral inputs) and VNP. Our main result is a deterministic algorithm that computes the expansion of a polynomial f of degree d in Z[x1, …, xn] in the basis of Schubert polynomials, assuming an oracle computing Schubert polynomials. This algorithm runs in time polynomial in n, d, and the bit size of the expansion. This generalizes, and derandomizes, the sparse interpolation algorithm of symmetric polynomials in the Schur basis by Barvinok and Fomin (Adv Appl Math 18(3):271–285, 1997). In fact, our interpolation algorithm is general enough to accommodate any linear basis satisfying certain natural properties. Applications of the above results include a new algorithm that computes the generalized Littlewood–Richardson coefficients.
Musial, K, Bródka, P & De Meo, P 2017, 'Analysis and Applications of Complex Social Networks', Complexity, vol. 2017, pp. 1-2.
View/Download from: Publisher's site
Nemoto, K, Devitt, S & Munro, WJ 2017, 'Noise management to achieve superiority in quantum information systems', Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 375, no. 2099, pp. 20160236-20160236.
View/Download from: Publisher's site
View description>>
Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue ‘Quantum technology for the 21st century’.
Oberst, S, Bann, G, Lai, JCS & Evans, TA 2017, 'Cryptic termites avoid predatory ants by eavesdropping on vibrational cues from their footsteps', Ecology Letters, vol. 20, no. 2, pp. 212-221.
View/Download from: Publisher's site
View description>>
Eavesdropping has evolved in many predator–prey relationships. Communication signals of social species may be particularly vulnerable to eavesdropping, such as pheromones produced by ants, which are predators of termites. Termites communicate mostly by way of substrate‐borne vibrations, which suggests they may be able to eavesdrop using two possible mechanisms: ant chemicals or ant vibrations. We observed termites foraging within millimetres of ants in the field, suggesting the evolution of specialised detection behaviours. We found that the termite Coptotermes acinaciformis detected their major predator, the ant Iridomyrmex purpureus, through thin wood using only vibrational cues from walking, and not chemical signals. A comparison of 16 termite and ant species found that the ants' walking signals were up to 100 times higher than those of termites. Eavesdropping on passive walking signals explains the predator detection and foraging behaviours in this ancient relationship, which may be applicable to many other predator–prey relationships.
Oberst, S, Marburg, S & Hoffmann, N 2017, 'Determining periodic orbits via nonlinear filtering and recurrence spectra in the presence of noise', Procedia Engineering, vol. 199, pp. 772-777.
View/Download from: Publisher's site
Peris-Ortiz, M, Gómez, JA, Merigó, JM & Rueda-Armengot, C 2017, 'Preface', Innovation, Technology and Knowledge Management, pp. ix-xiii.
Pietroni, N, Tarini, M, Vaxman, A, Panozzo, D & Cignoni, P 2017, 'Position-based tensegrity design.', ACM Trans. Graph., vol. 36, pp. 172:1-172:1.
View/Download from: Publisher's site
Pileggi, SF & Hunter, J 2017, 'An ontological approach to dynamic fine-grained Urban Indicators', Procedia Computer Science, vol. 108, pp. 2059-2068.
View/Download from: Publisher's site
View description>>
© 2017 The Authors. Published by Elsevier B.V. Urban indicators provide a unique multi-disciplinary data framework which social scientists, planners and policy makers employ to understand and analyze the complex dynamics of metropolitan regions. Indicators provide an independent, quantitative measure or benchmark of an aspect of an urban environment, by combining different metrics for a given region. While the current approach to urban indicators involves the systematic accurate collection of the raw data required to produce reliable indicators and the standardization of well-known commonly accepted or widely adopted indicators, the next generation of indicators is expected to support a more dynamic, customizable, fine-grained approach to indicators, via a context of interoperability and linked open data. Within this paper, we address these emerging requirements through an ontological approach aimed at (i) establishing interoperability among heterogeneous data sets, (ii) expressing the high-level semantics of the indicators, (iii) supporting indicator adaptability and dynamic composition for specific applications and (iv) representing properly the uncertainties of the resulting ecosystem.
Prasad, M, Lin, C-T, Li, D-L, Hong, C-T, Ding, W-P & Chang, J-Y 2017, 'Soft-Boosted Self-Constructing Neural Fuzzy Inference Network', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 3, pp. 584-588.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. This correspondence paper proposes an improved version of the self-constructing neural fuzzy inference network (SONFIN), called the soft-boosted SONFIN (SB-SONFIN). The design softly boosts the learning process of the SONFIN in order to decrease the error rate and enhance the learning speed. The SB-SONFIN boosts the learning power of the SONFIN by taking into account the number of fuzzy rules and the initial weights, two important parameters of the SONFIN. The SB-SONFIN advances the learning process by: 1) initializing the weights with the width of the fuzzy sets rather than just with random values and 2) improving the parameter learning rates with the number of learned fuzzy rules. The effectiveness of the proposed soft boosting scheme is validated on several real-world and benchmark datasets. The experimental results show that the SB-SONFIN outperforms other known methods on various datasets.
Prasad, M, Liu, Y-T, Li, D-L, Lin, C-T, Shah, RR & Kaiwartya, OP 2017, 'A New Mechanism for Data Visualization with Tsk-Type Preprocessed Collaborative Fuzzy Rule Based System', Journal of Artificial Intelligence and Soft Computing Research, vol. 7, no. 1, pp. 33-46.
View/Download from: Publisher's site
View description>>
A novel data knowledge representation combining the structure learning ability of preprocessed collaborative fuzzy clustering and the fuzzy expert knowledge of a Takagi-Sugeno-Kang type model is presented in this paper. The proposed method divides a huge dataset into two or more subsets. The subsets interact with each other through a collaborative mechanism in order to find similar properties within each other. The proposed method is useful in dealing with big data issues, since it divides a huge dataset into subsets and finds common features among them. The salient feature of the proposed method is that it uses a small subset of the data and some common features instead of the entire dataset and all the features. Before the interactions among subsets, the proposed method applies a mapping technique to granules of data and the centroids of clusters. The proposed method uses information from only about half of the data patterns for the training process, yet provides an accurate and robust model, whereas other existing methods use the entire information of the data patterns. Simulation results show that the proposed method performs better than existing methods on some benchmark problems.
Pratama, M, Lu, J, Lughofer, E, Zhang, G & Er, MJ 2017, 'An Incremental Learning of Concept Drifts Using Evolving Type-2 Recurrent Fuzzy Neural Networks', IEEE Transactions on Fuzzy Systems, vol. 25, no. 5, pp. 1175-1192.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The age of online data stream and dynamic environments results in the increasing demand of advanced machine learning techniques to deal with concept drifts in large data streams. Evolving fuzzy systems (EFS) are one of recent initiatives from the fuzzy system community to resolve the issue. Existing EFSs are not robust against data uncertainty, temporal system dynamics, and the absence of system order, because a vast majority of EFSs are designed in the type-1 feedforward network architecture. This paper aims to solve the issue of data uncertainty, temporal behavior, and the absence of system order by developing a novel evolving recurrent fuzzy neural network, called evolving type-2 recurrent fuzzy neural network (eT2RFNN). eT2RFNN is constructed in a new recurrent network architecture, featuring double recurrent layers. The new recurrent network architecture evolves a generalized interval type-2 fuzzy rule, where the rule premise is built upon the interval type-2 multivariate Gaussian function, whereas the rule consequent is crafted by the nonlinear wavelet function. The eT2RFNN adopts a holistic concept of evolving systems, where the fuzzy rule can be automatically generated, pruned, merged, and recalled in the single-pass learning mode. eT2RFNN is capable of coping with the problem of high dimensionality because it is equipped with online feature selection technology. The efficacy of eT2RFNN was experimentally validated using artificial and real-world data streams and compared with prominent learning algorithms. eT2RFNN produced more reliable predictive accuracy, while retaining lower complexity than its counterparts.
Qi, M, Sun, T, Zhang, H, Zhu, M, Yang, W, Shao, D & Voinov, A 2017, 'Maintenance of salt barrens inhibited landward invasion of Spartina species in salt marshes', Ecosphere, vol. 8, no. 10, pp. e01982-e01982.
View/Download from: Publisher's site
View description>>
Spartina spp. (cordgrasses) often dominate intertidal mudflats and/or low marshes. The landward invasion of these species was typically thought to be restrained by low tidal inundation frequencies and interspecific competition. We noticed that the reported soil salinity levels in some salt marshes were much higher than those at the mean higher high water level, which might inhibit the landward invasion of cordgrass. To test this possibility, we transplanted Spartina alterniflora across an elevational gradient in an invaded salt marsh in the Yellow River Delta National Nature Reserve, where a salt accumulation zone (i.e., salt barren) was previously observed. We found that S. alterniflora was significantly inhibited by the salt barren in high marsh regions, although it performed better at upland and low marsh regions. A common garden experiment further elucidated that S. alterniflora performed best at low salinity levels and that this species is less sensitive to inundation frequency. Our results indicated that the salt barren inhibited the landward invasion of S. alterniflora in salt marshes and provided a natural barrier to protect the upland from invasion. Though field observations suggest that S. alterniflora could propagate along tidal channels, which provide low‐salinity corridors for the dispersal of propagules, natural salt barrens can inhibit the landward invasion of Spartina in salt marshes. However, artificial disturbances that break the salt barren band in salt marshes (e.g., artificial ditches) might accelerate the invasion of Spartina spp. This new finding should alert salt marsh managers to pay attention to artificial ditches and/or other human activities when attempting to control the invasion.
Qiao, M, Liu, L, Yu, J, Xu, C & Tao, D 2017, 'Diversified dictionaries for multi-instance learning', Pattern Recognition, vol. 64, pp. 407-416.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd Multiple-instance learning (MIL) has been a popular topic in the study of pattern recognition for years due to its usefulness for such tasks as drug activity prediction and image/text classification. In a typical MIL setting, a bag contains a bag-level label and more than one instance/pattern. How to bridge instance-level representations to bag-level labels is a key step to achieve satisfactory classification accuracy results. In this paper, we present a supervised learning method, diversified dictionaries MIL, to address this problem. Our approach, on the one hand, exploits bag-level label information for training class-specific dictionaries. On the other hand, it introduces a diversity regularizer into the class-specific dictionaries to avoid ambiguity between them. To the best of our knowledge, this is the first time that the diversity prior is introduced to solve the MIL problems. Experiments conducted on several benchmark (drug activity and image/text annotation) datasets show that the proposed method compares favorably to state-of-the-art methods.
Qiao, M, Xu, RYD, Bian, W & Tao, D 2017, 'Fast Sampling for Time-Varying Determinantal Point Processes', ACM Transactions on Knowledge Discovery from Data, vol. 11, no. 1, pp. 1-24.
View/Download from: Publisher's site
View description>>
Determinantal point processes (DPPs) are stochastic models that assign each subset of a base dataset a probability proportional to the subset's degree of diversity. It has been shown that DPPs are particularly appropriate for data subset selection and summarization (e.g., news display, video summarization), as DPPs prefer diverse subsets, a property that conventional models do not offer. However, DPP inference algorithms have a polynomial time complexity that makes it difficult to handle large and time-varying datasets, especially when real-time processing is required. To address this limitation, we developed a fast sampling algorithm for DPPs that takes advantage of the nature of some time-varying data (e.g., news corpora updating, communication networks evolving), where the changes in the data between time stamps are relatively small. The proposed algorithm is built upon the simplification of marginal density functions over successive time stamps and the sequential Monte Carlo (SMC) sampling technique. Evaluations on both a real-world news dataset and the Enron Corpus confirm the efficiency of the proposed algorithm.
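The diversity preference described above follows from the defining DPP property P(S) ∝ det(L_S), where L_S is the similarity kernel restricted to the subset S. A toy sketch (with a hypothetical kernel, unrelated to the paper's fast sampling algorithm) shows why near-duplicate items make a subset unlikely:

```python
# Toy illustration of why DPPs prefer diverse subsets: P(S) ∝ det(L_S).
# Similar items make the restricted kernel nearly singular, so its
# determinant (and hence the subset's probability) approaches zero.
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def restrict(L, S):
    """Restrict kernel L to the rows/columns indexed by S."""
    return [[L[i][j] for j in S] for i in S]

# Hypothetical kernel: items 0 and 1 are near-duplicates; item 2 is distinct.
L = [[1.0, 0.9, 0.1],
     [0.9, 1.0, 0.1],
     [0.1, 0.1, 1.0]]

print(det2(restrict(L, [0, 1])))  # similar pair, det ~0.19: unlikely subset
print(det2(restrict(L, [0, 2])))  # diverse pair, det ~0.99: likely subset
```

The diverse pair's determinant is roughly five times larger, so under a DPP it is sampled far more often; this is the behavior the fast sampling algorithm must preserve across time stamps.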
Ramezani, F, Lu, J, Taheri, J & Zomaya, AY 2017, 'A Multi-Objective Load Balancing System for Cloud Environments', The Computer Journal, vol. 60, no. 9, pp. 1316-1337.
View/Download from: Publisher's site
View description>>
© 2017 The British Computer Society. All rights reserved. Virtual machine (VM) live migration has been applied to system load balancing in cloud environments for the purpose of minimizing VM downtime and maximizing resource utilization. However, the migration process is both time- and cost-consuming as it requires the transfer of large size files or memory pages and consumes a huge amount of power and memory for the origin and destination physical machine (PM), especially for storage VM migration. This process also leads to VM downtime or slowdown. To deal with these shortcomings, we develop a Multi-objective Load Balancing (MO-LB) system that avoids VM migration and achieves system load balancing by transferring extra workload from a set of VMs allocated on an overloaded PM to other compatible VMs in the cluster with greater capacity. To reduce the time factor even more and optimize load balancing over a cloud cluster, MO-LB contains a CPU Usage Prediction (CUP) sub-system. The CUP not only predicts the performance of the VMs but also determines a set of appropriate VMs with the potential to execute the extra workload imposed on the VMs of an overloaded PM. We also design a Multi-Objective Task Scheduling optimization model using Particle Swarm Optimization to migrate the extra workload to the compatible VMs. The proposed method is evaluated using a VMware-vSphere-based private cloud in contrast to the VM migration technique applied by vMotion. The evaluation results show that the MO-LB system dramatically increases VM performance while reducing service response time, memory usage, job makespan, power consumption and the time taken for the load balancing process.
Romeo, M, Yepes-Baldó, M, Boria-Reverter, S & Merigó, JM 2017, 'Twenty-five years of research on work and organizational psychology: A bibliometric perspective', Anuario de Psicología, vol. 47, no. 1, pp. 32-44.
View/Download from: Publisher's site
View description>>
© 2017 Universitat de Barcelona The research aims to analyze the scientific productivity in the field of work/organizational psychology (WOP) in the last 25 years. We focus our analysis on the most influential journals and articles, generally and for 5-year periods, as well as structures of co-citation among the highest quality journals based on their h-index. Firstly, we found that a high percentage of papers published each year receive between 5 and 10 citations. Secondly, we observe an exponential increase in the number of papers published, citations, and h-index. Additionally, the number of self-citations increases significantly in the last 5 years. In this sense, we consider that the most recent papers need more time to increase their level of citation and, subsequently, to correct the bias in self-citation. This research shows the status of research in the field of work/organizational psychology, analyzing the scientific journals and papers published in the Web of Science.
Saberi, M, Khadeer Hussain, O & Chang, E 2017, 'Past, present and future of contact centers: a literature review', Business Process Management Journal, vol. 23, no. 3, pp. 574-597.
View/Download from: Publisher's site
View description>>
Purpose: Contact centers (CCs) are one of the main touch points of customers in an organization. They form one of the inputs to customer relationship management (CRM), enabling an organization to efficiently resolve customer queries. CCs have an important impact on customer satisfaction and are a strategic asset for CRM systems. The purpose of this paper is to review the current literature on CCs and identify their shortcomings to be addressed in the current digital age. Design/methodology/approach: The current literature on CCs can be classified into the analytical and the managerial aspects of CCs. In the former, data mining, text mining, and voice recognition techniques are discussed, and in the latter, staff training, CC performance, and outsourced CCs are discussed. Findings: With the growth of information and communication technologies, the information that CCs must handle, both in terms of type and volume, has changed. To deal with such changes, CCs need to evolve in terms of their operation and public relations. The authors present a state-of-the-art review of the challenges, identifying the gaps that must be filled to realize the next generation of CCs. The lack of an interactive CC and the lack of data integrity for CCs are highlighted as important issues that CCs need to address. Originality/value: As far as the authors know, this is the first paper that reviews the CC literature by providing a comprehensive survey, critical evaluation, and directions for future research.
Saxena, A, Prasad, M, Gupta, A, Bharill, N, Patel, OP, Tiwari, A, Er, MJ, Ding, W & Lin, C-T 2017, 'A review of clustering techniques and developments', Neurocomputing, vol. 267, pp. 664-681.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. This paper presents a comprehensive study on clustering: existing methods and developments made at various times. Clustering is defined as an unsupervised learning task where objects are grouped on the basis of some similarity inherent among them. There are different methods for clustering objects, such as hierarchical, partitional, grid-based, density-based and model-based methods. The approaches used in these methods are discussed with their respective states of the art and applicability. The measures of similarity, as well as the evaluation criteria, which are the central components of clustering, are also presented in the paper. The applications of clustering in fields such as image segmentation, object and character recognition and data mining are highlighted.
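As a minimal illustration of the partitional family surveyed in the paper, here is a toy one-dimensional k-means (Lloyd's algorithm) sketch; the data points, initial centroids, and function name are hypothetical and not drawn from the review itself.

```python
# Toy 1-D k-means (Lloyd's algorithm), illustrating partitional clustering:
# alternate between assigning each point to its nearest centroid and
# updating each centroid to the mean of the points assigned to it.
def kmeans_1d(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(kmeans_1d([1.0, 1.5, 0.5, 9.0, 9.5, 8.5], [0.0, 10.0]))  # -> [1.0, 9.0]
```

The similarity measure here is simple absolute distance; as the review emphasizes, the choice of similarity measure and evaluation criteria is central to how any clustering method behaves.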
Seneviratne, S, Hu, Y, Nguyen, T, Lan, G, Khalifa, S, Thilakarathna, K, Hassan, M & Seneviratne, A 2017, 'A Survey of Wearable Devices and Challenges', IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2573-2620.
View/Download from: Publisher's site
View description>>
© 1998-2012 IEEE. As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices: wearable mobile devices, or simply wearables as they are often called. Wearables come in many different forms and flavors, targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing popular wearables, followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in the wearable market and research.
Shahbazi, B, Chehreh Chelgani, S & Matin, SS 2017, 'Prediction of froth flotation responses based on various conditioning parameters by Random Forest method', Colloids and Surfaces A: Physicochemical and Engineering Aspects, vol. 529, pp. 936-941.
View/Download from: Publisher's site
Sharma, S, Puthal, D, Tazeen, S, Prasad, M & Zomaya, AY 2017, 'MSGR: A Mode-Switched Grid-Based Sustainable Routing Protocol for Wireless Sensor Networks', IEEE Access, vol. 5, pp. 19864-19875.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. A wireless sensor network (WSN) consists of an enormous number of sensor nodes. These sensor nodes sense changes in physical parameters within their sensing range and forward the information to the sink nodes or the base station. Since sensor nodes are driven by limited-power batteries, prolonging the network lifetime is difficult and very expensive, especially in hostile locations. Therefore, routing protocols for WSNs must strategically distribute the dissipation of energy so as to increase the overall lifetime of the system. Current research trends in areas such as the Internet of Things and fog computing use sensors as the source of data. Therefore, energy-efficient data routing in WSNs is still a challenging task for real-time applications. Hierarchical grid-based routing is an energy-efficient method for routing data packets. This method divides the sensing area into grids and is advantageous in wireless sensor networks for enhancing network lifetime. The network is partitioned into virtual equal-sized grids. The proposed mode-switched grid-based routing protocol for WSNs selects one node per grid as the grid head. The routing path to the sink is established using grid heads. Grid heads are switched between active and sleep modes alternately, so not all grid heads take part in the routing process at the same time. This saves energy in the grid heads and improves the network lifetime. The proposed method builds a routing path through each active grid head leading to the sink. To handle mobile sink movement, the routing path changes only for the grid head nodes nearest the grid in which the mobile sink is currently positioned. Data packets generated at any source node are routed directly through the data-disseminating grid head nodes on the routing path to the sink.
Sun, G, Cui, T, Beydoun, G, Chen, S, Dong, F, Xu, D & Shen, J 2017, 'Towards Massive Data and Sparse Data in Adaptive Micro Open Educational Resource Recommendation: A Study on Semantic Knowledge Base Construction and Cold Start Problem', Sustainability, vol. 9, no. 6, pp. 898-898.
View/Download from: Publisher's site
View description>>
© 2017 by the authors. Micro learning through open educational resources (OERs) is becoming increasingly popular. However, adaptive micro learning support remains inadequate on current OER platforms. To address this, our smart system, Micro Learning as a Service (MLaaS), aims to deliver personalized OERs with micro learning to satisfy learners' real-time needs. In this paper, we focus on constructing a knowledge base to support the decision-making process of MLaaS. MLaaS is built using a top-down approach. A conceptual graph-based ontology construction is first developed. An educational data mining and learning analytics strategy is then proposed for the data level. Learning resource adaptation still requires learners' historical information. To compensate for the initial absence of this information (the 'cold start' problem), we set up a predictive ontology-based mechanism. Once the first resource is delivered at the beginning of a learner's learning journey, the micro OER recommendation is further optimized using a tailored heuristic.
Tian, F, Liu, B, Sun, X, Zhang, X, Cao, G & Gui, L 2017, 'Movement-Based Incentive for Crowdsourcing', IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp. 7223-7233.
View/Download from: Publisher's site
Tsai, Z-R, Chang, Y-Z, Zhang, H-W & Lin, C-T 2017, 'Relax the chaos-model-based human behavior by electrical stimulation therapy design', Computers in Human Behavior, vol. 67, pp. 151-160.
View/Download from: Publisher's site
View description>>
© 2016 Elsevier Ltd The brain's electrical activity is chaotic and unpredictable, yet has a hidden order that is attracted to a certain region. There are numerous fractal strange attractors in the brain that change as thinking processes vary. These thinking processes in turn change human behavior, especially in schizophrenia or internet addiction. The proposed chaos modelling and control theory may offer useful and relevant information for the design of electrical stimulation therapy that changes these thinking processes by stimulating the brain's electrical activity. Experimental results on body relaxation from many electrotherapy clinics suggest that such therapy can help patients with mental disorders relax the chaotic thinking in their minds and thereby replace the chaotic behaviors arising from the brain's electrical activity. This paper attempts to explain the above claim from the perspective of electrotherapy and control theory, suggesting a control signal for electrotherapy based on an assumed chaos model of the patient, with the control signals designed according to multiple stabilization solutions. In the future, the electrical stimulation therapy is to be validated in the Raphael Humanistic Clinic or other electrotherapy clinics.
Valenzuela Fernández, L, Merigó, JM & Nicolas, C 2017, 'Universidades influyentes en investigación sobre orientación al mercado. Una visión general entre 1990 y 2014', Estudios Gerenciales, vol. 33, no. 144, pp. 221-227.
View/Download from: Publisher's site
View description>>
The aim of this study is to identify the most productive and influential universities in the scientific community on the topic of market orientation. This is done mainly through bibliometric indicators, such as the h-index, and the ratio of total citations to total articles for the period 1990-2014, based on information found in the Web of Science. Among the findings, the scientific community's interest in this topic stands out, reflected in the considerable increase in contributions generated over the last 25 years. In addition, a ranking of the 30 most influential universities is established, together with a ranking relating the universities and journals with the greatest influence on market orientation topics.
Valenzuela, LM, Merigó, JM, Johnston, WJ, Nicolas, C & Jaramillo, JF 2017, 'Thirty years of the Journal of Business & Industrial Marketing: a bibliometric analysis', Journal of Business & Industrial Marketing, vol. 32, no. 1, pp. 1-17.
View/Download from: Publisher's site
View description>>
Purpose: The aim of this study is to reveal the contribution that the Journal of Business & Industrial Marketing has made to scientific research and its most influential thematic work in B-to-B from its beginning in 1986 until 2015, in commemoration of its 30th anniversary. Design/methodology/approach: The paper begins with a qualitative introduction: the emergence of the journal, its origins, editorial line and positioning. It then applies bibliometric methodologies to develop a quantitative analysis: the distribution of annual publications, the most cited papers, the most frequently used keywords, the journal's influence on the field and its authors, and the universities and countries with the most publications. Findings: The predominant role of the USA at all levels is highlighted, as is the presence (given their size and population) of the countries of Northern Europe. The evolution of the number of publications shows a steady increase, demonstrating growing and sustained interest in these types of articles, with certain periods of retreat (often coinciding with economic crises). Research limitations/implications: The Scopus database gives one unit to each author, university or country involved in a paper, without distinguishing whether one or more authors took part in the study; this may introduce some deviations in the analysis. However, the study considers some figures with fractional counting to partially address these limitations.
Walsh, L, Bluff, A & Johnston, A 2017, 'Water, image, gesture and sound: composing and performing an interactive audiovisual work', Digital Creativity, vol. 28, no. 3, pp. 177-195.
View/Download from: Publisher's site
View description>>
© 2017 Informa UK Limited, trading as Taylor & Francis Group. Performing and composing for an interactive audiovisual system presents many challenges to the performer. Working with visual, sonic and gestural components requires new skills and new ways of thinking about performance. However, there are few studies that focus on performer experience with interactive systems. We present the work Blue Space for oboe and interactive audiovisual system, highlighting the evolving process of the collaborative development of the work. We consider how musical and technical demands interact in this process, and outline the challenges of performing with interactive systems. Using the development of Blue Space as a self-reflective case study, we examine the role of gestures in interactive audiovisual works and identify new modes of performance.
Wang, H, Zhang, P, Zhu, X, Tsang, IW-H, Chen, L, Zhang, C & Wu, X 2017, 'Incremental Subgraph Feature Selection for Graph Classification', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 1, pp. 128-142.
View/Download from: Publisher's site
View description>>
Graph classification is an important tool for analyzing data with structure dependency, where subgraphs are often used as features for learning. In reality, the dimension of the subgraphs crucially depends on the threshold setting of the frequency support parameter, and the number may become extremely large. As a result, subgraphs may be incrementally discovered to form a feature stream, requiring the underlying graph classifier to effectively discover representative subgraph features from the subgraph feature stream. In this paper, we propose a primal-dual incremental subgraph feature selection algorithm (ISF) based on a max-margin graph classifier. The ISF algorithm constructs a sequence of solutions that are both primal and dual feasible. Each primal-dual pair shrinks the dual gap and renders a better solution for the optimal subgraph feature set. To avoid the bias of the ISF algorithm toward short-pattern subgraph features, we present a new incremental subgraph join feature selection algorithm (ISJF) that forces graph classifiers to join short-pattern subgraphs and generate long-pattern subgraph features. We evaluate the performance of the proposed models on both synthetic networks and real-world social network data sets. Experimental results demonstrate the effectiveness of the proposed methods.
Wang, J, Merigó, JM & Jin, L 2017, 'S-H OWA Operators with Moment Measure', International Journal of Intelligent Systems, vol. 32, no. 1, pp. 51-66.
View/Download from: Publisher's site
View description>>
© 2016 Wiley Periodicals, Inc. Step-like or Hurwicz-like ordered weighted averaging (OWA) (S-H OWA) operators connect two fundamental OWA operators, step OWA operators and Hurwicz OWA operators. S-H OWA operators also generalize them and some other well-known OWA operators such as median and centered OWA operators. Generally, there are two types of determination methods for S-H OWA operators: one is motivated by some existing mathematical results; the other proceeds by a set of “nonstrict” definitions, often via some intermediate elements. For the second type, in this study we define two sets of strict definitions for the Hurwicz/step degree, which are more effective and necessary for theoretical studies and practical usage. Both sets of definitions are useful in different situations. In addition, they are based on the same concept, the moment of OWA operators, proposed in this study, and therefore they become identical in limit forms. However, the Hurwicz/step degree (HD/SD) puts more emphasis on its numerical measure and physical meaning, whereas the relative Hurwicz/step degree (rHD/rSD), while still numerically accurate, is sometimes more intuitively reasonable and has larger potential in further studies and practical applications.
Wang, J, Zhang, X, Guo, Z & Lu, H 2017, 'Developing an early-warning system for air quality prediction and assessment of cities in China', Expert Systems with Applications, vol. 84, pp. 102-116.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Ltd Air quality has received continuous attention from both environmental managers and citizens. Accordingly, early-warning systems for air pollution are very useful tools to avoid negative health effects and develop effective prevention programs. However, developing robust early-warning systems is very challenging, as well as necessary. This paper develops a reliable and effective early-warning system that consists of air quality prediction and assessment modules. In the prediction module, a hybrid forecasting method is developed for predicting pollutant concentrations that effectively estimates future air quality conditions. In developing this proposed model, we suggest the use of a back propagation neural network algorithm, combined with a probabilistic parameter model and data preprocessing techniques, to address the uncertainties involved in future air quality prediction. Meanwhile, a pre-analysis is implemented, primarily by using optimized distribution functions to examine and analyze statistical characteristics and emission behaviors of air pollutants. The second method, which is developed as part of the second module, is based on fuzzy set theory and the Analytic Hierarchy Process, and it performs air quality assessments to provide a clear and intelligible description of air quality conditions. Using data from the Ministry of Environmental Protection of China and six air quality classification levels (good, moderate, lightly polluted, moderately polluted, heavily polluted and severely polluted), two cities in China, Chengdu and Hangzhou, are used as illustrative examples to verify the effectiveness of the developed early-warning system. The results demonstrate that the proposed methods are effective and reliable for use by environmental supervisors in air pollution monitoring and management.
Wang, W, Yin, H, Chen, L, Sun, Y, Sadiq, S & Zhou, X 2017, 'ST-SAGE', ACM Transactions on Intelligent Systems and Technology, vol. 8, no. 3, pp. 1-25.
View/Download from: Publisher's site
View description>>
With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important mobile application, especially when users travel away from home. However, this type of recommendation is very challenging compared to traditional recommender systems. A user may visit only a limited number of spatial items, leading to a very sparse user-item matrix. This matrix becomes even sparser when the user travels to a distant place, as most of the items visited by a user are usually located within a short distance from the user’s home. Moreover, user interests and behavior patterns may vary dramatically across different time and geographical regions. In light of this, we propose ST-SAGE, a spatial-temporal sparse additive generative model for spatial item recommendation in this article. ST-SAGE considers both personal interests of the users and the preferences of the crowd in the target region at the given time by exploiting both the co-occurrence patterns and content of spatial items. To further alleviate the data-sparsity issue, ST-SAGE exploits the geographical correlation by smoothing the crowd’s preferences over a well-designed spatial index structure called the spatial pyramid. To speed up the training process of ST-SAGE, we implement a parallel version of the model inference algorithm on the GraphLab framework. We conduct extensive experiments; the experimental results clearly demonstrate that ST-SAGE outperforms the state-of-the-art recommender systems in terms of recommendation effectiveness, model training efficiency, and online recommendation efficiency.
Wang, X, Liu, Y, Zhang, G, Xiong, F & Lu, J 2017, 'Diffusion-based recommendation with trust relations on tripartite graphs', Journal of Statistical Mechanics: Theory and Experiment, vol. 2017, no. 8, pp. 083405-083405.
View/Download from: Publisher's site
Wang, X, Liu, Y, Zhang, G, Zhang, Y, Chen, H & Lu, J 2017, 'Mixed Similarity Diffusion for Recommendation on Bipartite Networks', IEEE Access, vol. 5, pp. 21029-21038.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. In recommender systems, collaborative filtering technology is an important method to evaluate user preference through exploiting user feedback data, and has been widely used in industrial areas. Diffusion-based recommendation algorithms inspired by diffusion phenomenon in physical dynamics are a crucial branch of collaborative filtering technology, which use a bipartite network to represent collection behaviors between users and items. However, diffusion-based recommendation algorithms calculate the similarity between users and make recommendations by only considering implicit feedback but neglecting the benefits from explicit feedback data, which would be a significant feature in recommender systems. This paper proposes a mixed similarity diffusion model to integrate both explicit feedback and implicit feedback. First, cosine similarity between users is calculated by explicit feedback, and we integrate it with resource-allocation index calculated by implicit feedback. We further improve the performance of the mixed similarity diffusion model by considering the degrees of users and items at the same time in diffusion processes. Some sophisticated experiments are designed to evaluate our proposed method on three real-world data sets. Experimental results indicate that recommendations given by the mixed similarity diffusion perform better on both the accuracy and the diversity than that of most state-of-the-art algorithms.
Wang, Y, He, Q, Zhang, X, Ye, D & Yang, Y 2017, 'Efficient QoS-Aware Service Recommendation for Multi-Tenant Service-Based Systems in Cloud', IEEE Transactions on Services Computing, vol. 13, no. 6, pp. 1-1.
View/Download from: Publisher's site
Wen, S, Jiang, J, Liu, B, Xiang, Y & Zhou, W 2017, 'Using epidemic betweenness to measure the influence of users in complex networks', Journal of Network and Computer Applications, vol. 78, pp. 288-299.
View/Download from: Publisher's site
Woodside, AG & Sood, S 2017, 'Vignettes in the two-step arrival of the internet of things and its reshaping of marketing management’s service-dominant logic', Journal of Marketing Management, vol. 33, no. 1-2, pp. 98-110.
View/Download from: Publisher's site
View description>>
© 2016 Westburn Publishers Ltd. This commentary offers vignettes on the introduction of the ‘internet of things’ (IoT) and its impacts on revising the service-dominant (S-D) logic paradigm in marketing. Except for smartphones, most consumer households are not yet participating in the IoT revolution, but most radical product-service innovations include a 20+ year low-growth start-up period. Because the benefits really are enormous and the technical advances in smart devices are now rapidly improving, expect the IoT revolution to hit hard in all areas of daily life before 2025, similar to the great impacts occurring now in business-to-business applications. This study proposes substantial revisions to the S-D logic in light of the upcoming take-off stage of adopting radically new IoT innovations.
Wu, D, Lance, BJ, Lawhern, VJ, Gordon, S, Jung, T-P & Lin, C-T 2017, 'EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 11, pp. 2157-2168.
View/Download from: Publisher's site
View description>>
© 2001-2011 IEEE. Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and demonstrated superior performance. In this paper, for the first time, it is applied to BCI regression problems, an important category of BCI applications. More specifically, we propose a new feature extraction approach for electroencephalogram (EEG)-based BCI regression problems: a spatial filter is first used to increase the signal quality of the EEG trials and also to reduce the dimensionality of the covariance matrices, and then Riemannian tangent space features are extracted. We validate the performance of the proposed approach in reaction time estimation from EEG signals measured in a large-scale sustained-attention psychomotor vigilance task, and show that compared with the traditional powerband features, the tangent space features can reduce the root mean square estimation error by 4.30%-8.30%, and increase the estimation correlation coefficient by 6.59%-11.13%.
Wu, D, Lawhern, VJ, Gordon, S, Lance, BJ & Lin, C-T 2017, 'Driver Drowsiness Estimation From EEG Signals Using Online Weighted Adaptation Regularization for Regression (OwARR)', IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp. 1522-1535.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. One big challenge that hinders the transition of brain-computer interfaces (BCIs) from laboratory settings to real-life applications is the availability of high-performance and robust learning algorithms that can effectively handle individual differences, i.e., algorithms that can be applied to a new subject with zero or very little subject-specific calibration data. Transfer learning and domain adaptation have been extensively used for this purpose. However, most previous works focused on classification problems. This paper considers an important regression problem in BCI, namely, online driver drowsiness estimation from EEG signals. By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR. Using a simulated driving dataset with 15 subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller estimation errors than several other approaches. We also provide comprehensive analyses on the robustness of OwARR and OwARR-SDS.
Wu, S-L, Liu, Y-T, Hsieh, T-Y, Lin, Y-Y, Chen, C-Y, Chuang, C-H & Lin, C-T 2017, 'Fuzzy Integral With Particle Swarm Optimization for a Motor-Imagery-Based Brain–Computer Interface', IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 21-28.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. A brain-computer interface (BCI) system using electroencephalography signals provides a convenient means of communication between the human brain and a computer. Motor imagery (MI), in which motor actions are mentally rehearsed without engaging in actual physical execution, has been widely used as a major BCI approach. One robust algorithm that can successfully cope with the individual differences in MI-related rhythmic patterns is to create diverse ensemble classifiers using the subband common spatial pattern (SBCSP) method. To aggregate outputs of ensemble members, this study uses fuzzy integral with particle swarm optimization (PSO), which can regulate subject-specific parameters for the assignment of optimal confidence levels for classifiers. The proposed system combining SBCSP, fuzzy integral, and PSO exhibits robust performance for offline single-trial classification of MI and real-time control of a robotic arm using MI. This paper represents the first attempt to utilize fuzzy fusion technique to attack the individual differences problem of MI applications in real-world noisy environments. The results of this study demonstrate the practical feasibility of implementing the proposed method for real-world applications.
Wu, Y, Liao, L-D, Pan, H-C, He, L, Lin, C-T & Tan, MC 2017, 'Fabrication and interfacial characteristics of surface modified Ag nanoparticle based conductive composites', RSC Advances, vol. 7, no. 47, pp. 29702-29712.
View/Download from: Publisher's site
View description>>
Surface modification of Ag nanoparticles with PAA–PVP complex was conducted and successfully improved the dispersion of Ag nanoparticles in PDMS.
Wu, Z, Lei, L, Li, G, Huang, H, Zheng, C, Chen, E & Xu, G 2017, 'A topic modeling based approach to novel document automatic summarization', Expert Systems with Applications, vol. 84, pp. 12-23.
View/Download from: Publisher's site
Wu, Z, Zhu, H, Li, G, Cui, Z, Huang, H, Li, J, Chen, E & Xu, G 2017, 'An efficient Wikipedia semantic matching approach to text document classification', Information Sciences, vol. 393, pp. 15-28.
View/Download from: Publisher's site
Xu, C, Han, Z, Zhao, G & Yu, S 2017, 'A Sleeping and Offloading Optimization Scheme for Energy-Efficient WLANs', IEEE Communications Letters, vol. 21, no. 4, pp. 877-880.
View/Download from: Publisher's site
View description>>
In this letter, we propose an access point (AP) sleeping and user offloading optimization scheme to improve energy efficiency in densely deployed WLANs. Through real trace analysis, we investigate AP energy efficiency to obtain the sleep-awake threshold, which is used to select sleeping or awake APs according to real-time status information monitored on the controller. Moreover, we formulate the user offloading problem as a reverse auction process to optimize the energy efficiency of the APs involved in offloading. Simulation results demonstrate that, compared to traditional methods, our scheme can achieve up to 20% energy saving while maintaining effective system coverage and throughput.
Xu, C, Jin, W, Zhao, G, Tianfield, H, Yu, S & Qu, Y 2017, 'A Novel Multipath-Transmission Supported Software Defined Wireless Network Architecture', IEEE Access, vol. 5, pp. 2111-2125.
View/Download from: Publisher's site
View description>>
The inflexible management and operation of today's wireless access networks cannot meet increasingly demanding requirements, such as high mobility and throughput, service differentiation, and high-level programmability. In this paper, we put forward a novel multipath-transmission supported software-defined wireless network architecture (MP-SDWN), with the aim of achieving seamless handover, throughput enhancement, and flow-level wireless transmission control as well as programmable interfaces. In particular, this research addresses the following issues: 1) for high mobility and throughput, a multi-connection virtual access point is proposed to enable multiple simultaneous transmission paths over a set of access points for users; and 2) wireless flow transmission rules and programmable interfaces are implemented in the mac80211 subsystem to enable service differentiation and flow-level wireless transmission control. Moreover, the efficiency and flexibility of MP-SDWN are demonstrated in performance evaluations conducted on an 802.11-based testbed, and the experimental results show that, compared to regular WiFi, the proposed MP-SDWN architecture achieves seamless handover and multifold throughput improvement, and supports flow-level wireless transmission control for different applications.
Xu, X, Liu, Z, Wang, Z, Sheng, QZ, Yu, J & Wang, X 2017, 'S-ABC: A paradigm of service domain-oriented artificial bee colony algorithms for service selection and composition', Future Generation Computer Systems, vol. 68, pp. 304-319.
View/Download from: Publisher's site
Xuan, J, Lu, J, Zhang, G & Xu, RYD 2017, 'Cooperative Hierarchical Dirichlet Processes: Superposition vs. Maximization', Artificial Intelligence, vol. 271, pp. 43-73.
View/Download from: Publisher's site
View description>>
The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as text mining (author-paper-word) and multi-label classification (label-instance-feature). Renowned Bayesian approaches for cooperative hierarchical structure modeling are mostly based on topic models. However, these approaches suffer from a serious issue in that the number of hidden topics/factors needs to be fixed in advance and an inappropriate number may lead to overfitting or underfitting. One elegant way to resolve this issue is Bayesian nonparametric learning, but existing work in this area still cannot be applied to cooperative hierarchical structure modeling. In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP) to fill this gap. Each node in a cooperative hierarchical structure is assigned a Dirichlet process to model its weights on the infinite hidden factors/topics. Together with measure inheritance from the hierarchical Dirichlet process, two kinds of measure cooperation, i.e., superposition and maximization, are defined to capture the many-to-many relationships in the cooperative hierarchical structure. Furthermore, two constructive representations for CHDP, i.e., stick-breaking and international restaurant process, are designed to facilitate the model inference. Experiments on synthetic and real-world data with cooperative hierarchical structures demonstrate the properties and the ability of CHDP for cooperative hierarchical structure modeling and its potential for practical application scenarios.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2017, 'A Bayesian nonparametric model for multi-label learning', Machine Learning, vol. 106, no. 11, pp. 1787-1815.
View/Download from: Publisher's site
View description>>
© 2017, The Author(s). Multi-label learning has become a significant learning paradigm in the past few years due to its broad application scenarios and the ever-increasing number of techniques developed by researchers in this area. Among existing state-of-the-art works, generative statistical models are characterized by their good generalization ability and robustness on a large number of labels through learning a low-dimensional label embedding. However, one issue of this branch of models is that the number of dimensions needs to be fixed in advance, which is difficult and inappropriate in many real-world settings. In this paper, we propose a Bayesian nonparametric model to resolve this issue. More specifically, we extend a Gamma-negative binomial process to three levels in order to capture the label-instance-feature structure. Furthermore, a mixing strategy for Gamma processes is designed to account for the multiple labels of an instance. The mixed process also leads to a difficulty in model inference, so an efficient Gibbs sampling inference algorithm is then developed to resolve this difficulty. Experiments on several real-world datasets show the performance of the proposed model on multi-label learning tasks, compared with three state-of-the-art models from the literature.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2017, 'Bayesian Nonparametric Relational Topic Model through Dependent Gamma Processes', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 7, pp. 1357-1369.
View/Download from: Publisher's site
View description>>
© 2016 IEEE. Traditional relational topic models provide a successful way to discover the hidden topics from a document network. Many theoretical and practical tasks, such as dimensional reduction, document clustering, and link prediction, could benefit from this revealed knowledge. However, existing relational topic models are based on an assumption that the number of hidden topics is known a priori, which is impractical in many real-world applications. Therefore, in order to relax this assumption, we propose a nonparametric relational topic model using stochastic processes instead of fixed-dimensional probability distributions in this paper. Specifically, each document is assigned a Gamma process, which represents the topic interest of this document. Although this method provides an elegant solution, it brings additional challenges when mathematically modeling the inherent network structure of typical document network, i.e., two spatially closer documents tend to have more similar topics. Furthermore, we require that the topics are shared by all the documents. In order to resolve these challenges, we use a subsampling strategy to assign each document a different Gamma process from the global Gamma process, and the subsampling probabilities of documents are assigned with a Markov Random Field constraint that inherits the document network structure. Through the designed posterior inference algorithm, we can discover the hidden topics and its number simultaneously. Experimental results on both synthetic and real-world network datasets demonstrate the capabilities of learning the hidden topics and, more importantly, the number of topics.
Xuan, J, Luo, X, Lu, J & Zhang, G 2017, 'Explicitly and implicitly exploiting the hierarchical structure for mining website interests on news events', Information Sciences, vol. 420, pp. 263-277.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier Inc. After a news event, many different websites publish coverage of that event, each expressing their own unique commentary, perspectives, and viewpoints. Websites form around a specific set of interests to cater to different audiences, and discovering these interests can help audiences, especially people and organizations that are interested in news, select the most appropriate websites to use as their sources of information. This paper presents three methods for formally defining and mining a website's interests, each of which is explicitly or implicitly based on a hierarchical structure: website-webpage-keyword. The first, and most straightforward, method explicitly uses keyword-layer network communities and the mapping relations between websites and keywords. The second method expands upon the first method with an iterative algorithm that combines both the mapping relations and the network relations from the website-webpage-keyword structure to further refine the keyword-layer network communities. In the third method, a website topic model implicitly captures the mapping relations among the websites, webpages, and keywords. The performance of the three proposed methods in website interest mining is compared using a bespoke evaluation metric. The experimental results show that the iterative procedure designed in the second method is able to improve website interest mining performance, and the website topic model in the third method achieves the best performance among the three methods.
Yang, C, Zhu, D, Wang, X, Zhang, Y, Zhang, G & Lu, J 2017, 'Requirement-oriented core technological components’ identification based on SAO analysis', Scientometrics, vol. 112, no. 3, pp. 1229-1248.
View/Download from: Publisher's site
View description>>
© 2017, Akadémiai Kiadó, Budapest, Hungary. Technologies play an important role in the survival and development of enterprises. Understanding and monitoring the core technological components (e.g., technology process, operation method, function) of a technology is an important issue for researchers to develop R&D policy and manage product competitiveness. However, it is difficult to identify core technological components from a mass of terms, and we may experience some difficulties with describing complete technical details and understanding the terms-based results. This paper proposes a Subject-Action-Object (SAO)-based method, in which (1) a syntax-based approach is constructed to extract the SAO structures describing the function, relationship and operation in specified topics; (2) a systematic method is built to extract and screen technological components from SAOs; and (3) we propose a “relevance indicator” to calculate the relevance of the technological components to requirements, and finally identify core technological components based on this indicator. Based on the considerations for requirements and novelty, the core technological components identified have great market potential and can be useful in monitoring and forecasting new technologies. An empirical study of graphene is performed to demonstrate the proposed method. The resulting knowledge may hold interest for R&D management and corporate technology strategies in practice.
Yang, H, Jiang, Z & Lu, H 2017, 'A Hybrid Wind Speed Forecasting System Based on a ‘Decomposition and Ensemble’ Strategy and Fuzzy Time Series', Energies, vol. 10, no. 9, pp. 1422-1422.
View/Download from: Publisher's site
View description>>
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. Accurate and stable wind speed forecasting is of critical importance in the wind power industry and has measurable influence on power-system management and the stability of market economics. However, most traditional wind speed forecasting models require a large amount of historical data and face restrictions due to assumptions, such as normality postulates. Additionally, any data volatility leads to increased forecasting instability. Therefore, in this paper, a hybrid forecasting system is proposed that combines the 'decomposition and ensemble' strategy with a fuzzy time series forecasting algorithm and comprises two modules: data pre-processing and forecasting. Moreover, a statistical model, an artificial neural network, and a Support Vector Regression model are employed for comparison with the proposed hybrid system, which proves very effective in forecasting wind speed data affected by noise and instability. The results of these comparisons demonstrate that the hybrid forecasting system can improve forecasting accuracy and stability significantly, and that supervised discretization methods outperform the unsupervised methods for fuzzy time series in most cases.
Yang, S, Liu, X, Liu, Q, Guan, L, Lee, JM & Jung, KH 2017, 'A Study of Storm Surge Disasters Based on Extreme Value Distribution Theory', Journal of Coastal Research, vol. 336, pp. 1423-1435.
View/Download from: Publisher's site
Yao, S-N, Lin, C-T, King, J-T, Liu, Y-C & Liang, C 2017, 'Learning in the visual association of novice and expert designers', Cognitive Systems Research, vol. 43, pp. 76-88.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Designers are adept at using visual association to determine similarities between previously seen objects and new creations. However, extant research on the visual association of designers, and on the differences between expert and novice designers engaged in visual association tasks, is scant. Using electroencephalography (EEG), this study attempted to narrow this research gap. Sixteen healthy designers, eight experts and eight novices, were recruited and asked to perform visual association while EEG signals were acquired and subsequently analysed using independent component analysis. The results indicated strong connectivity among the prefrontal, frontal, and cingulate cortices, and the default mode network. The experts used both hemispheres and executive functions to support their association tasks, whereas the novices mainly used their right hemisphere and memory retrieval functions. The visual association of the experts appeared to be more goal-directed than that of the novices. Accordingly, designing and implementing authentic, goal-directed activities to improve the executive functions of the prefrontal cortex and default mode network is critical for design educators and creativity researchers.
Ye, D, Zhang, M & Vasilakos, AV 2017, 'A Survey of Self-Organization Mechanisms in Multiagent Systems', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 3, pp. 441-461.
View/Download from: Publisher's site
Yin, H, Wang, W, Wang, H, Chen, L & Zhou, X 2017, 'Spatial-Aware Hierarchical Collaborative Deep Learning for POI Recommendation', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 11, pp. 2537-2551.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Point-of-interest (POI) recommendation has become an important way to help people discover attractive and interesting places, especially when they travel out of town. However, the extreme sparsity of user-POI matrix and cold-start issues severely hinder the performance of collaborative filtering-based methods. Moreover, user preferences may vary dramatically with respect to the geographical regions due to different urban compositions and cultures. To address these challenges, we stand on recent advances in deep learning and propose a Spatial-Aware Hierarchical Collaborative Deep Learning model (SH-CDL). The model jointly performs deep representation learning for POIs from heterogeneous features and hierarchically additive representation learning for spatial-aware personal preferences. To combat data sparsity in spatial-aware user preference modeling, both the collective preferences of the public in a given target region and the personal preferences of the user in adjacent regions are exploited in the form of social regularization and spatial smoothing. To deal with the multimodal heterogeneous features of the POIs, we introduce a late feature fusion strategy into our SH-CDL model. The extensive experimental analysis shows that our proposed model outperforms the state-of-the-art recommendation models, especially in out-of-town and cold-start recommendation scenarios.
Yu, S, Liu, M, Dou, W, Liu, X & Zhou, S 2017, 'Networking for Big Data: A Survey', IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 531-549.
View/Download from: Publisher's site
View description>>
Complementary to the fancy big data applications, networking for big data is an indispensable supporting platform for these applications in practice. This emerging research branch has gained extensive attention from both academia and industry in recent years. In this new territory, researchers are facing many unprecedented theoretical and practical challenges. We are therefore motivated to solicit the latest works in this area, aiming to pave a comprehensive and solid starting ground for interested readers. We first clarify the definition of networking for big data based on the cross disciplinary nature and integrated needs of the domain. Second, we present the current understanding of big data from different levels, including its formation, networking features, mathematical representations, and the networking technologies. Third, we discuss the challenges and opportunities from various perspectives in this hopeful field. We further summarize the lessons we learned based on the survey. We humbly hope this paper will shed light for forthcoming researchers to further explore the uncharted part of this promising land.
Yu, S, Muller, P & Zomaya, A 2017, 'Editorial: special issue on “big data security and privacy”', Digital Communications and Networks, vol. 3, no. 4, pp. 211-212.
View/Download from: Publisher's site
Yusoff, B, Merigó, JM & Ceballos, D 2017, 'Owa-based aggregation operations in multi-expert mcdm model', Economic Computation and Economic Cybernetics Studies and Research, vol. 51, no. 2, pp. 211-230.
View description>>
This paper presents an analysis of a multi-expert multi-criteria decision making (ME-MCDM) model based on ordered weighted averaging (OWA) operators. Two methods of modeling the majority opinion, both based on induced OWA operators, are studied for aggregating the experts' judgments. An overview of OWA operators incorporating different degrees of importance is then provided for aggregating the criteria. An alternative OWA operator with a new weighting method is proposed, termed the alternative OWAWA (AOWAWA) operator. Extensions of the ME-MCDM model with two-stage aggregation processes are developed based on the classical and alternative schemes, and the results of the different decision schemes are compared. With respect to the alternative scheme, a further comparison is given for different techniques of integrating the degrees of importance. A numerical example on the selection of an investment strategy is used to exemplify the model and to support the analysis.
Zamani, R, Brown, RBK, Beydoun, G & Tibben, W 2017, 'The architecture of an effective software application for managing enterprise projects', Journal of Modern Project Management, vol. 5, no. 1, pp. 114-122.
View/Download from: Publisher's site
View description>>
This paper presents the architecture of an effective software application for managing enterprise projects. Viewing the execution of an enterprise project as a highly complex system requiring many delicate trade-offs among completion time, cost, safety, and quality, the architecture is designed around the fact that any action in one part of such a project can strongly affect its other parts. The architecture highlights the complexity of the system and the way computational intelligence should be employed in making these trade-offs. It also reflects the fact that developing a software application to manage such trade-offs appropriately is not a trivial task, and that a robust application for this purpose must incorporate an array of sophisticated optimization techniques. A multi-agent system (MAS), as a software application composed of multiple interacting modules, is used as the main component of the architecture. In this multi-agent system, modules interact with the environment online and resolve, on a daily basis, resource conflicts that are complex and hard to resolve. Based on the proposed architecture, the paper also provides a template software application in which an array of optimization techniques shows how the necessary trade-offs can be made. The template results from the integration of several sophisticated recent procedures for single- and multi-mode resource-constrained project scheduling problems.
Zeng, S, Merigó, JM, Palacios-Marqués, D, Jin, H & Gu, F 2017, 'Intuitionistic fuzzy induced ordered weighted averaging distance operator and its application to decision making', Journal of Intelligent & Fuzzy Systems, vol. 32, no. 1, pp. 11-22.
View/Download from: Publisher's site
View description>>
© 2017 - IOS Press and the authors. In this paper, we develop a new method for intuitionistic fuzzy decision making problems with induced aggregation operators and distance measures. Firstly, we introduce the intuitionistic fuzzy induced ordered weighted averaging distance (IFIOWAD) operator. It is an extension of the ordered weighted averaging (OWA) operator that uses the main characteristics of the induced OWA (IOWA) operator, distance measures, and uncertain information represented by intuitionistic fuzzy numbers. The main advantage of this operator is that it can consider complex attitudinal characters of the decision-maker by using order-inducing variables in the aggregation of the distance measures. We further generalize the IFIOWAD by using the weighted average, obtaining the intuitionistic fuzzy induced ordered weighted averaging weighted average distance (IFIOWAWAD) operator. Finally, a practical example on the selection of investments illustrates the developed intuitionistic fuzzy aggregation operators.
Zhang, Q, Wu, D, Lu, J, Liu, F & Zhang, G 2017, 'A cross-domain recommender system with consistent information transfer', Decision Support Systems, vol. 104, pp. 49-63.
View/Download from: Publisher's site
View description>>
© 2017 Elsevier B.V. Recommender systems provide users with personalized online product and service recommendations and are a ubiquitous part of today's online entertainment smorgasbord. However, many suffer from cold-start problems due to a lack of sufficient preference data, and this is hindering their development. Cross-domain recommender systems have been proposed as one possible solution. These systems transfer knowledge from one domain that has adequate preference information to another domain that does not. The outlook for cross-domain recommendation is promising, but existing methods cannot ensure that the knowledge extracted from the source domain is consistent with the target domain, which may impact the accuracy of the recommendations. To address this challenging issue, we propose a cross-domain recommender system with consistent information transfer (CIT). Knowledge consistency is based on user and item latent groups, and domain adaptation techniques are used to map and adjust these groups in both domains to maintain consistency during the transfer learning process. Experiments were conducted on five real-world datasets in three categories: movies, books, and music. The results for nine cross-domain recommendation tasks show that CIT outperforms five benchmarks and increases the accuracy of recommendations in the target domain, especially with sparse data. Practically, our proposed method has been applied in a telecom product recommender system and a business partner recommender system (Smart BizSeeker) to enhance personalized decision making for both businesses and individual customers.
Zhang, Y, Chen, H, Lu, J & Zhang, G 2017, 'Detecting and predicting the topic change of Knowledge-based Systems: A topic-based bibliometric analysis from 1991 to 2016', Knowledge-Based Systems, vol. 133, pp. 255-268.
View/Download from: Publisher's site
View description>>
© 2017 The journal Knowledge-based Systems (KnoSys) has been published for over 25 years, during which time its main foci have extended to a broad range of studies in computer science and artificial intelligence. Answering the questions "What is the KnoSys community interested in?" and "How does such interest change over time?" is important to both the editorial board and the audience of KnoSys. This paper conducts a topic-based bibliometric study to detect and predict the topic changes of KnoSys from 1991 to 2016. A Latent Dirichlet Allocation model is used to profile the hotspots of KnoSys and predict possible future trends from a probabilistic perspective. A model of scientific evolutionary pathways applies a learning-based process to detect the topic changes of KnoSys in sequential time slices. Six main research areas of KnoSys are identified: expert systems, machine learning, data mining, decision making, optimization, and fuzzy systems. The results also indicate that the KnoSys community's interest in computational intelligence has risen, and that the ability to construct practical systems through knowledge use and accurate prediction models is highly emphasized. Such empirical insights can be used as a guide for KnoSys submissions.
Zhang, Y, Qian, Y, Huang, Y, Guo, Y, Zhang, G & Lu, J 2017, 'An entropy-based indicator system for measuring the potential of patents in technological innovation: rejecting moderation', Scientometrics, vol. 111, no. 3, pp. 1925-1946.
View/Download from: Publisher's site
View description>>
© 2017, Akadémiai Kiadó, Budapest, Hungary. How to evaluate the value of a patent in technological innovation quantitatively and systematically is a challenge for bibliometrics. Traditional indicator systems and weighting approaches mostly lead to “moderation” results; that is, patents ranked to a top list can have only good-looking values on all indicators rather than distinctive performances on certain individual indicators. Focusing on patents granted by the United States Patent and Trademark Office (USPTO), this paper constructs an entropy-based indicator system to measure their potential in technological innovation. Shannon's entropy is introduced to quantitatively weight indicators, and a collaborative filtering technique is used to iteratively remove negative patents. What remains is a small set of positive patents with potential in technological innovation as the output. A case study with 28,509 USPTO-granted patents with Chinese assignees, covering the period from 1976 to 2014, demonstrates the feasibility and reliability of this method.
Zhang, Y, Zhang, G, Zhu, D & Lu, J 2017, 'Scientific evolutionary pathways: Identifying and visualizing relationships for scientific topics', Journal of the Association for Information Science and Technology, vol. 68, no. 8, pp. 1925-1939.
View/Download from: Publisher's site
View description>>
Whereas traditional science maps emphasize citation statistics and static relationships, this paper presents a term‐based method to identify and visualize the evolutionary pathways of scientific topics in a series of time slices. First, we create a data preprocessing model for accurate term cleaning, consolidating, and clustering. Then we construct a simulated data streaming function and introduce a learning process to train a relationship identification function to adapt to changing environments in real time, where relationships of topic evolution, fusion, death, and novelty are identified. The main result of the method is a map of scientific evolutionary pathways. The visual routines provide a way to indicate the interactions among scientific subjects, and a version in a series of time slices helps further illustrate such evolutionary pathways in detail. The detailed outline offers sufficient statistical information to delve into scientific topics and routines, and helps derive meaningful insights with the assistance of expert knowledge. This empirical study focuses on scientific proposals granted by the United States National Science Foundation, and demonstrates the method's feasibility and reliability. Our method could be widely applied to a range of science, technology, and innovation policy research, and offer insight into the evolutionary pathways of scientific activities.
Zhu, T, Li, G, Zhou, W & Yu, PS 2017, 'Differentially Private Data Publishing and Analysis: A Survey', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 8, pp. 1619-1638.
View/Download from: Publisher's site
View description>>
Differential privacy is an essential and prevalent privacy model that has been widely explored in recent decades. This survey provides a comprehensive and structured overview of two research directions: differentially private data publishing and differentially private data analysis. We compare the diverse release mechanisms of differentially private data publishing given a variety of input data in terms of query type, the maximum number of queries, efficiency, and accuracy. We identify two basic frameworks for differentially private data analysis and list the typical algorithms used within each framework. The results are compared and discussed based on output accuracy and efficiency. Further, we propose several possible directions for future research and possible applications.
Zhu, Z, Liu, X, Wang, Y, Lu, W, Gong, L, Yu, S & Ansari, N 2017, 'Impairment- and Splitting-Aware Cloud-Ready Multicast Provisioning in Elastic Optical Networks', IEEE/ACM Transactions on Networking, vol. 25, no. 2, pp. 1220-1234.
View/Download from: Publisher's site
View description>>
It is known that multicast provisioning is important for supporting cloud-based applications, and as the traffic from these applications increases quickly, we may rely on optical networks to realize high-throughput multicast. Meanwhile, flexible-grid elastic optical networks (EONs) achieve agile access to the massive bandwidth in optical fibers, and hence can provision variable bandwidths to adapt to the dynamic demands of cloud-based applications. In this paper, we consider all-optical multicast in EONs in a practical manner and focus on designing impairment- and splitting-aware multicast provisioning schemes. We first study the procedure of adaptive modulation selection for a light-tree, and point out that the multicast scheme in EONs is fundamentally different from that in fixed-grid wavelength-division multiplexing networks. Then, we formulate the problem of impairment- and splitting-aware routing, modulation and spectrum assignment (ISa-RMSA) for all-optical multicast in EONs and analyze its hardness. Next, we analyze the advantages brought by the flexibility of routing structures and discuss ISa-RMSA schemes based on light-trees and light-forests. This paper suggests that for ISa-RMSA, the light-forest-based approach can use less bandwidth than the light-tree-based one while still satisfying the quality-of-transmission requirement. Therefore, we establish the minimum light-forest problem for optimizing a light-forest in ISa-RMSA. Finally, we design several time-efficient ISa-RMSA algorithms, and prove that one of them can solve the minimum light-forest problem with a fixed approximation ratio.
Zuo, H, Zhang, G, Pedrycz, W, Behbood, V & Lu, J 2017, 'Fuzzy Regression Transfer Learning in Takagi–Sugeno Fuzzy Models', IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp. 1795-1807.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. Data science is a research field concerned with processes and systems that extract knowledge from massive amounts of data. In some situations, however, data shortage renders existing data-driven methods difficult or even impossible to apply. Transfer learning has recently emerged as a way of exploiting previously acquired knowledge to solve new yet similar problems much more quickly and effectively. In contrast to classical data-driven machine learning methods, transfer learning methods exploit the knowledge accumulated from data in auxiliary domains to facilitate predictive modeling in the current domain. A significant number of transfer learning methods that address classification tasks have been proposed, but studies on transfer learning in the case of regression problems are still scarce. This study focuses on using transfer learning techniques to handle regression problems in a domain that has insufficient training data. We propose an original fuzzy regression transfer learning method, based on fuzzy rules, to address the problem of estimating the value of the target for regression. A Takagi-Sugeno fuzzy regression model is developed to transfer knowledge from a source domain to a target domain. Experimental results using synthetic data and real-world datasets demonstrate that the proposed fuzzy regression transfer learning method significantly improves the performance of existing models when tackling regression problems in the target domain.
Abedin, B, Erfani, S & Blount, Y 2017, 'Social media adoption framework for aged care service providers in Australia', 2017 International Conference on Research and Innovation in Information Systems (ICRIIS), 2017 5th International Conference on Research and Innovation in Information Systems (ICRIIS), IEEE, Langkawi, Malaysia.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The aged care sector has been a late adopter of social media platforms for communicating, collaborating, marketing and creating brand awareness. There is little research that examines the adoption of social media by aged care service providers for these purposes. This paper reviews the status of social media adoption in the Australian aged care industry, to understand in what ways social media can serve older people's needs, and to develop recommendations for aged-care service providers to adopt social media applications to empower older people. Through a review of the literature and interviews with Australian experts, this paper suggests aged care providers use a three-phase framework when adopting social media in the aged care sector. The first phase is to adopt a popular public social media platform such as Facebook followed by Instagram and Twitter. The second phase supports interaction by encouraging posts and feedback by locally hosted member forums. The third phase is the adoption of specialised social applications for closed groups and specific functions. The paper concludes with a discussion on the implications of the framework and proposes directions for future research.
Abedini, A, Abedin, B & Miliszewska, I 2017, 'Peer to peer adult learning engagement in online collaborative learning: Characteristics and learning outcomes', Proceedings of the 21st Pacific Asia Conference on Information Systems: ''Societal Transformation Through IS/IT'', PACIS 2017, Pacific Asia Conference on Information Systems, AIS Electronic Library (AISeL), Langkawi, Malaysia.
View description>>
The purpose of this study is to investigate an under-researched area of adult learning in informal and unstructured online spaces. The first phase of the project involved a systematic review of 31 studies on adult learners' peer-to-peer (P2P) interactions in online learning environments. The aspects explored were: (1) the characteristics of adult P2P engagement in online collaborative learning environments; (2) the impacts of that engagement on the learning outcomes of adult learners; and (3) the factors that could facilitate or hinder adult engagement in such environments. The review revealed that most studies investigated the broad effects of P2P adult learning on learning outcomes. These effects suggest that: (1) adult learning efficiency could be improved through the application of more specialized approaches; and (2) various unexplored factors may be important in facilitating P2P adult learning. This research will allow for better consideration of adult learning processes and activities.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Impact of struck-out text on writer identification', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1465-1471.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The presence of struck-out text in handwritten manuscripts may affect the accuracy of automated writer identification. This paper presents a study on such effects of struck-out text. Here we consider offline English and Bengali handwritten document images. At first, the struck-out texts are detected using a hybrid classifier of a CNN (Convolutional Neural Network) and an SVM (Support Vector Machine). Then the writer identification process is activated on normal and struck-out text separately, to ascertain the impact of struck-out texts. For writer identification, we use two methods: (a) a hand-crafted feature-based SVM classifier, and (b) CNN-extracted auto-derived features with a recurrent neural model. For the experimental analysis, we have generated a database from 100 English and 100 Bengali writers. The performance of our system is very encouraging.
Adak, C, Chaudhuri, BB & Blumenstein, M 2017, 'Legibility and Aesthetic Analysis of Handwriting', 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), IEEE, Kyoto, Japan, pp. 175-182.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper deals with computer-based cognitive analysis of the legibility and aesthetics of a handwritten document. Legible text creates a human perception that the writing can be read effortlessly because of its orthographic clarity. The aesthetic property relates to the beautiful appearance of a handwritten document. In this study, we examine these properties on offline Bengali handwriting. We formulate both the legibility and aesthetic analysis tasks as machine learning problems supervised by the human cognitive system. We employ recurrent neural networks with automatically derived features to investigate writing legibility. For aesthetics evaluation, we employ hand-crafted feature-based support vector machines (SVMs). We collected contemporary Bengali handwriting samples, for which subjective legibility and aesthetic scores were provided by human readers. We executed our experiments on this corpus containing legibility and aesthetic ground-truth information. The experimental results obtained on various handwriting samples are encouraging.
Ahadi, A, Lister, R, Lal, S, Leinonen, J & Hellas, A 2017, 'Performance and Consistency in Learning to Program', Proceedings of the Nineteenth Australasian Computing Education Conference, ACE '17: Nineteenth Australasian Computing Education Conference, ACM, Geelong.
View/Download from: Publisher's site
View description>>
Performance and consistency play a large role in learning. Decreasing the effort that one invests into course work may have short-term benefits such as reduced stress. However, as courses progress, neglected work accumulates and may cause challenges with learning the course content at hand.
In this work, we analyze students' performance and consistency with programming assignments in an introductory programming course. We study how performance, when measured through progress in course assignments, evolves throughout the course, study weekly fluctuations in students' work consistency, and contrast this with students' performance in the course final exam.
Our results indicate that although fluctuations in students' weekly performance do not distinguish poorly performing students from well performing students with high accuracy, more accurate results can be achieved by focusing on students' performance on individual assignments. This could be used to identify struggling students who are at risk of dropping out of their studies.
Al-Doghman, F, Chaczko, Z & Jiang, J 2017, 'A Review of Aggregation Algorithms for the Internet of Things', 2017 25th International Conference on Systems Engineering (ICSEng), 2017 25th International Conference on Systems Engineering (ICSEng), IEEE, Piscataway, USA, pp. 480-487.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The Internet of Things (IoT) epitomizes the imminent transition in the world's economy and human lifestyle, in which people and various objects are correlated within networks. Data aggregation is a technique that can be used to mitigate Big Data challenges within the IoT. This paper provides an overview of various approaches to aggregating data in IoT infrastructure. A new class of reliable data aggregation algorithm is discussed as well. This class of algorithm uses consensus-based aggregation with a fault-tolerance methodology in Fog Computing, promoting adaptive behavior and more efficient delivery of the aggregation outcomes to the ascendant node(s). The proposed method is fault tolerant and addresses node reliability issues.
Alghamdi, A, Hussain, W, Alharthi, A & Almusheqah, AB 2017, 'The Need of an Optimal QoS Repository and Assessment Framework in Forming a Trusted Relationship in Cloud: A Systematic Review', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 301-306.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Owing to the cost-effectiveness and scalability of the cloud, demand for its services is increasing every day. Quality of Service (QoS) is one of the crucial factors in forming a viable Service Level Agreement (SLA) between a consumer and a provider, enabling them to establish and maintain a trusted relationship with each other. An SLA identifies and depicts the service requirements of the user and the level of service promised by the provider. The availability of numerous service solutions makes it difficult for cloud users to select the right service provider, both in terms of price and the degree of promised services. On the other end, a service provider needs a centralized and reliable QoS repository and assessment framework to help it offer an optimal amount of marginal resources to the requesting consumer. Although a number of existing studies assist the interacting parties in achieving their desired goals in some way, there are still many gaps to be filled before a trusted relationship can be established and maintained between them. In this paper we identify the gaps that must be addressed for a trusted relationship between a service provider and a service consumer. The aim of this research is to present an overview of the existing literature and compare the studies based on different criteria, such as QoS integration, QoS repository, QoS filtering, trusted relationships, and SLAs.
Alkalbani, AM, Gadhvi, L, Patel, B, Hussain, FK, Ghamry, AM & Hussain, OK 2017, 'Analysing Cloud Services Reviews Using Opining Mining', 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), IEEE, Tamkang Univ, Taipei, TAIWAN, pp. 1124-1129.
View/Download from: Publisher's site
Anaissi, A, Khoa, NLD, Mustapha, S, Alamdari, MM, Braytee, A, Wang, Y & Chen, F 2017, 'Adaptive One-Class Support Vector Machine for Damage Detection in Structural Health Monitoring', Advances in Knowledge Discovery and Data Mining (LNAI), Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Springer International Publishing, Jeju, South Korea, pp. 42-57.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Machine learning algorithms have been employed extensively in the area of structural health monitoring to compare new measurements with baselines to detect any structural change. One-class support vector machine (OCSVM) with a Gaussian kernel function is a promising machine learning method that can learn from data of only one class and then classify new query samples. However, the generalization performance of OCSVM is profoundly influenced by its Gaussian model parameter σ. This paper proposes a new algorithm, named Appropriate Distance to the Enclosing Surface (ADES), for tuning the Gaussian model parameter. The idea behind this algorithm is to inspect the spatial locations of the edge and interior samples, and their distances to the enclosing surface of the OCSVM. The algorithm selects the optimal value of σ that generates a hyperplane maximally distant from the interior samples but close to the edge samples. The sets of interior and edge samples are identified using a hard-margin linear support vector machine. The algorithm was successfully validated using sensing data collected from the Sydney Harbour Bridge, in addition to five public datasets. The ADES algorithm is an appropriate choice for identifying the optimal value of σ for OCSVM, especially in high-dimensional datasets.
Asadabadi, MR, Saberi, M & Chang, E 2017, 'A fuzzy game based framework to address ambiguities in performance based contracting', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, GERMANY, pp. 1214-1217.
View/Download from: Publisher's site
Asadabadi, MR, Saberi, M & Chang, E 2017, 'Logistic informatics modelling using concept of stratification (CST)', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, ITALY.
View/Download from: Publisher's site
Azadeh, A, Pourreza, P, Saberi, M, Hussain, OK & Chang, E 2017, 'An integrated fuzzy cognitive map-Bayesian network model for improving HSEE in energy sector', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, ITALY.
View/Download from: Publisher's site
Balaji, S, Patil, M & McGregor, C 2017, 'A Cloud Based Big Data Based Online Health Analytics for Rural NICUs and PICUs in India: Opportunities and Challenges', 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Aristotle Univ Thessaloniki, Thessaloniki, GREECE, pp. 385-390.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. High-frequency physiological data has great potential to provide new insights into many conditions patients can develop in critical care when utilized by Big Data analytics-based clinical decision support systems such as Artemis. Artemis was deployed in the NICU at SickKids Hospital in Toronto in August 2009 and exploits the full potential of big data. Both the original data and the newly generated analytics are stored in the data persistence component of Artemis, while real-time analytics is performed in the online analytics component. The knowledge extraction component of the system performs data mining to support clinical research for various conditions. Artemis has to date been utilized in three different implementations; however, its use still presents many challenges for lower-resource settings. This research demonstrates the challenges and opportunities of using the Artemis cloud as a cloud computing-based Health Analytics-as-a-Service approach for the provision of remote real-time patient monitoring in low-resource settings. We present case study research to demonstrate the implications, opportunities, and challenges of utilizing Artemis in small and remote pediatric critical care units, namely NICUs/PICUs in India. Utilizing big data within pediatric intensive care units has great potential to improve healthcare in these low-resource settings.
Bano, M & Zowghi, D 2017, 'Crowd Vigilante - Detecting Sabotage in Crowdsourcing', APRES, Springer, pp. 114-120.
View/Download from: Publisher's site
View description>>
© Springer Nature Singapore Pte Ltd. 2018. Crowdsourcing is a complex and sociotechnical problem solving approach for collaboration of geographically distributed volunteer crowd to contribute to the achievement of a common task. One of the major issues faced by crowdsourced projects is the trustworthiness of the crowd. This paper presents a vision to develop a framework with supporting methods and tools for early detection of the malicious acts of sabotage in crowdsourced projects by utilizing and scaling digital forensic techniques. The idea is to utilize the crowd to build the digital evidence of sabotage with systematic collection and analysis of data from the same crowdsourced project where the threat is situated. The proposed framework aims to improve the security of the crowdsourced projects and their outcomes by building confidence about the trustworthiness of the workers.
Bano, M, Zowghi, D & Kearney, M 2017, 'Feature Based Sentiment Analysis for Evaluating the Mobile Pedagogical Affordances of Apps', WCCE, IFIP TC 3 World Conference on Computers in Education, Springer, Dublin, Ireland, pp. 281-291.
View/Download from: Publisher's site
View description>>
© 2017, IFIP International Federation for Information Processing. The launch of millions of apps has made it challenging for teachers to select the most suitable educational app to support students’ learning. Several evaluation frameworks have been proposed in the research literature to assist teachers in selecting the right apps for their needs. This paper presents preliminary results of an innovative technique for evaluating educational mobile apps by analysing the feedback of past app users through the lens of a mobile pedagogical perspective. We have utilized a sentiment analysis tool to assess the opinions of the app users through the lens of the criteria offered by a rigorous mobile learning pedagogical framework highlighting the learners’ experience of Personalization, Authenticity and Collaboration (iPAC). The investigation has provided initial confirmation of the powerful utility of the feature based sentiment analysis technique for evaluating the mobile pedagogical affordances of learning apps.
Bashir, MR & Gill, AQ 2017, 'IoT enabled smart buildings: A systematic review', 2017 Intelligent Systems Conference (IntelliSys), 2017 Intelligent Systems Conference (IntelliSys), IEEE, London, UK, pp. 151-159.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. There is an increasing interest in Internet of Things (IoT) enabled smart buildings. The main question is: what are the key challenges that must be addressed to effectively manage and analyze the big data for IoT enabled smart buildings? A systematic literature review is needed to understand these challenges and the solutions to overcome them. Using the SLR approach, 22 relevant studies were identified and reviewed in this paper. The data from these selected studies were extracted to identify the challenges and relevant solutions. The findings from this research paper will serve as a knowledge base for researchers and practitioners conducting further research and development in this important area.
Bei, X, Qiao, Y & Zhang, S 2017, 'Networked Fairness in Cake Cutting', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, pp. 3632-3638.
View/Download from: Publisher's site
View description>>
We introduce a graphical framework for fair division in cake cutting, where comparisons between agents are limited by an underlying network structure. We generalize the classical fairness notions of envy-freeness and proportionality in this graphical setting. An allocation is called envy-free on a graph if no agent envies any of her neighbor's share, and is called proportional on a graph if every agent values her own share no less than the average among her neighbors, with respect to her own measure. These generalizations enable new research directions in developing simple and efficient algorithms that can produce fair allocations under specific graph structures. On the algorithmic frontier, we first propose a moving-knife algorithm that outputs an envy-free allocation on trees. The algorithm is significantly simpler than the discrete and bounded envy-free algorithm introduced in [Aziz and Mackenzie, 2016] for complete graphs. Next, we give a discrete and bounded algorithm for computing a proportional allocation on the transitive closure of trees, a class of graphs obtained by taking a rooted tree and connecting all its ancestor-descendant pairs.
Berry, DM, Cleland-Huang, J, Ferrari, A, Maalej, W, Mylopoulos, J & Zowghi, D 2017, 'Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and Beyond', 2017 IEEE 25th International Requirements Engineering Conference (RE), 2017 IEEE 25th International Requirements Engineering Conference (RE), IEEE, Lisbon, Portugal, pp. 570-573.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Context and Motivation Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools. Problem Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used. Principal Ideas This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts. Contribution The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool’s task.
Bluff, A & Johnston, A 2017, 'Storytelling with Interactive Physical Theatre', Proceedings of the 4th International Conference on Movement Computing, MOCO '17: 4th International Conference on Movement Computing, ACM, London, United Kingdom, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 ACM. This paper examines the way movement based interactive visuals were used as a storytelling device in the physical theatre production of Creature: Dot and the Kangaroo. A number of performers and artists involved in the production were interviewed and their perceptions of the interactive technology have been contrasted against a similar study into abstract dance. The animated backgrounds and interactive animal graphics projected onto the stage were found to reduce the density of the script by describing the location of the action and the spirit of the character, reducing the necessity for this to be spoken. Peak moments of the show were identified by those interviewed and a scene analysis revealed that the most successful scenes featured a more integrated storytelling where the interaction between performers and the digital projections portrayed a key narrative message.
Braytee, A, Liu, W & Kennedy, PJ 2017, 'Supervised context-aware non-negative matrix factorization to handle high-dimensional high-correlated imbalanced biomedical data', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 4512-4519.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Traditional feature selection techniques are used to identify a subset of the most useful features, and consider the rest as unimportant, redundant or noisy. In the presence of highly correlated features, many variable selection methods consider correlated features as redundant and need to be removed. In this paper, a novel supervised feature selection algorithm SCANMF is proposed by jointly integrating correlation analysis and structural analysis of the balanced supervised non-negative matrix factorization (NMF). Furthermore, ℓ2,1-norm minimization constraint is incorporated into the objective function to guarantee sparsity in the feature matrix rows and reduce noisy features. Our algorithm exploits the discriminative information, feature combinations, and the original features in the context of a supervised NMF method which can be beneficial for both classification and interpretation. An efficient iterative algorithm is designed to solve the constrained optimization problem with guaranteed convergence. Finally, a series of extensive experiments are conducted on 8 complex datasets. Promising results using multiple classifiers demonstrate the effectiveness and efficiency of our algorithm over state-of-the-art methods.
Braytee, A, Liu, W, Catchpoole, DR & Kennedy, PJ 2017, 'Multi-Label Feature Selection using Correlation Information', Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17: ACM Conference on Information and Knowledge Management, ACM, Singapore, Singapore, pp. 1649-1656.
View/Download from: Publisher's site
View description>>
© 2017 ACM. High-dimensional multi-labeled data contain instances, where each instance is associated with a set of class labels and has a large number of noisy and irrelevant features. Feature selection has been shown to have great benefits in improving the classification performance in machine learning. In multi-label learning, to select the discriminative features among multiple labels, several challenges should be considered: interdependent labels, different instances may share different label correlations, correlated features, and missing and flawed labels. This work is part of a project at The Children's Hospital at Westmead (TB-CHW), Australia to explore the genomics of childhood leukaemia. In this paper, we propose CMFS (a Correlated and Multi-label Feature Selection method), based on non-negative matrix factorization (NMF), for simultaneously performing feature selection and addressing the aforementioned challenges. Significantly, a major advantage of our research is to exploit the correlation information contained in features, labels and instances to select the relevant features among multiple labels. Furthermore, ℓ2,1-norm regularization is incorporated in the objective function to undertake feature selection by imposing sparsity on the feature matrix rows. We employ CMFS to decompose the data and multi-label matrices into a low-dimensional space. To solve the objective function, an efficient iterative optimization algorithm is proposed with guaranteed convergence. Finally, extensive experiments are conducted on high-dimensional multi-labeled datasets. The experimental results demonstrate that our method significantly outperforms state-of-the-art multi-label feature selection methods.
Broekhuijsen, M, van den Hoven, E & Markopoulos, P 2017, 'Design Directions for Media-Supported Collocated Remembering Practices', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, Japan, pp. 21-30.
View/Download from: Publisher's site
View description>>
Since the widespread adoption of digital photography, people create many digital photos, often with the intention to use them for shared remembering. Practices around digital photography have changed along with advances in media sharing technologies such as smartphones, social media, and mobile connectivity. Although much research was done at the start of digital photography, commercially available tools for media-supported shared remembering still have many limitations. The objective of our research is to explore spatial and material design directions to better support the use of personal photos for collocated shared remembering. In this paper, we present seven design requirements that resulted from a redesign workshop with fifteen participants, and four design concepts (two spatial, two material) that we developed based on those requirements. By reflecting on the requirements and designs we conclude with challenges for interaction designers to support collocated remembering practices.
Buchan, J, Bano, M, Zowghi, D, MacDonell, SG & Shinde, A 2017, 'Alignment of Stakeholder Expectations about User Involvement in Agile Software Development', EASE, International Conference on Evaluation and Assessment in Software Engineering, ACM, Karlskrona, Sweden, pp. 334-343.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. Context: User involvement is generally considered to contribute to user satisfaction and project success and is central to Agile software development. In theory, the expectations about user involvement, such as the PO's, are quite demanding in this Agile way of working. But what are the expectations seen in practice, and are the expectations of user involvement aligned among the development team and users? Any misalignment could contribute to conflict and miscommunication among stakeholders that may result in ineffective user involvement. Objective: Our aim is to compare and contrast the expectations of two stakeholder groups (software development team, and software users) about user involvement in order to understand the expectations and assess their alignment. Method: We have conducted an exploratory case study of expectations about user involvement in an Agile software development. Qualitative data was collected through interviews to design a novel method for assessing the alignment of expectations about user involvement by applying Repertory Grids (RG). Results: By aggregating the results from the interviews and RGs, varying degrees of expectation alignment were observed between the development team and user representatives. Conclusion: Alignment of expectations can be assessed in practice using the proposed RG instrument, which can reveal misalignment between user roles and the activities they participate in during Agile software development projects. Although we used the RG instrument retrospectively in this study, we posit that it could also be applied from the start of a project, or proactively as a diagnostic tool throughout a project, to assess and ensure that expectations are aligned.
Butler, A, Xu, G & He, X 2017, 'What comes first the Co-authorship Network or the Citation?', Proceedings of the 4th Multidisciplinary International Social Networks Conference, MISNC '17: 4th Multidisciplinary International Social Networks Conference, ACM, Bangkok, Thailand.
View/Download from: Publisher's site
View description>>
For many decades citation counting has been used as the way to quantify the nebulous notion of research "quality". Indeed, in conversation the terms "research quality", "impact" or "excellence in research" are simply a reference to a scientific document's citation count. Moreover, the commonly used journal "impact" factors are simply manipulated forms of citation counting. In recent times, the word "impact" has morphed into the new 'mot du jour'. This paper investigates and discusses the association between co-authorship networks and citations of institutions within an arbitrary, but defined, subject area. The data examined is readily available and the analytical techniques employed are deliberately simple. The simplicity of this analysis is driven by the desire to show that citation counts are not explicitly related to the quality of research but that citations are a result of multifaceted author networks that are inherent in scientific endeavor. The paper presents an argument that the improved ability to conduct effective network analysis and related research shows that the notion of high citations being the same as "research quality" has run its course. Citation performance is more likely to be a result of co-authorship network dynamics rather than any perceived notion of "quality". Moreover, it is time the folly of citation counting is put to rest: if one wants to know what "impact" one is having, one need look no further than one's co-authorship network and the reach it has across whatever subject area one is interested in. The discussion and results herein highlight that rather than counting citations, the "impact" of research is driven by connections through networks of people. © 2017 ACM.
Cao, J, Ma, M, Li, H, Fu, Y, Niu, B & Li, F 2017, 'Trajectory Prediction-based Handover Authentication Mechanism for Mobile Relays in LTE-A High-Speed Rail Networks', 2017 IEEE International Conference on Communications (ICC), IEEE International Conference on Communications (ICC), IEEE, Paris, France.
Cao, Z, Prasad, M & Lin, C-T 2017, 'Estimation of SSVEP-based EEG complexity using inherent fuzzy entropy', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This study considers the dynamic changes of the complexity feature under fuzzy entropy measurement and repetitive steady-state visual evoked potential (SSVEP) stimuli. Since brain complexity reflects the ability of the brain to adapt to changing situations, we suppose such adaptation is closely related to habituation, a form of learning in which an organism decreases or increases its response to a stimulus after repeated presentations. Using a wearable electroencephalograph (EEG) with Fpz and Oz electrodes, EEG signals were collected from 20 healthy participants in one resting session and five 15 Hz SSVEP sessions. Moreover, the EEG complexity feature was extracted by a multi-scale Inherent Fuzzy Entropy (IFE) algorithm, and relative complexity (RC) was defined as the difference between resting and SSVEP. Our results showed that enhanced frontal and occipital RC accompanied increased stimulus repetitions. Compared with the 1st SSVEP session, the RC in the 5th SSVEP session was significantly higher at frontal and occipital areas (p < 0.05). This suggests that the brain has adapted to changes in stimulus influence, possibly connected with habituation. In conclusion, effective evaluation of IFE is a potential EEG signature of complexity in SSVEP-based experiments.
Chang, Y-C, Wang, Y-K, Wu, D & Lin, C-T 2017, 'Generating a fuzzy rule-based brain-state-drift detector by riemann-metric-based clustering', 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Banff, AB, Canada, pp. 1220-1225.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Brain-state drifts could significantly impact the performance of machine-learning algorithms in brain computer interfaces (BCI). However, less is understood with regard to how brain transition states influence a model and how they can be represented for a system. Herein we are interested in the hidden information of brain state-drift occurring in both simulated and real-world human-system interaction. This research introduced the Riemann metric to categorize EEG data, and visualized the clustering result so that the distribution of the data can be observed. Moreover, to defeat the subjective uncertainty of electroencephalography (EEG) signals, fuzzy theory was employed. In this study, we built a fuzzy rule-based brain-state-drift detector to observe the brain state and imported data from different subjects to test the performance. The result of the detection is acceptable and shown in this paper. In the future, we expect that brain-state drifting can be connected with human behaviors via the proposed fuzzy rule-based classification. We will also develop a new structure for a fuzzy rule-based brain-state-drift detector to improve the detection accuracy.
Chauhan, J, Hu, Y, Seneviratne, S, Misra, A, Seneviratne, A & Lee, Y 2017, 'BreathPrint', Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys'17: The 15th Annual International Conference on Mobile Systems, Applications, and Services, ACM.
View/Download from: Publisher's site
Chen, S, Chen, S, Lin, L, Yuan, X, Liang, J & Zhang, X 2017, 'E-Map: A Visual Analytics Approach for Exploring Significant Event Evolutions in Social Media', 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, Phoenix, Arizona, USA, pp. 2720-2729.
View/Download from: Publisher's site
View description>>
Significant events are often discussed and spread through social media, involving many people. Reposting activities and opinions expressed in social media offer good opportunities to understand the evolution of events. However, the dynamics of reposting activities and the diversity of user comments pose challenges to understand event-related social media data. We propose E-Map, a visual analytics approach that uses map-like visualization tools to help multi-faceted analysis of social media data on a significant event and in-depth understanding of the development of the event. E-Map transforms extracted keywords, messages, and reposting behaviors into map features such as cities, towns, and rivers to build a structured and semantic space for users to explore. It also visualizes complex posting and reposting behaviors as simple trajectories and connections that can be easily followed. By supporting multi-level spatial temporal exploration, E-Map helps to reveal the patterns of event development and key players in an event, disclosing the ways they shape and affect the development of the event. Two cases analysing real-world events confirm the capacities of E-Map in facilitating the analysis of event evolution with social media data.
Chen, Z, Li, J, Chen, Z & You, X 2017, 'Generic Pixel Level Object Tracker Using Bi-Channel Fully Convolutional Network', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 666-676.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2017. While most object tracking algorithms predict bounding boxes to cover the target, pixel-level tracking methods provide a better description of the target. However, it remains challenging for a tracker to precisely identify detailed foreground areas of the target. In this work, we propose a novel bi-channel fully convolutional neural network to tackle the generic pixel-level object tracking problem. By capturing and fusing both low-level and high-level temporal information, our network is able to produce a pixel-level foreground mask of the target accurately. In particular, our model neither updates parameters to fit the tracked target nor requires prior knowledge about the category of the target. Experimental results show that the proposed network achieves compelling performance on challenging videos in comparison with competitive tracking algorithms.
Chen, Z, You, X & Li, J 2017, 'Learning to focus for object proposals', 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), IEEE, Shenzhen, China, pp. 439-444.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Object proposal generators address the wasteful exhaustive search of the sliding window scheme in visual object detection and have been shown effective. However, the number of candidate windows is still large in order to ensure full coverage of potential objects. This paper presents a complementary technique that aims to work with any proposal generating system, amending the workflow from 'propose-assess' to 'propose-adjust-assess'. The adjustment serves as an auto-focus mechanism for the system and reduces the number of object proposals to be processed. The auto-focus is realized by two learning-based transformation models, one translating and the other deforming the windows towards better alignments of the objects, which are trained for identifying generic objects using image cues. Experiments on real-life image data sets show that the proposed technique can reduce the number of proposals without loss of performance.
Cheng, E-J, Prasad, M, Puthal, D, Sharma, N, Prasad, OK, Chin, P-H, Lin, C-T & Blumenstein, M 2017, 'Deep Learning Based Face Recognition with Sparse Representation Classification', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24th International Conference on Neural Information Processing (ICONIP), Springer International Publishing, Guangzhou, China, pp. 665-674.
View/Download from: Publisher's site
View description>>
Feature extraction is an essential step in solving real-world pattern recognition and classification problems. The accuracy of face recognition highly depends on the extracted features to represent a face. Traditional algorithms use geometric techniques, comprising feature values including distance and angle between geometric points (eye corners, mouth extremities, and nostrils). These features are sensitive to elements such as illumination, variation of poses, and various expressions, to mention a few. Recently, deep learning techniques have been very effective for feature extraction, and deep features have considerable tolerance for various conditions and unconstrained environments. This paper proposes a two layer deep convolutional neural network (CNN) for face feature extraction and applies sparse representation for face identification. The sparsity and selectivity of deep features can strengthen sparseness for the solution of sparse representation, which generally improves the recognition rate. The proposed method outperforms other feature extraction and classification methods in terms of recognition accuracy.
Chiu, C-Y, Singh, AK, Wang, Y-K, King, J-T & Lin, C-T 2017, 'A wireless steady state visually evoked potential-based BCI eating assistive system', IJCNN, International Joint Conference on Neural Networks, IEEE, Anchorage, AK, USA, pp. 3003-3007.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Brain-Computer interfaces (BCI), which aim at enabling users to perform tasks through their brain waves, have been a feasible and worthwhile solution for the growing demand of healthcare. Currently proposed BCI systems often have low applicability and do not provide much help in reducing the burdens of users, because of the time-consuming preparation required by the adopted wet sensors and the shortage of provided interactive functions. Here, by integrating a steady state visually evoked potential (SSVEP)-based BCI system and a robotic eating assistive system, we propose a non-invasive wireless SSVEP-based BCI eating assistive system that enables users with physical disabilities to have meals independently. The analysis compared different classification methods and indicated the best one. The applicability of the integrated eating assistive system was tested by an Amyotrophic Lateral Sclerosis (ALS) patient, and a questionnaire reply and some suggestions are provided. Fifteen healthy subjects engaged in the experiment, and an average accuracy of 91.35% and an information transfer rate (ITR) of 20.69 bits per min were achieved. For online performance evaluation, the ALS patient gave basic affirmation and provided suggestions for further improvement. In summary, we propose a usable SSVEP-based BCI system enabling users to have meals independently. With additional adjustment of the movement design of the robotic arm and the classification algorithm, the system may offer users with physical disabilities a new way to take care of themselves.
Choi, Y & McGregor, C 2017, 'A Flexible Parental Engaged Consent Model for the Secondary Use of Their Infant’s Physiological Data in the Neonatal Intensive Care Context', 2017 IEEE International Conference on Healthcare Informatics (ICHI), 2017 IEEE International Conference on Healthcare Informatics (ICHI), IEEE, Park City, UT, USA, pp. 502-507.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The secondary use of health data, especially the use of physiological data for research holds many opportunities for improving the current understanding of neonatal conditions. As a neonate is unable to provide their consent regarding participation in research studies, a substitute decision maker (SDM) must provide parental or legal guardian consent. However it has been well documented that there are many emotional, mental and physical challenges associated with the parental consent process in the neonatal intensive care unit (NICU). It is proposed that a flexible parental engaged consent model could help alleviate some of these issues by providing parents with the ability to choose and change their clinical engagement level preference for their infant's participation in research at their convenience at any point in time. In this paper, an extension to Service based Multidimensional Temporal Data Mining Framework (STDMn0) to allow for the functionality of flexible patient or surrogate consent is presented based on the use of a flexible consent model initially proposed by Heath [1]. This functionality is demonstrated via an example implementation for a generic retrospective research study in the NICU setting.
Chou, K-P, Li, D-L, Prasad, M, Lin, C-T & Lin, W-C 2017, 'A method to enhance the deep learning in an aerial image', 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), IEEE, Xiamen, China, pp. 724-728.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this paper, we propose a pre-processing method that can be applied to deep learning methods to suit the characteristics of aerial images. The method combines color and spatial information to perform quick background filtering, which not only increases execution speed but also reduces the false positive rate.
Chou, K-P, Li, D-L, Prasad, M, Pratama, M, Su, S-Y, Lu, H, Lin, C-T & Lin, W-C 2017, 'Robust Facial Alignment for Face Recognition', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017 International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 497-504.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. This paper proposes a robust real-time face recognition system that utilizes a regression tree based method to locate the facial feature points. The proposed system finds the face region suitable for performing the recognition task by geometric analysis of the facial expression in the target face image. In real-world facial recognition systems, the face is often cropped based on face detection techniques. Misalignment inevitably occurs due to facial pose, noise, occlusion, and so on, and it affects the recognition rate due to the sensitive nature of the face classifier. The performance of the proposed approach is evaluated on four benchmark databases. The experimental results show the robustness of the proposed approach, with significant improvement in the facial recognition system across the various sizes and resolutions of given face images.
Chou, K-P, Prasad, M, Gupta, D, Sankar, S, Xu, T-W, Sundaram, S, Lin, C-T & Lin, W-C 2017, 'Block-based feature extraction model for early fire detection', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Every year, fire disasters cause many casualties and extensive property damage, and many researchers are involved in the study of related disaster prevention. Stable early fire warning systems can significantly reduce the damage caused by fire. Many existing image-based early warning systems perform well only in a particular field. In this paper, we propose a general framework that can be applied in most realistic environments. The proposed system is based on a block-based feature extraction method, which analyses local information in separate regions, reducing the amount of data to be computed. Local features of fire blocks are extracted from the detailed characteristics of fire objects, including fire color, fire source immobility, and disorder. Each local feature has a high detection rate and filters out different false-positive cases. Global analysis of fire texture and non-moving properties is applied to further reduce the false alarm rate. The proposed system is composed of algorithms with low computational cost. Experimental results from a series of experiments show that the proposed system achieves a higher detection rate and a lower false alarm rate in various environments.
Chou, K-P, Prasad, M, Li, D-L, Bharill, N, Lin, Y-F, Hussain, F, Lin, C-T & Lin, W-C 2017, 'Automatic Multi-view Action Recognition with Robust Features', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 554-563.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. This paper proposes view-invariant features to address multi-view action recognition for different actions performed in different views. The view-invariant features are obtained from clouds of varying temporal scale by extracting holistic features, which are modeled to explicitly take advantage of the global, spatial and temporal distribution of interest points. The proposed view-invariant features are highly discriminative and robust for recognizing actions as the view changes. This paper proposes a mechanism for real-world applications that can follow the actions of a person in a video based on image sequences and separate these actions according to given training data. Using the proposed mechanism, the beginning and end of an action sequence can be labeled automatically without the need for manual setting. The proposed approach does not require re-training when the scenario changes, which means the trained database can be applied in a wide variety of environments. The experimental results show that the proposed approach outperforms existing methods on the KTH and WEIZMANN datasets.
Chou, K-P, Prasad, M, Puthal, D, Chen, P-H, Vishwakarma, DK, Sundarami, S, Lin, C-T & Lin, W-C 2017, 'Fast Deformable Model for Pedestrian Detection with Haar-like features', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper proposes a novel Fast Deformable Model for Pedestrian Detection (FDMPD) to detect pedestrians efficiently and accurately in crowded environments. Despite the multiple detection methods available, detection remains difficult due to the variety of human postures and perspectives. The proposed study is divided into two parts. The first part trains six AdaBoost classifiers with Haar-like features for different body parts (e.g., head, shoulders, and knees) to build response feature maps. The second part uses these six response feature maps with a full-body model to produce spatial deep features. The combined deep features are used as input to an SVM to judge the presence of a pedestrian. In experiments conducted on the INRIA person dataset, the proposed FDMPD approach shows more than a 44.75% improvement over other state-of-the-art methods in terms of efficiency and robustness.
Chu, C, Brownlow, J, Meng, Q, Fu, B, Culbert, B, Zhu, M, Xu, G & He, X 2017, 'Combining heterogeneous features for time series prediction', Proceedings of 4th International Conference on Behavioral, Economic, and Socio-Cultural Computing, BESC 2017, 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Krakow, Poland, pp. 1-2.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Time series prediction is a challenging task in practice, and various methods have been proposed for it. However, most existing methods exploit only the historical series of values. The resulting predictive models might therefore be ineffective in some cases because: (1) the historical series of values is usually insufficient, and (2) features from heterogeneous sources, such as the intrinsic features of the data samples themselves, which could be very useful, are not taken into consideration. To address these issues, we propose a novel method that learns the predictive model from a combination of dynamic features extracted from the series of historical values and static features of the data samples. To evaluate the performance of our proposed method, we compare it with linear regression and boosted trees, and the experimental results validate our method's superiority.
Coluccia, A, Ghenescu, M, Piatrik, T, De Cubber, G, Schumann, A, Sommer, L, Klatte, J, Schuchert, T, Beyerer, J, Farhadi, M, Amandi, R, Aker, C, Kalkan, S, Saqib, M, Sharma, N, Daud, S, Makkah, K & Blumenstein, M 2017, 'Drone-vs-Bird detection challenge at IEEE AVSS2017', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the 'drone-vs-bird detection challenge' to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.
Diesner, J, Ferrari, E & Xu, G 2017, 'Welcome from the ASONAM 2017 program chairs', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2017, p. xviii.
Dong, F, Lu, J, Li, K & Zhang, G 2017, 'Concept drift region identification via competence-based discrepancy distribution estimation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Real-world data analytics often involves cumulative data. While such data contains valuable information, the pattern or concept underlying the data may change over time, a phenomenon known as concept drift. When learning under concept drift, it is essential to know when, how and where the context has evolved. Most existing drift detection methods focus only on triggering a signal when drift is detected, and little research has endeavored to explain how and where the data changes. To address this issue, we introduce kernel density estimation into a competence-based drift detection method, and develop competence-based discrepancy distribution estimation to identify specific regions of the data feature space where drift has occurred. Two experiments demonstrate that our proposed approach, competence-based discrepancy density estimation, can quantitatively highlight drift regions in the data feature space and produce results that are very close to the preset drift regions.
ElShaweesh, O, Hussain, FK, Lu, H, Al-Hassan, M & Kharazmi, S 2017, 'Personalized Web Search Based on Ontological User Profile in Transportation Domain', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24th International Conference on Neural Information Processing 2017, Springer International Publishing, Guangzhou, China, pp. 239-248.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Current conventional search engines deliver similar results to all users for the same query. Because of the variety of user interests and preferences, personalized search engines, based on semantics, hold the promise of providing more efficient information that better reflects users’ needs. The main feature of building a personalized web search is to represent user interests in terms of user profiles. This paper proposes a personalized search approach using an ontology-based user profile. The aim of this approach is to build user profiles based on user browsing behavior and semantic knowledge of specific domain ontology to enhance the quality of the search results. The proposed approach utilizes a re-ranked algorithm to sort the results returned by the search engine to provide a search result that best relates to the user query. This algorithm evaluates the similarity between a user query, the retrieved search results and the ontological concepts. This similarity is computed by taking into account a user’s explicit browsing behavior, semantic knowledge of concepts, and synonyms of term-based vectors extracted from the WordNet API. A set of experiments using a case study from a transport service domain validates the effectiveness of the proposed approach and demonstrates promising results.
Erfani, SS, Lawrence, C, Abedin, B, Beydoun, G & Malimu, L 2017, 'Indigenous people living with cancer: Developing a mobile health application for improving their psychological well-being', AMCIS 2017 - America's Conference on Information Systems: A Tradition of Innovation, Americas Conference on Information Systems, AIS Electronic Library, Boston, pp. 1-5.
View description>>
Poor cancer outcomes experienced by Indigenous Australians result from advanced cancer stages at diagnosis, poorer uptake of and adherence to treatments, higher levels of co-morbidity, and poorer access to inclusive and culturally appropriate care compared with non-Indigenous Australians. Socio-economics and social support can mitigate these problems. Technology-based interventions hold considerable promise for enhancing social support. This paper asks: what are the key features of a mobile health application designed to improve the social support, and consequently the psychological well-being, of Indigenous Australians living with cancer? To answer this question, a comprehensive literature review of studies conducted in the information systems and health disciplines has been undertaken and a theoretical model is proposed. This study contributes to the existing knowledge base through the development of a new theoretical model and the introduction of the features of a mobile health application that may have a positive impact on the psychological well-being of Indigenous Australian cancer patients.
Fang, XS, Sheng, QZ, Wang, X & Ngu, AHH 2017, 'Value Veracity Estimation for Multi-Truth Objects via a Graph-Based Approach', Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion, the 26th International Conference, ACM Press, pp. 777-778.
View/Download from: Publisher's site
Fang, XS, Sheng, QZ, Wang, X, Barhamgi, M, Yao, L & Ngu, AHH 2017, 'SourceVote: Fusing Multi-valued Data via Inter-source Agreements', CONCEPTUAL MODELING, ER 2017, 36th International Conference on Conceptual Modeling (ER), Springer International Publishing, Valencia, SPAIN, pp. 164-172.
View/Download from: Publisher's site
Fernando, KES, McGregor, C & James, AG 2017, 'CRISP-TDM0 for standardized knowledge discovery from physiological data streams: Retinopathy of prematurity and blood oxygen saturation case study', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 226-229.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Prior research proposed the combination of the CRoss Industry Standard Process for Temporal Data Mining (CRISP-TDM), which supports temporal data mining of physiological streams, with CRISP-DM0, which supports null-hypothesis-driven confirmatory data mining. This combined CRISP-TDM0 is utilised as the standardised approach to managing, reporting and performing retrospective clinical research and is designed to overcome the limitations of knowledge discovery in physiological data streams [1]. The temporal abstractions (TA) of high-fidelity blood oxygen saturation (SpO2) levels of nine premature neonates are analysed using data collected by the Artemis platform, which complies with the Big Data concept [2], and correlated with Retinopathy of Prematurity (ROP) data. Visualisation of hourly SpO2 TA patterns manifested three clusters, and this is further supported by mathematical review of the percentage of time spent in target, below-target and over-target oxygenation. Clustering based on ROP stage and gestational age identified probable associations within these three clusters. However, known risk factors showed no association with ROP.
Ferrari, A, Spoletini, P, Donati, B, Zowghi, D & Gnesi, S 2017, 'Interview Review: Detecting Latent Ambiguities to Improve the Requirements Elicitation Process', 2017 IEEE 25th International Requirements Engineering Conference (RE), 2017 IEEE 25th International Requirements Engineering Conference (RE), IEEE, Lisbon, pp. 400-405.
View/Download from: Publisher's site
View description>>
The review of software process artifacts, which include requirements as well as source code [1], is an effective practice for improving the quality of products [2]–[5]. In particular, the benefits of requirements reviews have been highlighted by several studies, especially regarding the identification of defects in requirements specifications [3], [6], [7]. Nevertheless, although the use of requirements reviews dates back at least 40 years [6], challenges remain for their widespread application in the software industry [8], [9]. Among these challenges, Salger highlights that “Software requirements are based on flawed ‘upstream’ requirements and reviews on requirements specifications are thus in vain” [8]. This observation emphasises the need to improve early requirements elicitation activities, especially to improve the completeness of specifications, a quality attribute that is recognised to be hard to assess by means of reviews [10].
Gao, F, Musial, K & Gabrys, B 2017, 'A Community Bridge Boosting Social Network Link Prediction Model', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17: Advances in Social Networks Analysis and Mining 2017, ACM, pp. 683-689.
View/Download from: Publisher's site
View description>>
Link prediction in social networks is a very challenging research problem. The majority of existing approaches are based on the assumption that a given network evolves following a single phenomenon, e.g. “rich get richer” or “friend of my friend is my friend”. However, network dynamics change over time, and different parts of the network evolve in different manners. Because of that, we hypothesise that prediction accuracy can be improved by treating different nodes and links differently. Building on that assumption, we propose a Community Bridge Boosting Prediction Model (CBBPM) that treats certain bridge nodes differently depending on their structural position. For such bridge nodes, the similarity score obtained using traditional link-based prediction methods is boosted. This increases the importance of these nodes while ensuring that CBBPM can be used with any existing link prediction method. Our experimental results show that such a bridge node similarity boosting mechanism can improve the accuracy of traditional link prediction methods.
Ghamry, AM, Alkalbani, AM, Tran, V, Tsai, Y-C, Hoang, ML & Hussain, FK 2017, 'Towards a Public Cloud Services Registry', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 18th International Conference Web Information Systems Engineering, Springer International Publishing, Puschino, Russia, pp. 290-295.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. A cloud services registry is a cloud services database which contains thousands of records of cloud consumers’ reviews and cloud services, such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The data set is harvested from a web portal called www.serchen.com. Each record holds detailed information about the service, such as the service name, service description, categories, key features, service provider link and review list. Each review contains the reviewer name, review date and review content. This work is an extension of our previous Blue Pages data set [6]. The data set is valuable for future research in cloud service identification, discovery, comparison and selection.
Ghantous, GB & Gill, AQ 2017, 'DevOps: Concepts, practices, tools, benefits and challenges', Proceedings ot the 21st Pacific Asia Conference on Information Systems: ''Societal Transformation Through IS/IT'', PACIS 2017, PACIS2017, AIS Electronic Library (AISeL), Malaysia.
View description>>
DevOps, which originated in the context of agile software development, is an appropriate approach for enabling the continuous delivery and deployment of working software in small releases. Organizations are taking significant interest in adopting DevOps ways of working. The interest is there; the challenge, however, is how to effectively adopt DevOps in practice. Before embarking on the DevOps journey, there is a need to clearly understand DevOps concepts, practices, tools, benefits and the underlying challenges. Thus, in order to address the research question at hand, this paper adopts a Systematic Literature Review (SLR) approach to identify, review and synthesize the relevant studies published in the public domain between 2010 and 2016. The SLR approach initially identified a set of 450 papers, of which 30 relevant papers were selected and reviewed to identify eight key DevOps concepts, twenty practices, and twelve tool categories. The research also identified seventeen benefits of using the DevOps approach for application development and four known challenges. The results of this review will serve as a knowledge base for researchers and practitioners, which can be used to effectively understand and establish an integrated DevOps capability in the local context.
Gill, AQ, Behbood, V, Ramadan-Jradi, R & Beydoun, G 2017, 'IoT architectural concerns: a systematic review.', ICC, International Conference on Internet of things and Cloud Computing, ACM, Cambridge, United Kingdom, pp. 117:1-117:1.
View/Download from: Publisher's site
View description>>
© 2017 ACM. There is increasing interest in studying and applying the Internet of Things (IoT) within the overall context of digital-physical ecosystems. Much has recently been published on the benefits and applications of IoT. The main question is: what are the key IoT architectural concerns that must be addressed to effectively develop and implement an IoT architecture? There is a need to systematically review and synthesize the literature on IoT architectural challenges or concerns. Using the SLR approach and applying customised search criteria derived from the research question, 22 relevant studies were identified and reviewed in this paper. The data from these papers were extracted to identify the IoT architectural challenges and relevant solutions. These results were organised into 9 major challenge categories and 7 solution categories. The results of this research will serve as a resource for practitioners and researchers in the effective adoption of, and in setting future research priorities and directions for, this emerging area of IoT architecture.
Gu, L, Wang, K, Liu, X, Guo, S & Liu, B 2017, 'A reliable task assignment strategy for spatial crowdsourcing in big data environment', 2017 IEEE International Conference on Communications (ICC), ICC 2017 - 2017 IEEE International Conference on Communications, IEEE.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. With the ubiquitous deployment of the mobile devices with increasingly better communication and computation capabilities, an emerging model called spatial crowdsourcing is proposed to solve the problem of unstructured big data by publishing location-based tasks to participating workers. However, massive spatial data generated by spatial crowdsourcing entails a critical challenge that the system has to guarantee quality control of crowdsourcing. This paper first studies a practical problem of task assignment, namely reliability aware spatial crowdsourcing (RA-SC), which takes the constrained tasks and numerous dynamic workers into consideration. Specifically, the worker confidence is introduced to reflect the completion reliability of the assigned task. Our RA-SC problem is to perform task assignments such that the reliability under budget constraints is maximized. Then, we reveal the typical property of the proposed problem, and design an effective strategy to achieve a high reliability of the task assignment. Besides the theoretical analysis, extensive experimental results also demonstrate that the proposed strategy is stable and effective for spatial crowdsourcing.
Guo, J, Yue, B, Xu, G, Yang, Z & Wei, J-M 2017, 'An Enhanced Convolutional Neural Network Model for Answer Selection', Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion, the 26th International Conference, ACM Press, Perth, Western Australia, pp. 789-790.
View/Download from: Publisher's site
View description>>
Answer selection is an important task in question answering (QA) from the Web. To address the intrinsic difficulty in encoding sentences with semantic meanings, we introduce a general framework, i.e., the Lexical Semantic Feature based Skip Convolution Neural Network (LSF-SCNN), with several optimization strategies. The intuitive idea is that granular representations with more semantic features of sentences are deliberately designed and estimated to capture the similarity between question-answer sentence pairs. The experimental results demonstrate the effectiveness of the proposed strategies, and our model outperforms state-of-the-art ones by up to 3.5% on the MAP and MRR metrics.
Gupta, D, Borah, P & Prasad, M 2017, 'A fuzzy based Lagrangian twin parametric-margin support vector machine (FLTPMSVM)', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In the spirit of the twin parametric-margin support vector machine (TPMSVM) and the support vector machine based on fuzzy membership values (FSVM), a new method termed the fuzzy based Lagrangian twin parametric-margin support vector machine (FLTPMSVM) is proposed in this paper to reduce the effect of outliers. In FLTPMSVM, we assign weights to the data samples on the basis of fuzzy membership values to reduce the effect of outliers. We also consider the square of the 2-norm of the slack variables to make the objective function strongly convex, and find the solution of the proposed FLTPMSVM by solving simple, linearly convergent iterative schemes instead of a pair of quadratic programming problems as in SVM, TWSVM, FTSVM and TPMSVM. No external toolbox is required for FLTPMSVM. Numerical experiments performed on artificial as well as well-known real-world datasets show that our proposed FLTPMSVM achieves better generalization performance and lower training cost in comparison to the support vector machine, twin support vector machine, fuzzy twin support vector machine and twin parametric-margin support vector machine.
Han, J, Wan, S, Lu, J & Zhang, G 2017, 'Tri-level multi-follower decision-making in a partial-cooperative situation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Tri-level decision-making addresses compromises between interactive decision entities that are distributed throughout a three-level hierarchy. These decision entities are respectively termed as the top-level leader, the middle-level follower and the bottom-level follower. This paper considers a tri-level multi-follower (TLMF) decision problem where both cooperative and uncooperative relationships coexist between multiple followers at the same level. In this situation, followers share some decision variables with their counterparts and also control individual decision variables to achieve their respective goals; this is also known as partial-cooperative TLMF decision-making. To support this category of decision-making, this paper first presents a linear model to characterize the partial-cooperative TLMF decision-making process. It then develops a vertex enumeration algorithm to obtain a solution to the resulting model. Lastly, we apply the proposed TLMF decision techniques to handle an inventory management problem in applications.
Han, Y, Wan, Y, Chen, L, Xu, G & Wu, J 2017, 'Exploiting Geographical Location for Team Formation in Social Coding Sites', PAKDD 2017: Advances in Knowledge Discovery and Data Mining, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Jeju Island, Korea, pp. 499-510.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Social coding sites (SCSs) such as GitHub and BitBucket are collaborative platforms where developers from different backgrounds (e.g., culture, language, location, skills) form a team to contribute to a shared project collaboratively. One essential task in such collaborative development is how to form an optimal team in which each member makes his/her greatest contribution, which may have a great effect on the efficiency of collaboration. To the best of our knowledge, all existing related works model the team formation problem as minimizing the communication cost among developers or taking the workload of individuals into account, ignoring the impact of each developer's geographical location. In this paper, we aim to exploit the geographical proximity factor to improve the performance of team formation in social coding sites. Specifically, we incorporate the communication cost and geographical proximity into a unified objective function and propose a genetic algorithm to optimize it. Comprehensive experiments on a real-world dataset (GitHub) demonstrate the performance of the proposed model in comparison with some state-of-the-art ones.
Haque, MN, Mathieson, L & Moscato, P 2017, 'A memetic algorithm for community detection by maximising the connected cohesion', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, Hawaii, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Community detection is an exciting field of research which has attracted the interest of many researchers during the last decade. While many algorithms and heuristics have been proposed to scale existing approaches, a relatively smaller number of studies have explored different measures of the quality of a detected community. Recently, a new score called 'cohesion' was introduced in the computing literature. The cohesion score is based on comparing the number of triangles in a given group of vertices to the number of triangles only partly in that group. In this contribution, we propose a memetic algorithm that aims to find a subset of the vertices of an undirected graph that maximizes the cohesion score. The associated combinatorial optimisation problem is known to be NP-hard, and we also prove it to be W[1]-hard when parameterized by the score. We use a local search individual improvement heuristic to expand the putative solution. We then remove all vertices from the group which are not part of any triangle, and expand the neighbourhood by adding triangles which have at least two nodes already in the group. Finally, we compute the maximum connected component of this group. The highest quality solutions of the memetic algorithm have been obtained for four real-world network scenarios, and we compare our results with ground-truth information about the graphs. We also compare the results to those obtained with eight other community detection algorithms via interrater agreement measures. Our results give a new lower bound on the parameterized complexity of this problem and novel insights into its potential usefulness as a new, natural score for community detection.
He, Q, Zhou, R, Zhang, X, Wang, Y, Ye, D, Chen, F, Chen, S, Grundy, J & Yang, Y 2017, 'Efficient Keyword Search for Building Service-Based Systems Based on Dynamic Programming', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 462-470.
View/Download from: Publisher's site
View description>>
The advances in service-oriented architecture (SOA) have fueled the demand for building service-based systems (SBSs) by composing existing services. Finding appropriate component services is a key step during the process for building SBSs. However, existing approaches require that system engineers have detailed knowledge of SOA techniques, which is often too demanding. A recent approach has been proposed to address this issue. However, it suffers from poor efficiency, which is increasingly critical as the service repository continues to grow. To address this issue, this paper proposes KS3+, a new, highly efficient approach that allows a system engineer to query for a system solution with a few keywords that represent the required system tasks. Modeling the problem of answering such a keyword query as a dynamic programming problem, KS3+ can quickly find a system solution composed of services that perform the required system tasks. It offers an efficient paradigm that significantly reduces the time and effort during the process for building SBSs. The results of extensive experiments on a real-world web service dataset demonstrate the high efficiency and effectiveness of KS3+.
Henderson, H & Leong, TW 2017, 'Lessons learned', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, pp. 533-537.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. This paper presents a study on user difficulties with parking meters. Using known Human-Computer Interaction (HCI) concepts as a guide, we explore the reasons for these difficulties and propose recommendations for designers of parking meters to improve usability and user experience. This paper also considers the applicability of these learnings to similar technologies that are of interest to HCI.
Herron, D, Moncur, W & van den Hoven, E 2017, 'Digital Decoupling and Disentangling', Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17: Designing Interactive Systems Conference 2017, ACM, Edinburgh, United Kingdom.
View/Download from: Publisher's site
View description>>
Romantic relationships are often facilitated through digital technologies, such as social networking sites and communication services. They are also facilitated through "digital possessions", such as messages sent to mobile devices and photos shared through social media. When individuals break up, digitally disconnecting can be facilitated by using those digital technologies and managing or curating these digital possessions. This research explores the break-up stories of 13 individuals aged between 18 and 52. The aim of this work is to inform the design of systems focused on supporting individuals to decouple and disentangle digitally in the wake of a break-up. Four areas of interest emerged from the data: communication, using digital possessions, managing digital possessions, and experiences of technology. Opportunities for design were identified in decoupling and disentangling, and designing around guilt.
Hu, L, Cao, L, Wang, S, Xu, G, Cao, J & Gu, Z 2017, 'Diversifying Personalized Recommendation with User-session Context', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 1858-1864.
View/Download from: Publisher's site
View description>>
Recommender systems (RS) have become an integral part of our daily life. However, most current RS often repeatedly recommend items to users with similar profiles. We argue that recommendation should be diversified by leveraging session contexts with personalized user profiles. To this end, current session-based RS (SBRS) often assume a rigidly ordered sequence over the data, which does not fit many real-world cases. Moreover, personalization is often omitted in current SBRS. Accordingly, a personalized SBRS over relaxedly ordered user-session contexts is more pragmatic. However, deep-structured models tend to be too complex to serve online SBRS owing to the large number of users and items. Therefore, we design an efficient SBRS with shallow wide-in-wide-out networks, inspired by successful experience in modern language modeling. Experiments on a real-world e-commerce dataset show the superiority of our model over state-of-the-art methods.
Huan, H, Wei, Z, Liang, L & Yang, L 2017, 'Collaborative Filtering Recommendation Model based on Convolutional Denoising Auto Encoder', Proceedings of the 12th Chinese Conference on Computer Supported Cooperative Work and Social Computing, ChineseCSCW '17: Chinese Conference on Computer Supported Cooperative Work and Social Computing, ACM.
View/Download from: Publisher's site
Huang, C, Yao, L, Wang, X, Benatallah, B & Sheng, QZ 2017, 'Expert as a Service: Software Expert Recommendation via Knowledge Domain Embeddings in Stack Overflow', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 317-324.
View/Download from: Publisher's site
Hung, Y-C, Wang, Y-K, Prasad, M & Lin, C-T 2017, 'Brain dynamic states analysis based on 3D convolutional neural network', 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, Banff, AB, Canada, pp. 222-227.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Drowsy driving is a major cause of traffic accidents. Monitoring changes in brain signals provides an effective and direct way to detect drowsiness. A 3D convolutional neural network (3D CNN)-based forecasting system has been proposed to monitor electroencephalography (EEG) signals and predict fatigue level during driving. Limited weight sharing and channel-wise convolution were applied to extract the significant phenomena in various frequency bands of brain signals and the spatial information of EEG channel locations, respectively. The proposed 3D CNN with limited weight sharing and channel-wise convolution has been demonstrated to predict the reaction time (RT) of driving with low root mean square error (RMSE) from the brain dynamics. The proposed approach outperforms state-of-the-art algorithms such as the traditional CNN, Neural Network (NN), and support vector regression (SVR). Compared with the traditional CNN and an Artificial Neural Network, the RMSE of 3D CNN-based RT prediction improved by 9.5% (RMSE from 0.6322 to 0.5720) and 8% (RMSE from 0.6217 to 0.5720), respectively. We envision that this study might open a new branch between deep learning applications in neuro-cognitive analysis and real-world applications.
Huo, H, Liu, X, Zheng, D, Wu, Z, Yu, S & Liu, L 2017, 'Collaborative Filtering Fusing Label Features Based on SDAE', Springer International Publishing, pp. 223-236.
View/Download from: Publisher's site
Hussain, W, Hussain, FK & Hussain, OK 2017, 'Risk Management Framework to Avoid SLA Violation in Cloud from a Provider’s Perspective', ADVANCES ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING, Advances on P2P, Parallel, Grid, Cloud and Internet Computing, Springer International Publishing, Soonchunhyang Univ, Asan, SOUTH KOREA, pp. 233-241.
View/Download from: Publisher's site
View description>>
Managing risk is an important issue for a service provider seeking to avoid SLA violation in any business. The elastic nature of cloud computing allows consumers to use a number of resources depending on their business needs. Therefore, it is crucial for service providers, particularly SMEs, to first form viable SLAs and then manage them. When a provider and a consumer execute an agreed SLA, the next step is monitoring and, if a violation is predicted, appropriate action should be taken to manage that risk. In this paper we propose a Risk Management Framework to avoid SLA violation (RMF-SLA) that assists cloud service providers in managing the risk of service violation. Our framework uses a Fuzzy Inference System (FIS) and considers inputs such as the reliability of a consumer, the provider's attitude towards risk, and the predicted trajectory of the consumer's behavior to calculate the amount of risk and the appropriate action to manage it. The framework will help small-to-medium-sized service providers manage the risk of service violation in an optimal way.
Inan, DI & Beydoun, G 2017, 'Facilitating disaster knowledge management with agent-based modelling', Proceedings of the 21st Pacific Asia Conference on Information Systems: 'Societal Transformation Through IS/IT', PACIS 2017.
View description>>
In developed countries, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DISPLANs) that can be accessed as needs arise. Nevertheless, accessing the appropriate plan in a timely manner and sharing activities between plans often requires domain knowledge and intimate knowledge of the plans in the first place. In this paper, we introduce an Agent-Based (AB) knowledge analysis framework to convert DISPLANs into a collection of knowledge units that can be stored in a unified repository. The repository of DM actions then enables the mixing and matching of knowledge between different plans. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF) to allow free-flowing access to the knowledge across the layers. We use the flood DISPLAN of the SES (State Emergency Service), an authoritative DM agency in the NSW (New South Wales) State of Australia, to illustrate and validate the developed framework.
Inan, DI, Beydoun, G & Opper, S 2017, 'Customising Agent Based Analysis Towards Analysis of Disaster Management Knowledge', Australasian Conference on Information Systems, University of Wollongong, Wollongong NSW, pp. 1-12.
View description>>
In developed countries such as Australia, for recurring disasters (e.g. floods), there are dedicated document repositories of Disaster Management Plans (DISPLANs), and supporting doctrine and processes that are used to prepare organisations and communities for disasters. They are maintained on an ongoing cyclical basis and form a key information source for community education, engagement and awareness programmes in the preparation for and mitigation of disasters. DISPLANs, generally in semi-structured text document format, are then accessed and activated during the response and recovery to incidents to coordinate emergency service and community safety actions. However, accessing the appropriate plan and the specific knowledge within the text document from across its conceptual areas in a timely manner and sharing activities between stakeholders requires intimate domain knowledge of the plan contents and its development. This paper describes progress on an ongoing project with NSW State Emergency Service (NSW SES) to convert DISPLANs into a collection of knowledge units that can be stored in a unified repository, with the goal to form the basis of a future knowledge sharing capability. All Australian emergency services covering a wide range of hazards develop DISPLANs of various structure and intent; in general the plans are created as instances of a template, for example those which are developed centrally by the NSW and Victorian SESs' State planning policies. In this paper, we illustrate how, by using selected templates as part of an elaborate agent-based process, we can apply agent-oriented analysis more efficiently to convert extant DISPLANs into a centralised repository. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). The work is illustrated using DISPLANs along the flood-prone Murrumbidgee River in central NSW.
Inibhunu, C, Schauer, A, Redwood, O, Clifford, P & McGregor, C 2017, 'Predicting hospital admissions and emergency room visits using remote home monitoring data', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, pp. 282-285.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The costs of lengthy hospital admissions (HA) and multiple emergency room visits (ER visits) from patients with conditions such as heart failure (HF) and chronic obstructive pulmonary disease (COPD) can place a significant burden on healthcare systems. Understanding the various factors contributing to hospitalization and ER visits could aid cost-effective management in the delivery of services, leading to potential improvements in quality of life for patients. This can be facilitated by collecting data using remote patient monitoring (RPM) services and using analytics to discover important factors about patients. This paper presents our research that utilizes predictive modeling to determine key factors that are significant determinants of hospitalization and multiple ER visits. The results show that gender, past medical history and vital status are key factors in hospital admissions and ER visits. Additionally, when a factor indicating the period before, during and after an ER visit was included, the resulting model showed a very high likelihood ratio and improved p-values on all vital status measures. Our results show that more research is needed to fully understand the temporal patterns among variables during hospitalization or an ER visit.
Inibhunu, C, Schauer, A, Redwood, O, Clifford, P & McGregor, C 2017, 'The impact of gender, medical history and vital status on emergency visits and hospital admissions: A remote patient monitoring case study', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 278-281.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Remote Patient Monitoring (RPM) is considered to have the potential to improve the quality of life of patients diagnosed with cardiac conditions such as heart failure (HF) and chronic obstructive pulmonary disease (COPD). Remote collection and analysis of patient data could aid effective decision making about the care needed by monitored patients. This could lead to a reduction in healthcare costs as well as improved outcomes for patients. As a component of our predictive analytics research, this paper presents the results of a remote patient monitoring study of patients from the Cardiac Clinic of Southlake Regional Health Centre who were referred to WeCare for home-based monitoring. Results indicate statistically significant evidence of the impact of gender, medical history and vital status as risk factors for subsequent hospitalization and multiple emergency room visits.
Jayan Chirayath Kurian, J, Watkins, J & Macallum, K 2017, 'User-generated content on the Facebook page of Emergency Management Organizations: Perspectives of Emergency Management Administrators', Sydney.
Jiang, J, Chaczko, Z, Al-Doghman, F & Narantaka, W 2017, 'New LQR Protocols with Intrusion Detection Schemes for IOT Security', 2017 25th International Conference on Systems Engineering (ICSEng), 2017 25th International Conference on Systems Engineering (ICSEng), IEEE, Las Vegas, NV, USA, pp. 466-474.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Link quality protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are vulnerable to malicious attacks that can exploit them. A malicious node may share false information with its neighboring sensor nodes to affect the computation of their estimates; consequently, a malicious node may behave such that its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. To accomplish this task, the MINTROUTE and CTP routing protocols are selected and updated with intrusion detection schemes (IDSs) for further investigation with other factors. It is proved that these two routing protocols possess inherent susceptibilities that are capable of interrupting the link quality calculations. Malicious nodes that abuse such vulnerabilities can be registered through operational detection mechanisms. The overall performance of the new LQR protocol with IDS features is evaluated, validated and presented via detection rates and false alarm rates.
Jiang, J, Wen, S, Yu, S, Xiang, Y, Zhou, W & Hassan, H 2017, 'The structure of communities in scale-free networks', Concurrency and Computation: Practice and Experience, Wiley, p. e4040.
View/Download from: Publisher's site
View description>>
Summary: Scale-free networks are often used to model a wide range of real-world networks, such as social, technological, and biological networks. Understanding the structure of scale-free networks evolves into a big data problem for business, management, and protein function prediction. In the past decade, there has been a surge of interest in exploring the properties of scale-free networks. Two interesting properties have attracted much attention: assortative mixing and community structure. However, these two properties have been studied separately, in either theoretical models or real-world networks. In this paper, we show that the structural features of communities are highly related to the assortative mixing in scale-free networks. According to the value of the assortativity coefficient, scale-free networks can be categorized into assortative, disassortative, and neutral networks, respectively. We systematically analyze the community structure in these three types of scale-free networks through six metrics: node embeddedness, link density, hub dominance, community compactness, the distribution of community sizes, and the presence of hierarchical communities. We find that the three types of scale-free networks exhibit significant differences in these six metrics of community structure. First, assortative networks present high embeddedness, meaning that many links lie within communities but few lie between communities. This leads to the high link density of communities. Second, disassortative networks exhibit prominent hubs in communities, which results in the high compactness of communities, so that nodes can reach each other via short paths. Third, in neutral networks, a large portion of links act as community bridges, so they display sparse and less compact communities. In addition, we find that (dis)assortative networks show hierarchical community structure with power-law-distributed community sizes, while neutral ...
Kolamunna, H, Chauhan, J, Hu, Y, Thilakarathna, K, Perino, D, Makaroff, D & Seneviratne, A 2017, 'Are Wearables Ready for HTTPS? On the Potential of Direct Secure Communication on Wearables', 2017 IEEE 42nd Conference on Local Computer Networks (LCN), 2017 IEEE 42nd Conference on Local Computer Networks (LCN), IEEE.
View/Download from: Publisher's site
Kridalukmana, R, Lu, HY & Naderpour, M 2017, 'An object oriented Bayesian network approach for unsafe driving maneuvers prevention system', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. As a main contributor to traffic accidents, unsafe driving maneuvers have drawn attention from the automobile industry. Although driving feedback systems have been developed in an effort to reduce dangerous driving, they do little to develop drivers' awareness and are therefore not preventive in nature. To cover this weakness, this paper presents an approach to developing drivers' awareness in order to prevent dangerous driving maneuvers. The approach uses an Object-Oriented Bayesian Network to model hazardous situations. The resulting model can truthfully reflect a driving environment based upon situation analysis, data generated from sensors, and maneuver detectors. In addition, it alerts drivers when a driving situation with a high probability of causing an unsafe maneuver is detected. This model is then used to design a system which can raise drivers' awareness and prevent unsafe driving maneuvers.
Kuppili Venkata, S, Musial, K, Mahmoud, S & Keppens, J 2017, 'Demonstration: Multi-agent System for Distributed Cache Maintenance', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 364-368.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2017. Innovations in science and technology are increasing the demand for huge data transfers and hence the number of data caches. In this paper, we consider the community caching solution, CommCache, where many groups of users work together on related projects distributed all over the world. We demonstrate the use of proactive caches for the data placement problem with the help of multi-agent coordination.
Kuppili Venkata, S, Musial, K, Mahmoud, S & Keppens, J 2017, 'Multi-Agent System for Distributed Cache Maintenance', ADVANCES IN PRACTICAL APPLICATIONS OF CYBER-PHYSICAL MULTI-AGENT SYSTEMS: THE PAAMS COLLECTION, PAAMS 2017, 15th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS), Springer International Publishing, Porto, PORTUGAL, pp. 157-169.
View/Download from: Publisher's site
Kutay, CM & Lawrence, C 2017, 'Enduring Engineering for our Water Resources', Putting Water to Work: Australian Engineering Heritage, Mildura.
Kutay, CM & Lawrence, C 2017, 'Language Located', Information Technologies for Indigenous Communities, Melbourne, September 2017.
Lawrence, C, Leong, TW, Gay, V, Woods, A & Wadley, G 2017, '#thismymob', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, pp. 646-647.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. We propose to hold a one-day workshop on developing projects relating to #thismymob: Digital Land Rights and Reconnecting Indigenous Communities at OzCHI 2017 Brisbane. See http://www.arc.gov.au/newsmedia/news/thismymob-digital-land-rights-and-reconnecting-indigenous-communities.
Li, B, Xiong, J, Liu, B, Gui, L, Qiu, M & Shi, Z 2017, 'On Services Pushing and Caching in High-speed Train by Using Converged Broadcasting and Cellular Networks', 2017 IEEE INTERNATIONAL SYMPOSIUM ON BROADBAND MULTIMEDIA SYSTEMS AND BROADCASTING (BMSB), 12th IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE, Cagliari, ITALY, pp. 457-463.
Li, B, Xiong, J, Liu, B, Gui, L, Qiu, M & Shi, Z 2017, 'On services pushing and caching in high-speed train by using converged broadcasting and cellular networks', 2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. This paper proposes a services pushing and caching algorithm for high-speed trains (HST) using converged wireless broadcasting and cellular networks (CWBCN). Services pushing and caching on the HST is an efficient way to improve the capacity of the network, and it can also lead to a better user experience. In the proposed services pushing and caching model, the most popular services are delivered and cached on the vehicle relay station (VRS) of the train ahead of the departure time. Then, the most popular services are broadcast and cached on the User Equipments (UEs) after all the passengers are on the train, while the less popular services are transmitted to users in P2P mode via the relayed cellular network on the train. To maximize the network capacity in limited time slots, we transform the issue into a 0-1 Knapsack problem. A dynamic programming algorithm is adopted to solve it in polynomial time. As passengers may get on or off the train while the most popular services are being pushed, an information retransfer algorithm is also proposed for the case where more intermediate stations are considered. Simulations show that the proposed algorithms can efficiently improve the capacity of the converged network.
Liu, A, Song, Y, Zhang, G & Lu, J 2017, 'Regional Concept Drift Detection and Density Synchronized Drift Adaptation', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 2280-2286.
View/Download from: Publisher's site
View description>>
In data stream mining, the emergence of new patterns or a pattern ceasing to exist is called concept drift. Concept drift complicates the learning process because of the inconsistency between existing data and upcoming data. Since concept drift was first proposed, numerous articles have been published to address this issue in terms of distribution analysis. However, most distribution-based drift detection methods assume that a drift happens at an exact time point, and the data that arrived before that time point is considered unimportant. Thus, if a drift occurs only in a small region of the entire feature space, the non-drifted regions may also be suspended, thereby reducing the learning efficiency of models. To retrieve non-drifted information from suspended historical data, we propose a local drift degree (LDD) measurement that can continuously monitor regional density changes. Instead of suspending all historical data after a drift, we synchronize the regional density discrepancies according to LDD. Experimental evaluations on three public data sets show that our concept drift adaptation algorithm improves accuracy compared to other methods.
Liu, A, Zhang, G & Lu, J 2017, 'Fuzzy time windowing for gradual concept drift adaptation', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The aim of machine learning is to find hidden insights into historical data, and then apply them to forecast the future data or trends. Machine learning algorithms optimize learning models for lowest error rate based on the assumption that the historical data and the data to be predicted conform to the same knowledge pattern (data distribution). However, if the historical data is not enough, or the knowledge pattern keeps changing (data uncertainty), this assumption will become invalid. In data stream mining, this phenomenon of knowledge pattern changing is called concept drift. To address this issue, we propose a novel fuzzy windowing concept drift adaptation (FW-DA) method. Compared to conventional windowing-based drift adaptation algorithms, FW-DA achieves higher accuracy by allowing the sliding windows to keep an overlapping period so that the data instances belonging to different concepts can be determined more precisely. In addition, FW-DA statistically guarantees that the upcoming data conforms to the inferred knowledge pattern with a certain confidence level. To evaluate FW-DA, four experiments were conducted using both synthetic and real-world data sets. The experiment results show that FW-DA outperforms the other windowing-based methods including state-of-the-art drift adaptation methods.
Liu, B, Chen, L, Zhu, X, Zhang, Y, Zhang, C & Qiu, W 2017, 'Protecting location privacy in spatial crowdsourcing using encrypted data', Advances in Database Technology - EDBT, International Conference on Extending Database Technology, Open Proceedings, Venice, Italy, pp. 478-481.
View/Download from: Publisher's site
View description>>
In spatial crowdsourcing, spatial tasks are outsourced to a set of workers in proximity of the task locations for efficient assignment. It usually requires workers to disclose their locations, which inevitably raises security concerns about the privacy of the workers’ locations. In this paper, we propose a secure SC framework based on encryption, which ensures that workers’ location information is never released to any party, yet the system can still assign tasks to workers situated in proximity of each task’s location. We solve the challenge of assigning tasks based on encrypted data using homomorphic encryption. Moreover, to overcome the efficiency issue, we propose a novel secure indexing technique with a newly devised SKD-tree to index encrypted worker locations. Experiments on real-world data evaluate various aspects of the performance of the proposed SC platform.
Liu, B, Zhou, W, Yu, S, Wang, K, Wang, Y, Xiang, Y & Li, J 2017, 'Home Location Protection in Mobile Social Networks: A Community Based Method (Short Paper)', INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2017, 13th International Conference on Information Security Practice and Experience (ISPEC) / 3rd International Symposium on Security and Privacy in Social Networks and Big Data (SocialSec), Springer International Publishing, Melbourne, AUSTRALIA, pp. 694-704.
View/Download from: Publisher's site
Liu, F, Zhang, G & Lu, J 2017, 'Heterogeneous unsupervised domain adaptation based on fuzzy feature fusion', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Domain adaptation is a transfer learning approach that has been widely studied in the last decade. However, existing works still have two limitations: 1) the feature spaces of the domains are homogeneous, and 2) the target domain has at least a few labeled instances. Both limitations significantly restrict the domain adaptation approach when knowledge is transferred across domains, especially in the current era of big data. To address both issues, this paper proposes a novel fuzzy-based heterogeneous unsupervised domain adaptation approach. This approach maps the feature spaces of the source and target domains onto the same latent space constructed by fuzzy features. In the new feature space, the label spaces of two domains are maintained to reduce the probability of negative transfer occurring. The proposed approach delivers superior performance over current benchmarks, and the heterogeneous unsupervised domain adaptation (HeUDA) method provides a promising means of giving a learning system the associative ability to judge unknown things using related knowledge.
Liu, S, Pang, N, Xu, G & Liu, H 2017, 'Collaborative Filtering via Different Preference Structures', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Knowledge Science, Engineering and Management, Springer International Publishing, Melbourne, Australia, pp. 309-321.
View/Download from: Publisher's site
View description>>
© Springer International Publishing AG 2017. Recently, social network websites have started to provide third-party sign-in options via the OAuth 2.0 protocol. For example, users can log in to the Netflix website using their Facebook accounts. With this service, accounts of the same user are linked together, and so is their information. This provides an opportunity to create more complete profiles of users, leading to improved recommender systems. However, user opinions distributed over different platforms are in different preference structures, such as ratings, rankings, pairwise comparisons, voting, etc. As existing collaborative filtering techniques assume the homogeneity of preference structure, how to learn from different preference structures simultaneously remains a challenging task. In this paper, we propose a fuzzy preference relation-based approach to enable collaborative filtering via different preference structures. Experimental results on public datasets demonstrate that our approach can effectively learn from different preference structures, and show strong resistance to the noise and biases introduced by cross-structure preference learning.
Liu, S, Xu, G, Zhu, X & Zhou, Z 2017, 'Towards simplified insurance application via sparse questionnaire optimization', 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Poland, pp. 1-2.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Life insurance application requires in-person meetings with underwriters, tedious paperwork, and an average waiting period of six weeks before an offer can be made. This outdated process has become a barrier to broader consumer adoption, resulting in a large coverage gap. In this work, we aim to close this gap by leveraging data mining techniques to optimize the insurance questionnaire form. Our experiment on 10 years of insurance application data identified that only ∼2% of all questions show high relevance for determining the risks of applicants, resulting in a significantly simplified questionnaire.
Liu, W, Chang, X, Chen, L & Yang, Y 2017, 'Early Active Learning with Pairwise Constraint for Person Re-identification', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer International Publishing, Skopje, Macedonia, pp. 103-118.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Research on person re-identification (re-id) has attracted much attention in the machine learning field in recent years. With sufficient labeled training data, supervised re-id algorithms can obtain promising performance. However, producing labeled data for training supervised re-id models is an extremely challenging and time-consuming task because it requires every pair of images across non-overlapping camera views to be labeled. Moreover, in the early stage of experiments, when labor resources are limited, only a small amount of data can be labeled. Thus, it is essential to design an effective algorithm to select the most representative samples. This is referred to as the early active learning or early stage experimental design problem. The pairwise relationship plays a vital role in the re-id problem, but most existing early active learning algorithms fail to consider this relationship. To overcome this limitation, in this paper we propose a novel and efficient early active learning algorithm with a pairwise constraint for person re-identification. By introducing the pairwise constraint, the closeness of similar representations of instances is enforced in active learning. This benefits the performance of active learning for re-id. Extensive experimental results on four benchmark datasets confirm the superiority of the proposed algorithm.
Lu, H, Heng, J & Wang, C 2017, 'An AI-Based Hybrid Forecasting Model for Wind Speed Forecasting', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Neural Information Processing, Springer International Publishing, Guangzhou, China, pp. 221-230.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Forecasting of wind speed plays an important role in wind power prediction for the management of wind energy. Due to the intermittent nature of wind, accurate forecasting of wind speed has been a long-standing research challenge. Artificial neural networks (ANNs) are one of the promising approaches to predicting wind speed. However, since the results of ANN-based models depend strongly on the initial weight and threshold values, which are usually randomly generated, the stability of forecasting results is not always satisfactory. This paper presents a new hybrid model for short-term forecasting of wind speed with high accuracy and strong stability, built by optimizing the parameters of a generalized regression neural network (GRNN) using a multi-objective firefly algorithm (MOFA). To evaluate the effectiveness of this hybrid algorithm, we apply it to short-term forecasting of wind speed from four wind power stations in Penglai, China, alongside four typical ANN-based models: back propagation neural network (BPNN), radial basis function neural network (RBFNN), wavelet neural network (WNN) and GRNN. The comparison results clearly show that this hybrid model can significantly reduce the impact of random initialization on the forecasting results and achieve good accuracy and stability.
Lucassen, G, Dalpiaz, F, van der Werf, JMEM, Brinkkemper, S & Zowghi, D 2017, 'Behavior-Driven Requirements Traceability via Automated Acceptance Tests', 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), IEEE, Lisbon, pp. 431-434.
View/Download from: Publisher's site
View description>>
Although advances in information retrieval have significantly improved automated traceability tools, their accuracy is still far from 100% and they therefore still need human intervention. Furthermore, despite the demonstrated benefits of traceability, many practitioners find the overhead of its creation and maintenance too high. We propose the Behavior Driven Traceability Method (BDT), which takes a different standpoint on automated traceability: we establish ubiquitous traceability between user story requirements and source code by taking advantage of the automated acceptance tests that are created as part of the Behavior Driven Development process.
Madhisetty, S & Williams, M-A 2017, 'Framework for Privacy in Photos and Videos When using Social Media', Proceedings of the 19th International Conference on Enterprise Information Systems, 19th International Conference on Enterprise Information Systems, SCITEPRESS - Science and Technology Publications, Porto, Portugal, pp. 331-336.
View/Download from: Publisher's site
View description>>
Privacy is a social construct. Having said that, how can it be contextualised and studied scientifically? This research contributes by investigating how to manage privacy better in the context of sharing and storing photos and videos using social media. Social media such as Facebook, Twitter, WhatsApp and many more applications are becoming popular. The instant sharing of tacit information via photos and videos makes the problem of privacy even more critical. The main problem is that nobody can define the actual meaning of privacy. Though there are definitions of privacy and Acts to protect it, there is no clear consensus as to what it actually means. I asked myself: how do I manage something when I don't know what it means exactly? I then decided to do this research by asking questions about privacy in particular categories of photos so that I could arrive at a general consensus. The data has been processed using the principles of Grounded Theory (GT) to develop a framework which assists in the effective management of privacy in photos and videos.
Maggs, CA & Musial-Gabrys, K 2017, 'Linnean systematics in the age of Big Data', Phycologia, International Phycological Society, pp. 123-124.
Manzoor, S, Manzoor, M & Hussain, W 2017, 'An Analysis of Energy-Efficient Approaches Used for Virtual Machines and Data Centres', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 91-96.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The adoption of cloud computing has increased significantly, but this has given rise to the problem of efficient energy usage. The efficient use of energy by data centers and virtual machines can help to minimize costs, deadlines, resource utilization and execution times. There is a consequent need for different approaches that can reduce energy consumption whilst still achieving the multiple objectives of cloud computing. In this study, we examine a number of different approaches discussed in the recent literature w.r.t. energy-efficient cloud workflow management, and we compare these approaches for the energy-efficient usage of data centers and virtual machines. The results show that virtual machine scheduling and virtual machine allocation are the most commonly used approaches for achieving optimal energy consumption.
McGregor, C, Bonnis, B, Stanfield, B & Stanfield, M 2017, 'Integrating Big Data analytics, virtual reality, and ARAIG to support resilience assessment and development in tactical training', 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), IEEE.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Combat tactical training activities utilising virtual reality environments are increasingly being used to create training scenarios that promote resilience against stressors and to enable standardized training scenarios that allow trainees to learn techniques for handling various stressors. Resilience is an important component of mental health. However, assessment of trainees' responses to these training activities has either been limited to various pre- and post-training assessment metrics or collected in parallel during experiments and analysed after collection rather than in real time. New Big Data approaches have the potential to provide real-time analytics. We have created a Big Data analytics platform, Athena, that in real time acquires data from a first-person shooter military combat simulation game, ArmA 3, as well as the data ArmA 3 sends to the muscle stimulation component of a multisensory garment, ARAIG, which provides on-body feedback to the wearer for communications, weapon fire and being hit, and integrates that data with physiological response data such as heart rate, breathing behaviour and blood oxygen saturation. We present results from our initial pilot study from an ethics-approved equipment integration study. Our approach is equally applicable to Virtual Reality Graded Exposure Therapy with physiological monitoring.
McGregor, C, Orlov, O, Baevsky, R, Chernikova, A & Rusanov, V 2017, 'A Method for the Integration of Real-time Probabilistic Approaches for Astronaut Wellness in Human in the Loop Related Missions and Situations with Big Data Analytics', 19th AIAA Non-Deterministic Approaches Conference, 19th AIAA Non-Deterministic Approaches Conference, American Institute of Aeronautics and Astronautics, Grapevine, Texas.
View/Download from: Publisher's site
View description>>
© 2017, American Institute of Aeronautics and Astronautics Inc, AIAA. All rights reserved. The man-instrumentation-equipment-vehicle-environment ecosystem is complex in aerospace missions. The health status of the individual has important implications for decision making and performance that should be factored into assessments of the probability of success and risk of failure in both offline and real-time models. To date, probabilistic models have not considered the dynamic nature of health status. Big Data analytics is enabling new forms of analytics to assess health status in real time. There is great potential to integrate dynamic health status information with platforms assessing risk and the probability of success, enabling dynamic, individualized, real-time probabilistic predictive risk assessment. In this research we present an approach utilizing Big Data analytics to enable continuous assessment of astronaut health risk and show its implications for integration with HITL-related aerospace missions.
Meng, Q, Catchpoole, D, Skillicorn, D & Kennedy, PJ 2017, 'Relational autoencoder for feature extraction', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 364-371.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Feature extraction becomes increasingly important as data grows high-dimensional. The autoencoder, a neural-network-based feature extraction method, has achieved great success in generating abstract features of high-dimensional data. However, it fails to consider the relationships between data samples, which may affect experimental results when using the original and new features. In this paper, we propose a Relational Autoencoder model that considers both data features and their relationships. We also extend it to work with other major autoencoder models, including the Sparse Autoencoder, Denoising Autoencoder and Variational Autoencoder. The proposed relational autoencoder models are evaluated on a set of benchmark datasets, and the experimental results show that considering data relationships can generate more robust features that achieve lower reconstruction loss and hence a lower error rate in subsequent classification compared to the other variants of autoencoders.
Meng, Q, Wu, J, Ellis, J & Kennedy, PJ 2017, 'Dynamic island model based on spectral clustering in genetic algorithm', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1724-1731.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Maintaining relatively high diversity is important to avoid premature convergence in population-based optimization methods. The island model is widely considered a major approach to achieving this because of its flexibility and high efficiency. The model maintains a group of sub-populations on different islands and allows sub-populations to interact with each other via predefined migration policies. However, the current island model has some drawbacks. One is that after a certain number of generations, different islands may retain quite similar, converged sub-populations, thereby losing diversity and decreasing efficiency. Another is that determining the number of islands to maintain is also very challenging. Meanwhile, initializing many sub-populations increases the randomness of the island model. To address these issues, we propose a dynamic island model (DIM-SP) which can force each island to maintain a different sub-population, control the number of islands dynamically and start with one sub-population. The proposed island model outperforms three other state-of-the-art island models on three baseline optimization problems: job shop scheduling, the travelling salesman problem and the quadratic multiple knapsack problem.
Mi, J, Wang, K, Liu, B, Ding, F, Sun, Y & Huang, H 2017, 'A Multiobjective Evolution Algorithm Based Rule Certainty Updating Strategy in Big Data Environment', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore.
View/Download from: Publisher's site
Lu, M, Liang, J, Zhang, Y, Li, G, Chen, S, Li, Z & Yuan, X 2017, 'Interaction+: Interaction enhancement for web-based visualizations', 2017 IEEE Pacific Visualization Symposium (PacificVis), 2017 IEEE Pacific Visualization Symposium (PacificVis), IEEE, Seoul, Korea, pp. 61-70.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this work, we present Interaction+, a tool that enhances the interactive capability of existing web-based visualizations. Unlike toolkits for authoring interactions during visualization construction, Interaction+ takes existing visualizations as input, analyzes the visual objects, and provides users with a suite of interactions to facilitate visual exploration, including selection, aggregation, arrangement, comparison, filtering, and annotation. Without accessing the underlying data or the process by which the visualization is constructed, Interaction+ is application-independent and can be employed in various visualizations on the web. We demonstrate its usage in two scenarios and evaluate its effectiveness with a qualitative user study.
Mols, I, van den Hoven, E & Eggen, B 2017, 'Balance, Cogito and Dott', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, Japan, pp. 31-35.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Reflection in and on everyday life can provide self-insight, increase gratitude and have a positive effect on well-being. To integrate reflection into everyday life, media technologies can provide support. In this paper, we explore how both media creation and use in different modalities can support reflection. We present the ongoing work of designing and building Balance, Cogito, and Dott, focusing on media in audible, textual and visual form. We discuss our research-through-design process and address the differences between modalities in terms of interaction, tangibility, and integration into everyday life.
Mustapha, S, Braytee, A & Ye, L 2017, 'Detection of surface cracking in steel pipes based on vibration data using a multi-class support vector machine classifier', SPIE Proceedings, SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, SPIE, Portland, OR.
View/Download from: Publisher's site
Naik, T, McGregor, C & James, A 2017, 'Automated partial premature infant pain profile scoring using big data analytics', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 246-249.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The lack of valid and reliable pain assessment in the neonatal population has become a significant challenge in the Neonatal Intensive Care Unit (NICU). In an attempt to forgo the manual pain scoring system, this paper presents an initial framework to automate a partial pain score for newborn infants using big data analytics that automates the analysis of high-speed physiological data. An ethically approved retrospective clinical research study was performed to calculate Artemis Premature Infant Pain Profile (APIPP) scores from premature infant data collected from the Artemis platform. Using the Premature Infant Pain Profile (PIPP) as the gold standard scale, scoring techniques were automated to create data abstractions from gestational age and the physiological streams of Heart Rate (HR) and Oxygen Saturation (SpO2). These were then brought together to compute an automated partial pain score. APIPP was retrospectively compared with the PIPP, which was manually scored by nursing staff at The Hospital for Sick Children, Toronto. Differences between the two scales were evaluated and analysed by creating a data model. Future research will focus on the clinical validation of this work by implementing it in a clinical decision support system (CDSS) named Artemis.
Nascimben, M, Wang, YK, Singh, AK, King, JT & Lin, CT 2017, 'Influence of EEG tonic changes on Motor Imagery performance', 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), IEEE, Shanghai, PEOPLES R CHINA, pp. 46-49.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In the Motor Imagery literature, performance predictors are commonly divided into four categories: personal, psychological, anatomical and neurophysiological. However, these predictors are limited to inter-subject changes. To overcome this limitation and evaluate intra-subject performance, we combined two groups of these measures: psychological and neurophysiological. As neurophysiological variables, tonic changes in resting EEG theta and alpha sub-bands were considered. As a psychological parameter, we analyzed internalized attention and its correlates in lower alpha. We found that when internalized attention does not decrease, the Motor Imagery performance outcome can be correctly predicted by resting EEG tonic variations.
Nejad, MZ, Lu, J & Behbood, V 2017, 'Applying dynamic Bayesian tree in property sales price estimation', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, NanJing, JiangSu, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Accurate prediction of residential property sale prices is very important in the operation of the real estate market. Property sellers and buyers/investors wish to know a fair value for their properties, particularly at the time of the sales transaction. The main reason to build an Automated Valuation Model is for it to be accurate enough to replace human valuation. To select the most suitable model for property sale price prediction, this paper examines seven tree-based machine learning models, comprising the Dynamic Bayesian Tree (an online learning method) and Random Forest, Stochastic Gradient Boosting, CART, Bagged CART, Tree Bagged Ensembles and Boosted Tree (batch learning methods), by comparing their RMSE and MAE performance. The performance of these models is tested on 1967 records of unit sales from 19 suburbs of Sydney, Australia. The main purpose of this study is to compare the performance of the batch models with the online model. The results demonstrate that the Dynamic Bayesian Tree, as an online model, stands in the middle of the batch models based on root mean square error (RMSE) and mean absolute error (MAE). This shows that using an online model for estimating property sale prices is reasonable for real-world applications.
Nguyen, H-P, Do, T-TN & Kim, J 2017, 'Exponential coordinates based rotation stabilization for panoramic videos', 2017 IEEE International Conference on Image Processing (ICIP), 2017 IEEE International Conference on Image Processing (ICIP), IEEE, Beijing, PEOPLES R CHINA, pp. 46-50.
View/Download from: Publisher's site
Nie, L, Jiang, D, Yu, S & Song, H 2017, 'Network Traffic Prediction Based on Deep Belief Network in Wireless Mesh Backbone Networks', 2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017 IEEE Wireless Communications and Networking Conference (WCNC), IEEE.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Wireless mesh networks are prevalent for providing decentralized access for users. The wireless mesh backbone network has received extensive attention because of its large capacity and low cost. Network traffic prediction is important for network planning and routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep belief network and a Gaussian model. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic, which describes its long-range dependence. A prediction model is then built by learning a deep belief network from the extracted low-pass component. For the remaining high-pass component, which expresses the gusty and irregular fluctuations of network traffic, a Gaussian model is used. We estimate the parameters of the Gaussian model by the maximum likelihood method and predict the high-pass component with the fitted model. Based on the predictors of the two components, we obtain a predictor of network traffic. In simulations, the proposed prediction method outperforms three existing methods.
Ning, X, Yao, L, Wang, X & Benatallah, B 2017, 'Calling for Response: Automatically Distinguishing Situation-Aware Tweets During Crises', ADVANCED DATA MINING AND APPLICATIONS, ADMA 2017, International Conference on Advanced Data Mining and Applications (ADMA), Springer International Publishing, Singapore, pp. 195-208.
View/Download from: Publisher's site
Orlov, O, McGregor, C, Baevsky, R, Chernikova, A, Prysyazhnyuk, A & Rusanov, V 2017, 'Perspective use of the technologies for big data analysis in manned space flights on the international space station', Proceedings of the International Astronautical Congress, IAC, pp. 1951-1960.
View description>>
Recent technologies in the area of Big Data analytics, which provide fast and effective review of varied and diverse files of information arriving from different sources, are being developed increasingly. Various new software tools are being proposed to provide useful results in this area. Such technologies are an important stimulus of modern scientific and technical progress, in particular in the field of the development of piloted space flights. In this publication we present the prospects of the use of Big Data analytics technology in a system of medical control of crews of the International Space Station (ISS). Today there is an active accumulation of experience of piloted space flights on the ISS, where international scientific and technical cooperation actively develops. An important step within this direction is the organisation of a new joint Russian-Canadian space experiment, 'Cosmocard 2018'. It will build on the Russian experiment 'Cosmocard', which has been carried out on the ISS since September 2014. In this project we have begun work on the modernisation of the software for the onboard computer, which will enable real-time estimation of the state of health of members of the crew. The Artemis platform, a Big Data analytics platform proposed by McGregor for the analysis of great volumes of physiological and other environmental data, will be used for this purpose. We have begun to re-engineer algorithms for the definition of the functional condition of an organism and the risk of development of diseases, developed previously by the Institute of Biomedical Problems of the Russian Academy of Sciences, to run in real time within the structure of the new software for the onboard computer that is based on Artemis. These new algorithms will be tested, in the beginning, during simulation experiments with long isolation using the same 'Cosmocard' physiological monitoring devices currently used on the ISS as part of the current 'Cosmocard' experiments. T...
Padmanabha, AGA, Appaji, MA, Prasad, M, Lu, H & Joshi, S 2017, 'Classification of diabetic retinopathy using textural features in retinal color fundus image', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Early diagnosis is essential for diabetic patients to avoid partial or complete blindness. This work presents a new method of analyzing texture features for the classification of Diabetic Retinopathy (DR). The proposed method masks the segmented blood vessels and optic disc and directly extracts the textural features from the remaining retinal region. The proposed method is much simpler in comparison with other methods, which detect the defective regions first and then extract the required features for classification. The calculated Haralick texture measures are used for the classification of DR. The proposed method is evaluated through classification of DR using both a Support Vector Machine (SVM) and an Artificial Neural Network (ANN). SVM achieves better accuracy (87.5%) than ANN (79%). The performance of the proposed method is also presented in terms of sensitivity and specificity.
Pan, P, Feng, J, Chen, L & Yang, Y 2017, 'Online compressed robust PCA', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 1041-1048.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. In this work, we consider the problem of robust principal component analysis (RPCA) for streaming noisy data that has been highly compressed. This problem is prominent when one deals with high-dimensional and large-scale data and data compression is necessary. To solve this problem, we propose an online compressed RPCA algorithm to efficiently recover the low-rank components of raw data. Though data compression incurs severe information loss, we provide a deep analysis of the proposed algorithm and prove that the low-rank component can be asymptotically recovered under mild conditions. Compared with other recent works on compressed RPCA, our algorithm reduces the memory cost significantly by processing data in an online fashion and reduces the communication cost by accepting sequential compressed data as input.
Pang, G, Cao, L, Chen, L & Liu, H 2017, 'Learning Homophily Couplings from Non-IID Data for Joint Feature Selection and Noise-Resilient Outlier Detection', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 2585-2591.
View/Download from: Publisher's site
View description>>
This paper introduces a novel wrapper-based outlier detection framework (WrapperOD) and its instance (HOUR) for identifying outliers in noisy data (i.e., data with noisy features) with strong couplings between outlying behaviors. Existing subspace or feature selection-based methods are significantly challenged by such data, as their search of feature subset(s) is independent of outlier scoring and thus can be misled by noisy features. In contrast, HOUR takes a wrapper approach to iteratively optimize the feature subset selection and outlier scoring using a top-k outlier ranking evaluation measure as its objective function. HOUR learns homophily couplings between outlying behaviors (i.e., abnormal behaviors are not independent - they bond together) in constructing a noise-resilient outlier scoring function to produce a reliable outlier ranking in each iteration. We show that HOUR (i) retains a 2-approximation outlier ranking to the optimal one; and (ii) significantly outperforms five state-of-the-art competitors on 15 real-world data sets with different noise levels in terms of AUC and/or P@n. The source code of HOUR is available at https://sites.google.com/site/gspangsite/sourcecode.
Pickrell, M, van den Hoven, E & Bongers, B 2017, 'Exploring in-hospital rehabilitation exercises for stroke patients', Proceedings of the 29th Australian Conference on Computer-Human Interaction, OzCHI '17: 29th Australian Conference on Human-Computer Interaction, ACM, Brisbane, Australia, pp. 228-237.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. All rights reserved. Rehabilitation exercises following stroke are by necessity repetitive and consequently can be tedious for patients. Hospitals are set up with equipment such as clothes pegs, wooden blocks and mechanical hand counters, which patients use to re-learn how to manipulate objects. The aim of this study is to understand the context of stroke patients' rehabilitation as well as which types of feedback are most appropriate for patients when performing their rehabilitation exercises. Over 60 hours were spent observing stroke patients undergoing rehabilitation. Fourteen stroke patients who had attended a balance class were interviewed about their experiences and the feedback they received. From this fieldwork, a set of design guidelines has been developed to guide researchers and designers developing computer-based equipment for stroke patient rehabilitation.
Prysyazhnyuk, A, Baevsky, R, Berseneva, A, Chernikova, A, Luchitskaya, E, Rusanov, V & McGregor, C 2017, 'Big data analytics for enhanced clinical decision support systems during spaceflight', 2017 IEEE Life Sciences Conference (LSC), 2017 IEEE Life Sciences Conference (LSC), IEEE, Sydney, NSW, Australia, pp. 296-299.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Recent advancements in the field of space medicine and technology have extended the boundaries of space travel, presenting humankind with the ability to explore undiscovered habitats. As humans embark on long-range missions, adaptation mechanisms will be put to the test, challenging the provision of medical care in space. To date, a vast amount of knowledge has been accumulated through a series of experiments, both in terrestrial simulation environments and in space missions on the ISS. As a result, a functional health state algorithm has been developed and validated by the IBMP to identify transitional states between health and disease. Significant limitations on the provision of medical care in space are imposed by retrospective data processing and analysis techniques. Some of these limitations can be addressed by the proposed instantiation of the functional state algorithm within the Online Analytics component of the Artemis platform, to enhance clinical decision support systems during spaceflight.
Qiao, M, Yu, J, Bian, W, Li, Q & Tao, D 2017, 'Improving Stochastic Block Models by Incorporating Power-Law Degree Characteristic', Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Twenty-Sixth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, Melbourne, Australia, pp. 2620-2626.
View/Download from: Publisher's site
View description>>
Stochastic block models (SBMs) provide a statistical way of modeling network data, especially in representing clusters or community structures. However, most block models do not consider complex characteristics of networks, such as the scale-free property, making them incapable of handling the degree variation of vertices that is ubiquitous in real networks. To address this issue, we introduce degree decay variables into the SBM, termed the power-law degree SBM (PLD-SBM), to model the varying probability of connections between node pairs. The scale-free feature is approximated by a power-law degree characteristic. This property allows PLD-SBM to correct the distortion of the degree distribution in the SBM, and thus improves the performance of cluster prediction. Experiments on both simulated networks and two real-world networks, the Adolescent Health Data and the political blogs network, demonstrate the validity of the motivation of PLD-SBM and its practical superiority.
Qin, M, Jin, D, He, D, Gabrys, B & Musial, K 2017, 'Adaptive Community Detection Incorporating Topology and Content in Social Networks', Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17: Advances in Social Networks Analysis and Mining 2017, ACM, pp. 675-682.
View/Download from: Publisher's site
View description>>
In social network analysis, community detection is a basic step towards understanding the structure, function and semantics of networks. Some conventional community detection methods have limited performance because they focus solely on the topological structure of networks. In addition to topology, content information is another significant aspect of social networks. Some state-of-the-art methods have started to combine these two sources of information, but they often assume that topology and content share the same characteristics. However, in some social networks, content may mismatch the topological structure. To better cope with such situations, we introduce a novel community detection method under the framework of nonnegative matrix factorization (NMF). Our proposed method integrates the topology and content of networks, and introduces a novel adaptive parameter that controls the contribution of content according to the identified degree of mismatch between the topological and content information. A case study using real social networks shows that our new method can simultaneously obtain a community partition and the corresponding semantic descriptions. Experiments on both artificial networks and real social networks further indicate that our method outperforms several state-of-the-art methods while exhibiting more robust behaviour when a mismatch between topological and content information is observed.
Ramezani, F & Naderpour, M 2017, 'A fuzzy virtual machine workload prediction method for cloud environments', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Due to the dynamic nature of cloud environments, the workload of virtual machines (VMs) fluctuates leading to imbalanced loads and utilization of virtual and physical cloud resources. It is, therefore, essential that cloud providers accurately forecast VM performance and resource utilization so they can appropriately manage their assets to deliver better quality cloud services on demand. Current workload and resource prediction methods forecast the workload or CPU utilization pattern of the given web-based applications based on their historical data. This gives cloud providers an indication of the required number of resources (VMs or CPUs) for these applications to optimize resource allocation for software as a service (SaaS) or platform as a service (PaaS), reducing their service costs. However, historical data cannot be used as the only data source for VM workload predictions as it may not be available in every situation. Nor can historical data provide information about sudden and unexpected peaks in user demand. To solve these issues, we have developed a fuzzy workload prediction method that monitors both historical and current VM CPU utilization and workload to predict VMs that are likely to be performing poorly. This model can also predict the utilization of physical machine (PM) resources for virtual resource discovery.
Saberi, M, Hussain, OK & Chang, E 2017, 'An online statistical quality control framework for performance management in crowdsourcing', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, Germany, pp. 476-482.
View/Download from: Publisher's site
Saberi, Z, Hussain, OK, Saberi, M & Chang, E 2017, 'Online Retailer Assortment Planning and Managing under Customer and Supplier Uncertainty Effects Using Internal and External Data', 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), IEEE, Shanghai, China, pp. 7-14.
View/Download from: Publisher's site
Salama, U, Yao, L, Wang, X, Paik, H-Y & Beheshti, A 2017, 'Multi-Level Privacy-Preserving Access Control as a Service for Personal Healthcare Monitoring', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 878-881.
View/Download from: Publisher's site
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'A study on detecting drones using deep convolutional neural networks', 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Lecce, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Object detection is a challenging problem in computer vision with various potential real-world applications. The objective of this study is to evaluate deep learning-based object detection techniques for detecting drones. In this paper, we have conducted experiments with different Convolutional Neural Network (CNN) based architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), etc. Due to the sparse data available for training, the networks are trained from pre-trained models using transfer learning. Snapshots of the trained models are saved at regular intervals during training. The best models, having the highest mean Average Precision (mAP) for each network architecture, are used for evaluation on the test dataset. The experimental results show that VGG16 with Faster R-CNN performs better than the other architectures on the training dataset. A visual analysis of the test dataset is also presented.
Saqib, M, Daud Khan, S, Sharma, N & Blumenstein, M 2017, 'Extracting descriptive motion information from crowd scenes', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. An important contribution that automated analysis tools can make to the management of pedestrians and crowd safety is the detection of conflicting large pedestrian flows: this kind of movement pattern may lead to dangerous situations and potential threats to pedestrians' safety. For this reason, detecting dominant motion patterns and summarizing motion information from the scene are essential for crowd management. In this paper, we develop a framework that extracts motion information from the scene by generating point trajectories using a particle advection approach. The trajectories obtained are then clustered using an unsupervised hierarchical clustering algorithm, where similarity is measured by the Longest Common Sub-sequence (LCS) metric. The resulting motion patterns in the scene are summarized and represented using color-coded arrows, where the speeds of the different flows are encoded with colors, the width of an arrow represents the density (the number of people belonging to a particular motion pattern), and the arrowhead represents the direction. This novel representation of a crowded scene provides a clutter-free visualization which helps crowd managers understand the scene. Experimental results show that our method outperforms state-of-the-art methods.
Saqib, M, Khan, SD & Blumenstein, M 2017, 'Detecting dominant motion patterns in crowds of pedestrians', Eighth International Conference on Graphic and Image Processing (ICGIP 2016), Eighth International Conference on Graphic and Image Processing, SPIE, Tokyo, Japan.
View/Download from: Publisher's site
View description>>
© 2017 SPIE. As the population of the world increases, urbanization generates crowding situations which pose challenges to public safety and security. Manual analysis of crowded situations is a tedious job and usually prone to errors. In this paper, we propose a novel crowd analysis technique, the aim of which is to detect the different dominant motion patterns in real-time videos. A motion field is generated by computing the dense optical flow. The motion field is then divided into blocks. For each block, we adopt an intra-clustering algorithm for detecting the different flows within the block. Later on, we employ inter-clustering to cluster the flow vectors among different blocks. We evaluate the performance of our approach on different real-time videos. The experimental results show that our proposed method is capable of detecting distinct motion patterns in crowded videos. Moreover, our algorithm outperforms state-of-the-art methods.
Scopigno, R, Cignoni, P, Pietroni, N, Callieri, M & Dellepiane, M 2017, 'Digital Fabrication Techniques for Cultural Heritage: A Survey', Computer Graphics Forum, pp. 6-21.
View/Download from: Publisher's site
Shang, D, Zhang, G & Lu, J 2017, 'Fast concept drift detection using singular vector decomposition', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Nanjing, China, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Data stream mining is widely used in online applications such as sensor networks, financial transactions, etc. Such systems generate data at high velocity, and their underlying distributions may change over time. This is referred to as the concept drift problem, and it is considered to be the root cause of performance degradation in online machine learning models. To tackle this problem, a reliable and fast drift detection method is required to achieve real-time responsiveness to the drifts. This paper presents a fast and accurate drift detection method, namely the KS-SVD test (KSSVD), to monitor the distribution changes of a data stream. Our method employs the SVD technique to first check the change in the directions of the data, followed by a KS test on each direction to detect the univariate distribution changes. Experiments show that our method is efficient and accurate, especially in high-dimensional settings.
Sharma, N, Sengupta, A, Sharma, R, Pal, U & Blumenstein, M 2017, 'Pincode detection using deep CNN for postal automation', 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), IEEE, Christchurch, New Zealand, pp. 1-6.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Postal automation has been a topic of research for over a decade. The challenges and complexity involved in developing a postal automation system for a multi-lingual and multi-script country like India are manifold. The characteristics of Indian postal documents include multi-lingual behaviour, unconstrained handwritten addresses, and structured/unstructured envelopes and postcards, which are among the most challenging aspects. This paper examines state-of-the-art deep CNN architectures for detecting pin-codes in both structured and unstructured postal envelopes and documents. Region-based Convolutional Neural Networks (RCNN) are used for detecting the various significant regions, namely pin-code blocks/regions, the destination address block, and the seal and stamp in a postal document. Three network architectures, namely Zeiler and Fergus (ZF), Visual Geometry Group (VGG16), and VGG-M, were considered to analyse and identify their potential. A dataset consisting of 2300 multilingual Indian postal documents of three different categories was developed and used for the experiments. The VGG-M architecture with Faster-RCNN performed better than the others, and promising results were obtained.
Singh, J, Prasad, M, Daraghmi, YA, Tiwari, P, Yadav, P, Bharill, N, Pratama, M & Saxena, A 2017, 'Fuzzy logic hybrid model with semantic filtering approach for pseudo relevance feedback-based query expansion', 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Honolulu, HI, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Individual query expansion term selection methods have been widely investigated in an attempt to improve their performance. Each expansion term selection method has its own weaknesses and strengths. To overcome the weaknesses and utilize the strengths of individual methods, this paper combines multiple term selection methods. First, the possibility of improving overall performance using individual query expansion (QE) term selection methods is explored. Secondly, some well-known rank aggregation approaches are used to combine multiple QE term selection methods. Thirdly, a new fuzzy logic-based QE approach that considers the relevance scores produced by different rank aggregation approaches is proposed. The proposed fuzzy logic approach combines the different weights of each term using fuzzy rules to infer the weights of the additional query terms. Finally, a Word2vec approach is used to filter out semantically irrelevant terms obtained after applying the fuzzy logic approach. The experimental results demonstrate that the proposed approaches achieve significant improvements over each individual term selection method, the aggregated methods and a related state-of-the-art method.
Sohaib, O, Lu, H & Hussain, W 2017, 'Internet of Things (IoT) in E-commerce: For people with disabilities', 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), IEEE, Cambodia, pp. 419-423.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. The Internet of Things (IoT) is an interconnection between physical objects and the digital world. As a result, many e-commerce companies seize the advantages of the IoT to grow their business. However, people with disabilities are the world's largest minority. The IoT can lower barriers for people with disabilities by offering assistance in accessing information, and increasing Internet accessibility can help make that happen, for both social and economic benefit. This paper presents a proposed integrated framework of IoT and cloud computing for people with disabilities, such as sensory (hearing and vision), motor (limited use of hands) and cognitive (language and learning) impairments, in the business-to-consumer e-commerce context. We conclude that IoT-enabled services offer great potential for the success of people with disabilities in online shopping.
Song, X, Zhang, X, Yu, S, Jiao, S & Xu, Z 2017, 'Resource-Efficient Virtual Network Function Placement in Operator Networks', GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017 IEEE Global Communications Conference (GLOBECOM 2017), IEEE, Singapore, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Network Function Virtualization (NFV) is an emerging network resource utilization approach which decouples network functions from proprietary hardware. To accommodate Service Function Chain (SFC) requests, service providers offer Virtual Network Function (VNF) instances in operator networks. However, how to efficiently place VNFs at various network locations while jointly optimizing computing and communication resources is still an open problem. To this end, we study the resource-efficient VNF placement problem in operator networks. We first formulate this problem as an Integer Linear Programming (ILP) model. We then design an efficient heuristic algorithm named Resource-efficient Virtual Network Function Placement (RVNFP) based on the Hidden Markov Model (HMM). Extensive simulation results show that, compared with previous VNF placement algorithms, RVNFP saves up to 12.51% of network cost and achieves a good tradeoff between computing resource cost and communication resource cost.
Sun, G, Cui, T, Beydoun, G, Chen, S, Xu, D & Shen, J 2017, 'Organizing Online Computation for Adaptive Micro Open Education Resource Recommendation', ICWL, International Conference on Web-Based Learning, Springer, Cape Town, South Africa, pp. 177-182.
View description>>
Our previous work, Micro Learning as a Service (MLaaS), aimed to deliver adaptive micro open education resources (OERs). However, relying solely on offline computation, the recommendation lacks rationality and timeliness, and it is also difficult to make a first recommendation to a new learner. In this paper we introduce the organization of the online computation in MLaaS. It targets the cold start problem caused by the shortage of learner information, and the real-time updating of the learner-micro OER profile.
Sun, G, Cui, T, Shen, J, Xu, D, Beydoun, G & Chen, S 2017, 'Ontological Learner Profile Identification for Cold Start Problem in Micro Learning Resources Delivery', ICALT, IEEE 17th International Conference on Advanced Learning Technologies, IEEE Computer Society, Timisoara, Romania, pp. 16-20.
View/Download from: Publisher's site
View description>>
Open learning is a rising trend in the educational sector, attracting millions of learners with massive, up-to-date and free open education resources (OERs). Through the use of mobile devices, open learning is often carried out in a micro learning mode, where each unit of learning activity is commonly shorter than 15 minutes. Learners are often at a loss in the process of choosing the OERs that serve their long-term objectives and short-term demands. Our pilot work, namely MLaaS, proposed a smart system to deliver personalized OERs through micro learning to satisfy learners' real-time needs, but its decision-making process is scarcely supported due to the lack of historical data. Inspired by this, MLaaS now embeds a new solution to tackle the cold start problem, by opening up a brand new profile for each learner and delivering them the first resources in their fresh-start learning journey. In this paper, we also propose an ontology-based mechanism for learning prediction and recommendation.
Sun, Y, Li, L, Xie, Z, Xie, Q, Li, X & Xu, G 2017, 'Co-training an Improved Recurrent Neural Network with Probability Statistic Models for Named Entity Recognition', Database Systems for Advanced Applications (LNCS), International Conference on Database Systems for Advanced Applications, Springer International Publishing, Suzhou, China, pp. 545-555.
View/Download from: Publisher's site
View description>>
Named Entity Recognition (NER) is a subtask of information extraction in the Natural Language Processing (NLP) field and has thus been widely studied. Recurrent Neural Networks (RNNs) have become a popular way to perform NER, but they need a large amount of training data. The lack of labeled training data is one of the hard problems, and the traditional co-training strategy is a way to alleviate it. In this paper, we consider this situation and focus on performing NER with co-training using an RNN and two probabilistic models, i.e. the Hidden Markov Model (HMM) and the Conditional Random Field (CRF). We propose a modified RNN model by redefining its activation function. Compared to the traditional sigmoid function, our new function avoids saturation to some degree and makes its output range very close to [0, 1], thus improving recognition accuracy. Our experiments are conducted on the ATIS benchmark. First, supervised learning using these models is compared with different training data sizes. The experimental results show that it is not necessary to use the whole dataset; even a small part of the training data can achieve good performance. Then, we compare the results of our modified RNN with the original RNN, obtaining a 0.5% improvement. Finally, we compare the co-training results: HMM and CRF gain greater improvement than the RNN after co-training, and using our modified RNN in co-training improves their performance further.
Tsai, W-C, Orth, D & van den Hoven, E 1970, 'Designing Memory Probes to Inform Dialogue', Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17: Designing Interactive Systems Conference 2017, ACM, Edinburgh, United Kingdom, pp. 889-901.
View/Download from: Publisher's site
View description>>
To investigate the phenomenon that occurs during interactions between used objects and autobiographical memories, which are both ever-changing and embedded with personal significance, an adapted probing method capable of managing these complex qualities is needed. This pictorial is our attempt to find a nuanced indication of how probes could go beyond common usage to facilitate complex felt experience, and how probes can be used in less prescriptive ways to instead promote reminiscent dialogues that are rich and open to interpretation for both participants and researchers. It illustrates our exploration into potential Memory Probes, and how this might be done in a way that reflects the value we see in creating restrictions or limitations in technology-mediated interactions to encourage active participation by users in social acts such as memory creation and remembrance.
Venkata, SK, Keppens, J & Musial, K 2017, 'Adaptive Caching Using Sub-query Fragmentation for Reduction in Data Transfers from Distributed Databases', Astronomical Data Analysis Software and Systems XXV, 25th Annual Conference on Astronomical Data Analysis Software and Systems (ADASS XXV), Astronomical Society of the Pacific, ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), Sydney, Australia, pp. 85-88.
Verma, S, Liu, W, Wang, C & Zhu, L 2017, 'Extracting highly effective features for supervised learning via simultaneous tensor factorization', 31st AAAI Conference on Artificial Intelligence, AAAI 2017, AAAI Conference on Artificial Intelligence, AAAI, San Francisco, USA, pp. 4995-4996.
View description>>
Real-world data is usually generated over multiple time periods and associated with multiple labels, which can be represented as multiple labeled tensor sequences. These sequences are linked together, sharing some common features while exhibiting their own unique features. Conventional tensor factorization techniques are limited to extracting either common or unique features, but not both simultaneously. However, both types of features are important in many machine learning systems, as they inherently affect the systems' performance. In this paper, we propose a novel supervised tensor factorization technique which simultaneously extracts ordered common and unique features. Classification using the features extracted by our method on the CIFAR-10 database achieves significantly better performance than other factorization methods, illustrating the effectiveness of the proposed technique.
Vo, NNY & Xu, G 2017, 'The volatility of Bitcoin returns and its correlation to financial markets', 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Krakow, Poland.
View/Download from: Publisher's site
View description>>
The 2008 financial crisis scattered incredulity around the globe regarding traditional financial systems, which made investors and non-financial customers turn to alternatives such as digital banking systems. The existence and development of blockchain technology have arguably made cryptocurrency a complete alternative to traditional currencies in recent years. Bitcoin is the world's first peer-to-peer, decentralized digital cash system, initiated by Nakamoto [1]. Though it is the most prominent cryptocurrency, Bitcoin is not a legal trading currency in various countries. Its exchange rate has proven to be an exceptionally high-risk portfolio with extreme volatility, which requires a detailed evaluation before making any decision. This paper utilizes statistical methods for financial time series and machine learning to (i) fit a parametric distribution, (ii) model and forecast the volatility of Bitcoin returns, and (iii) analyze its correlation to other financial market indicators. The fitted parametric time series model significantly outperforms other standard models in explaining the stylized facts and statistical variances in the behavior of Bitcoin returns. The model forecast also outperforms some machine learning methodologies, which would benefit policy makers, banks and financial investors in trading activities for both long-term and short-term strategies.
Wang, D, Xu, G & Deng, S 2017, 'Music recommendation via heterogeneous information graph embedding', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA, pp. 596-603.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Traditional music recommendation techniques suffer from limited performance due to the sparsity of user-music interaction data, which is addressed by incorporating auxiliary information. In this paper, we study the problem of personalized music recommendation that takes different kinds of auxiliary information into consideration. To achieve this goal, a Heterogeneous Information Graph (HIG) is first constructed to encode different kinds of heterogeneous information, including the interactions between users and music pieces, music playing sequences, and the metadata of music pieces. Based on HIG, a Heterogeneous Information Graph Embedding method (HIGE) is proposed to learn the latent low-dimensional representations of music pieces. Then, we further develop a context-aware music recommendation method. Extensive experiments have been conducted on real-world datasets to compare the proposed method with other state-of-the-art recommendation methods. The results demonstrate that the proposed method significantly outperforms those baselines, especially on sparse datasets.
Wang, G, Wang, W, Wang, J & Bu, Y 2017, 'Better Deep Visual Attention with Reinforcement Learning in Action Recognition', 2017 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, Baltimore, MD.
View/Download from: Publisher's site
Wang, G, Zhang, G, Choi, K-S, Lam, K-M & Lu, J 2017, 'An output-based knowledge transfer approach and its application in bladder cancer prediction', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, AK, USA, pp. 356-363.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Many medical applications face the situation that the data on hand cannot fully fit an existing predictive model or online tool, because these models or tools use only the most common predictors, and the other valuable features collected in the current scenario are not considered at all. At the same time, the training data in the current scenario is not sufficient to learn a predictive model effectively. To overcome these problems and construct an efficient classifier for such real situations in medical fields, in this work we present an approach based on the least squares support vector machine (LS-SVM), which utilizes a transfer learning framework to make maximum use of the data and guarantee enhanced generalization capability. The proposed approach is capable of effectively learning a target domain with limited samples by relying on the probabilistic outputs of another, previously learned model built in the source domain using a heterogeneous method. Moreover, it autonomously and quickly decides how much output knowledge to transfer from the source domain to the target one using a fast leave-one-out cross-validation strategy. The approach is applied to a real-world clinical dataset to predict 5-year mortality of bladder cancer patients after radical cystectomy, and the experimental results indicate that the proposed method achieves better performance than traditional machine learning methods, consistently showing its potential under circumstances with insufficient data.
Wang, Y, He, Q, Ye, D & Yang, Y 2017, 'Formulating Criticality-Based Cost-Effective Monitoring Strategies for Multi-Tenant Service-Based Systems', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 325-332.
View/Download from: Publisher's site
Wang, Y, He, Q, Ye, D & Yang, Y 2017, 'Service Selection Based on Correlated QoS Requirements', 2017 IEEE International Conference on Services Computing (SCC), 2017 IEEE International Conference on Services Computing (SCC), IEEE, Honolulu, HI, pp. 241-248.
View/Download from: Publisher's site
Wen, D, Qin, L, Lin, X, Zhang, Y & Chang, L 2017, 'Enumerating k-Vertex Connected Components in Large Graphs', CoRR, International Conference on Data Engineering, IEEE, Macao, pp. 52-63.
View/Download from: Publisher's site
View description>>
In social network analysis, structural cohesion (or vertex connectivity) is a fundamental metric for measuring the cohesion of social groups. Given an undirected graph, a k-vertex connected component (k-VCC) is a maximal connected subgraph whose structural cohesion is at least k. A k-VCC has many outstanding structural properties, such as high cohesiveness, high robustness, and subgraph overlapping. In this paper, given a graph G and an integer k, we study the problem of computing all k-VCCs in G. The general idea is to recursively partition the graph into overlapped subgraphs. We prove an upper bound on the number of partitions, which implies a polynomial-time algorithm for k-VCC enumeration. However, the basic solution is costly in computing the vertex cuts. To improve the algorithmic efficiency, we observe that the key is to reduce the number of local connectivity tests, and we propose two effective optimization strategies, namely neighbor sweep and group sweep, to significantly reduce their number. We conduct extensive performance studies using ten large real datasets to demonstrate the efficiency of our proposed algorithms. The experimental results demonstrate that our approach achieves a speedup of up to two orders of magnitude over the state-of-the-art algorithm.
Wu, D, Sharma, N & Blumenstein, M 2017, 'Recent advances in video-based human action recognition using deep learning: A review', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA, pp. 2865-2872.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Video-based human action recognition has become one of the most popular research areas in the field of computer vision and pattern recognition in recent years. It has a wide variety of applications such as surveillance, robotics, health care, video searching and human-computer interaction. There are many challenges involved in human action recognition in videos, such as cluttered backgrounds, occlusions, viewpoint variation, execution rate, and camera motion. A large number of techniques have been proposed to address the challenges over the decades. Three different types of datasets namely, single viewpoint, multiple viewpoint and RGB-depth videos, are used for research. This paper presents a review of various state-of-the-art deep learning-based techniques proposed for human action recognition on the three types of datasets. In light of the growing popularity and the recent developments in video-based human action recognition, this review imparts details of current trends and potential directions for future work to assist researchers.
Wu, R, Xu, G, Chen, E, Liu, Q & Ng, W 2017, 'Knowledge or Gaming?', Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion, the 26th International Conference, ACM Press, Perth, Western Australia, pp. 321-329.
View/Download from: Publisher's site
View description>>
© 2017 International World Wide Web Conference Committee (IW3C2), published under Creative Commons CC BY 4.0 License. Recent decades have witnessed the rapid growth of intelligent tutoring systems (ITS), in which personalized adaptive techniques are successfully employed to improve the learning of each individual student. However, the problem of using cognitive analysis to distill the knowledge and gaming factors from students' learning histories is still underexplored. To this end, we propose a Knowledge Plus Gaming Response Model (KPGRM) based on multiple-attempt responses. Specifically, we first measure the explicit gaming factor in each multiple-attempt response. Next, we utilize collaborative filtering methods to infer the implicit gaming factor of one-attempt responses. Then we model student learning cognitively by considering both gaming and knowledge factors simultaneously, based on a signal detection model. Extensive experiments on two real-world datasets prove that KPGRM can model student learning more effectively and obtain a more reasonable analysis.
Wu, W, Li, B, Chen, L & Zhang, C 2017, 'Consistent Weighted Sampling Made More Practical', Proceedings of the 26th International Conference on World Wide Web, WWW '17: 26th International World Wide Web Conference, International World Wide Web Conferences Steering Committee, Perth, Australia, pp. 1035-1043.
View/Download from: Publisher's site
View description>>
© 2017 International World Wide Web Conference Committee (IW3C2) Min-Hash, which is widely used for efficiently estimating similarities of bag-of-words represented data, plays an increasingly important role in the era of big data. It has been extended to deal with real-valued weighted sets – Improved Consistent Weighted Sampling (ICWS) is considered the state-of-the-art for this problem. In this paper, we propose a Practical CWS (PCWS) algorithm. We first transform the original form of ICWS into an equivalent expression, based on which we find some interesting properties that inspire us to make the ICWS algorithm simpler and more efficient in both space and time complexity. PCWS is not only mathematically equivalent to ICWS and preserves the same theoretical properties, but also saves 20% of the memory footprint and substantial computational cost compared to ICWS. The experimental results on a number of real-world text datasets demonstrate that PCWS obtains the same (or even better) classification and retrieval performance as ICWS with 1/5 ∼ 1/3 reduced empirical runtime.
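As background for the sampling scheme this abstract builds on, the following is a minimal NumPy sketch of Ioffe-style Improved Consistent Weighted Sampling (the ICWS baseline that PCWS simplifies), not the authors' PCWS implementation; the function names and parameter choices are our own illustrative assumptions.

```python
import numpy as np

def icws_signature(weights, m=500, seed=0):
    """ICWS sketch: draw m consistent weighted samples from a
    non-negative weight vector. Sharing the seed across sets keeps
    the randomness consistent, so signatures are comparable."""
    d = len(weights)
    rng = np.random.default_rng(seed)
    r = rng.gamma(2.0, 1.0, size=(m, d))
    c = rng.gamma(2.0, 1.0, size=(m, d))
    beta = rng.uniform(0.0, 1.0, size=(m, d))
    active = weights > 0
    logw = np.full(d, -np.inf)
    logw[active] = np.log(weights[active])
    t = np.floor(logw / r + beta)        # quantised log-weight
    logy = r * (t - beta)
    loga = np.log(c) - logy - r          # a = c / (y * exp(r))
    loga[:, ~active] = np.inf            # zero-weight features never win
    k_star = np.argmin(loga, axis=1)     # winning feature per hash
    return list(zip(k_star, t[np.arange(m), k_star]))

def estimate_jaccard(sig1, sig2):
    """Collision rate of (feature, t) pairs estimates the generalised
    Jaccard similarity sum(min(w, v)) / sum(max(w, v))."""
    return float(np.mean([a == b for a, b in zip(sig1, sig2)]))
```

With 500 hashes the collision-rate estimate is typically within a few percent of the exact generalised Jaccard similarity.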
Yang, G, Dai, Y, Zhao, H, Hirota, K & Lu, H 2017, 'Intelligent web-based experiment management system using multi-agent concept', Proceedings IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, IEEE, Beijing, China, pp. 8508-8514.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Many web-based online learning systems focus more on textual and/or image-based content delivery without including experiment systems, or, if included, the experiments are usually operated under pre-defined conditions, such as fixed scenarios and pre-determined delivery orders. These limitations hinder personalized learning and collaboration between students and discourage student engagement. To circumvent these limitations, an Intelligent Web-based Experiment Management System (IWEMS) using the multi-agent concept is presented. In the system, three kinds of software agents are used: (i) Student-Agent, responsible for assessing the knowledge levels of students. A fuzzy set based algorithm is used and the results are plotted through a dynamic polar chart; (ii) Teacher-Agent, responsible for tracking the experiment progress of each student and recommending a personalized next-to-do experiment to him or her; and (iii) Co-Agent, responsible for group formation based on similar knowledge levels to facilitate collaborative learning between students. A prototype of the system is developed using the Java Agent Development Framework (JADE), with a client/server architecture and a MySQL database. The prototype demonstrates the validity of the design and the effectiveness of the system's functionality, delivering personalized next-to-do experiment recommendations and a collaborative learning environment.
Yang, M, Zhu, T, Xiang, Y & Zhou, W 2017, 'Personalized Privacy Preserving Collaborative Filtering', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 371-385.
View/Download from: Publisher's site
View description>>
Recommendation systems are widely applied nowadays as a result of the significant growth in the amount of online information. To provide accurate recommendations, a great deal of personal information is collected, which gives rise to privacy concerns for many individuals. Differential privacy is a well-accepted technique for providing a strong privacy guarantee. However, traditional differential privacy can only preserve privacy at a uniform level for all users, when in reality different people have different privacy requirements. A uniform privacy standard cannot preserve enough privacy for users with a strong privacy requirement and will likely provide unnecessary protection for users who do not care about the disclosure of their personal information. In this paper, we propose a personalized privacy preserving collaborative filtering method that considers an individual's privacy preferences to overcome this problem. A Johnson-Lindenstrauss transform is introduced to pre-process the original dataset to improve the quality of the selected neighbours - an important factor for the final prediction. Our method was tested on two real-world datasets. Extensive experiments prove that our method maintains more utility while guaranteeing privacy.
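The Johnson-Lindenstrauss pre-processing step mentioned in the abstract can be illustrated by a plain Gaussian random projection; this is a generic sketch of the lemma's standard construction, not the paper's exact transform, and the dimensions used below are arbitrary.

```python
import numpy as np

def jl_transform(X, k, seed=0):
    """Johnson-Lindenstrauss random projection: map n points in R^d
    down to R^k with a scaled Gaussian matrix. Pairwise Euclidean
    distances are approximately preserved with high probability."""
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    # Scaling by 1/sqrt(k) makes the projection distance-preserving
    # in expectation.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R
```

Because only the projected data need be shared, the transform also perturbs the raw ratings, which is part of why it pairs naturally with privacy-preserving neighbour selection.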
Ye, D, He, Q, Wang, Y & Yang, Y 2017, 'An Agent-Based Decentralised Service Monitoring Approach in Multi-Tenant Service-Based Systems', 2017 IEEE International Conference on Web Services (ICWS), 2017 IEEE International Conference on Web Services (ICWS), IEEE, Honolulu, HI, pp. 204-211.
View/Download from: Publisher's site
Yin, R, Li, K, Zhang, G & Lu, J 2017, 'Detecting overlapping protein complexes in dynamic protein-protein interaction networks by developing a fuzzy clustering algorithm', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Protein complexes play important roles in protein-protein interaction networks. Recent studies reveal that many proteins have multiple functions and belong to more than one complex. To obtain a better complex division, we need to consider the time-dependent information of networks. However, only a few studies concentrate on detecting overlapping clusters in time-dependent networks. To solve this problem, we propose the integrated model of time-dependent networks (IM-TDN) to describe time-dependent networks. On the basis of this model, we propose the similarity-based dynamic fuzzy clustering (SDFC) algorithm to detect overlapping clusters. We apply the algorithm to synthetic data and a real-world protein-protein interaction network dataset. The results show that our algorithm, using the proposed model, achieves better results than the state-of-the-art baseline algorithms.
Yu, H, Lu, J & Zhang, G 2017, 'Learning a fuzzy decision tree from uncertain data', 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Piscataway, USA, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Uncertainty in data exists when the value of a data item is not a precise value, but rather an interval with a probability distribution function, or a probability distribution over multiple values. Since there are intrinsic differences between uncertain and certain data, it is difficult to deal with uncertain data using traditional classification algorithms. Therefore, in this paper, we propose a fuzzy decision tree algorithm based on the classical ID3 algorithm; it integrates fuzzy set theory and ID3 to overcome the uncertain data classification problem. In addition, we propose a discretization algorithm that enables our proposed Fuzzy-ID3 algorithm to handle interval data. Experimental results show that our Fuzzy-ID3 algorithm is a practical and robust solution to the problem of uncertain data classification and that it performs better than some of the existing algorithms.
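To make the fuzzification idea behind Fuzzy-ID3-style trees concrete, here is a small generic sketch (our own illustration, not the paper's algorithm): a triangular membership function, and an entropy computed from membership degrees rather than crisp counts.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Degree to which x belongs to the fuzzy set with triangular
    membership (a, b, c): 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_entropy(memberships, labels):
    """Fuzzy information entropy: class probabilities are weighted
    membership sums instead of crisp example counts, so partially
    matching examples contribute fractionally to each branch."""
    memberships = np.asarray(memberships, dtype=float)
    labels = np.asarray(labels)
    total = memberships.sum()
    if total == 0:
        return 0.0
    h = 0.0
    for cls in np.unique(labels):
        p = memberships[labels == cls].sum() / total
        if p > 0:
            h -= p * np.log2(p)
    return h
```

A fuzzy ID3 variant would pick the split attribute that maximises the information gain computed from this membership-weighted entropy.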
Zekveld, J, Bakker, S, Zijlema, A & van den Hoven, E 2017, 'Wobble', Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17: Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, ACM, Yokohama, pp. 31-35.
View/Download from: Publisher's site
View description>>
© 2017 ACM. Reminders are designed to support remembering actions or intentions to be performed later in time. Most technologies that have a reminding functionality do so by demanding attention from users (e.g., using auditory alerts or vibration patterns) at a certain point in time or location. Because of their obtrusive nature, the reminders of many (digital) prospective memory aids we use on a daily basis are hard to ignore, regardless of our ability and motivation to perform the reminded action or intention. In this paper, we present Wobble: an interactive cone-shaped artefact for reminding in the home environment. Wobble was designed to investigate peripheral reminders. Our results imply that Wobble is best suited to reminding of intentions that do not require direct action but can be carried out over a period of time, a type of reminding currently not met by most electronic memory aids.
Zhang, X, Yao, L, Huang, C, Sheng, QZ & Wang, X 2017, 'Intent Recognition in Smart Living Through Deep Recurrent Neural Networks', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 748-758.
View/Download from: Publisher's site
View description>>
© 2017, Springer International Publishing AG. Electroencephalography (EEG) signal based intent recognition has recently attracted much attention in both academia and industry, due to its potential in helping elderly or motor-disabled people control smart devices to communicate with the outer world. However, the utilization of EEG signals is challenged by low accuracy and by arduous, time-consuming feature extraction. This paper proposes a 7-layer deep learning model to classify raw EEG signals with the aim of recognizing subjects' intents, avoiding the time consumed in pre-processing and feature extraction. The hyper-parameters are selected by an orthogonal array experiment method for efficiency. Our model is applied to an open EEG dataset provided by PhysioNet and achieves an accuracy of 0.9553 on intent recognition. The applicability of our proposed model is further demonstrated by two use cases of smart living (assisted living with robotics and home automation).
Zhang, X, Yao, L, Zhang, D, Wang, X, Sheng, QZ & Gu, T 2017, 'Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis', Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2017: Computing, Networking and Services, ACM, pp. 28-37.
View/Download from: Publisher's site
View description>>
© 2017 Association for Computing Machinery. Electroencephalography (EEG) based brain activity recognition is a fundamental field of study for a number of significant applications such as intention prediction, appliance control, and neurological disease diagnosis in the smart home and smart healthcare domains. Existing techniques mostly focus on binary brain activity recognition for a single person, which limits their deployment in wider and more complex practical scenarios. Therefore, multi-person and multi-class brain activity recognition has gained popularity recently. Another challenge faced by brain activity recognition is low recognition accuracy due to massive noise and the low signal-to-noise ratio in EEG signals. Moreover, feature engineering in EEG processing is time-consuming and relies heavily on expert experience. In this paper, we attempt to solve the above challenges by proposing an approach with better EEG interpretation ability via raw EEG signal analysis for multi-person and multi-class brain activity recognition. Specifically, we analyze inter-class and inter-person EEG signal characteristics, based on which we capture the discrepancy of inter-class EEG data. Then, we adopt an Autoencoder layer to automatically refine the raw EEG signals by eliminating various artifacts. We evaluate our approach on both a public and a local EEG dataset and conduct extensive experiments to explore the effect of several factors (such as normalization methods, training data size, and Autoencoder hidden neuron size) on the recognition results. The experimental results show that our approach achieves high accuracy compared to competitive state-of-the-art methods, indicating its potential in promoting future research on multi-person EEG recognition.
Zhang, Y, Huang, Y, Porter, AL, Zhang, G & Lu, J 2017, 'Discovering Interactions in Big Data Research: A Learning-Enhanced Bibliometric Study', 2017 Portland International Conference on Management of Engineering and Technology (PICMET), 2017 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, OR, USA, pp. 1-12.
View/Download from: Publisher's site
View description>>
© 2017 PICMET. As one of the most representative emerging technologies, big data analytics and its related applications are rapidly leading the development of information technologies and are significantly shaping thinking and behavior in today's interconnected world. Exploring the technological evolution of big data research is an effective way to enhance technology management and create value for research and development strategies for both government and industry. This paper uses a learning-enhanced bibliometric study to discover interactions in big data research by detecting and visualizing its evolutionary pathways. Concentrating on a set of 5840 articles derived from Web of Science covering the period between 2000 and 2015, text mining and bibliometric techniques are combined to profile the hotspots in big data research and its core constituents. A learning process is used to enhance the ability to identify the interactive relationships between topics in sequential time slices, revealing technological evolution and death. The outputs include a landscape of interactions within big data research from 2000 to 2015 with a detailed map of the evolutionary pathways of specific technologies. Empirical insights for related studies in science policy, innovation management, and entrepreneurship are also provided.
Zhang, Y, Saberi, M & Chang, E 2017, 'Semantic-based lightweight ontology learning framework', Proceedings of the International Conference on Web Intelligence, WI '17: International Conference on Web Intelligence 2017, ACM, Leipzig, Germany, pp. 1171-1177.
View/Download from: Publisher's site
Zhang, Z, Oberst, S & Lai, JCS 2017, 'Uncertainty analysis for the prediction of disc brake squeal propensity', INTER-NOISE 2017 - 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet, Internoise 2017, Hong Kong, China.
View description>>
Since brake squeal was first investigated in the 1930s, it has been a noise, vibration and harshness (NVH) problem plaguing the automotive industry due to warranty-related claims and customer dissatisfaction. Accelerating research efforts in the last decade, represented by almost 70% of the papers published in the open literature, have improved the understanding of the generation mechanisms of brake squeal, resulting in better analysis of the problem and better development of countermeasures by combining numerical simulations with noise dynamometer tests. However, it is still a challenge to predict brake squeal propensity with any confidence. This is because of modelling difficulties that include the often transient and nonlinear nature of brake squeal, and uncertainties in material properties, operating conditions (brake pad pressure and temperature, speed), contact conditions between pad and disc, and friction. Although the conventional Complex Eigenvalue Analysis (CEA) method, widely used in industry, is a good linear analysis tool for identifying unstable vibration modes to complement noise dynamometer tests, it is not a predictive tool as it may either over-predict or under-predict the number of unstable vibration modes. In addition, there is no correlation between the magnitude of the positive real part of a complex eigenvalue and the likelihood that the unstable vibration mode will squeal. Transient nonlinear simulations are still computationally too expensive to be implemented in industry for even exploratory predictions. In this paper, a stochastic approach, incorporating uncertainties in the surface roughness of the lining, material properties and the friction coefficient, is applied to predict the squeal propensity of a full disc brake system by using CEA on a finite element model updated by experimental modal testing results. Results compared with noise dynamometer squeal tests illustrate the potential of the stochastic CEA approach ov...
Zhou, Z, Xu, G, Zhu, W, Li, J & Zhang, W 2017, 'Structure embedding for knowledge base completion and analytics', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, Alaska, USA.
View/Download from: Publisher's site
View description>>
To explore the latent information of human knowledge, the analysis of Knowledge Bases (KBs) (e.g. WordNet, Freebase) is essential. Some previous KB element embedding frameworks are used for KB structure analysis and completion. These embedding frameworks use a low-dimensional vector space representation for the large number of entities and relations in a KB. Based on that, the vector space representations of entities and relations not contained in the KB can be measured. The embedding idea is reasonable, but current embedding methods have some issues in obtaining proper embeddings for KB elements. The embedding methods use entity-relation-entity triplets, contained in most current KBs, as training data to output the embedding representations of entities and relations. To measure the truth of one triplet (whether the knowledge represented by the triplet is true or false), some current embedding methods such as Structured Embedding (SE) project entity vectors into a subspace, but the meaning of such a subspace is not clear for knowledge reasoning. Some other embedding methods such as TransE use simple linear vector transforms to represent relations (such as vector addition or subtraction), which cannot handle the multiple-relation or multiple-entity matching problem. For example, multiple relations may hold between two entities, or multiple entities may share the same relation with one entity. Inspired by previous KB element structured embedding methods, we propose a new method, Bipartite Graph Network Structured Embedding (BGNSE). BGNSE combines current KB embedding methods with a bipartite graph network model, which is widely used in many fields including image data compression and collaborative filtering. BGNSE embeds each entity-relation-entity KB triplet into a bipartite graph network structure model, represents each entity by one bipartite graph layer, and represents the relation by the link weight matrix of the bipartite graph network. Based on the bipartite graph model, our proposed method has followi...
Zhou, Z, Xu, G, Zhu, X & Liu, S 2017, 'Latent factor analysis for low-dimensional implicit preference prediction', 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), 2017 International Conference on Behavioral, Economic, Socio-cultural Computing (BESC), IEEE, Poland, pp. 1-2.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. User preference prediction aims to predict a user's future preferences over a large number of items according to his/her preference history. To achieve this goal, many models have been proposed, but mainly for explicit preference data, such as 5-star ratings. Nevertheless, real-world data are often in implicit format, such as purchase actions, and the number of items is not always large. In this paper, we demonstrate the use of latent factor models for the task of predicting user preferences on an implicit, low-dimensional dataset.
Zhu, F, Zhang, G, Lu, J & Zhu, D 2017, 'First-order causal process for causal modelling with instantaneous and cross-temporal relations', 2017 International Joint Conference on Neural Networks (IJCNN), 2017 International Joint Conference on Neural Networks (IJCNN), IEEE, Anchorage, USA, pp. 380-387.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. Motivated by the real damped simple harmonic oscillator (SHO) system, in this paper, we propose a process interpretation of causality and the first-order causal process (FoCP) model for temporal causal modelling. Compared with existing causal models that are able to model feedback, such as the structural equation model (SEM) and the structural vector autoregressive (SVAR) model, the FoCP model entails a novel 2-stage evolution semantic for instantaneous and cross-temporal causal relations existing in many real-world dynamic systems. Graphical representations are developed to illustrate the causal structure compactly. Useful properties of the new model are identified and used to develop a conditional independence based algorithm for learning the causal structure from a multivariate time series dataset. Experiments on both simulated and real data validate the feasibility of the method to discover simple yet meaningful causal structures of dynamic systems.
Zhu, T, Xiong, P, Li, G, Zhou, W & Yu, PS 2017, 'Differentially Private Query Learning: from Data Publishing to Model Publishing', Proceedings - 2017 IEEE International Conference on Big Data, Big Data 2017, IEEE International Conference on Big Data, IEEE, Boston, MA, USA, pp. 1117-1122.
View/Download from: Publisher's site
View description>>
With the development of Big Data and cloud data sharing, privacy preserving data publishing has become one of the most important topics in the past decade. As one of the most influential privacy definitions, differential privacy provides a rigorous and provable privacy guarantee for data publishing. Differentially private interactive publishing achieves good performance in many applications; however, the curator has to release a large number of queries in a batch or a synthetic dataset in the Big Data era. To provide accurate non-interactive publishing results under the constraint of differential privacy, two challenges need to be tackled: one is how to decrease the correlation between large sets of queries, while the other is how to predict on fresh queries. Neither is easy to solve by the traditional differential privacy mechanism. This paper transfers the data publishing problem to a machine learning problem, in which queries are considered as training samples and a prediction model will be released rather than query results or synthetic datasets. When the model is published, it can be used to answer currently submitted queries and predict results for fresh queries from the public. Compared with the traditional method, the proposed prediction model enhances the accuracy of query results for non-interactive publishing. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of Mean Absolute Value on a large group of queries. This also suggests the learning model can successfully retain the utility of published queries while preserving privacy.
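For reference, the "traditional differential privacy mechanism" that this abstract contrasts against is typically the Laplace mechanism; the following is a minimal generic sketch (our own, with illustrative parameter values), not the paper's method.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release a numeric query answer with epsilon-differential
    privacy by adding Laplace noise of scale sensitivity/epsilon.
    For a counting query the sensitivity is 1: adding or removing
    one record changes the count by at most 1."""
    if rng is None:
        rng = np.random.default_rng()
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)
```

Each released query consumes privacy budget, so answering a large batch of queries non-interactively forces the per-query epsilon down and the noise up, which is exactly the accuracy problem the proposed model-publishing approach targets.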
Zuo, H, Zhang, G, Lu, J & Pedrycz, W 2017, 'Fuzzy rule-based transfer learning for label space adaptation', 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Naples, Italy.
View/Download from: Publisher's site
View description>>
© 2017 IEEE. As the age of big data approaches, methods of massive-scale data management are rapidly evolving. Traditional machine learning methods can no longer keep up with the exponential growth of big data; there is a common assumption in these data-driven methods that the distributions of the training data and the testing data should be equivalent. A model built using today's data will not adequately address tomorrow's classification tasks if the distribution of the data item values has changed. Transfer learning is emerging as a solution to this issue, and many methods have been proposed. Few of the existing methods, however, explicitly address the case where the label distributions in the two domains are different. This work proposes fuzzy rule-based methods to deal with transfer learning problems where the discrepancy between the two domains lies in the label spaces. The presented methods are validated on both synthetic and real-world datasets, and the experimental results verify the effectiveness of the introduced methods.