Gao, L, Yu, S, Luan, TH & Zhou, W 2015, Delay Tolerant Networks, Springer International Publishing.
Gill, AQ 2015, Adaptive Cloud Enterprise Architecture, World Scientific.
Marshall, JP, Goodman, J, Zowghi, D & da Rimini, F 2015, Disorder and the Disinformation Society, Routledge, New York, USA.
© 2015 Taylor & Francis. All rights reserved. This book is the first general social analysis that seriously considers the daily experience of information disruption and software failure within contemporary Western society. Through an investigation of informationalism, defined as a contemporary form of capitalism, it describes the social processes producing informational disorder. While most social theory sees disorder as secondary, pathological or uninteresting, this book takes disordering processes as central to social life. The book engages with theories of information society which privilege information order, offering a strong counterpoint centred on 'disinformation.' Disorder and the Disinformation Society offers a practical agenda, arguing that difficulties in producing software are inherent both to the process of developing software and to the social dynamics of informationalism. It outlines the dynamics of software failure as they impinge on information workers and on daily life, explores why computerized finance has become inherently self-disruptive, asks how digital enclosure and intellectual property create conflicts over cultural creativity and disrupt informational accuracy and scholarship, and reveals how social media can extend, but also distort, the development of social movements.
Zhang, G, Lu, J & Gao, Y 2015, Multi-Level Decision Making: Models, Methods and Applications, Springer, Berlin, Germany.
This book addresses an important decision making area—multi-level decision-making. To help readers understand the following chapters of this book, this chapter presents fundamental concepts, models, and techniques of decision making ...
Brady, F & Dyson, LE 2015, 'Why Mobile? Indigenous People and Mobile Technologies at the Edge' in Dyson, LE, Grant, S & Hendriks, M (eds), Indigenous People and Mobile Technologies, Routledge, USA, pp. 39-58.
This chapter outlines a process of introducing mobile journalism (mojo) skills and technologies to Indigenous people living in remote communities in Australia. It explores the degree to which citizens can be taught to create and publish empowering digital stories using just a smartphone. The digital era creates possibilities for new cross-cultural online communications in which participants with similar social and cultural backgrounds create content and engage in activities concerning local issues and interests of importance to them. The study also demonstrated that mojos could think beyond the patterned story structure they had been introduced to. The irony was that, because of the digital divide, many families could only view stories on the mojos' iPhones, at school, at the local media and community centers, or when the stories were broadcast on television. The degree to which objectivity tempers a subjective treatment will depend on the mojo, the story, the level of journalistic comprehension and, in a commercial situation, the brief and/or response from a commissioning body.
Braytee, A, Gill, AQ, Kennedy, PJ & Hussain, FK 2015, 'A Review and Comparison of Service E-Contract Architecture Metamodels' in Neural Information Processing, Springer International Publishing, pp. 583-595.
Devece, C, Peris-Ortiz, M, Merigó, JM & Fuster, V 2015, 'Linking the Development of Teamwork and Communication Skills in Higher Education' in Sustainable Learning in Higher Education, Springer International Publishing, pp. 63-73.
Dyson, LE 2015, 'Framing the Indigenous Mobile Revolution' in Dyson, LE, Grant, S & Hendriks, M (eds), Indigenous People and Mobile Technologies, Routledge, USA, pp. 15-36.
This chapter examines the role of mobile telephony in rural communities in Papua New Guinea (PNG) with particular emphasis on the attitudes expressed by rural villagers. It reports on a threshold study, with research conducted during the earliest stages of mobile phone adoption in ten such villages in Madang Province. Traditional communication methods vary across PNG. In Madang Province, a key method involves drumming on a large, wooden drum or slit gong known locally as a garamut. An important part of the communicative ecology in many places is now mass media, although for most people even this is not available since they live in rural poverty without electricity supply, movie theatres or transmitters. Developing nations have experienced significant growth in mobile telecommunication markets, particularly since 2000, but the Pacific has been one of the last regions of the world to experience widespread mobile phone access and uptake.
Fernandez-Llatas, C, Pileggi, SF, Ibañez, G, Valero, Z & Sala, P 2015, 'Cloud Computing for Context-Aware Enhanced m-Health Services' in Methods in Molecular Biology, Springer New York, pp. 147-155.
© Springer Science+Business Media New York 2015. m-Health services are increasing their presence in our lives due to the high penetration of new smartphone devices. This new scenario poses new challenges in terms of information accessibility, requiring new paradigms that enable applications to access data in a continuous and ubiquitous way while ensuring the level of privacy required by the kind of data accessed. This paper proposes an architecture based on cloud computing paradigms to empower new m-Health applications to enrich their results by providing secure access to user data.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Conclusions and Future Work' in Delay Tolerant Networks, Springer International Publishing, pp. 81-85.
© The Author(s) 2015. This chapter reviews the whole monograph and summarizes its main contributions. Furthermore, potential research topics and their implementation are discussed for future work.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Data Dissemination in Delay Tolerant Networks with Geographic Information' in Delay Tolerant Networks, Springer International Publishing, pp. 53-67.
© The Author(s) 2015. An “unstable network” is one of the most important features of delay tolerant networks (DTNs), where most services and data dissemination operate in a mobile environment. Geographic information is therefore an important factor that needs to be carefully analyzed to overcome this “unstable” environment. This chapter presents how geographic information can be used to improve data dissemination in DTNs and illustrates two DTN applications based on it. The first is vehicle-based DTNs, where vehicles act as the carriers that disseminate data. The second is human-associated DTNs, where humans are the carriers.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Delay Tolerant Networks Based Applications' in Delay Tolerant Networks, Springer International Publishing, pp. 9-17.
© The Author(s) 2015. This chapter surveys the typical applications of delay tolerant network techniques from four aspects: digital communication for rural areas, personal/wildlife communications, battlefield communications, disaster rescues and environmental monitoring communications.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Preface' in Delay Tolerant Networks, Springer International Publishing, pp. v-vi.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Privacy Protected Routing in Delay Tolerant Networks' in Delay Tolerant Networks, Springer International Publishing, pp. 69-79.
© The Author(s) 2015. As illustrated in Chap. 3, the most recent research on human-associated DTNs uses social information to improve performance, demonstrating that social characteristics are important factors in improving performance. However, little has been done concerning the privacy issue in DTNs. With the increasing use of social information, more and more data related to social characteristics is disclosed without permission. Because of this privacy issue, most trace providers do not want to release data with meaningful social information, as it may be used to re-identify the individuals behind the data, especially in a large dataset. For example, AOL published a set of query logs but quickly removed it due to the re-identification issues addressed in [1]. Consequently, researchers are trying to find ways to anonymize personal information while keeping the network functioning.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Routing Protocols in Delay Tolerant Networks' in Delay Tolerant Networks, Springer International Publishing, pp. 19-34.
© The Author(s) 2015. In this chapter, typical routing protocols in delay tolerant networks are presented. These routing protocols are categorized into epidemic based routing protocols, probability based routing protocols, geographic based routing protocols, social concept based routing protocols and time related routing protocols.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Social Characteristics Based Multiple Dimensional Routing Protocol in Human Associated Delay Tolerant Networks' in Delay Tolerant Networks, Springer International Publishing, pp. 35-51.
© The Author(s) 2015. This chapter introduces how to use multiple dimensional social attributes to improve data forwarding performance in human-associated delay tolerant networks. As mobile nodes in these human-associated DTNs are carried by humans, data forwarding processes are determined by human social behaviors [1]. If a node has two or more social attributes, it can be treated as a social hub (a popular node with many social attributes), sharing characteristics with other nodes in the overlapping area of those attributes.
García, JÁ, de la Cruz del Río Rama, M, González-Vázquez, E & Lindahl, JMM 2015, 'Motivations for Implementing a System of Quality Management in Spanish Thalassotherapy Centers' in Health and Wellness Tourism, Springer International Publishing, pp. 101-115.
© Springer International Publishing Switzerland 2015. This article presents results from an empirical study of 31 thalassotherapy centers out of the total national number of 44 (as of 2011). The objective was to identify motivations that drive these centers to implement and certify a Quality Management System (QMS). Following a comprehensive theoretical review, the empirical research method consisted of descriptive and factor analyses to determine the importance and structure of motivations. Results show that the key motivations driving thalassotherapy centers to implement a QMS are enhancing service quality, improving processes and procedures, and creating awareness of quality in centers.
Grochow, JA & Qiao, Y 2015, 'Polynomial-Time Isomorphism Test of Groups that are Tame Extensions' in Algorithms and Computation, Springer Berlin Heidelberg, pp. 578-589.
Lance, BJ, Touryan, J, Wang, Y-K, Lu, S-W, Chuang, C-H, Khooshabeh, P, Sajda, P, Marathe, A, Jung, T-P, Lin, C-T & McDowell, K 2015, 'Towards Serious Games for Improved BCI' in Nakatsu, R, Rauterberg, M & Ciancarini, P (eds), Handbook of Digital Games and Entertainment Technologies, Springer Singapore, Germany, pp. 1-28.
Brain-computer interface (BCI) technologies, or technologies that use online brain signal processing, hold great promise to improve human interactions with computers, their environment, and even other humans. Despite this promise, no serious BCI technologies are currently in widespread use, due to their lack of robustness. The key neural aspect of this lack of robustness is human variability, which has two main components: (1) individual differences in neural signals and (2) intraindividual variability over time. In order to develop widespread BCI technologies, it will be necessary to address this lack of robustness. However, it is currently unknown how neural variability affects BCI performance. To accomplish these goals, it is essential to obtain data from large numbers of individuals using BCI technologies over considerable lengths of time. One promising method for this is through the use of BCI technologies embedded into games with a purpose (GWAP). GWAP are a game-based form of crowdsourcing which players choose to play for enjoyment, and during which they perform key tasks that cannot be automated but are required to answer research questions. By embedding BCI paradigms in GWAP and recording neural and behavioral data, it should be possible to much more clearly understand the differences in neural signals between individuals and across different time scales, enabling the development of novel and increasingly robust adaptive BCI algorithms.
Mattiasson, B & Ye, L 2015, 'Preface.', pp. v-vi.
Pickrell, M, Bongers, B & van den Hoven, E 2015, 'Understanding Persuasion and Motivation in Interactive Stroke Rehabilitation' in Persuasive Technology, Springer International Publishing, pp. 15-26.
Rosing, MV, Scheel, JV & Gill, AQ 2015, 'Applying Agile Principles to BPM.' in Rosing, MV, Scheel, HV & Scheer, A-W (eds), The Complete Business Process Handbook, Vol. I, Morgan Kaufmann/Elsevier, USA, pp. 553-577.
Agile provides alternative ways of working. Organizations are showing significant interest in applying agile ways of working to business process management (BPM). BPM is concerned with the automation and management of business processes. Agile ways of working can be applied to BPM planning, analysis, architecture, design, implementation, operation, monitoring, and improvement. However, the application of agile thinking to BPM is not a straightforward task. There is a need to understand the basic concepts, underlying values and principles of agile ways of working. This chapter discusses the constituent building blocks of agile and explains how to establish an Agile BPM capability.
Rosing, MV, Scheer, A-W, Scheel, HV, Svendsen, ADM, Kokkonen, A, Ross, AM, Bøgebjerg, AF, Olsen, A, Dicks, A, Gill, AQ, Bach, B, Storms, BJ, Smit, C, Clemmensen, C, Swierczynski, CK, Utschig-Utschig, C, Moorcroft, D, Jones, DT, Coloma, D, Boykin, D, Muhita, DH, Gonçalves, D, Maggi, FM, Zhao, F, Senghore, F, Dandashi, F, Cummins, F, Stoffel, F, Scheel, GV, Rosing, GV, Doucet, G, Meiling, G, Jansson, GO, Scheruhn, H, Bohn, H, Man, HD, Kuil, H, Vester, HN, Gammelgaard, J, Womack, JP, Ross, JW, Greer, J, Nielsen, JT, Zachman, JA, Bertram, J, Golden, J, Rogers, JM, Erasmus, J, Scheel, JV & Waters, J 2015, 'Business Process Trends.' in Rosing, MV, Scheel, HV & Scheer, A-W (eds), The Complete Business Process Handbook, Vol. I, Morgan Kaufmann/Elsevier, USA, pp. 187-216.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Decision Making in Railway Transportation Management' in Multi-Level Decision Making, Springer Berlin Heidelberg, Germany, pp. 337-356.
Transportation management is an important application field of bi-level decision-making. For example, transportation facility and resource planning and movement, as well as staff relocation, all involve optimization and sub-optimization problems; that is, the decision entities are often at two decision levels. This chapter presents two real applications of bi-level decision techniques in railway transportation management.
Zhang, G, Lu, J & Gao, Y 2015, 'Decision Making and Decision Support Systems' in Multi-Level Decision Making, Springer Berlin Heidelberg, pp. 3-24.
© Springer-Verlag Berlin Heidelberg 2015. This book addresses an important decision making area—multi-level decision-making. To help readers understand the following chapters of this book, this chapter presents fundamental concepts, models, and techniques of decision making and decision support systems (DSS), thus providing an introduction for the remaining chapters of this book.
Zhang, G, Lu, J & Gao, Y 2015, 'Fuzzy Bi-level and Tri-level Decision Support Systems' in Multi-Level Decision Making, Springer Berlin Heidelberg, pp. 289-314.
© Springer-Verlag Berlin Heidelberg 2015. This chapter presents two multi-level decision support systems that implement related algorithms developed in previous chapters to support decision making in practice.
Zhang, G, Lu, J & Gao, Y 2015, 'Fuzzy Bi-level Decision Making' in Multi-Level Decision Making, Springer Berlin Heidelberg, pp. 175-205.
© Springer-Verlag Berlin Heidelberg 2015. Various uncertain issues naturally appear in organizational bi-level decision problems. Fuzzy sets and fuzzy systems can be used to handle uncertainties. This chapter introduces related definitions, theorems and models of fuzzy bi-level decision-making (FBLDM) and develops related algorithms to solve the uncertain issues in bi-level decision-making.
Zhang, G, Lu, J & Gao, Y 2015, 'Fuzzy Multi-objective Bi-level Decision Making' in Multi-Level Decision Making, Springer Berlin Heidelberg, pp. 207-228.
© Springer-Verlag Berlin Heidelberg 2015. Chapter 7 presented a set of solution approaches and related algorithms to solve a fuzzy bi-level programming problem. This chapter extends those results by adding the capability to handle the multi-objective issue, that is, where the leader, the follower, or both have multiple objectives.
Al-Hassan, M, Lu, H & Lu, J 2015, 'A semantic enhanced hybrid recommendation approach: A case study of e-Government tourism service recommendation system', Decision Support Systems, vol. 72, pp. 97-109.
© 2015 Elsevier B.V. All rights reserved. Recommender systems are effectively used as a personalized information filtering technology to automatically predict and identify a set of interesting items on behalf of users according to their personal needs and preferences. The Collaborative Filtering (CF) approach is commonly used in the context of recommender systems; however, obtaining better prediction accuracy and overcoming the main limitations of the standard CF recommendation algorithms, such as sparsity and cold-start item problems, remain a significant challenge. Recent developments in personalization and recommendation techniques support the use of semantic enhanced hybrid recommender systems, which incorporate an ontology-based semantic similarity measure with other recommendation approaches to improve the quality of recommendations. Consequently, this paper presents the effectiveness of utilizing semantic knowledge of items to enhance recommendation quality. It proposes a new Inferential Ontology-based Semantic Similarity (IOBSS) measure to evaluate semantic similarity between items in a specific domain of interest by taking into account their explicit hierarchical relationships, shared attributes and implicit relationships. The paper further proposes a hybrid semantic enhanced recommendation approach by combining the new IOBSS measure and the standard item-based CF approach. A set of experiments with promising results validates the effectiveness of the proposed hybrid approach, using a case study of the Australian e-Government tourism services.
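The hybrid idea this abstract describes, blending an ontology-based item-item semantic similarity with a standard item-based CF similarity, can be sketched roughly as follows. The linear blend, the weight `alpha`, and the toy matrices are illustrative assumptions, not the paper's actual IOBSS measure:

```python
import numpy as np

def hybrid_similarity(sem_sim, cf_sim, alpha=0.5):
    """Blend a semantic similarity matrix with a CF similarity matrix.
    alpha is an assumed mixing weight, not a value from the paper."""
    return alpha * sem_sim + (1 - alpha) * cf_sim

def predict_rating(user_ratings, sim, item):
    """Item-based CF prediction: similarity-weighted mean of the
    user's ratings on other items (unrated items are stored as 0)."""
    rated = np.flatnonzero(user_ratings)
    rated = rated[rated != item]
    w = sim[item, rated]
    if w.sum() == 0:
        return 0.0
    return float(w @ user_ratings[rated] / w.sum())

# Toy 3-item domain; item 2 is a "cold-start" item with no CF signal,
# so the semantic matrix supplies its similarities.
sem = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
cf = np.array([[1.0, 0.4, 0.0],
               [0.4, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
sim = hybrid_similarity(sem, cf, alpha=0.6)
pred = predict_rating(np.array([5.0, 0.0, 2.0]), sim, item=1)
```

The blend lets the semantic term cover items the CF matrix knows nothing about, which is how hybrid schemes mitigate the sparsity and cold-start problems the abstract mentions.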
Alzoubi, YI, Gill, AQ & Al-Ani, A 2015, 'Distributed Agile Development Communication: An Agile Architecture Driven Framework', Journal of Software, vol. 10, no. 6, pp. 681-694.
Agile methods depend on active communication and effective knowledge sharing among team members for producing high quality working software systems in short releases and iterations. However, effective communication in Distributed Agile Development (DAD) can be challenging due to a number of different factors, such as physical locations, multiple cultures and time zones. The agile body of knowledge mainly discusses some technology and non-technology solutions and strategies to mitigate the DAD communication challenges from a project management perspective. Nevertheless, it has recently been argued that there is a need to understand and analyze DAD communication from other related but different perspectives, such as enterprise strategy, enterprise architecture and service management. Because agile Enterprise Architecture (EA) provides a holistic view and blueprint of the whole environment in which a number of projects are developed and managed, we attempt in this study to explore the effect of agile EA on DAD communication. In particular, we propose the development of an agile EA driven approach from the architecture body of knowledge for handling the DAD communication challenges that have not been thoroughly investigated before.
Anaissi, A, Goyal, M, Catchpoole, DR, Braytee, A & Kennedy, PJ 2015, 'Case-Based Retrieval Framework for Gene Expression Data', Cancer Informatics, vol. 14, pp. CIN.S22371-CIN.S22371.
Background: The process of retrieving similar cases in a case-based reasoning system is considered a big challenge for gene expression data sets. The huge number of gene expression values generated by microarray technology leads to complex data sets, and similarity measures for high-dimensional data are problematic. Hence, gene expression similarity measurements require numerous machine-learning and data-mining techniques, such as feature selection and dimensionality reduction, to be incorporated into the retrieval process. Methods: This article proposes a case-based retrieval framework that uses a k-nearest-neighbor classifier with a weighted-feature-based similarity to retrieve previously treated patients based on their gene expression profiles. Results: The herein-proposed methodology is validated on several data sets: a childhood leukemia data set collected from The Children's Hospital at Westmead, as well as the Colon cancer, the National Cancer Institute (NCI), and the Prostate cancer data sets. Results obtained by the proposed framework in retrieving patients of the data sets who are similar to new patients are as follows: 96% accuracy on the childhood leukemia data set, 95% on the NCI data set, 93% on the Colon cancer data set, and 98% on the Prostate cancer data set. Conclusion: The designed case-based retrieval framework is an appropriate choice for retrieving previous patients who are similar to a new patient, on the basis of their gene expression data, for better diagnosis and treatment of childhood leukemia. Moreover, this framework can be applied to other gene expression data sets using some or all of its steps.
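The retrieval step this abstract describes, k-nearest-neighbor lookup under a weighted-feature similarity, can be sketched as follows. The weighted Euclidean distance, the toy expression profiles and the feature weights are illustrative assumptions; the paper's full pipeline also involves feature selection and dimensionality reduction, which are omitted here:

```python
import numpy as np

def weighted_knn_retrieve(case_base, weights, query, k=3):
    """Return the indices and distances of the k stored cases most
    similar to `query`, using a weighted Euclidean distance."""
    diffs = case_base - query                      # (n_cases, n_features)
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    order = np.argsort(dists)[:k]                  # k nearest cases
    return order, dists[order]

# Toy case base: 4 "patients" with 3 expression features each.
cases = np.array([[0.9, 0.1, 0.2],
                  [0.8, 0.2, 0.1],
                  [0.1, 0.9, 0.8],
                  [0.2, 0.8, 0.9]])
w = np.array([0.5, 0.3, 0.2])   # assumed weights, e.g. from feature selection
idx, d = weighted_knn_retrieve(cases, w, np.array([0.85, 0.15, 0.15]), k=2)
```

With these toy values the two retrieved cases are the first two patients, whose profiles resemble the query; a real deployment would learn the weights rather than fix them by hand.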
Andrews, T, Dyson, LE & Wishart, J 2015, 'Advancing ethics frameworks and scenario-based learning to support educational research into mobile learning', International Journal of Research & Method in Education, vol. 38, no. 3, pp. 320-334.
Angelini, L, Lalanne, D, van den Hoven, E, Abou Khaled, O & Mugellini, E 2015, 'Move, Hold and Touch: A Framework for Tangible Gesture Interactive Systems', Machines, vol. 3, no. 3, pp. 173-207.
Technology is spreading in our everyday world, and digital interaction beyond the screen, with real objects, allows taking advantage of our natural manipulative and communicative skills. Tangible gesture interaction takes advantage of these skills by bridging two popular domains in Human-Computer Interaction, tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding works in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines and we discuss the descriptive, evaluative and generative power of TGIF.
Ashraf, J, Chang, E, Hussain, OK & Hussain, FK 2015, 'Ontology usage analysis in the ontology lifecycle: A state-of-the-art review', Knowledge-Based Systems, vol. 80, pp. 34-47.
Ashraf, J, Hussain, OK & Hussain, FK 2015, 'Making sense from Big RDF Data: OUSAF for measuring ontology usage', Software: Practice and Experience, vol. 45, no. 8, pp. 1051-1071.
Summary: Recent growth and advancements in the Semantic Web have shifted the research focus from being knowledge-centered to data-centered. This has led to the increased use of ontologies to structurally represent the data, thereby generating huge amounts of RDF data, which we term Big RDF Data. Nevertheless, the literature lacks the tools to analyze Big RDF Data and make sense of it. Access to such tools would enable pragmatic inputs and insights for users in respect of such tasks as the usage and adoption of ontologies, their uptake by different users in the community, and the identification of prevalent patterns. This analysis, which we term Ontology Usage, is important from the viewpoint of users who need informed inputs in the various stages of the ontology engineering lifecycle, such as ontology evolution, ontology population, and ontology deployment. In this paper, we propose the Ontology USage Analysis Framework (OUSAF), which performs analysis of Ontology Usage on Big RDF Data and synthesizes the usage knowledge acquired. OUSAF provides a methodological approach to performing the various phases such as identifying, analyzing, representing, and utilizing the Ontology Usage results from Big RDF Data. We describe in detail each of those phases and the metrics required to perform the analysis of each phase. The utilization of the OUSAF results obtained by users such as data publishers and ontology developers is demonstrated by a dataset collected in the e-business domain. Copyright © 2014 John Wiley & Sons, Ltd.
Atif, A, Richards, D, Busch, P & Bilgin, A 2015, 'Assuring graduate competency: a technology acceptance model for course guide tools', Journal of Computing in Higher Education, vol. 27, no. 2, pp. 94-113.
Azadeh, A, Asadzadeh, SM, Mirseraji, GH & Saberi, M 2015, 'An emotional learning-neuro-fuzzy inference approach for optimum training and forecasting of gas consumption estimation models with cognitive data', Technological Forecasting and Social Change, vol. 91, pp. 47-63.
Azadeh, A, Mianaei, HS, Asadzadeh, SM, Saberi, M & Sheikhalishahi, M 2015, 'A flexible ANN-GA-multivariate algorithm for assessment and optimization of machinery productivity in complex production units', Journal of Manufacturing Systems, vol. 35, pp. 46-75.
Azadeh, A, Saberi, M, Rouzbahman, M & Valianpour, F 2015, 'A neuro-fuzzy algorithm for assessment of health, safety, environment and ergonomics in a large petrochemical plant', Journal of Loss Prevention in the Process Industries, vol. 34, pp. 100-114.
Azadeh, A, Sohrabi, P & Saberi, M 2015, 'A unique meta-heuristic algorithm for optimization of electricity consumption in energy-intensive industries with stochastic inputs', The International Journal of Advanced Manufacturing Technology, vol. 78, no. 9-12, pp. 1691-1703.
Azadeh, A, Zia, NP, Saberi, M, Hussain, FK, Yoon, JH, Hussain, OK & Sadri, S 2015, 'A trust-based performance measurement modeling using t-norm and t-conorm operators', Applied Soft Computing, vol. 30, pp. 491-500.
Bakker, S, Hausen, D, van den Hoven, E & Selker, T 2015, 'Preface: Designing for peripheral interaction: Seamlessly integrating interactive technology in everyday life', Interaction Design and Architecture(s), vol. 26, no. 1, pp. 3-5.
Bakker, S, van den Hoven, E & Eggen, B 2015, 'Evaluating Peripheral Interaction Design', Human-Computer Interaction, vol. 30, no. 6, pp. 473-506.
Many actions in the physical world take place in the background or periphery of people's attention. However, interactions with computing technologies usually require focused attention. This paper explores the concept of peripheral interaction: physical interaction with technology that takes place outside the focus of attention. A peripheral interaction design (called FireFlies), which supports primary school teachers in their everyday routine through open-ended light-objects on the children's desks, was deployed in four classrooms for six weeks. Results of interviews and video analysis indicate that the six participating teachers were able to physically interact with the FireFlies interactive artefact quickly and frequently without disturbing ongoing tasks. In the final weeks of the study, the teachers seemed able to easily shift their focus of attention between their main task and the interactive system. We therefore conclude that, even though it is difficult to measure people's attention, a longitudinal approach seemed effective to find indicators for peripheral interaction.
Bakker, S, van den Hoven, E & Eggen, B 2015, 'Peripheral interaction: characteristics and considerations', Personal and Ubiquitous Computing, vol. 19, no. 1, pp. 239-254.
In everyday life, we are able to perceive information and perform physical actions in the background or periphery of attention. Inspired by this observation, several researchers have studied interactive systems that display digital information in the periphery of attention. To broaden the scope of this research direction, a few recent studies have focused on interactive systems that can not only be perceived in the background, but also enable users to physically interact with digital information in their periphery. Such peripheral interaction designs can support computing technology to fluently embed in, and become a meaningful part of, people's everyday routines. With the increasing ubiquity of technology in our everyday environment, we believe that this direction is highly relevant nowadays. This paper presents an in-depth analysis of three case studies on peripheral interaction. These case studies involved the design and development of peripheral interactive systems and the deployment of these systems in the real context of use for a number of weeks. Based on the insights gained through these case studies, we discuss generalized characteristics and considerations for peripheral interaction design and evaluation. The aim of the work presented in this paper is to support interaction design researchers and practitioners in anticipating and facilitating peripheral interaction with the designs they are evaluating or developing.
Ban, L, Huo, H & Xu, B 2015, 'Discovery of hot regions about crowd activities based on mobility data', Journal of University of Science and Technology of China, vol. 45, no. 10, pp. 829-835.
View/Download from: Publisher's site
View description>>
Mobility data records changes in the location and time of crowd activities, revealing semantic knowledge about human mobility. From the perspective of regional semantic knowledge, mining the hot regions visited frequently by moving crowds is essential to understanding regional characteristics in smart city applications. This paper studied how to discover hot regions and how to constrain their coverage size. Based on an analysis of the location sequences of moving crowds, a method for discovering hot regions based on a kernel function was proposed. The method uses a grid as the spatial data indexing structure together with a Top-k sorting method. A discovery algorithm for hot regions was then presented based on this method. Finally, experimental results on practical datasets validate the feasibility and effectiveness of the method.
Bano, M & Zowghi, D 2015, 'A systematic review on the relationship between user involvement and system success', INFORMATION AND SOFTWARE TECHNOLOGY, vol. 58, pp. 148-169.
View/Download from: Publisher's site
View description>>
© 2014 Elsevier B.V. All rights reserved. Context: For more than four decades it has been intuitively accepted that user involvement (UI) during the system development lifecycle leads to system success. However, when researchers have evaluated the user involvement and system success (UI-SS) relationship empirically, the results have not always been positive. Objective: Our objective was to explore the UI-SS relationship by synthesizing the results of all the studies that have empirically investigated this complex phenomenon. Method: We performed a Systematic Literature Review (SLR) following the steps provided in the guidelines of Evidence Based Software Engineering. From the resulting studies we extracted data to answer our nine research questions related to the UI-SS relationship: the identification of users, perspectives of UI, benefits, problems and challenges of UI, degree and level of UI, relevance of the stages of the software development lifecycle (SDLC), and the research method employed on the UI-SS relationship. Results: Our systematic review resulted in selecting 87 empirical studies published during the period 1980-2012. Among the 87 studies reviewed, 52 reported that UI positively contributes to system success, 12 suggested a negative contribution and 23 were uncertain. The UI-SS relationship is neither direct nor binary, and various confounding factors play a role. The identification of users, their degree/level of involvement, the stage of the SDLC for UI, and the choice of research method have all been claimed to affect the UI-SS relationship. However, there is not sufficient empirical evidence available to support these claims. Conclusion: Our results have revealed that UI does contribute positively to system success. But it is a double-edged sword, and if not managed carefully it may cause more problems than benefits. Based on the analysis of 87 studies, we were able to identify factors for effective management of UI alluding to the causes for inconsi...
Behbood, V, Lu, J, Zhang, G & Pedrycz, W 2015, 'Multistep Fuzzy Bridged Refinement Domain Adaptation Algorithm and Its Application to Bank Failure Prediction', IEEE TRANSACTIONS ON FUZZY SYSTEMS, vol. 23, no. 6, pp. 1917-1935.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Machine learning plays an important role in data classification and data-based prediction. In some real-world applications, however, the training data (coming from the source domain) and test data (from the target domain) come from different domains or time periods, and this may result in different distributions of some features. Moreover, the values of the features and/or labels of the datasets might be nonnumeric and involve vague values. Traditional learning-based prediction and classification methods cannot handle these two issues. In this study, we propose a multistep fuzzy bridged refinement domain adaptation algorithm, which offers an effective way to deal with both issues. It utilizes a concept of similarity to modify the labels of the target instances that were initially predicted by a shift-unaware model. It then refines the labels using the instances that are most similar to a given target instance. These instances are extracted from mixture domains composed of the source and target domains. The proposed algorithm refines the labels on the basis of the data alone, thus performing completely independently of the shift-unaware prediction model. The algorithm uses a fuzzy set-based approach to deal with the vague values of the features and labels. Four different datasets are used in the experiments to validate the proposed algorithm. The results, which are compared with those generated by existing domain adaptation methods, demonstrate a significant improvement in prediction accuracy on the above-mentioned datasets.
Bonilla, CA, Merigó, JM & Torres-Abad, C 2015, 'Economics in Latin America: a bibliometric analysis', Scientometrics, vol. 105, no. 2, pp. 1239-1252.
View/Download from: Publisher's site
View description>>
Bibliometrics is a research field that quantitatively studies bibliographic material. This study analyzes the academic research in economics developed in Latin America between 1994 and 2013. The article uses the Web of Science database to collect the information and provides several bibliometric indicators, including the total number of publications and citations, and the h-index. The results indicate that Brazil, Mexico, Chile, Argentina and Colombia are the only countries with a significant number of publications in economics in Web of Science, although Costa Rica and Uruguay have considerable results in per capita terms. The annual evolution shows a significant increase during the last five years that seems likely to continue, probably with the objective of reaching standards similar to those of the most competitive countries around the world. The results also show that development, agricultural and health economics are the most significant topics in the region.
Burdon, S & Dovey, KA 2015, 'Exploring the cultural basis of innovation', Journal of Innovation Management, vol. 3, no. 3, pp. 20-34.
View/Download from: Publisher's site
View description>>
The paper explores the relationship between leadership, culture and innovation. Through an analysis of four enterprises, voted by their peers as having strong innovation-friendly cultures, we explicate the assumptions embedded in these innovation-supporting cultures, and outline the leadership practices that have created them. By locating the study within the interpretivist research paradigm and adopting the 'practice turn' perspective that has characterised recent leadership research, this study has been able to acknowledge and address the political dynamics involved in the creation of innovation-conducive cultures.
Casanovas, M, Torres-Martínez, A & Merigó, JM 2015, 'Decision making processes of non-life insurance pricing using fuzzy logic and OWA operators', Economic Computation and Economic Cybernetics Studies and Research, vol. 49, no. 2, pp. 1-19.
View description>>
Setting a commercial premium for an insurance policy is a complex process, even though statistical tools provide fairly reliable information on the behavior of the frequency and cost of claims, differentiated by the risk profiles reflected in pure premium calculations. Lately, however, setting the price the customer must pay has not been easy because of the uncertainty of having to use subjective criteria to analyze how demand may be affected by different price alternatives and economic situations. This article aims to develop this process in two stages. The first stage applies the opinion of experts to uncertain numbers and Ordered Weighted Average (OWA) operators to assess the overall benefits of each profile and choose the best alternative. The second stage, which uses Heavy OWA (HOWA) operators, is based on the results obtained in the first stage and chooses a general price alternative for all profiles.
Chelliah, J, Sood, S & Scholfield, S 2015, 'Realising the strategic value of RFID in academic libraries: a case study of the University of Technology Sydney', AUSTRALIAN LIBRARY JOURNAL, vol. 64, no. 2, pp. 113-127.
View/Download from: Publisher's site
View description>>
© 2015 Australian Library & Information Association. Radio Frequency Identification (RFID) technology is being increasingly implemented in academic libraries due to a promise of increased collections management efficiency. This paper reports on the recent implementation of RFID technology in the library at the University of Technology Sydney, providing insights into the change management process of RFID implementation. The paper focuses on the implications of the implementation and indigenisation of RFID technology for three specific and symbiotic areas of the library: people, processes and technology. Data from interviews with eight participants involved at various levels of the academic library were collected. This paper develops a best practice model through the insights gained by the people involved in the RFID implementation. The case study posits the dynamic relationships between people, processes and technology as greatly impacted by the implementation process, and analyses the divergence between projected and actual outcomes in the implementation process.
Chen, H, Zhang, G, Zhu, D & Lu, J 2015, 'A patent time series processing component for technology intelligence by trend identification functionality', NEURAL COMPUTING & APPLICATIONS, vol. 26, no. 2, pp. 345-353.
View/Download from: Publisher's site
Chen, J, Liu, B, Zhou, H, Gui, L, Liu, N & Wu, Y 2015, 'Providing Vehicular Infotainment Service Using VHF/UHF TV Bands via Spatial Spectrum Reuse', IEEE Transactions on Broadcasting, vol. 61, no. 2, pp. 279-289.
View/Download from: Publisher's site
Chen, Y, Zhang, X, Feng, Y, Liang, J & Chen, H 2015, 'Sunburst with ordered nodes based on hierarchical clustering: a visual analyzing method for associated hierarchical pesticide residue data', Journal of Visualization, vol. 18, no. 2, pp. 237-254.
View/Download from: Publisher's site
View description>>
© 2015, The Visualization Society of Japan. According to the characteristics of pesticide residue data and the analysis requirements of the food safety field, we present a visual analysis method for associated hierarchical data, called sunburst with ordered nodes based on hierarchical clustering (SONHC). SONHC arranges the leaf nodes of the sunburst in order using a hierarchical clustering algorithm, places the associated dataset as a node in the center of the sunburst, and connects it with the associated leaf nodes using colored lines. It can therefore present not only two hierarchical structures but also the relationships between them. Based on SONHC and several interaction techniques (clicking, contraction and expansion, etc.), we developed an associated visual analysis system (AVAS) for pesticide residue detection data, which helps users inspect the hierarchical structures of pesticides and agricultural products, and explore the associations between pesticides and agricultural products as well as between different pesticides. The results of a user experience test showed that the SONHC algorithm outperforms the SA and SR algorithms in ULE and ULE's variance. The AVAS system is effective in helping users analyze pesticide residue data. Furthermore, the SONHC algorithm can also be adopted to analyze associated hierarchical data in other fields, such as finance, insurance and e-commerce.
Chuang, C-H, Huang, C-S, Ko, L-W & Lin, C-T 2015, 'An EEG-based perceptual function integration network for application to drowsy driving', Knowledge-Based Systems, vol. 80, pp. 143-152.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. All rights reserved. Drowsy driving is among the most critical causes of fatal crashes. Thus, the development of an effective algorithm for detecting a driver's cognitive state demands immediate attention. For decades, studies using electroencephalography have observed clear evidence that the brain's rhythmic activities fluctuate between alertness and drowsiness. Recognition of this physiological signal is the major consideration in neural engineering for designing a feasible countermeasure. This study proposed a perceptual function integration system which uses spectral features from multiple independent brain sources to recognize the driver's vigilance state. The analysis of brain spectral dynamics provided physiological evidence that the activities of the multiple cortical sources were highly related to changes in the vigilance state. The system achieved a robust and improved accuracy of as much as 88%, higher than any result obtained by a single-source approach.
Deng, S, Wang, D, Li, X & Xu, G 2015, 'Exploring user emotion in microblogs for music recommendation', Expert Systems with Applications, vol. 42, no. 23, pp. 9284-9293.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Context-aware recommendation has become increasingly important and popular in recent years, as users are immersed in enormous music content and have difficulty making their choices. User emotion, as one of the most important contexts, has the potential to improve music recommendation, but has not yet been fully explored due to the great difficulty of emotion acquisition. This article utilizes users' microblogs to extract their emotions at different granularity levels and during different time windows. The approach then correlates three elements: the user, the music, and the user's emotion when he/she is listening to the music piece. Based on the associations extracted from a dataset crawled from a Chinese Twitter service, we develop several emotion-aware methods to perform music recommendation. We conduct a series of experiments and show that considering user emotional context can indeed improve recommendation performance in terms of hit rate, precision, recall, and F1 score.
Dong, H & Hussain, FK 2015, 'Service-requester-centered service selection and ranking model for digital transportation ecosystems', COMPUTING, vol. 97, no. 1, pp. 79-102.
View/Download from: Publisher's site
View description>>
Transport services are a fundamental utility that drives human society. A Digital Transportation Ecosystem is a sub-system of the Digital Ecosystem, which uses ICT resources to facilitate transport service transactions. This research focuses on the selection and ranking of online transport service information. Previous research in this area has been unable to achieve satisfactory performance or give service requesters sufficient freedom to rank services based on their preferences. 'User-centered design' is a broad term describing how end-users influence system design. In this research, we propose a Service-Requester-Centered Service Selection and Ranking Model, guided by the philosophy of user-centered design. Three major sub-models are involved in this model: a model for assisting service requesters to search appropriate transport service ontology concepts to denote their service requests, a model for enhancing the accuracy of automatic transport service concept recommendation by observing service requesters' click behaviours, and a model for enabling service-requester-preference-based service ranking. Implementations and empirical experiments are conducted to evaluate the three sub-models, and the conclusions drawn, along with directions for future work, are outlined.
Dong, Y, Fan, Z-P & Yu, S 2015, 'Consensus Building in a Local Context for the AHP-GDM With the Individual Numerical Scale and Prioritization Method', IEEE Transactions on Fuzzy Systems, vol. 23, no. 2, pp. 354-368.
View/Download from: Publisher's site
Dovey, K & Rembach, M 2015, 'Invisible practices; innovative outcomes: intrapreneurship within the academy', Action Learning: Research and Practice, vol. 12, no. 3, pp. 276-292.
View/Download from: Publisher's site
View description>>
Across the world, higher education is facing new challenges as governments cut subsidies, new technologies enable ‘massively open’ online courses, students are accessed from global locations, and the centuries-old mission of universities is commercialised. In spite of these profound changes, most institutions of higher education have remained unaltered in terms of how they are structured and governed. Similarly, the consequent commodification of knowledge has not been challenged in general even though the lack of the deep knowledge that underpins competent professional practice is periodically lamented. This paper outlines an experiment in an alternative form of academic programme management; one which is perhaps more appropriate in current times. It describes an initiative at an Australian university where an action-research approach is being used to engage the full spectrum of stakeholders in the governance and execution of the strategic intent of a particular ‘flagship’ postgraduate programme. In this way, it demonstrates how knowing (knowledge manifesting in practice) is achieved through a form of praxis that continuously refines, through interactive ‘creatively abrasive’ forums, the enactment of mission-pertinent practices. However, as an initiative that threatens the political status quo within the university, much of the action, until recently, has had to be conducted ‘invisibly’.
Fan, H, Hussain, FK & Hussain, OK 2015, 'Semantic client-side approach for web personalization of SaaS-based cloud services', CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, vol. 27, no. 8, pp. 2144-2169.
View/Download from: Publisher's site
Fan, H, Hussain, FK, Younas, M & Hussain, OK 2015, 'An integrated personalization framework for SaaS-based cloud services', Future Generation Computer Systems, vol. 53, pp. 157-173.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. Software as a Service (SaaS) has recently emerged as one of the most popular service delivery models in cloud computing. The number of SaaS services and their users is continuously increasing, and new SaaS service providers emerge on a regular basis. As users are exposed to a wide range of SaaS services, they may soon become more demanding when receiving/consuming such services. As in web and/or mobile applications, personalization can play a critical role in modern SaaS-based cloud services. This paper introduces a fully designed, cloud-enabled personalization framework to facilitate the collection of preferences and the delivery of corresponding SaaS services. The approach we adopt in the design and development of the proposed framework is to synthesize various models and techniques in a novel way. The objective is to provide an integrated and structured environment wherein SaaS services can be provisioned with enhanced personalization quality and performance.
Frati, F, Gaspers, S, Gudmundsson, J & Mathieson, L 2015, 'Augmenting Graphs to Minimize the Diameter', Algorithmica, vol. 72, no. 4, pp. 995-1010.
View/Download from: Publisher's site
View description>>
© 2014, Springer Science+Business Media New York. We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem.
Gao, F, Musial, K, Cooper, C & Tsoka, S 2015, 'Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics', Scientific Programming, vol. 2015, pp. 1-13.
View/Download from: Publisher's site
View description>>
Currently, we are experiencing a rapid growth in the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges that we face when trying to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods have not been comprehensively analysed. In this paper we investigate the correlation between network metrics and the accuracy of different prediction methods. We selected six time-stamped real-world social networks and the ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish "prediction friendly" networks, for which most of the prediction methods give good performance, as well as "prediction unfriendly" networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and the prediction accuracy of prediction methods may form the basis of a metalearning system which, based on network characteristics, would be able to recommend the right prediction method for a given network.
Garcia, JA & Felix Navarro, K 2015, 'StepKinnection: A Fall Prevention Game Mindfully Designed for the Elderly.', Stud Health Technol Inform, vol. 214, pp. 43-49.
View/Download from: Publisher's site
View description>>
This paper presents the StepKinnection game, a Kinect-driven stepping game for the elderly that delivers stepping exercises to train specific cognitive and physical abilities associated with falls. This system combines a set of suitable age-related features, meaningful exercise routines and an embedded clinical test for fall risk assessment. The combination of these three aspects makes the game potentially useful in practice, as the game is appealing to the elderly cohort, trains one of the most important abilities for preventing falls, and at the same time allows for continuous assessment of health outcomes; characteristics available neither in the literature nor in current commercial games.
Gill, A, Bunker, D & Seltsikas, P 2015, 'Moving Forward: Emerging Themes in Financial Services Technologies’ Adoption', Communications of the Association for Information Systems, vol. 36, pp. 205-230.
View/Download from: Publisher's site
View description>>
© 2015 by the Association for Information Systems. Financial services technologies (FST) are core to the continuous transformation of financial services organizations (FSO). To date, however, there has been a lack of empirical research into FST adoption against the backdrop of the recent financial crisis. In this paper, we re-examine how FSO are currently positioned to take advantage of emerging FST. Note that, in this paper, we look forward rather than provide a commentary on the state of the art in technology adoption research. We conducted this research by applying an exploratory qualitative study method: we analyzed interview transcripts from thirty recent interviews of FSO technology executives and CIOs by using a thematic network analysis tool. This analysis uncovered nineteen basic, eight organizing, and two global FST adoption research themes along with their links to FST adoption objectives, challenges, customer centricity, human resources, outsourcing, and overall IT strategy maintenance. This research has both practical and theoretical research implications and serves as a resource base for FSO and researchers to set future research priorities and directions. We intend for the emerging themes that we present in this paper to facilitate research directions by shedding light on the areas of greatest value and potential return in FST adoption.
Gill, AQ 2015, 'Agile enterprise architecture modelling: Evaluating the applicability and integration of six modelling standards.', Inf. Softw. Technol., vol. 67, pp. 196-206.
View/Download from: Publisher's site
Gill, AQ 2015, 'Distributed Agile Development: Applying a Coverage Analysis Approach to the Evaluation of a Communication Technology Assessment Tool.', Int. J. e Collab., vol. 11, no. 1, pp. 57-76.
View/Download from: Publisher's site
View description>>
Copyright © 2015, IGI Global. Organizations have shown a significant interest in the adoption of emerging social technologies to support communication and collaboration needs of their Distributed Agile or Adaptive Development Environment (DADE). However, the challenge is how best to assess contemporary social technologies for supporting communication and collaboration in the DADE. Here, a communication technology assessment tool, called CTAT, is developed as a part of the Adaptive Enterprise Service System (AESS) toolkit by using the design research approach. This paper presents the evaluation of the CTAT construct through its use in the assessment of three social technologies within the context of a DADE. The results of this evaluation indicate that CTAT is shown to be useful, for example, when assessing a particular social technology for a specific DADE communication and collaboration context. The CTAT is intended to be used by senior developers for assessing social technologies for their DADE context.
Gill, AQ 2015, 'Social architecture considerations in assessing social media for emergency information management applications', Australian Journal of Emergency Management, vol. 30, no. 1, pp. 17-21.
View description>>
The emergency management industry is showing a significant interest in the adoption of social media for sourcing and disseminating crisis information. The emergency management industry needs to identify social architecture concerns when considering the adoption of a specific social media technology. Social architecture describes the properties and environment of a social system such as the 'emergency management system'. This paper identifies a set of 21 social architecture concerns based on recent qualitative research. This set of social architecture concerns can be used as a criteria list to assess the effectiveness of social media platforms for emergency information management applications.
Gill, AQ & Qureshi, MA 2015, 'Adaptive Enterprise Architecture Modelling.', J. Softw., vol. 10, no. 5, pp. 628-638.
View/Download from: Publisher's site
View description>>
An agile or adaptive enterprise architecture driven software development approach requires a modelling standard to describe the existing and to-be-developed artifacts both at the high enterprise level and at the low, detailed level. However, a single modelling standard may not be usable off-the-shelf to fully support the modelling needs of adaptive enterprise architecture driven software development. The modelling standards need to be systematically analyzed and integrated for a particular modelling context. This paper reviews two well-known modelling standards, ArchiMate and BPMN, by using the interoperability research framework. Based on the syntax, semantics and structural analysis of these two modelling standards' metamodels, it proposes a hybrid adaptive enterprise architecture modelling approach for describing and analysing artifacts both at the high enterprise level and at the low, detailed level for a particular context. This paper has both theoretical and practical implications for researchers and practitioners seeking to integrate various modelling standards.
Gill, AQ, Alam, SL & Eustace, J 2015, 'Social Architecture: An Emergency Management Case Study.', Australas. J. Inf. Syst., vol. 19, pp. 23-40.
View description>>
© 2015 Gill, Alam & Eustace. Emergency management agencies are progressively using social media for the sourcing and distribution of disaster information. Emergency management agencies are often unsure as to how to best identify and assess social media concerns (e.g. information security, trust) which must be addressed to develop a social media-enabled disaster information management environment. This paper adopts the Social Architecture Viewpoint Assessment (SAVA) framework for identifying and assessing social media concerns from four different viewpoints: IT, Value, Resource and Management. This paper demonstrates the use of the SAVA framework in the context of an in-depth empirical case study of an Australian emergency management agency. The results of this study indicate that the SAVA framework is useful for emergency information management managers in identifying and assessing social media concerns.
Goodswen, SJ, Barratt, JLN, Kennedy, PJ & Ellis, JT 2015, 'Improving the gene structure annotation of the apicomplexan parasite Neospora caninum fulfils a vital requirement towards an in silico-derived vaccine', INTERNATIONAL JOURNAL FOR PARASITOLOGY, vol. 45, no. 5, pp. 305-318.
View/Download from: Publisher's site
View description>>
© 2015 Australian Society for Parasitology Inc. Neospora caninum is an apicomplexan parasite which can cause abortion in cattle, instigating a major economic burden. Vaccination has been proposed as the most cost-effective control measure to alleviate this burden. Consequently the overriding aspiration for N. caninum research is the identification and subsequent evaluation of vaccine candidates in animal models. To save time, cost and effort, it is now feasible to use an in silico approach for vaccine candidate prediction. Precise protein sequences, derived from the correct open reading frame, are paramount and arguably the most important factor determining the success or failure of this approach. The challenge is that publicly available N. caninum sequences are mostly derived from gene predictions. Annotation inaccuracies can lead bioinformatics programs to erroneously predict vaccine candidates. This study evaluates the current N. caninum annotation for potential inaccuracies. Comparisons with annotation from a closely related pathogen, Toxoplasma gondii, are also made to distinguish patterns of inconsistency. More importantly, an mRNA sequencing (RNA-Seq) experiment is used to validate the annotation. Potential discrepancies originating from a questionable start codon context and exon boundaries were identified in 1943 protein coding sequences. We conclude, where experimental data were available, that the majority of N. caninum gene sequences were reliably predicted. Nevertheless, almost 28% of genes were identified as questionable. Given the limitations of RNA-Seq, the intention of this study was not to replace the existing annotation but to support or oppose particular aspects of it. Ideally, many studies aimed at improving the annotation are required to build a consensus. We believe this study, in providing a new resource on gene structure and annotation, is a worthy contributor to this endeavour.
Han, J, Lu, J, Hu, Y & Zhang, G 2015, 'Tri-level decision-making with multiple followers: Model, algorithm and case study', INFORMATION SCIENCES, vol. 311, pp. 182-204.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Inc. Tri-level decision-making arises to address compromises among interacting decision entities distributed throughout a three-level hierarchy; these entities are respectively termed the top-level leader, the middle-level follower and the bottom-level follower. This study considers an uncooperative situation where multiple followers at the same (middle or bottom) level make their individual decisions independently but consider the decision results of their counterparts as references through information exchanged among themselves. This situation is called a reference-based uncooperative multi-follower tri-level (MFTL) decision problem which appears in many real-world applications. To solve this problem, we need to find an optimal solution achieving both the Stackelberg equilibrium in the three-level vertical structure and the Nash equilibrium among multiple followers at the same horizontal level. In this paper, we first propose a general linear MFTL decision model for this situation. We then develop a MFTL Kth-Best algorithm to find an optimal solution to the model. Since the optimal solution means a compromised result in the uncooperative situation and it is often imprecise or ambiguous for decision entities to identify their related satisfaction, we use a fuzzy programming approach to characterize and evaluate the solution obtained. Lastly, a real-world case study on production-inventory planning illustrates the effectiveness of the proposed MFTL decision techniques.
Hasselmann, K, Cremades, R, Filatova, T, Hewitt, R, Jaeger, C, Kovalevsky, D, Voinov, A & Winder, N 2015, 'Free-riders to forerunners', Nature Geoscience, vol. 8, no. 12, pp. 895-898.
View/Download from: Publisher's site
Hayes, G, Khazaei, H, El-Khatib, K, McGregor, C & Eklund, JM 2015, 'Design and analytical model of a platform-as-a-service cloud for healthcare', Journal of Internet Technology, vol. 16, no. 1, pp. 139-149.
View/Download from: Publisher's site
View description>>
Recent progress in health informatics data analysis has been impeded by a lack of hospital resources and computation power. To remedy this, some researchers have proposed a cloud-based web service patient monitoring system capable of providing offsite collection, analysis, and dissemination of remote patient physiological data. Unfortunately, some of these cloud services are not effective without utilizing next-generation hardware management techniques. In order to make cloud-based patient monitoring a reality, this paper shows how leveraging an underlying platform-as-a-service (PaaS) cloud model can provide integration with web service patient monitoring systems while providing high availability, scalability, and security. We also present an analytical model of the proposed platform and obtain performance measures such as delay in servicing as well as reject probability.
Hossain, L, Karimi, F & Wigand, RT 2015, 'Dynamics of a Global Zoonotic Research Network Over 33 Years (1980–2012)', Disaster Medicine and Public Health Preparedness, vol. 9, no. 5, pp. 496-503.
View/Download from: Publisher's site
View description>>
Objective: The increasing rate of outbreaks in humans of zoonotic diseases requires detailed examination of the education, research, and practice of animal health and its connection to human health. This study investigated the collaboration network of different fields engaged in conducting zoonotic research from a transdisciplinary perspective. Methods: Examination of the dynamics of this network for a 33-year period from 1980 to 2012 is presented through the development of a large scientometric database from Scopus. In our analyses we compared several properties of these networks, including density, clustering coefficient, giant component, and centrality measures over time. We also elicited patterns in different fields of study collaborating with various other fields for zoonotic research. Results: We discovered that the strongest collaborations across disciplines are formed among the fields of medicine; biochemistry, genetics, and molecular biology; immunology and microbiology; veterinary; agricultural and biological sciences; and social sciences. Furthermore, the affiliation network is growing overall in terms of collaborative research among different fields of study such that more than two-thirds of all possible collaboration links among disciplines have already been formed. Conclusions: Our findings indicate that zoonotic research scientists in different fields (human or animal health, social science, earth and environmental sciences, engineering) have been actively collaborating with each other over the past 1...
Hossain, L, Karimi, F, Wigand, RT & Crawford, JW 2015, 'Evolutionary longitudinal network dynamics of global zoonotic research', Scientometrics, vol. 103, no. 2, pp. 337-353.
View/Download from: Publisher's site
Hoven, E & van Bergen, T 2015, 'Tangible Cooperative Gestures: Improving Control and Initiative in Digital Photo Sharing', Machines, vol. 3, no. 4, pp. 268-295.
View/Download from: Publisher's site
View description>>
This paper focuses on co-present digital photo sharing on a notebook and investigates how this could be supported. While analyzing the current digital photo sharing situation we noticed that there was a high threshold for visitors to take control of the personal computer of the photo owner, resulting in inequity of participation. It was assumed that visitors would have the opportunity to interact with the notebook more freely if this threshold was lowered by distributing the user interface and creating a more public, instead of personal, interaction space. This, in turn, could make them feel more involved and in control during a session, creating a more enjoyable experience. To test these assumptions a design prototype was created that stimulates participants to use tangible artifacts for cooperative gestures, a promising direction for the future of HCI. The situation with the cooperative gestures was compared with the regular digital photo sharing situation, which makes use of a keyboard. In dyads, visitors felt more involved and in control in the design prototype cooperative gestures condition (especially during storytelling), resulting in a more enjoyable digital photo sharing experience.
Huang, C-S, Pal, NR, Chuang, C-H & Lin, C-T 2015, 'Identifying changes in EEG information transfer during drowsy driving by transfer entropy', Frontiers in Human Neuroscience, vol. 9, no. OCTOBER.
View/Download from: Publisher's site
View description>>
© 2015 Huang, Pal, Chuang and Lin. Drowsy driving is a major cause of automobile accidents. Previous studies used neuroimaging based approaches such as analysis of electroencephalogram (EEG) activities to understand the brain dynamics of different cortical regions during drowsy driving. However, the coupling between brain regions responding to this vigilance change is still unclear. To have a comprehensive understanding of neural mechanisms underlying drowsy driving, in this study we use transfer entropy, a model-free measure of effective connectivity based on information theory. We investigate the pattern of information transfer between brain regions when the vigilance level, which is derived from the driving performance, changes from alertness to drowsiness. Results show that the couplings between pairs of frontal, central, and parietal areas increased at the intermediate level of vigilance, which suggests that an enhancement of the cortico-cortical interaction is necessary to maintain the task performance and prevent behavioral lapses. Additionally, the occipital-related connectivity magnitudes monotonically decrease as the vigilance level declines, which further supports the cortical gating of sensory stimuli during drowsiness. Neurophysiological evidence of mutual relationships between brain regions measured by transfer entropy might enhance the understanding of cortico-cortical communication during drowsy driving.
Hussain, OK, Zia-ur-Rahman, Hussain, FK, Singh, J, Janjua, NK & Chang, E 2015, 'A User-Based Early Warning Service Management Framework in Cloud Computing', COMPUTER JOURNAL, vol. 58, no. 3, pp. 472-496.
View/Download from: Publisher's site
View description>>
Cloud computing is a very attractive option for service users and service providers for their businesses because of the benefits it provides. A major concern among service users regarding cloud adoption, however, is the unpredictability of performance in relation to the services provided. Even though guarantees in the form of service-level agreements are provided to users by service providers, real-time service-level degradability remains a critical concern; hence, there is a need for an approach that assists users to manage a service before it fails. The approaches proposed in the literature assess and evaluate the performance of the cloud infrastructure of providers, but this does not guarantee that a given service instance will meet the desired quality level because there may be factors other than the provider's infrastructure that will affect the level of quality of the service instance. In this paper, we present an approach that measures the quality of a service instance in real time and provides important analysis for service users as to whether they will achieve their desired objectives. This analysis also constitutes an important input for service users in the assessment and management of a service to avoid the failure to achieve objectives.
Ivanyos, G, Karpinski, M, Qiao, Y & Santha, M 2015, 'Generalized Wong sequences and their applications to Edmonds' problems', Journal of Computer and System Sciences, vol. 81, no. 7, pp. 1373-1386.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Inc. Given a linear subspace B of the n×n matrices over some field F, we consider the following problems: symbolic matrix rank (SMR) asks to determine the maximum rank among matrices in B, while symbolic determinant identity testing (SDIT) asks to decide whether there exists a nonsingular matrix in B. The constructive versions of these problems ask to find a matrix of maximum rank, respectively a nonsingular matrix, if there exists one. Our first algorithm solves the constructive SMR when B is spanned by unknown rank one matrices, answering an open question of Gurvits. Our second algorithm solves the constructive SDIT when B is spanned by triangularizable matrices. (The triangularization is not given explicitly.) Both algorithms work over fields of size ≥n+1. Our framework is based on generalizing Wong sequences, a classical method to deal with pairs of matrices, to pairs of matrix spaces.
Janjua, NK, Hussain, OK, Hussain, FK & Chang, E 2015, 'Philosophical and Logic-Based Argumentation-Driven Reasoning Approaches and their Realization on the WWW: A Survey', The Computer Journal, vol. 58, no. 9, pp. 1967-1999.
View/Download from: Publisher's site
View description>>
Argumentation is the practice of systematic conscious reasoning involving the construction and evaluation of arguments to justify or support a particular conclusion. This article discusses, compares, contrasts and categorizes existing argumentation-based frameworks and applications as either philosophical or logic-based, and provides critical analysis that emphasizes the structure of arguments and the interactions between them. This review compares and contrasts the frameworks and applications of argumentation-based approaches on Web 2.0 and the Semantic Web, and subsequently highlights the importance and challenges of attaining monological argumentation on the Semantic Web.
Jeong, Y-S, Shyu, M-L, Xu, G & Wagner, RR 2015, 'Guest Editorial: Advanced Technologies and Services for Multimedia Big Data Processing', Multimedia Tools and Applications, vol. 74, no. 10, pp. 3413-3418.
View/Download from: Publisher's site
Jiang, H, Wang, J, Dong, Y & Lu, H 2015, 'Comprehensive assessment of wind resources and the low-carbon economy: An empirical study in the Alxa and Xilin Gol Leagues of inner Mongolia, China', Renewable and Sustainable Energy Reviews, vol. 50, pp. 1304-1319.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. Due to atmospheric pollution from fossil fuels, the reduction of wind turbine costs, and the rise of the low-carbon economy, wind energy conversion systems have become one of the most significant forms of new energy in China. Therefore, to reduce investment risk and maximize profits, it is necessary to assess wind resources before building large wind farms. This paper develops a comprehensive system containing four steps to evaluate the potential of wind resources at two sites in Xilin Gol League and at two additional sites in Alxa League of Inner Mongolia, China: (1) By calculating the total scores of three indexes, including the effective wind power density (EWPD), wind available time (WAT) and population density (PD), an index method is applied to assess the theoretical wind energy potential from 2001 to 2010. (2) To judge the fluctuations in the wind speed, the Fisher optimal partition method and the Jonckheere-Terpstra test are used to analyze the changes in the average monthly and yearly wind speeds from 2001 to 2010. (3) Three probability density functions, i.e., Weibull, Gamma and Lognormal, are used to assess the wind speed frequency distribution in 2010. To enhance the evaluation accuracy, three intelligent optimization parameter estimation algorithms, i.e., the particle swarm optimization algorithm (PSO), differential evolution algorithm (DE) and ant colony algorithm (ACO), are used to estimate the parameters of these distributions. (4) It is helpful to analyze the wind characteristics when assessing wind resources and selecting wind turbines. Therefore, the optimal frequency distribution based on the best parameter estimation method can be chosen to calculate the wind power density, the most probable wind speed and the wind speed carrying the maximum energy. The experimental results show that Site 1 and Site 4 are more suitable for large wind farms than Site 2 or Site 3.
Jiang, J, Wen, S, Yu, S, Xiang, Y & Zhou, W 2015, 'K-Center: An Approach on the Multi-Source Identification of Information Diffusion', IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, vol. 10, no. 12, pp. 2616-2626.
View/Download from: Publisher's site
Kamaleswaran, R, Wehbe, RR, Edward Pugh, J, Nacke, L, McGregor, C & James, A 2015, 'Collaborative multi-touch clinical handover system for the neonatal intensive care unit', Electronic Journal of Health Informatics, vol. 9, no. 1.
View description>>
Background: A critically ill infant admitted to a neonatal intensive care unit requires complex, critical, and coordinated care performed by multidisciplinary healthcare teams. Since the infant's care is not provided by a single, individual physician during the infant's hospital stay, clinical handover is essential to enable the transfer of health information between physicians involved in the infant's care. Objective: Handover at present is largely conducted in an informal and ad hoc way. A study of clinical handover is required to inform the development of automated intelligent systems that facilitate communication and collaboration between critical care health providers. Methods: A qualitative study in a quaternary neonatal intensive care unit at The Hospital for Sick Children was undertaken to understand clinical handover and derive usability requirements. This was then used to inform a high-level design of a multi-touch tabletop application for handover; the design was then evaluated with senior neonatologists and neonatal fellows using rapid prototyping methods. Results: The results of the qualitative study showed that an effective handover application should at minimum include: tight integration with workflow and the physical environment, intuitiveness and simplicity, and a minimalistic design following the 'less is more' philosophy. Conclusion: There is a need to optimize handover such that the information transferred is standardized, and the loss of information and/or misinformation is minimized. We argue that the natural user interface design employed in the proposed design will result in improved care and less information loss during clinical handover.
Karimi, F & Khalilpour, R 2015, 'Evolution of carbon capture and storage research: Trends of international collaborations and knowledge maps', International Journal of Greenhouse Gas Control, vol. 37, pp. 362-376.
View/Download from: Publisher's site
Karimi, F, Poo, DCC & Tan, YM 2015, 'Clinical information systems end user satisfaction: The expectations and needs congruencies effects', Journal of Biomedical Informatics, vol. 53, pp. 342-354.
View/Download from: Publisher's site
Khazaei, H, McGregor, C, Eklund, JM & El-Khatib, K 2015, 'Real-Time and Retrospective Health-Analytics-as-a-Service: A Novel Framework', JMIR Medical Informatics, vol. 3, no. 4, pp. e36-e36.
View/Download from: Publisher's site
View description>>
Background: Analytics-as-a-service (AaaS) is one of the latest provisions emerging from the cloud services family. Utilizing this paradigm of computing in health informatics will benefit patients, care providers, and governments significantly. This work is a novel approach to realize health analytics as services in critical care units in particular. Objective: To design, implement, evaluate, and deploy an extendable big-data compatible framework for health-analytics-as-a-service that offers both real-time and retrospective analysis. Methods: We present a novel framework that can realize health data analytics-as-a-service. The framework is flexible and configurable for different scenarios by utilizing the latest technologies and best practices for data acquisition, transformation, storage, analytics, knowledge extraction, and visualization. We have instantiated the proposed method, through the Artemis project, that is, a customization of the framework for live monitoring and retrospective research on premature babies and ill term infants in neonatal intensive care units (NICUs). Results: We demonstrated the proposed framework in this paper for monitoring NICUs and refer to it as the Artemis-In-Cloud (Artemis-IC) project. A pilot of Artemis has been deployed in the SickKids hospital NICU. By infusing the output of this pilot set up to an analytical model, we predict important performance measures for the final deployment of Artemis-IC. This process can be carried out for other hospitals following the same steps with minimal effort. SickKids' NICU has 36 beds and can classify the patients generally into 5 different types including surgical and premature babies. The arrival rate is estimated as 4.5 patients per day, and the average length of stay was calculated as 16 days. Mean number of medical monitoring algorithms per patient is 9, which renders 311 live algorithms for the whole NICU running on the framework. The memory and computation power required for Artemis-I...
Khazaei, H, Mench-Bressan, N, McGregor, C & Pugh, JE 2015, 'Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective', IEEE Journal of Translational Engineering in Health and Medicine, vol. 3, pp. 1-9.
View/Download from: Publisher's site
View description>>
The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier condition onset detection. The Artemis project aims to achieve the above goals in the area of neonatal ICUs (NICU). In this paper, we proposed an analytical model for the Artemis cloud project which will be deployed at McMaster Children's Hospital in Hamilton. We collect not only physiological data but also the infusion pumps data that are attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis would be more accurate and systematic by applying the proposed analytical model in this paper. Numerical results are obtained using real inputs acquired from McMaster Children's Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto.
Kong, Y, Zhang, M & Ye, D 2015, 'A negotiation‐based method for task allocation with time constraints in open grid environments', Concurrency and Computation: Practice and Experience, vol. 27, no. 3, pp. 735-761.
View/Download from: Publisher's site
View description>>
Summary: This paper addresses the task allocation problem in open, dynamic grid and service‐oriented environments. In such environments, both grid/service providers and consumers can be modelled as intelligent agents. These agents can leave and enter the environment freely at any time. Task allocation under time constraints becomes a challenging issue in such environments because it is difficult to apply a central controller during the allocation process due to the openness and dynamism of the environments. This paper proposes a negotiation‐based method for task allocation under time constraints in an open, dynamic grid environment, where both consumer and provider agents can freely enter or leave the environment. In this method, there is no central controller available, and agents negotiate with each other for task allocation based only on local views. The experimental results show that the proposed method can outperform the current methods in terms of the success rate of task allocation and the total profit obtained from the allocated tasks by agents under different time constraints. Copyright © 2014 John Wiley & Sons, Ltd.
Kong, Y, Zhang, M, Ye, D & Luo, X 2015, 'RETRACTED CHAPTER: A Negotiation Method for Task Allocation with Time Constraints in Open Grid Environments', Studies in Computational Intelligence, vol. 596, pp. 19-36.
View/Download from: Publisher's site
Kulkarni, R, Qiao, Y & Sun, X 2015, 'Any monotone property of 3-uniform hypergraphs is weakly evasive', Theoretical Computer Science, vol. 588, pp. 16-23.
View/Download from: Publisher's site
View description>>
© 2014 Elsevier B.V. For a Boolean function f, let D(f) denote its deterministic decision tree complexity, i.e., the minimum number of (adaptive) queries required in the worst case in order to determine f. In a classic paper, Rivest and Vuillemin [11] show that any non-constant monotone property P : {0,1}^(n choose 2) → {0,1} of n-vertex graphs has D(P) = Ω(n^2). We extend their result to 3-uniform hypergraphs. In particular, we show that any non-constant monotone property P : {0,1}^(n choose 3) → {0,1} of n-vertex 3-uniform hypergraphs has D(P) = Ω(n^3). Our proof combines the combinatorial approach of Rivest and Vuillemin with the topological approach of Kahn, Saks, and Sturtevant [6]. Interestingly, our proof makes use of Vinogradov's Theorem (weak Goldbach Conjecture), inspired by its recent use by Babai et al. [1] in the context of the topological approach. Our work leaves the generalization to k-uniform hypergraphs as an intriguing open question.
Kurian, JC 2015, 'Facebook use by the open access repository users', Online Information Review, vol. 39, no. 7, pp. 903-922.
View/Download from: Publisher's site
View description>>
Purpose – The purpose of this paper is to explore the type and implications of user-generated content posted by users of an open access institutional repository (DSpace) on Facebook. Design/methodology/approach – The identified user-generated content was organised into three categories: personal; professional; and social information. It encompassed all content from the members of the “DSpace” Facebook group, posted during the seven-year period (2007-2014). The posts were read and analysed to identify and categorise user-generated content posted by users to determine how Facebook is used by open access repository users. Findings – The results of analysis demonstrate the importance of social information posted by users over personal and professional information. Major types of user-generated content posted by users in the social information category were request, greetings, status-update, and announcement. Further, there has been a threefold increase in the number of user postings in the last two years (2013-2014), when posts were analysed over a seven-year period. Research limitations/implications – This study contributes to the theory on the implications eventuating from user-generated content posted by users of an open access institutional repository. An analysis of user-generated content identified in this study implies that users of DSpace open access repository are primarily using Facebook for ...
Lee, J-S, Filatova, T, Ligmann-Zielinska, A, Hassani-Mahmooei, B, Stonedahl, F, Lorscheid, I, Voinov, A, Polhill, G, Sun, Z & Parker, DC 2015, 'The Complexities of Agent-Based Modeling Output Analysis', Journal of Artificial Societies and Social Simulation, vol. 18, no. 4.
View/Download from: Publisher's site
Li, J, Lin, X, Rui, X, Rui, Y & Tao, D 2015, 'A Distributed Approach Toward Discriminative Distance Metric Learning', IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 9, pp. 2111-2122.
View/Download from: Publisher's site
View description>>
Distance metric learning (DML) is successful in discovering intrinsic relations in data. However, most algorithms are computationally demanding when the problem size becomes large. In this paper, we propose a discriminative metric learning algorithm, develop a distributed scheme learning metrics on moderate-sized subsets of data, and aggregate the results into a global solution. The technique leverages the power of parallel computation. The algorithm of the aggregated DML (ADML) scales well with the data size and can be controlled by the partition. We theoretically analyze and provide bounds for the error induced by the distributed treatment. We have conducted experimental evaluation of the ADML, both on specially designed tests and on practical image annotation tasks. Those tests have shown that the ADML achieves the state-of-the-art performance at only a fraction of the cost incurred by most existing methods.
Li, S-Y, Chen, S-A, Lin, C-T, Ko, L-W, Yang, C-H & Chen, H-H 2015, 'Generalized Synchronization of Nonlinear Chaotic Systems through Natural Bioinspired Controlling Strategy', Abstract and Applied Analysis, vol. 2015, pp. 1-14.
View/Download from: Publisher's site
View description>>
A novel bioinspired control strategy design is proposed for generalized synchronization of nonlinear chaotic systems, combining bioinspired stability theory, fuzzy modeling, and a novel, simple-form Lyapunov control function design to derive highly efficient, heuristic, bioinspired controllers. Three main contributions are made: (1) bioinspired stability theory is applied to further analyze the stability of fuzzy error systems; the high performance of such controllers was shown in a previous study by Li and Ge 2009; (2) a new Lyapunov control function based on bioinspired stability theory is designed to achieve synchronization without using the traditional LMI method; it is a simple linear homogeneous function of states, and the process of designing a controller to synchronize two fuzzy chaotic systems becomes much simpler; and (3) three different situations of synchronization are proposed: classical master and slave Lorenz systems, a slave Chen's system, and Rossler's system as the functional system are illustrated to further show the effectiveness and feasibility of our novel strategy. The simulation results show that our novel control strategy can be applied to different and complicated control situations with high effectiveness.
Li, W, Dai, Y, Ma, L, Hao, H, Lu, H, Albinson, R & Li, Z 2015, 'Oil-saving pathways until 2030 for road freight transportation in China based on a cost-optimization model', Energy, vol. 86, pp. 369-384.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. This paper proposes a COSM (cost-optimization superstructure model) and derives the optimized oil-saving pathways for road freight transportation in China until 2030. The optimization target of the COSM was to minimize the accumulated energy and vehicle costs from 2010 to 2030 by choosing the most cost-effective fuel option for newly registered trucks each year. Based on the COSM, three scenarios were developed to evaluate the oil-saving pathway in terms of imported crude oil price, available alternative fuels and GHG emission reduction. The scenario analysis results indicate that: (1) for scenario A, the accumulated oil-saving potential was approximately 13%, while the oil-saving potential of improving the fuel consumption rate and load running rate was 17% and 16%, respectively; (2) for scenario B, the accumulated oil-saving potential increased to 82% under the reference oil price and 23% under the low oil price; (3) for scenario C, to reduce each ton of GHG emissions, the incremental cost will increase from 34 USD to 450 USD when the GHG emission target decreases from 15.4 billion tons to the turning point of 13.5 billion tons.
Li, X, Xu, G, Chen, E & Zong, Y 2015, 'Learning recency based comparative choice towards point-of-interest recommendation', Expert Systems with Applications, vol. 42, no. 9, pp. 4274-4283.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. All rights reserved. With the prevalence of GPS-enabled smart phones, Location Based Social Network (LBSN) has emerged and become a hot research topic during the past few years. As one of the most important components in LBSN, Points-of-Interest (POIs) have been extensively studied by both academia and industry, yielding POI recommendations to enhance user experience in exploring the city. In conventional methods, rating vectors for both users and POIs are utilized for similarity calculation, which might yield inaccuracy due to the differences of user biases. In our opinion, the rating values themselves do not give exact preferences of users; however, the numeric order of ratings given by a user within a certain period provides a hint of the preference order of POIs by such a user. Firstly, we propose an approach to model user preferences by employing utility theory. Secondly, we devise a collection-wise learning method over partial orders through an effective stochastic gradient descent algorithm. We test our model on two real world datasets, i.e., Yelp and TripAdvisor, by comparing with some state-of-the-art approaches including PMF and several user preference modeling methods. In terms of MAP and Recall, we achieve an average 15% improvement over the baseline methods. The results show the significance of comparative choice in a certain time window and show its superiority to the existing methods.
Liang, J, Vinh Nguyen, Q, Simoff, S & Lin Huang, M 2015, 'Divide and Conquer treemaps: Visualizing large trees with various shapes', Journal of Visual Languages & Computing, vol. 31, no. 2015, pp. 104-127.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. Most existing treemaps achieve the space utilization of a single geometrical area, mostly rectangles. Limiting visualization to rectangles could block the human capability in graph recognition, including orientation, shape and differentiation. To relax the rectangular constraint, we propose a flexible enclosure approach with three algorithms. It partitions large hierarchical structures within a confined display area with different shapes for real-time applications. Our approach is based on the combination of the Divide-and-Conquer method and the treemap paradigm. The partitioning algorithms generate three types of layouts with polygonal, angular and rectangular tiling, which are flexible enough to be used separately or combined. We present technical details including the visualization results in the experiments and in the case studies with real data sets. We evaluated the visualization based on graph drawing aesthetics and optimization criteria. Our usability study shows that (1) treemaps with layout variability support utilization of human capability in graph perception and (2) treemaps adopted in different shaped containers could have a positive impact on user satisfaction and awareness during visual data exploration.
Liao, H, Xu, Z, Zeng, X-J & Merigo, JM 2015, 'Framework of Group Decision Making With Intuitionistic Fuzzy Preference Information', IEEE Transactions on Fuzzy Systems, vol. 23, no. 4, pp. 1211-1227.
View/Download from: Publisher's site
Liao, H, Xu, Z, Zeng, X-J & Merigó, JM 2015, 'Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets', Knowledge-Based Systems, vol. 76, pp. 127-138.
View/Download from: Publisher's site
Liao, S-H, Hsieh, J-G, Chang, J-Y & Lin, C-T 2015, 'Training neural networks via simplified hybrid algorithm mixing Nelder–Mead and particle swarm optimization methods', Soft Computing, vol. 19, no. 3, pp. 679-689.
View/Download from: Publisher's site
View description>>
© 2014, Springer-Verlag Berlin Heidelberg. In this paper, a new and simplified hybrid algorithm mixing the simplex method of Nelder and Mead (NM) and the particle swarm optimization algorithm (PSO), abbreviated as SNM-PSO, is proposed for the training of the parameters of an Artificial Neural Network (ANN). Our method differs from other hybrid PSO methods in that n+1 particles, where n is the dimension of the search space, are randomly selected (without sorting) at each iteration of the proposed algorithm for use as the initial vertices of the NM algorithm, and each such particle is replaced by the corresponding final vertex after executing the NM algorithm. All the particles are then updated using the standard PSO algorithm. Our proposed method is simpler than other similar hybrid PSO methods and places more emphasis on the exploration of the search space. Some simulation problems are provided to compare the performance of the proposed method with PSO and other similar hybrid PSO methods in training an ANN. These simulations show that the proposed method outperforms the other compared methods.
Lin, C-T, Chiu, T-C & Gramann, K 2015, 'EEG correlates of spatial orientation in the human retrosplenial complex', NeuroImage, vol. 120, pp. 123-132.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Inc. Studies on spatial navigation reliably demonstrate that the retrosplenial complex (RSC) plays a pivotal role in allocentric spatial information processing by transforming egocentric and allocentric spatial information into the respective other spatial reference frame (SRF). While more and more imaging studies investigate the role of the RSC in spatial tasks, high temporal resolution measures such as electroencephalography (EEG) are missing. To investigate the function of the RSC in spatial navigation with high temporal resolution we used EEG to analyze spectral perturbations during navigation based on allocentric and egocentric SRFs. Participants performed a path integration task in a clearly structured virtual environment providing allothetic information. Continuous EEG recordings were decomposed by independent component analysis (ICA) with subsequent source reconstruction of independent time source series using equivalent dipole modeling. Time-frequency transformation was used to investigate reference frame-specific orientation processes during navigation as compared to a control condition with identical visual input but no orientation task. Our results demonstrate that navigation based on an egocentric reference frame recruited a network including the parietal, motor, and occipital cortices with dominant perturbations in the alpha band and theta modulation in the frontal cortex. Allocentric navigation was accompanied by performance-related desynchronization of the 8-13 Hz frequency band and synchronization in the 12-14 Hz band in the RSC. The results support the claim that the retrosplenial complex is central to translating egocentric spatial information into allocentric reference frames. Modulations in different frequencies with different time courses in the RSC further provide first evidence of two distinct neural processes reflecting translation of spatial information based on distinct reference frames and the computation of heading changes.
Lin, C-T, Prasad, M & Saxena, A 2015, 'An Improved Polynomial Neural Network Classifier Using Real-Coded Genetic Algorithm', IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 45, no. 11, pp. 1389-1401.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. In this paper, a novel approach is proposed to improve the classification performance of a polynomial neural network (PNN). In this approach, the partial descriptions (PDs) are generated at the first layer based on all possible combinations of two features of the training input patterns of a dataset. The set of PDs from the first layer, the set of all input features, and a bias constitute the chromosome of the real-coded genetic algorithm (RCGA). A system of equations is solved to determine the values of the real coefficients of each chromosome of the RCGA for the training dataset with the mean classification accuracy (CA) as the fitness value of each chromosome. To adjust these values for unknown testing patterns, the RCGA is iterated in the usual manner using simple selection, crossover, mutation, and elitist selection. The method is tested extensively with the University of California, Irvine benchmark datasets by utilizing tenfold cross validation of each dataset, and the performance is compared with various well-known state-of-the-art techniques. The results obtained from the proposed method in terms of CA are superior and outperform other known methods on various datasets.
Lin, H, Batty, M, Jørgensen, SE, Fu, B, Konecny, M, Voinov, A, Torrens, P, Lu, G, Zhu, A, Wilson, JP, Gong, J, Kolditz, O, Bandrova, T & Chen, M 2015, 'Virtual Environments Begin to Embrace Process‐based Geographic Analysis', Transactions in GIS, vol. 19, no. 4, pp. 493-498.
View/Download from: Publisher's site
Liu, H, Laba, T, Massi, L, Jan, S, Usherwood, T, Patel, A, Hayman, NE, Cass, A, Eades, A, Lawrence, C & Peiris, DP 2015, 'Facilitators and barriers to implementation of a pragmatic clinical trial in Aboriginal health services', Medical Journal of Australia, vol. 203, no. 1, pp. 24-27.
View/Download from: Publisher's site
View description>>
© 2015, Australasian Medical Publishing Co. Ltd. All rights reserved. Objective: To identify facilitators and barriers to clinical trial implementation in Aboriginal health services. Design: In-depth interview study with thematic analysis. Setting: Six Aboriginal community-controlled health services and one government-run service involved in the Kanyini Guidelines Adherence with the Polypill (KGAP) study, a pragmatic randomised controlled trial that aimed to improve adherence to indicated drug treatments for people at high risk of cardiovascular disease. Participants: 32 health care providers and 21 Aboriginal and Torres Strait Islander patients. Results: A fundamental enabler was that participants considered the research to be governed and endorsed by the local health service. That the research was perceived to address a health priority for communities was also highly motivating for both providers and patients. Enlisting the support of Aboriginal and Torres Strait Islander staff champions who were visible to the community as the main source of information about the trial was particularly important. The major implementation barrier for staff was balancing their service delivery roles with adherence to often highly demanding trial-related procedures. This was partially alleviated by the research team’s provision of onsite support and attempts to make trial processes more streamlined. Although more intensive support was highly desired, there were usually insufficient resources to provide this. Conclusion: Despite strong community and health service support, major investments in time and resources are needed to ensure successful implementation and minimal disruption to already overstretched, routine services. Trial budgets will necessarily be inflated as a result. Funding agencies need to consider these additional resource demands when supporting trials of a similar nature.
Liu, M, Dou, W, Yu, S & Zhang, Z 2015, 'A Decentralized Cloud Firewall Framework with Resources Provisioning Cost Optimization', IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 3, pp. 621-631.
View/Download from: Publisher's site
Llopis-Albert, C, Merigó, JM & Palacios-Marqués, D 2015, 'Structure Adaptation in Stochastic Inverse Methods for Integrating Information', Water Resources Management, vol. 29, no. 1, pp. 95-107.
View/Download from: Publisher's site
Lu, J, Behbood, V, Hao, P, Zuo, H, Xue, S & Zhang, G 2015, 'Transfer learning using computational intelligence: A survey', KNOWLEDGE-BASED SYSTEMS, vol. 80, pp. 14-23.
View/Download from: Publisher's site
View description>>
Transfer learning aims to provide a framework to utilize previously-acquired knowledge to solve new but similar problems much more quickly and effectively. In contrast to classical machine learning methods, transfer learning methods exploit the knowledge accumulated from data in auxiliary domains to facilitate predictive modeling consisting of different data patterns in the current domain. To improve the performance of existing transfer learning methods and handle the knowledge transfer process in real-world systems, computational intelligence has recently been applied in transfer learning. This paper systematically examines computational intelligence-based transfer learning techniques and clusters related technique developments into four main categories: (a) neural network-based transfer learning; (b) Bayes-based transfer learning; (c) fuzzy transfer learning; and (d) applications of computational intelligence-based transfer learning. By providing state-of-the-art knowledge, this survey will directly support researchers and practising professionals in understanding the developments in computational intelligence-based transfer learning research and applications.
Lu, J, Wu, D, Mao, M, Wang, W & Zhang, G 2015, 'Recommender system application developments: A survey', DECISION SUPPORT SYSTEMS, vol. 74, pp. 12-32.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. A recommender system aims to provide users with personalized online product or service recommendations to handle the increasing online information overload problem and improve customer relationship management. Various recommender system techniques have been proposed since the mid-1990s, and many sorts of recommender system software have been developed recently for a variety of applications. Researchers and managers recognize that recommender systems offer great opportunities and challenges for business, government, education, and other domains, with more recent successful developments of recommender systems for real-world applications becoming apparent. It is thus vital that a high quality, instructive review of current trends should be conducted, not only of the theoretical research results but more importantly of the practical developments in recommender systems. This paper therefore reviews up-to-date application developments of recommender systems, clusters their applications into eight main categories: e-government, e-business, e-commerce/e-shopping, e-library, e-learning, e-tourism, e-resource services and e-group activities, and summarizes the related recommendation techniques used in each category. It systematically examines the reported recommender systems through four dimensions: recommendation methods (such as CF), recommender systems software (such as BizSeeker), real-world application domains (such as e-business) and application platforms (such as mobile-based platforms). Some significant new topics are identified and listed as new directions. By providing state-of-the-art knowledge, this survey will directly support researchers and practical professionals in their understanding of developments in recommender system applications.
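To illustrate the collaborative filtering (CF) category the survey mentions, here is a minimal user-based CF sketch; the rating data and names are hypothetical, not drawn from any of the surveyed systems:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two users, computed over the items
    that both have rated (dicts mapping item -> rating)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    num = sum(u[i] * v[i] for i in shared)
    den = sqrt(sum(u[i] ** 2 for i in shared)) * sqrt(sum(v[i] ** 2 for i in shared))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Predict `user`'s rating of `item` as a similarity-weighted
    average of other users' ratings of that item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

# Toy data: predict alice's rating of item "z" from bob and carol.
ratings = {
    "alice": {"x": 5.0, "y": 3.0},
    "bob":   {"x": 4.0, "y": 3.0, "z": 4.0},
    "carol": {"x": 1.0, "y": 5.0, "z": 2.0},
}
```

The prediction leans toward bob's rating because his tastes are more similar to alice's; real CF systems add normalization by user mean and neighbourhood selection on top of this core idea.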
Lu, J, Zheng, Z, Zhang, G, He, Q & Shi, Z 2015, 'A new solution algorithm for solving rule-sets based bilevel decision problems', CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, vol. 27, no. 4, pp. 830-854.
View/Download from: Publisher's site
View description>>
Copyright © 2012 John Wiley & Sons, Ltd. Bilevel decision making addresses compromises between two interacting decision entities within a given hierarchical complex system under distributed environments. Bilevel programming typically solves bilevel decision problems. However, it requires the formulation of objectives and constraints as mathematical functions, which is difficult, and sometimes impossible, in real-world situations because of various uncertainties. Our study develops a rule-set based bilevel decision approach, which models a bilevel decision problem by creating, transforming and reducing related rule sets. This study develops a new rule-sets based solution algorithm to obtain an optimal solution for the bilevel decision problem described by rule sets. A case study and a set of experiments illustrate both the functions and the effectiveness of the developed algorithm in solving a bilevel decision problem.
Luo, F, Jiang, C, Du, J, Yuan, J, Ren, Y, Yu, S & Guizani, M 2015, 'A Distributed Gateway Selection Algorithm for UAV Networks', IEEE Transactions on Emerging Topics in Computing, vol. 3, no. 1, pp. 22-33.
View/Download from: Publisher's site
Marcias, G, Takayama, K, Pietroni, N, Panozzo, D, Sorkine-Hornung, O, Puppo, E & Cignoni, P 2015, 'Data-driven interactive quadrangulation.', ACM Trans. Graph., vol. 34, no. 4, pp. 65:1-65:1.
View/Download from: Publisher's site
View description>>
We propose an interactive quadrangulation method based on a large collection of patterns that are learned from models manually designed by artists. The patterns are distilled into compact quadrangulation rules and stored in a database. At run-time, the user draws strokes to define patches and desired edge flows, and the system queries the database to extract fitting patterns to tessellate the sketches' interiors. The quadrangulation patterns are general and can be applied to tessellate large regions while controlling the positions of the singularities and the edge flow. We demonstrate the effectiveness of our algorithm through a series of live retopology sessions and an informal user study with three professional artists.
Mathieson, L 2015, 'Graph Editing Problems with Extended Regularity Constraints', Theoretical Computer Science, vol. 677, pp. 56-68.
View/Download from: Publisher's site
View description>>
Graph editing problems offer an interesting perspective on sub- and supergraph identification problems for a large variety of target properties. They have also attracted significant attention in recent years, particularly in the area of parameterized complexity, as the problems have rich parameter ecologies. In this paper we examine generalisations of the notion of editing a graph to obtain a regular subgraph. In particular we extend the notion of regularity to include two variants of edge-regularity along with the unifying constraint of strong regularity. We present a number of results, with the central observation that these problems retain the general complexity profile of their regularity-based inspiration: when the number of edits $k$ and the maximum degree $r$ are taken together as a combined parameter, the problems are tractable (i.e. in FPT), but are otherwise intractable. We also examine variants of the basic editing to obtain a regular subgraph problem from the perspective of parameterizing by the treewidth of the input graph. In this case the treewidth of the input graph essentially becomes a limiting parameter on the natural $k+r$ parameterization.
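As an illustrative aside, the basic regularity property that these editing problems target is easy to state in code; the adjacency maps below are hypothetical examples, not instances from the paper:

```python
def is_r_regular(adj, r):
    """True if every vertex in the adjacency map (vertex -> set of
    neighbours) has exactly r neighbours, i.e. the graph is r-regular."""
    return all(len(nbrs) == r for nbrs in adj.values())

# A triangle is 2-regular; a 3-vertex path is not (its endpoints have degree 1).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

The editing problems then ask whether at most $k$ edits (vertex or edge deletions, edge additions) suffice to make such a predicate hold, with the edge-regular and strongly regular variants replacing the degree condition.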
Merigó, JM, Engemann, KJ & Gil-Lafuente, AM 2015, 'Guest Editorial: Intelligent Systems in Business and Economics', Cybernetics and Systems, vol. 46, no. 3-4, pp. 145-149.
View/Download from: Publisher's site
Merigó, JM, Gil-Lafuente, AM & Yager, RR 2015, 'An overview of fuzzy research with bibliometric indicators', Applied Soft Computing, vol. 27, pp. 420-433.
View/Download from: Publisher's site
Merigó, JM, Guillén, M & Sarabia, JM 2015, 'The Ordered Weighted Average in the Variance and the Covariance', International Journal of Intelligent Systems, vol. 30, no. 9, pp. 985-1005.
View/Download from: Publisher's site
Merigó, JM, Mas-Tur, A, Roig-Tierno, N & Ribeiro-Soriano, D 2015, 'A bibliometric overview of the Journal of Business Research between 1973 and 2014', Journal of Business Research, vol. 68, no. 12, pp. 2645-2653.
View/Download from: Publisher's site
View description>>
The Journal of Business Research is a leading international journal in business research dating back to 1973. This study analyzes all the publications in the journal since its creation by using a bibliometric approach. The objective is to provide a complete overview of the main factors that affect the journal. This analysis includes key issues such as the publication and citation structure of the journal, the most cited articles, and the leading authors, institutions, and countries in the journal. Unsurprisingly, the USA is the leading region in the journal, although considerable dispersion exists, especially in recent years as European and Asian universities take a more significant position.
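A small aside: one indicator commonly used in bibliometric overviews of this kind is the h-index, which can be computed as follows (an illustrative sketch, not the study's own code):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations
    each, given a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h
```

For example, an author with papers cited 10, 8, 5, 4 and 3 times has an h-index of 4: four papers have at least four citations each, but not five papers with at least five.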
Merigó, JM, Palacios-Marqués, D & del Mar Benavides-Espinosa, M 2015, 'Aggregation methods to calculate the average price', Journal of Business Research, vol. 68, no. 7, pp. 1574-1580.
View/Download from: Publisher's site
Merigó, JM, Palacios-Marqués, D & Ribeiro-Navarrete, B 2015, 'Aggregation systems for sales forecasting', Journal of Business Research, vol. 68, no. 11, pp. 2299-2304.
View/Download from: Publisher's site
Muniz, KM, Woodside, AG & Sood, S 2015, 'Consumer storytelling of brand archetypal enactments', International Journal of Tourism Anthropology, vol. 4, no. 1, pp. 67-67.
View/Download from: Publisher's site
Musial, K, Brodka, P & Magnani, M 2015, 'Social Network Analysis in Applications', AI Communications, vol. 29, no. 1, pp. 55-56.
View/Download from: Publisher's site
Naderpour, M, Lu, J & Zhang, G 2015, 'A human-system interface risk assessment method based on mental models', SAFETY SCIENCE, vol. 79, pp. 286-297.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier Ltd. In many safety-critical systems, it is necessary to maintain operators' situation awareness at a high level to ensure the safety of operations. Today, in many such systems, operators have to rely on the principles and design of human-system interfaces (HSIs) to observe and comprehend the overwhelming amount of process data. Thus, poor HSIs may cause serious consequences, such as occupational accidents and diseases including stress, and they have therefore been considered an emerging risk. Despite the importance of this, very few methods have as yet been developed to assess the risk of HSIs. This paper presents a new risk assessment method that relies upon operators' mental models, human reliability analysis (HRA) event tree, and the situation awareness global assessment technique (SAGAT) to produce a risk profile for the intended HSI. In the proposed method, the operator's understanding (i.e. mental models) about possible abnormal situations in the intended plant is modeled on the basis of the capabilities of Bayesian networks. The situation models are combined with the HRA event tree, which paves the way for the incorporation of operator responses in the assessment method. Probe questions in line with the SAGAT through simulated scenarios in a virtual environment are then administrated to gather operator responses. Finally, the proposed method determines a risk level for the HSI by assigning the operator responses to the developed situational networks. The performance of the proposed method is investigated through a case study at a chemical plant.
Naderpour, M, Lu, J & Zhang, G 2015, 'An abnormal situation modeling method to assist operators in safety-critical systems', RELIABILITY ENGINEERING & SYSTEM SAFETY, vol. 133, pp. 33-47.
View/Download from: Publisher's site
View description>>
© 2014 Elsevier Ltd. One of the main causes of accidents in safety-critical systems is human error. In order to reduce human errors in the process of handling abnormal situations that are highly complex and mentally taxing activities, operators need to be supported, from a cognitive perspective, in order to reduce their workload, stress, and the consequent error rate. Of the various cognitive activities, a correct understanding of the situation, i.e. situation awareness (SA), is a crucial factor in improving performance and reducing errors. Despite the importance of SA in decision-making in time- and safety-critical situations, the difficulty of SA modeling and assessment means that very few methods have as yet been developed. This study confronts this challenge, and develops an innovative abnormal situation modeling (ASM) method that exploits the capabilities of risk indicators, Bayesian networks and fuzzy logic systems. The risk indicators are used to identify abnormal situations, Bayesian networks are utilized to model them and a fuzzy logic system is developed to assess them. The ASM method can be used in the development of situation assessment decision support systems that underlie the achievement of SA. The performance of the ASM method is tested through a real case study at a chemical plant.
Noguera, M, Alvarez, C, Merigó, JM & Urbano, D 2015, 'Determinants of female entrepreneurship in Spain: an institutional approach', Computational and Mathematical Organization Theory, vol. 21, no. 4, pp. 341-355.
View/Download from: Publisher's site
Oberst, S & Lai, JCS 2015, 'A statistical approach to estimate the Lyapunov spectrum in disc brake squeal', Journal of Sound and Vibration, vol. 334, pp. 120-135.
View/Download from: Publisher's site
Oberst, S & Lai, JCS 2015, 'Nonlinear transient and chaotic interactions in disc brake squeal', Journal of Sound and Vibration, vol. 342, pp. 272-289.
View/Download from: Publisher's site
Oberst, S & Lai, JCS 2015, 'Pad-mode-induced instantaneous mode instability for simple models of brake systems', Mechanical Systems and Signal Processing, vol. 62-63, pp. 490-505.
View/Download from: Publisher's site
Oberst, S & Lai, JCS 2015, 'Squeal noise in simple numerical brake models', JOURNAL OF SOUND AND VIBRATION, vol. 352, pp. 129-141.
View/Download from: Publisher's site
Oberst, S, Nava-Baro, E, Lai, JCS & Evans, TA 2015, 'An Innovative Signal Processing Method to Extract Ants’ Walking Signals', Acoustics Australia, vol. 43, no. 1, pp. 87-96.
View/Download from: Publisher's site
Palacios-Marqués, D, Merigó, JM & Soto-Acosta, P 2015, 'Online social networks as an enabler of innovation in organizations', Management Decision, vol. 53, no. 9, pp. 1906-1920.
View/Download from: Publisher's site
View description>>
Purpose – The purpose of this paper is to study the effect of online social networks on firm performance and how this technology can help to create value. The authors approach the problem from the Resource-Based View in order to analyze whether online social networks can be considered a source of competitive advantage and how they can enhance or complement essential marketing competences. Design/methodology/approach – The data were obtained from a survey of Spanish hospitality firms. This sector was chosen because Web 2.0 is becoming an important marketing channel in the tourism industry, and especially in hospitality firms. In addition, Spain is one of the largest tourist destinations in the world and has a strong presence of social media and Web 2.0 use among the population and hospitality enterprises. Between February and June 2012, the questionnaire was sent to all top managers of four-star and five-star Spanish hospitality firms. The authors received 197 questionnaires, but four of them were eliminated due to errors or because they were received too late. Findings – Results show that there is a statistically significant positive relationship between online social networks and innovation capacity and that the relationship between online social networks and firm performance is fully mediated by innovation capacity. In turn, the authors find a statistically significant positive relationship between innovation capacity and performance in the hotel industry.
Palacios-Marqués, D, Soto-Acosta, P & Merigó, JM 2015, 'Analyzing the effects of technological, organizational and competition factors on Web knowledge exchange in SMEs', Telematics and Informatics, vol. 32, no. 1, pp. 23-32.
View/Download from: Publisher's site
Paler, A, Polian, I, Nemoto, K & Devitt, SJ 2015, 'Fault-Tolerant High Level Quantum Circuits: Form, Compilation and Description', Quantum Science and Technology, vol. 2, no. 2, p. 025003.
View/Download from: Publisher's site
View description>>
Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher level quantum circuit for state of the art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement form (ICM); it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations and a series of time-ordered $X$- or $Z$-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
Panetta, J, Zhou, Q, Malomo, L, Pietroni, N, Cignoni, P & Zorin, D 2015, 'Elastic textures for additive fabrication.', ACM Trans. Graph., vol. 34, no. 4, pp. 135:1-135:1.
View/Download from: Publisher's site
View description>>
We introduce elastic textures: a set of parametric, tileable, printable, cubic patterns achieving a broad range of isotropic elastic material properties: the softest pattern is over a thousand times softer than the stiffest, and the Poisson's ratios range from below zero to nearly 0.5. Using a combinatorial search over topologies followed by shape optimization, we explore a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. The family can be extended easily to create anisotropic patterns with target orthotropic properties. We demonstrate that our elastic textures are able to achieve a user-supplied varying material property distribution. We also present a material optimization algorithm to choose material properties at each point within an object to best fit a target deformation under a prescribed scenario. We show that, by fabricating these spatially varying materials with elastic textures, the desired behavior is achieved. Copyright is held by the owner/author(s).
Percival, J, McGregor, C, Percival, N & James, A 2015, 'Enabling the integration of clinical event and physiological data for real-time and retrospective analysis', Information Systems and e-Business Management, vol. 13, no. 4, pp. 693-711.
View/Download from: Publisher's site
Peris-Ortiz, M & Merigó Lindahl, JM 2015, 'Preface', Innovation, Technology and Knowledge Management, pp. ix-xv.
Pietroni, N, Tonelli, D, Puppo, E, Froli, M, Scopigno, R & Cignoni, P 2015, 'Statics Aware Grid Shells.', Comput. Graph. Forum, vol. 34, no. 2, pp. 627-641.
View/Download from: Publisher's site
View description>>
We introduce a framework for the generation of polygonal grid-shell architectural structures, whose topology is designed in order to excel in static performances. We start from the analysis of stress on the input surface and we use the resulting tensor field to induce an anisotropic non-Euclidean metric over it. This metric is derived by studying the relation between the stress tensor over a continuous shell and the optimal shape of polygons in a corresponding grid-shell. Polygonal meshes with uniform density and isotropic cells under this metric exhibit variable density and anisotropy in Euclidean space, thus achieving a better distribution of the strain energy over their elements. Meshes are further optimized taking into account symmetry and regularity of cells to improve aesthetics. We experiment with quad meshes and hex-dominant meshes, demonstrating that our grid-shells achieve better static performances than state-of-the-art grid-shells.
Prasad, M, Li, DL, Lin, CT, Prakash, S, Singh, J & Joshi, S 2015, 'Designing Mamdani-Type Fuzzy Reasoning for Visualizing Prediction Problems Based on Collaborative Fuzzy Clustering', IAENG International Journal of Computer Science, vol. 42, no. 4, pp. 404-411.
View description>>
In this paper a collaborative fuzzy c-means (CFCM) approach is used to generate fuzzy rules for fuzzy inference systems to evaluate a time series model. CFCM helps a system integrate two or more datasets that share similar features but were collected in different environments at different time periods, bringing them together in order to visualize common patterns among the datasets. To perform any mode of integration between datasets, the common features between them must be defined through some kind of collaborative process, while preserving privacy and security at a high level. This collaboration process yields a common structure between datasets, which helps to define an appropriate number of rules for structural learning and also improves the accuracy of the system modeling.
Prasad, M, Lin, YY, Lin, CT, Er, MJ & Prasad, OK 2015, 'A new data-driven neural fuzzy system with collaborative fuzzy clustering mechanism', Neurocomputing, vol. 167, pp. 558-568.
View/Download from: Publisher's site
View description>>
© 2015 Elsevier B.V. In this paper, a novel fuzzy rule transfer mechanism for self-constructing neural fuzzy inference networks is proposed. The features of the proposed method, termed data-driven neural fuzzy system with collaborative fuzzy clustering mechanism (DDNFS-CFCM), are: (1) fuzzy rules are generated facilely by fuzzy c-means (FCM) and then adapted by the preprocessed collaborative fuzzy clustering (PCFC) technique, and (2) structure and parameter learning are performed simultaneously without selecting the initial parameters. The DDNFS-CFCM can be applied to big data problems by virtue of the PCFC technique, which is capable of dealing with immense datasets while preserving their privacy and security. Initially, the entire dataset is organized into two individual datasets for the PCFC procedure, where each dataset is clustered separately. The knowledge of prototype variables (cluster centers) and the matrix of just one half of the dataset is deployed through the collaborative technique. The DDNFS-CFCM is able to achieve consistency in the presence of the collective knowledge of the PCFC and to boost the system modeling process through the parameter learning ability of the self-constructing neural fuzzy inference network (SONFIN). The proposed method outperforms other existing methods on time series prediction problems.
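As an illustrative aside, the fuzzy c-means iteration underlying both FCM and PCFC can be sketched for 1-D data as below; this is a simplified sketch with deterministic initialisation, not the authors' implementation:

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Fuzzy c-means on a list of 1-D points. Returns (centers, u),
    where u[k][j] is point k's degree of membership in cluster j."""
    lo, hi = min(data), max(data)
    # Deterministic init: spread the c centers evenly across the data range.
    centers = [lo + (j + 0.5) * (hi - lo) / c for j in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Update memberships from distances to the current centers.
        for k, x in enumerate(data):
            d = [abs(x - v) or 1e-12 for v in centers]
            for j in range(c):
                u[k][j] = 1.0 / sum((d[j] / d[i]) ** (2.0 / (m - 1.0))
                                    for i in range(c))
        # Update each center as the membership-weighted mean.
        for j in range(c):
            den = sum(u[k][j] ** m for k in range(len(data)))
            centers[j] = sum((u[k][j] ** m) * x
                             for k, x in enumerate(data)) / den
    return centers, u
```

In the collaborative variant, each site runs this locally and then exchanges only the prototypes and membership matrices, so raw data never leave their site.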
Qiao, M, Bian, W, Xu, RYD & Tao, D 2015, 'Diversified Hidden Markov Models for Sequential Labeling.', IEEE Trans. Knowl. Data Eng., vol. 27, no. 11, pp. 2947-2960.
View/Download from: Publisher's site
Ramezani, F, Lu, J, Taheri, J & Hussain, FK 2015, 'Evolutionary algorithm-based multi-objective task scheduling optimization model in cloud environments', WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, vol. 18, no. 6, pp. 1737-1757.
View/Download from: Publisher's site
View description>>
© 2015, Springer Science+Business Media New York. Optimizing task scheduling in a distributed heterogeneous computing environment, which is a nonlinear multi-objective NP-hard problem, plays a critical role in decreasing service response time and cost, and boosting Quality of Service (QoS). This paper considers four conflicting objectives, namely minimizing task transfer time, task execution cost, power consumption, and task queue length, to develop a comprehensive multi-objective optimization model for task scheduling. This model reduces costs from both the customer and provider perspectives by considering execution and power cost. We evaluate our model by applying two multi-objective evolutionary algorithms, namely Multi-Objective Particle Swarm Optimization (MOPSO) and the Multi-Objective Genetic Algorithm (MOGA). To implement the proposed model, we extend the Cloudsim toolkit by using MOPSO and MOGA as its task scheduling algorithms, which determine the optimal task arrangement among VMs. The simulation results show that the proposed multi-objective model finds optimal trade-off solutions amongst the four conflicting objectives, which significantly reduces the job response time and makespan. This model not only increases QoS but also decreases the cost to providers. From our experimentation results, we find that MOPSO is a faster and more accurate evolutionary algorithm than MOGA for solving such problems.
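As an illustrative aside, the trade-off solutions that algorithms such as MOPSO and MOGA search for are the non-dominated (Pareto-optimal) points, which can be extracted with a simple dominance filter; the code below assumes minimization of all objectives and uses hypothetical data, not the paper's scheduling model:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points: the trade-off set a
    multi-objective optimizer approximates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy 2-objective example, e.g. (transfer time, execution cost):
candidates = [(1, 5), (2, 2), (3, 1), (4, 4)]
```

Here (4, 4) is dominated by (2, 2) and is discarded, while the other three points are incomparable trade-offs and form the front.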
Rehman, Z-U, Hussain, OK & Hussain, FK 2015, 'User-side cloud service management: State-of-the-art and future directions', Journal of Network and Computer Applications, vol. 55, pp. 108-122.
View/Download from: Publisher's site
Schoene, D, Valenzuela, T, Toson, B, Delbaere, K, Severino, C, Garcia, J, Davies, TA, Russell, F, Smith, ST & Lord, SR 2015, 'Interactive Cognitive-Motor Step Training Improves Cognitive Risk Factors of Falling in Older Adults – A Randomized Controlled Trial', PLOS ONE, vol. 10, no. 12, pp. e0145161-e0145161.
View/Download from: Publisher's site
View description>>
Copyright © 2015 Schoene et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Purpose: Interactive cognitive-motor training (ICMT) requires individuals to perform both gross motor movements and complex information processing. This study investigated the effectiveness of ICMT on cognitive functions associated with falls in older adults. Methods: A single-blinded randomized controlled trial was conducted in community-dwelling older adults (N = 90, mean age 81.5±7) without major cognitive impairment. Participants in the intervention group (IG) played four stepping games that required them to divide attention, inhibit irrelevant stimuli, switch between tasks, rotate objects and make rapid decisions. The recommended minimum dose was three 20-minute sessions per week over a period of 16 weeks unsupervised at home. Participants in the control group (CG) received an evidence-based brochure on fall prevention. Measures of processing speed, attention/executive function (EF), visuo-spatial ability, concerns about falling and depression were assessed before and after the intervention. Results: Eighty-one participants (90%) attended re-assessment. There were no improvements with respect to the Stroop Stepping Test (primary outcome) in the intervention group. Compared to the CG, the IG improved significantly in measures of processing speed, visuo-spatial ability and concern about falling. Significant interactions were observed for measures of EF and divided attention, indicating group differences varied for different levels of the covariate with larger improvements in IG participants with poorer baseline performance. The interaction for depression showed no change for the IG but an increase in the CG for those with low depressive symptoms at baseline. Additionally, low and high-adhe...
Sharma, N, Shivakumara, P, Pal, U, Blumenstein, M & Tan, CL 2015, 'Piece-wise linearity based method for text frame classification in video', Pattern Recognition, vol. 48, no. 3, pp. 862-881.
View/Download from: Publisher's site
Soltanmohammadi, M, Saberi, M, Yoon, JH, Soltanmohammadi, K & Pazhoheshfar, P 2015, 'Risk Critical Point (RCP): A Quantifying Safety-Based Method Developed to Screen Construction Safety Risks', Industrial Engineering and Management Systems, vol. 14, no. 3, pp. 221-235.
View/Download from: Publisher's site
View description>>
©2015 KIIE. Risk assessment is an important phase of risk management: it is the stage in which risk is measured thoroughly to achieve effective management. Factors such as the probability and impact of a risk have been used in the literature on construction projects. Because safety issues are paramount in high-rise projects, this study has tried to develop a quantifying technique that takes into account three factors: probability, impact, and a Safety Performance Index (SPI), where the SPI is defined as the capability of an appropriate response to reduce or limit the effect of an event after its occurrence with regard to safety in a project. Because the risk-related literature deals with an uncertain subject, the method developed in this research is based on a fuzzy logic approach. This approach entails a questionnaire in which the subjectivity and vagueness of responses are dealt with by using triangular fuzzy numbers instead of linguistic terms. The method returns a Risk Critical Point (RCP) on a zoning chart that places risks into categories: critical, critical-probability, critical-impact, and non-critical. A high-rise project in the execution phase has been taken as a case study to confirm the applicability of the proposed method. The monitoring results showed that the RCP method has the inherent ability to be extended to subsequent applications in the risk response and control phases.
Song, R, Catchpoole, DR, Kennedy, PJ & Li, J 2015, 'Identification of lung cancer miRNA–miRNA co-regulation networks through a progressive data refining approach', Journal of Theoretical Biology, vol. 380, pp. 271-279.
View/Download from: Publisher's site
Tang, F, You, I, Tang, C & Yu, S 2015, 'A profiling based task scheduling approach for multicore network processors', Concurrency and Computation: Practice and Experience, vol. 27, no. 4, pp. 855-869.
View/Download from: Publisher's site
View description>>
Summary: Multicore network processors have been playing an increasingly important role in computational processes that emphasize scalability and parallelism, in distributed environments and especially in Internet-based delay-sensitive applications. It remains an important but unsolved issue, however, to efficiently schedule tasks on multicore, multithreaded network processors so as to improve system throughput as much as possible. Profiling can gather runtime environment information and guide the compiler to optimize programs by scheduling tasks based on the runtime context. This paper proposes a profiling-based task scheduling approach that targets improving the throughput of multicore network processor (Intel IXP) systems in a balanced-pipeline way. In this work, we investigate a profiling-based task scheduling framework, a task scheduling algorithm, and a set of performance models. Our task allocation scheme maps tasks onto the pipeline architecture and multiple threads of network processors in parallel, incorporating the profiling context and global thread refinement. We evaluate our task scheduling algorithm by implementing representative network applications on the Intel IXP network processor. Experimental results demonstrate that our algorithm is able to schedule tasks in a balanced pipeline fashion and achieve high throughput and data transmission rates. Copyright © 2012 John Wiley & Sons, Ltd.
Templeton, DJ, Wright, ST, McManus, H, Lawrence, C, Russell, DB, Law, MG, Petoumenos, K & the Australian HIV Observational Database 2015, 'Antiretroviral treatment use, co-morbidities and clinical outcomes among Aboriginal participants in the Australian HIV Observational Database (AHOD)', BMC Infectious Diseases, vol. 15.
View/Download from: Publisher's site
Tian, Y, Li, J, Yu, S & Huang, T 2015, 'Learning Complementary Saliency Priors for Foreground Object Segmentation in Complex Scenes', International Journal of Computer Vision, vol. 111, no. 2, pp. 153-170.
View/Download from: Publisher's site
Zhu, T, Xiong, P, Li, G & Zhou, W 2015, 'Correlated Differential Privacy: Hiding Information in Non-IID Data Set', IEEE Transactions on Information Forensics and Security, vol. 10, no. 2, pp. 229-242.
View/Download from: Publisher's site
View description>>
Privacy preservation in data mining and data release has attracted increasing research interest over a number of decades. Differential privacy is one influential privacy notion that offers a rigorous and provable privacy guarantee for data mining and data release. Existing studies on differential privacy assume that records in a data set are sampled independently. However, in real-world applications, records in a data set are rarely independent. The relationships among records are referred to as correlated information, and such a data set is defined as a correlated data set. A differential privacy technique performed on a correlated data set will disclose more information than expected, and this is a serious privacy violation. Although recent research has been concerned with this new privacy violation, it still calls for a solid solution for the correlated data set. Moreover, how to decrease the large amount of noise incurred by differential privacy on a correlated data set is yet to be explored. To fill the gap, this paper proposes an effective correlated differential privacy solution by defining a correlated sensitivity and designing a correlated data releasing mechanism. By taking into account the correlation levels between records, the proposed correlated sensitivity can significantly decrease the noise compared with the traditional global sensitivity. The correlated data releasing mechanism, the correlated iteration mechanism, is designed based on an iterative method to answer a large number of queries. Compared with the traditional method, the proposed correlated differential privacy solution enhances the privacy guarantee for a correlated data set with less accuracy cost. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean square error on a large group of queries. This also suggests that correlated differential privacy can successfully retain utility while preserving privacy.
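The correlated-sensitivity idea in the abstract above can be illustrated with a toy sketch. Everything below (the function names, the correlation matrix, the query) is invented for illustration and is not the paper's exact mechanism; the point is only that weighting each record's contribution by its correlation with the changed record yields a tighter noise scale than assuming all records move together.

```python
import math
import random

def correlated_sensitivity(record_sensitivities, correlation, i):
    """Sensitivity of the query w.r.t. record i, counting records
    correlated with i (correlation[i][j] in [0, 1])."""
    return sum(correlation[i][j] * record_sensitivities[j]
               for j in range(len(record_sensitivities)))

def laplace_noise(scale):
    """The difference of two unit exponentials is Laplace-distributed."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

# Toy example: a sum query over 4 records, each contributing at most 1.
record_sens = [1.0, 1.0, 1.0, 1.0]
# correlation[i][j]: degree to which changing record i also changes j.
correlation = [
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

epsilon = 1.0
# Calibrate noise to the worst-case correlated sensitivity (1.8 here),
# well below the conservative bound of 4.0 obtained by treating all
# four records as fully correlated.
cs = max(correlated_sensitivity(record_sens, correlation, i)
         for i in range(len(record_sens)))
true_answer = 10.0
noisy_answer = true_answer + laplace_noise(cs / epsilon)
```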
ur Rehman, Z, Hussain, OK, Hussain, FK, Chang, E & Dillon, T 2015, 'User-side QoS forecasting and management of cloud services', World Wide Web, vol. 18, no. 6, pp. 1677-1716.
View/Download from: Publisher's site
van Duren, I, Voinov, A, Arodudu, O & Firrisa, MT 2015, 'Where to produce rapeseed biodiesel and why? Mapping European rapeseed energy efficiency', Renewable Energy, vol. 74, pp. 49-59.
View/Download from: Publisher's site
Vasauskaite, J & Gill, AQ 2015, 'Rethinking enterprise architecture for sustainable energy system development', Journal of Electronic Science and Technology, vol. 13, no. 3, pp. 212-220.
View/Download from: Publisher's site
View description>>
The development of a sustainable energy system throughout an enterprise is a complex task, which requires an agile holistic approach. Such an approach needs to include a variety of objectives, including energy strategy formation and strategic decision-making, which are directly related to the analysis and management of the main areas of sustainable development: the economic, technological, environmental, and social. These multidimensional requirements of sustainability are often difficult to achieve within the enterprise, because these aspects are interrelated and influenced by various internal and external environmental factors. This paper first reviews the main challenges for an energy system, and then demonstrates how a strategic, agile, enterprise-architecture-driven approach could effectively guide sustainable energy system development. The study presented in this paper provides a holistic approach that contributes to the advancement and use of the literature dealing with issues of sustainable energy system development and agile enterprise architecture, which have not previously been discussed to any great extent.
Versendaal, J & Merigó, JM 2015, 'Service business track at INBAM, Barcelona, 2014 “Service Design and Technology”', Service Business, vol. 9, no. 2, pp. 183-184.
View/Download from: Publisher's site
Vizuete-Luciano, E, Merigó, JM, Gil-Lafuente, AM & Boria-Reverter, S 2015, 'Decision making in the assignment process by using the Hungarian algorithm with OWA operators', Technological and Economic Development of Economy, vol. 21, no. 5, pp. 684-704.
View/Download from: Publisher's site
View description>>
Assignment processes permit the coordination of two sets of variables, so that each variable of the first set is connected to another variable of the second set. This paper develops a new assignment algorithm by using a wide range of aggregation operators in the Hungarian algorithm. A new process based on the use of the ordered weighted averaging distance (OWAD) operator and the induced OWAD (IOWAD) operator in the Hungarian algorithm is introduced. We refer to these as the Hungarian algorithm with the OWAD operator (HAOWAD) and the Hungarian algorithm with the IOWAD operator (HAIOWAD). The main advantage of this approach is that we can provide a parameterized family of aggregation operators between the minimum and the maximum. Thus, the information can be represented in a more complete way. Furthermore, we also present a general framework by using generalized and quasi-arithmetic means. Therefore, we can consider a wide range of particular cases, including the Euclidean and the Minkowski distance. The paper ends with a practical application of the new approach in a financial decision-making problem regarding the assignment of investments.
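The OWAD operator at the core of this approach is straightforward to sketch: order the component distances before weighting them. The fragment below is illustrative only; the weight vectors and profile values are invented, and the operator would fill one cell of the Hungarian cost matrix.

```python
def owad(x, y, weights):
    """Ordered weighted averaging distance (OWAD): sort the individual
    distances |x_i - y_i| in descending order, then take the weighted
    sum with the (normalized) OWA weight vector."""
    assert len(x) == len(y) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    diffs = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)
    return sum(w * d for w, d in zip(weights, diffs))

# Distance between an ideal profile and a candidate's profile
# (values invented for illustration).
ideal = [0.9, 0.8, 0.7]
candidate = [0.6, 0.8, 0.9]

d_max = owad(ideal, candidate, [1.0, 0.0, 0.0])      # maximum distance
d_min = owad(ideal, candidate, [0.0, 0.0, 1.0])      # minimum distance
d_avg = owad(ideal, candidate, [1/3, 1/3, 1/3])      # arithmetic mean
# Moving the weight mass between the first and last position sweeps the
# parameterized family between the maximum and the minimum distance.
```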
Voinov, A, Arodudu, O, van Duren, I, Morales, J & Qin, L 2015, 'Estimating the potential of roadside vegetation for bioenergy production', Journal of Cleaner Production, vol. 102, pp. 213-225.
View/Download from: Publisher's site
Wang, W, Zhang, G & Lu, J 2015, 'Collaborative Filtering with Entropy-Driven User Similarity in Recommender Systems', International Journal of Intelligent Systems, vol. 30, no. 8, pp. 854-870.
View/Download from: Publisher's site
View description>>
© 2015 Wiley Periodicals, Inc. Collaborative filtering (CF) is the most popular approach in personalized recommender systems. Although CF approaches have been used successfully and have the advantage that item content need not be analyzed when generating recommendations, they nevertheless suffer from problems with accuracy. In this paper, we propose a new CF approach to improve recommendation performance. First, a new information entropy-driven user similarity measure model is proposed to measure the relative difference between ratings. A Manhattan distance-based model is then developed to address the fat tail problem by estimating the alternative active user average rating. The effectiveness of the proposed approach is analyzed on public and private data sets. As a result of the introduction of the new similarity measure and average rating estimation, we demonstrate that the proposed new CF recommendation approach provides better recommendations.
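A rough sketch of the entropy-driven idea: score two users as more similar when their rating differences on co-rated items are consistent, i.e., have low entropy. This is an illustrative stand-in, not the paper's exact formula, and the rating dictionaries are invented.

```python
import math
from collections import Counter

def entropy_similarity(ratings_u, ratings_v):
    """Similarity from the entropy of rating differences on co-rated
    items: consistent differences (low entropy) mean high similarity.
    An illustrative stand-in, not the paper's exact measure."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    diffs = [ratings_u[i] - ratings_v[i] for i in common]
    n = len(diffs)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(diffs).values())
    return 1.0 / (1.0 + h)  # entropy 0 -> similarity 1

u = {'a': 5, 'b': 4, 'c': 3}
v = {'a': 4, 'b': 3, 'c': 2}   # always exactly 1 lower than u
w = {'a': 5, 'b': 1, 'c': 4}   # erratic relative to u

sim_uv = entropy_similarity(u, v)  # diffs all 1: entropy 0, similarity 1.0
sim_uw = entropy_similarity(u, w)  # three distinct diffs: lower similarity
```

Note that a plain Pearson or cosine measure would miss the fact that v tracks u's relative preferences perfectly; the entropy view captures it.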
Wang, Y-K, Jung, T-P & Lin, C-T 2015, 'EEG-Based Attention Tracking During Distracted Driving', IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 6, pp. 1085-1094.
View/Download from: Publisher's site
Williams, R, Lawrence, C, Wilkes, E, Shipp, M, Henry, B, Eades, S, Mathers, B, Kaldor, J, Maher, L & Gray, D 2015, 'Sexual behaviour, drug use and health service use by young Noongar people in Western Australia: a snapshot', Sexual Health, vol. 12, no. 3, pp. 188-188.
View/Download from: Publisher's site
View description>>
Background: This study aimed to describe sexual health behaviour, alcohol and other drug use, and health service use among young Noongar people in the south-west of Western Australia. Method: A cross-sectional survey was undertaken among a sample of 244 Noongar people aged 16−30 years. Results: The sample was more disadvantaged than the wider Noongar population. Sexual activity was initiated at a young age, 18% had two or more casual sex partners in the previous 12 months, with men more likely to have done so than women (23% vs 14%). Condoms were always or often carried by 57% of men and 37% of women, and 36% of men and 23% of women reported condom use at last sex with a casual partner. Lifetime sexually transmissible infection diagnosis was 14%. Forty percent currently smoked tobacco and 25% reported risky alcohol consumption on a weekly and 7% on an almost daily basis. Cannabis was used by 37%, 12% used drugs in addition to cannabis and 11% reported recently injecting drugs. In the previous 12 months, 66% had a health check and 31% were tested for HIV or sexually transmissible infections. Additionally, 25% sought advice or assistance for mental health or alcohol and other drug issues. Discussion: Although some respondents engaged in risky sexual behaviour, alcohol and other drug use or both, most did not. Particularly encouraging was the engagement of respondents with the health care system, especially among those engaging in risky behaviours. The results confound negative stereotypes of Aboriginal people and demonstrate a level of resilience among respondents.
Wu, D, Lu, J & Zhang, G 2015, 'A Fuzzy Tree Matching-Based Personalized E-Learning Recommender System', IEEE Transactions on Fuzzy Systems, vol. 23, no. 6, pp. 2412-2426.
View/Download from: Publisher's site
View description>>
© 1993-2012 IEEE. The rapid development of e-learning systems provides learners with great opportunities to access learning activities online, and this greatly supports and enhances learning practice. However, one issue reduces the success of e-learning system applications: too many learning activities (such as various learning materials, subjects, and learning resources) are emerging in an e-learning system, making it difficult for individual learners to select activities suited to their particular situations/requirements, because there is no personalized service function. Recommender systems, which aim to provide personalized recommendations for products or services, can be used to solve this issue. However, e-learning systems need to be able to handle certain special requirements: 1) learning activities and learners' profiles often present tree structures; 2) learning activities contain vague and uncertain data, such as the uncertain categories that the learning activities belong to; 3) there are pedagogical issues, such as the precedence relations between learning activities. To deal with these three requirements, this study first proposes a fuzzy tree-structured learning activity model and a learner profile model to comprehensively describe the complex learning activities and learner profiles. In the two models, fuzzy category trees and related similarity measures are presented to infer the semantic relations between learning activities or learner requirements. Since it is impossible for two trees to be completely identical in practice, a fuzzy tree matching method is carefully discussed. A fuzzy tree matching-based hybrid learning activity recommendation approach is then developed. This approach takes advantage of both the knowledge-based and collaborative filtering-based recommendation approaches, and considers both the semantic and collaborative filtering similarities between learners. Finally, an e-learning recommender system prototype is well de...
Wu, D, Zhang, G & Lu, J 2015, 'A Fuzzy Preference Tree-Based Recommender System for Personalized Business-to-Business E-Services', IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 29-43.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. The Web creates excellent opportunities for businesses to provide personalized online services to their customers. Recommender systems aim to automatically generate personalized suggestions of products/services to customers (businesses or individuals). Although recommender systems have been well studied, there are still two challenges in the development of a recommender system, particularly in real-world B2B e-services: 1) items or user profiles often present complicated tree structures in business applications, which cannot be handled by normal item similarity measures, and 2) online users' preferences are often vague and fuzzy, and cannot be dealt with by existing recommendation methods. To handle both these challenges, this study first proposes a method for modeling fuzzy tree-structured user preferences, in which fuzzy set techniques are used to express user preferences. A recommendation approach to recommending tree-structured items is then developed. The key technique in this study is a comprehensive tree matching method, which can match two instances of tree-structured data and identify their corresponding parts by considering all the information on tree structures, node attributes, and weights. Importantly, the proposed fuzzy preference tree-based recommendation approach is tested and validated using an Australian business dataset and the MovieLens dataset. Experimental results show that the proposed fuzzy tree-structured user preference profile reflects user preferences effectively and the recommendation approach demonstrates excellent performance for tree-structured items, especially in e-business applications. This study also applies the proposed recommendation approach to the development of a web-based business partner recommender system.
Wu, Z, Shi, J, Lu, C, Chen, E, Xu, G, Li, G, Xie, S & Yu, PS 2015, 'Constructing plausible innocuous pseudo queries to protect user query intention', Information Sciences, vol. 325, pp. 215-226.
View/Download from: Publisher's site
Xu, G, Wu, Z, Li, G & Chen, E 2015, 'Improving contextual advertising matching by using Wikipedia thesaurus knowledge', Knowledge and Information Systems, vol. 43, no. 3, pp. 599-631.
View/Download from: Publisher's site
View description>>
As a prevalent type of Web advertising, contextual advertising refers to the placement of the most relevant commercial ads within the content of a Web page, to provide a better user experience and as a result increase the user’s ad-click rate. However, due to the intrinsic problems of homonymy and polysemy, the low intersection of keywords, and a lack of sufficient semantics, traditional keyword matching techniques are not able to effectively handle contextual matching and retrieve relevant ads for the user, resulting in an unsatisfactory performance in ad selection. In this paper, we introduce a new contextual advertising approach to overcome these problems, which uses Wikipedia thesaurus knowledge to enrich the semantic expression of a target page (or an ad). First, we map each page into a keyword vector, upon which two additional feature vectors, the Wikipedia concept and category vector derived from the Wikipedia thesaurus structure, are then constructed. Second, to determine the relevant ads for a given page, we propose a linear similarity fusion mechanism, which combines the above three feature vectors in a unified manner. Last, we validate our approach using a set of real ads, real pages along with the external Wikipedia thesaurus. The experimental results show that our approach outperforms the conventional contextual advertising matching approaches and can substantially improve the performance of ad selection.
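The linear similarity fusion step described above can be sketched as a weighted combination of per-channel cosine similarities. The weights, vectors, and names below are invented for illustration and are not the paper's tuned values.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fused_similarity(page, ad, weights=(0.5, 0.3, 0.2)):
    """Linear fusion of keyword, Wikipedia-concept, and Wikipedia-category
    similarities (illustrative weights, not values from the paper)."""
    wk, wc, wg = weights
    return (wk * cosine(page['keywords'], ad['keywords'])
            + wc * cosine(page['concepts'], ad['concepts'])
            + wg * cosine(page['categories'], ad['categories']))

page = {'keywords': {'camera': 2, 'lens': 1},
        'concepts': {'Photography': 1.0},
        'categories': {'Optics': 1.0}}
ad_a = {'keywords': {'camera': 1, 'tripod': 1},
        'concepts': {'Photography': 1.0},
        'categories': {'Optics': 1.0}}
ad_b = {'keywords': {'loan': 2},
        'concepts': {'Finance': 1.0},
        'categories': {'Banking': 1.0}}

score_a = fused_similarity(page, ad_a)
score_b = fused_similarity(page, ad_b)  # no overlap on any channel -> 0.0
```

The concept and category channels are what rescue pages and ads that share meaning but few literal keywords, which is the polysemy/low-intersection problem the abstract describes.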
Xu, G, Wu, Z, Zhang, Y & Cao, J 2015, 'Social networking meets recommender systems: survey', International Journal of Social Network Mining, vol. 2, no. 1, pp. 64-64.
View/Download from: Publisher's site
View description>>
Today, the emergence of web-based communities and hosted services such as social networking sites, wikis and folksonomies brings tremendous freedom of web autonomy and facilitates collaboration and knowledge sharing between users. Along with the interaction between users and computers, social media is rapidly becoming an important part of our digital experience, ranging from digital textual information to diverse multimedia forms. These aspects and characteristics constitute the core of the second generation of the web. Social networking (SN) and recommender systems (RS) are two hot and popular topics in the current Web 2.0 era: the former emphasises the generation, dissemination and evolution of user relations, and the latter focuses on the use of the collective preferences of users so as to provide a better experience for, and greater loyalty of, users in various web applications. Leveraging user social connections can alleviate the common problems of sparsity and cold-start encountered in RS. This paper aims to summarise the research progress and findings in these two areas and to showcase the empowerment of integrating these two kinds of research strengths.
Xu, G, Zong, Y, Jin, P, Pan, R & Wu, Z 2015, 'KIPTC: a kernel information propagation tag clustering algorithm', Journal of Intelligent Information Systems, vol. 45, no. 1, pp. 95-112.
View/Download from: Publisher's site
View description>>
In social annotation systems, users annotate digital data sources by using tags, which are freely chosen textual descriptions. Tags are used to index, annotate and retrieve resources as additional resource metadata. Poor retrieval performance remains a major challenge in most social annotation systems, resulting from several problems: the ambiguity, redundancy and weak semantics of tags. Clustering is a useful tool for handling these problems in social annotation systems. In this paper, we propose a novel tag clustering algorithm based on kernel information propagation. This approach makes use of the kernel density estimation of the kNN neighborhood directed graph as a start to reveal the prestige rank of tags in tagging data. The random walk with restart algorithm is then employed to determine the center points of tag clusters. The main strength of the proposed approach is the capability of partitioning tags from the perspective of tag prestige rank rather than the intuitive similarity calculation itself. Experimental studies on six real-world datasets demonstrate the effectiveness and superiority of the proposed method against other state-of-the-art clustering approaches in terms of various evaluation metrics.
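The random walk with restart step mentioned above can be sketched in a few lines of Python. The toy tag graph and parameter values below are invented for illustration; in the paper's setting, the graph would be the kNN neighborhood directed graph over tags.

```python
def rwr(adj, seed, restart=0.15, iters=200):
    """Random walk with restart: iterate
    r <- (1 - c) * P^T r + c * e  until convergence, where P is the
    row-normalized adjacency matrix and e is the indicator of the seed."""
    n = len(adj)
    p = []
    for row in adj:
        s = sum(row)
        p.append([v / s if s else 0.0 for v in row])
    r = [1.0 / n] * n
    e = [1.0 if i == seed else 0.0 for i in range(n)]
    for _ in range(iters):
        r = [(1 - restart) * sum(p[j][i] * r[j] for j in range(n))
             + restart * e[i] for i in range(n)]
    return r

# Toy tag graph: tags 0-2 are densely linked, tag 3 hangs off tag 2.
graph = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
scores = rwr(graph, seed=0)
# Tags closer to the seed receive higher steady-state probability;
# the clustering step uses such scores to pick cluster centers.
```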
Xu, Y, Xu, A, Merigó, JM & Wang, H 2015, 'Hesitant fuzzy linguistic ordered weighted distance operators for group decision making', Journal of Applied Mathematics and Computing, vol. 49, no. 1-2, pp. 285-308.
View/Download from: Publisher's site
View description>>
Since the concept of hesitant fuzzy sets was put forward, different types of extensions have been proposed to deal with actual problems. A hesitant fuzzy linguistic term set provides a linguistic and computational basis to increase the flexibility and richness of linguistic elicitation based on the fuzzy linguistic approach. In this paper, we consider the concept of distance operator and develop a hesitant fuzzy linguistic ordered weighted distance (HFLOWD) operator. The HFLOWD operator is very suitable to deal with the uncertain situations with linguistic information. Moreover, it is also a new aggregation operator that provides parameterized families of distance aggregation operators between the minimum and the maximum distance. Some of its main properties and different families of HFLOWD operators are investigated. Finally, an application of the new approach is offered and comparative analyses are also provided to show the advantages over existing methods.
Xuan, J, Lu, J, Zhang, G & Luo, X 2015, 'Topic Model for Graph Mining', IEEE Transactions on Cybernetics, vol. 45, no. 12, pp. 2792-2803.
View/Download from: Publisher's site
View description>>
© 2013 IEEE. Graph mining has been a popular research area because of its numerous application scenarios. Much unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that it cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both the unsupervised learning and supervised learning of the graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the 'bag-of-words' assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms the latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2015, 'Infinite Author Topic Model based on Mixed Gamma-Negative Binomial Process.', CoRR, vol. abs/1503.08535, pp. 1-10.
View description>>
Incorporating the side information of a text corpus, i.e., authors, time stamps, and emotional tags, into traditional text mining models has gained significant interest in the areas of information retrieval, statistical natural language processing, and machine learning. One branch of these works is the so-called Author Topic Model (ATM), which incorporates the authors' interests as side information into the classical topic model. However, the existing ATM needs to predefine the number of topics, which is difficult and inappropriate in many real-world settings. In this paper, we propose an Infinite Author Topic (IAT) model to resolve this issue. Instead of assigning a discrete probability to a fixed number of topics, we use a stochastic process to determine the number of topics from the data itself. To be specific, we extend a gamma-negative binomial process to three levels in order to capture the author-document-keyword hierarchical structure. Furthermore, each document is assigned a mixed gamma process that accounts for multiple authors' contributions towards this document. An efficient Gibbs sampling inference algorithm, with each conditional distribution being closed-form, is developed for the IAT model. Experiments on several real-world datasets show the capabilities of our IAT model to learn the hidden topics, authors' interests in these topics and the number of topics simultaneously.
Ye, D & Zhang, M 2015, 'A Self-Adaptive Strategy for Evolution of Cooperation in Distributed Networks', IEEE Transactions on Computers, vol. 64, no. 4, pp. 899-911.
View/Download from: Publisher's site
Ye, D, Zhang, M & Sutanto, D 2015, 'Decentralised dispatch of distributed energy resources in smart grids via multi-agent coalition formation', Journal of Parallel and Distributed Computing, vol. 83, pp. 30-43.
View/Download from: Publisher's site
Ye, D, Zhang, M & Yang, Y 2015, 'A Multi-Agent Framework for Packet Routing in Wireless Sensor Networks', Sensors, vol. 15, no. 5, pp. 10026-10047.
View/Download from: Publisher's site
Yin, H, Cui, B, Chen, L, Hu, Z & Zhang, C 2015, 'Modeling Location-Based User Rating Profiles for Personalized Recommendation', ACM Transactions on Knowledge Discovery from Data, vol. 9, no. 3, pp. 1-41.
View/Download from: Publisher's site
View description>>
This article proposes LA-LDA, a location-aware probabilistic generative model that exploits location-based ratings to model user profiles and produce recommendations. Most of the existing recommendation models do not consider the spatial information of users or items; however, LA-LDA supports three classes of location-based ratings, namely spatial user ratings for nonspatial items, nonspatial user ratings for spatial items, and spatial user ratings for spatial items. LA-LDA consists of two components, ULA-LDA and ILA-LDA, which are designed to take into account user and item location information, respectively. The component ULA-LDA explicitly incorporates and quantifies the influence from local public preferences to produce recommendations by considering user home locations, whereas the component ILA-LDA recommends items that are closer in both taste and travel distance to the querying users by capturing item co-occurrence patterns, as well as item location co-occurrence patterns. The two components of LA-LDA can be applied either separately or collectively, depending on the available types of location-based ratings. To demonstrate the applicability and flexibility of the LA-LDA model, we deploy it to both top-k recommendation and cold start recommendation scenarios. Experimental evidence on large-scale real-world data, including the data from Gowalla (a location-based social network), DoubanEvent (an event-based social network), and MovieLens (a movie recommendation system), reveal that LA-LDA models user profiles more accurately by outperforming existing recommendation models for top-k recommendation and the cold start problem.
Yin, H, Cui, B, Chen, L, Hu, Z & Zhou, X 2015, 'Dynamic User Modeling in Social Media Systems', ACM Transactions on Information Systems, vol. 33, no. 3, pp. 1-44.
View/Download from: Publisher's site
View description>>
Social media provides valuable resources to analyze user behaviors and capture user preferences. This article focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. Considering that users' interests are not always stable and may change over time, we extend TCAM to a dynamic temporal context-aware mixture model (DTCAM) to capture users' changing interests. To alleviate the problem of data sparsity, we exploit the social and temporal correlation information by integrating a social-temporal regularization framework into the DTCAM model. To further improve the performance of our proposed models (TCAM and DTCAM), an item-weighting scheme is proposed to enable them to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on our proposed models, we design a temporal context-aware recommender system (TCARS). To speed up the process of producing the top-k recommendations from large-scale social media data, we develop an efficient query-processing technique to support TCARS. Extensive experiments have been conducted to evaluate the performance of our models on four real-world dataset...
Yu, S 2015, 'Special Issue on networking aspects in Big Data', International Journal of Parallel, Emergent and Distributed Systems, vol. 30, no. 1, pp. 3-4.
View/Download from: Publisher's site
Yu, S, Gu, G, Barnawi, A, Guo, S & Stojmenovic, I 2015, 'Malware Propagation in Large-Scale Networks', IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 1, pp. 170-179.
View/Download from: Publisher's site
Yu, S, Guo, S & Stojmenovic, I 2015, 'Fool Me If You Can: Mimicking Attacks and Anti-Attacks in Cyberspace', IEEE Transactions on Computers, vol. 64, no. 1, pp. 139-151.
View/Download from: Publisher's site
Yu, S, Lin, X & Misic, J 2015, 'Networking for big data: part 2 [Guest Editorial]', IEEE Network, vol. 29, no. 5, pp. 4-5.
View/Download from: Publisher's site
Yu, S, Wang, G & Zhou, W 2015, 'Modeling malicious activities in cyber space', IEEE Network, vol. 29, no. 6, pp. 83-87.
View/Download from: Publisher's site
Zeng, D, Guo, S, Barnawi, A, Yu, S & Stojmenovic, I 2015, 'An Improved Stochastic Modeling of Opportunistic Routing in Vehicular CPS', IEEE Transactions on Computers, vol. 64, no. 7, pp. 1819-1829.
View/Download from: Publisher's site
Zeng, D, Guo, S, Huang, H, Yu, S & Leung, VCM 2015, 'Optimal VM placement in data centres with architectural and resource constraints', International Journal of Autonomous and Adaptive Communications Systems, vol. 8, no. 4, pp. 392-392.
View/Download from: Publisher's site
View description>>
Copyright © 2015 Inderscience Enterprises Ltd. Recent advances in virtualisation technology enable service provisioning in a flexible way by consolidating several virtual machines (VMs) into a single physical machine (PM). Inter-VM communications are inevitable when a group of VMs in a data centre provide services in a collaborative manner. With the increasing demands of such intra-data-centre traffic, it becomes essential to study the VM-to-PM placement such that the aggregated communication cost within a data centre is minimised. This optimisation problem is proved NP-hard and is formulated in this paper as an integer program with quadratic constraints. Different from existing work, our formulation takes into consideration the data-centre architecture, the inter-VM traffic pattern, and the resource capacity of PMs. Furthermore, a heuristic algorithm is proposed and its high efficiency is extensively validated.
Zeng, Y, Chen, C, Liu, W, Fu, Q, Han, Z, Li, Y, Feng, S, Li, X, Qi, C, Wu, J, Wang, D, Corbett, C, Chan, BP, Ruan, D & Du, Y 2015, 'Injectable microcryogels reinforced alginate encapsulation of mesenchymal stromal cells for leak-proof delivery and alleviation of canine disc degeneration', Biomaterials, vol. 59, pp. 53-65.
View/Download from: Publisher's site
View description>>
In situ crosslinked thermo-responsive hydrogels applied for the minimally invasive treatment of intervertebral disc degeneration (IVDD) may not prevent extrusion of the cell suspension from the injection site, due to the high internal pressure of the intervertebral disc (IVD), causing treatment failure or osteophyte formation. In this study, mesenchymal stromal cells (MSCs) were encapsulated in an alginate precursor and loaded into previously developed macroporous PGEDA-derived microcryogels (PMs) to form three-dimensional (3D) microscale cellular niches, making the non-thermo-responsive alginate hydrogel injectable. The PM-reinforced alginate hydrogel showed superior elasticity compared to alginate hydrogel alone and effectively protected the encapsulated cells through injection. Chondrogenically committed MSCs in the injectable microniches expressed higher levels of nucleus pulposus (NP) cell markers compared to 2D cultured cells. In an ex vivo organ culture model, injection of MSC-laden PMs into NP tissue prevented cell leakage and improved cell retention and survival compared to free cell injection. In canine IVDD models, alleviated degeneration was observed in the MSC-laden PM-treated group after six months, superior to the other treated groups. Our results provide an in-depth demonstration of injectable alginate hydrogel reinforced by PMs as a leak-proof cell delivery system for augmented regenerative therapy of IVDD in canine models.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Multi-follower Decision Making', Intelligent Systems Reference Library, vol. 82, pp. 65-104.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. A bi-level decision problem may involve multiple decision entities (decision units or decision makers) at the lower level, and these followers may have different reactions to a possible decision made by the leader.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Multi-leader Decision Making', Intelligent Systems Reference Library, vol. 82, pp. 105-120.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. In real-world applications, a bi-level decision problem may involve multiple decision entities on the upper level, that is, the bi-level decision problem has multiple leaders. The leaders may have their individual decision variables, objective functions and/or constraint conditions. This kind of bi-level decision problem is called a bi-level multi-leader (BLML) decision problem.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Pricing and Replenishment in Supply Chains', Intelligent Systems Reference Library, vol. 82, pp. 325-336.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. Effective pricing and replenishment strategies in supply chain management are the keys to business success. Notably, with rapid technological innovation and strong competition in hi-tech industries such as computer and communication organizations, the upstream component price and the downstream product cost usually decline significantly with time. As a result, effective pricing and replenishment decision models are very important in supply chain management. This chapter first establishes a bi-level pricing and replenishment strategy optimization model for the hi-tech industry. Then, two bi-level pricing models for pricing problems, in which the buyer and the vendor in a supply chain are respectively designated as the leader and the follower, are presented. Experiments illustrate that bi-level decision techniques can solve problems defined by these models and can achieve a profit increase in some situations, compared with the existing methods.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Programming for Competitive Strategic Bidding Optimization in Electricity Markets', Intelligent Systems Reference Library, vol. 82, pp. 315-324.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. In this chapter we focus on the application of bi-level programming in electricity markets (power markets). Competitive strategic bidding optimization of electric power plants (companies) is becoming one of the key issues in electricity markets. This chapter presents a strategic bidding optimization technique developed by applying bi-level programming. By analyzing the strategic bidding behavior of power plants, we understand that this bidding problem involves several power plants and only one market operator, known respectively as multiple leaders and a single follower.
Zhang, G, Lu, J & Gao, Y 2015, 'Bi-level Programming Models and Algorithms', Intelligent Systems Reference Library, vol. 82, pp. 47-62.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. This chapter introduces basic definitions, theorems, models and algorithms for bi-level programming (bi-level decision-making) and also basic models of multi-level programming, which will be used in the remaining chapters of this book.
Zhang, G, Lu, J & Gao, Y 2015, 'Fuzzy Multi-objective Bi-level Goal Programming', Intelligent Systems Reference Library, vol. 82, pp. 229-247.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. We presented the definitions, solutions, and algorithms for fuzzy multi-objective bi-level programming (FMO-BLP) problems. This chapter also addresses the fuzzy multi-objective bi-level problem but applies a goal programming approach, which we call fuzzy multi-objective bi-level goal programming (FMO-BLGP). This chapter will discuss related definitions, solution concepts, and algorithms for the FMO-BLGP problem and will focus on the linear version of the FMO-BLGP problem. First, a fuzzy ranking method is used to give a mathematical definition of a FMO-BLGP problem; then, based on a fuzzy vector distance measure, a fuzzy bi-level goal programming (FBLGP) model is proposed. An algorithm for solving the FMO-BLGP problem is also developed.
Zhang, G, Lu, J & Gao, Y 2015, 'Optimization Models', Intelligent Systems Reference Library, vol. 82, pp. 25-46.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. To model and solve a bi-level or multi-level optimization problem, we must first understand basic single-level optimization models and related solution methods. This chapter introduces the concepts, models and solution methods of basic single-level optimization, including linear programming, non-linear programming, multi-objective programming, goal programming, Stackelberg game theory, and particle swarm optimization. This knowledge will be used in the rest of the book.
Zhang, G, Lu, J & Gao, Y 2015, 'Rule-Set-Based Bi-level Decision Making', Intelligent Systems Reference Library, vol. 82, pp. 251-286.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. As discussed in previous chapters, bi-level decision-making problems are normally modeled by bi-level programming.
Zhang, G, Lu, J & Gao, Y 2015, 'Tri-level Multi-follower Decision Making', Intelligent Systems Reference Library, vol. 82, pp. 121-171.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. In a tri-level hierarchical decision problem, each decision entity at one level has its objective, constraints and decision variables affected in part by the decision entities at the other two levels. The choice of values for its variables may allow it to influence the decisions made at other levels, and thereby improve its own objective. We call this a tri-level decision problem. When multiple decision entities are involved at the middle and bottom levels, the top-level entity's decision will be affected not only by these followers' individual reactions but also by the relationships among the followers. We call this problem a tri-level multi-follower (TLMF) decision problem.
Zhang, Y, Xu, C, Yu, S, Li, H & Zhang, X 2015, 'SCLPV: Secure Certificateless Public Verification for Cloud-Based Cyber-Physical-Social Systems Against Malicious Auditors', IEEE Transactions on Computational Social Systems, vol. 2, no. 4, pp. 159-170.
View/Download from: Publisher's site
View description>>
© 2014 IEEE. A cyber-physical-social system (CPSS) allows individuals to share personal information collected not only from cyberspace but also from physical space. This has resulted in the generation of large volumes of data at a user's local storage. However, it is very expensive for users to store large data sets, and doing so also causes problems in data management. It is therefore of critical importance to outsource the data to cloud servers, which provides users an easy, cost-effective, and flexible way to manage data. However, users lose control over their data once they outsource it to cloud servers, which poses challenges to the integrity of the outsourced data. Many schemes have been proposed to allow a third-party auditor to verify data integrity using the public keys of users. Most of these schemes bear a strong assumption that the auditors are honest and reliable, and are thereby vulnerable in the case that auditors are malicious. Moreover, in most of these schemes, an auditor needs to manage users' certificates to choose the correct public keys for verification. In this paper, we propose a secure certificateless public integrity verification scheme (SCLPV). The SCLPV is the first work that simultaneously supports certificateless public verification and resistance against malicious auditors to verify the integrity of outsourced data in CPSS. A formal security proof proves the correctness and security of our scheme. In addition, an elaborate performance analysis demonstrates that the SCLPV is efficient and practical. Compared with the only existing certificateless public verification scheme (CLPV), the SCLPV provides stronger security guarantees in terms of remedying the security vulnerability of the CLPV and resistance against malicious auditors. In comparison with the best integrity verification scheme achieving resistance against malicious auditors, the communication cost between the auditor and the cloud server of the SCLPV is independent of the size of the processed data, meanw...
Zheng, Z, Peng, K, Du, W & Zhang, G 2015, 'Modeling, Control, and Optimization in Aeronautical Engineering', The Scientific World Journal, vol. 2015, pp. 1-2.
View/Download from: Publisher's site
Zhou, H, Liu, B, Hou, F, Luan, TH, Zhang, N, Gui, L, Yu, Q & Shen, XS 2015, 'Spatial Coordinated Medium Sharing: Optimal Access Control Management in Drive-Thru Internet', IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2673-2686.
View/Download from: Publisher's site
Zhou, H, Liu, B, Hou, F, Zhang, N, Gui, L, Chen, J & Shen, X 2015, 'Database-assisted dynamic spectrum access with QoS guarantees: A double-phase auction approach', China Communications, vol. 12, no. 1, pp. 66-77.
View/Download from: Publisher's site
Abdullaev, S, McBurney, P & Musial, K 2015, 'Direct Exchange Mechanisms for Option Pricing', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer International Publishing, pp. 269-284.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. This paper presents the design and simulation of direct exchange mechanisms for pricing European options. It extends McAfee's single-unit double auction to a multi-unit format, and then applies it to pricing options through aggregating agent predictions of future asset prices. We also propose the design of a combinatorial exchange for the simulation of agents using option trading strategies. We present several option trading strategies that are commonly used in real option markets to minimise the risk of future loss, and assume that agents can submit them as a combinatorial bid to the market maker. We provide simulation results for the proposed mechanisms, and compare them with the existing Black-Scholes model commonly used for option pricing. The simulation also tests the effect of supply and demand changes on option prices, and takes into account agents with different implied volatility. We also observe how option prices are affected by the agents' choices of option trading strategies.
Adak, C & Chaudhuri, BB 2015, 'Writer Identification from offline isolated Bangla characters and numerals', 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015 13th International Conference on Document Analysis and Recognition (ICDAR), IEEE, Nancy, FRANCE, pp. 486-490.
View/Download from: Publisher's site
Adak, C, Maitra, P, Chaudhuri, BB & Blumenstein, M 2015, 'Binarization of old halftone text documents', TENCON 2015 - 2015 IEEE Region 10 Conference, TENCON 2015 - 2015 IEEE Region 10 Conference, IEEE, Macao, PEOPLES R CHINA, pp. 1-5.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. A degraded document image should be cleaned before being subjected to Optical Character Recognition (OCR), otherwise the result may be erroneous. Though major studies have been conducted on degraded document image cleaning, halftone documents have received less attention. Since halftone documents contain halftone dot patterns, classical binarization techniques do not produce proper output for feeding into the OCR engine. In this paper, old halftone documents are considered for text area cleaning and binarization. At the beginning, the zone of interest (text area) is found using local binary patterns and contour analysis. Reasonably small zones are filtered out as noise. Then the foreground pixels are separated using background estimation. After this, an automated spatial smoothing technique is employed on the foreground. Finally, a local binarization technique is used to produce the binary image. The proposed method has been tested on various old and degraded halftone documents and has produced fairly good results.
Ahadi, A, Lister, R, Haapala, H & Vihavainen, A 2015, 'Exploring Machine Learning Methods to Automatically Identify Students in Need of Assistance', Proceedings of the eleventh annual International Conference on International Computing Education Research, ICER '15: International Computing Education Research Conference, ACM, Omaha, Nebraska, USA, pp. 121-130.
View/Download from: Publisher's site
Ahadi, A, Prior, J, Behbood, V & Lister, R 2015, 'A Quantitative Study of the Relative Difficulty for Novices of Writing Seven Different Types of SQL Queries', Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, ITICSE '15: Innovation and Technology in Computer Science Education Conference 2015, ACM, Lithuania, pp. 201-206.
View/Download from: Publisher's site
View description>>
Copyright © 2015 ACM. This paper presents a quantitative analysis of data collected by an online testing system for SQL 'select' queries. The data was collected from almost one thousand students, over eight years. We examine which types of queries our students found harder to write. The seven types of SQL queries studied are: simple queries on one table; grouping, both with and without 'having'; natural joins; simple and correlated sub-queries; and self-joins. The order of queries in the preceding sentence reflects the order of student difficulty we see in our data.
Aiello, R, Banterle, F, Pietroni, N, Malomo, L, Cignoni, P & Scopigno, R 2015, 'Compression and Querying of Arbitrary Geodesic Distances', ICIAP (1), Springer, Cham, pp. 282-293.
Ali, M & Behbood, V 2015, 'Operation Properties and delta-Equalities of Complex Fuzzy Classes', 2015 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS AND KNOWLEDGE ENGINEERING (ISKE), International Conference on Intelligent Systems and Knowledge Engineering, IEEE, Taipei, pp. 586-593.
View/Download from: Publisher's site
View description>>
A complex fuzzy class is a set of fuzzy sets which is characterized by a pure complex fuzzy grade of membership, where both the real and imaginary parts are fuzzy functions. The values that a pure complex fuzzy grade of membership may receive all lie within the unit square or unit circle in the complex plane. In this paper, we investigate different operation properties and propose a distance measure for complex fuzzy classes. The distance of two complex fuzzy classes measures the difference between the memberships of the fuzzy sets in the two complex fuzzy classes as well as the difference between the memberships in the related fuzzy sets in the two complex fuzzy classes. Delta-equalities of two complex fuzzy classes are then defined, based mainly on this distance measure. If the distance between two complex fuzzy classes is less than or equal to delta, then they are said to be delta-equal. This paper reveals that different operations between complex fuzzy classes can affect given delta-equalities of complex fuzzy classes. Further, an application utilizing the concept of delta-equalities of complex fuzzy classes to stocks and mutual funds in the stock market is presented.
Alkalbani, A, Shenoy, A, Hussain, FK, Hussain, OK & Xiang, Y 2015, 'Design and Implementation of the Hadoop-based Crawler for SaaS Service Discovery', 2015 IEEE 29th International Conference on Advanced Information Networking and Applications (IEEE AINA 2015), International Conference on Advanced Information Networking and Applications (was ICOIN), IEEE, Gwangju, SOUTH KOREA, pp. 785-790.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Software as a Service is the most adopted cloud service (46%) compared with Infrastructure as a Service (IaaS) (35%) and Platform as a Service (PaaS) (34%) [1]. Currently, discovering a SaaS of interest online across multiple cloud providers and review websites is a significant challenge, especially when using general search mechanisms (Google and Yahoo!) and the search tools provided by existing review sites and directories. Discovering a SaaS is time-consuming, requiring consumers to browse several websites to select the appropriate service. This paper addresses the issues related to the efficient discovery of SaaS across review websites by developing the SaaS Nutch Hadoop-based Crawler Engine - the SaaS Nhbased Crawler. The crawler is capable of crawling cloud reviews to find SaaSs of interest and enables the establishment of a central repository that could be used to discover SaaSs much more efficiently. The results show that the SaaS Nhbased crawler can effectively crawl review websites and provide a list of the latest SaaSs being offered.
Alshehri, MD & Hussain, FK 2015, 'A Comparative Analysis of Scalable and Context-Aware Trust Management Approaches for Internet of Things', NEURAL INFORMATION PROCESSING, ICONIP 2015, PT IV, International Conference on Neural Information Processing, Springer, Istanbul, TURKEY, pp. 596-605.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. The Internet of Things - IoT - is a new paradigm in technology that allows most physical ‘things’ to contact each other. Trust between IoT devices is a critical factor. Trust in the IoT environment can be modeled using various approaches, such as confidence level and reputation parameters. Furthermore, trust is an important element in engineering reliable and scalable networks. In this paper, we survey scalable and context-aware trust management for IoT from three perspectives. First, we present an overview of the IoT and the importance of trust in relation to it; we then provide an in-depth trust/reliability management protocol for the IoT and evaluate comparable trust management protocols. We also investigate a scalable solution for trust management in the IoT and provide a comparative evaluation of existing trust solutions. We then present a context-aware assessment for the IoT and compare the different trust solutions. Lastly, we give a full comparative analysis of trust/reliability management in the IoT. Our results are drawn from this comparative analysis, and directions for future research are outlined.
Alzoubi, YI & Gill, AQ 2015, 'An agile enterprise architecture driven model for geographically distributed agile development', International Conference on Information Systems Development, ISD 2015.
View description>>
Agile development is a highly collaborative environment, which requires active communication (i.e. effective and efficient communication) among stakeholders. Active communication in a geographically distributed agile development (GDAD) environment is difficult to achieve due to many challenges. The literature reports that active communication plays a critical role in enhancing GDAD performance by reducing the cost and time of a project. However, little empirical evidence exists about how to study and establish the active communication construct in GDAD in terms of its dimensions, determinants and effects on GDAD performance. To address this knowledge gap, this paper describes an enterprise architecture (EA) driven research model to identify and empirically examine the GDAD active communication construct. This model can be used by researchers and practitioners to examine the relationships among two dimensions of GDAD active communication (effectiveness and efficiency), one antecedent that can be controlled (agile EA), and four dimensions of GDAD performance (on-time completion, on-budget completion, software functionality and software quality).
Angelini, L, Lalanne, D, van den Hoven, E, Mazalek, A, Abou Khaled, O & Mugellini, E 2015, 'Tangible Meets Gestural', Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '15: Ninth International Conference on Tangible, Embedded, and Embodied Interaction, ACM, New York, pp. 473-476.
View/Download from: Publisher's site
View description>>
More and more objects of our everyday environment are becoming smart and connected, offering us new interaction possibilities. Tangible interaction and gestural interaction are promising communication means with these objects in this post-WIMP interaction era. Although based on different principles, they both exploit our body awareness and our skills to provide a richer and more intuitive interaction. Occasionally, when user gestures involve physical artifacts, tangible interaction and gestural interaction can blend into a new paradigm, i.e., tangible gesture interaction [5]. This workshop fosters the comparison among these different interaction paradigms and offers a unique opportunity to discuss their analogies and differences, as well as the definitions, boundaries, strengths, application domains and perspectives of tangible gesture interaction. Participants from different backgrounds are invited.
Ariffin, SA & Dyson, LE 2015, 'Culturally Appropriate Design of Mobile Learning Applications in the Malaysian Context', CROSS-CULTURAL DESIGN: APPLICATIONS IN MOBILE INTERACTION, EDUCATION, HEALTH, TRANSPORT AND CULTURAL HERITAGE, CCD 2015, PT II, International Conference on Cross-Cultural Design, Springer, Los Angeles, USA, pp. 3-14.
View/Download from: Publisher's site
View description>>
Many developing countries lack culturally appropriate design guidelines to inform the development of m-learning applications suitable for local use. This study presents the findings from a heuristic evaluation by academics and students at public universities in Malaysia for three locally produced mobile learning applications. The local cultural content and aesthetic values of the applications found a high level of acceptance with the participants. As a result, four principles were identified to support the design of culturally appropriate interfaces for mobile learning applications for the Malaysian context. These were: suitable local cultural content; aesthetic value according to local culture, including appropriate choice of color, and traditional designs and motifs derived largely from local flora and fauna; local language or bilingual communication; and local philosophical values embedded in the content and design.
Atif, A, Richards, D & Bilgin, A 2015, 'Student preferences and attitudes to the use of early alerts', 2015 Americas Conference on Information Systems, AMCIS 2015.
View description>>
Learning analytics is receiving increased attention because it offers to assist higher education institutions in improving and increasing student success by automating the identification of at-risk students, thereby enabling interventions. While learning analytics research has focused on detection and appropriate interventions, such as early alerts, there has been little investigation of student attitudes and preferences towards receiving early alerts. In this paper, we report the results of a study involving three first year units that sought to determine the opinions and preferences of students on their attitudes towards the interventions; how to best contact students; their academic issues; type(s) and quality of communication with the teaching staff; and types of university services required and received. We found that the majority of students did want to be alerted, preferred to receive alerts as soon as performance was unsatisfactory, and strongly preferred to be alerted via email, then face-to-face then phone.
Bakker, S, de Waart, S & van den Hoven, E 2015, 'Tactility Trialing: Exploring Materials to Inform Tactile Experience Design', Proceedings of the Design and semantics of Form and Movement, International Conference on Design and Semantics of Form and Movement, Politecnico di Milano, Milan, Italy, pp. 119-128.
View description>>
Although the materials of tangible interaction designs largely determine their user experience, material choices are often steered by practical motives. This paper presents ‘tactility trialing’, an approach to exploring the tactile experiences of materials to inform the design of tangible artifacts. Through experience formulation, material selection, artifact creation and short user studies, designers and design-researchers are enabled to make informed decisions on the materials to be used in order to evoke the intended experience. The approach is illustrated through two case studies of student work. Tactility trialing helped them become acquainted with tactile material qualities in practice, and with the applicability of material characteristics such as resilience and hardness in design.
Bano, M & Zowghi, D 2015, 'EVALUATOR: An Automated Tool for Service Selection', REQUIREMENTS ENGINEERING IN THE BIG DATA ERA, Asia Pacific Symposium, Springer Verlag (Germany), Wuhan, China, pp. 170-184.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. The large number of third party services creates a paradox of choice and makes service selection challenging for business analysts. The enormous volume of online reviews and feedback from past users provides a great opportunity to gauge their sentiments towards a particular product or service. The benefits of sentiment analysis have not been fully utilized in third party service selection. In this paper we present a tool that assists business analysts in making better decisions for service selection by providing qualitative as well as quantitative data regarding the sentiments of past users of the service. The tool has been applied and evaluated in an observational case study of service selection. The results show that sentiment analysis helps to increase the relevant information available to business analysts, assists in making more informed decisions, and helps to overcome some of the challenges of service selection.
Bano, M, Ferrari, A, Zowghi, D, Gervasi, V & Gnesi, S 2015, 'Automated Service Selection Using Natural Language Processing', REQUIREMENTS ENGINEERING IN THE BIG DATA ERA, Asia Pacific Symposium, Springer Verlag (Germany), Wuhan, China, pp. 3-17.
View/Download from: Publisher's site
View description>>
© Springer-Verlag Berlin Heidelberg 2015. With the huge number of services that are available online, requirements analysts face an overload of choice when they have to select the most suitable service that satisfies a set of customer requirements. Both service descriptions and requirements are often expressed in natural language (NL), and natural language processing (NLP) tools that can match requirements and service descriptions, while filtering out irrelevant options, might alleviate the problem of choice overload faced by analysts. In this paper, we propose an NLP approach based on Knowledge Graphs that automates the process of service selection by ranking the service descriptions depending on their NL similarity with the requirements. To evaluate the approach, we have performed an experiment with 28 customer requirements and 91 service descriptions, previously ranked by a human assessor. We selected the top-15 services ranked by the proposed approach, and found 53% similar results with respect to the top-15 services of the manual ranking. The same task, performed with the traditional cosine similarity ranking, produces only 13% similar results. The outcomes of our experiment are promising, and new insights have also emerged for further improvement of the proposed technique.
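The traditional cosine-similarity baseline that this abstract compares against can be sketched in a few lines. The function names and toy service descriptions below are illustrative only; a real system would use TF-IDF weighting and proper tokenization rather than the raw term counts shown here.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_services(requirement, services, top_k=15):
    """Rank service descriptions by term-overlap similarity to a requirement.

    `services` maps a service name to its NL description.
    """
    req_vec = Counter(requirement.lower().split())
    scored = [(name, cosine(req_vec, Counter(desc.lower().split())))
              for name, desc in services.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]
```

For example, ranking the requirement "send email notifications" against a mail service and a map service puts the mail service first, since only its description shares terms with the requirement.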
Blanco-Mesa, FR, Gil-Lafuente, AM & Merigó, JM 2015, 'New Aggregation Methods for Decision-Making in the Selection of Business Opportunities', SCIENTIFIC METHODS FOR THE TREATMENT OF UNCERTAINTY IN SOCIAL SCIENCES, 18th International SIGEF Congress on Scientific methods for the treatment of uncertainty in social sciences, Springer International Publishing, Girona, SPAIN, pp. 3-18.
View/Download from: Publisher's site
Braytee, A, Gill, AQ, Kennedy, PJ & Hussain, FK 2015, 'A Review and Comparison of Service E-Contract Architecture Metamodels', ICONIP (4), International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 583-595.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. An adaptive service e-contract is an electronic agreement which is required to enable adaptive or agile service sourcing and provisioning. There are a number of e-contract metamodels that can be used to create a context-specific adaptive service e-contract. The challenge is which one to choose and adopt for adaptive services. This paper presents a review and comparison of well-known e-contract metamodels using the architecture theory. The architecture theory allows the analysis of the e-contract metamodels using a three-dimension analytical lens: structure, behavior and technology. The results of this paper highlight the metamodels' structural, behavioral and technological differences and similarities. This paper will help researchers and practitioners to observe whether the existing e-contract metamodels are appropriate for adaptive services or whether there is a need to merge and integrate the concepts of these metamodels to propose a new unifying adaptive service e-contract metamodel. This paper is limited to the number of compared metamodels.
Braytee, A, Hussain, FK, Anaissi, A & Kennedy, PJ 2015, 'ABC-sampling for Balancing Imbalanced Datasets Based on Artificial Bee Colony Algorithm', 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), IEEE, Miami, Florida, pp. 594-599.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Class imbalanced data is a common problem for predictive modelling in domains such as bioinformatics. It occurs when the distribution of classes is not uniform among samples and results in a biased prediction of learning towards majority classes. In this study, we propose the ABC-Sampling algorithm based on a swarm optimization method called Artificial Bee Colony, which models the natural foraging behaviour of honeybees. Our algorithm lessens the effects of imbalanced classes by selecting the most informative majority samples using a forward search and storing them in a ranked subset. Then we construct a balanced dataset with a planned undersampling strategy to extract the most frequent majority samples from the top ranked subset and combine them with all minority samples. Our algorithm is superior to a state-of-the-art method on nine benchmark datasets with various levels of imbalance ratios.
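The bee-colony search itself is beyond a short sketch, but the final balancing step the abstract describes (keep only the top-ranked majority samples and combine them with all minority samples) can be illustrated as follows. The function name is illustrative, and the informativeness scores are taken as given; in the paper they are produced by the ABC forward search.

```python
def balance_by_rank(majority, minority, scores):
    """Undersample the majority class using precomputed informativeness scores.

    Keeps the top-ranked majority samples (as many as there are minority
    samples) and combines them with all minority samples, yielding a
    balanced dataset. `scores[i]` is the informativeness of majority[i].
    """
    # Rank majority indices from most to least informative.
    ranked = sorted(range(len(majority)), key=lambda i: scores[i], reverse=True)
    # Keep only as many majority samples as there are minority samples.
    kept = [majority[i] for i in ranked[:len(minority)]]
    return kept + list(minority)
```

With four majority samples and two minority samples, only the two highest-scoring majority samples survive, so the result contains two of each class.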
Bremner, MJ, Montanaro, A & Shepherd, D 2015, 'Average-case complexity versus approximate simulation of commuting quantum computations', 15th Asian Quantum Information Science Conference, Seoul, Korea.
Burdon, S & Dovey, K 2015, 'The Cultural Antecedents of Successful Innovation', IFKAD 2015: 10TH INTERNATIONAL FORUM ON KNOWLEDGE ASSET DYNAMICS, 10th International Forum on Knowledge Asset Dynamics (IFKAD), IKAM-INST KNOWLEDGE ASSET MANAGEMENT, Polytechn Univ Bari, Bari, ITALY, pp. 1061-1072.
View description>>
This paper outlines the leadership practices that support an organisation’s strategic intent to innovate through the creation of an innovation-conducive culture. By surveying the opinions of member organisations of the Australian Information Industry Association (AIIA), four companies (each within a particular revenue category) were selected by AIIA members as having the most innovation-friendly cultures. The paper explicates the cultural basis of effective innovation within these four companies by drawing on survey data; analyses of the presentations given at the awards ceremony by senior members of each of the winning companies; and follow-up interviews with the leaders of these companies. The results point to the vital role that leadership plays in the creation of an appropriate cultural platform for successful innovation; and indicate how the execution of the strategic intent to innovate depends on the appropriateness of the cultural assumptions held by a stakeholder community. In particular, the study shows that within companies that are recognised as having innovation-supporting cultures, innovation is assumed to be a human/social process that is enhanced by open and honest communication, strong interpersonal relationships, mission-pertinent learning, and permission to experiment and fail.
Cao, Z-H, Ko, L-W, Lai, K-L, Huang, S-B, Wang, S-J & Lin, C-T 2015, 'Classification of migraine stages based on resting-state EEG power', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Migraine is a chronic neurological disease characterized by recurrent moderate to severe headaches over periods of about a month, often in association with symptoms in the brain and autonomic nervous system. Normally, migraine symptoms can be categorized into four different stages: inter-ictal, pre-ictal, ictal, and post-ictal. Since migraine patients have difficulty knowing when they will suffer migraine attacks, early detection becomes an important issue, especially for low-frequency migraine patients who have fewer than five attacks per month. The main goal of this study is to develop a migraine-stage classification system based on migraineurs' resting-state EEG power. We collect migraineurs' O1 and O2 EEG activities from the occipital lobe during eye closure to identify pre-ictal and non-pre-ictal stages. The Self-Constructing Neural Fuzzy Inference Network (SONFIN) is adopted as the classifier for migraine-stage classification and reaches better classification accuracy (66%) than other classifiers. The proposed system helps migraineurs to obtain better treatment at the right time.
Cetindamar, D 2015, 'Organizations with purpose: Benefit corporations', 2015 Portland International Conference on Management of Engineering and Technology (PICMET), 2015 Portland International Conference on Management of Engineering and Technology (PICMET), IEEE, Portland, OR, pp. 28-32.
View/Download from: Publisher's site
Chen, H, Zhang, G, Lu, J & Zhu, D 2015, 'A Fuzzy Approach for Measuring Development of Topics in Patents Using Latent Dirichlet Allocation', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, Turkey, pp. 1-7.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Technology progress brings the very rapid growth of patent publications, which increases the difficulty of domain experts to measure the development of various topics, handle linguistic terms used in evaluation and understand massive technological content. To overcome the limitations of keyword-ranking type of text mining result in existing research, and at the same time deal with the vagueness of linguistic terms to assist thematic evaluation, this research proposes a fuzzy set-based topic development measurement (FTDM) approach to estimate and evaluate the topics hidden in a large volume of patent claims using Latent Dirichlet Allocation. In this study, latent semantic topics are first discovered from patent corpus and measured by a temporal-weight matrix to reveal the importance of all topics in different years. For each topic, we then calculate a temporal-weight coefficient based on the matrix, which is associated with a set of linguistic terms to describe its development state over time. After choosing a suitable linguistic term set, fuzzy membership functions are created for each term. The temporal-weight coefficients are then transformed to membership vectors related to the linguistic terms, which can be used to measure the development states of all topics directly and effectively. A case study using solar cell related patents is given to show the effectiveness of the proposed FTDM approach and its applicability for estimating hidden topics and measuring their corresponding development states efficiently.
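The step of mapping a temporal-weight coefficient to a membership vector over linguistic terms can be sketched with triangular membership functions. The three-term set and the function shapes below are illustrative assumptions, not the paper's actual choices.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c].

    Degenerate edges (a == b or b == c) give left/right shoulder shapes.
    """
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

# Illustrative three-term linguistic set over a coefficient in [-1, 1].
TERMS = {
    "declining": (-1.0, -1.0, 0.0),
    "stable":    (-1.0,  0.0, 1.0),
    "growing":   ( 0.0,  1.0, 1.0),
}

def membership_vector(coeff):
    """Map a temporal-weight coefficient to memberships in each linguistic term."""
    return {term: triangular(coeff, *abc) for term, abc in TERMS.items()}
```

A coefficient of 0.6, for instance, is described as mostly "growing" with some residual membership in "stable", which is the kind of direct linguistic reading of topic development the approach aims at.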
Chen, H, Zhang, Y, Zhang, G, Zhu, D & Lu, J 2015, 'Modeling Technological Topic Changes in Patent Claims', PICMET '15 PORTLAND INTERNATIONAL CENTER FOR MANAGEMENT OF ENGINEERING AND TECHNOLOGY, Portland International Center for Management of Engineering and Technology Conference, IEEE, Portland, USA, pp. 2049-2059.
View/Download from: Publisher's site
View description>>
© 2014 Portland International Conference on Management of Engineering and Technology. Patent claims usually embody the most essential terms and the core technological scope to define the protection of an invention, which makes them the ideal resource for patent content and topic change analysis. However, manually conducting content analysis on massive technical terms is very time consuming and laborious. Even with the help of traditional text mining techniques, it is still difficult to model topic changes over time, because single keywords alone are usually too general or ambiguous to represent a concept. Moreover, term frequency, which is used to define a topic, cannot separate polysemous words that actually describe different themes. To address this issue, this research proposes a topic change identification approach based on Latent Dirichlet Allocation to model and analyze topic changes with minimal human intervention. After textual data cleaning, underlying semantic topics hidden in large archives of patent claims are revealed automatically. Concepts are defined by probability distributions over words instead of term frequency, so that polysemy is allowed. A case study using patents published in the United States Patent and Trademark Office (USPTO) from 2009 to 2013 with Australia as their assignee country is presented to demonstrate the validity of the proposed topic change identification approach. The experimental result shows that the proposed approach can be used as an automatic tool to provide machine-identified topic changes for more efficient and effective R&D management assistance.
Chen, J, Liu, B, Gui, L, Sun, F & Zhou, H 2015, 'Engineering Link Utilization in Cellular Offloading Oriented VANETs', 2015 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2015 - 2015 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
Chen, Q, Hu, L, Xu, J, Liu, W & Cao, L 2015, 'Document similarity analysis via involving both explicit and implicit semantic couplings', 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), IEEE, Paris.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Document similarity analysis is increasingly critical since roughly 80% of big data is unstructured. Accordingly, semantic couplings (relatedness) have been recognized as valuable for capturing the relationships between terms (words or phrases). Existing work focuses more on explicit relatedness, with respective models built. In this paper, we propose a comprehensive semantic similarity measure, Semantic Coupling Similarity (SCS), which (1) captures intra-term pair couplings within term pairs, represented by patterns of explicit term co-occurrences in a document set; (2) extracts inter-term pair couplings between term pairs, indicated by implicit couplings through indirectly linked terms and paths between terms after term connections are converted to a graph representation; and (3) integrates intra- and inter-term pair couplings towards a comprehensive capturing of explicit and implicit couplings between terms across documents. SCS caters for both synonymy and polysemy, and outperforms baseline methods consistently on all real data sets.
Chin-Teng Lin, Yu-Kai Wang, Chieh-Ning Fang, Yi-Hsin Yu & Jung-Tai King 2015, 'Extracting patterns of single-trial EEG using an adaptive learning algorithm', 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Milan, ITALY, pp. 6642-6645.
View/Download from: Publisher's site
View description>>
The improvement of brain imaging techniques brings about an opportunity for developing and investigating brain-computer interfaces (BCI), which are a way to interact with computers and the environment. Measured brain activities usually comprise signals of interest and noise, so portable devices and noise removal both benefit real-world BCI. In this study, a portable electroencephalogram (EEG) system non-invasively acquired brain dynamics through wireless transmission while six subjects participated in the rapid serial visual presentation (RSVP) paradigm. The event-related potential (ERP) was traditionally estimated by ensemble averaging (EA) to increase the signal-to-noise ratio. An adaptive filter, the data-reusing radial basis function network (DR-RBFN), was also utilized as the estimator. The results showed that this portable EEG system stably acquired brain activities. Furthermore, the task-related potentials could be clearly explored from the limited samples of EEG data through DR-RBFN. Given the artifact-free data from the portable device, this study demonstrated the potential to move BCI from laboratory research to real-life application in the near future.
Chotipant, S, Hussain, FK & Hussain, OK 2015, 'An Automated and Fuzzy Approach for Semantically Annotating Services', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, TURKEY.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. In the recent past, semantic technologies have played a significant role in service retrieval and service querying. Annotating services semantically enables machines to understand the purpose of services and can further assist in intelligent and precise service retrieval, selection and composition. A key issue in semantically annotating services is the manual nature of service annotation. Manual service annotation requires a large amount of time and updating happens infrequently, hence annotations may get out-of-date due to service description changes. Although some researchers have studied semantic service annotation, they have only focused on web services, not business service information. Moreover, their approaches are semi-automated, and still require service providers to select appropriate service annotations. In this paper, we propose a completely automated semantic annotation approach for e-services. The aim of this paper is to semantically annotate a service to relevant service concepts in domain-specific ontologies. Services and service concepts are represented by an extended VSM model, based on fuzzy rules. Then, we link a service to a concept, based on the similarity value of the representing vectors. We found during the experimentation process that the performances of the proposed approach and the VSM-based approach were quite similar and, as a result, developed a system to retrieve services that are annotated to relevant concepts. Experiments using a high service retrieval threshold demonstrated that a retrieval approach based on extended VSM annotation performs much better than an approach based on VSM annotation.
Chotipant, S, Hussain, FK, Dong, H & Hussain, OK 2015, 'A Neural Network Based Approach for Semantic Service Annotation', NEURAL INFORMATION PROCESSING, PT II, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 292-300.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Nowadays, a large number of business owners provide advertising for their services on the web. Semantically annotating those services, which assists machines to understand their purpose, is a significant factor for improving the performance of automated service retrieval, selection, and composition. Unfortunately, most of the existing research into semantic service annotation focuses on annotating web services, not on business service information. Moreover, all are semi-automated approaches that require service providers to select proper annotations. As a result, those approaches are unsuitable for annotating very large numbers of services that have accrued or been updated over time. This paper outlines our proposal for a Neural Network (NN)-based approach to annotate business services. Its aim is to link a given service to a relevant service concept. In this case, we treat the task as a service classification problem. We apply a feed forward neural network and a radial basis function network to determine relevance scores between service information and service concepts. A service is then linked to a service concept if its relevance score reaches the threshold. To evaluate the performance of this approach, it is compared with the ECBR algorithm. The experimental results demonstrate that the NN-based approach performs significantly better than the ECBR approach.
Curiskis, SA, Osborn, TR & Kennedy, PJ 2015, 'Link prediction and topological feature importance in social networks', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, Australian Computer Society, Sydney, pp. 39-50.
View description>>
The problem of link prediction describes how to account for the development of connection structure in a graph. There are many applications of link prediction, such as predicting missing links and future links in online social networks. Much of the literature has focused on limited characteristics of the graph topology or on node attributes, rather than a broad range of measures. There is a rich spectrum of topological features associated with a graph, such as neighbourhood similarity scores, node centrality measures, community structure and path-based distance measures. In this paper we formulate a supervised learning approach to link prediction using a feature set of graph measures chosen to capture a wide range of topological structure. This approach has the advantage that it can be applied to any graph where the connection structure is known. Random forest learning models are used for their high accuracy and measures of feature importance. The feature importance scores reveal the strength of contribution of the topological predictors for link prediction in a variety of synthetically generated network datasets, as well as three real world citation networks. We investigate both undirected and directed cases. Our results show that this approach can deliver very high model precision and recall performance in certain graphs, and good performance generally. Our models also consistently outperform a simpler comparison model we developed to resemble earlier work. In addition, our analysis of variable importance for each dataset reveals meaningful information regarding deep network properties.
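Two of the neighbourhood-similarity features described above (common neighbours and Jaccard similarity) can be computed as follows for candidate edges; the full feature set in the paper also includes centrality, community and path-based measures, which are omitted here, and the function names are illustrative.

```python
def common_neighbors(adj, u, v):
    """Number of shared neighbours of u and v in an undirected graph.

    `adj` maps each node to the set of its neighbours.
    """
    return len(adj.get(u, set()) & adj.get(v, set()))

def jaccard(adj, u, v):
    """Jaccard similarity of the neighbourhoods of u and v."""
    nu, nv = adj.get(u, set()), adj.get(v, set())
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def edge_features(adj, pairs):
    """One feature row per candidate edge, ready for a supervised learner
    such as the random forest used in the paper."""
    return [[common_neighbors(adj, u, v), jaccard(adj, u, v)] for u, v in pairs]
```

The resulting rows, labelled by whether each candidate edge actually exists, form the training set for the supervised link-prediction model.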
Cuzzocrea, A, Moussa, R, Xu, G & Grasso, GM 2015, 'Cloud-Based OLAP over Big Data: Application Scenarios and Performance Analysis', 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), IEEE, Shen Zhen, pp. 921-927.
View/Download from: Publisher's site
View description>>
Following our previous research results, in this paper we provide two authoritative application scenarios that build on top of OLAP*, a middleware for parallel processing of OLAP queries that realizes effective and efficient OLAP over Big Data. We have provided two authoritative case studies, namely parallel OLAP data cube processing and virtual OLAP data cube design, for which we also propose a comprehensive performance evaluation and analysis. The derived analysis clearly confirms the benefits of our proposed framework.
Cuzzocrea, A, Xu, G & Grasso, GM 2015, 'OLAP-enabled web search of complex objects', Proceedings of the 17th International Conference on Information Integration and Web-based Applications & Services, iiWAS '15: The 17th International Conference on Information Integration and Web-based Application & Services, ACM, Brussels, Belgium.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Inspired by the actual trend of empowering traditional Web search methodologies by means of novel computational paradigms, in this paper we propose and experimentally assess WebClustCube, a novel system that allows OLAP-enabled Web search of complex objects, thus adding new value to the potentialities of current Web search paradigms. In particular, WebClustCube supports the building and the interactive manipulation of OLAP-enabled Web views over complex objects extracted from distributed databases. The data management, OLAP-like support of WebClustCube is provided by ClustCube, a state-of-the-art framework for coupling OLAP methodologies and clustering algorithms with the goal of analyzing and mining complex database objects. A case study that clearly shows the potentialities of WebClustCube in the context of next-generation Web search environments is provided. We complement our analytical contribution by means of an experimental assessment and analysis of WebClustCube according to several metric perspectives.
Dou, W, Xu, X, Meng, S & Yu, S 2015, 'An Energy-Aware QoS Enhanced Method for Service Computing across Clouds and Data Centers', 2015 Third International Conference on Advanced Cloud and Big Data, 2015 Third International Conference on Advanced Cloud and Big Data (CBD), IEEE, Yangzhou, PEOPLES R CHINA, pp. 80-87.
View/Download from: Publisher's site
Du, J, Jiang, C, Wang, J, Yu, S & Ren, Y 2015, 'Stability Analysis and Resource Allocation for Space-Based Multi-Access Systems', 2015 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2015 - 2015 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
Dyson, LE, Frawley, JK, Tyler, J & Wakefield, J 2015, 'Introducing an iPad Innovation into Accounting Tutorials', MOBILE LEARNING VOYAGE - FROM SMALL RIPPLES TO MASSIVE OPEN WATERS, World Conference on Mobile and Contextual Learning, Springer, Venice, Italy, pp. 217-228.
View/Download from: Publisher's site
View description>>
This study reports on the second phase of a trial to change tutorials in an Introductory Accounting subject into more interactive, student-centred learning experiences using an iPad combined with sharing and annotation technology. The technology allows student homework to be photographed, shown to the class instantaneously through a data projector and annotated live by the tutor using the iPad, with student input. The innovation addresses calls from the Accounting Profession for educational approaches which use technology in imaginative ways to engage students and shift from the didactic paradigm that has dominated so much of accounting education in the past. The approach has the advantage that only one iPad is required per class and is used in conjunction with free software: it is thus cost effective and scalable to the large numbers of students enrolled in the subject. The trial reported in this paper involved two classes conducted with the iPads and two traditional classes without. Evaluation comprised observations of the four classes and a survey of the students regarding their experiences in the tutorials. The results revealed that the use of the technology did not of itself transform the classes into interactive, student-centred events: the teaching style of the tutor to a large extent determined how the iPads were used and how much interaction occurred. However, students in classes with the iPads were mostly enthusiastic about their use, even if the results of the survey generally failed to show statistically significant differences between the classes with iPads and those without.
El-Hawary, M, Yang, J, Alhajj, R, Wang, J, Yu, Z, Thulasiram, RT, Lee, W, Yi, X, Santambrogio, MD, Yu, S, Tao, XD, Clashe, A, Tolosana, R, Wu, Z, Ma, J, Yang, LT, Xia, F, Li, W, Ning, H & Liu, L 2015, 'Message from the GreenCom2015 Chairs', 2015 IEEE International Conference on Data Science and Data Intensive Systems, 2015 IEEE International Conference on Data Science and Data Intensive Systems (DSDIS), IEEE, p. xxiii.
View/Download from: Publisher's site
Fang, XS, Wang, X & Sheng, QZ 2015, 'Ontology Augmentation via Attribute Extraction from Multiple Types of Sources', DATABASES THEORY AND APPLICATIONS, 26th Australasian Database Conference (ADC), Springer International Publishing, Melbourne, AUSTRALIA, pp. 16-27.
View/Download from: Publisher's site
Fu, B, Xu, G, Cao, L, Wang, Z & Wu, Z 2015, 'Coupling Multiple Views of Relations for Recommendation', Advances in Knowledge Discovery and Data Mining - LNCS, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer International Publishing, Ho Chi Minh City, Vietnam, pp. 732-743.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Learning user/item relation is a key issue in recommender system, and existing methods mostly measure the user/item relation from one particular aspect, e.g., historical ratings, etc. However, the relations between users/items could be influenced by multifaceted factors, so any single type of measure could get only a partial view of them. Thus it is more advisable to integrate measures from different aspects to estimate the underlying user/item relation. Furthermore, the estimation of underlying user/item relation should be optimal for current task. To this end, we propose a novel model to couple multiple relations measured on different aspects, and determine the optimal user/item relations via learning the optimal way of integrating these relation measures. Specifically, matrix factorization model is extended in this paper by considering the relations between latent factors of different users/items. Experiments are conducted and our method shows good performance and outperforms other baseline methods.
Gao, J, Lei, L & Yu, S 2015, 'Big Data Sensing and Service: A Tutorial', 2015 IEEE First International Conference on Big Data Computing Service and Applications, 2015 IEEE First International Conference on Big Data Computing Service and Applications (BigDataService), IEEE, San Francisco, CA, pp. 79-88.
View/Download from: Publisher's site
Gay, V, Leijdekkers, P, Gill, A & Felix Navarro, K 2015, 'Le Bon Samaritain: A Community-Based Care Model Supported by Technology', Stud Health Technol Inform, Health Informatics Conference, IOS Press, Netherlands, pp. 50-55.
View/Download from: Publisher's site
View description>>
BACKGROUND: The effective care and well-being of a community is a challenging task, especially in an emergency situation. Traditional technology-based silos between health and emergency services are challenged by the changing needs of the community, which could benefit from integrated health and safety services. Low-cost smart-home automation solutions, wearable devices and Cloud technology make it feasible for communities to interact with each other, and with health and emergency services, in a timely manner. OBJECTIVES: This paper proposes a new community-based care model, supported by technology, that aims at reducing healthcare and emergency services costs while allowing the community to become resilient in response to health and emergency situations. METHODS: We looked at models of care in different industries and identified the type of technology that can support the suggested new model of care. Two prototypes were developed to validate the adequacy of the technology. RESULTS: The result is a new community-based model of care called 'Le Bon Samaritain'. It relies on a network of people called 'Bons Samaritains' willing to help and deal with the basic care and safety aspects of their community. Their role is to make sure that people in their community receive and understand the messages from emergency and health services. The new care model is integrated with existing emergency warning, community and health services. CONCLUSION: The Le Bon Samaritain model is scalable, community-based and can help people feel safer, less isolated and more integrated in their community. It could be the key to reducing healthcare costs, increasing resilience and driving the change towards a more integrated emergency and care system.
Gil-Aluja, J, Terceño-Gómez, A, Ferrer-Comalat, JC, Merigó-Lindahl, JM & Linares-Mustarós, S 2015, 'Scientific Methods for the Treatment of Uncertainty in Social Sciences', Advances in Intelligent Systems and Computing, Springer International Publishing.
View/Download from: Publisher's site
Gill, AQ 2015, 'Adaptive enterprise architecture driven agile development', International Conference on Information Systems Development, ISD 2015, International Conference on Information Systems Development, City University of Hong Kong, Harbin, China.
View description>>
Agile development practices focus on developing and delivering working software systems in small iterations with minimal documentation. However, locally project-focused agile practices overlook the need for holistic enterprise architecture. Lack of enterprise architecture in agile, especially in large agile environments, may lead to a number of problems such as technical debt, unnecessary re-work, inconsistent communication, and locally focused, isolated architecture, design and implementation. There is a missing link between enterprise architecture and agile development. Enterprise architecture is a strategic capability that should enable and enhance the agility of agile development. However, organisations are not sure how best to approach strategic enterprise architecture capability for supporting agile development. This paper proposes and demonstrates the applicability of an integrated adaptive enterprise architecture driven agile development approach for large agile environments.
Gill, AQ 2015, 'Learning Enterprise Agile Software Engineering: Scaling Agility at the Enterprise Level', ASWEC, Australian Software Engineering Conference, IEEE Computer Society, Adelaide, AUSTRALIA, pp. 148-154.
View/Download from: Publisher's site
View description>>
Agile software engineering practices, which originated in the context of individual software project development, are getting vast attention from enterprises for handling multiple agile software engineering projects at a large program and portfolio level. Adoption of agility at a large scale is a challenging task. The success of agility adoption at a large scale is dependent on the knowledge and skills of the people involved. This suggests that agile software engineering education and training remains one of the important factors for organizations seeking to scale agile practices for large environments. However, teaching agile software engineering practices at a large scale poses many challenges to software engineering educators. These difficulties include how to establish and simulate an appropriate large-scale software engineering environment. This paper presents learnings from teaching agile software engineering practices at a large scale at the University of Technology Sydney (UTS), Australia. The learnings from this paper can be used by other educators who are aiming to teach enterprise-scale agile software engineering practices.
Gill, AQ, Chew, E, Bird, G & Kricker, D 2015, 'An Agile Service Resilience Architecture Capability: Financial Services Case Study', CBI (1), IEEE Conference on Business Informatics (CBI), IEEE Computer Society, Lisbon, Portugal, pp. 209-216.
View/Download from: Publisher's site
View description>>
Service resilience in the face of constant business change is an imperative and complex task for any service organization including those in financial services. Yet, due to its systemic complexity, service resilience as a practice in most organisations is performed in an ad-hoc and inefficient manner resulting in periodic disruptions to day-to-day business operations. Therefore, there is an urgent need for organisations to formulate an agile or adaptive capability for service resilience architecture design and implementation that meets their dynamic business needs. This paper presents one such agile or adaptive service resilience architecture (ASRA) design and implementation capability that has been developed using an adaptive enterprise service system meta-framework (a.k.a. The Gill Framework®). An action-design research method was employed in collaboration with a financial services organisation (FSO) for the establishment of a holistic ASRA design and implementation capability.
Gong, C, Tao, D, Liu, W, Maybank, SJ, Fang, M, Fu, K & Yang, J 2015, 'Saliency Propagation from Simple to Difficult', 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, pp. 2531-2539.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Saliency propagation has been widely adopted for identifying the most attractive object in an image. The propagation sequence generated by existing saliency detection methods is governed by the spatial relationships of image regions, i.e., the saliency value is transmitted between two adjacent regions. However, for the inhomogeneous difficult adjacent regions, such a sequence may incur wrong propagations. In this paper, we attempt to manipulate the propagation sequence for optimizing the propagation quality. Intuitively, we postpone the propagations to difficult regions and meanwhile advance the propagations to less ambiguous simple regions. Inspired by the theoretical results in educational psychology, a novel propagation algorithm employing the teaching-to-learn and learning-to-teach strategies is proposed to explicitly improve the propagation quality. In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and then assign the simplest regions to the learner. In the learning-to-teach step, the learner delivers its learning confidence to the teacher to assist the teacher to choose the subsequent simple regions. Due to the interactions between the teacher and learner, the uncertainty of original difficult regions is gradually reduced, yielding manifest salient objects with optimized background suppression. Extensive experimental results on benchmark saliency datasets demonstrate the superiority of the proposed algorithm over twelve representative saliency detectors.
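As a rough illustration of the simple-to-difficult idea in this abstract, the sketch below propagates saliency to the region with the most already-labelled neighbours first. The graph, seed values and simplicity measure are stand-in assumptions for illustration, not the paper's actual teacher/learner formulation.

```python
# Toy sketch of simple-to-difficult propagation: regions with more already-
# labelled neighbours are treated as "simple" and receive saliency first.

def propagate_simple_to_difficult(adj, seeds):
    """adj: {region: set(neighbours)}; seeds: {region: saliency in [0, 1]}."""
    saliency = dict(seeds)
    pending = set(adj) - set(seeds)
    while pending:
        # "Teacher" step: pick the simplest pending region, i.e. the one
        # with the most labelled neighbours (least ambiguity).
        region = max(pending,
                     key=lambda r: sum(n in saliency for n in adj[r]))
        labelled = [saliency[n] for n in adj[region] if n in saliency]
        # "Learner" step: average saliency over labelled neighbours
        # (0.0 if the region is disconnected from all labelled ones).
        saliency[region] = sum(labelled) / len(labelled) if labelled else 0.0
        pending.remove(region)
    return saliency

# Four regions in a chain a - b - c - d, seeded at both ends.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
result = propagate_simple_to_difficult(adj, {"a": 1.0, "d": 0.0})
```

Deferring ambiguous regions until more of their neighbours are labelled is the intuition the paper formalises with its teaching-to-learn and learning-to-teach interactions.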
Grochow, JA & Qiao, Y 2015, 'Polynomial-Time Isomorphism Test of Groups that are Tame Extensions (Extended Abstract)', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 578-589.
View/Download from: Publisher's site
View description>>
We give new polynomial-time algorithms for testing isomorphism of a class of groups given by multiplication tables (GpI). Two results (Cannon & Holt, J. Symb. Comput. 2003; Babai, Codenotti & Qiao, ICALP 2012) imply that GpI reduces to the following: Given groups G,H with characteristic subgroups of the same type and isomorphic to Zdp, and given the coset of isomorphisms Iso(G/Zdp,H/Zdp), compute Iso(G,H) in time poly(|G|). Babai & Qiao (STACS 2012) solved this problem when a Sylow p-subgroup of G/Zdp is trivial. In this paper, we solve the preceding problem in the so-called “tame” case, i.e., when a Sylow p-subgroup of G/Zdp is cyclic, dihedral, semi-dihedral, or generalized quaternion. These cases correspond exactly to the group algebra Fp[G/Zdp] being of tame type, as in the celebrated tame-wild dichotomy in representation theory. We then solve new cases of GpI in polynomial time. Our result relies crucially on the divide-and-conquer strategy proposed earlier by the authors (CCC 2014), which splits GpI into two problems, one on group actions (representations), and one on group cohomology. Based on this strategy, we combine permutation group and representation algorithms with new mathematical results, including bounds on the number of indecomposable representations of groups in the tame case, and on the size of their cohomology groups. Finally, we note that when a group extension is not tame, the preceding bounds do not hold. This suggests a precise sense in which the tame-wild dichotomy from representation theory may also be a key barrier to cross to put GpI into P.
Guo, M, Yang, K, Musial-Gabrys, K, Min, G, Yin, H, Nguyen, NP, Jiang, Y, Kourtellis, N, Cheng, X, Leng, S, Wang, H & Dokoohaki, N 2015, 'Message from the MSNCom 2015 Workshop Chairs', 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), IEEE, p. lvi.
View/Download from: Publisher's site
Wang, H, Zhang, P, Chen, L, Liu, H & Zhang, C 2015, 'Online diffusion source detection in social networks', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. In this paper we study a new problem of online diffusion source detection in social networks. Existing work on diffusion source detection focuses on offline learning, which assumes that data collected from network detectors are static and that a snapshot of the network is available before learning. However, an offline learning model does not meet the needs of early warning, real-time awareness, and real-time response to malicious information spreading in social networks. In this paper, we combine online learning and regression-based detection methods for real-time diffusion source detection. Specifically, we propose a new ℓ1 non-convex regression model as the learning function, and an Online Stochastic Sub-gradient algorithm (OSS for short). The proposed model is empirically evaluated on both synthetic and real-world networks. Experimental results demonstrate the effectiveness of the proposed model.
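To give a concrete sense of an online sub-gradient update of the kind this abstract mentions, the sketch below applies one stochastic sub-gradient step per streamed example to an ℓ1-regularised linear model. The squared loss, step size and lasso-style regulariser are standard textbook choices, not the paper's exact non-convex formulation.

```python
# Online sub-gradient descent for 0.5*(w.x - y)^2 + lam*||w||_1,
# processing one example per update, in the spirit of an OSS-style method.

def sign(x):
    return (x > 0) - (x < 0)

def oss_step(w, x, y, lam=0.01, eta=0.1):
    """One online update on example (x, y)."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = pred - y
    # Sub-gradient of the loss plus the l1 term with respect to each weight.
    return [wi - eta * (err * xi + lam * sign(wi))
            for wi, xi in zip(w, x)]

# Stream noiseless examples of y = 2*x1; the first weight should move
# towards 2 as updates arrive one at a time.
w = [0.0, 0.0]
for x, y in [([1.0, 0.0], 2.0)] * 50:
    w = oss_step(w, x, y)
```

The point of the online setting, as the abstract argues, is that the model is updated as observations stream in rather than after a full network snapshot is collected.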
Han, J, Hu, Y, Han, J, Zhang, G & Lu, J 2015, 'A Compromise-based Particle Swarm Optimization Algorithm for Solving Bi-level Programming Problems with Fuzzy Parameters', 2015 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS AND KNOWLEDGE ENGINEERING (ISKE), International Conference on Intelligent Systems and Knowledge Engineering, IEEE, Taipei, pp. 214-221.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Bi-level programming has arisen to handle decentralized decision-making problems that feature interactive decision entities distributed throughout a bi-level hierarchy. Fuzzy parameters often appear in such problems in applications, and this is called a fuzzy bi-level programming problem. Since the existing approaches lack universality in solving such problems, this study aims to develop a particle swarm optimization (PSO) algorithm to solve fuzzy bi-level programming problems in both linear and nonlinear versions. In this paper, we first present a general fuzzy bi-level programming problem and discuss related theoretical properties based on a commonly used fuzzy number ranking method. A PSO algorithm is then developed to solve the fuzzy bi-level programming problem based on different compromise selections made by decision entities regarding the feasible degree of constraint conditions under fuzziness. Lastly, an illustrative numerical example and two benchmark examples are adopted to demonstrate the effectiveness of the compromise-based PSO algorithm.
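For readers unfamiliar with the PSO machinery this abstract builds on, here is a bare-bones swarm minimising a one-dimensional test function. The inertia and acceleration constants are textbook defaults; the paper's bi-level structure, fuzzy parameters and compromise selection are not modelled here.

```python
# Minimal particle swarm optimisation on a 1-D function over [lo, hi].
import random

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)                      # fixed seed for reproducibility
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # each particle's best position
    gbest = min(xs, key=f)              # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity: inertia + pull towards personal and global bests.
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Minimise (x - 3)^2 over [-10, 10]; the swarm should settle near x = 3.
best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the bi-level setting the fitness evaluation is itself an optimisation (the follower's problem), which is where the paper's contribution lies.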
Han, J, Zhang, G, Hu, Y & Lu, J 2015, 'Solving Tri-level Programming Problems Using a Particle Swarm Optimization Algorithm', PROCEEDINGS OF THE 2015 10TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, IEEE Conference on Industrial Electronics and Applications, IEEE, Auckland, New Zealand, pp. 575-580.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Tri-level programming, a special case of multilevel programming, arises to deal with decentralized decision-making problems that feature interacting decision entities distributed throughout three hierarchical levels. As tri-level programming problems are strongly NP-hard and the existing solution approaches lack universality in solving such problems, the purpose of this study is to propose an intelligence-based heuristic algorithm to solve tri-level programming problems involving linear and nonlinear versions. In this paper, we first propose a general tri-level programming problem and discuss related theoretical properties. A particle swarm optimization (PSO) algorithm is then developed to solve the tri-level programming problem. Lastly, a numerical example is adopted to illustrate the effectiveness of the proposed PSO algorithm.
Hazber, MAG, Li, R, Gu, X, Xu, G & Li, Y 2015, 'Semantic SPARQL Query in a Relational Database Based on Ontology Construction', 2015 11th International Conference on Semantics, Knowledge and Grids (SKG), 2015 11th International Conference on Semantics, Knowledge and Grids (SKG), IEEE, China, pp. 25-32.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Constructing an ontology from a relational database (RDB) and querying it through ontologies is a fundamental problem for the development of the semantic web. This paper proposes an approach to extract an ontology directly from an RDB in the form of OWL/RDF triples, to ensure its availability on the semantic web. We automatically construct an OWL ontology from the RDB schema using direct mapping rules. The mapping rules provide the basic rules for generating RDF triples from RDB data, even for columns containing null values, and enable semantic query engines to answer more relevant queries. We then rewrite SQL queries as SPARQL by translating SQL relational algebra into equivalent SPARQL expressions. The proposed method is demonstrated with examples and its effectiveness is evaluated by experimental results.
Hazber, MAG, Li, R, Zhang, Y & Xu, G 2015, 'An Approach for Mapping Relational Database into Ontology', 2015 12th Web Information System and Application Conference (WISA), 2015 12th Web Information System and Application Conference (WISA), IEEE, Jinan, China, pp. 120-125.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Sharing and reusing the big data in relational databases in a semantic way has become a big challenge. In this paper, we propose a new approach to enable semantic web applications to access relational databases (RDBs) and their contents by semantic methods. Domain ontologies can be used to formulate RDB schema and data in order to simplify the mapping of the underlying data sources. Our method consists of two main phases: building an ontology from an RDB schema, and generating ontology instances from RDB data automatically. In the first phase, we studied different cases of RDB schema to be mapped into an ontology represented in RDF(S)-OWL, while in the second phase, the mapping rules are used to transform RDB data into ontological instances represented as RDF triples. Our approach is demonstrated with examples and validated by an ontology validator.
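The flavour of a direct row-to-triples mapping rule like those described in this abstract can be sketched as follows. The base URI, table layout and naming scheme are invented for illustration and do not follow the paper's (or R2RML's) exact conventions.

```python
# Illustrative direct mapping: each row becomes a subject URI, each
# non-null column value becomes one RDF triple.

BASE = "http://example.org/db/"   # hypothetical namespace

def row_to_triples(table, pk, row):
    """row: {column: value}; returns (subject, predicate, object) triples."""
    subject = f"{BASE}{table}/{row[pk]}"
    triples = [(subject, "rdf:type", f"{BASE}{table}")]
    for col, val in row.items():
        if val is None:          # null columns yield no triple
            continue
        triples.append((subject, f"{BASE}{table}#{col}", val))
    return triples

triples = row_to_triples("Person", "id",
                         {"id": 7, "name": "Ada", "email": None})
```

Skipping null columns, rather than emitting empty literals, mirrors the null-handling concern the companion SKG paper raises for its mapping rules.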
Huo, H, Chen, S, Song, L, Ban, L, Wu, Z, Liu, L & Gao, L 2015, 'Anomalous Region Detection on the Mobility Data', 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), IEEE.
View/Download from: Publisher's site
Huo, H, Chen, S-Y, Xu, B & Liu, L 2015, 'A Trajectory Prediction Method for Location-Based Services', Springer International Publishing, pp. 127-138.
View/Download from: Publisher's site
Hussain, W, Hussain, FK & Hussain, OK 2015, 'Comparative Analysis of Consumer Profile-based Methods to Predict SLA Violation', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. A Service Level Agreement (SLA) is a contract between a service provider and a consumer which specifies in detail the level of service expected from the service provider, obligations, commitment and objectives. In the cloud computing environment, both the cloud provider and the cloud consumer want to know of a likely service violation before the actual violation occurs and to adjust the scaling of the cloud resources appropriately. A consumer's previous resource usage profile is a key element in determining the possibility of service violation in the cloud computing environment, which has not been an area of research focus so far. In this paper, we analyze and compare QoS prediction by considering the consumer's previous resource usage profile in various conditions. From comparative analysis, we observe that by combining a consumer's previous resource usage profile history along with the previous resource usage profile history of its nearest neighbors, we obtain an optimal result.
Hussain, W, Hussain, FK & Hussain, OK 2015, 'Towards Soft Computing Approaches for Formulating Viable Service Level Agreements in Cloud', NEURAL INFORMATION PROCESSING, ICONIP 2015, PT IV, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 639-646.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. A service level agreement (SLA) is a legal document that binds consumers and providers together for the delivery of specific services for a certain period of time. Providers need a viable SLA to maintain successful relationships with consumers. A viable SLA, based on the previous profile of a consumer, will help a service provider determine whether to accept or reject a consumer’s request and the amount of resources to offer them. In this paper we propose a soft computing-based approach to form a personalized and viable SLA. This process is carried out in the pre-interaction time phase. We build a Fuzzy Inference System (FIS) and consider a consumer’s reliability value and contract duration as the input factors to determine the amount of resources to offer to the consumer. In addition to the Fuzzy Inference System, we tested various Neural Network-based methods for viable SLA formation and compared their prediction accuracy with the output of the FIS.
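A tiny fuzzy inference sketch in the spirit of this abstract: consumer reliability and contract duration (both scaled to [0, 1]) drive the fraction of requested resources to offer. The membership functions, the two rules and the output constants are invented for illustration; the paper's actual FIS is richer.

```python
# Two-rule, Sugeno-style fuzzy inference with triangular memberships.

def mu_low(x):    # "low" membership on [0, 1]
    return max(0.0, 1.0 - 2.0 * x)

def mu_high(x):   # "high" membership on [0, 1]
    return max(0.0, 2.0 * x - 1.0)

def offered_fraction(reliability, duration):
    # Rule 1: IF reliability high AND duration high THEN offer a large share.
    fire_large = min(mu_high(reliability), mu_high(duration))
    # Rule 2: IF reliability low THEN offer a small share.
    fire_small = mu_low(reliability)
    # Weighted-average defuzzification over the two rules, with
    # "large" = 0.9 and "small" = 0.2 of the requested resources.
    total = fire_large + fire_small
    if total == 0.0:
        return 0.5               # no rule fires: fall back to a middle offer
    return (0.9 * fire_large + 0.2 * fire_small) / total

high_offer = offered_fraction(0.95, 0.9)   # reliable, long-term consumer
low_offer = offered_fraction(0.1, 0.9)     # unreliable consumer
```

The fuzzy rules encode the graded trade-off the paper describes: reliable, long-term consumers are offered more of what they request than unknown or unreliable ones.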
Hussain, W, Hussain, FK & Hussain, OK 2015, 'Transmitting Scalable Video Streaming over Wireless Ad-hoc Networks', 2015 IEEE 29th International Conference on Advanced Information Networking and Applications (IEEE AINA 2015), International Conference on Advanced Information Networking and Applications (was ICOIN), IEEE, Gwangju, SOUTH KOREA, pp. 201-206.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Due to the rapid increase in the use of social networking websites and applications, the need to stream video over wireless networks has increased. There are a number of considerations when transmitting streaming video between the nodes connected through wireless networks, such as throughput, the size of the multimedia file, response time, delay, scalability and loss of data. The scalability of ad-hoc networks needs to be analyzed by considering various aspects, such as self-organization, security, routing flexibility, availability of bandwidth, data distribution, Quality of Service, throughput, response time and efficiency. In this paper, we discuss the existing approaches to multimedia routing and transmission over wireless ad-hoc networks by considering scalability. The study draws several conclusions and makes recommendations for future directions.
Hussain, W, Hussain, FK, Hussain, O & Chang, E 2015, 'Profile-based viable Service Level Agreement (SLA) Violation Prediction Model in the Cloud', 2015 10TH INTERNATIONAL CONFERENCE ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING (3PGCIC), International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, IEEE, Krakow, Poland, pp. 268-272.
View/Download from: Publisher's site
View description>>
The World Wide Web (WWW) provides a platform that enables service providers to transcend barriers and engage with current or potential customers globally, resulting in their economic growth and expanded business horizons, thereby creating the internet economy. It enables customers to receive desired services in a cost-effective way, but given the open and ubiquitous nature of the web, particularly in cloud computing, both service providers and service consumers need efficient approaches that guarantee their business requirements will be met. Additionally, all stakeholders need an efficient system that predicts any violation before it occurs and recommends how to mitigate those violations to avoid any penalties. In this paper we propose an intelligent, profile-based SLA violation prediction model from the provider's perspective. The model begins monitoring an SLA in the pre-interaction time phase, before finalizing the SLA. It intelligently predicts the consumer's likely resource usage by considering the consumer's reputation from its previous transaction history, and determines the level of required resources based on their reliability. The framework helps service providers make decisions about whether to form SLAs, maximize profit, and avoid service violations in the post-interaction time phase.
Ikram, MA, Alshehri, MD & Hussain, FK 2015, 'Architecture of an IoT-based System for Football Supervision (IoT Football)', 2015 IEEE 2ND WORLD FORUM ON INTERNET OF THINGS (WF-IOT), IEEE World Forum on Internet of Things (WF-IoT), IEEE, Milan, Italy, pp. 69-74.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Football, also called soccer, is one of the most popular sports in the world, if one considers the number of fans as well as the number of players. However, footballers face serious injuries during matches and even during training. Concussion, hypoglycemia, swallowing the tongue and shortness of breath are examples of the health problems footballers face, and in extreme cases these may lead to death. In addition, many sport clubs and sport academies spend millions of dollars contracting or developing new professional footballers. The Internet of Things (IoT) is a new paradigm that combines various technologies to enhance our lives. Today's technology can protect footballers by diagnosing health problems that may occur during a match or training session; if detected early, these problems may be prevented from having adverse effects on long-term health. This paper proposes an IoT-based architecture for the sport of football, called IoT Football. Our proposal aims to embed sensing devices (e.g. sensors and RFID), telecommunication technologies (e.g. ZigBee) and cloud computing in the sport of football in order to monitor the health of footballers and reduce the occurrence of adverse health conditions. The aim is to integrate the IoT environment, in particular the IoT application, into the field of sport in the form of a new application.
Jalali, R, El-khatib, K & McGregor, C 2015, 'Smart city architecture for community level services through the internet of things', 2015 18th International Conference on Intelligence in Next Generation Networks, 2015 18th International Conference on Intelligence in Next Generation Networks (ICIN), IEEE, Paris, FRANCE, pp. 108-113.
View/Download from: Publisher's site
Jia, K, Li, H, Liu, D & Yu, S 2015, 'Enabling Efficient and Secure Outsourcing of Large Matrix Multiplications', 2015 IEEE Global Communications Conference (GLOBECOM), GLOBECOM 2015 - 2015 IEEE Global Communications Conference, IEEE.
View/Download from: Publisher's site
Jiang, J, Zhou, A, Yazdi, KM, Wen, S, Yu, S & Xiang, Y 2015, 'Identifying Diffusion Sources in Large Networks: A Community Structure Based Approach', 2015 IEEE Trustcom/BigDataSE/ISPA, 2015 IEEE Trustcom/BigDataSE/ISPA, IEEE, pp. 302-309.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. The global diffusion of epidemics, rumors and computer viruses causes great damage to our society. It is critical to identify the diffusion sources and promptly quarantine them. However, most methods proposed so far are unsuitable for large networks because of their computational cost and the complex spatiotemporal diffusion processes. In this paper, we develop a community structure based approach to efficiently identify diffusion sources in large networks. We first detect the community structure of a network and assign sensors on community bridge nodes to record diffusion dynamics. From the infection times of bridge sensors, we can determine the very first infected community, from which the diffusion started and spread out to other communities. This, therefore, overcomes the scalability issue in source identification problems by narrowing the set of suspects down to the first infected community. Then, to accurately locate the diffusion source among the suspects, we utilize an intrinsic feature of diffusion sources: the relative infection time of any node is linear in its effective distance from the diffusion source. Thus, for each suspect, we compute the correlation coefficient to measure the degree of linear dependence between sensors' relative infection times and their effective distances from the suspect, and consider the one with the greatest correlation coefficient as the source. We evaluate our approach in two large networks containing more than 300,000 nodes, which were collected from Twitter. The experiment results show that our method can identify diffusion sources with a very high degree of accuracy. Especially when the average community size shrinks, the accuracy of our approach increases dramatically.
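The selection rule described in this abstract can be sketched in a few lines: for each suspect, correlate the sensors' infection times with their distances from the suspect, and pick the suspect with the highest correlation. Plain Pearson correlation and hop distances stand in here for the paper's effective distance; the toy path graph is an illustrative assumption.

```python
# Correlation-based source selection on a toy path graph 0-1-2-3-4.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def best_source(suspects, sensor_times, dist):
    """dist[s][v]: distance from suspect s to sensor v."""
    sensors = sorted(sensor_times)
    times = [sensor_times[v] for v in sensors]
    # The suspect whose distances best correlate with infection times wins.
    return max(suspects,
               key=lambda s: pearson([dist[s][v] for v in sensors], times))

# Sensors at nodes 0, 2 and 4; infection times are exactly proportional
# to hop distance from node 1, the true source.
dist = {s: {v: abs(s - v) for v in (0, 2, 4)} for s in (0, 1, 2, 3, 4)}
source = best_source([0, 1, 2, 3, 4], {0: 1, 2: 1, 4: 3}, dist)
```

For the true source the time/distance relationship is perfectly linear (correlation 1.0), which is exactly the intrinsic feature the paper exploits.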
Jiang, X, Liu, W, Cao, L & Long, G 2015, 'Coupled collaborative filtering for context-aware recommendation', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI, Austin, Texas, USA, pp. 4172-4173.
View description>>
Context-aware features have been widely recognized as important factors in recommender systems. However, as a major technique in recommender systems, traditional Collaborative Filtering (CF) does not provide a straightforward way of integrating context-aware information into personal recommendation. We propose a Coupled Collaborative Filtering (CCF) model to measure the contextual information and use it to improve recommendations. In the proposed approach, coupled similarity is computed from inter-item, intra-context and inter-context interactions among item, user and context-aware factors. Experiments based on different types of CF models demonstrate the effectiveness of our design.
Kajdanowicz, T, Michalski, R, Musial, K & Kazienko, P 2015, 'Learning in unlabeled networks – An active learning and inference approach', AI Communications, IOS Press, pp. 123-148.
View/Download from: Publisher's site
Khan, M, Liu, M, Dou, W & Yu, S 2015, 'vGraph: Graph Virtualization towards Big Data', 2015 Third International Conference on Advanced Cloud and Big Data, 2015 Third International Conference on Advanced Cloud and Big Data (CBD), IEEE, Yangzhou, PEOPLES R CHINA, pp. 153-158.
View/Download from: Publisher's site
Kotamarthi, K, Wang, X, Grossmann, G, Sheng, QZ & Indrakanti, S 2015, 'A Framework Towards Model Driven Business Process Compliance and Monitoring', 2015 IEEE 19th International Enterprise Distributed Object Computing Workshop, 2015 IEEE 19th International Enterprise Distributed Object Computing Workshop (EDOCW), IEEE, Adelaide, AUSTRALIA, pp. 24-32.
View/Download from: Publisher's site
Kulkarni, R, Qiao, Y & Sun, X 2015, 'On the Power of Parity Queries in Boolean Decision Trees', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Conference on Theory and Applications of Models of Computation, Springer International Publishing, Singapore, pp. 99-109.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. In an influential paper, Kushilevitz and Mansour (1993) introduced a natural extension of Boolean decision trees called the parity decision tree (PDT), where one may query the sum modulo 2, i.e., the parity, of an arbitrary subset of variables. Although originally introduced in the context of learning, parity decision trees have recently regained interest in the context of communication complexity (cf. Shi and Zhang 2010) and property testing (cf. Bhrushundi, Chakraborty, and Kulkarni 2013). In this paper, we investigate the power of parity queries. In particular, we show that parity queries can be replaced by ordinary ones at the cost of the total influence, a.k.a. average sensitivity, per query. Our simulation is tight, as demonstrated by the parity function. At the heart of our result lies a qualitative extension of the result of O’Donnell, Saks, Schramm, and Servedio (2005) titled: Every decision tree has an influential variable. Recently Jain and Zhang (2011) obtained an alternate proof of the same. Our main contribution in this paper is a simple but surprising observation that the query elimination method of Jain and Zhang can indeed be adapted to eliminate seemingly much more powerful parity queries. Moreover, we extend our result to linear queries for Boolean-valued functions over arbitrary finite fields.
La Paz, A, Merigó, JM, Ramaprasad, A & Syn, T 2015, 'Impact aspirations of MIS journals: An ontological analysis', Pacific Asia Conference on Information Systems, PACIS 2015 - Proceedings.
View description>>
Journal impact is an ill-structured, complex construct. Present bibliometric and survey measures do not capture it fully. The paper deconstructs the combinatorial complexity of the construct using an ontology which encapsulates 2500 potential components of the construct. The ontology is a parsimonious, systemic, and systematic representation of journal impact. The paper presents an ontological analysis of the impact aspirations of 31 top MIS journals (from one of the published surveys) based on their editorial statements. These statements were mapped to the ontology by the authors using consensus coding. The ontological and heat maps derived from the editorial statements reveal significant 'bright', 'light', and 'blank/blind' spots - aspects with heavy, light, and no emphasis. The differences in luminosity pose a number of questions about the impact these journals seek in the emerging turbulent, competitive research publication market. A comparison of these maps with the journals' bibliometric and survey impact measures highlights the differences between the impact measures, their strengths and weaknesses. The ontology and ontological mapping can be used by the journal editors to realign their impact aspirations and strategies in the emerging marketplace.
Laengle, S, Loyola, G & Merigó, JM 2015, 'OWA Operators in Portfolio Selection', SCIENTIFIC METHODS FOR THE TREATMENT OF UNCERTAINTY IN SOCIAL SCIENCES, 18th International SIGEF Congress on Scientific methods for the treatment of uncertainty in social sciences, Springer International Publishing, Girona, SPAIN, pp. 53-64.
View/Download from: Publisher's site
Li, L, Su, C, Sun, Y, Xiong, S & Xu, G 2015, 'Hashtag Biased Ranking for Keyword Extraction from Microblog Posts', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Knowledge Science, Engineering and Management, Springer International Publishing, Chongqing, pp. 348-359.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Nowadays, a huge amount of text is being generated for social networking purposes on the Web. Keyword extraction from such text benefits many applications such as advertising, search, and content filtering. Recent studies show that graph-based ranking is more effective than traditional term- or document-frequency-based approaches. However, most work in the literature constructs a word-to-word graph within a document or a collection of documents before applying a kind of random walk. Such a graph does not consider the influence of document importance on keyword extraction. Moreover, social text like a microblog post usually has special social features such as hashtags, which can help us understand its topic. In this paper, we propose hashtag biased ranking for keyword extraction from a collection of microblog posts. We first build a word-post weighted graph by taking into account the posts themselves. Then, a hashtag biased random walk is applied on this graph, which guides our approach to extract keywords according to the hashtag topic. Last, the final ranking of a word is determined by the stationary probability after a number of iterations. We evaluate our proposed method on real Chinese microblog posts. Experiments show that our method is more effective than the traditional word-to-word graph based ranking in terms of precision.
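The hashtag-biased random walk described in this abstract is a form of topic-biased PageRank. The following is a minimal sketch of that general technique only; the toy word-post graph, node names, edge weights and damping factor are illustrative assumptions, not the authors' data or code.

```python
# Hypothetical sketch of a biased random walk (personalized PageRank).
# The restart (teleport) distribution concentrates probability mass on
# nodes related to the hashtag topic, so topic-relevant words rank higher.
def biased_random_walk(graph, bias, damping=0.85, iters=100):
    """graph: dict node -> dict(neighbor -> edge weight)
    bias:  dict node -> restart probability (should sum to 1)
    Iterates r = (1-d)*bias + d * (walk step) toward the stationary ranking."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) * bias.get(n, 0.0) for n in nodes}
        for n in nodes:
            total = sum(graph[n].values())
            if total == 0:
                continue  # dangling node: its mass is dropped in this sketch
            for m, w in graph[n].items():
                new[m] += damping * rank[n] * w / total
        rank = new
    return rank

# Toy word-post graph: words connect to the posts they occur in and back,
# weighted by occurrence count (all values illustrative).
graph = {
    "w_sport": {"post1": 2.0, "post2": 1.0},
    "w_misc":  {"post2": 1.0},
    "post1":   {"w_sport": 2.0},
    "post2":   {"w_sport": 1.0, "w_misc": 1.0},
}
bias = {"w_sport": 1.0}  # all restart mass on the hashtag-related word
ranks = biased_random_walk(graph, bias)
```

With the restart mass biased toward the hashtag-related word, its stationary probability dominates, which is the intuition behind extracting keywords "according to the hashtag topic".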
Li, X, Xu, G, Chen, E & Li, L 2015, 'Learning User Preferences across Multiple Aspects for Merchant Recommendation', 2015 IEEE International Conference on Data Mining, 2015 IEEE International Conference on Data Mining (ICDM), IEEE, Atlantic City, NJ, pp. 865-870.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. With the pervasive use of mobile devices, Location Based Social Networks (LBSNs) have emerged in recent years. These LBSNs, allowing their users to share personal experiences and opinions on visited merchants, contain very rich and useful information which enables a new breed of location-based services, namely, Merchant Recommendation. Existing techniques for merchant recommendation simply treat each merchant as an item and apply conventional recommendation algorithms, e.g., Collaborative Filtering, to recommend merchants to a target user. However, they do not differentiate the user's real preferences on various aspects, and thus can only achieve limited success. In this paper, we aim to address this problem by utilizing and analyzing user reviews to discover user preferences in different aspects. Following the intuition that a user rating represents a personalized rational choice, we propose a novel utility-based approach by combining collaborative and individual views to estimate user preference (i.e., rating). An optimization algorithm based on a Gaussian model is developed to train our merchant recommendation approach. Lastly we evaluate the proposed approach in terms of effectiveness, efficiency and cold-start using two real-world datasets. The experimental results show that our approach outperforms the state-of-the-art methods. Meanwhile, a real mobile application is implemented to demonstrate the practicability of our method.
Li, X, Xu, G, Chen, E & Li, L 2015, 'MARS: A multi-aspect Recommender system for Point-of-Interest', 2015 IEEE 31st International Conference on Data Engineering, 2015 IEEE 31st International Conference on Data Engineering (ICDE), IEEE, Seoul, Korea, pp. 1436-1439.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. With the pervasive use of GPS-enabled smart phones, location-based services, e.g., Location Based Social Networking (LBSN), have emerged. Point-of-Interest (POI) Recommendation, as a typical component in LBSN, provides additional value to both customers and merchants in terms of user experience and business turnover. Existing POI recommendation systems mainly adopt Collaborative Filtering (CF), which only exploits user given ratings (i.e., user overall evaluation) about a merchant while disregarding the differences in user preference across multiple aspects, which exist commonly in real scenarios. Meanwhile, besides ratings, most LBSNs also provide a review function to allow customers to give their opinions when dealing with merchants, which is often overlooked in these recommender systems. In this demo, we present MARS, a novel POI recommender system based on multi-aspect user preference learning from reviews by using utility theory. We first introduce the organization of our system, and then show how the user preferences across multiple aspects are integrated into our system alongside several case studies of mining user preference and POI recommendations.
Linares-Mustarós, S, Merigó, JM & Ferrer-Comalat, JC 2015, 'Processing Extreme Values in Sales Forecasting', Cybernetics and Systems, Informa UK Limited, pp. 207-229.
View/Download from: Publisher's site
Liu, B, Chen, L, Liu, C, Zhang, C & Qiu, W 2015, 'RCP Mining: Towards the Summarization of Spatial Co-location Patterns', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Symposium on Advances in Spatial and Temporal Databases, Springer International Publishing, Hong Kong, China, pp. 451-469.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Co-location pattern mining is an important task in spatial data mining. However, the traditional framework of co-location pattern mining produces an exponential number of patterns because of the downward closure property, which makes it hard for users to understand or apply. To address this issue, in this paper, we study the problem of mining representative co-location patterns (RCP). We first define a covering relationship between two co-location patterns by finding a new measure to appropriately quantify the distance between patterns in terms of their prevalence, based on which the problem of RCP mining is formally formulated. To solve the problem of RCP mining, we first propose an algorithm called RCPFast, adopting the post-mining framework that is commonly used by existing distance-based pattern summarization techniques. To address the peculiar challenge in spatial data mining, we further propose another algorithm, RCPMS, which employs the mine-and-summarize framework that pushes pattern summarization into the co-location mining process. Optimization strategies are also designed to further improve the performance of RCPMS. Our experimental results on both synthetic and real-world data sets demonstrate that RCP mining effectively summarizes spatial co-location patterns, and RCPMS is more efficient than RCPFast, especially on dense data sets.
Liu, B, Zhou, W, Gao, L, Wen, S & Luan, TH 2015, 'Mobility Increases the Risk of Malware Propagations in Wireless Networks', 2015 IEEE Trustcom/BigDataSE/ISPA, 2015 IEEE Trustcom/BigDataSE/ISPA, IEEE, Aalto Univ, Helsinki, FINLAND, pp. 90-95.
View/Download from: Publisher's site
Liu, D, Richards, D, Froissard, C & Atif, A 2015, 'Validating the effectiveness of the moodle engagement analytics plugin to predict student academic performance', 2015 Americas Conference on Information Systems, AMCIS 2015.
View description>>
Given the focus on boosting retention rates and the potential benefits of pro-active and early identification of students who may require support, higher education institutions are looking at the data already captured in university systems to determine if they can be used to identify such students. This paper uses historical student data to validate an existing learning analytics tool, the Moodle Engagement Analytics Plugin (MEAP). We present data on the utility of the MEAP to identify students 'at risk' based on proxy measurements of online activity for three courses/units in three different disciplines. Our results suggest that there are real differences in the predictive power of the MEAP between different courses due to differences in the extent and structure of the learning activities captured in the learning management system.
Liu, W, Deng, Z-H, Gong, X, Jiang, F & Tsang, I 2015, 'Effectively Predicting Whether and When a Topic Will Become Prevalent in a Social Network', Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), Austin, Texas, pp. 210-216.
View/Download from: Publisher's site
View description>>
Effective forecasting of future prevalent topics plays an important role in social network business development. It involves two challenging aspects: predicting whether a topic will become prevalent, and when. This cannot be directly handled by the existing algorithms in topic modeling, item recommendation and action forecasting. The classic forecasting framework based on time series models may be able to predict a hot topic when a series of periodical changes to user-addressed frequency occurs in a systematic way. However, the frequency of topics discussed by users often changes irregularly in social networks. In this paper, a generic probabilistic framework is proposed for hot topic prediction, and machine learning methods are explored to predict hot topic patterns. Two effective models, PreWHether and PreWHen, are introduced to predict whether and when a topic will become prevalent. In the PreWHether model, we simulate the constructed features of previously observed frequency changes for better prediction. In the PreWHen model, distributions of time intervals associated with the emergence to prevalence of a topic are modeled. Extensive experiments on real datasets demonstrate that our method outperforms the baselines and generates more effective predictions.
Li-Wei Ko, Wei-Kai Lai, Wei-Gang Liang, Chun-Hsiang Chuang, Shao-Wei Lu, Yi-Chen Lu, Tien-Yang Hsiung, Hsu-Hsuan Wu & Chin-Teng Lin 2015, 'Single channel wireless EEG device for real-time fatigue level detection', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Driver fatigue is one of the important factors in traffic accidents. In recent years, much research has shown that EEG signals can effectively detect a driver's drowsiness level. However, a real-time monitoring system is required to apply these fatigue level detection techniques in practical applications, especially in real-road driving. Such a system calls for fewer channels, portable and wireless hardware, and real-time monitoring and processing techniques. In this study, we develop a single-channel wireless EEG device which can detect a driver's fatigue level in real time on a mobile device such as a smartphone or tablet. The developed device is used to obtain a better and more precise understanding of the brain activities underlying mental fatigue during driving, which is of great benefit for the development of driving fatigue detection systems. The system consists of a Bluetooth-enabled one-channel EEG, a regression model, and a smartphone, which serves as a platform for recording and transforming the raw EEG data into useful driving status. The experiment was a sustained-attention driving task implemented in a virtual-reality (VR) driving simulator. To train the model and develop the system, the electroencephalography (EEG) brain dynamics of 15 subjects were studied using a mobile and wireless EEG device. Based on the training results, the leave-one-subject-out cross-validation test obtained 90% fatigue detection accuracy. These results indicate that the combination of a smartphone and a wireless EEG device constitutes an effective and easily wearable solution for detecting and preventing driver fatigue in real driving environments.
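The leave-one-subject-out cross-validation reported in this abstract is a standard evaluation scheme for subject-dependent physiological data. The following is a minimal sketch of the scheme itself; the toy data and the threshold "model" are illustrative placeholders, not the paper's EEG data or regression model.

```python
# Sketch of leave-one-subject-out cross-validation: for each subject,
# train on every other subject's samples and evaluate on the held-out one.
def leave_one_subject_out(data, train_fn, eval_fn):
    """data: dict subject -> list of (feature, label) samples.
    Returns the mean held-out accuracy across subjects."""
    scores = []
    for held_out in data:
        train = [s for subj, samples in data.items()
                 if subj != held_out for s in samples]
        model = train_fn(train)
        scores.append(eval_fn(model, data[held_out]))
    return sum(scores) / len(scores)

# Toy stand-in classifier: predict "fatigued" (label 1) when the feature
# exceeds the midpoint between the class means fitted on training subjects.
def train_fn(samples):
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2  # threshold

def eval_fn(threshold, samples):
    correct = sum((x > threshold) == (y == 1) for x, y in samples)
    return correct / len(samples)

# Hypothetical per-subject samples (feature, fatigue label).
data = {
    "s1": [(0.9, 1), (0.2, 0)],
    "s2": [(0.8, 1), (0.1, 0)],
    "s3": [(0.7, 1), (0.3, 0)],
}
acc = leave_one_subject_out(data, train_fn, eval_fn)
```

Holding out whole subjects, rather than random samples, is what makes the reported 90% accuracy an estimate of generalisation to unseen drivers.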
Lu Qi, Huang, Y, Li, L & Xu, G 2015, 'Learning to rank domain experts in microblogging by combining text and non-text features', 2015 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), 2015 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), IEEE, Nanjing, China, pp. 28-31.
View/Download from: Publisher's site
View description>>
Currently, microblog search engines can find related users according to input topic keywords. Traditional approaches rank users by their authentication information or their self-descriptions (introductions or labels). However, many users may not publish posts closely related to their certification profile. In this paper, we study the problem of identifying domain-dependent influential users (or topic experts). We propose to fuse non-text features and text features to analyse the influence of users. In addition, we compare three kinds of ranking methods, i.e., order-based rank aggregation, greedy-selection-based rank aggregation, and the SVM Rank method. Our experimental results show that the highest precision is achieved by the SVM Rank method.
Mao, M, Lu, J, Zhang, G & Zhang, J 2015, 'A Fuzzy Content Matching-based e-Commerce Recommendation Approach', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, Turkey, pp. 1-8.
View/Download from: Publisher's site
View description>>
E-Commerce products often come with rich, tree-structured content information describing their attributes. To make good use of this content information, this study proposes a fuzzy content matching-based recommendation approach to help e-Commerce customers choose the items they are truly interested in. In this paper, users' ratings and preferences are represented using fuzzy numbers to retain their uncertainty. Tree-structured content information is transformed into a set of descriptors, and users' preferences on these descriptors are derived from fuzzy ratings using fuzzy number operations. Preference dependence relations are established between descriptors to explore the relations between different content features and to serve as a basis for sketching a complete profile of users. Once the extended preference profile of a user is established, given a new item, the fuzzy match degree between the user preference and the item content information is computed, and a fuzzy TOPSIS ranking method is proposed to rank all candidate items according to their fuzzy match degrees; the highest-ranked items are recommended to the target user. We conduct empirical experiments on Yelp and MovieLens datasets. The results indicate that the proposed approach improves recommendation performance in terms of both coverage and accuracy.
McGregor, C 2015, 'A framework for online health analytics for advanced prognostics and health management of astronauts', 2015 IEEE Aerospace Conference, 2015 IEEE Aerospace Conference, IEEE, Big Sky, MT.
View/Download from: Publisher's site
McGregor, C, Bonnis, B, Stanfield, B & Stanfield, M 2015, 'A Method for Real-Time Stimulation and Response Monitoring Using Big Data and Its Application to Tactical Training', 2015 IEEE 28th International Symposium on Computer-Based Medical Systems, 2015 IEEE 28th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, Univ Sao Paulo, Sao Paulo, BRAZIL, pp. 169-170.
View/Download from: Publisher's site
McGregor, C, Heath, J & Choi, Y 2015, 'Streaming Physiological Data: General Public Perceptions of Secondary Use and Application to Research in Neonatal Intensive Care.', Stud Health Technol Inform, 15th World Congress on Health and Biomedical Informatics (MEDINFO), IOS PRESS, Netherlands, pp. 453-457.
View/Download from: Publisher's site
View description>>
High speed physiological data represents one of the most untapped resources in healthcare today and is a form of Big Data. Physiological data is captured and displayed on a wide range of devices in healthcare environments. Frequently this data is transitory and lost once initially displayed. Researchers wish to store and analyze these datasets, however, there is little evidence of any engagement with citizens regarding their perceptions of physiological data capture for secondary use. This paper presents the findings of a self-administered household survey (n=165, response rate = 34%) that investigated Australian and Canadian citizens' perceptions of such physiological data capture and re-use. Results indicate general public support for the secondary use of physiological streaming data. Discussion considers the potential application of such data in neonatal intensive care contexts in relation to our Artemis research. Consideration of the perceptions of secondary use of the streaming data as early as possible will assist in building appropriate use models, with a focus on parents in the neonatal context.
Medvediev, K, Berkovsky, S, Xu, G & Onikienko, Y 2015, 'An Analysis of New Visitors' Website Behaviour before & after TV Advertising', PROCEEDINGS OF 2015 IEEE INTERNATIONAL CONFERENCE ON BEHAVIORAL, ECONOMIC, SOCIO-CULTURAL COMPUTING (BESC), International Conference on Behavioral, Economic and Socio-cultural Computing, IEEE, Nanjing, China, pp. 109-115.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. This paper explores and analyses the actions of users on an e-commerce website after they have watched TV advertising. The analysis considers factors such as the month, day and time of the website visit. This article utilises visualization tools for the analysis of the frequency ratios (probabilities) of searches, conversions, and bookings made by new visitors to the website.
Merigó, JM, Yang, J-B & Xu, D-L 2015, 'A Bibliometric Overview of Financial Studies', SCIENTIFIC METHODS FOR THE TREATMENT OF UNCERTAINTY IN SOCIAL SCIENCES, 18th International SIGEF Congress on Scientific methods for the treatment of uncertainty in social sciences, Springer International Publishing, Girona, SPAIN, pp. 245-254.
View/Download from: Publisher's site
Mols, I, Broekhuijsen, M, van den Hoven, E, Markopoulos, P & Eggen, B 2015, 'Do We Ruin the Moment? Exploring the Design of Novel Capturing Technologies', Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, OzCHI '15: The Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, ACM, Melbourne, VIC, Australia, pp. 653-661.
View/Download from: Publisher's site
View description>>
By capturing our experiences we often strive to better remember them in the future. However, the act of media capturing also influences these same experiences in the present, an area which is underexplored. This paper describes a study with the aim to inform the design of novel media capturing strategies. Adopting an approach of defamiliarization based on intervention and reflection, we strive to gain insights into the influences of future capturing technologies on the experience of a day out. We conducted an exploratory study in which 28 students went on a day out and used a variety of capturing strategies. Individual and group reflections on the experience during this day identified several important aspects that media capturing influences: engagement, perception & attention, and social activity. The paper concludes with implications for design and proposes three potential future directions for media capturing that, instead of disturbing the moment, enhance the experience.
Moncur, W, Julius, M, van den Hoven, E & Kirk, D 2015, 'Story Shell: The participatory design of a bespoke digital memorial', Proceedings of the 4th Participatory Innovation Conference, Participatory Innovation Conference, The Hague University of Applied Sciences, The Hague, the Netherlands, pp. 470-477.
View description>>
This paper describes the participatory process involved in designing a bespoke, tangible, digital memorial – Story Shell – with a bereaved parent. We drew on an emergent framework for digital memorials in considering who should author and experience the memorial, what content should be included, what form the memorial should take, and what message it was intended to convey. A key finding was that the participatory design process itself served as a memorial, by presenting opportunities for the participant to share detailed memories of their loved one. Reflections on the process deliver insights for makers and analysts on how to work in sensitive design spaces, where there is a need to consider not only an object's form but also its situation within a delicate social context.
Nizami, S, Green, JR & McGregor, C 2015, 'An Artifact Detection Framework for Clinical Decision Support Systems', WORLD CONGRESS ON MEDICAL PHYSICS AND BIOMEDICAL ENGINEERING, 2015, VOLS 1 AND 2, World Congress on Medical Physics and Biomedical Engineering, Springer International Publishing, Toronto, CANADA, pp. 1393-1396.
View/Download from: Publisher's site
Nordbo, K, Milne, D, Calvo, RA & Allman-Farinelli, M 2015, 'Virtual Food Court', Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, OzCHI '15: The Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, ACM, pp. 69-72.
View/Download from: Publisher's site
View description>>
Immersive virtual reality environments can provide users with realistic experiences of worlds that do not exist or would be hard to reach. The ability to manipulate these environments and influence experiences can be used to understand decision making under different conditions. In this study we explore how VR can be used to understand more about people's food choices, and how policy-based interventions such as a "sugar tax" and "nutrition labelling" to promote healthier food choices could be tested. Only limited experimental studies have been conducted about such choices due to the difficulty of trialling such interventions in large retail settings. The objective of the study was to assess how accurately the Virtual Food Court (VFC) represents a real food court. The study (27 participants) had two conditions: a control with regular food-court prices, and an experimental condition with taxes on food and beverages. Results revealed that participants were able to imagine doing their real-life food purchases in the VFC, indicating that it is a good research tool for assessing people's food choices.
Oberst, S, Griffin, D, Tuttle, S, Lambert, A & Boyce, RR 2015, 'Analysis of thin curved flexible structures for space applications', Acoustics 2015 Hunter Valley, Conference of the Australian Acoustical Society, Hunter Valley, NSW, Australia.
View description>>
With the advent of affordable nano-satellite designs (off-the-shelf payloads, standardised launch geometries), increasingly enterprises, governmental agencies and universities have started developing their own space programs to explore the environment of Low Earth Orbits. Thin, flexible and unfolding/deployable structures are common space engineering antenna and solar panel designs owing to their lightweight and ideal packaging characteristics, which are, however, difficult to experimentally validate in a 1-g environment. Further, curvatures or discontinuities to increase functionality without violating prioritised design criteria may lead to system-level trade-offs: stability issues arising from buckling in combination with micro-vibrations which feed back to the satellite's attitude behaviour. It appears that the literature lacks a systematic investigation of these aspects. On-Earth experimental validations (static experiments, model updating) are the starting point for studying the response to static/dynamic loading of thin curved flexible structures such as deployable high frequency antennas. Linear and nonlinear buckling modes owing to varying loadings (aerodynamic drag, solar radiation pressure, residual gravity and magnetic body forces) are found together with a high sensitivity to torsional modes' frequency changes under micro-vibrational forcing.
Paler, A & Devitt, SJ 2015, 'An introduction to Fault-tolerant Quantum Computing', DAC'15 Proceedings of the 52nd Annual Design Automation Conference Article No. 60 (2015).
View/Download from: Publisher's site
View description>>
In this paper we provide a basic introduction to the core ideas and theories surrounding fault-tolerant quantum computation. These concepts underlie the theoretical framework of large-scale quantum computation and communications and are the driving force for many recent experimental efforts to construct small to medium sized arrays of controllable quantum bits. We examine the basic principles of redundant quantum encoding, required to protect quantum bits from errors generated by both imprecise control and environmental interactions, and then examine the principles of fault-tolerance from a largely classical framework. As quantum fault-tolerance essentially amounts to avoiding the uncontrollable cascade of errors caused by the interaction of quantum bits, these concepts can be directly mapped to quantum information.
Paler, A, Polian, I, Nemoto, K & Devitt, SJ 2015, 'A Regular Representation of Quantum Circuits', Bernard, pp. 139-154.
View/Download from: Publisher's site
View description>>
We present a quantum circuit representation consisting entirely of qubit initialisations (I), a network of controlled-NOT gates (C) and measurements with respect to different bases (M). The ICM representation is useful for optimisation of quantum circuits that include teleportation, which is required for fault-tolerant, error corrected quantum computation. The non-deterministic nature of teleportation necessitates the conditional introduction of corrective quantum gates and additional ancillae during circuit execution. Therefore, the standard optimisation objectives, gate count and number of wires, are not well-defined for general teleportation-based circuits. The transformation of a circuit into the ICM representation provides a canonical form for an exact fault-tolerant, error corrected circuit needed for optimisation prior to the final implementation in a realistic hardware model.
Pickrell, M, Bongers, B & van den Hoven, E 2015, 'Understanding persuasion and motivation in interactive stroke rehabilitation: A physiotherapists' perspective on patient motivation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Persuasive Technology, Springer, Chicago, USA, pp. 15-26.
View/Download from: Publisher's site
View description>>
For the research reported in this paper, ethnographic research methodologies were used to explore patient motivation, feedback and the use of interactive technologies in the ward. We conducted in-depth interviews with physiotherapists, who work closely with stroke patients to help them regain movement and function. From this research, a set of design guidelines has been developed which can be applied in the design of interactive rehabilitation equipment.
Pileggi, SF 2015, 'An Individual-centric Probabilistic Extension for OWL: Modelling the Uncertainness', Procedia Computer Science, 15th Annual International Conference on Computational Science (ICCS), Elsevier BV, Reykjavik Univ, Reykjavik, ICELAND, pp. 1742-1751.
View/Download from: Publisher's site
Prasad, M, Er, MJ, Lin, CT, Prasad, OK, Mohanty, M & Singh, J 2015, 'Novel Data Knowledge Representation with TSK-Type Preprocessed Collaborative Fuzzy Rule Based System', 2015 IEEE Symposium Series on Computational Intelligence, 2015 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, Cape Town, SOUTH AFRICA, pp. 14-21.
View/Download from: Publisher's site
View description>>
A novel data knowledge representation combining the structure learning ability of preprocessed collaborative fuzzy clustering with the fuzzy expert knowledge of a Takagi-Sugeno-Kang type model is presented in this paper. The proposed method divides a huge dataset into two or more subsets. The subsets interact with each other through a collaborative mechanism in order to find similar properties within each other. The proposed method is useful in dealing with big data issues since it divides a huge dataset into subsets and finds common features among them. Its salient feature is that it uses a small subset of the data and some common features instead of the entire dataset and all the features. Before interactions among subsets of the dataset, the proposed method applies a mapping technique to granules of data and cluster centroids. The proposed method uses information from only about half of the data patterns for the training process, yet it provides an accurate and robust model, whereas other existing methods use the entire information of the data patterns. Simulation results show that the proposed method performs better than existing methods on some benchmark problems.
Pratama, M, Lu, J & Zhang, G 2015, 'An Incremental Interval Type-2 Neural Fuzzy Classifier', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, Turkey, pp. 1-8.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Most real world classification problems involve a high degree of uncertainty, unsolved by a traditional type-1 fuzzy classifier. In this paper, a novel interval type-2 classifier, namely the Evolving Type-2 Classifier (eT2Class), is proposed. The eT2Class features a flexible working principle built upon a fully sequential and local working principle. This learning notion allows eT2Class to automatically grow, adapt, prune, and recall its knowledge from data streams in a single-pass learning fashion, while employing loosely coupled fuzzy sub-models. In addition, eT2Class introduces a generalized interval type-2 fuzzy neural network architecture, where a multivariate Gaussian function with uncertain non-diagonal covariance matrices constructs the rule premise, while the rule consequent is crafted by a local non-linear Chebyshev polynomial. The efficacy of eT2Class is numerically validated in studies with four data streams characterizing non-stationary behaviors, where eT2Class demonstrates the most encouraging learning performance in achieving a tradeoff between accuracy and complexity.
Pratama, M, Lu, J & Zhang, G 2015, 'A Novel Meta-cognitive Extreme Learning Machine to Learning from Data Streams', 2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, IEEE International Conference on Systems, Man and Cybernetics, IEEE, Kowloon, pp. 2792-2797.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Extreme Learning Machine (ELM) is an answer to an increasing demand for a low-cost learning algorithm to handle big data applications. Nevertheless, existing ELMs leave four uncharted problems: complexity, uncertainty, concept drift, and the curse of dimensionality. To correct these issues, a novel incremental meta-cognitive ELM, namely the Evolving Type-2 Extreme Learning Machine (eT2ELM), is proposed. eT2ELM is built upon the three pillars of meta-cognitive learning, namely what-to-learn, how-to-learn and when-to-learn, where the notion of ELM is implemented in the how-to-learn component. On the other hand, eT2ELM is driven by a generalized interval type-2 Fuzzy Neural Network (FNN) as the cognitive constituent, where the interval type-2 multivariate Gaussian function is used in the hidden layer, whereas the nonlinear Chebyshev function is embedded in the output layer. The efficacy of eT2ELM is proven on four data streams possessing various concept drifts, with comparisons against prominent classifiers and statistical tests, where eT2ELM demonstrates the most encouraging learning performance in terms of accuracy and complexity.
Ramezani, F, Naderpour, M & Lu, J 2015, 'Handling Uncertainty in Cloud Resource Management Using Fuzzy Bayesian Networks', 2015 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2015), IEEE International Conference on Fuzzy Systems, IEEE, Istanbul, Turkey.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. The success of cloud services depends critically on the effective management of virtualized resources. This paper aims to design and implement a decision support method to handle uncertainties in resource management from the cloud provider perspective, one that manages underlying complexity, automates resource provisioning and controls client-perceived quality of service. The paper includes a probabilistic decision making module that relies upon a fuzzy Bayesian network to determine the current situation status of a cloud infrastructure, including physical and virtual machines, and predicts the near-future state, which will help the hypervisor to migrate or expand the VMs to reduce execution time and meet quality of service requirements. First, the framework of resource management is presented. Second, the decision making module is developed. Lastly, a series of experiments to investigate the performance of the proposed module is implemented. Experiments reveal the efficiency of the module prototype.
Ramos, L, van den Hoven, E & Miller, L 2016, 'Designing for the Other 'Hereafter': When Older Adults Remember about Forgetting', 34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2016, Conference for Human-Computer Interaction, ACM, San Jose, CA, USA, pp. 721-732.
View/Download from: Publisher's site
View description>>
Designing to support memory for older individuals is a complex challenge in human-computer interaction (HCI) research. Past literature on human memory has mapped processes for recalling past experiences, learning new things, remembering to carry out future intentions and the importance of attention. However, the understanding of how older adults perceive forgetting in daily life remains limited. This paper narrows this gap through a study with older persons (n=18) living independently using self-reporting and semi-structured focus groups to explore what they forget, how they react, and what mechanisms they put in place to recover from and avoid forgetting. Findings include occurrences of prospective and retrospective memory lapses, conflicting negative and neutral perceptions, and techniques to manage forgetting. Participant responses indicate that an awareness of forgetting fosters internal tensions among older adults, thereby creating opportunities for further design research, e.g., to defuse and normalise these reactions.
Rao, R, Sarkar, M, Song, M, Rosing, T, Prasad, V, Yu, S, Mihovska, A, Annamalai, A, Liu, L, Mao, S, Kumar, S & Zhang, Q 2015, 'TPC Welcome', 2015 IEEE Globecom Workshops (GC Wkshps), 2015 IEEE Globecom Workshops (GC Wkshps), IEEE.
View/Download from: Publisher's site
Saberi, M & Saberi, Z 2015, 'ANOVA Based Approach for Efficient Customer Recognition: Dealing with Common Names', IFIP Advances in Information and Communication Technology, Springer International Publishing, pp. 64-74.
View/Download from: Publisher's site
View description>>
© IFIP International Federation for Information Processing 2015. This study proposes an Analysis of Variance (ANOVA) technique that focuses on the efficient recognition of customers with common names. The continuous improvement of information and communications technologies (ICT) has led customers to have new expectations of, and concerns about, the organizations they deal with. These new expectations create various difficulties for organizations' help desks in meeting customers' needs. In this paper, we propose a technique that provides the most beneficial information to the customer service representative to assist in the efficient recognition of the customer. The proposed algorithm determines which features of a customer should be asked about to achieve his/her prompt recognition. Moreover, to maintain a clean database, the framework uses customer features for which a standard format is available, such as street address and month of birth. We evaluate our algorithm on a synthetic dataset and demonstrate how the right customer can be recognized in an optimal manner.
Saberi, M & Saberi, Z 2015, 'Developing a Fuzzy Predictive Aid System in Contact Centres for an Efficient Customer Recognition Process', 2015 International Conference on Intelligent Networking and Collaborative Systems, 2015 International Conference on Intelligent Networking and Collaborative Systems (INCOS), IEEE, Taipei, TAIWAN, pp. 411-416.
View/Download from: Publisher's site
Saberi, M, Hussain, OK & Chang, E 2015, 'Statistical quality control framework for crowd-worker in ER-in-house crowdsourcing system', Proceedings of the 20th International Conference on Information Quality, ICIQ 2015, pp. 88-104.
View description>>
These days, poor data quality is prevalent in organizations, and it negatively affects the accuracy of organizational decision making. The problem of dirty data is especially severe for an organization's customer relationship management (CRM) and prevents it from performing effectively. One type of dirty data is duplicate records that correspond to the same entities. The presence of duplicate profiles in an organization's database prevents it from having a clear picture of its customers' profiles. Thus, developing an efficient entity resolution (ER) technique for a given organization is essential. Recently, crowdsourcing techniques have been used to improve the accuracy of the entity resolution process, making use of human intelligence to label the data and make it ready for further processing by ER algorithms. However, labelling of data by humans is an error-prone process that affects entity resolution and, eventually, the overall performance of the crowd. Thus, controlling the quality of the labelling task is essential for crowdsourcing systems, and it becomes more challenging when ground-truth data is unavailable. In this study, we focus on contact centres and employ Customer Service Representatives (CSRs) as crowd-workers for an ER-crowdsourcing system. A statistical quality control (SQC) framework is proposed to control the quality of CSR labelling. The proposed SQC framework should be able to estimate the true error of CSRs in order to monitor their labelling accuracy and performance. To this end, a Hybrid Gold-Plurality (HGP) algorithm is proposed that estimates each CSR's true error. The proposed HGP algorithm achieves appropriate accuracy in error estimation as it combines both masking and detection crowd-worker quality control mechanisms. A synthetic dataset is used to demonstrate the applicability of the SQC framework.
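The gold-plurality idea in this abstract can be sketched simply: when no ground truth exists for an item, the plurality label across workers serves as a proxy truth; items with known gold labels override it. This is a simplified illustration of that idea, not the paper's HGP algorithm, and all names are assumptions:

```python
from collections import Counter

def estimate_worker_error(labels_by_worker, gold=None):
    """Estimate each crowd-worker's error rate against a hybrid
    reference: gold labels where available, plurality vote elsewhere.

    labels_by_worker maps worker -> {item: label}.
    """
    gold = gold or {}
    items = {i for labels in labels_by_worker.values() for i in labels}
    truth = {}
    for i in items:
        if i in gold:
            truth[i] = gold[i]  # the 'gold' part of the hybrid
        else:
            votes = Counter(labels[i]
                            for labels in labels_by_worker.values() if i in labels)
            truth[i] = votes.most_common(1)[0][0]  # the 'plurality' part
    return {w: sum(1 for i, l in labels.items() if l != truth[i]) / len(labels)
            for w, labels in labels_by_worker.items()}
```

A monitoring loop could then flag any CSR whose estimated error drifts above a control limit, which is the statistical-quality-control step the framework builds on top of this estimate.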
Saberi, M, Hussain, OK, Janjua, NK & Chang, E 2015, 'Cognition and Statistical-Based Crowd Evaluation Framework for ER-in-House Crowdsourcing System: Inbound Contact Center', DATABASES THEORY AND APPLICATIONS, 26th Australasian Database Conference (ADC), Springer International Publishing, Melbourne, AUSTRALIA, pp. 207-219.
View/Download from: Publisher's site
Shao, J, Yin, J, Liu, W & Cao, L 2015, 'Actionable combined high utility itemset mining', Proceedings of the National Conference on Artificial Intelligence, AAAI Conference on Artificial Intelligence, AAAI Press, Austin, Texas, USA, pp. 4206-4207.
View description>>
The itemsets discovered by traditional High Utility Itemset Mining (HUIM) methods are more useful than frequent itemset mining outcomes; however, they are usually disordered, not actionable and sometimes accidental, because utility is the only criterion and no relations among itemsets are considered. In this paper, we introduce the concept of combined mining to select combined itemsets that are not only of high utility and high frequency, but also capture relations between itemsets. An effective method for mining such actionable combined high utility itemsets is proposed. The experimental results are promising compared to those of the traditional HUIM algorithm UP-Growth.
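The utility measure that HUIM methods rank by can be made concrete: the utility of an itemset is the total profit it contributes across the transactions that contain all of its items. A minimal sketch (the representation of transactions as item-to-quantity maps is an assumption for illustration):

```python
def itemset_utility(transactions, profits, itemset):
    """Total utility of an itemset: over every transaction containing
    all its items, sum quantity * unit profit for each item."""
    total = 0
    for tx in transactions:  # tx maps item -> purchased quantity
        if all(i in tx for i in itemset):
            total += sum(tx[i] * profits[i] for i in itemset)
    return total
```

High utility mining keeps itemsets whose total utility clears a threshold; the paper's contribution is to additionally require frequency and inter-itemset relations so the surviving itemsets are actionable rather than accidental.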
Sharma, N, Mandal, R, Sharma, R, Pal, U & Blumenstein, M 2015, 'Bag-of-Visual Words for word-wise video script identification: A study', 2015 International Joint Conference on Neural Networks (IJCNN), 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Use of multiple scripts for information communication through various media is quite common in a multilingual country. Optical character recognition of such document images or videos assists in indexing them for effective information retrieval. Hence, script identification from multi-lingual documents/images is a necessary step for selecting the appropriate OCR, due to the absence of a single OCR system capable of handling multiple scripts. Script identification from printed as well as handwritten documents is a well-researched area, but script identification from video frames has not been explored much. Low resolution, blur and noisy backgrounds, to mention a few, are the major bottlenecks when processing video frames, and make script identification from video images a challenging task. This paper examines the potential of Bag-of-Visual-Words based techniques for word-wise script identification from video frames. Two different approaches, namely Bag-of-Features (BoF) and Spatial Pyramid Matching (SPM), using patch-based SIFT descriptors, were considered for the current study. An SVM classifier was used for analysing the three popular south Indian scripts, namely Tamil, Telugu and Kannada, in combination with English and Hindi. A comparative study of Bag-of-Visual-Words with traditional script identification techniques involving gradient-based features (e.g. HoG) and texture-based features (e.g. LBP) is presented. Experimental results show that patch-based features along with SPM outperformed the traditional techniques, and promising accuracies were achieved on 2534 words from the five scripts. The study reveals that patch-based features can be used for script identification in order to overcome the inherent problems with video frames.
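The BoF step in this study can be sketched in a few lines: each local descriptor (SIFT in the paper) is assigned to its nearest codeword, and the word image is represented by the normalized histogram of assignments. The codebook would normally be learned by k-means over training descriptors; here it is passed in, and all names are illustrative assumptions:

```python
def bof_histogram(descriptors, codebook):
    """Bag-of-Features encoding: assign each local descriptor to the
    nearest codeword (squared Euclidean distance) and return the
    normalized occurrence histogram used as the classifier's input."""
    hist = [0] * len(codebook)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, c)) for c in codebook]
        hist[dists.index(min(dists))] += 1
    n = sum(hist) or 1  # guard against an empty descriptor set
    return [h / n for h in hist]
```

SPM, the better-performing variant in the paper, computes such histograms over nested image sub-regions and concatenates them, so spatial layout survives the quantization.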
Sharma, N, Mandal, R, Sharma, R, Pal, U & Blumenstein, M 2015, 'ICDAR2015 Competition on Video Script Identification (CVSI 2015)', 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015 13th International Conference on Document Analysis and Recognition (ICDAR), IEEE, pp. 1196-1200.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. This paper presents the final results of the ICDAR 2015 Competition on Video Script Identification. A description and the performance of the participating systems in the competition are reported. The general objective of the competition is to evaluate and benchmark the available methods for word-wise video script identification. It also provides a platform for researchers around the globe to particularly address the video script identification problem and video text recognition in general. The competition was organised around four different tasks involving various combinations of scripts, comprising tri-script and multi-script scenarios. The dataset used in the competition comprised ten different scripts. In total, six systems were received from five participants across the tasks offered. This report details the competition dataset specifications, evaluation criteria, a summary of the participating systems and their performance across different tasks. The systems submitted by Google Inc. won the competition for all tasks, whereas the systems received from Huazhong University of Science and Technology (HUST) and the Computer Vision Center (CVC) were very close competitors.
Sharma, N, Mandal, R, Sharma, R, Roy, PP, Pal, U & Blumenstein, M 2015, 'Multi-lingual text recognition from video frames', 2015 13th International Conference on Document Analysis and Recognition (ICDAR), 2015 13th International Conference on Document Analysis and Recognition (ICDAR), IEEE, Nancy, France, pp. 951-955.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Text recognition from video frames is a challenging task due to low resolution, blur, complex and coloured backgrounds, and noise, to mention a few. Consequently, the traditional ways of recognizing text from scanned documents with simple backgrounds fail when applied to video text. Although there are various techniques available for text recognition from handwritten and printed documents with simple backgrounds, text recognition from video frames has not been comprehensively investigated, especially for multi-lingual videos. In this paper, we present a technique for multi-lingual video text recognition which involves script identification in the first stage, followed by word and character recognition; finally, the results are refined using a post-processing technique. Considering the inherent problems in videos, a Spatial Pyramid Matching (SPM) based technique, using patch-based SIFT descriptors and an SVM classifier, is employed for script identification. In the next stage, a Hidden Markov Model (HMM) based approach is used for word and character recognition, which utilizes the context information. Finally, a lexicon-based post-processing technique is applied to verify and refine the word recognition results. The proposed method was tested on a dataset comprising 4800 words from three different scripts, namely Roman (English), Hindi and Bengali. The script identification results obtained are encouraging. The word and character recognition results are also encouraging considering the complexity and problems associated with video text processing.
Singh, AK, Wang, Y-K, King, J-T, Lin, C-T & Ko, L-W 2015, 'A simple communication system based on Brain Computer Interface', 2015 Conference on Technologies and Applications of Artificial Intelligence (TAAI), 2015 Conference on Technologies and Applications of Artificial Intelligence (TAAI), IEEE, Tainan, Taiwan, pp. 363-366.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. The current study presents a Brain Computer Interface (BCI) based communication system in which intentional eye blinks are extracted from single-channel EEG data. This system could be useful for sufferers of motor diseases, locked-in syndrome and paralysis; as a new Human Computer Interface (HCI), it can also benefit healthy users. To detect intentional eye blinks in real time, a score is calculated from the delta, theta and gamma power bands of the brain dynamics, acquired wirelessly from a single channel through a NeuroSky headset. Soft and hard blinks represent '0' and '1' respectively, forming four-bit strings that map to pre-defined text in the system. The mapped text is converted into speech and sent to the speaker. The experimental results show that this system can provide an accurate and convenient way to communicate through brain dynamics.
Song, K, Feng, S, Gao, W, Wang, D, Chen, L & Zhang, C 2015, 'Build Emotion Lexicon from Microblogs by Combining Effects of Seed Words and Emoticons in a Heterogeneous Graph', Proceedings of the 26th ACM Conference on Hypertext & Social Media - HT '15, the 26th ACM Conference, ACM Press, Guzelyurt, Northern Cyprus, pp. 283-292.
View/Download from: Publisher's site
View description>>
© 2015 ACM. As an indispensable resource for emotion analysis, emotion lexicons have attracted increasing attention in recent years. Most existing methods focus on capturing the single emotional effect of words rather than the emotion distributions which are helpful to model multiple complex emotions in a subjective text. Meanwhile, automatic lexicon building methods are overly dependent on seed words but neglect the effect of emoticons which are natural graphical labels of fine-grained emotion. In this paper, we propose a novel emotion lexicon building framework that leverages both seed words and emoticons simultaneously to capture emotion distributions of candidate words more accurately. Our method overcomes the weakness of existing methods by combining the effects of both seed words and emoticons in a unified three-layer heterogeneous graph, in which a multi-label random walk (MLRW) algorithm is performed to strengthen the emotion distribution estimation. Experimental results on real-world data reveal that our constructed emotion lexicon achieves promising results for emotion classification compared to the state-of-the-art lexicons.
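The propagation idea behind the MLRW algorithm described above can be illustrated with a generic graph label-propagation sketch: seed nodes (seed words and emoticons) keep fixed emotion distributions, while candidate words iteratively absorb their neighbors' distributions. This is not the paper's three-layer MLRW; the damping factor and all names are illustrative assumptions:

```python
def propagate_emotions(neighbors, seeds, n_iter=50, alpha=0.5):
    """Random-walk style propagation of emotion distributions.

    neighbors maps node -> list of adjacent nodes; seeds maps a seed
    node -> fixed emotion distribution. Non-seed nodes repeatedly mix
    their current distribution with the average of their neighbors'.
    """
    emotions = sorted({e for d in seeds.values() for e in d})
    dist = {n: dict(seeds.get(n, {e: 1 / len(emotions) for e in emotions}))
            for n in neighbors}
    for _ in range(n_iter):
        new = {}
        for n, nbrs in neighbors.items():
            if n in seeds:
                new[n] = dict(seeds[n])  # seeds are clamped
                continue
            avg = {e: sum(dist[m][e] for m in nbrs) / len(nbrs) for e in emotions}
            new[n] = {e: alpha * avg[e] + (1 - alpha) * dist[n][e] for e in emotions}
        dist = new
    return dist
```

In the paper the walk runs over a heterogeneous word/seed/emoticon graph and is multi-label, so a candidate word ends up with a full emotion distribution rather than a single dominant emotion.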
Sood, K, Liu, S, Yu, S & Xiang, Y 2015, 'Dynamic access point association using Software Defined Networking', 2015 International Telecommunication Networks and Applications Conference (ITNAC), 2015 International Telecommunication Networks and Applications Conference (ITNAC), IEEE, Sydney, AUSTRALIA, pp. 226-231.
View/Download from: Publisher's site
Sun, G, Cui, T, Guo, WW, Beydoun, G, Xu, D & Shen, J 2015, 'Micro Learning Adaptation in MOOC: A Software as a Service and a Personalized Learner Model', ICWL, International Conference on Web-Based Learning, Springer, Guangzhou, China, pp. 174-184.
View/Download from: Publisher's site
View description>>
Micro learning is gradually becoming a common learning mode in massive open online course (MOOC) learning. We illustrate a research strategy to formalize and customize micro learning resources in order to meet personal demands in real time. This smart micro learning environment can be organized by a newly designed Software as a Service (SaaS), in which educational data mining techniques are employed to understand learners' learning behaviors and recognize learning resource features, in order to identify potential micro learning solutions. A learner model with regard to internal and external factors is also proposed for personalization in the micro MOOC learning context.
Teague, D, Lister, R & Ahadi, A 2015, 'Mired in the web: Vignettes from Charlotte and other novice programmers', Conferences in Research and Practice in Information Technology Series, Australasian Computing Education Conference, ACS, Sydney, Australia, pp. 165-174.
View description>>
Ahadi and Lister (2013) found that many of their introductory programming students had fallen behind as early as week 3 of semester, and those students often then stayed behind. Our later work (Ahadi, Lister and Teague 2014) supported that finding, for students at another institution. In this paper, we go one step further than those earlier studies by observing a number of students as they complete programming tasks while thinking aloud. We describe the types of inconsistencies students manifest, which are often not evident on analysis of conventional written tests. We again interpret our findings using neo-Piagetian theory. We conclude with some thoughts on the pedagogical implications of our research results.
Tian, F, Liu, B, Xiong, J & Gui, L 2015, 'Efficient caching scheme for data access in disruption tolerant networks', 2015 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, 2015 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), IEEE.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Disruption Tolerant Networks (DTNs) are characterized by a lack of continuous network connectivity because of limited radio communication range, sparsity of mobile nodes, and shortage of energy resources. There has been much research on data forwarding in DTNs, but limited work on providing efficient data access. In this paper, we propose an efficient cooperative caching scheme, which enables data queries to be satisfied quickly. Our basic idea is to cache data at a set of Proper Nodes (PNs), which can be easily accessed by other nodes in the network. We present a novel algorithm to select the PNs; this algorithm ensures that the PNs are not clustered in a way that would damage the data access performance of the whole network. We evaluate our proposed cooperative caching scheme with extensive simulation using the Sigcomm2009 trace records. The simulation results show that our proposed cooperative caching scheme significantly improves the performance of data access.
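The non-clustered caching-node selection described above can be sketched as a greedy covering procedure: repeatedly pick the node reaching the most still-uncovered neighbors, then mark it and its neighbors as covered so the next pick lands elsewhere in the network. This is a generic sketch of the idea, not the paper's PN selection algorithm:

```python
def select_proper_nodes(neighbors, k):
    """Greedy selection of up to k caching nodes from a contact graph.

    neighbors maps node -> list of adjacent nodes. Marking each chosen
    node's neighborhood as covered keeps the chosen nodes spread out,
    so cached data stays easy to reach from anywhere in the network.
    """
    covered, chosen = set(), []
    while len(chosen) < k:
        best = max((n for n in neighbors if n not in covered),
                   key=lambda n: len(set(neighbors[n]) - covered),
                   default=None)
        if best is None:  # every node already covered
            break
        chosen.append(best)
        covered.add(best)
        covered.update(neighbors[best])
    return chosen
```

On a star-shaped contact graph the hub is chosen first, after which every node is covered, which matches the intuition that a few well-placed caches can serve a sparse DTN.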
Tsai, W-C & van den Hoven, E 2015, 'Retro jukebox', Proceedings of the Asia Pacific HCI and UX Design Symposium, APCHIUX '15: Asia Pacific Symposium of HCI and UX Design, ACM, Melbourne, pp. 22-25.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Retro Jukebox is a tablet-based software application designed for postoperative elderly patients and bedside nurses. The application is designed as a reminiscence aid to support patients' cognitive stimulation. In this paper, we present the lessons learned from a field study that led us to reflect beyond its utility-oriented design. We shed light on some implicit values and benefits that may not be seen as the designer's intentions but are a meaningful appropriation heading toward the same goal.
van den Hoven, E 2015, 'From Materialising to Memories', Proceedings of the 15th New Zealand Conference on Human-Computer Interaction, CHINZ 2015: 15th New Zealand Conference on Human-Computer Interaction, ACM.
View/Download from: Publisher's site
van Gennip, D, van den Hoven, E & Markopoulos, P 2015, 'Things That Make Us Reminisce', Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15: CHI Conference on Human Factors in Computing Systems, ACM, Seoul, Republic of Korea, pp. 3443-3452.
View/Download from: Publisher's site
View description>>
Interactive devices can support personal remembering to benefit well-being. These designs require insight into what brings the past to mind, and how people relate to such cues. Prior work focused on mementos in the home; instead, this paper presents a diary and interview study of involuntary memory cueing in everyday life. Data was collected from fifteen adult individuals using sentence completion diaries combined with debriefing interviews. Qualitative analysis of the data showed that these participants relied on everyday physical objects like food items, locations and (repeated) activities for cueing memories during everyday life, while digital items and photos were shown to be less frequent stimulants. Meaningful relations to memory cues can be partially explained from a memory cueing perspective. We discuss how design for remembering can benefit from our insights, through careful trade-offs in timing, exposure to cues, and supporting a process of personal attachment with items invoking memories.
Wang, H, Zhang, P, Chen, L & Zhang, C 2015, 'SocialAnalysis: A Real-Time Query and Mining System from Social Media Data Streams', Databases Theory and Applications (LNCS), Australasian Database Conference, Springer International Publishing, Melbourne, Australia, pp. 318-322.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. In this paper, we present our recent progress in designing a real-time system, SocialAnalysis, to discover and summarize emergent social events from social media data streams. In the social networks era, people frequently post messages or comments about their activities and opinions. Hence, there exist temporal correlations between the physical world and virtual social networks, which can help us monitor and track social events, detecting and positioning anomalous events before their outbreak, so as to provide early warning. The key technologies in the system include: (1) data denoising methods based on multiple features, which screen the query-related event data out of massive background data; (2) abnormal event detection methods based on statistical learning, which detect anomalies by analyzing and mining a series of observations and statistics on the time axis; and (3) geographical position recognition, which is used to recognize regions where abnormal events may happen.
Wang, H, Zhang, P, Tsang, I, Chen, L & Zhang, C 2015, 'Defragging Subgraph Features for Graph Classification', Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM'15: 24th ACM International Conference on Information and Knowledge Management, ACM, Melbourne, VIC, Australia, pp. 1687-1690.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Graph classification is an important tool for analysing structured and semi-structured data, where subgraphs are commonly used as the feature representation. However, the number and size of subgraph features crucially depend on the threshold parameters of frequent subgraph mining algorithms. Any improper setting of the parameters will generate many trivial short-pattern subgraph fragments which dominate the feature space, distort graph classifiers and bury interesting long-pattern subgraphs. In this paper, we propose a new Subgraph Join Feature Selection (SJFS) algorithm. The SJFS algorithm, by forcing graph classifiers to join short-pattern subgraph fragments, can defrag trivial subgraph features and deliver long-pattern interesting subgraphs. Experimental results on both synthetic and real-world social network graph data demonstrate the performance of the proposed method.
Wang, W, Yin, H, Chen, L, Sun, Y, Sadiq, S & Zhou, X 2015, 'Geo-SAGE: A Geographical Sparse Additive Generative Model for Spatial Item Recommendation', Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM International Conference on Knowledge Discovery and Data Mining, ACM, Sydney, NSW, Australia, pp. 1255-1264.
View/Download from: Publisher's site
View description>>
With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important means to help people discover attractive and interesting venues and events, especially when users travel out of town. However, this recommendation is very challenging compared to the traditional recommender systems. A user can visit only a limited number of spatial items, leading to a very sparse user-item matrix. Most of the items visited by a user are located within a short distance from where he/she lives, which makes it hard to recommend items when the user travels to a far away place. Moreover, user interests and behavior patterns may vary dramatically across different geographical regions. In light of this, we propose Geo-SAGE, a geographical sparse additive generative model for spatial item recommendation in this paper. Geo-SAGE considers both user personal interests and the preference of the crowd in the target region, by exploiting both the co-occurrence pattern of spatial items and the content of spatial items. To further alleviate the data sparsity issue, Geo-SAGE exploits the geographical correlation by smoothing the crowd's preferences over a well-designed spatial index structure called spatial pyramid. We conduct extensive experiments to evaluate the performance of our Geo-SAGE model on two real large-scale datasets. The experimental results clearly demonstrate our Geo-SAGE model outperforms the state-of-the-art in the two tasks of both out-of-town and home-town recommendations.
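The spatial pyramid mentioned above is a multi-resolution grid over geographic space: a location belongs to one cell per level, with level l splitting the space into 2^l x 2^l cells, so a sparse region can borrow the crowd's preference from its coarser parent cells. A minimal indexing sketch (the lat/lon grid layout and names are illustrative assumptions, not the paper's exact structure):

```python
def pyramid_cells(lat, lon, levels=3):
    """Map a (lat, lon) location to its cell id at each pyramid level.

    Level l partitions latitude [-90, 90] and longitude [-180, 180]
    into 2^l rows and columns; returns one (level, row, col) per level.
    """
    cells = []
    for l in range(levels):
        n = 2 ** l
        row = min(int((lat + 90.0) / 180.0 * n), n - 1)
        col = min(int((lon + 180.0) / 360.0 * n), n - 1)
        cells.append((l, row, col))
    return cells
```

Smoothing then mixes a fine cell's crowd preference with its ancestors': the fewer observations a cell has, the more weight shifts toward the coarser levels.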
Wang, X, Sheng, QZ, Fang, XS, Li, X, Xu, X & Yao, L 2015, 'Approximate Truth Discovery via Problem Scale Reduction', Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM'15: 24th ACM International Conference on Information and Knowledge Management, ACM, pp. 503-512.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Many real-world applications rely on multiple data sources to provide information on their interested items. Due to the noises and uncertainty in data, given a specific item, the information from different sources may conflict. To make reliable decisions based on these data, it is important to identify the trustworthy information by resolving these conflicts, i.e., the truth discovery problem. Current solutions to this problem detect the veracity of each value jointly with the reliability of each source for every data item. In this way, the efficiency of truth discovery is strictly confined by the problem scale, which in turn limits truth discovery algorithms from being applicable on a large scale. To address this issue, we propose an approximate truth discovery approach, which divides sources and values into groups according to a user-specified approximation criterion. The groups are then used for efficient inter-value influence computation to improve the accuracy. Our approach is applicable to most existing truth discovery algorithms. Experiments on real-world datasets show that our approach improves the efficiency compared to existing algorithms while achieving similar or even better accuracy. The scalability is further demonstrated by experiments on large synthetic datasets.
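The joint estimation that this approach accelerates can be sketched in its basic form: value confidence is the summed reliability of the sources claiming it, and source reliability is the average confidence of the values that source claims, iterated until stable. This is the generic truth-discovery loop the paper builds on, not its grouping-based approximation; the initial reliability and iteration count are assumptions:

```python
def truth_discovery(claims, n_iter=10):
    """Iterative truth discovery over conflicting single-valued claims.

    claims maps item -> {source: claimed_value}. Returns the inferred
    truth per item and the estimated reliability per source.
    """
    sources = {s for by_source in claims.values() for s in by_source}
    rel = {s: 0.8 for s in sources}  # optimistic prior reliability
    truth = {}
    for _ in range(n_iter):
        conf = {}
        for item, by_source in claims.items():
            scores = {}
            for s, v in by_source.items():
                scores[v] = scores.get(v, 0.0) + rel[s]
            z = sum(scores.values())
            conf[item] = {v: c / z for v, c in scores.items()}
            truth[item] = max(scores, key=scores.get)
        for s in sources:  # reliability = mean confidence of own claims
            votes = [conf[item][by_source[s]]
                     for item, by_source in claims.items() if s in by_source]
            rel[s] = sum(votes) / len(votes)
    return truth, rel
```

Since every iteration touches every (source, value) pair, the cost grows with the problem scale; grouping similar sources and values, as the paper proposes, shrinks exactly this inner loop.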
Wang, X, Sheng, QZ, Fang, XS, Yao, L, Xu, X & Li, X 2015, 'An Integrated Bayesian Approach for Effective Multi-Truth Discovery', Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM'15: 24th ACM International Conference on Information and Knowledge Management, ACM, pp. 493-502.
View/Download from: Publisher's site
View description>>
© 2015 ACM. Truth-finding is the fundamental technique for corroborating reports from multiple sources in both data integration and collective intelligence applications. Traditional truth-finding methods assume a single true value for each data item and therefore cannot deal with multiple true values (i.e., the multi-truth-finding problem). So far, the existing approaches handle the multi-truth-finding problem in the same way as the single-truth-finding problem. Unfortunately, the multi-truth-finding problem has its unique features, such as the involvement of sets of values in claims, different implications of inter-value mutual exclusion, and larger source profiles. Considering these features could provide new opportunities for obtaining more accurate truth-finding results. Based on this insight, we propose an integrated Bayesian approach to the multi-truth-finding problem, by taking these features into account. To improve the truth-finding efficiency, we reformulate the multi-truth-finding problem model based on the mappings between sources and (sets of) values. New mutual exclusive relations are defined to reflect the possible co-existence of multiple true values. A finer-grained copy detection method is also proposed to deal with sources with large profiles. The experimental results on three real-world datasets show the effectiveness of our approach.
Wei, C-S, Lin, Y-P, Wang, Y-T, Jung, T-P, Bigdely-Shamlo, N & Lin, C-T 2015, 'Selective Transfer Learning for EEG-Based Drowsiness Detection', 2015 IEEE International Conference on Systems, Man, and Cybernetics, 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, Kowloon, China, pp. 3229-3232.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. On the pathway from laboratory settings to real-world environments, a major challenge in the development of a robust electroencephalogram (EEG)-based brain-computer interface (BCI) is to collect a significant amount of informative training data from each individual, which is labor-intensive and time-consuming and thereby significantly hinders the application of BCIs in real-world settings. A possible remedy for this problem is to leverage existing data from other subjects. However, substantial inter-subject variability of human EEG data could deteriorate rather than improve BCI performance. This study proposes a new transfer learning (TL)-based method that exploits a subject's pilot data to select auxiliary data from other subjects to enhance the performance of an EEG-based BCI for drowsiness detection. This method is based on our previous findings that the EEG correlates of drowsiness were stable within individuals across sessions and that an individual's pilot data could be used as calibration/training data to build a robust drowsiness detector. Empirical results of this study suggest the feasibility of leveraging existing BCI models built from other subjects' data, together with a relatively small amount of subject-specific pilot data, to develop a BCI that outperforms one based solely on the subject's pilot data.
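The selection step described above (use a new subject's small pilot set to decide which other subjects' data to borrow) can be sketched with a deliberately crude similarity criterion: rank other subjects by the distance between their mean feature vector and the pilot data's mean. This is a stand-in for the paper's actual selection rule, and all names are illustrative assumptions:

```python
def select_auxiliary_subjects(pilot, others, k=2):
    """Pick the k existing subjects whose EEG feature distribution is
    most similar to the new subject's pilot data.

    pilot is a list of feature vectors; others maps subject id -> list
    of feature vectors. Similarity here is Euclidean distance between
    mean feature vectors (a crude but illustrative criterion).
    """
    def mean(data):
        return [sum(col) / len(col) for col in zip(*data)]
    m = mean(pilot)
    def dist(subject):
        ms = mean(others[subject])
        return sum((a - b) ** 2 for a, b in zip(m, ms)) ** 0.5
    return sorted(others, key=dist)[:k]
```

The detector would then be trained on the pilot data plus the selected subjects' data, filtering out the subjects whose EEG differs too much to transfer usefully.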
Williams, JJR, Zhang, Z, Oberst, S & Lai, JCS 2015, 'Model updating of brake components' influence on instability predictions', 22nd International Congress on Sound and Vibration, ICSV 2015, International Congress on Sound and Vibration, Florence, Italy.
View/Download from: Publisher's site
View description>>
Customers perceive brake squeal as a major annoyance in their automobiles' acoustic performance. Squeal is self-excited, friction-induced audible noise above 1 kHz and one of the strongest cost drivers in the noise, vibration and harshness departments of automotive manufacturers. In order to reduce expensive and time-consuming dynamometer and road vehicle tests, numerical complex eigenvalue analysis has become popular in predicting brake squeal. However, one difficulty in assessing the prediction quality, apart from the linearisation of the system, is the complexity of the brake system to be modelled. Using structural finite elements, the computer model is often insufficiently detailed, insufficiently damped or insufficiently experimentally validated, so that instabilities causing brake squeal are over-predicted. Here we present the process of updating components of a brake system's squeal prediction and the improvement in modelling achieved using updated material parameters and a Rayleigh damping model, by applying a rigorous mesh refinement study and different friction laws.
Wu, D, Chuang, C-H & Lin, C-T 2015, 'Online driver's drowsiness estimation using domain adaptation with model fusion', 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), IEEE, Xi'an, China, pp. 904-910.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Drowsy driving is a pervasive problem among drivers, and is also an important contributor to motor vehicle accidents. It is very important to be able to estimate a driver's drowsiness level online so that preventative actions could be taken to avoid accidents. However, because of large individual differences, it is very challenging to design an estimation algorithm whose parameters fit all subjects. Some subject-specific calibration data must be used to tailor the algorithm for each new subject. This paper proposes a domain adaptation with model fusion (DAMF) online drowsiness estimation approach using EEG signals. By making use of EEG data from other subjects in a transfer learning framework, DAMF requires very little subject-specific calibration data, which significantly increases its utility in practice. We demonstrate using a simulated driving experiment and 15 subjects that DAMF can achieve much better performance than several other approaches.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2015, 'Infinite Author Topic Model based on Mixed Gamma-Negative Binomial Process', 2015 IEEE International Conference on Data Mining (ICDM), IEEE International Conference on Data Mining, IEEE, Atlantic City, USA, pp. 489-498.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Incorporating the side information of a text corpus, i.e., authors, time stamps, and emotional tags, into traditional text mining models has gained significant interest in the areas of information retrieval, statistical natural language processing, and machine learning. One branch of these works is the so-called Author Topic Model (ATM), which incorporates authors' interests as side information into the classical topic model. However, the existing ATM needs to predefine the number of topics, which is difficult and inappropriate in many real-world settings. In this paper, we propose an Infinite Author Topic (IAT) model to resolve this issue. Instead of assigning a discrete probability to a fixed number of topics, we use a stochastic process to determine the number of topics from the data itself. To be specific, we extend a gamma-negative binomial process to three levels in order to capture the author-document-keyword hierarchical structure. Furthermore, each document is assigned a mixed gamma process that accounts for multiple authors' contributions towards this document. An efficient Gibbs sampling inference algorithm with each conditional distribution being closed-form is developed for the IAT model. Experiments on several real-world datasets show the capability of our IAT model to learn the hidden topics, authors' interests on these topics and the number of topics simultaneously.
Xue, S, Lu, J, Zhang, G & Xiong, L 2015, 'SEIR Immune Strategy for Instance Weighted Naive Bayes Classification', Neural Information Processing, Part I, International Conference on Neural Information Processing, Springer, Istanbul, Turkey, pp. 283-292.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. Naive Bayes (NB) has been popularly applied in many classification tasks. However, in real-world applications, the pronounced advantage of NB is often challenged by insufficient training samples. Specifically, high variance may occur with respect to the limited number of training samples. The estimated class distribution of an NB classifier is inaccurate if the number of training instances is small. To handle this issue, in this paper we propose a SEIR (Susceptible, Exposed, Infectious and Recovered) immune-strategy-based instance weighting algorithm for naive Bayes classification, namely SWNB. The immune instance weighting allows the SWNB algorithm to adjust itself to the data without explicit specification of functional or distributional forms of the underlying model. Experiments and comparisons on 20 benchmark datasets demonstrate that the proposed SWNB algorithm outperforms the existing state-of-the-art instance-weighted NB algorithm and other related computational intelligence methods.
Xue, S, Lu, J, Zhang, G & Xiong, L 2015, 'Heterogeneous Feature Space based Task Selection Machine for Unsupervised Transfer Learning', 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), International Conference on Intelligent Systems and Knowledge Engineering, IEEE, Taipei, Taiwan, pp. 46-51.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Transfer learning techniques try to transfer knowledge from previous tasks to a new target task with either less training data or less training than traditional machine learning techniques. Since transfer learning is more concerned with the relatedness between tasks and their domains, it is useful for handling massive unlabeled data and for overcoming distribution and feature space gaps. In this paper, we propose a new task selection algorithm in the unsupervised transfer learning domain, called the Task Selection Machine (TSM). It addresses a key technical problem, i.e., feature mapping for heterogeneous feature spaces. An extended feature method is applied in the feature mapping algorithm. The TSM training algorithm, which is the main contribution of this paper, relies on this feature mapping. The proposed TSM meets the unsupervised transfer learning requirements and also solves unsupervised multi-task transfer learning issues.
Yang, C, Zhu, D & Zhang, G 2015, 'Semantic-Based Technology Trend Analysis', 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), IEEE, Taipei, Taiwan, pp. 222-228.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. Technology trend analysis offers a flexible instrument for understanding both opportunity and competition for emerging technologies. Semantic information is embedded in Science, Technology & Innovation (ST&I) records, which makes technology trend analysis more challenging. This paper proposes a semantic-based approach for technology trend analysis that emphasizes the Subject-Action-Object (SAO) structure. It also applies the trend analysis approach to extract technology information and to identify and predict the trend of technology development more effectively. An empirical study on graphene is completed to demonstrate the proposed trend analysis approach.
Yao, L, Sheng, QZ, Qin, Y, Wang, X, Shemshadi, A & He, Q 2015, 'Context-aware Point-of-Interest Recommendation Using Tensor Factorization with Social Regularization', Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15: The 38th International ACM SIGIR conference on research and development in Information Retrieval, ACM, Santiago, Chile, pp. 1007-1010.
View/Download from: Publisher's site
Yao, L, Wang, X, Sheng, QZ, Ruan, W & Zhang, W 2015, 'Service Recommendation for Mashup Composition with Implicit Correlation Regularization', 2015 IEEE International Conference on Web Services, 2015 IEEE International Conference on Web Services (ICWS), IEEE, New York, NY, pp. 217-224.
View/Download from: Publisher's site
Ye, T, Hao, Y, Wang, Z, Lai, C, Chen, S, Li, Z, Liang, J & Yuan, X 2015, 'Behavior analysis through collaborative visual exploration on trajectory data', 2015 IEEE Conference on Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on Visual Analytics Science and Technology (VAST), IEEE, Chicago, IL, USA, pp. 131-132.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. For VAST Challenge 2015, we proposed a collaborative visual exploration system for behavior analysis over trajectory records. We discuss technical details in this report in order to show how the system supports multiple users in collaboratively analyzing the same data, assists in sharing their findings, and helps construct an overall picture of their insights.
Yusoff, B & Merigo Lindahl, JM 2015, 'Heavy weighted geometric aggregation operators in analytic hierarchy process-group decision making', 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, Istanbul, Turkey.
View/Download from: Publisher's site
Zhang, Q, Zhang, G, Lu, J & Wu, D 2015, 'A framework of hybrid recommender system for personalized clinical prescription', 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), International Conference on Intelligent Systems and Knowledge Engineering, IEEE, Taipei, Taiwan, pp. 189-195.
View/Download from: Publisher's site
View description>>
© 2015 IEEE. General practitioners face a great challenge in clinical prescription owing to the increase of new drugs and their complex effects on different diseases. A personalized recommender system can help practitioners discover the mass of medical knowledge hidden in historical medical records and deal with the information overload problem in prescription. To support practitioners' decision making in prescription, this paper proposes a framework for a hybrid recommender system that integrates artificial neural networks and case-based reasoning. Three issues are considered in this system framework: (1) defining a patient's need from his/her symptoms, (2) mining features from free text in medical records, and (3) analyzing the temporal efficacy of drugs. The proposed recommender system is expected to help general practitioners improve their efficiency and reduce the risk of errors in daily clinical consultations with patients.
Zhang, Y, Chen, H, Zhang, G, Zhu, D & Lu, J 2015, 'Multiple Science Data-Oriented Technology Roadmapping Method', PICMET '15: Portland International Center for Management of Engineering and Technology, Portland International Center for Management of Engineering and Technology Conference, IEEE, Portland, USA, pp. 2278-2287.
View/Download from: Publisher's site
View description>>
© 2014 Portland International Conference on Management of Engineering and Technology. Since its first engagement with industry decades ago, Technology Roadmapping (TRM) has taken an increasingly important role in technical intelligence for current R&D planning and innovation tracking. Important topics for both science policy and engineering management researchers involve approaches that address real-world problems, explore value-added information in complex data sets, fuse analytic results and expert knowledge effectively and reasonably, and present findings to decision makers visually and understandably. Moreover, the growing variety of science data sources in the Big Data age increases these challenges and opportunities. Addressing these concerns, this paper proposes a TRM composing method with a clustering-based topic identification model, a multiple science data sources integration model, and a semi-automated fuzzy set-based TRM composing model with expert aid. We focus on a case study on computer science related R&D. Empirical data from the United States National Science Foundation Award data (innovative research ideas and proposals) and the Derwent Innovation Index data source (patents emphasizing technical products) provide vantage points at two stages of the R&D process. The understanding gained will assist in the description of computer science macro-trends for R&D decision makers.
Zhang, Z, Oberst, S, Williams, JJR & Lai, JCS 2015, 'Improving brake squeal propensity prediction by model updating', Acoustics 2015 Hunter Valley, Conference of the Australian Acoustical Society, Hunter Valley, NSW, Australia.
View description>>
Brake squeal, a significant warranty-claim-related cost problem for the automotive industry, is difficult to model numerically and analyse because of inherent nonlinearities; uncertainties in material properties, contact and boundary conditions; and system complexity. Often, model components are linearised and not experimentally validated. Sophisticated contact or friction models, as well as stiffness in joints, are often not considered owing to difficulties in experimental validation. In this study, a full brake system is modally updated at the component level and then at the subassembly level (pad assembly alone, pad in bracket). Squeal prediction using complex eigenvalue analysis on a finite element model of the system is compared to squeal results from a noise dynamometer test. The results are discussed with respect to further refinement of the modelling approach and improvements to brake squeal prediction.
Zowghi, D, Gervasi, V, Gregory, SC, Svensson, RB & Amyot, D 2015, 'Message from the chairs', 2015 IEEE 23rd International Requirements Engineering Conference (RE), 2015 IEEE 23rd International Requirements Engineering Conference (RE), IEEE, pp. iii-iv.
View/Download from: Publisher's site
Zowghi, D, Rimini, FD & Bano, M 2015, 'Problems and challenges of user involvement in software development: an empirical study', EASE, International Conference on Evaluation and Assessment in Software Engineering (EASE), ACM, Nanjing, China, pp. 9:1-9:1.
View/Download from: Publisher's site
View description>>
Copyright 2015 ACM. Context: The benefits of involving users in software development projects have been studied extensively over the last four decades and have been reported to contribute to user satisfaction, thus leading to system success. However, the relationship between user involvement and system success, being a multi-faceted and complex concept, has introduced many problems and challenges for practitioners. Objective: In this paper we present our findings from a case study to give a deeper understanding of the challenges and problems of user involvement during software development. Method: The data in the case study were collected from interviews, observations and project documents. Results: We present our results in four main categories related to users, communicative aspects, managerial considerations, and project issues. It was observed that system success is achievable even when there are problems and challenges in involving users. Conclusion: Understanding the nature of the problems related to user involvement helps project managers to develop appropriate strategies for increasing the effectiveness of user involvement.
Zuo, H, Zhang, G, Behbood, V & Lu, J 2015, 'Feature Spaces-based Transfer Learning', Proceedings of the 2015 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology, World Congress of the International-Fuzzy-Systems-Association (IFSA) / Conference of the European-Society-for-Fuzzy-Logic-and-Technology (EUSFLAT), Atlantis Press, Gijon, Spain, pp. 1000-1005.
View/Download from: Publisher's site
View description>>
Transfer learning provides an approach to solving target tasks more quickly and effectively by using previously acquired knowledge learned from source tasks. Most transfer learning approaches extract knowledge of the source domain in the given feature space. The issue is that a single perspective cannot fully mine the relationship between the source domain and the target domain. To deal with this issue, this paper develops a method that uses a Stacked Denoising Autoencoder (SDA) to extract new feature spaces for the source domain and target domain, and defines two fuzzy sets to analyse the variation in prediction accuracy of the target task in the new feature spaces.
Zuo, H, Zhang, G, Behbood, V, Lu, J & Meng, X 2015, 'Transfer Learning in Hierarchical Feature Spaces', 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), International Conference on Intelligent Systems and Knowledge Engineering, IEEE, Taipei, Taiwan, pp. 183-188.
View/Download from: Publisher's site
View description>>
Transfer learning provides an approach to solving target tasks more quickly and effectively by using previously acquired knowledge learned from source tasks. As one category of transfer learning approaches, feature-based transfer learning approaches aim to find a latent feature space shared between the source and target domains. The issue is that a single feature space cannot fully exploit the relationship between the source domain and the target domain. To deal with this issue, this paper proposes a transfer learning method that uses deep learning to extract hierarchical feature spaces, so that knowledge of the source domain can be exploited and transferred in multiple feature spaces with different levels of abstraction. In the experiment, the effectiveness of transfer learning in multiple feature spaces is compared, which can help us find the optimal feature space for transfer learning.
Babbush, R, Berry, DW, Sanders, YR, Kivlichan, ID, Scherer, A, Wei, AY, Love, PJ & Aspuru-Guzik, A 2015, 'Exponentially More Precise Quantum Simulation of Fermions in the Configuration Interaction Representation'.
Chasseur, T, Theis, LS, Sanders, YR, Egger, DJ & Wilhelm, FK 2015, 'Engineering adiabaticity at an avoided crossing with optimal control'.
Gao, L, Yu, S, Luan, TH & Zhou, W 2015, 'Introduction', Springer International Publishing, pp. 1-7.
View/Download from: Publisher's site
View description>>
© The Author(s) 2015. In this chapter, the backgrounds of Delay Tolerant Networks (DTNs) are introduced first. Hot research questions in DTNs are then formally addressed. Lastly, the organization of this monograph is presented at the end of this chapter.
Gil-Aluja, J, Terceño-Gómez, A, Ferrer-Comalat, JC, Merigó-Lindahl, JM & Linares-Mustarós, S 2015, 'Preface', pp. v-vi.
Kajdanowicz, T, Michalski, R, Musiał, K & Kazienko, P 2015, 'Learning in Unlabeled Networks - An Active Learning and Inference Approach'.
View description>>
The task of determining the labels of all network nodes based on knowledge of the network structure and the labels of some training subset of nodes is called within-network classification. It may happen that none of the nodes' labels are known and, additionally, there is no information about the number of classes to which nodes can be assigned. In such a case, a subset of nodes has to be selected for initial label acquisition. The question that arises is: 'labels of which nodes should be collected and used for learning in order to provide the best classification accuracy for the whole network?'. Active learning and inference is a practical framework to study this problem.
A set of methods for active learning and inference for within-network classification is proposed and validated. The utility score calculation for each node based on network structure is the first step in the process. The scores enable the nodes to be ranked. Based on the ranking, a set of nodes, for which the labels are acquired, is selected (e.g. by taking the top or bottom N from the ranking). The new measure-neighbour methods proposed in the paper suggest not obtaining labels of nodes from the ranking but rather acquiring labels of their neighbours. The paper examines 29 distinct formulations of utility score and selection methods, reporting their impact on the results of two collective classification algorithms: Iterative Classification Algorithm and Loopy Belief Propagation.
We advocate that the accuracy of the presented methods depends on the structural properties of the examined network. We claim that measure-neighbour methods will work better than the regular methods for networks with a higher clustering coefficient, and worse than regular methods for networks with a low clustering coefficient. According to our hypothesis, based on the clustering coefficient we are able to recommend an appropriate active learning and inference method.
Peris-Ortiz, M & Merigó-Lindahl, JM 2015, 'Entrepreneurship, Regional Development and Culture', Springer International Publishing, pp. 1-216.
View/Download from: Publisher's site
View description>>
© Springer International Publishing Switzerland 2015. The aim of this book is to analyze the relationships among entrepreneurship, regional development and culture in the current economy. Using an institutional approach, it examines the main theoretical issues and practices and their effect on different dimensions of society and the economy. Business creation is considered a key element of economic growth, innovation and employment. In recent years, entrepreneurial scholars have studied the factors that affect entrepreneurship and drive economic growth. In doing so, these scholars have aimed to understand what promotes entrepreneurial activity and also how to improve the development of regions or countries to increase wealth in society. The institutional approach can be applied to the entrepreneurship field to understand the phenomenon of entrepreneurship. This view considers the role of environment in the decision to create a company, which is critical to entrepreneurship, innovation and economic growth. Environment relates to legal aspects, public policy and support services (formal institutions) but is especially important in terms of sociocultural context (informal institutions). The creation of new ventures is greatly influenced by culture. Furthermore, it is important to highlight the influence of entrepreneurship on regional development, specifically through job creation, stimulation of economic growth and innovation. Thus, entrepreneurship, regional development and culture are fundamental for understanding economic growth and development as well as other phenomena such as technology transfer or women's entrepreneurship. Featuring contributions and cases studies from various countries and sectors, this volume provides an essential reference for scholars, academics, and researchers in entrepreneurship, business management, innovation and economics.
Sanders, YR, Wallman, JJ & Sanders, BC 2015, 'Bounding quantum gate error rate based on reported average fidelity'.
Thinh, LP, Torre, GDL, Bancal, J-D, Pironio, S & Scarani, V 2015, 'Randomness in post-selected events'.
Vasauskaite, J & Gill, AQ 2015, 'Sustainable Enterprise Architecture Towards Global Competitiveness'.
View description>>
Purpose. Competitiveness at the global scale is one of the key concerns enterprises need to deal with in the dynamic global business environment. Enterprises need to be sustainable and adaptable to respond to the constantly changing global business landscape. The challenge is how to design a sustainable adaptive enterprise. This paper proposes a sustainable adaptive enterprise architecture driven approach to help enterprises address the important concern of competitiveness from the perspective of today's and tomorrow's global market. Sustainability is a key element of the strategy for future growth of the enterprise, where the resource-efficient, environmentally responsible manufacturing of products that deliver sustainability benefits can leverage commercial advantage for the company. Methodology. The methodology of this paper included quantitative and qualitative research methods: the analysis of primary and secondary sources of information. The information necessary for the evaluation was obtained from strategic documents, statistical data sources and interviews with experts. A theoretical sustainable value framework and the adaptive enterprise architecture framework were adopted to develop the design of the sustainable adaptive enterprise architecture. The paper explains that an integrated approach combining the sustainability and enterprise architecture disciplines seems to be useful for addressing the concern of competitiveness at the global scale. Results. This paper explores the dynamic interdependencies across processes of technological innovation and production organization, education and skills formation, and economic performance. Its core argument is that economically and socially sustainable growth will depend on the evolution of the knowledge economy and the absorption and application of technological innovations and a parallel transformation of the work force in order to supply the skills needed to implement and operate the new technol...
Xuan, J, Lu, J, Luo, X & Zhang, G 2015, 'Nonnegative Multi-level Network Factorization for Latent Factor Analysis'.
View description>>
Nonnegative Matrix Factorization (NMF) aims to factorize a matrix into two optimized nonnegative matrices and has been widely used for unsupervised learning tasks such as product recommendation based on a rating matrix. However, although networks between nodes with the same nature exist, standard NMF overlooks them, e.g., the social network between users. This problem leads to comparatively low recommendation accuracy because these networks are also reflections of the nature of the nodes, such as the preferences of users in a social network. Also, social networks, as complex networks, have many different structures. Each structure is a composition of links between nodes and reflects the nature of nodes, so retaining the different network structures will lead to differences in recommendation performance. To investigate the impact of these network structures on the factorization, this paper proposes four multi-level network factorization algorithms based on the standard NMF, which integrate the vertical network (e.g., rating matrix) with the structures of the horizontal network (e.g., user social network). These algorithms are carefully designed with corresponding convergence proofs to retain four desired network structures. Experiments on synthetic data show that the proposed algorithms are able to preserve the desired network structures as designed. Experiments on real-world data show that considering the horizontal networks improves the accuracy of document clustering and recommendation with standard NMF, and various structures show their differences in performance on these two tasks. These results can be directly used in document clustering and recommendation systems.
Xuan, J, Lu, J, Zhang, G, Xu, RYD & Luo, X 2015, 'Nonparametric Relational Topic Models through Dependent Gamma Processes'.
View description>>
Traditional Relational Topic Models provide a way to discover the hidden topics from a document network. Many theoretical and practical tasks, such as dimensional reduction, document clustering, and link prediction, benefit from this revealed knowledge. However, existing relational topic models are based on an assumption that the number of hidden topics is known in advance, and this is impractical in many real-world applications. Therefore, in order to relax this assumption, we propose a nonparametric relational topic model in this paper. Instead of using fixed-dimensional probability distributions in its generative model, we use stochastic processes. Specifically, a gamma process is assigned to each document, which represents the topic interest of this document. Although this method provides an elegant solution, it brings additional challenges when mathematically modeling the inherent network structure of a typical document network, i.e., two spatially closer documents tend to have more similar topics. Furthermore, we require that the topics are shared by all the documents. In order to resolve these challenges, we use a subsampling strategy to assign each document a different gamma process from the global gamma process, and the subsampling probabilities of documents are assigned with a Markov Random Field constraint that inherits the document network structure. Through the designed posterior inference algorithm, we can discover the hidden topics and their number simultaneously. Experimental results on both synthetic and real-world network datasets demonstrate the capabilities of learning the hidden topics and, more importantly, the number of topics.
Zowghi, D 2015, '2015 IEEE 23rd International Requirements Engineering Conference (RE)', IEEE computer Society, NJ, USA.