Cao, L & Zhang, C 2006, 'Domain-Driven Data Mining', International Journal of Data Warehousing and Mining, vol. 2, no. 4, pp. 49-65.
View/Download from: Publisher's site
View description>>
Extant data mining is based on data-driven methodologies. It either views data mining as an autonomous, data-driven, trial-and-error process or analyzes business issues only in an isolated, case-by-case manner. As a result, the knowledge discovered is often not interesting to real business needs. Therefore, this article proposes a practical data mining methodology referred to as domain-driven data mining, which targets actionable knowledge discovery in a constrained environment to satisfy user preferences. Domain-driven data mining consists of a DDID-PD framework that considers key components such as constraint-based context, integration of domain knowledge, human-machine cooperation, in-depth mining, actionability enhancement, and an iterative refinement process. We also illustrate some examples of mining actionable correlations in Australian Stock Exchange data, which show that domain-driven data mining has the potential to further improve the actionability of patterns for practical use by industry and business.
Cao, L, Zhang, C & Liu, J 2006, 'Ontology-based integration of business intelligence', Web Intelligence and Agent Systems, vol. 4, no. 3, pp. 313-325.
View description>>
The integration of Business Intelligence (BI) has been taken by business decision-makers as an effective means to enhance enterprise 'soft power' and added value in the reconstruction and revolution of traditional industries. The existing solutions based on structural integration are to pack together data warehouse (DW), OLAP, data mining (DM) and reporting systems from different vendors. BI system users are finally delivered a reporting system in which reports, data models, dimensions and measures are predefined by system designers. As a result of a survey in the US, 85% of DW projects based on the above solutions failed to meet their intended objectives. In this paper, we summarize our investigation on the integration of BI on the basis of semantic integration and structural interaction. Ontology-based integration of BI is discussed for semantic interoperability in integrating DW, OLAP and DM. A hybrid ontological structure is introduced which includes conceptual view, analytical view and physical view. These views are matched with user interfaces, DW and enterprise information systems, respectively. Relevant ontological engineering techniques are developed for ontology namespace, semantic relationships, and ontological transformation, mapping and query in this ontological space. The approach is promising for business-oriented, adaptive and automatic integration of BI in the real world. Operational decision making experiments within a telecom company have demonstrated that a BI system utilizing the proposed approach is more flexible. © 2006 - IOS Press and the authors. All rights reserved.
Choi, AL, Levy, JI, Dockery, DW, Ryan, LM, Tolbert, PE, Altshul, LM & Korrick, SA 2006, 'Does living near a superfund site contribute to higher polychlorinated biphenyl (PCB) exposure?', ENVIRONMENTAL HEALTH PERSPECTIVES, vol. 114, no. 7, pp. 1092-1098.
View/Download from: Publisher's site
View description>>
We assessed determinants of cord serum polychlorinated biphenyl (PCB) levels among 720 infants born between 1993 and 1998 to mothers living near a PCB-contaminated Superfund site in Massachusetts, measuring the sum of 51 PCB congeners (∑PCB) and ascertaining maternal address, diet, sociodemographics, and exposure risk factors. Addresses were geocoded to obtain distance to the Superfund site and neighborhood characteristics. We modeled log10(∑PCB) as a function of potential individual and neighborhood risk factors, mapping model residuals to assess spatial correlates of PCB exposure. Similar analyses were performed for light (mono-tetra) and heavy (penta-deca) PCBs to assess potential differences in exposure pathways as a function of relative volatility. PCB-118 (relatively prevalent in site sediments and cord serum) was assessed separately. The geometric mean of ∑PCB levels was 0.40 (range, 0.068-18.14) ng/g serum. Maternal age and birthplace were the strongest predictors of ∑PCB levels. Maternal consumption of organ meat and local dairy products was associated with higher and smoking and previous lactation with lower ∑PCB levels. Infants born later in the study had lower ∑PCB levels, likely due to temporal declines in exposure and site remediation in 1994-1995. No association was found between ∑PCB levels and residential distance from the Superfund site. Similar results were found with light and heavy PCBs and PCB-118. Previously reported demographic (age) and other (lactation, smoking, diet) correlates of PCB exposure, as well as local factors (consumption of local dairy products and Superfund site dredging) but not residential proximity to the site, were important determinants of cord serum PCB levels in the study community.
Davis, ME, Smith, TJ, Laden, F, Hart, JE, Ryan, LM & Garshick, E 2006, 'Modeling particle exposure in US trucking terminals', ENVIRONMENTAL SCIENCE & TECHNOLOGY, vol. 40, no. 13, pp. 4226-4232.
View/Download from: Publisher's site
View description>>
Multi-tiered sampling approaches are common in environmental and occupational exposure assessment, where exposures for a given individual are often modeled based on simultaneous measurements taken at multiple indoor and outdoor sites. The monitoring data from such studies are hierarchical by design, imposing a complex covariance structure that must be accounted for in order to obtain unbiased estimates of exposure. Statistical methods such as structural equation modeling (SEM) represent a useful alternative to simple linear regression in these cases, providing simultaneous and unbiased predictions of each level of exposure based on a set of covariates specific to the exposure setting. We test the SEM approach using data from a large exposure assessment of diesel and combustion particles in the U.S. trucking industry. The exposure assessment includes data from 36 different trucking terminals across the United States sampled between 2001 and 2005, measuring PM2.5 and its elemental carbon (EC) and organic carbon (OC) components by personal monitoring, and sampling at two indoor work locations and an outdoor 'background' location. Using the SEM method, we predict the following: (1) personal exposures as a function of work-related exposure and smoking status; (2) work-related exposure as a function of terminal characteristics, indoor ventilation, job location, and background exposure conditions; and (3) background exposure conditions as a function of weather, nearby source pollution, and other regional differences across terminal sites. The primary advantage of SEMs in this setting is the ability to simultaneously predict exposures at each of the sampling locations, while accounting for the complex covariance structure among the measurements and descriptive variables. The statistically significant results and high R2 values observed in the trucking industry application support the broader use of this approach in exposure assessment modeling. © 2006 American...
Du, C, Yang, J, Wu, Q, Zhang, T, Wang, H, Chen, L & Wu, Z 2006, 'Extended Fitting Methods of Active Shape Model for the Location of Facial Feature Points', Lecture notes in computer science, vol. 4338, pp. 610-618.
View/Download from: Publisher's site
View description>>
In this study, we propose three extended fitting methods for the standard ASM (active shape model). First, profiles are extended from 1D to 2D; second, profiles of different landmarks are constructed individually; third, the length of the profiles is determined adaptively as the level changes during searching, and the displacements in the last level are constrained. Each method and the combination of the three methods are tested on the SJTU (Shanghai Jiaotong University) face database. In all cases, compared to the standard ASM, each method improves accuracy or speed to some extent, and the combination of the three methods improves both accuracy and speed greatly.
Gabrys, B & Ruta, D 2006, 'Genetic algorithms in classifier fusion', Applied Soft Computing, vol. 6, no. 4, pp. 337-347.
View/Download from: Publisher's site
Gold, DR, Willwerth, BM, Tantisira, KG, Finn, PW, Schaub, B, Perkins, DL, Tzianabos, A, Ly, NP, Schroeter, C, Gibbons, F, Campos, H, Oken, E, Gillman, MW, Palmer, LJ, Ryan, LM & Weiss, ST 2006, 'Associations of cord blood fatty acids with lymphocyte proliferation, IL-13, and IFN-gamma', JOURNAL OF ALLERGY AND CLINICAL IMMUNOLOGY, vol. 117, no. 4, pp. 931-938.
View/Download from: Publisher's site
View description>>
Background: N-3 and n-6 polyunsaturated fatty acids (PUFAs) have been hypothesized to have opposing influences on neonatal immune responses that might influence the risk of allergy or asthma. However, both n-3 eicosapentaenoic acid (EPA) and n-6 arachidonic acid (AA) are required for normal fetal development. Objective: We evaluated whether cord blood fatty acid levels were related to neonatal immune responses and whether n-3 and n-6 PUFA responses differed. Methods: We examined the relation of cord blood plasma n-3 and n-6 PUFAs (n = 192) to antigen- and mitogen-stimulated cord blood lymphocyte proliferation (n = 191) and cytokine (IL-13 and IFN-γ; n = 167) secretion in a US birth cohort. Results: Higher levels of n-6 linoleic acid were correlated with higher IL-13 levels in response to Bla g 2 (cockroach, P = .009) and Der f 1 (dust mite, P = .02). Higher n-3 EPA and n-6 AA levels were each correlated with reduced lymphocyte proliferation and IFN-γ levels in response to Bla g 2 and Der f 1 stimulation. Controlling for potential confounders, EPA and AA had similar independent effects on reduced allergen-stimulated IFN-γ levels. If neonates had either EPA or AA levels in the highest quartile, their Der f 1 IFN-γ levels were 90% lower (P = .0001) than those with both EPA and AA levels in the lowest 3 quartiles. Reduced AA/EPA ratio was associated with reduced allergen-stimulated IFN-γ level. Conclusion: Increased levels of fetal n-3 EPA and n-6 AA might have similar effects on attenuation of cord blood lymphocyte proliferation and IFN-γ secretion. Clinical implications: The implications of these findings for allergy or asthma development are not yet known. © 2006 American Academy of Allergy, Asthma and Immunology.
Gunes, H & Piccardi, M 2006, 'Assessing facial beauty through proportion analysis by image processing and supervised learning', INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, vol. 64, no. 12, pp. 1184-1199.
View/Download from: Publisher's site
He, X, Jia, W, Wu, Q & Hintz, T 2006, 'Description of the cardiac movement using hexagonal image structures', Computerized Medical Imaging and Graphics, vol. 30, no. 6-7, pp. 377-382.
View/Download from: Publisher's site
View description>>
The most notable characteristic of the heart is its movement. Detection of dynamic information describing cardiac movement such as amplitude, speed and acceleration facilitates interpretation of normal and abnormal function. In recent years, the Omni-directional M-mode Echocardiography System (OMES) has been developed as a process that builds moving information from a sequence of echocardiography image frames. OMES detects cardiac movement through construction and analysis of Position-Time Grey Waveform (PTGW) images on some feature points of the boundaries of the ventricles. Image edge detection plays an important role in determining the feature boundary points and their moving directions as the basis for extraction of PTGW images. Spiral Architecture (SA) has proved efficient for image edge detection. SA is a hexagonal image structure in which an image is represented as a collection of hexagonal pixels. Two operations, called spiral addition and spiral multiplication, are defined on SA; they correspond to image translation and rotation, respectively. In this paper, we perform ventricle boundary detection based on SA using various defined chain codes. The gradient direction of each boundary point is determined at the same time. PTGW images at each boundary point are obtained through a series of spiral additions according to the directions of boundary points. Unlike the OMES system, our new approach is no longer affected by the translation movement of the heart. As a result, three curves representing the amplitude, speed and acceleration of cardiac movement can be easily drawn from the PTGW images obtained. Our approach is more efficient and accurate than OMES, and our results contain a more robust and complete description of cardiac motion. © 2006 Elsevier Ltd. All rights reserved.
Houseman, EA, Coull, BA & Ryan, LM 2006, 'A functional-based distribution diagnostic for a linear model with correlated outcomes', BIOMETRIKA, vol. 93, no. 4, pp. 911-926.
View/Download from: Publisher's site
View description>>
In this paper we present an easy-to-implement graphical distribution diagnostic for linear models with correlated errors. Houseman et al. (2004) constructed quantile-quantile plots for the marginal residuals of such models, suitably transformed. We extend the pointwise asymptotic theory to address the global stochastic behaviour of the corresponding empirical cumulative distribution function, and describe a simulation technique that serves as a computationally efficient parametric bootstrap for generating representatives of its stochastic limit. Thus, continuous functionals of the empirical cumulative distribution function may be used to form global tests of normality. Through the use of projection matrices, we generalised our methods to include tests that are directed at assessing the normality of particular components of the error. Thus, tests proposed by Lange & Ryan (1989) follow as a special case. Our method works well both for models having independent units of sampling and for those in which all observations are correlated. © 2006 Biometrika Trust.
Hussain, OK, Chang, E, Hussain, FK & Dillon, TS 2006, 'A methodology for Risk measurement in e-transactions', COMPUTER SYSTEMS SCIENCE AND ENGINEERING, vol. 21, no. 1, pp. 17-31.
View description>>
Risk is present in almost every activity. Put differently, almost every activity may have undesired outcomes which the person undertaking the activity hopes will not occur. The quantification of tho
Kazienko, P & Musiał, K 2006, 'Recommendation Framework for Online Social Networks', ADVANCES IN WEB INTELLIGENCE AND DATA MINING, vol. 23, pp. 111-120.
View/Download from: Publisher's site
Li, Y & Ryan, L 2006, 'Inference on survival data with covariate measurement error - An imputation-based approach', SCANDINAVIAN JOURNAL OF STATISTICS, vol. 33, no. 2, pp. 169-190.
View/Download from: Publisher's site
View description>>
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error. © Board of the Foundation of the Scandinavian Journal of Statistics 2006.
Louis, GB, Dukic, V, Heagerty, PJ, Louis, TA, Lynch, CD, Ryan, LM, Schisterman, EF, Trumble, A & Grp, PMW 2006, 'Analysis of repeated pregnancy outcomes', STATISTICAL METHODS IN MEDICAL RESEARCH, vol. 15, no. 2, pp. 103-126.
View/Download from: Publisher's site
View description>>
Women tend to repeat reproductive outcomes, with a past history of an adverse outcome being associated with an approximately two-fold increase in subsequent risk. These observations support the need for statistical designs and analyses that address this clustering. Failure to do so may mask effects, result in inaccurate variance estimators, or produce biased or inefficient estimates of exposure effects. We review and evaluate basic analytic approaches for analysing reproductive outcomes, including ignoring reproductive history, treating it as a covariate, or avoiding the clustering problem by analysing only one pregnancy per woman, and contrast these with more modern approaches such as generalized estimating equations with robust standard errors and mixed models with various correlation structures. We illustrate the issues by analysing a sample from the Collaborative Perinatal Project dataset, demonstrating how the statistical model impacts summary statistics and inferences when assessing etiologic determinants of birth weight. © 2006 Edward Arnold (Publishers) Ltd.
Ly, NP, Ruiz-Pérez, B, Onderdonk, AB, Tzianabos, AO, Litonjua, AA, Liang, C, Laskey, D, Delaney, ML, DuBois, AM, Levy, H, Gold, DR, Ryan, LM, Weiss, ST & Celedón, JC 2006, 'Mode of delivery and cord blood cytokines: a birth cohort study', Clinical and Molecular Allergy, vol. 4, no. 1.
View/Download from: Publisher's site
View description>>
Background: The mechanisms for the association between birth by cesarean section and atopy and asthma are largely unknown. Objective: To examine whether cesarean section results in neonatal secretion of cytokines that are associated with increased risk of atopy and/or asthma in childhood, and to examine whether the association between mode of delivery and neonatal immune responses is explained by exposure to the maternal gut flora (a marker of the vaginal flora). Methods: CBMCs were isolated from 37 neonates at delivery, and secretion of IL-13, IFN-γ, and IL-10 (at baseline and after stimulation with antigens [dust mite and cat dander allergens, phytohemagglutinin, and lipopolysaccharide]) was quantified by ELISA. Total and specific microbes were quantified in maternal stool. The relation between mode of delivery and cord blood cytokines was examined by linear regression. The relation between maternal stool microbes and cord blood cytokines was examined by Spearman's correlation coefficients. Results: Cesarean section was associated with increased levels of IL-13 and IFN-γ. In multivariate analyses, cesarean section was associated with an increment of 79.4 pg/ml in secretion of IL-13 by CBMCs after stimulation with dust mite allergen (P < 0.001). Among children born by vaginal delivery, gram-positive anaerobes and total anaerobes in maternal stool were positively correlated with levels of IL-10, and gram-negative aerobic bacteria in maternal stool were negatively correlated with levels of IL-13 and IFN-γ. Conclusion: Cesarean section is associated with increased levels of IL-13 and IFN-γ, perhaps because of lack of labor and/or reduced exposure t...
Mauger, L & Stoianoff, NP 2006, 'Protecting Australia's Trade Mark Interests through the Australia-China Free Trade Agreement', LawAsia Journal, vol. 2006, no. 1, pp. 125-162.
View description>>
Intellectual property provisions and trade marks consideration in the proposed Australia-China Free Trade Agreement - significant implications for Australian interests.
McCarty, KM, Houseman, EA, Quamruzzaman, Q, Rahman, M, Mahiuddin, G, Smith, T, Ryan, L & Christiani, DC 2006, 'The impact of diet and betel nut use on skin lesions associated with drinking-water arsenic in Pabna, Bangladesh', ENVIRONMENTAL HEALTH PERSPECTIVES, vol. 114, no. 3, pp. 334-340.
View/Download from: Publisher's site
View description>>
An established exposure-response relationship exists between water arsenic levels and skin lesions. Results of previous studies with limited historical exposure data, and laboratory animal studies, suggest that diet may modify arsenic metabolism and toxicity. In this study, we evaluated the effect of diet on the risk of arsenic-related skin lesions in Pabna, Bangladesh. Six hundred cases and 600 controls loosely matched on age and sex were enrolled at Dhaka Community Hospital, Bangladesh, in 2001-2002. Diet, demographic data, and water samples were collected. Water samples were analyzed for arsenic using inductively coupled plasma mass spectroscopy. Betel nut use was associated with a greater risk of skin lesions in a multivariate model [odds ratio (OR) = 1.67; 95% confidence interval (CI), 1.18-2.36]. Modest decreases in risk of skin lesions were associated with fruit intake 1-3 times/month (OR = 0.68; 95% CI, 0.51-0.89) and canned goods at least 1 time/month (OR = 0.41; 95% CI, 0.20-0.86). Bean intake at least 1 time/day (OR = 1.89; 95% CI, 1.11-3.22) was associated with increased odds of skin lesions. Betel nut use appears to be associated with increased risk of developing skin lesions in Bangladesh. Increased intake of fruit and canned goods may be associated with reduced risk of lesions. Increased intake of beans may be associated with an increased risk of skin lesions. The results of this study do not provide clear support for a protective effect of vegetable and overall protein consumption against the development of skin lesions, but a modest benefit cannot be excluded.
Morales, KH, Ibrahim, JG, Chen, CJ & Ryan, LM 2006, 'Bayesian model averaging with applications to benchmark dose estimation for arsenic in drinking water', JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, vol. 101, no. 473, pp. 9-17.
View/Download from: Publisher's site
View description>>
An important component of quantitative risk assessment involves characterizing the dose-response relationship between an environmental exposure and adverse health outcome and then computing a benchmark dose, or the exposure level that yields a suitably low risk. This task is often complicated by model choice considerations, because risk estimates depend on the model parameters. We propose using Bayesian methods to address the problem of model selection and derive a model-averaged version of the benchmark dose. We illustrate the methods through application to data on arsenic-induced lung cancer from Taiwan. © 2006 American Statistical Association.
Morris, JS, Arroyo, C, Coull, BA, Ryan, LM, Herrick, R & Gortmaker, SL 2006, 'Using wavelet-based functional mixed models to characterize population heterogeneity in accelerometer profiles: A case study', JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, vol. 101, no. 476, pp. 1352-1364.
View/Download from: Publisher's site
View description>>
We present a case study illustrating the challenges of analyzing accelerometer data taken from a sample of children participating in an intervention study designed to increase physical activity. An accelerometer is a small device worn on the hip that records the minute-by-minute activity levels throughout the day for each day it is worn. The resulting data are irregular functions characterized by many peaks representing short bursts of intense activity. We model these data using the wavelet-based functional mixed model. This approach incorporates multiple fixed-effects and random-effects functions of arbitrary form, the estimates of which are adaptively regularized using wavelet shrinkage. The method yields posterior samples for all functional quantities of the model, which can be used to perform various types of Bayesian inference and prediction. In our case study, a high proportion of the daily activity profiles are incomplete (i.e., have some portion of the profile missing), and thus cannot be modeled directly using the previously described method. We present a new method for stochastically imputing the missing data that allows us to incorporate these incomplete profiles in our analysis. Our approach borrows strength from both the observed measurements within the incomplete profiles and from other profiles, from the same child as well as from other children with similar covariate levels, while appropriately propagating the uncertainty of the imputation throughout all subsequent inference. We apply this method to our case study, revealing some interesting insights into children's activity patterns. We point out some strengths and limitations of using this approach to analyze accelerometer data. © 2006 American Statistical Association.
Pham, TD, Beck, D & Yan, H 2006, 'Spectral Pattern Comparison Methods for Cancer Classification Based on Microarray Gene Expression Data', IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 11, pp. 2425-2430.
View/Download from: Publisher's site
View description>>
We present, in this paper, two spectral pattern comparison methods for cancer classification using microarray gene expression data. The proposed methods differ from other current classifiers in the ways features are selected and pattern similarities are measured. In addition, these spectral methods do not require any data preprocessing, which is necessary for many other classification techniques. Experimental results using three popular microarray data sets demonstrate the robustness and effectiveness of the spectral pattern classifiers. © 2006 IEEE.
Wu, Q, He, X, Hintz, T & Ye, Y 2006, 'A novel and uniform image partitioning on spiral architecture', International Journal of Computational Science and Engineering, vol. 2, no. 1/2, pp. 57-63.
View/Download from: Publisher's site
View description>>
Uniform image partitioning based on spiral architecture plays an important role in parallel image processing in many respects, such as uniform data partitioning, load balancing and zero data exchange between the processing nodes. However, when the number of partitions is not a power of seven (e.g., 7 or 49), every sub-image except one is split into a few fragments which are mixed together, and it cannot be determined which fragments belong to which sub-image. This is an unacceptable flaw for parallel image processing. This paper proposes a method to resolve the problem mentioned above. The experimental results show that the proposed method correctly identifies the fragments belonging to the same sub-image and successfully collects them together into a complete sub-image. These sub-images can then be distributed to different processing nodes for further processing. Copyright © 2006, Inderscience Publishers.
Zhang, C, Ying, M & Qiao, B 2006, 'Universal programmable devices for unambiguous discrimination', PHYSICAL REVIEW A, vol. 74, no. 4, pp. 1-9.
View/Download from: Publisher's site
View description>>
We discuss the problem of designing unambiguous programmable discriminators for any n unknown quantum states in an m-dimensional Hilbert space. The discriminator is a fixed measurement that has two kinds of input registers: the program registers and the data register. The quantum state in the data register is what users want to identify, which is confirmed to be among the n states in the program registers. The task of the discriminator is to tell the users which state stored in the program registers is equivalent to that in the data register. First, we give a necessary and sufficient condition for judging an unambiguous programmable discriminator. Then, if m = n, we present an optimal unambiguous programmable discriminator, in the sense of maximizing the worst-case probability of success. Finally, we propose a universal unambiguous programmable discriminator for arbitrary n quantum states. © 2006 The American Physical Society.
Zhang, H, He, S & Wu, Q 2006, 'Generic Object Detection', Journal of Yunnan Nationalities University, vol. 15, no. 4, pp. 261-267.
Abolhasan, M & Lipman, J 2006, 'Self-selection route discovery strategies for reactive routing in ad hoc networks', Proceedings of the first international conference on Integrated internet ad hoc and sensor networks - InterSense '06, the first international conference, ACM Press, Nice, France.
View/Download from: Publisher's site
View description>>
Routing in Ad hoc Networks has received a significant amount of attention. In recent years, the focus of research has been in on-demand (or reactive) routing protocols due to the recognition that these protocols have the potential to achieve higher levels of scalability than proactive routing strategies. However, most on-demand routing protocols proposed to date attempt to increase routing efficiency by using existing knowledge about the destination or by increasing the stability of the routes. Little research has been done to reduce route discovery overhead when no previous destination information is available. We present a number of different strategies, which encourage a more distributed and localised approach to route discovery by allowing each intermediate node during route discovery to make forwarding decisions using localised knowledge and self-selection. The use of self-selection for route discovery enables nodes to independently make route request (RREQ) forwarding decisions based upon a selection criterion or by satisfying certain conditions. The nodes which do not satisfy the selection criterion do not rebroadcast the RREQs. This provides a more effective and efficient search strategy than the use of traditional brute force blind flooding. We implemented our self-selecting route discovery strategies over AODV using the GloMoSim network simulation package, and compared the performance with existing routing protocols. Our simulation results show that a significant drop in the number of control packets can be achieved by giving each intermediate node more authority for self-selection during route discovery. Furthermore, a significant increase in routing performance is achieved as the number of nodes in the network is increased. © 2006 ACM.
Al-Oqaily, A & Kennedy, PJ 2006, 'Using a kernel-based approach to visualize integrated Chronic Fatigue Syndrome datasets', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, ACS, Sydney Australia, pp. 53-61.
View description>>
We describe the use of a kernel-based approach using the Laplacian matrix to visualize an integrated Chronic Fatigue Syndrome dataset comprising symptom and fatigue questionnaire and patient classification data, complete blood evaluation data and patient gene expression profiles. We present visualizations of the individual and integrated datasets with the linear and Gaussian kernel functions. An efficient approach inspired by computational linguistics for constructing a linear kernel matrix for the gene expression data is described. Visualizations of the questionnaire data show a cluster of non-fatigued individuals distinct from those suffering from Chronic Fatigue Syndrome that supports the fact that diagnosis is generally made using this kind of data. Clusters unrelated to patient classes were found in the gene expression data. Structure from the gene expression dataset dominated visualizations of integrated datasets that included gene expression data. © 2006, Australian Computer Society, Inc.
Apeh, ET & Gabrys, B 2006, 'Clustering for Data Matching', KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 1, PROCEEDINGS, 10th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Springer Berlin Heidelberg, Bournemouth, ENGLAND, pp. 1216-1225.
View/Download from: Publisher's site
Beauregard, M & Kennedy, PJ 2006, 'Robust Simulation of Lamprey Tracking', Parallel Problem Solving from Nature - PPSN, Parallel Problem Solving from Nature, Springer Berlin Heidelberg, Reykjavik, Iceland, pp. 641-650.
View/Download from: Publisher's site
View description>>
Biologically realistic computer simulation of vertebrates is a challenging problem with exciting applications in computer graphics and robotics. Once the mechanics of locomotion are available, it is interesting to mediate this locomotion with higher level behavior such as target tracking. One recent approach simulates a relatively simple vertebrate, the lamprey, using recurrent neural networks to model the central pattern generator of the spine and a physical model for the body. Target tracking behavior has also been implemented for such a model. However, previous approaches suffer from deficiencies where particular orientations of the body to the target cause the central pattern generator to shut down. This paper describes an approach to making target tracking more robust.
Beauregard, M, Kennedy, PJ & Debenham, J 2006, 'Fast simulation of animal locomotion: lamprey swimming', IFIP Advances in Information and Communication Technology, World Computer Congress, Springer US, Santiago, Chile, pp. 247-256.
View/Download from: Publisher's site
View description>>
© 2006 by International Federation for Information Processing. All rights reserved. Biologically realistic computer simulation of vertebrate locomotion is an interesting and challenging problem with applications in computer graphics and robotics. One current approach simulates a relatively simple vertebrate, the lamprey, using recurrent neural networks for the spine and a physical model for the body. The model is realized as a system of differential equations. The drawback with this approach is the slow speed of simulation. This paper describes two approaches to speeding up simulation of lamprey locomotion without sacrificing too much biological realism: (i) use of superior numerical integration algorithms and (ii) simplifications to the neural architecture of the lamprey.
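On the numerical-integration point, the gain from a higher-order integrator can be seen on any simple test equation. The sketch below contrasts Euler with the classic fourth-order Runge-Kutta step on dy/dt = -y; these are generic numerical methods, not the paper's lamprey model.

```python
import math

def euler_step(f, t, y, h):
    """First-order Euler step: cheap, but error shrinks only linearly in h."""
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """Classic fourth-order Runge-Kutta step: far more accurate than Euler
    for the same step size, often allowing much larger steps overall."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, y0, t0, t1, n):
    """Advance y from t0 to t1 in n fixed steps using the given stepper."""
    t, y, h = t0, y0, (t1 - t0) / n
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y
```

With only ten steps on the unit interval, RK4 already tracks the exact solution e^{-t} several orders of magnitude more closely than Euler.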
Cao, L 2006, 'Activity Mining: Challenges and Prospects', Advanced Data Mining and Applications, Proceedings, Lecture Notes in Artificial Intelligence, International Conference on Advanced Data Mining and Applications, Springer Berlin Heidelberg, Xi'an, China, pp. 582-593.
View/Download from: Publisher's site
View description>>
Activity data accumulated in real life, e.g. in terrorist activities and fraudulent customer contacts, presents special structural and semantic complexities. However, it may lead to or be associated with significant business impacts. For instance, a seri
Cao, L & Zhang, C 2006, 'Domain-Driven Actionable Knowledge Discovery in the Real World', Advances in Knowledge Discovery and Data Mining, Proceedings, Lecture Notes in Artificial Intelligence, Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer Berlin Heidelberg, Singapore, pp. 821-830.
View/Download from: Publisher's site
View description>>
Actionable knowledge discovery is one of the Grand Challenges in KDD. To this end, many methodologies have been developed. However, they either view data mining as an autonomous data-driven trial-and-error process, or only analyze the issues in an isolated a
Cao, L, Luo, C, Ni, J, Luo, D & Zhang, C 2006, 'Stock Data Mining through Fuzzy Genetic Algorithms', Proceedings of the 9th Joint International Conference on Information Sciences (JCIS-06), 9th Joint International Conference on Information Sciences (JCIS-06), Atlantis Press.
View/Download from: Publisher's site
View description>>
Stock data mining, such as financial pairs mining, is useful for trading support and market surveillance. Financial pairs mining targets pair relationships between financial entities such as stocks and markets. This paper introduces a fuzzy genetic algorithm framework and strategies for discovering pair relationships in stock data, such as high-dimensional trading data, by considering user preference. The developed techniques have the potential to mine pairs between stocks, between stock-trading rules, and between markets. Experiments on real stock data show that the proposed approach is useful for mining pairs helpful for real trading decision support and market surveillance.
Cao, L, Luo, D & Zhang, C 2006, 'Fuzzy Genetic Algorithms for Pairs Mining', PRICAI 2006: Trends in Artificial Intelligence, Proceedings, Lecture Notes in Artificial Intelligence, Pacific Rim International Conference on Artificial Intelligence, Springer Berlin Heidelberg, Guilin, China, pp. 711-720.
View/Download from: Publisher's site
View description>>
Pairs mining aims to discover pair relationships between entities, such as between stocks and markets, in financial data mining. It has emerged as a promising kind of data mining application. Due to practical complexities in the real-world pairs mining suc
Cao, L, Ni, J & Luo, D 2006, 'Ontological Engineering in Data Warehousing', Frontiers of WWW Research and Development - APWeb 2006, Proceedings - Lecture Notes in Computer Science, Asia Pacific Web Conference, Springer Berlin Heidelberg, Harbin, China, pp. 923-929.
View/Download from: Publisher's site
View description>>
In our previous work, we proposed the ontology-based integration of data warehousing to make existing data warehouse systems more user-friendly, adaptive and automatic. This paper further outlines a high-level picture of the ontological engineering in dat
Chen, J, Shen, J-L, Zhang, J & Wangsa, K 2006, 'A Novel Multimedia Database System for Efficient Image/Video Retrieval Based on Hybrid-Tree Structure', 2006 International Conference on Machine Learning and Cybernetics, 2006 International Conference on Machine Learning and Cybernetics, IEEE, Dalian, pp. 4353-4358.
View/Download from: Publisher's site
View description>>
With recent advances in computer vision, image processing and analysis, a retrieval process based on visual content has become a key component in achieving high-efficiency image queries for large multimedia databases. In this paper, we propose and develop
Chen, L, Bhowmick, SS & Li, J 2006, 'COWES: Clustering Web Users Based on Historical Web Sessions', Database Systems for Advanced Applications, Proceedings, 11th International Conference on Database Systems for Advanced Applications, Springer Berlin Heidelberg, Singapore, pp. 541-556.
View/Download from: Publisher's site
Chen, L, Bhowmick, SS & Li, J 2006, 'Mining Temporal Indirect Associations', Advances in Knowledge Discovery and Data Mining, Proceedings, 10th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer Berlin Heidelberg, Singapore, pp. 425-434.
View/Download from: Publisher's site
Chen, Q, Chen, Y-PP, Zhang, C & Li, L 2006, 'Mining Frequent Itemsets for Protein Kinase Regulation', PRICAI 2006: Trends in Artificial Intelligence, Pacific Rim International Conference on Artificial Intelligence, Springer Berlin Heidelberg, Guilin, China, pp. 222-230.
View/Download from: Publisher's site
View description>>
Protein kinases, a family of enzymes, have been viewed as an important signaling intermediary by living organisms for regulating critical biological processes such as memory, hormone response and cell growth. Unbalanced kinases are known to cause cancer and other diseases. The increasing efforts to collect, store and disseminate information about the entire kinase family not only lead to valuable data sets for understanding cell regulation but also pose a big challenge in extracting valuable knowledge about metabolic pathways from the data. Data mining techniques that have been widely used to find frequent patterns in large datasets can be extended and adapted to kinase data as well. This paper proposes a framework for mining frequent itemsets from the collected kinase dataset. An experiment using AMPK regulation data demonstrates that our approaches are useful and efficient in analyzing kinase regulation data.
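As a toy illustration of the kind of frequent-itemset mining described above, the brute-force enumerator below finds all itemsets meeting a support threshold. It stands in for Apriori/FP-growth on small data and is not the authors' implementation.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Enumerate all itemsets whose support (number of containing
    transactions) meets min_support -- a didactic brute-force stand-in
    for Apriori/FP-growth, feasible only on tiny datasets."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= set(t))
            if support >= min_support:
                result[cand] = support
                found = True
        if not found:
            # Anti-monotonicity: no frequent k-itemset means none of size k+1.
            break
    return result
```

The early-exit line is the same pruning principle Apriori exploits: every subset of a frequent itemset must itself be frequent.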
Cheng, E & Piccardi, M 2006, 'Matching Moving Objects by parts with a maximum likelihood criterion', Proceedings of Image and Vision Computing New Zealand 2006, Image and Vision Computing Conference, University of Auckland, New Zealand, pp. 373-378.
Cheng, ED & Piccardi, M 2006, 'Matching of Objects Moving Across Disjoint Cameras', 2006 International Conference on Image Processing, 2006 International Conference on Image Processing, IEEE, Atlanta, USA, pp. 1769-1772.
View/Download from: Publisher's site
View description>>
Matching of single individuals as they move across disjoint camera views is a challenging task in video surveillance. In this paper, we present a novel algorithm capable of matching single individuals in such a scenario based on appearance features. In order to reduce the variable illumination effects in a typical disjoint camera environment, a cumulative color histogram transformation is first applied to the segmented moving object. Then, an incremental major color spectrum histogram representation (IMCSHR) is used to represent the appearance of a moving object and cope with small pose changes occurring along the track. An IMCSHR-based similarity measurement algorithm is also proposed to measure the similarity of any two segmented moving objects. A final step of post-matching integration along the object's track is eventually applied. Experimental results show that the proposed approach proved capable of providing correct matching in typical situations. © 2006 IEEE.
Cheng, ED, Madden, C & Piccardi, M 2006, 'Mitigating the Effects of Variable Illumination for Tracking across Disjoint Camera Views', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE, Sydney, Australia, pp. 32-37.
View/Download from: Publisher's site
View description>>
Tracking people by their appearance across disjoint camera views is challenging since appearance may vary significantly across such views. This problem has been tackled in the past by computing intensity transfer functions between each camera pair during an initial training stage. However, in real-life situations, intensity transfer functions depend not only on the camera pair, but also on the actual illumination at pixel-wise resolution and may prove impractical to estimate to a satisfactory extent. For this reason, in this paper we propose an appearance representation for people tracking capable of coping with the typical illumination changes occurring in a surveillance scenario. Our appearance representation is based on an online K-means color clustering algorithm, a fixed, data-dependent intensity transformation, and the incremental use of frames. Moreover, a similarity measurement is proposed to match the appearance representations of any two given moving objects along sequences of frames. Experimental results presented in this paper show that the proposed method provides a viable and effective approach for tracking people across disjoint camera views in typical surveillance scenarios. © 2006 IEEE.
Christen, P, Kennedy, PJ, Li, J, Simoff, SJ & Williams, G 2006, 'Data Mining 2006: Proceedings of the Australasian Data Mining Conference (AusDM 2006)', Data Mining 2006: Proceedings of the Australasian Data Mining Conference (AusDM 2006), Australian Data Mining Conference, Australian Computer Society, Sydney.
Christen, P, Kennedy, PJ, Li, J, Simoff, SJ & Williams, GJ 2006, 'Preface', Conferences in Research and Practice in Information Technology Series.
Corchado, E, Baruque, B & Gabrys, B 2006, 'Maximum Likelihood Topology Preserving Ensembles', Intelligent Data Engineering and Automated Learning - IDEAL 2006, Proceedings, 7th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2006), Springer Berlin Heidelberg, University of Burgos, Burgos, Spain, pp. 1434-1442.
View/Download from: Publisher's site
Davis, M, Smith, T, Laden, F, Hart, J, Ryan, L & Garshick, E 2006, 'Structural equation modeling in exposure assessment', Epidemiology, ISEE/ISEA 2006 Conference, Lippincott Williams & Wilkins, Paris, France, pp. S466-S466.
View/Download from: Publisher's site
Gabrys, B, Baruque, B & Corchado, E 2006, 'Outlier Resistant PCA Ensembles', Knowledge-Based Intelligent Information and Engineering Systems, Pt 3, Proceedings, 10th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Springer Berlin Heidelberg, Bournemouth, England, pp. 432-440.
View/Download from: Publisher's site
Gunes, H & Piccardi, M 2006, 'A bimodal face and body gesture database for automatic analysis of human nonverbal affective behavior', 18th International Conference on Pattern Recognition, Vol 1, Proceedings, International Conference on Pattern Recognition, IEEE Computer Society, Hong Kong, China, pp. 1148-1153.
View/Download from: Publisher's site
View description>>
To be able to develop and test robust affective multimodal systems, researchers need access to novel databases containing representative samples of human multi-modal expressive behavior. The creation of such databases requires a major effort in the defin
Gunes, H & Piccardi, M 2006, 'Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body', Use of Vision in Human-Computer Interaction: Proceedings of the HCSNet Workshop on the use of vision in human-computer interaction, the HCSNet Workshop on the use of vision in human-computer interaction, Australian Computer Society, Canberra, Australia, pp. 35-42.
View description>>
A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data has to be further annotated in order to become usable for the automated systems. Most of the existing studies of emotion or affect annotation are monomodal. Instead, in this paper, we explore how independent human observers annotate affect display from monomodal face data compared to bimodal face-and-body data. To this aim we collected visual affect data by recording the face and face-and-body simultaneously. We then conducted a survey by asking human observers to view and label the face and face-and-body recordings separately. The results obtained show that in general, viewing face-and-body simultaneously helps with resolving the ambiguity in annotating emotional behaviours.
Gunes, H & Piccardi, M 2006, 'Creating and annotating affect databases from face and body display: A contemporary survey', 2006 IEEE International Conference on Systems, Man, and Cybernetics, Vols 1-6, Proceedings, IEEE Conference on Systems, Man and Cybernetics, IEEE, Taipei, Taiwan, pp. 2426-2433.
View/Download from: Publisher's site
View description>>
Databases containing representative samples of human multi-modal expressive behavior are needed for the development of affect recognition systems. However, at present publicly available databases exist mainly for single expressive modalities such as facial expressions, static and dynamic hand postures, and dynamic hand gestures. Only recently, a first bimodal affect database consisting of expressive face and upper-body display has been released. To foster development of affect recognition systems, this paper presents a comprehensive survey of the current state-of-the-art in affect database creation from face and body display and elicits the requirements of an ideal multi-modal affect database.
He, X, Hintz, T, Wu, Q, Wang, H & Jia, W 2006, 'A new simulation of spiral architecture', Proceedings of the 2006 International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV'06, International Conference on Image Processing, Computer Vision and Pattern Recognition, CSREA Press, Las Vegas, USA, pp. 570-575.
View description>>
Spiral Architecture is a relatively new and powerful approach to machine vision systems. The geometrical arrangement of pixels on Spiral Architecture can be described in terms of a hexagonal grid. However, all existing hardware for capturing and displaying images is based on a rectangular architecture. This has become a serious problem affecting advanced research on Spiral Architecture. In this paper, a new approach to mimicking Spiral Architecture is presented. This mimic Spiral Architecture almost retains image resolution and does not introduce distortion. Furthermore, images can be smoothly and easily transferred between the traditional square structure and this new hexagonal structure. In this paper, we also present a fast way to locate hexagonal pixels. Another contribution of this paper is a novel construction of hexagonal pixels that are four times as big as the virtual hexagonal pixels. This construction of larger hexagonal pixels does not change the axes of symmetry, and does not create any spaces or overlaps between hexagons.
He, X, Jia, W, Hur, N, Wu, Q & Kim, J 2006, 'Image Translation and Rotation on Hexagonal Structure', The Sixth IEEE International Conference on Computer and Information Technology (CIT'06), The Sixth IEEE International Conference on Computer and Information Technology (CIT'06), IEEE, Seoul, Korea, pp. 1-6.
View/Download from: Publisher's site
View description>>
Image translation and rotation are becoming essential operations in many application areas such as image processing, computer graphics and pattern recognition. Conventional translation moves an image pixel by pixel, and conventional rotation usually comprises computation-intensive CORDIC operations. Traditionally, images are represented on a square pixel structure. In this paper, we perform reversible and fast image translation and rotation based on a hexagonal structure. An image represented on the hexagonal structure is a collection of hexagonal pixels of equal size. The hexagonal structure provides a more flexible and efficient way to perform image translation and rotation without losing image information. As there is not yet any available hardware for capturing or displaying images on a hexagonal structure, we apply a newly developed virtual hexagonal structure. The virtual hexagonal structure retains image resolution during the process of image transformations, and introduces almost no distortion. Furthermore, images can be smoothly and easily transferred between the traditional square structure and the hexagonal structure. © 2006 IEEE.
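Although the paper's virtual hexagonal structure is not reproduced here, the flavour of exact, lossless translation and rotation on a hexagonal lattice can be shown with standard cube coordinates, where translation is component-wise addition and a 60-degree rotation is a signed cyclic permutation. The helper names below are illustrative.

```python
def hex_rotate_60(x, y, z):
    """Rotate a hexagonal cell one 60-degree step about the origin in
    cube coordinates (which satisfy x + y + z == 0). This is a standard
    hex-grid identity, not the paper's spiral-addressing scheme."""
    assert x + y + z == 0
    return -z, -x, -y

def hex_translate(cell, delta):
    """Translation on a hex lattice is component-wise vector addition."""
    return tuple(c + d for c, d in zip(cell, delta))
```

Because both operations are exact integer maps, six successive 60-degree rotations return every cell to its starting position, mirroring the reversibility the abstract emphasizes.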
He, X, Jia, W, Hur, N, Wu, Q, Kim, J & Hintz, T 2006, 'Bilateral Edge Detection on a Virtual Hexagonal Structure', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Symposium on Visual Computing, Springer Berlin Heidelberg, United States, pp. 176-185.
View/Download from: Publisher's site
View description>>
Edge detection plays an important role in image processing. This paper presents an edge detection method based on bilateral filtering, which achieves better performance than single Gaussian filtering. In this form of filtering, both the spatial closeness and the intensity similarity of pixels are considered, in order to preserve important visual cues provided by edges while reducing the sharpness of transitions in intensity values. In addition, the edge detection method proposed in this paper operates on sampled images represented on a newly developed virtual hexagonal structure. Due to the compact and circular nature of the hexagonal lattice, a better quality edge map is obtained on the hexagonal structure than with common edge detection on the square structure. Experimental results using the proposed methods exhibit encouraging performance. © Springer-Verlag Berlin Heidelberg 2006.
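The core bilateral idea (weighting neighbours by both spatial closeness and intensity similarity) can be sketched in one dimension. The function and parameter values below are illustrative, not the paper's hexagonal-lattice implementation.

```python
import math

def bilateral_filter_1d(signal, spatial_sigma=2.0, range_sigma=0.2, radius=3):
    """Bilateral filtering of a 1-D signal: each output sample is a
    weighted mean of its neighbours, weighted by both spatial distance
    and intensity difference, so noise is smoothed while sharp edges
    survive (a plain Gaussian filter would blur them)."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * spatial_sigma ** 2)) \
              * math.exp(-((v - signal[j]) ** 2) / (2 * range_sigma ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

On a step edge, samples across the step get a near-zero intensity-similarity weight, so the edge stays sharp; that preserved transition is exactly what a subsequent edge detector picks up.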
He, X, Wang, H, Hur, N, Jia, W, Wu, Q, Kim, J & Hintz, T 2006, 'Uniformly Partitioning Images on Virtual Hexagonal Structure', 2006 9th International Conference on Control, Automation, Robotics and Vision, 2006 9th International Conference on Control, Automation, Robotics and Vision, IEEE, Singapore, pp. 891-896.
View/Download from: Publisher's site
View description>>
Hexagonal structure is different from the traditional square structure for image representation. The geometrical arrangement of pixels on a hexagonal structure can be described in terms of a hexagonal grid. Uniformly separating an image into seven similar copies at a smaller scale has commonly been used for parallel and accurate image processing on a hexagonal structure. However, all existing hardware for capturing and displaying images is based on a square architecture. This has become a serious problem affecting advanced research based on the hexagonal structure. Furthermore, the current techniques used for uniform separation of images on a hexagonal structure do not coincide with the rectangular shape of images. This has been an obstacle to the use of the hexagonal structure for image processing. In this paper, we briefly review a newly developed virtual hexagonal structure that is scalable. Based on this virtual structure, algorithms for uniform image separation are presented. The virtual hexagonal structure retains image resolution during the process of image separation, and does not introduce distortion. Furthermore, images can be smoothly and easily transferred between the traditional square structure and the hexagonal structure while the image shape is kept rectangular. © 2006 IEEE.
He, X, Zhang, H, Hur, N, Kim, J, Kim, T & Wu, Q 2006, 'Complete Camera Calibration Using Line-Shape Objects', TENCON 2006 - 2006 IEEE Region 10 Conference, TENCON 2006 - 2006 IEEE Region 10 Conference, IEEE, Hong Kong, China, pp. 1-4.
View/Download from: Publisher's site
View description>>
Most object-based calibration methods use a 3D or 2D pattern. A novel and more flexible 1D object-based calibration was introduced only a couple of years ago, and merely for the estimation of intrinsic parameters, without considering camera distortion. Estimation of extrinsic parameters is essential when multiple cameras are involved for simultaneously taking images from different view angles and when knowledge of the relative locations between the cameras is required. Estimation of distortion parameters is critical for precise estimation of all calibration parameters. In this paper, we perform a multi-layer camera calibration involving both intrinsic and extrinsic parameters, including distortion parameters, based on a line-shape calibration object. © 2006 IEEE.
He, X, Zhang, H, Hur, N, Kim, J, Wu, Q & Kim, T 2006, 'Estimation of Internal and External Parameters for Camera Calibration Using 1D Pattern', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE, Sydney, Australia, pp. 93-98.
View/Download from: Publisher's site
View description>>
Camera calibration is the estimation of the intrinsic and extrinsic parameters of a camera. Most object-based calibration methods use a 3D or 2D pattern. A novel and more flexible 1D object-based calibration was introduced only a couple of years ago, but merely for the estimation of intrinsic parameters. The estimation of extrinsic parameters is essential when multiple cameras are involved for simultaneously taking images from different view angles and when knowledge of the relative locations between the cameras is required. Though it is relatively simple using a 2D or 3D calibration pattern, the estimation of extrinsic parameters is not obvious using a 1D pattern. In this paper, we perform a 1D camera calibration involving both intrinsic and extrinsic parameters. © 2006 IEEE.
Zhang, H, Jia, W, He, X & Wu, Q 2006, 'Learning-Based License Plate Detection Using Global and Local Features', 18th International Conference on Pattern Recognition (ICPR'06), 18th International Conference on Pattern Recognition (ICPR'06), IEEE, Hong Kong, pp. 1102-1105.
View/Download from: Publisher's site
View description>>
This paper proposes a license plate detection algorithm using both global statistical features and local Haar-like features. Classifiers using global statistical features are first constructed through simple learning procedures. Using these classifiers, more than 70% of the background area can be excluded from further training or detection. Then the AdaBoost learning algorithm is used to build the other classifiers based on selected local Haar-like features. Combining the classifiers using the global features and the local features, we obtain a cascade classifier. The classifiers based on global features decrease the complexity of the system. They are followed by the classifiers based on local Haar-like features, which makes the final classifier invariant to the brightness, color, size and position of license plates. Encouraging detection rates are achieved in the experiments. © 2006 IEEE.
Hussain, F, Chang, E & Dillon, T 2006, 'Ontological Manifestation of Trust for Service Oriented Environment', 2006 IEEE International Conference on Industrial Informatics, 2006 IEEE International Conference on Industrial Informatics, IEEE, Singapore, pp. 593-598.
View/Download from: Publisher's site
View description>>
Trust and reputation are vital components of trusted e-business. In the literature, however, there has been no effort to propose an ontology for trust. The trusting agent in a service oriented environment may trust a software agent, a human agent, a service or a product. Based on this distinction, trust ontologies can be proposed for different domains. The trust ontologies for the individual domains are proposed and discussed.
Hussain, FK, Chang, E & Dillon, TS 2006, 'Defining Reputation in Service Oriented Environment', Advanced Int'l Conference on Telecommunications and Int'l Conference on Internet and Web Applications and Services (AICT-ICIW'06), Advanced Int'l Conference on Telecommunications and Int'l Conference on Internet and Web Applications and Services (AICT-ICIW'06), IEEE, p. 177.
View/Download from: Publisher's site
View description>>
Reputation has a profound impact on the Trusting Agent and Trusted Agent in business interactions. Moral, ethical and legal guidelines are implemented as a result of the promotion of fair trading practices, honesty from all parties, consumer protection legislation, service quality assessment, and assurance for customers, e-businesses and service-oriented environments. In this paper we propose a definition of reputation that is more suited to service oriented environments. Additionally we explain in detail, all the terms in the definition. © 2006 IEEE.
Hussain, O, Chang, E, Hussain, F & Dillon, T 2006, 'Quantifying Risk in Financial Terms in an e-Transaction', 2006 IEEE International Conference on Industrial Informatics, 2006 IEEE International Conference on Industrial Informatics, IEEE, Singapore, pp. 587-592.
View/Download from: Publisher's site
View description>>
An outcome of risk is the possible loss that could be incurred in an interaction. In a peer-to-peer financial interaction, this is usually the financial loss in the resources of the trusting agent that are involved in the interaction. Hence, in deciding whether to interact with a probable trusted agent, one consideration for the trusting agent when analyzing the risk is to determine the potential loss in its resources that may occur. In this paper, we propose a methodology by which the trusting agent can determine beforehand the possible loss that could be incurred as a result of interacting with a probable trusted agent.
Jia, W, Zhang, H, He, X & Wu, Q 2006, 'A Comparison on Histogram Based Image Matching Methods', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE, Sydney, Australia, pp. 1-6.
View/Download from: Publisher's site
View description>>
Using the colour histogram as a stable representation over change in view has been widely used for object recognition. In this paper, three newly proposed histogram-based methods are compared with three other popular methods: the conventional histogram intersection (HI) method, Wong and Cheung's merged palette histogram matching (MPHM) method, and Gevers' colour ratio gradient (CRG) method. These methods are tested on vehicle number plate images for number plate classification. Experimental results disclose that the CRG method is the best choice in terms of speed, and the GWHI method gives the best classification results. Overall, the CECH method produces the best performance when both speed and classification performance are concerned. © 2006 IEEE.
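Of the compared methods, conventional histogram intersection is the simplest to state: the similarity of two histograms is the sum of bin-wise minima, normalized by the model histogram's total count. A minimal sketch:

```python
def histogram_intersection(h1, h2):
    """Swain-and-Ballard-style histogram intersection: sum of bin-wise
    minima, normalized by the total count of the second (model)
    histogram, giving a score in [0, 1] for equal-mass histograms."""
    assert len(h1) == len(h2)
    return sum(min(a, b) for a, b in zip(h1, h2)) / sum(h2)
```

Identical histograms score 1.0 and histograms with no overlapping mass score 0.0, which is what makes the measure usable directly as a classification similarity.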
Jia, W, Zhang, H, He, X & Wu, Q 2006, 'Image Matching Using Colour Edge Cooccurrence Histograms', 2006 IEEE International Conference on Systems, Man and Cybernetics, 2006 IEEE International Conference on Systems, Man and Cybernetics, IEEE, Taipei, Taiwan, pp. 2413-2419.
View/Download from: Publisher's site
View description>>
In this paper, a novel colour edge cooccurrence histogram (CECH) method is proposed to match images by measuring similarities between their CECH histograms. Unlike the previous colour edge cooccurrence histogram proposed by Crandall et al. in [2], we only investigate those pixels which are located at the two sides of edge points in their gradient direction lines and at a distance away from the edge points. When measuring similarities between two CECH histograms, a newly proposed Gaussian weighted histogram intersection (GWHI) method is extended for this purpose. Both identical colour pairs and similar colour pairs are taken into account in our algorithm, and the weights are decided by the larger distance between two colour pairs involved in matching. The proposed algorithm is tested for matching vehicle number plate images captured under various illumination conditions. Experimental results demonstrate that the proposed algorithm can be used to compare images in real-time, and is robust to illumination variations and insensitive to the model images selected. © 2006 IEEE.
Jia, W, Zhang, H, He, X & Wu, Q 2006, 'Symmetric Color Ratio in Spiral Architecture', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Asian Conference on Computer Vision, Springer Berlin Heidelberg, Hyderabad, India, pp. 204-213.
View/Download from: Publisher's site
View description>>
Color ratio gradient (CRG) is a robust method used for color image retrieval and object recognition. It has been proven to be illumination-independent and geometry-insensitive when tested on scenery images. However, the color ratio gradient produces unsatisfying matching results when dealing with an object which appears rotated by a certain relative angle in the model and target images. In this paper, we adopt the idea of color ratio gradient and develop a new method called Symmetric Color Ratio (SCR) based on a hexagonal image structure, the Spiral Architecture (SA). We focus on license plate images and our aim is to achieve a higher matching rate between the SCR histogram of the images within same class in order to separate different classes of images. Our experimental results demonstrate that the proposed SCR is robust to changes over view angles. © Springer-Verlag Berlin Heidelberg 2006.
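The illumination invariance that motivates colour-ratio methods can be illustrated directly: under a diagonal lighting model, each channel of two nearby pixels is scaled by the same factor, so cross-channel ratios cancel the illuminant. The function below is a generic sketch of that principle, not the paper's SCR on the Spiral Architecture.

```python
def color_ratio(p, q):
    """Illumination-invariant colour ratios between two neighbouring
    RGB pixels p and q, in the spirit of Gevers' colour ratio gradient.
    If both pixels are lit by the same (per-channel) illuminant, the
    scale factors cancel in ratios such as (p_R * q_G) / (p_G * q_R)."""
    r1, g1, b1 = p
    r2, g2, b2 = q
    return (r1 * g2 / (g1 * r2),
            g1 * b2 / (b1 * g2),
            b1 * r2 / (r1 * b2))
```

Rescaling both pixels by an arbitrary per-channel illuminant leaves all three ratios unchanged, which is exactly the invariance property claimed for CRG-style features.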
Wang, J & Zhang, C 2006, 'Dynamic Focus Strategies for Electronic Trade Execution in Limit Order Markets', The 8th IEEE International Conference on E-Commerce Technology and The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services (CEC/EEE'06), The 8th IEEE International Conference on E-Commerce Technology and The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services (CEC/EEE'06), IEEE, San Francisco, California, USA, pp. 1-8.
View/Download from: Publisher's site
View description>>
Trade execution has attracted much attention from academia and the financial industry due to its significant impact on investment return. Recently, limit order strategies for trade execution were backtested on historical order/trade data and dynamic price adjustment was proposed to respond to state variables in execution. This paper emphasizes the effect of dynamic volume adjustment on limit order strategies and proposes dynamic focus (DF) strategies, which incorporate a series of market orders of different volumes into the limit order strategy and dynamically adjust their volume by monitoring state variables such as inventory and order book imbalance in real time. The sigmoid function is suggested as the quantitative model to represent the relationship between the state variables and the volume to be adjusted. The empirical results on historical order/trade data of the Australian Stock Exchange show that the DF strategy can outperform the limit order strategy which does not adopt dynamic volume adjustment.
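As a rough sketch of the volume-adjustment idea, the snippet below maps an order-book-imbalance state variable through a sigmoid to decide how much of the remaining inventory to convert into a market order. The function names, the choice of imbalance as the sole state variable and the steepness parameter `k` are illustrative assumptions, not the paper's actual model.

```python
import math

def sigmoid(x, k=1.0):
    """Logistic function squashing a state variable into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-k * x))

def market_order_volume(remaining_inventory, imbalance, k=2.0):
    """Map an order-book imbalance in [-1, 1] to the number of shares of the
    remaining inventory to submit as a market order now.  Positive imbalance
    (the book moving against us) triggers more aggressive trading."""
    return int(remaining_inventory * sigmoid(imbalance, k))
```

With a neutral book (imbalance 0) the rule commits half the remaining inventory, and it scales smoothly up or down as the imbalance moves.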
Kazienko, P & Musiał, K 2006, 'Social Capital in Online Social Networks', KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 10th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Springer Berlin Heidelberg, Bournemouth, ENGLAND, pp. 417-424.
View/Download from: Publisher's site
Li, J, Li, H, Wong, L, Pei, J & Dong, G 2006, 'Minimum description length principle: Generators are preferable to closed patterns', Proceedings of the National Conference on Artificial Intelligence, pp. 409-414.
View description>>
The generators and the unique closed pattern of an equivalence class of itemsets share a common set of transactions. The generators are the minimal ones among the equivalent itemsets, while the closed pattern is the maximum one. As a generator is usually smaller than the closed pattern in cardinality, by the Minimum Description Length Principle, the generator is preferable to the closed pattern in inductive inference and classification. To efficiently discover frequent generators from a large dataset, we develop a depth-first algorithm called Gr-growth. The idea is novel in contrast to traditional breadth-first bottom-up generator-mining algorithms. Our extensive performance study shows that Gr-growth is significantly faster (one or even two orders of magnitude when the support thresholds are low) than the existing generator mining algorithms. It can also be faster than the state-of-the-art frequent closed itemset mining algorithms such as FPclose and CLOSET+. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
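The relationship between generators and closed patterns can be illustrated by brute-force enumeration of equivalence classes; this is a toy stand-in for Gr-growth, which the paper develops precisely because such enumeration does not scale.

```python
from itertools import combinations

def equivalence_classes(transactions):
    """Group every non-empty itemset by its tidset (the set of transactions
    containing it).  Brute force -- only viable for tiny datasets."""
    items = sorted({i for t in transactions for i in t})
    classes = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            tids = frozenset(j for j, t in enumerate(transactions)
                             if set(itemset) <= t)
            if tids:
                classes.setdefault(tids, []).append(frozenset(itemset))
    return classes

def closed_and_generators(transactions):
    """For each equivalence class: the unique closed pattern (the maximum
    itemset) and the generators (minimal itemsets under set inclusion)."""
    result = {}
    for itemsets in equivalence_classes(transactions).values():
        closed = max(itemsets, key=len)
        gens = [s for s in itemsets if not any(o < s for o in itemsets)]
        result[closed] = gens
    return result
```

Every generator is at most as large as its closed pattern, which is the MDL argument for preferring generators as class descriptions.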
Liu, G, Sim, K & Li, J 2006, 'Efficient Mining of Large Maximal Bicliques', DATA WAREHOUSING AND KNOWLEDGE DISCOVERY, PROCEEDINGS, 8th International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2006), Springer Berlin Heidelberg, Cracow, POLAND, pp. 437-448.
View/Download from: Publisher's site
Lu, S, Zhang, J & Feng, D 2006, 'A Knowledge-Based Approach for Detecting Unattended Packages in Surveillance Video', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE, Sydney, NSW.
View/Download from: Publisher's site
View description>>
This paper describes a novel approach for detecting unattended packages in surveillance video. Unlike the traditional approach of just detecting stationary objects in monitored scenes, our approach detects unattended packages based on accumulated knowledge…
Luu, J & Kennedy, PJ 2006, 'Investigating the size and value effect in determining performance of Australian listed companies: A neural network approach', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, ACS, Sydney, Australia, pp. 155-161.
View description>>
This paper explores the size and value effect in influencing the performance of individual companies using backpropagation neural networks. According to existing theory, companies with small market capitalization and high book to market ratios have a tendency to perform better in the future. Data from over 300 Australian Stock Exchange listed companies between 2000 and 2004 is examined and a neural network is trained to predict company performance based on market capitalization, book to market ratio, beta and standard deviation. Evidence for the value effect was found over longer time periods, but evidence for the size effect was weaker. Poor company performance was also observed to be correlated with high risk. © 2006, Australian Computer Society, Inc.
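A minimal pure-Python sketch of the setup described above: a one-hidden-layer network trained by backpropagation on four inputs (market capitalization, book-to-market ratio, beta, standard deviation). The toy data, initial weights and hyperparameters are invented for illustration; the paper's actual dataset and network configuration are not reproduced here.

```python
import math

def forward(x, net):
    """One hidden tanh layer, linear output."""
    W1, b1, W2, b2 = net
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return h, sum(w * hi for w, hi in zip(W2, h)) + b2

def mse(data, net):
    return sum((forward(x, net)[1] - t) ** 2 for x, t in data) / len(data)

def train(data, net, lr=0.02, epochs=500):
    """Plain stochastic backpropagation for the regression net above."""
    W1, b1, W2, b2 = net
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x, (W1, b1, W2, b2))
            err = y - t
            for i in range(len(W2)):
                dh = err * W2[i] * (1 - h[i] ** 2)  # chain rule to hidden unit i
                for j in range(len(x)):
                    W1[i][j] -= lr * dh * x[j]
                b1[i] -= lr * dh
                W2[i] -= lr * err * h[i]
            b2 -= lr * err
    return W1, b1, W2, b2

# features: market cap, book-to-market, beta, stdev (rescaled to [0, 1]);
# targets: a toy performance score loosely consistent with the size/value story
DATA = [([0.2, 0.8, 0.5, 0.3], 0.7), ([0.9, 0.1, 0.5, 0.4], 0.2),
        ([0.3, 0.7, 0.6, 0.5], 0.6), ([0.8, 0.2, 0.4, 0.2], 0.3)]
NET = ([[0.1, -0.2, 0.05, 0.15], [-0.1, 0.2, -0.05, 0.1],
        [0.05, 0.1, -0.15, -0.2]], [0.0, 0.0, 0.0], [0.1, -0.1, 0.2], 0.0)
```

Training drives the mean squared error on the toy data well below its initial value.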
Marychurch, J & Stoianoff, NP 2006, 'Blurring the Lines of Environmental Responsibility: How Corporate and Public Governance was Circumvented in the Ok Tedi Mining Limited Disaster', Legal Knowledge: Learning, Communicating and Doing: Australasian Law Teachers Association - ALTA 2006 Refereed Conference Papers, ALTA, Australasian Law Teachers Association, Victoria University, Melbourne, Australia, pp. 3-25.
View description>>
This paper will present the preliminary findings of a research project into the impact of legislative legitimation of environmental damage on corporate governance in multinational companies and on public governance in the nation state. The environmental devastation of the Ok Tedi mine in Papua New Guinea (PNG) will be the focus of the paper.
Piccardi, M 2006, 'Human-Focused Computer Vision Applications', International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), IEEE, p. 5.
View/Download from: Publisher's site
View description>>
Recent years have seen an increasing number of computer vision applications focusing on humans as their objects of interest. Such applications include video surveillance, domotics, multimedia semantic annotation and indexing, human-computer interfaces, and affective computing, to cite just a few. What exactly are their "objects of interest"? A broad range of human-related features: motion, gestures, actions, interactions, activities, attitudes, behaviours, identity. This keynote will offer a survey of this field and present some current work from the speaker and his collaborators in the areas of emotion recognition and people tracking within camera networks. © 2006 IEEE.
McCracken, J, Diaz, A, Castro, E, Edwards, R, Ryan, L, Schwartz, J, Chowdhury, Z & Smith, KR 2006, 'Biomass smoke exposure among Guatemalan infants participating in a randomized trial of chimney stoves', EPIDEMIOLOGY, ISEE/ISEA 2006 Conference, LIPPINCOTT WILLIAMS & WILKINS, Paris, FRANCE, pp. S35-S36.
View/Download from: Publisher's site
Ni, J & Zhang, C 2006, 'A dynamic storage method for stock transaction data', Proceedings of the 2nd IASTED International Conference on Computational Intelligence, CI 2006, IASTED International Conference on Computational Intelligence, ACTA Press, San Francisco, USA, pp. 338-342.
View description>>
Stock transaction data have become very detailed and enormous with the introduction of electronic trading systems. This makes it a problem to store and to access the data in later analyses such as mining useful patterns and backtesting trading strategies. This paper investigates several storage methods in terms of both storage space and access efficiency and then proposes a new dynamic storage method which provides a flexible mechanism to balance between storage space and access efficiency for storing huge intraday transaction data.
Ni, J & Zhang, C 2006, 'A Human-Friendly MAS for Mining Stock Data', 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology Workshops, 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology Workshops, IEEE, Hong Kong, China, pp. 19-22.
View/Download from: Publisher's site
View description>>
Mining stock data can be beneficial to the participants and researchers in the stock market. However, it is very difficult for a normal trader or researcher to apply data mining techniques to the data on his own due to the complexity involved in the whole data mining process. In this paper, we present a multi-agent system that can help users easily deal with their data mining jobs on stock data. This system guides users to specify their mining tasks by simply specifying the data sets to be mined and selecting pre-defined and/or user-added data mining agents. This approach offers normal traders a practical and flexible solution to mining stock data. © 2006 IEEE.
Ni, J & Zhang, C 2006, 'Mining Better Technical Trading Strategies with Genetic Algorithms', 2006 International Workshop on Integrating AI and Data Mining, 2006 International Workshop on Integrating AI and Data Mining, IEEE, Hobart, Australia, pp. 26-33.
View/Download from: Publisher's site
View description>>
Technical analysis is one of the two main schools of thought in the analysis of security prices. It is widely believed in and applied by many professional and amateur traders. However, it is often criticized for lacking scientific rigour or, worse, for lacking any basis whatsoever. We propose to explore the feasibility and/or limitations of technical analysis by the optimization of technical trading strategies over historical stock data with genetic algorithms. This paper presents the optimization problem in detail and discusses the potential problems to be tackled during the optimization. Preliminary experiments show that it can identify the limitations quickly. © 2006 IEEE.
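One plausible reading of optimizing technical trading strategies with a genetic algorithm is sketched below: a GA searching over the window lengths of a moving-average crossover rule, with total return on a price series as fitness. The rule class, GA operators and parameters are illustrative assumptions, not the paper's.

```python
import random

def ma(prices, w, t):
    """Simple moving average of the last w prices ending at index t."""
    return sum(prices[t - w + 1 : t + 1]) / w

def fitness(prices, short, long_):
    """Final wealth of a long-only MA-crossover rule starting from 1.0."""
    if short >= long_:
        return float('-inf')  # invalid chromosome
    cash, pos = 1.0, 0.0
    for t in range(long_, len(prices)):
        if ma(prices, short, t) > ma(prices, long_, t) and pos == 0.0:
            pos, cash = cash / prices[t], 0.0          # buy
        elif ma(prices, short, t) < ma(prices, long_, t) and pos > 0.0:
            cash, pos = pos * prices[t], 0.0           # sell
    return cash + pos * prices[-1]

def evolve(prices, pop_size=20, gens=15, seed=1):
    """Selection of the top half, one-point crossover, small mutations."""
    rng = random.Random(seed)
    pop = [(rng.randint(2, 10), rng.randint(11, 40)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(prices, *p), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                        # crossover
            if rng.random() < 0.3:                      # mutation
                child = (max(2, child[0] + rng.randint(-1, 1)),
                         max(11, child[1] + rng.randint(-2, 2)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(prices, *p))
```

On any positive price series the best surviving chromosome is a valid rule (short window shorter than the long one) with positive final wealth.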
Ni, J, Cao, L & Zhang, C 2006, 'Agent Services-Oriented Architectural Design of a Framework for Artificial Stock Markets', Frontiers in Artificial Intelligence and Applications, 4th International Conference on Active Media Technology, IOS PRESS, Queensland Univ Technol, Brisbane, AUSTRALIA, pp. 396-399.
View description>>
Artificial stock markets (ASMs) are very complex. Agent-based ASMs (ABASMs) have become quite popular in the research of ASMs. However, it is very hard and time consuming to design and implement an ABASM from scratch. This paper proposes a novel design of a framework for ABASMs based on our previous work on agent services-oriented architectural design (ASOAD). This design integrates service-oriented computing (SOC) and agent-based computing (ABC) to achieve a more powerful and flexible framework for ABASMs. The resulting system makes it easy to build actionable ASMs for experiments.
Piccardi, M 2006, 'Video Surveillance at the Beginning of the Third Millennium: The Viewpoint of Research, Industry, Government Bodies, Research Funding Agencies and the Community', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE.
View/Download from: Publisher's site
Piccardi, M & Cheng, ED 1970, 'Multi-Frame Moving Object Track Matching Based on an Incremental Major Color Spectrum Histogram Matching Algorithm', 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops, IEEE, New York, USA, pp. 1-6.
View/Download from: Publisher's site
Sim, K, Li, J, Gopalkrishnan, V & Liu, G 2006, 'Mining Maximal Quasi-Bicliques to Co-Cluster Stocks and Financial Ratios for Value Investment', Sixth International Conference on Data Mining (ICDM'06), Sixth International Conference on Data Mining (ICDM'06), IEEE, Hong Kong, PEOPLES R CHINA, pp. 1059-1063.
View/Download from: Publisher's site
Stoianoff, NP 1970, 'China and the Protection of Intellectual Property', China: The New Legal Scene: Opportunities and Risks, Centre for Continuing Legal Education, University of New South Wales, Kensington, New South Wales.
Stoianoff, NP 2006, 'Convergent Law, Divergent Behaviour: The Enforcement of Intellectual Property Rights in the People's Republic of China', The Development of Law in Asia: Convergence versus Divergence?, Asian Law Institute, East China University of Politics and Law, Shanghai, China, pp. 967-972.
Stoianoff, NP 1970, 'The Problem of Intellectual Property Enforcement in China: a cultural issue or just a stage in the making of a new Superpower?', UNSW School of Law Seminar Series, University of New South Wales.
Suglia, SF, Ryan, L, Bellinger, D & Wright, R 2006, 'The influence of the social and physical environment on child behavior', EPIDEMIOLOGY, ISEE/ISEA 2006 Conference, LIPPINCOTT WILLIAMS & WILKINS, Paris, FRANCE, pp. S387-S387.
View/Download from: Publisher's site
Suglia, SF, Ryan, L, Bellinger, D & Wright, RJ 2006, 'Violence exposure predicts adverse child behavior: Use of item response theory to characterize violence experience.', AMERICAN JOURNAL OF EPIDEMIOLOGY, 2nd North American Congress of Epidemiology, OXFORD UNIV PRESS INC, Seattle, WA, pp. S233-S233.
View/Download from: Publisher's site
Vellaisamy, K & Li, J 2006, 'Bayesian Approaches to Ranking Sequential Patterns Interestingness', PRICAI 2006: TRENDS IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 9th Pacific Rim International Conference on Artificial Intelligence (PRICAI 2006), Springer Berlin Heidelberg, Guilin, PEOPLES R CHINA, pp. 241-250.
View/Download from: Publisher's site
Wang, H, He, S, Wu, Q & Hintz, TB 2006, 'Improvement of fractual image coding base on the different image', WISTSP '06 proceedings, Workshop in Information Security Theory and Practices, DSP for communication systems, Hobart, Australia, pp. 1-5.
Wang, H, Wu, Q, He, X & Hintz, T 2006, 'Preliminary research on fractal video compression on spiral architecture', Proceedings of the 2006 International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV'06, International Conference on Image Processing, Computer Vision and Pattern Recognition, CSREA Press, Las Vegas, USA, pp. 557-562.
View description>>
Fractal Video Compression (FVC) has been of extensive interest for over 20 years. Instead of being implemented on a square image structure, Spiral Architecture (SA) based fractal image compression is proposed in this paper to illustrate the great potential of FVC on SA. Conceptually, a new definition of range block and domain block is presented on this enhanced image structure. Compared with the conventional square image architecture, Spiral Architecture provides higher fidelity for fractal image compression, which is demonstrated by the experimental results.
Wang, J, ElGindy, H & Lipman, J 2006, 'On Cache Prefetching Strategies For Integrated Infostation-Cellular Network', Proceedings. 2006 31st IEEE Conference on Local Computer Networks, 2006 31st IEEE Conference on Local Computer Networks, IEEE, Tampa, FL, pp. 185+.
View/Download from: Publisher's site
Jia, W, Zhang, H, He, X & Wu, Q 2006, 'Gaussian Weighted Histogram Intersection for License Plate Classification', 18th International Conference on Pattern Recognition (ICPR'06), 18th International Conference on Pattern Recognition (ICPR'06), IEEE, Hong Kong, PEOPLES R CHINA, pp. 574-577.
View/Download from: Publisher's site
View description>>
The conventional histogram intersection (HI) algorithm computes the intersected section of the corresponding color histograms in order to measure the matching rate between two color images. Since this algorithm is strictly based on the matching between bins of identical colors, the final matching rate can be easily affected by color variation caused by various environment changes. In this paper, a Gaussian weighted histogram intersection (GWHI) algorithm is proposed to facilitate the histogram matching via taking into account matching of both identical and similar colors. The weight is determined by the distance between two colors. The algorithm is applied to license plate classification. Experimental results show that the proposed algorithm produces a much lower intra-class distance and a much higher inter-class distance than previous HI algorithms for tested images which are captured under various illumination conditions. © 2006 IEEE.
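A sketch of the GWHI idea under stated assumptions: each histogram is a map from a bin colour to a pixel count, every bin of one histogram is matched to the nearest-colour bin of the other, and the intersected count is weighted by a Gaussian of the colour distance, so similar (not only identical) colours still contribute. The nearest-bin matching rule and the `sigma` value are illustrative choices, not taken from the paper.

```python
import math

def dist(c1, c2):
    """Euclidean distance between two RGB colours."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def gwhi(h1, h2, sigma=10.0):
    """Gaussian weighted histogram intersection (sketch).
    h1, h2: dicts mapping an (R, G, B) bin colour to its pixel count."""
    total = 0.0
    for c1, n1 in h1.items():
        c2 = min(h2, key=lambda c: dist(c1, c))          # nearest bin in h2
        w = math.exp(-dist(c1, c2) ** 2 / (2 * sigma ** 2))
        total += w * min(n1, h2[c2])
    return total / sum(h1.values())
```

Identical histograms match perfectly, while a small colour shift (which would zero out a strict bin-to-bin intersection) only slightly reduces the matching rate.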
Jia, W & Zhang, H 2006, 'Refined Gaussian Weighted Histogram Intersection and Its Application in Number Plate Categorization', International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), IEEE, Sydney, Australia, pp. 249-254.
View/Download from: Publisher's site
View description>>
This paper proposes a refined Gaussian weighted histogram intersection for content-based image matching and applies the method to number plate categorization. Number plate images are classified into two groups based on their colour similarities with the model image of each group. The similarities of images are measured by the matching rates between their colour histograms. Histogram intersection (HI) is used to calculate the matching rates of histograms. Since the conventional histogram intersection algorithm is strictly based on the matching between bins of identical colours, the final matching rate could easily be affected by colour variation caused by various environment changes. In our recent paper [9], a Gaussian weighted histogram intersection (GWHI) algorithm was proposed to facilitate the histogram matching by taking into account matching of both identical colours and similar colours. The weight is determined by the distance between two colours. When applied to number plate categorization, the GWHI algorithm is demonstrated to be more robust to colour variations and produces a classification with much lower intra-class distance and much higher inter-class distance than previous HI algorithms. However, the processing speed of this GWHI method is still not satisfactory. In this paper, the GWHI method is further refined, where a colour quantization method is utilized to reduce the number of colours without introducing apparent perceptual colour distortion. New experimental results demonstrate that using the refined GWHI method, image categorization can be done more efficiently. © 2006 IEEE.
Wu, Q, Zhang, H, Jia, W, He, X, Yang, J & Hintz, T 2006, 'Car Plate Detection Using Cascaded Tree-Style Learner Based on Hybrid Object Features', 2006 IEEE International Conference on Video and Signal Based Surveillance, 2006 IEEE International Conference on Video and Signal Based Surveillance, IEEE, Sydney, Australia, pp. 1-6.
View/Download from: Publisher's site
View description>>
Car plate detection is a key component in automatic license plate recognition systems. This paper adopts an enhanced cascaded tree-style learner framework for car plate detection using hybrid object features, including simple statistical features and Haar-like features. The statistical features are useful for simplifying the cascade classifier. The cascaded tree-style detector design further reduces false alarms and false dismissals while retaining a high detection ratio. The experimental results obtained by the proposed algorithm exhibit encouraging performance. © 2006 IEEE.
He, X & Wang, H 2006, 'Fractal Image Compression on Spiral Architecture', International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), IEEE, Sydney, Australia, pp. 76-81.
View/Download from: Publisher's site
View description>>
Image compression has many applications. For example, it is an important step for distributed and network based pattern recognition. For real time object recognition or reconstruction, image compression can greatly reduce the image size, and hence increase the processing speed and enhance performance. Fractal image compression is a relatively recent image compression method. Its basic idea is to represent images as a fixed point of a contractive Iterated Function System (IFS). Spiral Architecture (SA) is a novel image structure on which images are displayed as a collection of hexagonal pixels. The efficiency and accuracy of image processing on SA have been demonstrated in many recently published papers. We have shown the existence of contractive IFS's through the construction of a Complete Metric Space on SA. The selection of range and domain blocks for fractal image compression is highly related to the uniform image separation specific to SA. In this paper, we will review the current research work on fractal image compression based on SA. We will compare the results obtained on SA and the traditional square structure in terms of compression ratio and PSNR. © 2006 IEEE.
He, X & Jia, W 2006, 'Basic Transformations on Virtual Hexagonal Structure', International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), IEEE, Sydney, Australia, pp. 243-248.
View/Download from: Publisher's site
View description>>
Hexagonal structure is different from the traditional square structure for image representation. The geometrical arrangement of pixels on a hexagonal structure can be described in terms of a hexagonal grid. Hexagonal structure provides an easy way to perform image translation and rotation transformations. However, all existing hardware for capturing and displaying images is based on square architecture. This has become a serious problem affecting advanced research based on hexagonal structure. In this paper, we introduce a new virtual hexagonal structure. Based on this virtual structure, more flexible and powerful image translation and rotation are performed. The virtual hexagonal structure retains image resolution during the process of image transformations and does not introduce distortion. Furthermore, images can be smoothly and easily transferred between the traditional square structure and the hexagonal structure. © 2006 IEEE.
Xu, G, Zhang, Y & Begg, R 2006, 'Mining Gait Pattern for Clinical Locomotion Diagnosis Based on Clustering Techniques', Advanced Data Mining And Applications, Proceedings, 2nd International Conference on Advanced Data Mining and Applications, Springer Berlin Heidelberg, Xi'an, PEOPLES R CHINA, pp. 296-307.
View/Download from: Publisher's site
View description>>
Scientific gait (walking) analysis provides valuable information about an individual's locomotion function, which in turn assists clinical diagnosis and prevention, such as assessing treatment for patients with impaired postural control and detecting risk of…
Xu, G, Zhang, Y & Zhou, X 2006, 'Discovering task-oriented usage pattern for web recommendation', Conferences in Research and Practice in Information Technology Series, 17th Australasian Database Conference, Australian Computer Society, Hobart, Tasmania, Australia, pp. 167-174.
View description>>
Web transaction data usually convey users' task-oriented behaviour patterns. Web usage mining techniques are able to capture such informative knowledge about user task patterns from usage data. With the discovered usage pattern information, it is possible to recommend to Web users more preferred content or customized presentation according to the derived task preference. In this paper, we propose a Web recommendation framework based on discovering task-oriented usage patterns with the Probabilistic Latent Semantic Analysis (PLSA) model. The users' intended tasks are characterized by the latent factors through probabilistic inference, to represent the users' navigational interests. Moreover, the active user's intuitive task-oriented preference is quantified by the probabilities by which pages visited in the current user session are associated with various tasks. Combining the identified task preference of the current user with the discovered usage-based Web page categories, we can present users with more potentially interesting or preferred Web content. The preliminary experiments performed on real world data sets demonstrate the usability and effectiveness of the proposed approach. © 2006, Australian Computer Society, Inc.
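To make the probabilistic inference step concrete, the sketch below computes a session's posterior over latent tasks and scores unvisited pages by the resulting task mix. The factor tables `p_page_given_task` and `p_task` are toy values standing in for parameters a PLSA model would learn via EM (not shown here); the smoothing constant is an assumption.

```python
def task_preference(session, p_page_given_task, p_task):
    """Posterior over latent tasks for a session (a bag of visited pages),
    treating pages as drawn independently from the task's page distribution."""
    scores = []
    for z, prior in enumerate(p_task):
        lik = prior
        for page in session:
            lik *= p_page_given_task[z].get(page, 1e-9)  # smooth unseen pages
        scores.append(lik)
    total = sum(scores)
    return [s / total for s in scores]

def recommend(session, p_page_given_task, p_task, n=2):
    """Score every page by its probability under the session's task mix,
    then suggest the top unvisited pages."""
    pref = task_preference(session, p_page_given_task, p_task)
    score = {}
    for z, w in enumerate(pref):
        for page, p in p_page_given_task[z].items():
            score[page] = score.get(page, 0.0) + w * p
    ranked = sorted(score, key=lambda pg: -score[pg])
    return [pg for pg in ranked if pg not in session][:n]
```

A session concentrated on one task yields a near-degenerate posterior, and recommendations come from that task's remaining pages.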
Zhang, C & Cao, L 2006, 'Domain-Driven Data Mining: Methodologies and Applications', Frontiers in Artificial Intelligence and Applications, 4th International Conference on Active Media Technology, IOS PRESS, Queensland Univ Technol, Brisbane, AUSTRALIA, pp. 13-16.
View description>>
The aim and objective of data mining is to discover actionable knowledge of main interest to real user needs, which is one of the Grand Challenges in KDD. Most extant data mining is a data-driven trial-and-error process. Patterns discovered via predefined models in such a process are often of limited interest to constraint-based real business. In order to work out patterns that are really interesting and actionable to the real world, pattern discovery is more likely to be a domain-driven, human-machine-cooperated process. This talk proposes a practical data mining methodology named “domain-driven data mining”. The main ideas include a Domain-Driven In-Depth Pattern Discovery framework (DDID-PD), constraint-based mining, in-depth mining, human-cooperated mining and loop-closed mining. Guided by this methodology, we demonstrate some of our work in identifying useful correlations in real stock markets, for instance, discovering optimal trading rules from existing rule classes, and mining trading rule-stock correlations in stock exchange data. The results have attracted strong interest from both traders and researchers in stock markets. It has been shown that the methodology has potential for guiding deep mining of patterns interesting to real business.
Zhang, H, Jia, W, He, X & Wu, Q 2006, 'A Fast Algorithm for License Plate Detection in Various Conditions', 2006 IEEE International Conference on Systems, Man and Cybernetics, 2006 IEEE International Conference on Systems, Man and Cybernetics, IEEE, Taipei, Taiwan, pp. 2420-2425.
View/Download from: Publisher's site
View description>>
This paper proposes a fast algorithm for detecting license plates in various conditions. There are three main contributions in this paper. The first contribution is that we define a new vertical edge map, with which the license plate detection algorithm is extremely fast. The second contribution is that we construct a cascade classifier composed of two kinds of classifiers. The classifiers based on statistical features decrease the complexity of the system. They are followed by the classifiers based on Haar-like features, which make it possible to detect license plates in various conditions. Our algorithm is robust to variance in the illumination, view angle, position, size and color of the license plates when working in complex environments. The third contribution is that we experimentally analyze the relations of the scaling factor with detection rate and processing time. On the basis of this analysis, we select the optimal scaling factor in our algorithm. In the experiments, both a high detection rate (with a low false positive rate) and high speed are achieved when the algorithm is used to detect license plates in various complex conditions. © 2006 IEEE.
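Two of the mechanisms mentioned above, early rejection by a cascade and the scale pyramid controlled by the scaling factor, can be sketched as follows. The stage predicates and size bounds are hypothetical, not the paper's actual features.

```python
def cascade(stages, window):
    """Evaluate classifier stages in order; reject at the first failure,
    so cheap statistical tests filter most windows before the more
    expensive Haar-feature stages ever run."""
    return all(stage(window) for stage in stages)

def pyramid_scales(min_size, max_size, factor):
    """Window sizes scanned for a given scaling factor: a larger factor
    means fewer scales (faster) but coarser size coverage."""
    sizes, s = [], float(min_size)
    while s <= max_size:
        sizes.append(int(s))
        s *= factor
    return sizes
```

This makes the detection-rate/processing-time trade-off concrete: the number of scales, and hence the work per frame, shrinks as the scaling factor grows.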
Zhang, H, Jia, W, He, X & Wu, Q 2006, 'Real-Time License Plate Detection Under Various Conditions', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), International Conference on Ubiquitous and Intelligence Computing, Springer Berlin Heidelberg, Wuhan, China, pp. 192-199.
View/Download from: Publisher's site
View description>>
This paper proposes an algorithm for real-time license plate detection. In this algorithm, relatively simple car plate features are adopted, including simple statistical features and Haar-like features. The simplicity of the object features used is very helpful for real-time processing. The classifiers based on statistical features decrease the complexity of the system. They are followed by the classifiers based on Haar-like features, which makes the final classifier invariant to the brightness, color, size and position of license plates. The experimental results obtained by the proposed algorithm exhibit encouraging performance. © Springer-Verlag Berlin Heidelberg 2006.
Zhang, S, Chen, F, Wu, X & Zhang, C 2006, 'Identifying bridging rules between conceptual clusters', Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD06: The 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Philadelphia, USA, pp. 815-820.
View/Download from: Publisher's site
View description>>
A bridging rule in this paper has its antecedent and action from different conceptual clusters. We first design two algorithms for mining bridging rules between clusters in a database, and then propose two non-linear metrics for measuring the interestingness of bridging rules. Bridging rules can be distinct from association rules (or frequent itemsets). This is because (1) bridging rules can be generated by infrequent itemsets that are pruned in association rule mining; and (2) bridging rules are measured by the importance that includes the distance between two conceptual clusters, whereas frequent itemsets are measured by only the support.
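A toy rendering of the idea, assuming items are pre-assigned to conceptual clusters and a cluster-distance table is given: enumerate single-item rules whose antecedent and consequent fall in different clusters, and rank them by confidence weighted by the inter-cluster distance. This weighting is one plausible "importance"; the paper's actual non-linear metrics are not reproduced.

```python
def bridging_rules(transactions, item_cluster, cluster_dist, min_supp=0.2):
    """Rules a -> b with a and b in different conceptual clusters,
    ranked by confidence * distance(cluster(a), cluster(b))."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a in items:
        for b in items:
            if a == b or item_cluster[a] == item_cluster[b]:
                continue  # not a bridging rule
            supp_a = sum(1 for t in transactions if a in t) / n
            supp_ab = sum(1 for t in transactions if a in t and b in t) / n
            if supp_ab < min_supp:
                continue
            conf = supp_ab / supp_a
            d = cluster_dist[frozenset((item_cluster[a], item_cluster[b]))]
            rules.append((a, b, supp_ab, conf, conf * d))
    return sorted(rules, key=lambda r: -r[-1])
```

Note the `min_supp` filter here is only for brevity; as the abstract points out, genuinely interesting bridging rules may come from itemsets that frequency-based pruning would discard.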
Zhang, S, Qin, Y, Zhu, X, Zhang, J & Zhang, C 2006, 'Optimized Parameters for Missing Data Imputation', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Pacific Rim International Conference on Artificial Intelligence, Springer-Verlag, Guilin, China, pp. 1010-1016.
View/Download from: Publisher's site
View description>>
To complete missing values, one solution is to use attribute correlations within the data. However, it is difficult to identify such relations within data containing missing values. Accordingly, we develop a kernel-based missing data imputation method in this paper. This approach aims at optimizing statistical parameters (the mean and the distribution function) after missing data are imputed. We refer to this approach as the parameter optimization method (POP algorithm, a random regression imputation). We experimentally evaluate our approach and demonstrate that our POP algorithm is much better than deterministic regression imputation at generating inferences on the above two parameters. The results also show that our algorithm is computationally efficient, robust and stable for missing data imputation. © Springer-Verlag Berlin Heidelberg 2006.
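The random-regression-imputation idea can be sketched for a single predictor: fill each missing value with the regression prediction plus a residual drawn from the complete cases, so the imputed column keeps a realistic spread (and hence better preserves the distribution) rather than collapsing onto the fitted line. This is a generic illustration, not the paper's kernel-based POP algorithm.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def random_regression_impute(pairs, seed=0):
    """pairs: list of (x, y) with y possibly None.  Missing y values get
    the regression prediction plus a resampled residual from the
    complete cases, preserving variability around the fitted line."""
    rng = random.Random(seed)
    complete = [(x, y) for x, y in pairs if y is not None]
    a, b = fit_line(*zip(*complete))
    residuals = [y - (a + b * x) for x, y in complete]
    return [(x, y if y is not None else a + b * x + rng.choice(residuals))
            for x, y in pairs]
```

On exactly linear complete cases the residuals are all zero and the method reduces to deterministic regression imputation.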
Zhao, Y, Cao, L, Morrow, Y, Ou, Y, Ni, J & Zhang, C 2006, 'Discovering Debtor Patterns of Centrelink Customers', Conferences in Research and Practice in Information Technology Series, Australian Data Mining Conference, ACS Inc, Sydney, Australia, pp. 135-144.
View description>>
Data mining is becoming an increasingly hot research field, but a large gap still remains between the research of data mining and its application in real-world business. As one of the largest data users in Australia, Centrelink holds huge volumes of data in data warehouses and on tapes. Based on the available data, Centrelink is seeking to find underlying patterns to be able to intervene earlier to prevent or minimize debt. To discover the debtor patterns of Centrelink customers and bridge the gap between data mining research and application, we have undertaken a project on improving income reporting to discover the patterns of those customers who were or are in debt to Centrelink. Two data models were built, for demographic data and activity data respectively, and decision trees and sequence mining were used to discover demographic patterns and activity sequence patterns of debtors. The project produced some potentially interesting results, and paved the way for more data mining applications in Centrelink in the near future. © 2006, Australian Computer Society, Inc.
Zhao, Y, Zhang, C & Zhang, S 2006, 'Efficient Frequent Itemsets Mining by Sampling', Frontiers in Artificial Intelligence and Applications, 4th International Conference on Active Media Technology, IOS PRESS, Queensland Univ Technol, Brisbane, AUSTRALIA, pp. 112-117.
View description>>
As the first stage of discovering association rules, frequent itemsets mining is an important and challenging task for large databases. Sampling provides an efficient way to obtain approximate answers in much shorter time. Based on the characteristics of frequent itemset counting, a new bound for sampling is proposed, with which fewer samples are necessary to achieve the required accuracy, and efficiency is much improved over traditional Chernoff bounds.
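The sampling idea above can be sketched with the traditional additive Chernoff/Hoeffding bound that the paper improves on; the tighter bound itself is not reproduced here, and the function names are illustrative.

```python
import math
import random

def chernoff_sample_size(eps, delta):
    """Sample size n such that the estimated support deviates from the
    true support by more than eps with probability at most delta
    (standard additive Chernoff/Hoeffding bound: n >= 2/eps^2 * ln(2/delta))."""
    return math.ceil((2.0 / (eps ** 2)) * math.log(2.0 / delta))

def estimate_support(transactions, itemset, eps=0.05, delta=0.01, seed=0):
    """Estimate the support of `itemset` from a random sample of
    transactions instead of scanning the whole database."""
    rng = random.Random(seed)
    n = min(chernoff_sample_size(eps, delta), len(transactions))
    sample = rng.sample(transactions, n)
    hits = sum(1 for t in sample if itemset <= t)
    return hits / n
```

A tighter bound matters in practice because the required sample size grows as 1/eps², so even a modest constant-factor improvement over the Chernoff bound saves a large number of database rows per itemset.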
Zhao, Y, Zhang, C & Zhang, S 2006, 'Enhancing DWT for Recent-Biased Dimension Reduction of Time Series Data', AI 2006: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, Australasian Joint Conference on Artificial Intelligence, Springer Berlin Heidelberg, Hobart, AUSTRALIA, pp. 1048-1053.
View/Download from: Publisher's site
View description>>
In many applications, old data in a time series become less important as time elapses, which poses a big challenge to traditional techniques for dimension reduction. To improve the Discrete Wavelet Transform (DWT) for effective dimension reduction in such applications, a new method, largest-latest-DWT, is designed by keeping the largest k coefficients out of the latest w coefficients at each level of the DWT. Its efficiency and effectiveness are demonstrated by our experiments.
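The keep-largest-k-of-the-latest-w rule can be sketched on a plain Haar DWT; this is a minimal illustration assuming a power-of-two series length, and the function names are illustrative rather than the paper's.

```python
def haar_dwt(series):
    """Full Haar DWT: returns the detail coefficients per level plus
    the final approximation. Series length must be a power of two."""
    levels = []
    approx = list(series)
    while len(approx) > 1:
        half = len(approx) // 2
        detail = [(approx[2 * i] - approx[2 * i + 1]) / 2 for i in range(half)]
        approx = [(approx[2 * i] + approx[2 * i + 1]) / 2 for i in range(half)]
        levels.append(detail)
    return levels, approx

def largest_latest(levels, k, w):
    """Recent-biased reduction: at each level, of the latest w detail
    coefficients keep only the k with largest magnitude; zero the rest."""
    reduced = []
    for detail in levels:
        recent = detail[-w:]                       # latest w coefficients
        keep = sorted(range(len(recent)),
                      key=lambda i: abs(recent[i]),
                      reverse=True)[:k]            # indices of the largest k
        kept = [c if i in keep else 0.0 for i, c in enumerate(recent)]
        reduced.append([0.0] * (len(detail) - len(recent)) + kept)
    return reduced
```

Because later Haar coefficients summarise later parts of the series, restricting attention to the latest w coefficients biases the representation towards recent data, while keeping only the largest k of them controls the total storage per level.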
Zhao, Y, Zhang, C, Zhang, S & Zhao, L 2006, 'Adapting K-Means Algorithm for Discovering Clusters in Subspaces', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Asia Pacific Web Conference, Springer Berlin Heidelberg, Harbin, China, pp. 53-62.
View/Download from: Publisher's site
View description>>
Subspace clustering is a challenging task in the field of data mining. Traditional distance measures fail to differentiate the furthest point from the nearest point in very high dimensional data space. To tackle this problem, we design a minimal subspace distance, which measures the similarity between two points in the subspace where they are nearest to each other. It can discover subspace clusters implicitly while measuring the similarities between points. We use the new similarity measure to improve the traditional k-means algorithm for discovering clusters in subspaces. By first clustering with a low-dimensional minimal subspace distance, the clusters in low-dimensional subspaces are detected. Then, by gradually increasing the dimension of the minimal subspace distance, the clusters are refined in higher dimensional subspaces. Our experiments on both synthetic and real data show the effectiveness of the proposed similarity measure and algorithm. © Springer-Verlag Berlin Heidelberg 2006.
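One plausible reading of "the subspace where they are nearest to each other" is the d dimensions with the smallest per-dimension gaps; a minimal sketch under that assumption (the function name and exact formulation are illustrative, not taken from the paper):

```python
import math

def minimal_subspace_distance(p, q, d):
    """Distance between points p and q restricted to the d-dimensional
    subspace in which they are closest: take the d smallest squared
    per-dimension gaps and combine them Euclidean-style."""
    gaps = sorted((a - b) ** 2 for a, b in zip(p, q))
    return math.sqrt(sum(gaps[:d]))
```

Plugged into k-means as the assignment metric, this ignores the dimensions on which two points happen to disagree; starting with small d and increasing it mirrors the paper's coarse-to-fine refinement from low- to higher-dimensional subspaces.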
Zheng, L, He, X, Wu, Q & Hintz, T 2006, 'Learning-Based Number Recognition on Spiral Architecture', 2006 9th International Conference on Control, Automation, Robotics and Vision, 2006 9th International Conference on Control, Automation, Robotics and Vision, IEEE, Grand Hyatt, Singapore, pp. 897-901.
View/Download from: Publisher's site
View description>>
In this paper, a number recognition algorithm is proposed on Spiral Architecture, a hexagonal image structure. This algorithm employs the RULES-3 inductive learning method to recognize numbers. The algorithm starts from a collection of sample numbers from number plates. Edge maps of the samples are then detected based on Spiral Architecture, and a set of rules is extracted from these samples by RULES-3. The rules describe the frequencies of 9 different edge masks appearing in the samples, where each mask is a cluster of 7 hexagonal pixels. To recognize a number plate, all numbers are tested one by one against the extracted rules, and recognition is achieved by counting the frequencies of the 9 masks. This paper also gives a comparison between results based on the rectangular structure and those based on Spiral Architecture. From the experimental results, we conclude that Spiral Architecture is better than the rectangular structure for inductive learning-based number recognition. © 2006 IEEE.
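The classification step, recognising a digit from the frequencies of its 9 edge masks, can be sketched as follows. RULES-3 induces symbolic rules rather than the nearest-template matching used here, so this is a simplified stand-in for the frequency-counting idea only; all names are illustrative.

```python
from collections import Counter

def mask_histogram(edge_masks):
    """Frequency histogram over the 9 possible edge-mask labels (0-8)
    found in a digit's edge map."""
    counts = Counter(edge_masks)
    return [counts.get(m, 0) for m in range(9)]

def recognise(edge_masks, templates):
    """Pick the digit whose stored mask-frequency template is closest
    (in L1 distance) to the observed mask histogram."""
    hist = mask_histogram(edge_masks)
    return min(templates,
               key=lambda digit: sum(abs(a - b)
                                     for a, b in zip(hist, templates[digit])))
```

Each of the 9 masks corresponds to one local edge configuration over a 7-pixel hexagonal neighbourhood, so the histogram acts as a compact shape signature for the digit.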