Abolhasan, M, Wysocki, T & Lipman, J 2005, 'A New Strategy to Improve Proactive Route Updates in Mobile Ad Hoc Networks', EURASIP Journal on Wireless Communications and Networking, vol. 2005, no. 5, pp. 828-837.
This paper presents two new route update strategies for performing proactive route discovery in mobile ad hoc networks (MANETs). The first strategy is referred to as minimum displacement update routing (MDUR). In this strategy, the rate at which route updates are sent into the network is controlled by how often a node changes its location by a required distance. The second strategy is called minimum topology change update (MTCU). In this strategy, the route updating rate is proportional to the level of topology change each node experiences. We implemented MDUR and MTCU on top of the fisheye state routing (FSR) protocol and investigated their performance by simulation. The simulations were performed in a number of different scenarios, with varied network mobility, density, traffic, and boundary. Our results indicate that both MDUR and MTCU produce significantly lower levels of control overhead than FSR and achieve higher levels of throughput as the density and the level of traffic in the network are increased.
Abraham, JH, Finn, PW, Milton, DK, Ryan, LM, Perkins, DL & Gold, DR 2005, 'Infant home endotoxin is associated with reduced allergen-stimulated lymphocyte proliferation and IL-13 production in childhood', JOURNAL OF ALLERGY AND CLINICAL IMMUNOLOGY, vol. 116, no. 2, pp. 431-437.
Background: Infant endotoxin exposure has been proposed as a factor that might protect against allergy and the early childhood immune responses that increase the risk of IgE production to allergens. Objective: Using a prospective study design, we tested the hypothesis that early-life endotoxin exposure is associated with allergen- and mitogen-induced cytokine production and proliferative responses of PBMCs isolated from infants with a parental history of physician-diagnosed asthma or allergy. Methods: We assessed household dust endotoxin at age 2 to 3 months and PBMC proliferative and cytokine responses to cockroach allergen (Bla g 2), dust mite allergen (Der f 1), cat allergen (Fel d 1), and the nonspecific mitogen PHA at age 2 to 3 years. Results: We found that increased endotoxin levels were associated with decreased IL-13 levels in response to cockroach, dust mite, and cat allergens, but not mitogen stimulation. Endotoxin levels were not correlated with allergen- or mitogen-induced IFN-γ, TNF-α, or IL-10. Increased endotoxin levels were associated with decreased lymphocyte proliferation after cockroach allergen stimulation. An inverse, although nonsignificant, association was also found between endotoxin and proliferation to the other tested stimuli. Conclusion: Increased early-life exposure to household endotoxin was associated with reduced allergen-induced production of the TH2 cytokine IL-13 and reduced lymphoproliferative responses at age 2 to 3 years in children at risk for allergy and asthma. Early-life endotoxin-related reduction of IL-13 production might represent one pathway through which increased endotoxin decreases the risk of allergic disease and allergy in later childhood. © 2005 American Academy of Allergy, Asthma and Immunology.
Abraham, JH, Gold, DR, Dockery, DW, Ryan, L, Park, JH & Milton, DK 2005, 'Within-home versus between-home variability of house dust endotoxin in a birth cohort', ENVIRONMENTAL HEALTH PERSPECTIVES, vol. 113, no. 11, pp. 1516-1521.
Endotoxin exposure has been proposed as an environmental determinant of allergen responses in children. To better understand the implications of using a single measurement of house dust endotoxin to characterize exposure in the first year of life, we evaluated room-specific within-home and between-home variability in dust endotoxin obtained from 470 households in Boston, Massachusetts. Homes were sampled up to two times over 5-11 months. We analyzed 1,287 dust samples from the kitchen, family room, and baby's bedroom for endotoxin. We fit a mixed-effects model to estimate mean levels and the variation of endotoxin between homes, between rooms, and between sampling times. Endotoxin ranged from 2 to 1,945 units per milligram of dust. Levels were highest during summer and lowest in the winter. Mean endotoxin levels varied significantly from room to room. Cross-sectionally, endotoxin was moderately correlated between family room and bedroom floor (r = 0.30), between family room and kitchen (r = 0.32), and between kitchen and bedroom (r = 0.42). Adjusting for season, the correlation of endotoxin levels within homes over time was 0.65 for both the bedroom and kitchen and 0.54 for the family room. The temporal within-home variance of endotoxin was lowest for bedroom floor samples and highest for kitchen samples. Between-home variance was lowest in the family room and highest for kitchen samples. Adjusting for season, within-home variation was less than between-home variation for all three rooms. These results suggest that room-to-room and home-to-home differences in endotoxin influence the total variability more than factors affecting endotoxin levels within a room over time.
Bellamy, SL, Li, Y, Lin, XH & Ryan, LM 2005, 'Quantifying PQL bias in estimating cluster-level covariate effects in generalized linear mixed models for group-randomized trials', STATISTICA SINICA, vol. 15, no. 4, pp. 1015-1032.
We derive the asymptotic bias and variance of the penalized quasi-likelihood (PQL) estimator of the cluster-level covariate effect in generalized linear mixed models for group-randomized trials where the number of clusters n is small and the cluster size m is large. We show that the asymptotic bias is of order Op(1/m) and the asymptotic variance is of order Op(1/n) + Op(1/(nm)). The practical implication of our results is that the PQL method works well in settings involving small numbers of large clusters, which are typical in group-randomized trials. We illustrate the results using simulation studies.
Burden, S, Guha, S, Morgan, G, Ryan, L, Sparks, R & Young, L 2005, 'Spatio-temporal analysis of acute admissions for ischemic heart disease in NSW, Australia', ENVIRONMENTAL AND ECOLOGICAL STATISTICS, vol. 12, no. 4, pp. 427-448.
The recently funded Spatial Environmental Epidemiology in New South Wales (SEE NSW) project aims to use routinely collected data in NSW Australia to investigate risk factors for various chronic diseases. In this paper, we present a case study focused on the relationship between social disadvantage and ischemic heart disease to highlight some of the methodological challenges that are likely to arise. © 2005 Springer Science+Business Media, Inc.
Cao, L, Zhang, C & Dai, R 2005, 'Organization-Oriented Analysis of Open Complex Agent Systems', International Journal of Intelligent Control and Systems, vol. 10, no. 2, pp. 114-122.
Organization-oriented analysis acts as the key step and foundation in building an organization-oriented methodology (OOM) to engineer multi-agent systems, especially open complex agent systems (OCAS). A number of existing approaches target OOM, but they are incompatible with each other, and none of them is available as a solid and practical tool for engineering OCAS. This paper summarizes our investigation in building a unified framework for abstracting and analyzing OCAS organizations. Our organization-oriented framework, referred to as ORGANISED, integrates and expands existing approaches and explicitly captures the main attributes in an OCAS. Following this framework, individual model-building blocks are developed for all ORGANISED members; both visual and formal specifications are utilized to present an intuitive and precise analysis. The above techniques have been deployed in developing an agent service-based trading and mining support infrastructure.
Cao, L, Zhang, C & Dai, R 2005, 'The OSOAD Methodology for Open Complex Agent Systems', International Journal of Intelligent Control and Systems, vol. 10, no. 4, pp. 277-285.
Open complex agent systems (OCAS) are middle-sized or large-scale open agent organizations. To engineer OCAS, agent-centric organization-oriented analysis, design and implementation, namely organization-oriented methodology (OOM), has emerged as a highly promising direction. A number of OOM-related approaches have been proposed, but they have some intrinsic issues; for instance, some fundamental system attributes, such as system dynamics, are covered by almost none of the existing approaches. In this paper, we summarize our investigation of existing approaches and report a new OOM approach called OSOAD. The OSOAD approach consists of organizational abstraction (OA), organization-oriented analysis (OOA), agent service-oriented design (ASOD), and Java agent service-based implementation. OSOAD provides complete and deployable mechanisms for all software engineering phases. In particular, we highlight the transition support from OA to OOA and ASOD. This approach has been built and deployed in the practical development of agent service-based financial trading and mining applications.
Chen, CZ, Wang, XB, Wang, LH, Yang, F, Tang, GF, Xing, HX, Ryan, L, Lasley, B, Overstreet, JW, Stanford, JB & Xu, XP 2005, 'Effect of environmental tobacco smoke on levels of urinary hormone markers', ENVIRONMENTAL HEALTH PERSPECTIVES, vol. 113, no. 4, pp. 412-417.
Our recent study showed a dose-response relationship between environmental tobacco smoke (ETS) and the risk of early pregnancy loss. Smoking is known to affect female reproductive hormones. We explored whether ETS affects reproductive hormone profiles as characterized by urinary pregnanediol-3-glucuronide (PdG) and estrone conjugate (E1C) levels. We prospectively studied 371 healthy newly married nonsmoking women in China who intended to conceive and had stopped contraception. Daily records of vaginal bleeding, active and passive cigarette smoking, and daily first-morning urine specimens were collected for up to 1 year or until a clinical pregnancy was achieved. We determined the day of ovulation for each menstrual cycle. The effects of ETS exposure on daily urinary PdG and E1C levels in a ±10 day window around the day of ovulation were analyzed for conception and nonconception cycles, respectively. Our analysis included 344 nonconception cycles and 329 conception cycles. In nonconception cycles, cycles with ETS exposure had significantly lower urinary E1C levels (β = -0.43, SE = 0.08, p < 0.001 in log scale) compared with the cycles without ETS exposure. There was no significant difference in urinary PdG levels in cycles having ETS exposure (β = -0.07, SE = 0.15, p = 0.637 in log scale) compared with no ETS exposure. Among conception cycles, there were no significant differences in E1C and PdG levels between ETS exposure and nonexposure. In conclusion, ETS exposure was associated with significantly lower urinary E1C levels among nonconception cycles, suggesting that the adverse reproductive effect of ETS may act partly through its antiestrogen effects.
Chen, JC, Christiani, DC & Ryan, LM 2005, 'Predicting exposure levels', EPIDEMIOLOGY, vol. 16, no. 1, pp. 135-135.
Chen, Q, Zhang, C & Zhang, S 2005, 'ENDL: A Logical Framework for Verifying Secure Transaction Protocols', Knowledge and Information Systems, vol. 7, no. 1, pp. 84-109.
This paper proposes a new logic for verifying secure transaction protocols. We have named this logic the ENDL (extension of non-monotonic dynamic logic). In this logic, timestamps and signed certificates are used for protecting against replays of old keys or the substitution of bogus keys. The logic is useful for verifying the authentication properties of secure protocols, and especially for protecting data integrity. To evaluate the logic, three practical instances of secure protocols are illustrated. This evaluation demonstrates that the ENDL is effective and promising. © 2004 Springer-Verlag London Ltd.
Cheng, ED & Piccardi, M 2005, 'Track Matching by Major Color Histograms Matching and Post-matching Integration', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 3617 LNCS, pp. 1148-1157.
In this paper we present a track matching algorithm based on 'major color' histogram matching and post-matching integration, useful for tracking a single object across multiple, limitedly disjoint cameras. First, the Major Color Spectrum Histogram (MCSH) is introduced to represent a moving object in a single frame by its most frequent colors only. Then, a two-directional similarity measurement based on the MCSH is used to measure the similarity of any two given moving objects in single frames. Finally, our track matching algorithm extends the single-frame matching along the objects' tracks by a post-matching integration algorithm. Experimental results presented in this paper show the accuracy of the proposed track matching algorithm: the similarity of two tracks from the same moving object proved as high as 95%, while the similarity of two tracks from different moving objects was kept as low as 28%. The post-matching integration step proves able to remove errors occurring at the frame level, thus making track matching more robust and reliable. © Springer-Verlag Berlin Heidelberg 2005.
Cheng, X, Ouyang, D, Yunfei, J & Zhang, C 2005, 'An improved model-based method to test circuit faults', Theoretical Computer Science, vol. 341, no. 1-3, pp. 150-161.
This paper presents an improved model-based reasoning method to test circuit faults. The testing procedure is applicable even when the target system contains multiple faulty modes. Using our method, the observation could be planned appropriately to guara
Dong, G & Li, J 2005, 'Mining border descriptions of emerging patterns from dataset pairs', Knowledge and Information Systems, vol. 8, no. 2, pp. 178-202.
The mining of changes or differences or other comparative patterns from a pair of datasets is an interesting problem. This paper is focused on the mining of one type of comparative pattern called emerging patterns. Emerging patterns are denoted by EPs an
Duty, SM, Calafat, AM, Silva, MJ, Ryan, L & Hauser, R 2005, 'Phthalate exposure and reproductive hormones in adult men', HUMAN REPRODUCTION, vol. 20, no. 3, pp. 604-610.
Background: Phthalates are used in personal and consumer products, food packaging materials, and polyvinyl chloride plastics and have been measured in the majority of the general population of the USA. Consistent experimental evidence shows that some phthalates are developmental and reproductive toxicants in animals. This study explored the association between environmental levels of phthalates and altered reproductive hormone levels in adult men. Methods: Between 1999 and 2003, 295 men were recruited from Massachusetts General Hospital. Selected phthalate metabolites were measured in urine. Linear regression models explored the relationship between specific gravity-adjusted urinary phthalate monoester concentrations and serum levels of reproductive hormones, including FSH, LH, sex hormone-binding globulin, testosterone, and inhibin B. Results: An interquartile range (IQR) change in monobenzyl phthalate (MBzP) exposure was significantly associated with a 10% [95% confidence interval (CI): -16, -4.0] decrease in FSH concentration. Additionally, an IQR change in monobutyl phthalate (MBP) exposure was associated with a 4.8% (95% CI: 0, 10) increase in inhibin B but this was of borderline significance. Conclusions: Although we found associations between MBP and MBzP urinary concentrations and altered levels of inhibin B and FSH, the hormone concentrations did not change in the expected patterns. Therefore, it is unclear whether these associations represent physiologically relevant alterations in these hormones, or whether they represent associations found as a result of conducting multiple comparisons. © The Author 2004. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved.
Harezlak, J, Ryan, LM, Giedd, JN & Lange, N 2005, 'Individual and population penalized regression splines for accelerated longitudinal designs', BIOMETRICS, vol. 61, no. 4, pp. 1037-1048.
In an accelerated longitudinal design (ALD), individuals enter the study at different points of their growth trajectory and are observed over a short time span relative to the entire time span of interest. ALD data are combined across independent units to provide an estimate of an overall population curve and predictions of individual patterns of change. As a modest extension of the work of Ruppert et al. (2003, Semiparametric Regression, Cambridge University Press), we develop a computationally efficient procedure for the application of longitudinal semiparametric methods under ALD sampling schemes. We compare balanced and complete longitudinal designs to ALDs using the Berkeley Growth Study data and apply our method to longitudinal magnetic resonance imaging (MRI) brain structure size (volume) measurements from an ongoing developmental study. Potential applications extend beyond growth studies to many other fields in which cost and feasibility constraints impose restrictions on sample size and on the numbers and timings of repeated measurements across subjects.
Hoffmann, K 2005, 'Predicting Exposure Levels', Epidemiology, vol. 16, no. 1, pp. 134-135.
Huang, D, Chow, TWS, Ma, EWM & Li, J 2005, 'Efficient selection of discriminative genes from microarray gene expression data for cancer diagnosis', IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 9, pp. 1909-1918.
A new mutual information (MI)-based feature-selection method to solve the so-called large p and small n problem experienced in microarray gene expression data is presented. First, a grid-based feature clustering algorithm is introduced to eliminate redundant features. A huge gene set is then greatly reduced in a very efficient way. As a result, the computational efficiency of the whole feature-selection process is substantially enhanced. Second, MI is directly estimated using quadratic MI together with Parzen window density estimators. This approach is able to deliver reliable results even when only a small pattern set is available. Also, a new MI-based criterion is proposed to avoid highly redundant selection results in a systematic way. Finally, owing to the direct estimation of MI, appropriate selected feature subsets can be reasonably determined. © 2005 IEEE.
Li, H & Li, J 2005, 'Discovery of stable and significant binding motif pairs from PDB complexes and protein interaction datasets', Bioinformatics, vol. 21, no. 3, pp. 314-324.
Abstract Motivation: Discovery of binding sites is important in the study of protein–protein interactions. In this paper, we introduce stable and significant motif pairs to model protein-binding sites. The stability is the pattern's resistance to some transformation. The significance is the unexpected frequency of occurrence of the pattern in a sequence dataset comprising known interacting protein pairs. Discovery of stable motif pairs is an iterative process, undergoing a chain of changing but converging patterns. Determining the starting point for such a chain is an interesting problem. We use a protein complex dataset extracted from the Protein Data Bank to help in identifying those starting points, so that the computational complexity of the problem is much reduced. Results: We found 913 stable motif pairs, of which 765 are significant. We evaluated these motif pairs using comprehensive comparison results against random patterns. Wet-experimentally discovered motifs reported in the literature were also used to confirm the effectiveness of our method. Contact: haiquan@i2r.a-star.edu.sg Supplementary information: http://sdmc.i2r.a-star.edu.sg/BindingMotifPairs
Li, J & Li, H 2005, 'Using fixed point theorems to model the binding in protein-protein interactions', IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 8, pp. 1079-1087.
The binding in protein-protein interactions exhibits a kind of biochemical stability in cells. The mathematical notion of fixed points also describes stability. A point is a fixed point if it remains unchanged after a transformation by a function. Many points are not fixed points, but they may approach a stable status after multiple steps of transformation. In this paper, we define a point as a protein motif pair consisting of two traditional protein motifs. We define a function and propose a method to discover stable motif pairs of this function from a large protein interaction sequence data set. This function has many interesting properties (for example, convergence). Some of them are useful for gaining much efficiency in the discovery of those stable motif pairs; some are useful for explaining why our proposed fixed point theorems are a good way to model the binding of protein interactions. Our results are also compared to biological results to elaborate the effectiveness of our method. © 2005 IEEE.
Lipman, J, Abolhasan, M, Boustead, P & Chicharo, J 2005, 'An optimised resource aware approach to information collection in ad hoc networks', Ad Hoc Networks, vol. 3, no. 5, pp. 643-655.
In ad hoc networks there is a need for all-to-one protocols that allow for information collection or 'sensing' of the state of an ad hoc network and the nodes that comprise it. Such protocols may be used for service discovery, auto-configuration, network management, topology discovery or reliable flooding. There is a parallel between this type of sensing in ad hoc networks and that of sensor networks. However, ad hoc networks and sensor networks differ in their application, construction, characteristics and constraints. The main priority of sensor networks is the flow of data from sensors back to a sink, but in an ad hoc network this may be of secondary importance. Hence, protocols suitable to sensor networks are not necessarily suitable to ad hoc networks and vice versa. We propose Resource Aware Information Collection (RAIC), a distributed, two-phase, resource-aware approach to information collection in ad hoc networks. RAIC utilises a resource aware optimised flooding mechanism to both disseminate requests and initialise a backbone of resource suitable nodes responsible for relaying replies back to the node collecting information. RAIC, in the process of collecting information from all nodes in an ad hoc network, is shown to consume less energy and introduce less overhead compared with Directed Diffusion and a brute force approach. Importantly, over multiple successive queries (in an energy constrained environment), the use of resource awareness allows the load of relaying to be distributed to those nodes most suitable, thereby extending the lifetime of the network. © 2004 Elsevier B.V. All rights reserved.
Litonjua, AA, Celedon, JC, Hausmann, J, Nikolov, M, Sredl, D, Ryan, L, Platts-Mills, TAE, Weiss, ST & Gold, DR 2005, 'Variation in total and specific IgE: Effects of ethnicity and socioeconomic status', JOURNAL OF ALLERGY AND CLINICAL IMMUNOLOGY, vol. 115, no. 4, pp. 751-757.
Background: Asthma is common in minority and disadvantaged populations, whereas atopic disorders other than asthma appear to be less prevalent. It is unclear whether the same holds true for objective markers of sensitization. Objective: To determine the association of asthma, atopic disorders, and specific sensitization with race and socioeconomic factors. Methods: We analyzed total and specific IgE among 882 women (577 white, 169 black, and 136 Hispanic) who delivered a child at a large tertiary hospital in Boston, Mass, and who were screened for participation in a family and birth cohort study. Race/ethnicity and other characteristics were obtained from screening questionnaires. Addresses were geocoded, and 3 census-based geographic area socioeconomic variables were derived from block group information from the 1990 US Census. Results: Black and Hispanic women were more likely to come from areas with low socioeconomic indicators and were more likely to have asthma than white women. However, these women were less likely to have hay fever and eczema than their white counterparts. Compared with white women, black women had higher mean total IgE levels; had greater proportions of sensitization to indoor, outdoor, and fungal allergens; and were more than twice as likely to be sensitized to ≥3 aeroallergens. Conclusion: The racial/ethnic disparities in atopic disorders may represent either underdiagnosis or underreporting and suggest that allergy testing may be underused in some populations. Differences in total IgE levels and specific allergen sensitization are likely a result of the complex interplay between exposures associated with socioeconomic disadvantage. © 2005 American Academy of Allergy, Asthma and Immunology.
Liu, H, Han, H, Li, J & Wong, L 2005, 'DNAFSMiner: a web-based software toolbox to recognize two types of functional sites in DNA sequences', Bioinformatics, vol. 21, no. 5, pp. 671-673.
Abstract Summary: DNAFSMiner (DNA Functional Sites Miner) is a web-based software toolbox to recognize functional sites in nucleic acid sequences. Currently this toolbox provides two software tools: TIS Miner and Poly(A) Signal Miner. The TIS Miner can be used to predict translation initiation sites in vertebrate DNA/mRNA/cDNA sequences, and the Poly(A) Signal Miner can be used to predict polyadenylation [poly(A)] signals in human DNA sequences. The prediction results are better than those of literature methods on two benchmark applications. This good performance is mainly attributable to our unique learning method. DNAFSMiner is available free of charge for academic and non-profit organizations. Availability: http://research.i2r.a-star.edu.sg/DNAFSMiner/ Contact: huiqing@i2r.a-star.edu.sg
Liu, H, Li, J & Wong, L 2005, 'Use of extreme patient samples for outcome prediction from gene expression data', Bioinformatics, vol. 21, no. 16, pp. 3377-3384.
Motivation: Patient outcome prediction using microarray technologies is an important application in bioinformatics. Based on patients' genotypic microarray data, predictions are made to estimate patients' survival time and their risk of tumor metastasis or recurrence. So, accurate prediction can potentially help to provide better treatment for patients. Results: We present a new computational method for patient outcome prediction. In the training phase of this method, we make use of two types of extreme patient samples: short-term survivors who got an unfavorable outcome within a short period and long-term survivors who were maintaining a favorable outcome after a long follow-up time. These extreme training samples yield a clear platform for us to identify relevant genes whose expression is closely related to the outcome. The selected extreme samples and the relevant genes are then integrated by a support vector machine to build a prediction model, by which each validation sample is assigned a risk score that falls into one of the special pre-defined risk groups. We apply this method to several public datasets. In most cases, patients in high and low risk groups stratified by our method have clearly distinguishable outcome status as seen in their Kaplan-Meier curves. We also show that the idea of selecting only extreme patient samples for training is effective for improving the prediction accuracy when different gene selection methods are used. © The Author 2005. Published by Oxford University Press. All rights reserved.
Luo, D, Liu, W, Luo, C, Cao, L & Dai, RW 2005, 'Hybrid Analyses and System Architecture for Telecom Frauds', Jisuanji Kexue (Computer Science), vol. 32, no. 5, pp. 17-22.
Meeker, JD, Barr, DB, Ryan, L, Herrick, RF, Bennett, DH, Bravo, R & Hauser, R 2005, 'Temporal variability of urinary levels of nonpersistent insecticides in adult men', JOURNAL OF EXPOSURE ANALYSIS AND ENVIRONMENTAL EPIDEMIOLOGY, vol. 15, no. 3, pp. 271-281.
Widespread application of contemporary-use insecticides results in low-level exposure for a majority of the population through a variety of pathways. Urinary insecticide biomarkers account for all exposure pathways, but failure to account for temporal within-subject variability of urinary levels can lead to exposure misclassification. To examine temporal variability in urinary markers of contemporary-use insecticides, nine repeated urine samples were collected over 3 months from 10 men participating in an ongoing study of male reproductive health. These 90 samples were analyzed for urinary metabolites of chlorpyrifos (3,5,6-trichloro-2-pyridinol (TCPY)) and carbaryl (1-naphthol (1N)). Volume-based (unadjusted), as well as creatinine (CRE)- and specific gravity (SG)-adjusted concentrations were measured. TCPY had low reliability with an intraclass correlation coefficient between 0.15 and 0.21, while 1N was moderately reliable with an intraclass correlation coefficient between 0.55 and 0.61. When the 10 men were divided into tertiles based on 3-month geometric mean TCPY and 1N levels, a single urine sample performed adequately in classifying a subject into the highest or lowest exposure tertiles. Sensitivity and specificity ranged from 0.44 to 0.84 for TCPY and from 0.56 to 0.89 for 1N. Some differences in the results between unadjusted metabolite concentrations and concentrations adjusted for CRE and SG were observed. Questionnaires were used to assess diet in the 24 h preceding the collection of each urine sample. In mixed-effects models, TCPY was significantly associated with season as well as with consuming grapes and cheese, while 1N levels were associated with consuming strawberries. In conclusion, although a single sample adequately predicted longer-term average exposure, a second sample collected at least 1 month following the first sample would reduce exposure measurement error. © 2005 Nature Publishing Group. All rights reserved.
Morales, KH & Ryan, LM 2005, 'Benchmark dose estimation based on epidemiologic cohort data', ENVIRONMETRICS, vol. 16, no. 5, pp. 435-447.
Risk assessments based on epidemiologic studies are becoming increasingly common in evaluating environmental health risks and setting health standards. This article discusses and compares some of the available methods for exposure-response modeling and risk estimation based on environmental epidemiologic studies with age-specific incidence and mortality data. Recommendations are made regarding approaches that can be used in practice.
Pfeiffer, RM, Ryan, L, Litonjua, A & Pee, D 2005, 'A case-cohort design for assessing covariate effects in longitudinal studies', BIOMETRICS, vol. 61, no. 4, pp. 982-991.
View/Download from: Publisher's site
View description>>
The case-cohort design for longitudinal data consists of a subcohort sampled at the beginning of the study that is followed repeatedly over time, and a case sample that is ascertained through the course of the study. Although some members in the subcohort may experience events over the study period, we refer to it as the 'control-cohort.' The case sample is a random sample of subjects not in the control-cohort, who have experienced at least one event during the study period. Different correlations among repeated observations on the same individual are accommodated by a two-level random-effects model. This design allows consistent estimation of all parameters estimable in a cohort design and is a cost-effective way to study the effects of covariates on repeated observations of relatively rare binary outcomes when exposure assessment is expensive. It is an extension of the case-cohort design (Prentice, 1986, Biometrika 73, 1-11) and the bidirectional case-crossover design (Navidi, 1998, Biometrics 54, 596-605). A simulation study compares the efficiency of the longitudinal case-cohort design to a full cohort analysis, and we find that in certain situations up to 90% efficiency can be obtained with half the sample size required for a full cohort analysis. A bootstrap method is presented that permits testing for intra-subject homogeneity in the presence of unidentifiable nuisance parameters in the two-level random-effects model. As an illustration we apply the design to data from an ongoing study of childhood asthma.
Qin, Z, Zhang, C, Xie, X & Zhang, S 2005, 'Dynamic Test-Sensitive Decision Trees with Multiple Cost Scales', Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), vol. 3613, no. PART I, pp. 402-405.
View/Download from: Publisher's site
View description>>
Previous work considering both test and misclassification costs relies on the assumption that the test cost and the misclassification cost are defined on the same cost scale. However, it can be difficult to define multiple costs on the same cost scale. In our previous work, a novel yet efficient approach for involving multiple cost scales was proposed. Specifically, we first introduced a new test-sensitive decision tree with two kinds of cost scales, which minimizes one kind of cost while controlling the other within a given budget. In this paper, a dynamic test strategy with known-information utilization and global resource control is proposed to minimize the overall target cost. Our work will be useful in many urgent diagnostic tasks involving target cost minimization and resource consumption for obtaining missing information. © Springer-Verlag Berlin Heidelberg 2005.
Ruta, D & Gabrys, B 2005, 'Classifier selection for majority voting', Information Fusion, vol. 6, no. 1, pp. 63-81.
View/Download from: Publisher's site
View description>>
Individual classification models have recently been challenged by combined pattern recognition systems, which often show better performance. In such systems the optimal set of classifiers is first selected and then combined by a specific fusion method. For a small number of classifiers, optimal ensembles can be found exhaustively, but the exponential complexity of such a search limits its practical applicability for larger systems. As a result, simpler search algorithms and/or selection criteria are needed to reduce the complexity. This work provides a revision of the classifier selection methodology and evaluates the practical applicability of diversity measures in the context of combining classifiers by majority voting. A number of search algorithms are proposed and adjusted to work properly with a number of selection criteria including majority voting error and various diversity measures. Extensive experiments carried out with 15 classifiers on 27 datasets indicate that diversity measures are inappropriate as selection criteria, favouring search based directly on the combiner error. Furthermore, the results prompted a novel design of multiple classifier systems in which selection and fusion are recurrently applied to a population of best combinations of classifiers rather than the individual best. The improvement of the generalisation performance of such a system is demonstrated experimentally. © 2004 Elsevier B.V. All rights reserved.
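The exhaustive search over small ensembles described above can be sketched as follows (a minimal illustration assuming binary labels and a plain 0/1 majority vote; the function names are our own, not the authors'):

```python
from itertools import combinations

def majority_vote(preds):
    """Fuse per-classifier binary predictions (list of equal-length lists)
    by majority voting over the classifiers."""
    n = len(preds[0])
    return [1 if sum(p[i] for p in preds) * 2 > len(preds) else 0
            for i in range(n)]

def best_ensemble(all_preds, truth, sizes=(1, 3, 5)):
    """Exhaustively search odd-sized classifier subsets for the lowest
    majority-vote error — feasible only for small classifier pools."""
    best, best_err = None, float('inf')
    for k in sizes:
        for subset in combinations(range(len(all_preds)), k):
            fused = majority_vote([all_preds[i] for i in subset])
            err = sum(f != t for f, t in zip(fused, truth)) / len(truth)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err
```

Three classifiers that each err on different samples can fuse to a perfect vote, which is the complementarity the diversity-versus-error debate above is about.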
Sanchez, BN, Budtz-Jorgensen, E, Ryan, LM & Hu, H 2005, 'Structural equation models: A review with applications to environmental epidemiology', JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, vol. 100, no. 472, pp. 1443-1455.
View/Download from: Publisher's site
View description>>
Structural equation models (SEMs) have been discussed extensively in the psychometrics and quantitative behavioral sciences literature. However, many statisticians and researchers in other areas of application are relatively unfamiliar with their implementation. Here we review some of the SEM literature and describe basic methods, using examples from environmental epidemiology. We make connections to recent work on latent variable models for multivariate outcomes and to measurement error methods, and discuss advantages and disadvantages of SEMs compared with traditional regressions. We give a detailed example in which two models fit the same data well, yet one is physiologically implausible. This underscores the critical role of subject matter knowledge in the successful implementation of SEMs. A brief discussion on open research areas is included. © 2005 American Statistical Association.
Stark, PC, Celedon, JC, Chew, GL, Ryan, LM, Burge, HA, Muilenberg, ML & Gold, DR 2005, 'Fungal levels in the home and allergic rhinitis by 5 years of age', ENVIRONMENTAL HEALTH PERSPECTIVES, vol. 113, no. 10, pp. 1405-1409.
View/Download from: Publisher's site
View description>>
Studies have repeatedly demonstrated that sensitization to fungi, such as Alternaria, is strongly associated with allergic rhinitis and asthma in children. However, the role of exposure to fungi in the development of childhood allergic rhinitis is poorly understood. In a prospective birth cohort of 405 children of asthmatic/allergic parents from metropolitan Boston, Massachusetts, we examined in-home high fungal concentrations (> 90th percentile) measured once within the first 3 months of life as predictors of doctor-diagnosed allergic rhinitis in the first 5 years of life. In multivariate Cox regression analyses, predictors of allergic rhinitis included high levels of dust-borne Aspergillus [hazard ratio (HR) = 3.27; 95% confidence interval (CI), 1.50-7.14], Aureobasidium (HR = 3.04; 95% CI, 1.33-6.93), and yeasts (HR = 2.67; 95% CI, 1.26-5.66). The factors controlled for in these analyses included water damage or mold or mildew in the building during the first year of the child's life, any lower respiratory tract infection in the first year, male sex, African-American race, fall date of birth, and maternal IgE to Alternaria > 0.35 U/mL. Dust-borne Alternaria and non-sporulating and total fungi were also predictors of allergic rhinitis in models excluding other fungi but adjusting for all of the potential confounders listed above. High measured fungal concentrations and reports of water damage, mold, or mildew in homes may predispose children with a family history of asthma or allergy to the development of allergic rhinitis. Key words: allergic rhinitis, fungi, mold, respiratory health effects, water damage.
Stoianoff, NP & Kaidonis, MA 2005, 'Rehabilitation of mining sites: do taxation and accounting systems legitimise the privileged or serve the community?', Critical Perspectives on Accounting, vol. 16, no. 1, pp. 47-59.
View/Download from: Publisher's site
View description>>
This paper explores both accounting standards and the taxation provisions with respect to the treatment of rehabilitation costs of mining entities in Australia. A special tax deduction is allowed only for expenditure actually incurred, yet the accounting standard provides a different calculative practice for the representation of the same event. With this example we demonstrate inconsistencies that exist between accounting and tax, and although the accounting standard for income taxes accounts for the differences, we argue this merely legitimises them. We challenge the false consciousness that assumes these inconsistencies are merely incidental and point out that these two systems, of tax and accounting, implicitly sustain and reinforce each other. These institutional practices perpetuate the privileges, powers and impact of the mining industry, whilst claiming to serve the community as a whole.
Surkan, PJ, Ryan, LM, Bidwell, HW, Brooks, DR, Peterson, KE & Gillman, MW 2005, 'Psychosocial Correlates of Leisure-Time Physical Activity in Urban Working-Class Adults', Journal of Physical Activity and Health, vol. 2, no. 4, pp. 397-411.
View/Download from: Publisher's site
View description>>
Background: Limited data address psychosocial and environmental correlates of physical activity. Methods: We assessed associations of regular and recent leisure-time physical activity with physical/mental well-being, social support, and civic trust and reciprocity in a working-class Boston neighborhood. We surveyed 409 adults in 1999 to 2000 using methodology from the Behavioral Risk Factor Surveillance System. Results: Adjusted for demographic variables, correlates of regular physical activity included feeling energetic/healthy (odds ratio [OR] = 1.7, 95% confidence interval [CI] 1.3 to 2.3 for each one of four categories), feeling worried/tense/anxious (OR = 0.7, 95% CI 0.5 to 1.0), pain interfering with usual activities (OR = 0.5, 95% CI 0.3 to 0.8), feeling sad/blue/depressed (OR = 0.7, 95% CI 0.5 to 0.9), inadequate sleep/rest (OR = 0.8, 95% CI 0.6 to 1.0) and feeling satisfied with life (OR = 1.6, 95% CI 1.0 to 2.6, for very satisfied versus other). We found similar associations for participation in any physical activity. Conclusions: Lack of energy, anxiety, pain, sadness, poor sleep, and dissatisfaction with life were associated with low physical activity levels.
Wang, H, Wang, M, Hintz, T, Wu, Q & He, X 2005, 'VSA-based fractal image compression', 13th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2005, WSCG'2005 - In Co-operation with EUROGRAPHICS, Full Papers, vol. 13, no. 1-3, pp. 89-96.
View description>>
Spiral Architecture (SA) is a novel image structure which has hexagons rather than squares as the basic elements. Apart from many other advantages in image processing, SA has two beneficial characteristics with the potential to improve image compression performance, namely Locality of Pixel Density and Uniform Image Partitioning. Fractal image compression is a relatively recent image compression method which exploits similarities in different parts of the image. The basic idea is to represent an image as fixed points of Iterated Function Systems (IFS). Therefore, an input image can be represented by a series of IFS codes rather than pixels. In this way, a compression ratio as high as 10000:1 can be achieved. The application of fractal image compression presented in this paper is based on Spiral Architecture. Since there is no mature capture and display device for hexagon-based images, the experiments are implemented on a newly proposed mimic scheme, called Virtual Spiral Architecture (VSA). The experimental results in the paper show that introducing Spiral Architecture into fractal image compression improves the compression performance in image quality with little trade-off in compression ratio. Further research in this area could improve these results. Copyright UNION Agency - Science Press.
Wang, J, Wu, X & Zhang, C 2005, 'Support vector machines based on K-means clustering for real-time business intelligence systems', International Journal of Business Intelligence and Data Mining, vol. 1, no. 1, pp. 54-54.
View/Download from: Publisher's site
View description>>
Support vector machines (SVM) have been applied to build classifiers, which can help users make well-informed business decisions. Despite their high generalisation accuracy, the response time of SVM classifiers is still a concern when they are applied in real-time business intelligence systems, such as stock market surveillance and network intrusion detection. This paper speeds up the response of SVM classifiers by reducing the number of support vectors. This is done by the K-means SVM (KMSVM) algorithm proposed in this paper. The KMSVM algorithm combines the K-means clustering technique with SVM and requires one more input parameter to be determined: the number of clusters. The criterion and strategy to determine the input parameters in the KMSVM algorithm are given in this paper. Experiments compare the KMSVM algorithm with SVM on real-world databases, and the results show that the KMSVM algorithm can speed up the response time of classifiers by reducing support vectors while maintaining a testing accuracy similar to SVM. Copyright © 2005 Inderscience Enterprises Ltd.
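A minimal sketch of the KMSVM idea — replacing each class's training points with K-means centroids before fitting the SVM, so the final model has far fewer support vectors — might look like this (using scikit-learn for illustration; the authors' exact clustering strategy and parameter-selection criterion are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmsvm_fit(X, y, n_clusters=10, **svm_params):
    """Compress each class's training points to K-means centroids, then
    fit an SVM on the much smaller centroid set. Fewer training points
    means fewer support vectors and a faster classifier response."""
    Xs, ys = [], []
    for label in np.unique(y):
        pts = X[y == label]
        k = min(n_clusters, len(pts))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts)
        Xs.append(km.cluster_centers_)
        ys.extend([label] * k)
    return SVC(**svm_params).fit(np.vstack(Xs), np.array(ys))
```

The number of clusters plays the role of the extra input parameter mentioned in the abstract: it trades support-vector count (response time) against fidelity to the original training distribution.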
Wang, J, Zhang, C, Wu, X, Qi, H & Wang, J 2005, 'SVM-OD: SVM Method to Detect Outliers', Studies in Computational Intelligence, vol. 9, pp. 129-141.
View/Download from: Publisher's site
View description>>
Outlier detection is an important task in data mining because outliers can be either useful knowledge or noise. Many statistical methods have been applied to detect outliers, but they usually assume a given distribution of data and have difficulty dealing with high-dimensional data. The Statistical Learning Theory (SLT) established by Vapnik et al. provides a new way to overcome these drawbacks. Based on SLT, Schölkopf et al. proposed a v-Support Vector Machine (v-SVM) and applied it to detect outliers. However, it is still difficult for data mining users to decide a key parameter in v-SVM. This paper proposes a new SVM method to detect outliers, SVM-OD, which can avoid this parameter. We provide a theoretical analysis based on SLT as well as experiments to verify the effectiveness of our method. Moreover, an experiment on synthetic data shows that SVM-OD can detect some local outliers near a cluster with a given distribution while v-SVM cannot.
Yan, X, Zhang, C & Zhang, S 2005, 'ARMGA: IDENTIFYING INTERESTING ASSOCIATION RULES WITH GENETIC ALGORITHMS', Applied Artificial Intelligence, vol. 19, no. 7, pp. 677-689.
View/Download from: Publisher's site
View description>>
Apriori-like algorithms for association rules mining have relied on two user-specified thresholds: minimum support and minimum confidence. There are two significant challenges in applying these algorithms to real-world applications: database-dependent minimum support and an exponential search space. Database-dependent minimum support means that users must specify suitable thresholds for their mining tasks even though they may have no knowledge of their databases. To circumvent these problems, in this paper we design an evolutionary mining strategy, namely the ARMGA model, based on a genetic algorithm. Like general genetic algorithms, our ARMGA model is effective for global searching, especially when the search space is so large that deterministic searching methods are hardly feasible. Copyright © 2005 Taylor & Francis Inc.
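A genetic search for interesting rules in the spirit of ARMGA can be sketched as follows (a toy illustration with rules restricted to single-item antecedents and consequents, relative confidence as the fitness, and selection plus mutation only; this is not the authors' exact encoding or operators):

```python
import random

def fitness(rule, transactions):
    """Relative confidence of rule (a -> b): how much the consequent's
    probability rises when the antecedent is present. Positive values
    indicate an interesting association; no minimum-support threshold
    is needed."""
    a, b = rule
    n = len(transactions)
    supp_a = sum(a in t for t in transactions)
    supp_ab = sum(a in t and b in t for t in transactions)
    if supp_a == 0:
        return 0.0
    return supp_ab / supp_a - sum(b in t for t in transactions) / n

def armga_search(transactions, items, pop_size=20, generations=50,
                 pmut=0.2, seed=0):
    """Evolve a population of candidate rules: keep the fitter half each
    generation and refill with mutated copies (antecedent swapped for a
    random other item)."""
    rng = random.Random(seed)
    pop = [tuple(rng.sample(items, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda r: fitness(r, transactions),
                        reverse=True)
        survivors = scored[:pop_size // 2]
        children = []
        for a, b in survivors:
            if rng.random() < pmut:
                a = rng.choice([i for i in items if i not in (a, b)])
            children.append((a, b))
        pop = survivors + children
    return max(pop, key=lambda r: fitness(r, transactions))
```

Because the fitness is global rather than threshold-based, the user never has to supply a database-dependent minimum support, which is the point of the evolutionary formulation above.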
Yu, JX, Yuming Ou, Chengqi Zhang & Shichao Zhang 2005, 'Identifying Interesting Customers through Web Log Classification', IEEE Intelligent Systems, vol. 20, no. 3, pp. 55-59.
View/Download from: Publisher's site
View description>>
The use of web log classification to identify interesting customers from a small data set is discussed. Web mining is a popular technique for analyzing visitor activities in e-service systems and includes web text mining, web structure mining, and web log mining. Several groups of experiments, conducted on a Dell Workstation PWS650 with 2 Gbytes of main memory running Windows 2000, evaluate the web log mining technique. The results of classifying the 39,033 log records using the three classifiers, removing one attribute at a time, confirm that it is hard to determine which attribute to remove in order to achieve high accuracy.
Zhang, C, Zhang, Z & Cao, L 2005, 'Agents and Data Mining: Mutual Enhancement by Integration', Lecture Notes In Computer Science, vol. 3505, pp. 50-61.
View/Download from: Publisher's site
View description>>
This paper tells a story of synergism of two cutting edge technologies - agents and data mining. By integrating these two technologies, the power for each of them is enhanced. Integrating agents into data mining systems, or constructing data mining syste
Zhang, S, Wu, X, Zhang, J & Zhang, C 2005, 'A Decremental Algorithm for Maintaining Frequent Itemsets in Dynamic Databases', Lecture Notes in Computer Science, vol. 3589, pp. 305-314.
View/Download from: Publisher's site
View description>>
Data mining and machine learning must confront the problem of pattern maintenance because data updating is a fundamental operation in data management. Most existing data-mining algorithms assume that the database is static, and a database update requires rediscovering all the patterns by scanning the entire old and new data. While there are many efficient mining techniques for data additions to databases, in this paper we propose a decremental algorithm for pattern discovery when data is being deleted from databases. We conduct extensive experiments evaluating this approach and illustrate that the proposed algorithm can effectively model and capture useful interactions within the data as data is deleted. © Springer-Verlag Berlin Heidelberg 2005.
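The decremental maintenance idea — updating stored support counts as transactions are deleted rather than rescanning the whole database — can be sketched as follows (an illustrative simplification of the approach described above; function and parameter names are hypothetical):

```python
def decrement_counts(counts, deleted, minsup, n_remaining):
    """Maintain frequent itemsets under deletion.

    counts:      dict mapping frozenset itemsets to support counts in the
                 old database
    deleted:     iterable of deleted transactions (sets of items)
    minsup:      minimum support as a fraction of the database
    n_remaining: number of transactions left after deletion

    Returns the itemsets still frequent, with updated counts, without
    touching the surviving transactions at all.
    """
    for t in deleted:
        t = set(t)
        for itemset in counts:
            if itemset <= t:          # itemset occurred in the deleted row
                counts[itemset] -= 1
    threshold = minsup * n_remaining  # support bar shrinks with the database
    return {s: c for s, c in counts.items() if c >= threshold}
```

Note that deletion can only lower counts, so no previously infrequent itemset becomes frequent from the count update alone; the subtlety (and the reason a full algorithm is needed) is that the shrinking threshold can make formerly infrequent itemsets frequent, which this sketch does not recover.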
Abolhasan, M & Lipman, J 2005, 'Efficient and highly scalable route discovery for on-demand routing protocols in ad hoc networks', LCN 2005: 30th Conference on Local Computer Networks, Proceedings, Local Computer Networks, 2005. 30th Anniversary. The IEEE Conference on, IEEE, Sydney, Australia, pp. 358-365.
View/Download from: Publisher's site
View description>>
This paper presents a number of different route discovery strategies for on-demand routing protocols, which give each intermediate node more control during the route discovery phase to make intelligent forwarding decisions. This is achieved through the idea of self-selection. In self-selecting route discovery, each node independently makes route request (RREQ) forwarding decisions based upon a selection criterion or by satisfying certain conditions. Nodes that do not satisfy the selection criterion do not rebroadcast the routing packets. We implemented our self-selecting route discovery strategies over AODV using the GloMoSim network simulation package and compared their performance with existing route discovery strategies used in AODV. Our simulation results show that a significant drop in the number of control packets can be achieved by giving each intermediate node more authority for self-selection during route discovery. Furthermore, a significant increase in throughput is achieved as the number of nodes in the network is increased.
Abolhasan, M, Wysocki, T & Lipman, J 2005, 'Performance investigation on three-classes of MANET routing protocols', 2005 Asia-Pacific Conference on Communications (APCC), Vols 1 & 2, Communications, 2005 Asia-Pacific Conference on, Perth, WA, pp. 774-778.
View/Download from: Publisher's site
View description>>
Routing in Ad hoc Networks has received significant attention with a number of different routing protocols proposed in recent years. These routing protocols may be classified into three main categories: proactive, reactive and hybrid. Prior work aimed at comparing the performance of routing protocols has mainly focused on comparing reactive and proactive protocols [6] [4] [1]. In this paper, we present a simulation study of different routing protocols from all three categories. We also explore the benefits and performance of each routing category. Further, we present a discussion of future research directions for routing in Ad hoc Networks. © 2005 IEEE.
Liu, BH, Chou, CT, Lipman, J & Jha, S 2005, 'Using frequency division to reduce MAI in DS-CDMA wireless sensor networks', IEEE Wireless Communications and Networking Conference, 2005, IEEE, New Orleans, LA, pp. 657-663.
View/Download from: Publisher's site
Cao, L, Schurmann, R & Zhang, C 2005, 'Domain-driven in-depth pattern discovery: A practical methodology', AusDM 2005 Proc. - 4th Australasian Data Mining Conf. - Collocated with the 18th Australian Joint Conf. on Artificial Intelligence, AI 2005 and the 2nd Australian Conf. on Artifical Life, ACAL 2005, Australian Data Mining Conference, The University of Technology, Sydney, Sydney, Australia, pp. 101-114.
View description>>
Traditional data mining is a data-driven trial-and-error process. The patterns discovered via predefined models in this process are generic patterns. Generally, they are not really interesting or actionable for constraint-based real business. In order to work out patterns that are of interest and actionable in the real world, in-depth patterns are often essential. This type of pattern discovery is more likely to be a business- or industry-domain-driven, human-machine-cooperated process. The use of in-depth patterns requires the development of a more practical methodology than is presently available for guiding real-world data mining. This paper proposes such a practical data mining methodology, referred to as domain-driven in-depth pattern discovery (DDID-PD). The main idea of the DDID-PD methodology is to mine in-depth patterns through domain-driven iterative human-machine interaction in a constraint-based context. Using this methodology as a basis, we demonstrate some of our work in mining in-depth correlations in Australian Stock Exchange (ASX) data and preliminary research on developing a quality knowledge base for Centrelink interventions. The deployment of DDID-PD in ASX data mining tasks has shown that the methodology is practical and has potential for further improving the analysis of large quantities of data to identify patterns for practical use by industry and business.
Cao, L, Zhang, C, Luo, D, Chen, W & Zamani, N 2005, 'Integrative early requirements analysis for agent-based systems', Proceedings - HIS'04: 4th International Conference on Hybrid Intelligent Systems, 4th International Conference on Hybrid Intelligent Systems (HIS 04), IEEE COMPUTER SOC, Kitakyushu, JAPAN, pp. 118-123.
View description>>
Early requirements analysis (ERA) is quite significant for building agent-based systems, and goal-oriented requirements analysis is promising for agent-oriented ERA. In general, either visual modeling or formal specification is used for ERA; neither alone can capture requirements precisely and completely. In this paper, we present an integrative modeling framework for agent-oriented early requirements analysis that implements goal-oriented requirements analysis. The integrative modeling combines visual modeling and formal modeling: the extended i* framework is used for building visual models, while formal specifications complement the visual modeling to define and refine requirements. Both visual and formal models are outlined through a practical agent-based system, F-TRADE. The integrative modeling appears to model early requirements comprehensively and concretely, and to benefit refinement and conflict management in building agent systems. © 2005 IEEE.
Chang, E, Dillon, TS & Hussain, FK 2005, 'Trust and reputation relationships in service-oriented environments', Third International Conference on Information Technology and Applications, Vol 1, Proceedings, International Conference on Information Technology and Applications, IEEE Computer Society, Sydney, Australia, pp. 4-14.
View/Download from: Publisher's site
View description>>
Trust and trustworthiness play a major role in conducting business on the Internet in service-oriented environments. In defining trust for service-oriented environments, one needs to capture the notions of service level, service agreement, context and timeslots. The same applies to reputation, the opinion of third-party agents, which is used in determining trust and trustworthiness. Because of the complexity of the issues, and the fact that trust and reputation are essentially concerned with relationships, it is important to clearly define the notion of trust relationships and the notion of reputation relationships. In this paper, therefore, we clarify these definitions and introduce a graphical notation for representing these relationships.
Chang, E, Hussain, FK & Dillon, T 2005, 'CCCI metrics for the measurement of quality of e-service', 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Proceedings, IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IEEE CS Press, Compiegne, France, pp. 603-610.
View/Download from: Publisher's site
View description>>
The growing development of web-based trust and reputation systems in the 21st century will have a powerful social and economic impact on all business entities, and will make transparent quality assessment and customer assurance realities in distributed web-based service-oriented environments. The growth of web-based trust and reputation systems will be the foundation for web intelligence in the future. Trust and reputation systems help capture business intelligence by establishing customer relationships, learning consumer behaviour, capturing market reaction to products and services, disseminating customer feedback, buyers' opinions and end-user recommendations, and revealing dishonest services, unfair trading, biased assessment, discriminatory actions, fraudulent behaviours, and untrue advertising. The continuing development of these technologies will help improve professional business behaviour, sales, and the reputation of sellers, providers, products and services. In this paper, we present a new methodology for measuring trustworthiness, known as CCCI (Correlation, Commitment, Clarity, and Influence), that is used in the Trust and Reputation System. The methodology is based on determining the correlation between the originally committed services and the services actually delivered by a Trusted Agent in a business interaction over service-oriented networks, in order to determine the trustworthiness of the Trusted Agent.
Chang, E, Hussain, FK & Dillon, T 2005, 'Reputation ontology for reputation systems', ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS 2005: OTM 2005 WORKSHOPS, PROCEEDINGS, The International Conference on Semantic Web and Web Services, Springer, New York, USA, pp. 957-966.
View/Download from: Publisher's site
View description>>
The growing development of web-based reputation systems in the 21st century will have a powerful social and economic impact on both business entities and individual customers, because it makes quality assessment of products and services transparent, achieving customer assurance in distributed web-based reputation systems. Web-based reputation systems will be the foundation for web intelligence in the future. Trust and reputation help capture business intelligence by establishing customer trust relationships, learning consumer behavior, capturing market reaction to products and services, and disseminating customer feedback, buyers' opinions and end-user recommendations. They also reveal dishonest services, unfair trading, biased assessment, discriminatory actions, fraudulent behaviors, and untrue advertising. The continuing development of these technologies will help improve professional business behavior, sales, and the reputation of sellers, providers, products and services. Given the importance of reputation, in this paper we propose an ontology for reputation. In the business world we can consider the reputation of a product, the reputation of a service, or the reputation of an agent. We propose an ontology for these entities that can help us unravel and conceptualize the components of the reputation of each entity.
Chang, E, Hussain, FK & Dillon, TS 2005, 'Trustworthiness measure for e-service', PST 2005 - 3rd Annual Conference on Privacy, Security and Trust, Conference Proceedings, Annual Conference on Privacy, Security and Trust, University of New Brunswick, St Andrews, Canada, pp. 1-14.
View description>>
Traditionally, transactions were carried out face-to-face; now they are carried out over the Internet. The infrastructure for this business and information exchange could be client-server, peer-to-peer or mobile network environments, and very often users on the network carry out interactions in one of three forms: anonymous (no names are identified in the communication), pseudo-anonymous (nicknames are used in the communication), or non-anonymous (real names are used in the communication). Incapability or fraudulent practice can occur when the seller, business provider or buyer (the agents on the network) does not behave in the manner that is mutually agreed or understood, especially if published terms and conditions exist. This paper evaluates currently existing trustworthiness systems, points out that there is currently no standardized measurement system for Quality of Service, and outlines the methodology that we have developed for this purpose.
Chang, E, Thomson, P, Dillon, T & Hussain, F 2005, 'The Fuzzy and Dynamic Nature of Trust', Lecture Notes in Computer Science: Trust, Privacy, And Security In Digital Business, International Conference on Trust, Privacy and Security in Digital Business, Springer Berlin Heidelberg, Copenhagen, Denmark, pp. 161-174.
View/Download from: Publisher's site
View description>>
Trust is one of the most fuzzy, dynamic and complex concepts in both social and business relationships. The difficulty in measuring Trust and predicting Trustworthiness in service-oriented network environments leads to many questions. These include issue
Chang, EJ, Hussain, FK & Dillon, TS 2005, 'Fuzzy nature of trust and dynamic trust modeling in service oriented environments', Proceedings of the 2005 workshop on Secure web services, CCS05: 12th ACM Conference on Computer and Communications Security 2005, ACM, Fairfax, USA, pp. 1-10.
View/Download from: Publisher's site
View description>>
In this paper, we propose and describe six characteristics of trust. Based on the six proposed characteristics, we determine why trust is fuzzy. The term 'fuzzy' is used in this paper not in the sense of the precise definitions given in the Fuzzy Systems literature, but to indicate a certain vagueness, complexity or ill-definition, and qualitative characterization rather than quantitative representation. We then determine why, due to the six characteristics of trust, trust is dynamic. Finally, we propose a modeling language tool to model the fuzzy and dynamic nature of trust.
Chen, Q, Chen, Y-PP, Zhang, C & Zhang, S 2005, 'A framework for merging inconsistent beliefs in security protocol analysis', International Workshop on Data Engineering Issues in E-Commerce, International Workshop on Data Engineering Issues in E-Commerce, IEEE, Tokyo, Japan, pp. 119-124.
View/Download from: Publisher's site
View description>>
This paper proposes a framework for merging inconsistent beliefs in the analysis of security protocols. The merge operation is a procedure of computing the inferred beliefs of message sources and resolving the conflicts among the sources. Some security properties of secure messages are used to ensure the correctness of message authentication. Several instances are presented, which demonstrate that our method is useful in resolving inconsistent beliefs in secure messages. © 2005 IEEE.
Cheng, E & Piccardi, M 2005, 'Disjoint camera track matching by an illumination effects reduction and major colour spectrum histograms representation algorithms', Proceedings Image and Vision Computing New Zealand 2005, Image and Vision Computing Conference, Wickliffe Ltd, Dunedin, New Zealand, pp. 432-437.
Cheng, E, Piccardi, M & Jan, T 2005, 'Boat generated acoustic target signal detection by use of an adaptive median CFAR and multi-frame integration algorithm', Proceedings of EUSIPCO 2005, 13th European Signal Processing Conference (EUSIPCO 2005), EURASIP, Antalya, Turkey, pp. 1-4.
Cheng, ED, Piccardi, M & Jan, T 2005, 'Boat-generated acoustic target signal detection by use of an Adaptive Median CFAR and multi-frame integration algorithm', 13th European Signal Processing Conference, EUSIPCO 2005, pp. 13-16.
View description>>
In this paper, an Adaptive Median Constant False Alarm Rate (AMCFAR) and multi-frame post-detection integration algorithm is proposed for effective real-time automatic detection of boat-generated acoustic target signals. An observation space is created by sampling the input analog acoustic signal, dividing it into multiple frames, and transforming each frame into the frequency domain. In this observation space, the AMCFAR and post-detection integration algorithms keep a low constant false alarm rate while maintaining a relatively high detection rate. The proposed algorithm has been tested on several real acoustic signals from hydrophone sensors; statistical analysis and experimental results showed that it provides a very low false alarm rate and a relatively high detection rate in all cases.
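The median-CFAR thresholding idea described above can be sketched for a single frame as follows. This is a minimal illustration, not the authors' implementation: the guard/window sizes and the scale factor are hypothetical parameters, and the multi-frame integration stage is omitted.

```python
import random
from statistics import median

def median_cfar(spectrum, guard=2, window=16, scale=4.0):
    """Flag frequency bins whose power exceeds `scale` times the median
    power of the surrounding noise bins (guard cells excluded)."""
    n = len(spectrum)
    hits = []
    for i in range(n):
        lo = max(0, i - guard - window)
        hi = min(n, i + guard + window + 1)
        # noise estimate: median of the window around bin i, minus guard cells
        noise = [spectrum[j] for j in range(lo, hi) if abs(j - i) > guard]
        hits.append(spectrum[i] > scale * median(noise))
    return hits

# A strong tone at bin 50 stands out against exponential background noise.
rng = random.Random(0)
frame = [rng.expovariate(1.0) for _ in range(128)]
frame[50] += 40.0
hits = median_cfar(frame)
```

Because the threshold tracks the local median rather than the mean, an isolated strong target barely inflates the noise estimate, which is what keeps the false alarm rate roughly constant.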
Gunes, H & Piccardi, M 2005, 'Automatic visual recognition of face and body action units', Third International Conference on Information Technology and Applications, Vol 1, Proceedings, International Conference on Information Technology and Applications, IEEE, Sydney, Australia, pp. 668-673.
View/Download from: Publisher's site
View description>>
Expressive face and body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through these nonverbal means is one of the necessary skills both for humans and also for the computers to int
Gunes, H & Piccardi, M 2005, 'Fusing face and body display for bi-modal emotion recognition: Single frame analysis and multi-frame post integration', Affective Computing and Intelligent Interaction, Proceedings, International Conference on Affective Computing and Intelligent Interaction, Springer, Beijing, China, pp. 102-111.
View/Download from: Publisher's site
View description>>
This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence, single expressive frames are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from the individual modalities for mono-modal emotion recognition. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, the emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the facial modality alone. We further extend the affect analysis to whole image sequences by a multi-frame post-integration approach over the single-frame recognition results. In our experiments, post integration based on the fusion of face and body was shown to be more accurate than post integration based on the facial modality only.
Gunes, H & Piccardi, M 2005, 'Affect recognition from face and body: Early fusion vs. late fusion', International Conference on Systems, Man and Cybernetics, Vol 1-4, Proceedings, IEEE Conference on Systems, Man and Cybernetics, IEEE, Hawaii, USA, pp. 3437-3443.
View/Download from: Publisher's site
View description>>
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from the individual modalities. Secondly, we fuse facial expression and affective body gesture information, first at the feature level, in which the data from both modalities are combined before classification, and then at the decision level, in which we integrate the outputs of the monomodal systems using suitable criteria. We then evaluate these two fusion approaches against monomodal emotion recognition based on the facial expression modality only. In the experiments performed, the emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the facial modality alone. Moreover, fusion at the feature level gave better recognition than fusion at the decision level.
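The decision-level (late) fusion step described above can be illustrated with a product-rule combination of per-class scores. This is one common decision-level criterion, used here only as a sketch; the paper evaluates its own set of criteria, which need not match this one.

```python
def late_fusion(face_scores, body_scores):
    """Fuse per-class scores from two monomodal classifiers with the
    product rule, then renormalize so the fused scores sum to one."""
    fused = {c: face_scores[c] * body_scores[c] for c in face_scores}
    total = sum(fused.values()) or 1.0
    return {c: s / total for c, s in fused.items()}

# Both modalities lean towards 'happy'; fusion sharpens that agreement.
face = {'happy': 0.6, 'angry': 0.4}
body = {'happy': 0.7, 'angry': 0.3}
fused = late_fusion(face, body)
```

Feature-level (early) fusion, by contrast, would concatenate the two feature vectors before a single classifier sees them, so no score combination rule is needed there.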
Gunes, H & Piccardi, M 2005, 'Fusing face and body gesture for machine recognition of emotions', 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), International Workshop on Robot and Human Interactive Communication, IEEE, Nashville, USA, pp. 306-311.
He, S, Wang, H, Wu, Q & Hintz, TB 2005, 'Contractive IFS for fractal image compression on spiral architecture', Proceedings of 2005 Asia-Pacific Workshop on Visual Information Processing, Asia-Pacific Workshop on Visual Information Processing, IEEE, Hong Kong, China, pp. 171-176.
He, X, Hintz, T, Wu, Q & Zheng, L 2005, 'Number recognition using inductive learning on spiral architecture', Proceedings of the 2005 International Conference on Computer Vision, VISION'05, International Conference on Computer Vision, CSREA Press, Las Vegas, USA, pp. 58-62.
View description>>
In this paper, a number recognition algorithm on Spiral Architecture is proposed. This algorithm employs RULES-3 inductive learning method and template matching technique. The algorithm starts from a collection of samples of numbers or letters used in number plates. Edge maps of the samples are then detected based on Spiral Architecture. A set of rules are extracted using these samples by RULES-3. The rules describe the frequencies of 9 different edge masks appearing in the samples. Each mask is a cluster of 7 hexagonal pixels. In order to recognize a number plate, all characters (digits or letters) are tested one by one using the extracted rules. The number recognition is achieved by the frequencies of the 9 masks.
Hussain, FK, Chang, E & Dillon, TS 2005, 'Formalizing a grammar for reputation in peer-to-peer communication', MoMM 2005 Proceedings, International Conference on Advances in Mobile Computing and Multimedia, Australian Computer Society, Kuala Lumpur, Malaysia, pp. 81-96.
Hussain, O, Chang, E, Hussain, F, Dillon, T & Soh, B 2005, 'Context Based Riskiness Assessment', TENCON 2005 - 2005 IEEE Region 10 Conference, IEEE, Melbourne, Australia, pp. 1-6.
View/Download from: Publisher's site
View description>>
In almost every interaction the trusting peer might fear the likelihood of a loss of the resources involved in the transaction. This likelihood of loss is termed the Risk in the transaction. Hence, analyzing the Risk involved in a transaction is important for deciding whether or not to proceed with it. If a trusting peer is unfamiliar with a trusted peer and has not interacted with it before in a specific context, then it will ask for recommendations from other peers in order to determine the trusted peer's Riskiness value or reputation. In this paper we discuss the process of asking for recommendations from other peers in a specific context and assimilating those recommendations according to the criteria of the interaction, in order to determine the correct Riskiness value of the trusted peer.
Hussain, O, Chang, E, Hussain, FK, Dillon, TS & Soh, B 2005, 'A Methodology for Determining Riskiness in Peer-to-Peer Communications', Proceedings of the IEEE International Conference on Industrial Informatics (INDIN 05), INDIN, IEEE, Perth, Australia, pp. 421-432.
View description>>
Every activity has some Risk involved in it. Analyzing the Risk involved in a transaction is important for deciding whether or not to proceed with it. Similarly, in peer-to-peer communication, analyzing the Risk involved in undertaking a transaction with another peer is important. It would be much easier for the trusting peer to decide whether to proceed with a transaction if it knew the Risk that the trusted peer carries. In this paper we develop and propose such a methodology, which allows the trusting peer to rate the trusted peer in terms of the Risk it deserves after the transaction is over.
Hussain, O, Chang, E, Soh, B, Hussain, FK & Dillon, TS 2005, 'Factors of Risk Variance in Decentralized Communications', EICAR 2005 Conference Best Paper Proceedings, European Institute for Computer Anti-Virus Research EICAR Conference, Computer Associates, Saint Julians, Malta, pp. 129-137.
View description>>
Decentralized transactions are becoming increasingly popular. These transactions resemble the early forms of the internet and in many ways are regarded as the next generation of the internet. As a result, e-commerce transactions will shift to peer-to-peer communications rather than a client-server environment. However, these peer-to-peer communications, or decentralized transactions, suffer from some disadvantages, which include the risk associated with each transaction. This paper focuses on the factors that influence Risk in a decentralized transaction.
Hussain, OK, Chang, E, Hussain, FK, Dillon, TS & Soh, B 2005, 'Risk in Trusted Decentralized Communications', 21st International Conference on Data Engineering Workshops (ICDEW'05), IEEE, Tokyo, Japan, pp. 63-67.
View/Download from: Publisher's site
View description>>
Risk is associated with almost every activity undertaken in daily life. Risk is associated with trust, security and privacy. Risk is associated with transactions, businesses, information systems, environments, networks, partnerships, etc. Generally speaking, risk signifies the likelihood of financial losses, human casualties, business destruction and environmental damage. A risk indicator gives early warning to the party involved and helps avoid disasters. Until now, risk has been discussed extensively in the areas of investment, finance, health, environment, daily-life activities and engineering. However, there is no systematic study of risk in decentralised communication, which involves e-business, computer networks and service-oriented environments. In this paper, we define the risk associated with trusted communication in e-business and e-transactions, and provide risk indicator calculations and basic application areas.
Hussain, OK, Chang, E, Hussain, FK, Dillon, TS & Soh, B 2005, 'A methodology for determining riskiness in peer-to-peer communications', 2005 3rd IEEE International Conference on Industrial Informatics (INDIN), pp. 655-666.
View/Download from: Publisher's site
View description>>
Every activity has some Risk involved in it. Analyzing the Risk involved in a transaction is important for deciding whether or not to proceed with it. Similarly, in peer-to-peer communication, analyzing the Risk involved in undertaking a transaction with another peer is important. It would be much easier for the trusting peer to decide whether to proceed with a transaction if it knew the Risk that the trusted peer carries. In this paper we develop and propose such a methodology, which allows the trusting peer to rate the trusted peer in terms of the Risk it deserves after the transaction is over. © 2005 IEEE.
Li, J, Liu, H & Li, L 2005, 'Diagnostic Rules Induced by an Ensemble Method for Childhood Leukemia', Fifth IEEE Symposium on Bioinformatics and Bioengineering (BIBE'05), IEEE, Minneapolis, MN, pp. 246-249.
View/Download from: Publisher's site
Liberati, NB, Platen, E, Martini, F & Piccardi, M 2005, 'A multi-point distributed random variable accelerator for Monte Carlo simulation in finance', 5th International Conference on Intelligent Systems Design and Applications, Proceedings, International Conference on Intelligent Systems Design and Applications, IEEE, Wroclaw, Poland, pp. 532-537.
View/Download from: Publisher's site
View description>>
The pricing and hedging of complex derivative securities via Monte Carlo simulations of stochastic differential equations constitutes an intensive computational task. To achieve 'real time' execution, as often required by financial institutions, one needs highly efficient implementations of the multi-point distributed random variables underlying the simulations. In this paper a fast and flexible dedicated hardware solution is proposed. A comparative performance analysis demonstrates that the hardware solution is bottleneck-free and flexible, and significantly increases the computational efficiency of the software solution. © 2005 IEEE.
Liberati, NB, Platen, E, Martini, F & Piccardi, M 2005, 'An FPGA generator for multipoint distributed random variables (abstract only)', Proceedings of the 2005 ACM/SIGDA 13th International Symposium on Field-Programmable Gate Arrays, FPGA05: ACM/SIGDA International Symposium on Field Programmable Gate Arrays 2005, ACM, p. 280.
View/Download from: Publisher's site
View description>>
Multi-point distributed random variables whose moments match those of a Gaussian random variable up to a certain order play an important role in Monte Carlo simulations of weak approximations of stochastic differential equations. In applications such as finance, where "real time" execution is required, there is a strong need for highly efficient implementations. In this paper a fast and flexible dedicated hardware solution on a Field Programmable Gate Array (FPGA) is presented. A comparative performance analysis between a software-only and the proposed hardware solution demonstrates that the FPGA solution is bottleneck-free, retains the flexibility of the software solution and significantly increases the computational efficiency.
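A standard example of such a variable from the stochastic-numerics literature (not necessarily the exact distribution implemented in the authors' FPGA) is the three-point variable that takes the values ±√(3Δt) with probability 1/6 each and 0 with probability 2/3; its moments agree with those of a N(0, Δt) Gaussian increment up to order five, which is what weak second-order schemes require.

```python
import random

def three_point_increment(dt, rng=random):
    """Draw one three-point distributed increment for step size dt:
    +sqrt(3*dt) w.p. 1/6, -sqrt(3*dt) w.p. 1/6, 0 w.p. 2/3."""
    u = rng.random()
    root = (3.0 * dt) ** 0.5
    if u < 1.0 / 6.0:
        return root
    if u < 2.0 / 6.0:
        return -root
    return 0.0

def analytic_moment(dt, k):
    """k-th moment of the three-point distribution, computed exactly
    from the outcome values and probabilities (the 0 outcome adds nothing)."""
    x = (3.0 * dt) ** 0.5
    return (x ** k + (-x) ** k) / 6.0
```

Checking the first five moments against the Gaussian values (0, Δt, 0, 3Δt², 0) confirms the moment-matching property the abstract relies on, and explains why such a cheap discrete generator can replace a Gaussian one in weak-approximation Monte Carlo.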
Lin, L, Cao, L & Zhang, C 2005, 'Genetic algorithms for robust optimization in financial applications', Proceedings of the IASTED International Conference on Computational Intelligence, IASTED International Conference on Computational Intelligence, ACTA Press, Calgary, Canada, pp. 387-391.
View description>>
In stock markets and other financial market systems, technical trading rules are widely used to generate buy and sell signals, and each rule has many parameters. Users often want to obtain the best signal series from the in-sample sets (here, 'best' means the greatest profit, return, Sharpe ratio, etc.), but the best parameter setting in-sample is often not the best in the out-of-sample sets; sometimes it does not work at all. In this paper, the authors set each parameter to a sub-range of values instead of a single value, such that every value in the sub-range gives a better prediction on the out-of-sample sets. The resulting rules are robust and perform better in experiments.
Lin, L, Cao, L & Zhang, C 2005, 'The fish-eye visualization of foreign currency exchange data streams', Conferences in Research and Practice in Information Technology Series, Asia-Pacific Symposium on Information Visualisation, ACS, Sydney, Australia, pp. 91-96.
View description>>
In a foreign currency exchange market there are high-density data streams. Existing approaches to visualizing this type of data cannot show both local details and the global trend in a single view. In this paper, based on the features and attributes of foreign currency exchange trading streams, we discuss and compare multiple approaches, including interactive zooming, multiform sampling combined with the attributes of large foreign exchange data, and fish-eye-view-embedded visualization, for the visual display of high-density foreign currency exchange transactions. By comparison, fish-eye-based visualization is the best option: it can display individual records in detail without losing the global movement trend of the market within a limited display window. We used fish-eye technology for the output visualization of foreign currency exchange trading strategies in our trading support system, linked to real-time foreign currency market closing data. © 2005, Australian Computer Society, Inc.
Lin, L, Cao, L & Zhang, C 2005, 'The Visualization of Large Database in Stock Markets', Proceedings of the IASTED International Conference on Databases and Applications, IASTED International Multi-Conference, ACTA Press, Innsbruck, Austria, pp. 163-166.
Liu, J, Li, SS, He, XJ & Wu, Q 2005, 'A study of fractal based watermarking for images', DCABES and ICPACE Joint Conference on Distributed Algorithms for Science and Engineering, Joint Meeting of the International Symposium on Distributed Computing and Applications to Business, Engineering and Science and the International Conference on Parallel Algorithms and Computing Environments, University of Greenwich, Maritime Greenwich Campus, Greenwich, England, pp. 127-130.
View description>>
In this paper, we provide a study of the fractal-based watermarking techniques available today. Fractal coding is a technique that makes use of the self-similarity of natural phenomena with irregular shapes. Only in recent years has it been used in image coding a
Cao, L, Zhang, C, Luo, D, Chen, W & Zamani, N 2005, 'Integrative Early Requirements Analysis for Agent-Based Systems', Fourth International Conference on Hybrid Intelligent Systems (HIS'04), IEEE, Kitakyushu, Japan, pp. 1-6.
View/Download from: Publisher's site
View description>>
Early requirements analysis (ERA) is quite significant for building agent-based systems. Goal-oriented requirements analysis is promising for the agent-oriented early requirements analysis. In general, either visual modeling or formal specifications is u
Cao, L, Zhang, C & Ni, J 2005, 'Agent Services-Oriented Architectural Design of Open Complex Agent Systems', IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IEEE, Compiegne, France, pp. 120-123.
View/Download from: Publisher's site
View description>>
Architectural design is a critical phase in building agent-based systems. However, most existing agent-oriented software engineering approaches deliver weak or incomplete support for the architectural design of distributed, and especially Internet-based, agent systems. On the other hand, the emergence of service-oriented computing (SOC) brings intrinsic mechanisms that complement agent-based computing (ABC). In this paper, we investigate the dialogue between ABC and SOC and their integration in architectural design. We synthesize them into the computational concept of an agent service, and build a new design approach called agent service-oriented architectural design (ASOAD). ASOAD expands the content and range of agents and ABC, and combines the qualities of SOC, such as interoperability and openness, with the strengths of ABC, such as flexibility and autonomy. It is suitable for designing distributed agent systems and agent-service-based enterprise application integration.
Lu, S, Zhang, J & Feng, D 2005, 'Classification of Moving Humans Using Eigen-Features and Support Vector Machines', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11th International Conference on Computer Analysis of Images and Patterns, CAIP 2005, Springer Berlin Heidelberg, Versailles, pp. 522-529.
View/Download from: Publisher's site
View description>>
This paper describes a method of categorizing the moving objects using eigen-features and support vector machines. Eigen-features, generally used in face recognition and static image classification, are applied to classify the moving objects detected fro
Luo, D, Luo, C & Zhang, C 2005, 'A Framework for Relational Link Discovery', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Berlin Heidelberg, pp. 1311-1314.
View/Download from: Publisher's site
View description>>
Link discovery is an emerging research direction for extracting evidences and links from multiple data sources. This paper proposes a self-organizing framework for discovering links from multi-relational databases. It includes main functional modules for developing adaptive data transformers and representation specification, multi-relational feature construction, and self-organizing multi-relational correlation and link discovery algorithms. © Springer-Verlag Berlin Heidelberg 2005.
Madden, CS & Piccardi, M 2005, 'Height measurement as a session-based biometric for people matching across disjoint camera views', Proceedings Image and Vision Computing New Zealand 2005, Image and Vision Computing Conference, Wickliffe Ltd, Dunedin, New Zealand, pp. 282-286.
Martini, F, Piccardi, M, Liberati, NB & Platen, E 2005, 'A hardware generator for multi-point distributed random variables', 2005 IEEE International Symposium on Circuits and Systems (ISCAS), Vols 1-6, Conference Proceedings, International Symposium on Circuits and Systems, IEEE Computer Society Press, Kobe, Japan, pp. 1702-1705.
View/Download from: Publisher's site
View description>>
In Monte Carlo simulation of weak approximations of stochastic differential equations, multi-point distributed random variables play an important role. However, they need highly efficient implementations to meet the 'real-time' requirements of applications such as the pricing of complex derivative securities. In this paper a fast and flexible dedicated hardware generator of multi-point distributed random variables on a Field Programmable Gate Array (FPGA) is presented. A comparative performance analysis demonstrates that the hardware solution is bottleneck-free, retains the flexibility of a traditional software implementation and significantly increases the computational efficiency of the overall simulation. © 2005 IEEE.
Mathew, R, Yu, Z & Zhang, J 2005, 'Detecting New Stable Objects in Surveillance Video', 2005 IEEE 7th Workshop on Multimedia Signal Processing, IEEE, Shanghai.
View/Download from: Publisher's site
View description>>
We describe a novel method to detect new stable objects in video. This includes detecting new objects that appear in a scene and remain stationary for a period of time. Examples include detecting a dropped bag or a parked car. Our method utilizes the sta
McCarty, KM, Houseman, A, Quamruzzaman, Q, Rahman, M, Mahiuddin, G, Smith, T, Ryan, L & Christiani, DC 2005, 'The impact of age and gender on arsenic methylation capacity', American Journal of Epidemiology, Joint Meeting of the Society for Epidemiologic Research and the Canadian Society for Epidemiology and Biostatistics, Oxford University Press, Toronto, Canada, pp. S30-S30.
McCarty, KM, Houseman, A, Su, L, Quamruzzaman, Q, Rahman, M, Mahiuddin, G, Smith, T, Ryan, L & Christiani, DC 2005, 'Drinking water exposure to arsenic, polymorphisms in GSTT1, GSTM1 and GSTP1 and methylation capacity', Epidemiology, 17th Annual Conference of the International Society for Environmental Epidemiology, Lippincott Williams & Wilkins, Johannesburg, South Africa, pp. S113-S113.
View/Download from: Publisher's site
Milton, J, Kennedy, P & Mitchell, H 2005, 'The effect of mutation on the accumulation of information in a genetic algorithm', AI 2005: Advances in Artificial Intelligence, Australasian Joint Conference on Artificial Intelligence, Springer, Sydney, Australia, pp. 360-368.
View/Download from: Publisher's site
View description>>
We use an information theory approach to investigate the role of mutation on Genetic Algorithms (GA). The concept of solution alleles representing information in the GA and the associated concept of information density, being the average frequency of solution alleles in the population, are introduced. Using these concepts, we show that mutation applied indiscriminately across the population has, on average, a detrimental effect on the accumulation of solution alleles within the population and hence the construction of the solution. Mutation is shown to reliably promote the accumulation of solution alleles only when it is targeted at individuals with a lower information density than the mutation source. When individuals with a lower information density than the mutation source are targeted for mutation, very high rates of mutation can be used. This significantly increases the diversity of alleles present in the population, while also increasing the average occurrence of solution alleles.
Ni, A, Zhu, X & Zhang, C 2005, 'Any-Cost Discovery: Learning Optimal Classification Rules', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Australasian Joint Conference on Artificial Intelligence, Springer Berlin Heidelberg, Sydney, Australia, pp. 123-132.
View/Download from: Publisher's site
View description>>
Fully taking into account the hints possibly hidden in absent data, this paper proposes a new criterion for selecting splitting attributes when building a decision tree for a given dataset. In our approach, a certain cost must be paid to obtain an attribute value, and a cost is incurred if a prediction is in error. We use different scales for the two kinds of cost, instead of the single cost scale assumed in previous work. We propose a new algorithm to build a decision tree with a null-branch strategy to minimize the misclassification cost. When the consumer has finite resources, the tree makes the best use of those resources while obtaining optimal results. We also consider discounts in test costs when groups of attributes are tested together, and we offer advice on whether it is worthwhile to increase the resources. Our results can be readily applied to real-world diagnosis tasks, such as medical diagnosis, where doctors must determine which tests should be performed for a patient to minimize the misclassification cost under given resources. © Springer-Verlag Berlin Heidelberg 2005.
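The two-scale trade-off described above can be caricatured as follows. This is an illustrative sketch only: the function names, the exchange rate between the resource scale and the misclassification-cost scale, and all numbers are assumptions, not the paper's algorithm.

```python
def expected_misclass_cost(class_counts, cost_per_error):
    """Expected misclassification cost at a node that predicts the
    majority class, given a dict of class -> example count."""
    total = sum(class_counts.values())
    if total == 0:
        return 0.0
    majority = max(class_counts.values())
    return (total - majority) / total * cost_per_error

def worth_testing(parent_counts, branch_counts, test_cost,
                  cost_per_error=10.0, exchange_rate=1.0):
    """Decide whether paying `test_cost` (resource scale) is justified by
    the expected drop in misclassification cost (error scale), converted
    through an assumed exchange rate between the two scales.
    branch_counts: list of (weight_fraction, class_counts) per branch."""
    before = expected_misclass_cost(parent_counts, cost_per_error)
    after = sum(w * expected_misclass_cost(c, cost_per_error)
                for w, c in branch_counts)
    return (before - after) * exchange_rate > test_cost
```

Keeping the two costs on separate scales, as the abstract argues, means a cheap but barely informative test and an expensive but decisive one can be compared honestly rather than through a single merged cost figure.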
Ni, J & Zhang, C 2005, 'An Efficient Implementation of the Backtesting of Trading Strategies', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), IEEE International Symposium on Parallel and Distributed Processing with Applications, Springer Berlin Heidelberg, Nanjing, China, pp. 126-131.
View/Download from: Publisher's site
View description>>
Some trading strategies are becoming more and more complicated and utilize a large amount of data, which makes the backtesting of these strategies very time consuming. This paper presents an efficient implementation of the backtesting of such a trading strategy using a parallel genetic algorithm (PGA) which is fine tuned based on thorough analysis of the trading strategy. The reuse of intermediate results is very important for such backtesting problems. Our implementation can perform the backtesting within a reasonable time range so that the tested trading strategy can be properly deployed in time. © Springer-Verlag Berlin Heidelberg 2005.
Piccardi, M & Cheng, ED 2005, 'Track matching over disjoint camera views based on an incremental major color spectrum histogram', Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance 2005, IEEE, Como, Italy, pp. 147-152.
View/Download from: Publisher's site
View description>>
Matching tracks of a single individual across disjoint camera views is a challenging task in video surveillance. In this paper, a Major Color Spectrum Histogram Representation (MCSHR) is introduced to represent a moving object, using a normalized distance between two points in RGB space. An incremental MCSHR is then proposed to cope with small pose changes and segmentation errors occurring along the track. Finally, a similarity measurement algorithm based on the incremental MCSHR is proposed to measure the similarity of any two tracked moving objects, and it proved capable of measuring this similarity accurately. Experimental results show that with three to five frames of integration, the proposed incremental MCSHR algorithm makes matching more robust and reliable than single-frame matching, especially under small pose changes; integrating more than five frames brings no obvious further improvement. The similarity between two tracks of the same moving object improved from 92% to 95% as the number of integrated frames increased from three to five, while two different moving objects were easily discriminated. The proposed algorithm can be used to match tracks of single individuals in camera networks that do not provide full coverage of the monitored space. © 2005 IEEE.
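The major-colour matching idea described above can be sketched minimally as follows. The quantization step and the coverage fraction are illustrative assumptions, and this sketch clusters colours by coarse quantization rather than the paper's normalized RGB distance or its incremental update.

```python
from collections import Counter

def major_colors(pixels, quant=64, keep=0.9):
    """pixels: iterable of (r, g, b). Quantize the colours, then keep the
    most frequent ones until they cover `keep` of the object's area.
    Returns a dict mapping quantized colour -> proportion of pixels."""
    counts = Counter((r // quant, g // quant, b // quant) for r, g, b in pixels)
    total = sum(counts.values())
    major, covered = {}, 0
    for color, n in counts.most_common():
        major[color] = n / total
        covered += n
        if covered / total >= keep:
            break
    return major

def similarity(a, b):
    """Overlap of two major-colour profiles: sum of the smaller
    proportion over colours present in both."""
    return sum(min(a[c], b[c]) for c in a.keys() & b.keys())

# A red object seen under slightly shifted colours still matches itself;
# a green object does not match it at all.
a = major_colors([(200, 0, 0)] * 9 + [(0, 0, 200)])
b = major_colors([(210, 10, 10)] * 9 + [(0, 200, 0)])
c = major_colors([(0, 200, 0)] * 10)
```

Restricting the profile to the few colours covering most of the object is what makes the representation compact enough to compare tracks cheaply across cameras.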
Wu, Q, He, X & Hintz, T 2005, 'Bi-Lateral Filtering Based Edge Detection on Hexagonal Architecture', Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), IEEE.
View/Download from: Publisher's site
View description>>
Edge detection plays an important role in image processing but is still an open problem. This paper presents a novel edge detection method based on bi-lateral filtering, which achieves better performance than single Gaussian filtering. In this form of filtering, both the spatial closeness and the intensity similarity of pixels are considered, in order to preserve important visual cues provided by edges while reducing the sharpness of transitions in intensity values. In addition, the proposed edge detection method operates on hexagonally sampled images. Due to the compact and circular nature of the hexagonal lattice, a better-quality edge map is obtained on the hexagonal architecture than with common edge detection on the square architecture. Experimental results using the proposed method exhibit encouraging performance. © 2005 IEEE.
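The bilateral weighting described above can be sketched in one dimension: each neighbour's contribution is damped both by spatial distance and by intensity difference, so smoothing stops at edges. The sigma values here are illustrative, not the paper's settings, and this sketch does not model the hexagonal lattice the paper works on.

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Bilateral filter on a 1-D signal: weights are the product of a
    spatial Gaussian (distance i-j) and a range Gaussian (intensity
    difference), so values across a sharp edge get negligible weight."""
    out = []
    for i, v in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((signal[j] - v) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# A step edge survives: samples on each side stay near their own level.
out = bilateral_1d([0.0] * 5 + [100.0] * 5)
```

A single Gaussian filter with the same spatial sigma would blur the step across several samples, which is exactly the edge degradation the bilateral range term prevents.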
Riedel, S & Gabrys, B 2005, 'Hierarchical Multilevel Approaches of Forecast Combination', Operations Research Proceedings 2004, Annual International Conference of the German Operations Research Society, Springer Berlin Heidelberg, Tilburg, Netherlands, pp. 479-486.
View/Download from: Publisher's site
Stoianoff, NP 2005, 'Biotechnology Patents: the effect of the House of Lords decision in Kirin-Amgen Inc v Hoechst Marion Roussel Limited', QMIPRSI Seminar, Queen Mary Intellectual Property Research Institute, University of London, UK.
Stoianoff, NP 2005, 'Intellectual Property Law in China', Australian Intellectual Property Academics Conference, University of Tasmania, Hobart, Tasmania.
Wang, H, Wang, M, Hintz, T, He, X & Wu, Q 2005, 'Fractal image compression on a pseudo Spiral Architecture', Conferences in Research and Practice in Information Technology Series, Australasian Computer Science Conference, ACM, Newcastle, Australia, pp. 201-208.
View description>>
Fractal image compression is a relatively recent image compression method that exploits similarities between different parts of an image. The basic idea is to represent an image by a set of fractals, each of which is the fixed point of an Iterated Function System (IFS). An input image can therefore be represented by a series of IFS codes rather than pixels, and in this way an impressive compression ratio of 10000:1 can be achieved. The application of fractal image compression presented in this paper is based on a novel image structure, Spiral Architecture, which has hexagonal rather than square pixels as its basic element. The evidence presented in the paper suggests that introducing Spiral Architecture into fractal image compression improves the compression ratio with little loss of image quality. Much research could still be done in this area to further improve the results. Copyright © 2005, Australian Computer Society, Inc.
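As an illustrative note on the fixed-point idea the abstract relies on: in fractal (partitioned-IFS) coding, each range block is stored as a contractive map of a larger domain block, and decoding simply iterates those maps from any starting image until the fixed point is reached. The toy 1-D decoder below (block sizes, parameter names, and the encoding scheme are all illustrative, not the paper's Spiral Architecture method) shows that convergence is independent of the starting signal.

```python
def decode_pifs(transforms, n, iterations=30):
    """Decode a toy 1-D partitioned IFS. `transforms` maps each range-block
    index r to (domain_start, scale, offset): range block r is rebuilt as
    scale * (downsampled domain block) + offset. Because |scale| < 1 each
    map is contractive, so iterating from ANY start converges to the unique
    fixed point -- the decoded signal."""
    signal = [0.0] * n          # arbitrary starting signal
    block = 2                   # range block size; domain blocks are 2x larger
    for _ in range(iterations):
        new = signal[:]
        for r, (d, s, o) in transforms.items():
            for i in range(block):
                # average adjacent pairs to shrink the domain block to range size
                dom = (signal[d + 2 * i] + signal[d + 2 * i + 1]) / 2
                new[r * block + i] = s * dom + o
        signal = new
    return signal
```

The compression win comes from storing only `(domain_start, scale, offset)` per block instead of pixel values; the decoder regenerates the image from those few IFS codes.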
Jia, W, Zhang, H, He, X & Piccardi, M 2005, 'Mean shift for accurate license plate localization', Proceedings of the 2005 IEEE Intelligent Transportation Systems Conference, IEEE, Vienna, Austria, pp. 566-571.
View/Download from: Publisher's site
View description>>
This paper presents a region-based algorithm for accurate license plate localization, in which mean shift is used to filter and segment color vehicle images into candidate regions. Three features are extracted to decide whether a candidate region represents a real license plate: rectangularity, aspect ratio, and edge density. A Mahalanobis classifier over these three features then separates license plate regions from non-license-plate regions. Experimental results show that the proposed algorithm achieves high robustness and accuracy. © 2005 IEEE.
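As an illustrative note on the segmentation step the abstract mentions: mean shift moves a query point uphill in the estimated density until it reaches a local mode, and points converging to the same mode form one region. The 1-D toy below (the paper applies the idea to color vehicle images; the function and parameters here are illustrative) shows the core update.

```python
import math

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Mode seeking: repeatedly replace the query point with the
    Gaussian-weighted mean of all data points, which climbs the
    kernel density estimate toward the nearest local mode."""
    x = start
    for _ in range(iters):
        num = den = 0.0
        for p in points:
            w = math.exp(-((p - x) ** 2) / (2 * bandwidth ** 2))
            num += w * p
            den += w
        x = num / den
    return x
```

In the image setting the same update runs in the joint spatial-color space, so pixels drawn to the same color mode end up in the same candidate region.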
Xu, G, Zhang, Y & Zhou, X 2005, 'Towards User Profiling for Web Recommendation', Lecture Notes in Computer Science, 18th Australian Joint Conference on Artificial Intelligence, Springer Berlin Heidelberg, Sydney, Australia, pp. 415-424.
View/Download from: Publisher's site
Xu, G, Zhang, Y & Zhou, X 2005, 'Using probabilistic latent semantic analysis for web page grouping', Proceedings of the IEEE International Workshop on Research Issues in Data Engineering, 15th International Workshop on Research Issues in Data Engineering: Stream Data Mining and Applications, IEEE Computer Society, Tokyo, Japan, pp. 29-36.
View description>>
The locality of web pages within a web site is initially determined by the designer's expectations. Web usage mining can discover patterns in the navigational behaviour of web visitors and, in turn, improve web site functionality and service design by taking users' actual opinions into account. Conventional web page clustering techniques are often utilized to reveal the functional similarity of web pages; however, a high-dimensional computation problem is incurred because user transactions are taken as dimensions. In this paper, we propose a new web page grouping approach based on the Probabilistic Latent Semantic Analysis (PLSA) model. An iterative algorithm based on the maximum likelihood principle is employed to overcome the aforementioned computational shortcoming. Web pages are classified into groups according to user access patterns, while the latent semantic factors, or tasks, are characterized by extracting the content of the 'dominant' pages related to each factor. We demonstrate the effectiveness of our approach by conducting experiments on real-world data sets. © 2005 IEEE.
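As an illustrative note on the model the abstract names: PLSA factors a session-by-page co-occurrence matrix as P(s,p) = Σ_z P(z)P(s|z)P(p|z) and fits the parameters with EM. The toy pure-Python fitter below is a generic PLSA sketch (initialisation, variable names, and iteration counts are illustrative; it is not the authors' implementation).

```python
import random

def plsa(counts, n_z, iters=50, seed=0):
    """Fit P(s,p) = sum_z P(z) P(s|z) P(p|z) to a session-by-page count
    matrix with EM: E-step computes the posterior P(z|s,p); M-step
    re-estimates the three distributions from posterior-weighted counts."""
    rng = random.Random(seed)
    n_s, n_p = len(counts), len(counts[0])

    def rand_dist(n):                      # random normalised distribution
        v = [rng.random() + 0.1 for _ in range(n)]
        t = sum(v)
        return [x / t for x in v]

    p_z = rand_dist(n_z)
    p_s_z = [rand_dist(n_s) for _ in range(n_z)]
    p_p_z = [rand_dist(n_p) for _ in range(n_z)]
    for _ in range(iters):
        nz = [0.0] * n_z                   # posterior-weighted count sums
        ns = [[0.0] * n_s for _ in range(n_z)]
        np_ = [[0.0] * n_p for _ in range(n_z)]
        for s in range(n_s):
            for p in range(n_p):
                if counts[s][p] == 0:
                    continue
                # E-step: P(z|s,p) proportional to P(z) P(s|z) P(p|z)
                post = [p_z[z] * p_s_z[z][s] * p_p_z[z][p] for z in range(n_z)]
                t = sum(post) or 1.0
                for z in range(n_z):
                    c = counts[s][p] * post[z] / t
                    nz[z] += c
                    ns[z][s] += c
                    np_[z][p] += c
        # M-step: renormalise the weighted counts
        total = sum(nz) or 1.0
        p_z = [v / total for v in nz]
        p_s_z = [[ns[z][s] / (nz[z] or 1.0) for s in range(n_s)] for z in range(n_z)]
        p_p_z = [[np_[z][p] / (nz[z] or 1.0) for p in range(n_p)] for z in range(n_z)]
    return p_z, p_s_z, p_p_z
```

The key computational point from the abstract: the model has only (n_z + n_z·n_s + n_z·n_p) parameters, so grouping pages via the small set of latent factors avoids clustering in the full transaction-dimensional space.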
Xu, G, Zhang, Y, Ma, J & Zhou, X 2005, 'Discovering user access pattern based on probabilistic latent factor model', Conferences in Research and Practice in Information Technology Series, 16th Australasian Database Conference, Australian Computer Society, Newcastle, Australia, pp. 27-35.
View description>>
There has been an increased demand for characterizing user access patterns using web mining techniques, since the informative knowledge extracted from web server log files offers benefits not only for web site structure improvement but also for a better understanding of user navigational behavior. In this paper, we present a web usage mining method that utilizes web usage and page linkage information to capture user access patterns based on the Probabilistic Latent Semantic Analysis (PLSA) model. A specific probabilistic model analysis algorithm, the EM algorithm, is applied to the integrated usage data to infer the latent semantic factors and to generate user session clusters that reveal user access patterns. Experiments have been conducted on a real-world data set to validate the effectiveness of the proposed approach. The results show that the presented method is capable of characterizing the latent semantic factors and generating user profiles in terms of weighted page vectors, which reflect the common access interests exhibited by users within the same session cluster. © 2005, Australian Computer Society, Inc.
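As an illustrative note on the session-clustering step the abstract describes: once PLSA parameters are estimated, each session can be assigned to its dominant latent factor via the posterior P(z|s) ∝ P(z)P(s|z). The sketch below assumes the distributions P(z) and P(s|z) are already available from an EM fit (the function name and representation are illustrative, not the authors' code).

```python
def cluster_sessions(p_z, p_s_z):
    """Assign each session s to argmax_z P(z|s), where
    P(z|s) is proportional to P(z) * P(s|z). `p_z` is the factor prior
    and `p_s_z[z][s]` is P(s|z); the shared denominator P(s) cancels."""
    n_s = len(p_s_z[0])
    labels = []
    for s in range(n_s):
        scores = [p_z[z] * p_s_z[z][s] for z in range(len(p_z))]
        labels.append(scores.index(max(scores)))
    return labels
```

A user profile for each cluster can then be built as the weighted mean of its member sessions' page vectors, matching the abstract's "weighted page vector" description.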
Zhang, C & Zhang, S 2005, 'In-Depth Data Mining and Its Application in Stock Market', Lecture Notes in Artificial Intelligence, 1st International Conference on Advanced Data Mining and Applications, Springer Berlin Heidelberg, Wuhan, China, p. 13.
View/Download from: Publisher's site
Zhang, C, Xu Yu, J & Zhang, S 2005, 'Identifying Interesting Patterns in Multidatabases', Classification and Clustering for Knowledge Discovery, 9th International Conference on Neural Information Processing / 4th Asia-Pacific Conference on Simulated Evolution and Learning / 1st International Conference on Fuzzy Systems and Knowledge Discovery, Springer Berlin Heidelberg, Singapore, pp. 91-112.
View/Download from: Publisher's site
Zhang, H, Jia, W, He, X & Wu, Q 2005, 'Modified Color Ratio Gradient', 2005 IEEE 7th Workshop on Multimedia Signal Processing, IEEE, Shanghai, China, pp. 317-320.
View/Download from: Publisher's site
View description>>
Color ratio gradient is an efficient method for color image retrieval and object recognition, shown to be illumination-independent and geometry-insensitive when tested on scenery images. However, color ratio gradient produces unsatisfactory matching results when dealing with relatively uniform objects that lack rich color texture, and its performance degrades when processing unsaturated color image objects. In this paper, a modified color ratio gradient scheme is presented that addresses both problems. Experimental results using the proposed method exhibit more robust performance.
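As an illustrative note on the underlying invariant (the paper's specific modification is not given in the abstract): one classical illumination-invariant color ratio between two neighbouring RGB pixels, in the spirit of Gevers and Smeulders, is m = (R1·G2)/(R2·G1), whose logarithm behaves like a gradient. The sketch below uses that classical form; the function, the eps guard, and the log representation are illustrative assumptions, not the authors' modified scheme.

```python
import math

def color_ratio(c1, c2, eps=1e-6):
    """Cross-channel ratios between two neighbouring RGB pixels.
    Under a diagonal illumination model each channel is scaled by the
    same factor at both pixels, so the factor cancels in each ratio;
    log-ratios of 0 mean no color transition between the pixels."""
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    m_rg = (r1 * g2 + eps) / (r2 * g1 + eps)   # R-G channel pair
    m_rb = (r1 * b2 + eps) / (r2 * b1 + eps)   # R-B channel pair
    m_gb = (g1 * b2 + eps) / (g2 * b1 + eps)   # G-B channel pair
    return [math.log(m) for m in (m_rg, m_rb, m_gb)]
```

The weaknesses the paper targets are visible in this form: on uniform or unsaturated objects the channel products are small or nearly equal, so the ratios carry little discriminative signal, motivating a modified gradient.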
Zhang, Y, Xu, G & Zhou, X 2005, 'A Latent Usage Approach for Clustering Web Transaction and Building User Profile', Lecture Notes in Computer Science, First International Conference, ADMA 2005, Springer Berlin Heidelberg, Wuhan, China, pp. 31-42.
View/Download from: Publisher's site