Kuang-Chi Chen, Tzu Chi University, Hualien, Taiwan
Title: Information Fusion on Human Disease Network in Taiwan
Network-based approaches are promising for analyzing high-throughput healthcare data. Previous studies of human disease networks have focused mainly on Western medical records, and the association of disease comorbidity is mostly examined by either the phi-correlation or the relative risk (RR). We investigate the network properties of disease comorbidity in Taiwan and aim to develop an information fusion method for testing the association of disease comorbidity comprehensively.
Based on five million inpatient medical records, we construct the human disease network (HDN) and, in the context of the human interactome, explore its network properties in comparison with random null networks. In addition, we propose using association analysis, phi-correlation, RR, or a combination of the latter two to examine disease comorbidity and to generate a disease-risk predictive model that uses past medical histories to determine future disease risks through possible developing pathways in the HDN. Initial results show that the inpatient HDN is highly modular, as its largest component is smaller than those of the random networks. Further, an integrative analysis of our HDN and the OMIM database gives some hints of potentially novel disease-associated genes.
This study provides a comprehensive view of the network characteristics and pathways of the Taiwanese HDN. Our findings not only demonstrate the applicability of information fusion to human disease network analysis, but may also have important implications for the molecular mechanisms underlying disease pathogenesis.
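As a minimal, hypothetical illustration of the two association measures named in the abstract (the counts below are invented, not from the Taiwanese data), the phi-correlation and relative risk of a disease pair can be computed from simple prevalence counts:

```python
import math

def comorbidity_measures(c_ij, p_i, p_j, n):
    """Phi-correlation and relative risk (RR) of a disease pair.

    c_ij: patients diagnosed with both diseases; p_i, p_j: patients
    diagnosed with each disease; n: total patients. These are the
    standard formulations used in human-disease-network studies.
    """
    # RR: observed co-occurrence over the count expected by chance
    rr = c_ij * n / (p_i * p_j)
    # phi: Pearson correlation of the two binary diagnosis indicators
    phi = (c_ij * n - p_i * p_j) / math.sqrt(
        p_i * p_j * (n - p_i) * (n - p_j))
    return phi, rr
```

RR > 1 and phi > 0 both indicate that the two diseases co-occur more often than expected by chance; the fusion question raised in the abstract is how to combine such measures into a single comorbidity test.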
Kuang-Chi Chen received a master's degree in Statistics in 1996 from Ohio State University, USA, and a PhD in Biostatistics (Institute of Epidemiology) in 2003 from National Taiwan University, Taiwan. Her research centers on statistical genetics, statistical analysis, and mathematical modeling in biomedical informatics, with particular emphasis on systems and network biology. Her recent research activities focus on the construction of disease networks and the prediction of disease pathways by analyzing the National Health Insurance Database (NHID) in Taiwan. At Tzu Chi University she is an assistant professor in the Department of Medical Informatics. She is a member of the Taiwan Bioinformatics and Systems Biology Society and of the Chinese Institute of Probability and Statistics.
D. Frank Hsu, Department of Computer and Information Science, Fordham University, New York, NY 10023, USA
Title: Combinatorial fusion on multiple scoring systems
The combined cyber-physical-natural (CPN) world has rapidly become a highly instrumented and densely interconnected ecosystem. Combinatorial fusion on multiple scoring systems (MSS) plays an important role in analyzing big data and understanding complex phenomena.
Combinatorial Fusion Analysis (CFA) is the method and practice of combining multiple scoring systems, where each scoring system A consists of a score function s_A, a rank function r_A derived from s_A, and a rank-score characteristic (RSC) function f_A obtained as the composite of s_A and r_A^{-1}. The cognitive diversity d(A,B) between two scoring systems A and B is defined as a function of f_A and f_B. Combinatorial fusion using the RSC function and cognitive diversity has been shown to be useful in the selection and combination of variables (e.g., features, cues, criteria, similarity measures, indicators, parameters) and of systems (e.g., classifiers, models, forecasting systems, learning systems, decision systems, inference systems, neural nets, belief functions).
In this talk, I will review progress in this emerging field across a variety of application domains including, among others, information retrieval, target tracking, virtual screening, portfolio management, ChIP-seq analytics, joint decision making in visual cognition systems, sensor fusion for stress identification, and combining multiple classifier systems. We will also explore the following general issues: (a) When is the combination of two systems better than each individual system? (b) What is the optimal number of systems to combine? (c) How can multiple scoring systems best be fused?
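A minimal sketch of the CFA ingredients named above, on hypothetical normalized scores: each scoring system's rank function and RSC function are derived from its score function, and cognitive diversity is taken here, purely for illustration, as a normalized distance between RSC functions:

```python
def rank_function(scores):
    """r_A: map each item to its rank, 1 = highest score."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {item: i + 1 for i, item in enumerate(order)}

def rsc_function(scores):
    """RSC function f_A(i) = s_A(r_A^{-1}(i)), as a list indexed by rank."""
    inverse = {r: item for item, r in rank_function(scores).items()}
    return [scores[inverse[i]] for i in range(1, len(scores) + 1)]

def cognitive_diversity(scores_a, scores_b):
    """d(A, B), taken here as a normalized L2 distance between the RSC
    functions of the two scoring systems (one of several possible choices)."""
    fa, fb = rsc_function(scores_a), rsc_function(scores_b)
    return (sum((x - y) ** 2 for x, y in zip(fa, fb)) / len(fa)) ** 0.5

def score_combination(scores_a, scores_b):
    """The simplest fusion: average the two (normalized) score functions."""
    return {k: (scores_a[k] + scores_b[k]) / 2 for k in scores_a}
```

In CFA, high cognitive diversity between two relatively strong systems is the situation in which such a combination tends to outperform either system alone.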
D. Frank Hsu is the Clavius Distinguished Professor of Science, a professor of computer and information science, and Director of the Laboratory for Informatics and Data Mining at Fordham University in New York City. He is Vice Chair of the New York Chapter of the IEEE Computational Intelligence Society. He received an MS degree from the University of Texas and a Ph.D. from the University of Michigan. Professor Hsu is a Fellow of the New York Academy of Sciences, the Institute of Combinatorics and its Applications, the International Society of Intelligent Biological Medicine, and the International Institute of Cognitive Informatics and Cognitive Computing.
Professor Hsu's research interests include combinatorics and graph theory; network interconnection and communications; and computing, informatics, and analytics. In the emerging fields of micro- and macro-informatics, Professor Hsu is interested in (and has been working on) combinatorial fusion analysis, which involves the combination of multiple scoring systems in a variety of application domains such as information retrieval, target tracking and robotics, virtual screening and chemoinformatics, business intelligence and financial informatics, genomics and sequence analytics, joint decision making in visual cognition, social choice functions, and sensor fusion for health improvement. He has served on several editorial boards, including IEEE Transactions on Computers, Networks, International Journal of Foundations of Computer Science, Pattern Recognition Letters, Journal of Advanced Mathematics and Applications, and Journal of Interconnection Networks (as Founding Editor and Editor-in-Chief).
Paul B. Kantor, Rutgers University
Title: Sensor Synergy
Sensors, or tests, are central to our analysis of the state of our surroundings. We examine the possible relations between two sensors that are both (imperfectly) able to discriminate between the same pair of states of the object of interest. We show that, in principle, (1) each such pair has a range of possible performance when used optimally; (2) in general, two sensors, even 'good' ones, cannot be combined to be perfect; and (3) every sensor has, in principle, a 'magic partner' sensor such that the two of them together can, if the relation between them is at its limit, be perfect. We speculate briefly on the role of this result in technological and natural systems, and on its implications for the discovery of mechanisms in the natural sciences. Research supported in part by the NSF and by the AFOSR.
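As a concrete (and hypothetical) baseline for combining two imperfect binary sensors, here is the textbook Bayesian fusion rule under a conditional-independence assumption; the talk's results concern precisely what becomes possible when the relation between the sensors departs from this assumption:

```python
def fuse_two_sensors(prior, sensor_a, sensor_b, reading_a, reading_b):
    """Posterior P(state = 1 | both readings) for two binary sensors,
    assuming they are conditionally independent given the true state.

    Each sensor is described by a pair
    (P(alarm | state=1), P(alarm | state=0)), i.e. hit rate and
    false-alarm rate; readings are True (alarm) or False (no alarm).
    """
    def likelihood(sensor, reading, state):
        p_alarm = sensor[0] if state == 1 else sensor[1]
        return p_alarm if reading else 1 - p_alarm

    num = (prior
           * likelihood(sensor_a, reading_a, 1)
           * likelihood(sensor_b, reading_b, 1))
    den = num + ((1 - prior)
                 * likelihood(sensor_a, reading_a, 0)
                 * likelihood(sensor_b, reading_b, 0))
    return num / den
```

Under independence, two agreeing imperfect sensors sharpen the posterior but never make it exactly 0 or 1, which is consistent with claim (2); claim (3) is about correlated "magic partner" pairs that this sketch does not model.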
Paul Kantor's research centers on the role of information systems for storage and retrieval in a wide range of applications, with particular emphasis on rigorous evaluation of the effectiveness of such systems. At Rutgers he is a member of the Department of Library and Information Science and Research Director of the CCICADA Center. He is also a member of the graduate faculty of the Center for Operations Research (RUTCOR) and of the Department of Computer Science, and a member of the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS).
He is a member of the American Society for Information Science and Technology (ASIST), the American Association for the Advancement of Science (AAAS), the IEEE, the American Physical Society, and the American Statistical Association. His research has been supported by such agencies as the NSF, DARPA, ARDA and the US Department of Education. He was educated in Physics and Mathematics at Columbia and Princeton, has received the ASIST Research award, and is a Fellow of the AAAS. Biographical listings: Who's Who in America; Who's Who in the World.
Sébastien Konieczny, CRIL-CNRS, Université d'Artois, France
Title: Propositional merging in the light of social choice theory properties
Propositional merging is the problem of aggregating several propositional logic bases, representing either the beliefs or the goals of several agents, into a single coherent base representing the opinion of the group. This problem is well studied in AI, with the proposal of practical merging methods as well as theoretical studies of the expected properties. It has been shown, in particular, that this problem is closely related to AGM belief revision theory. It also has close links with social choice theory, and in this talk we will focus on these links between propositional merging and social choice theory. We will review how some classical properties of social choice theory translate into this merging framework, and what their implications are. We will focus on the problems of truth tracking (through Condorcet's jury theorem), of independence of irrelevant alternatives, of strategy-proofness, and of unanimity. We will see that some of these properties can be translated in different forms, and we will study whether they are satisfied by all, some, or no merging methods.
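As a small executable illustration (not taken from the talk itself), one classical distance-based merging operator, the Sigma operator with Hamming distance, can be sketched by representing each base by its set of models:

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two interpretations (0/1 tuples)."""
    return sum(a != b for a, b in zip(u, v))

def merge(bases, n_vars):
    """Distance-based propositional merging (Hamming/sum operator):
    each base is given as a list of its models over n_vars variables;
    the merged base is the set of interpretations minimizing the sum,
    over all bases, of the minimal Hamming distance to that base."""
    worlds = list(product([0, 1], repeat=n_vars))

    def cost(w):
        return sum(min(hamming(w, m) for m in base) for base in bases)

    best = min(cost(w) for w in worlds)
    return [w for w in worlds if cost(w) == best]
```

This operator behaves like a majority-style aggregation: when two agents agree and one dissents, the merged base sides with the majority, which is one of the social-choice-flavored properties the talk examines.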
Sébastien Konieczny is a senior researcher at the Centre National de la Recherche Scientifique (CNRS). Since 2004 he has been affiliated with the Centre de Recherche en Informatique de Lens (CRIL), Université d'Artois. From 2001 to 2004 he was affiliated with the Institut de Recherche en Informatique de Toulouse (IRIT).
His research interests lie in logical approaches to Artificial Intelligence and Knowledge Representation, and more specifically in logical conflict resolution, which embeds problems of reasoning under inconsistency, belief change, belief merging, and negotiation.
Bruce S. Kristal, PhD; Department of Neurosurgery, Brigham and Women's Hospital; Department of Surgery, Harvard Medical School Boston, MA 02115
Title: Long-Range Disease Risk Prediction -- Algorithmic Information Fusion in a Life and Death Environment
Over-nutrition and suboptimal dietary macronutrient choices are arguably the major environmental stressors for individuals living in Western societies. Obesity and poor diet are estimated to cause or contribute to as many as 25% of all cancers, in addition to cardio- and cerebro-vascular disorders (hypertension, heart attack, stroke) and metabolic diseases such as diabetes. One of the clearest examples of this, dietary or caloric restriction (CR), is the most potent and reproducible known means of increasing longevity and reducing morbidity (including cancer, diabetes, etc.) in mammals. As one example, the risk of breast cancer is generally decreased by more than 90% in CR rodents, and the CR-mediated effects are usually dominant to those induced by genetic risk factors, carcinogens, or co-carcinogens. The robust observations of reduced morbidity in CR animals are directly analogous to studies in humans that link obesity with poor health outcomes, including increased risk of neoplastic disease. We therefore proposed to test the general concept that biomarkers of diet in rats will predict risk of future disease in humans. Preliminary work with plasma small molecules (metabolomics) shows promise, but long-term clinical utility would be enhanced by consideration of other factors (blood proteins, genetics, demographic considerations, etc.). Among other complications, these data are collected from different sources (e.g., humans and animals), have different data structures (e.g., qualitative, discrete, continuous, bimodal), and can be static or temporally shifting (e.g., genetic vs. environmental). Thus, this is a classic -- and nasty -- fusion problem that will require work at all levels, starting by simply defining the ground rules for success and failure and by trying to consider the potential problems in advance.
Major issues involve deciding if, when, and at what level (e.g., data, latent variable, decision model) to conduct algorithmic fusion, as well as the criteria by which we can evaluate success in the absence of infinitely large training and test sets. We will present our existing modeling approaches, the models, and their ability to distinguish sera based on caloric intake, as well as data from the initial application of these markers to address risk of breast cancer in case-control studies nested within the Nurses' Health Study. We will then address some of the checks and cross-checks used to evaluate these data and lay out a path towards fusion-based approaches.
Bruce Kristal received a BS in Life Sciences from MIT in 1986 and his PhD (Virology) from Harvard University in 1991. He rose from post-doc to Research Assistant Professor at the University of Texas Health Science Center at San Antonio (1991-1996), then moved to the Burke Medical Research Institute (1996) and the Departments of Biochemistry (1997) and Neuroscience (1998) at Weill Medical College of Cornell University, becoming an Associate Professor in Neuroscience in 2004. He joined Brigham and Women's Hospital's Department of Neurosurgery in April 2007 and the Department of Surgery at Harvard Medical School in 2008. Dr. Kristal was the founding secretary (2004-2008) and a member of the Board of Directors of the Metabolomics Society (2004-2011).
His laboratory currently has active projects in three areas: drug development aimed at reducing damage from strokes and head injury, developing blood tests that predict future disease risk for preventable disorders (e.g., diabetes, breast and colon cancer, heart disease), and the development of computer-based approaches to better deliver personalized medical care.
Christophe Labreuche, Thales Research & Technology, Decision Technologies & Mathematics Lab (LMTD), Campus Polytechnique
Title: Robust recommendations and their explanation in multi-criteria decision aiding with interacting criteria
Aggregation functions play an important role in information fusion. In applications such as the production of indicators for situation assessment in homeland security (e.g., criminality), the input information (attributes) that is needed is heterogeneous. One can find statistics on criminality and opinions of citizens, for instance. Hence one needs to normalize the attributes before applying an aggregation function. This leads to a multi-criteria decision aiding (MCDA) setting in which normalization is obtained through utility functions. Unlike in MCDA, where most of the aggregation functions used in practice are quite simple, information fusion calls for more complex aggregation functions. There are indeed many situations where subtle decision strategies must be represented. Examples of such decision strategies are the presence of a veto criterion, and complementarity or redundancy among criteria. In all these situations, we say that the criteria interact.
There are two leading models in MCDA for representing interacting criteria: the Choquet integral, which generalizes the weighted sum, and the GAI (generalized additive independence) model, which generalizes the additive utility model.
In order for the operator of the system to trust the information fusion that is produced, the underlying MCDA model shall provide (1) a relevant confidence measure for its recommendation and (2) an explanation of its recommendation. In particular, the MCDA model shall be able to distinguish between trivial situations, where the decision is clear-cut, and complex situations, where the decision is tight. The MCDA model (either a Choquet integral or a GAI model) results from an elicitation approach and tries to mimic the decision strategies of a decision maker. Because of the decision maker's hesitations during the interview, the MCDA model is not uniquely specified. Making robust recommendations means that one shall take into account all values of the parameters of the MCDA model that are compatible with the preferential information. By contrast, in the traditional approach, one selects only one value of the parameter vector, and the recommendation is based only on this value, which introduces some arbitrariness.
In the presentation we will show how robust recommendations are obtained for both the Choquet integral and the GAI model. Moreover, we will also present the latest work on how to explain these recommendations.
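To make the first of these two models concrete, here is a minimal sketch of the Choquet integral with respect to a capacity (the criteria and capacity values below are made up); when the capacity is additive it collapses to the ordinary weighted sum:

```python
def choquet_integral(x, mu):
    """Choquet integral of normalized utilities x (dict: criterion -> value)
    with respect to a capacity mu (dict: frozenset of criteria -> value,
    monotone, with mu(empty set) = 0 and mu(all criteria) = 1)."""
    order = sorted(x, key=x.get)   # criteria by ascending utility
    remaining = set(x)             # criteria with utility >= current level
    total, prev = 0.0, 0.0
    for c in order:
        # each utility increment is weighted by the capacity of the
        # coalition of criteria still at or above that level
        total += (x[c] - prev) * mu[frozenset(remaining)]
        prev = x[c]
        remaining.remove(c)
    return total
```

A non-additive capacity (e.g., mu({a,b}) larger or smaller than mu({a}) + mu({b})) is exactly how complementarity, redundancy, or veto effects among interacting criteria are encoded.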
Christophe Labreuche received the graduate engineer diploma from Ecole Centrale de Lyon in 1993, and a master's degree in numerical analysis, also in 1993. He received a PhD degree in applied mathematics in 1997 from Université de Paris IX Dauphine. His first publications were in the area of partial differential equations, more specifically scattering and inverse scattering. He spent the 1995/1996 academic year at the University of Delaware (USA). From 1997 to 1998 he worked at the research lab of Thales on numerical analysis. He then joined the computer science department to work on multiple criteria decision aiding (MCDA), fuzzy measure theory, fuzzy integrals and their application in MCDA, as well as on group decision making, negotiation, argumentation, and cooperative and non-cooperative game theory, and he continues to conduct research in these fields. His interests also include the representation of uncertainty and vagueness, and the modeling of expert knowledge. He applies these techniques in the domains of complex system engineering (design), homeland security, embedded systems (e.g., radars), satellites, and metro supervision. He participates in the development of decision-aiding software.
Jérôme Lang, Centre National de la Recherche Scientifique (CNRS)
Title: Incomplete knowledge and communication issues in voting
Computational social choice is a rapidly emerging research topic located at the crossing point between social choice and computer science. One of the key problems in computational social choice is the determination of the winning alternative(s) when knowledge about the voters' preferences is incomplete. This incompleteness may have several possible causes: voters who forget to send their vote, as is typical in Doodle polls; new candidates appearing in the course of the process, on which the voters have not yet expressed any opinion; voters refusing to compare two alternatives because their preference depends on an exogenous event; etc. In these situations, one may try to identify the alternatives that can still win once the voters' preferences are eventually fully known, and one may also try to build interactive protocols that ask the agents for enough of the missing information to be able to compute the winning alternative(s) while minimizing the amount of communication. This talk is mainly a survey, but it also reports several results obtained with colleagues.
Jérôme Lang is a senior researcher at the Centre National de la Recherche Scientifique (CNRS). Since 2008 he has been affiliated with the Laboratoire d'Analyse et Modélisation de Systèmes pour l'Aide à la Décision (LAMSADE), Université Paris-Dauphine. From 1991 to 2008 he was affiliated with the Institut de Recherche en Informatique de Toulouse. His research interests span a large part of Artificial Intelligence, especially Knowledge Representation and Multi-Agent Systems. His recent research activities focus on preference representation and computational social choice.
Yanjun Li, Fordham University, New York
Title: Performing Information Fusion Analysis in Information Retrieval and Text Mining
Combinatorial Fusion Analysis (CFA) has been widely applied in many research areas. In this presentation, we report on adopting CFA to improve the performance of information retrieval in the biomedical domain and of text categorization. First, we investigated the combination of multiple information retrieval models and explored their interactions in biomedical information retrieval. CFA was applied to perform extensive combinatorial experiments with seven popular generic information retrieval models on a biomedical literature collection. The results demonstrate that a combination of multiple information retrieval models can outperform a single model only if each of the individual models has different scoring and ranking behavior and relatively high performance. Second, to improve the performance of text categorization, we combined multiple feature selection methods using CFA. We have shown that a combination of multiple feature selection methods can outperform a single method only if each individual feature selection method has unique scoring behavior and relatively high performance. Moreover, the rank-score function and rank-score graph are shown to be useful for selecting a combination of feature selection methods.
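A minimal sketch of the rank-combination step underlying such experiments (the document names and scores below are hypothetical; a real system would combine seven retrieval models, not two):

```python
def to_ranks(scores):
    """Convert a model's document scores to ranks (1 = best)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {doc: i + 1 for i, doc in enumerate(order)}

def rank_combination(score_lists):
    """Average-rank fusion of several retrieval models: convert each
    model's scores to ranks, average the ranks per document, and
    re-rank (smaller combined rank = better)."""
    rank_lists = [to_ranks(s) for s in score_lists]
    docs = list(score_lists[0])
    combined = {d: sum(r[d] for r in rank_lists) / len(rank_lists)
                for d in docs}
    return sorted(docs, key=lambda d: combined[d])
```

CFA pairs this with the analogous score combination and uses the rank-score function of each model to decide which pairs of models are diverse enough to be worth fusing.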
Dr. Yanjun Li received the B.S. degree in Economics from the University of International Business and Economics, Beijing, P.R. China, in 1993, the B.S. degree in Computer Science from Franklin University, Columbus, Ohio, in 2001, and the M.S. degree in Computer Science and the Ph.D. degree in Computer Science and Engineering from Wright State University, Dayton, Ohio, in 2003 and 2007, respectively. She is currently an assistant professor in the Department of Computer and Information Sciences at Fordham University, New York, New York. Her research interests include data mining and knowledge discovery, text mining, information retrieval, ontologies, bioinformatics analysis, and parallel and distributed computing. Dr. Li has published numerous journal and conference papers in the areas of text mining, data mining, artificial intelligence, bioinformatics, and parallel computing. She is considered an expert in the areas of data mining and information retrieval.
Pierre Marquis, CRIL-CNRS and Université d'Artois
Title: On Argument Aggregation
Argument aggregation is the problem of merging argumentation frameworks. After recalling Dung's setting for abstract argumentation and introducing the argument aggregation problem, I shall motivate and define a number of postulates for argument aggregation. I shall present some impossibility theorems, which echo similar theorems for preference or judgment aggregation. I shall also identify the postulates satisfied by some simple aggregation procedures. (Parts of this work were carried out in collaboration with P. E. Dunne and M. Wooldridge.)
Pierre Marquis is a professor of computer science at Université d'Artois, Lens. His research topics center on knowledge representation and automated reasoning, with a focus on reasoning under inconsistency (in various forms, including belief merging and argumentation) and on knowledge compilation. He is the author or co-author of more than 120 publications in international journals or conferences. He is an associate editor of the Journal of Artificial Intelligence Research and of AI Communications.
Piotr Mirowski, Bell Labs-Lucent Technologies, New Jersey, USA
Title: Visual Odometry and Localization on Low-cost WiFi Mapping Robot
Indoor localization has become a key application of telecommunications. Accurate localization can enable location-based services as varied as turn-by-turn directions in a large public building, guided first-responder intervention, or personalized advertising; it can also help in optimizing the coverage and energy efficiency of the network. Radio-frequency fingerprinting is one interesting solution for such indoor localization. It exploits existing telecommunication infrastructure, such as WiFi routers, along with a database of signal strengths at different locations, but it requires manually collecting signal measurements together with precise position information.
As a way to overcome this tedious fingerprinting and repeated calibration procedure (i.e., creating up-to-date signal maps along with precise position information), we built a low-cost, autonomous, self-localizing robotic platform capable of real-time obstacle-avoidance-based navigation and of Simultaneous Localization and Mapping (SLAM). To localize itself, our robot relies on a combination of a wheel encoder, an inertial measurement unit, and a Kinect color and depth camera that is limited by a narrow field of view and short range. This robotic test bed enabled us to experiment with various sensor fusion techniques. First, we designed several low-latency techniques for visual-odometry-based estimation of bearing angles that can cope with rotational drift. We then integrated wheel and visual odometry into a particle-filtering-based localization (on an existing blueprint) or into a SLAM algorithm, followed by map registration with absolute position landmarks (using self-describing QR codes) and RGB-D image-based loop closure. We will compare the accuracy of several localization techniques and show how our robot can localize itself while collecting and building WiFi maps in medium-sized office spaces.
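The odometry-plus-landmark fusion loop described above can be caricatured in one dimension; the sketch below is a generic particle-filter step (not the authors' implementation, and all numbers are invented) that propagates particles with an odometry increment and reweights them against a range observation to a known landmark:

```python
import math
import random

def particle_filter_step(particles, odometry, landmark_range, landmark_pos,
                         motion_noise=0.2, range_noise=0.5):
    """One predict/update/resample cycle of a 1-D particle filter."""
    # predict: apply the odometry increment, jittered by motion noise
    moved = [p + odometry + random.gauss(0, motion_noise) for p in particles]

    # update: Gaussian likelihood of the observed range to the landmark
    def likelihood(p):
        err = abs(landmark_pos - p) - landmark_range
        return math.exp(-err * err / (2 * range_noise ** 2))

    weights = [likelihood(p) for p in moved]
    total = sum(weights)
    # resample with replacement, in proportion to the weights
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))
```

In the robot described above, the prediction would come from fused wheel and visual odometry in 2-D or 3-D, and the update from QR-code landmarks or WiFi signal strengths rather than a single range measurement.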
Piotr Mirowski is a research scientist in machine learning. He has been a Member of Technical Staff at Bell Laboratories since January 2011, after obtaining his PhD in computer science at the Courant Institute of Mathematical Sciences at New York University. His machine learning thesis, "Time Series Modeling with Hidden Variables and Gradient-Based Methods", covered applications of time series modeling such as learning gene regulation networks and statistical language modeling. His advisor was Prof. Yann LeCun. Prior to his Ph.D. studies, Piotr graduated in 2002 with a Master's degree in computer science from Ecole Nationale Supérieure ENSEEIHT in Toulouse, France, and worked as a research engineer in geology at Schlumberger Research (2002-2005). During his Ph.D. studies, Piotr also interned at the NYU Medical Center (investigating epileptic seizure prediction from EEG), at Google, at the Quantitative Analytics department of Standard & Poor's, and at AT&T Labs Research. Piotr's current research focuses on machine learning methods for computer vision and simultaneous localization and mapping, relying on a rover equipped with autonomous robotic navigation, on radio fingerprinting for indoor positioning, and on electric load forecasting for smart grid optimization. Piotr holds 2 patents and 3 patent publications, and has authored several articles on applications of machine learning to geology, epileptic seizure prediction, statistical language modeling, robotics, and geolocalization.
Stefano Moretti and Alexis Tsoukias, CNRS UMR7243 - Université Paris Dauphine, France
Title: Ranking sets of objects using the Shapley value and other regular semivalues
Many problems in decision making involve the comparison of sets of objects, where the objects may have very different meanings (e.g., alternatives, opportunities, candidates, etc.). On the other hand, in many practical situations, only information about "preferences" among single objects is available. Consequently, a central question is: given a primitive ranking over the single elements of a set X, how does one derive a "compatible" ranking over the set of all subsets of X? In information fusion, the problem appears as soon as the available information concerns single elements (the reliability of single components) but the demand concerns compound objects (the reliability of a complex object). The problem of extending an order on a set to its power set has been studied in the literature over the last thirty years, with the objective of analysing the axiomatic structure of families of rankings over subsets. Most of these approaches make use of axioms aimed at preventing any kind of interaction among the single objects. In this work, we apply the theory of coalitional games to analyse the properties of rankings over subsets of a set. In particular, we focus on those orderings of the subsets for which the ranking of singleton subsets is aligned with the ranking induced by the Shapley value and other regular semivalues of associated coalitional games. Some properties of these orderings are discussed, with the objective of justifying and contextualising their application to the problem of ranking sets of possibly interacting objects.
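As a small executable illustration of the alignment property mentioned above (the game below is the classical glove game, not an example from the paper), the Shapley value of each single object can be computed by brute force and used to rank the singletons:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    v(S + p) - v(S) over all orderings of the players (fine for small games)."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            new = coalition | {p}
            phi[p] += v(new) - v(coalition)
            coalition = new
    return {p: phi[p] / len(perms) for p in players}

def rank_by_shapley(players, v):
    """Order the single objects by decreasing Shapley value."""
    phi = shapley_values(players, v)
    return sorted(players, key=lambda p: -phi[p])
```

Because the characteristic function can value a coalition above or below the sum of its parts, rankings aligned with the Shapley value naturally accommodate interacting objects, which the classical power-set-extension axioms rule out.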
Moretti S., Tsoukias A. Ranking Sets of Possibly Interacting Objects Using Shapley Extensions. In Proceedings of the 13th International Conference on Principles of Knowledge Representation and Reasoning (KR 2012), 11 pages, 2012.
Biography of the authors:
Stefano Moretti graduated in Environmental Science in 1999 from the University of Genoa, Italy, and received a Ph.D. in Applied Mathematics from the same university in 2006. In 2008 he was also awarded a Ph.D. in Game Theory by Tilburg University, The Netherlands. He is a CNRS researcher at the Laboratoire d'Analyse et Modélisation de Systèmes pour l'Aide à la Décision (LAMSADE), Université Paris-Dauphine. His main research interests are cooperative game theory, combinatorial optimization problems, and the application of game-theoretic models to computational biology.
Alexis Tsoukias (Greece, 1959) is a CNRS research director at LAMSADE, Université Paris Dauphine. He holds a PhD (1989) in Computer Science and Systems Engineering from Politecnico di Torino (Italy), where he also completed his engineering studies. His research interests include multiple criteria decision making, non-conventional preference modelling, applied non-classical logics, ordinal mathematical programming, artificial intelligence, and decision theory. He is the co-author of two books and more than 70 journal articles and book contributions. He has been vice-president of ROADEF (the French OR society) as well as President of EURO (the European association of OR societies). Besides teaching several post-graduate classes, he occasionally practices decision support. He has been invited to several universities worldwide. He was the coordinator of the COST Action IC0602 "Algorithmic Decision Theory". He is presently the director of LAMSADE.
Ganapati P. Patil, Center for Statistical Ecology and Environmental Statistics, Department of Statistics, The Pennsylvania State University, University Park, PA, 16802, USA
Title: On Comparative Knowledge Discovery with Partial Order and Composite Indicators in Multi-Indicator Information Fusion Systems
Comparative knowledge discovery and data mining (CKDDM) is an emerging area of study in this age of indicators, driven by twenty-first-century computerized information technology.
Using partial order and the cumulative rank frequency (CRF) operator, it has become possible to accomplish ranking in a multi-indicator system in the presence of equal or unequal weights signifying the relative importance, or relative proximity, of individual indicators to the central abstract conceptual theme of the ranking and comparison.
This capability of the CRF operator provides, through weighted rank frequency distributions, a way to reconcile the analytical and advocacy issues arising in multi-criterion decision making for the purposes of ranking and comparison in multi-indicator systems, with stakeholders involved and with data matrices as empirical evidence in hand.
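A brute-force sketch of the CRF idea on a toy three-object, two-indicator system (the data are hypothetical, and real systems need smarter enumeration than checking all permutations):

```python
from itertools import permutations

def cumulative_rank_frequency(vectors):
    """Cumulative rank frequencies over all linear extensions of the
    product partial order: crf[o][k] = number of linear extensions in
    which object o holds rank <= k + 1 (rank 1 = best)."""
    names = list(vectors)
    n = len(names)

    def strictly_above(a, b):
        # a dominates b on every indicator and differs on at least one
        va, vb = vectors[a], vectors[b]
        return all(x >= y for x, y in zip(va, vb)) and va != vb

    # a linear extension never places a dominated object before its dominator
    exts = [p for p in permutations(names)
            if not any(strictly_above(p[j], p[i])
                       for i in range(n) for j in range(i + 1, n))]
    crf = {o: [0] * n for o in names}
    for ext in exts:
        for pos, o in enumerate(ext):
            for k in range(pos, n):
                crf[o][k] += 1
    return crf
```

Comparing objects by their cumulative rank frequency vectors (larger counts at every rank level = better), and iterating the operator when ties remain, is what turns the partial order into the linear ranking the abstract describes.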
The presentation is expected to help develop the concepts, methods, tools, and visuals involved, with some illustrative examples.
Background motivation and information:
R. Bruggemann and G. P. Patil (2011). Ranking and Prioritization for Multi-indicator Systems: Introduction to Partial Order Applications. Springer, New York. 338 pp.
W. L. Myers and G. P. Patil (2012). Statistical Geoinformatics for Human Environment Interface. CRC Press/Chapman & Hall, New York. 305 pp.
W. L. Myers and G. P. Patil (2012). Multivariate Methods of Representing Relations in R for Prioritization Purposes. Springer, New York. 297 pp.
G. P. Patil and C. Taillie (2004). Multiple indicators, partially ordered sets, and linear extensions: Multi-criterion ranking and prioritization. Environmental and Ecological Statistics, 11(2), 199-228.
G. P. Patil (2010). Ranking and Prioritization with Multiple Indicators in Digital Governance and Surveillance Hotspot GeoInformatics - A Preface. Environmental and Ecological Statistics, 17, 377-381.
G. P. Patil (2011). UNEP Workshop on Sustainability Indicators, Chair, Keynote Address, New Delhi, India.
G. P. Patil (2012). UNEP Workshop on Green Economy Indicators, Plenary Lecture, Beijing, China.
G. P. Patil and S. W. Joshi (2012). Partial order ranking of objects with weights for indicators and its representability and reconciliation by a composite indicator.
Ganapati P. Patil is Director of the Center for Statistical Ecology and Environmental Statistics, and Distinguished Professor Emeritus of Mathematical and Environmental Statistics at the Pennsylvania State University. He is a former Visiting Professor of Biostatistics at the Harvard School of Public Health, Harvard University, in the Department of Biostatistics and Dana Farber Cancer Institute. He holds a Ph.D. in Mathematics, a D.Sc. in Statistics, an Honorary Degree in Biological Sciences, and another in Laws and Letters.
He is a fellow of the American Statistical Association, the American Association for the Advancement of Science, the Institute of Mathematical Statistics, the International Statistical Institute, the Royal Statistical Society, the International Association for Ecology, the International Indian Statistical Association, the Indian National Institute of Ecology, and the Indian Society for Medical Statistics.
Dr. Patil has served on panels for numerous international organizations, including the United Nations Environment Programme, U.S. National Science Foundation, U.S. Environmental Protection Agency, U.S. Forest Service, and U.S. National Marine Fisheries Service. He has been a founder of programs and initiatives for Statistical Distributions in Scientific Work, Statistical Ecology, Environmental Statistics, Risk Analysis, EcoHealth, Syndromic Surveillance, Digital Governance and Statistical Geoinformatics, and Comparative Knowledge Discovery with Multi-indicator Systems.
He has been founding editor of the Springer international journal and monograph series Environmental and Ecological Statistics. He has authored and coauthored more than 300 research papers, and authored, coauthored, edited, and coedited more than 30 cross-disciplinary volumes.
Recently, he has been Principal Investigator of a large seven-year NSF research project on Digital Governance and Surveillance GeoInformatics for Hot Spots Detection and Their Ranking and Prioritization for Monitoring, Etiology, Early Warning, and Sustainable Development with Live Case Studies.
Gabriella Pigozzi, Universite Paris-Dauphine, LAMSADE
Title: Judgment Aggregation Rules Based on Minimization
Many voting rules are based on some minimization principle. Likewise, in the field of logic-based knowledge representation and reasoning, many belief change or inconsistency handling operators also make use of minimization. Judgment aggregation is a recent field which studies how individual judgments on logically related propositions can be aggregated into a consistent collective outcome. Surprisingly, minimization has not played a major role in the field of judgment aggregation, in spite of its proximity to voting theory and logic-based knowledge representation and reasoning. Here we make a step in this direction and study several judgment aggregation rules. We study the inclusion relationships between these rules and address some of their social choice theoretic properties. (Joint work with J. Lang, M. Slavkovik, L. van der Torre)
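One well-known minimization-based rule picks the consistent collective judgment set(s) closest in total Hamming distance to the individual judgments. The sketch below applies it to the classic agenda {p, q, p AND q}; the agenda and profile are invented for illustration and the talk's framework is more general.

```python
from itertools import product

def consistent_sets():
    """All consistent judgments (p, q, p_and_q), generated from valuations."""
    return [(p, q, p & q) for p, q in product([0, 1], repeat=2)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def median_rule(profile):
    """Return the consistent judgment set(s) minimising the total Hamming
    distance to the individual judgment sets (ties are all returned)."""
    totals = {c: sum(hamming(c, j) for j in profile) for c in consistent_sets()}
    best = min(totals.values())
    return [c for c, t in totals.items() if t == best]

# Doctrinal-paradox profile: propositionwise majority yields (1, 1, 0),
# which is logically inconsistent for the agenda {p, q, p AND q}.
profile = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]
print(median_rule(profile))
```

On this profile the rule is irresolute: three consistent sets tie at total distance 4, while the inconsistent majority outcome is excluded by construction.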
Gabriella Pigozzi is Associate Professor in Computer Science at Universite Paris-Dauphine, and a member of the LAMSADE Lab. After a Ph.D. in Philosophy on belief revision, she held postdoc positions in computer science departments.
Her main research interests lie in multi-agent systems and artificial intelligence. She investigates aspects of individual and collective decision and develops formal approaches for their representation. In particular, she is interested in the generalization of existing frameworks for individual agent reasoning to their collective counterpart, in the study and representation of the interactions between agents in a group, and in the definition of feasible aggregation procedures. Topics she currently contributes to include: judgment aggregation, computational social choice, argumentation theory, and normative multi-agent systems.
Fred Roberts, DIMACS, Rutgers University
Title: Scoring Rules for Fisheries Rules Violations
The United States sets regulations for allowable fishing with the goal of maintaining healthy fish populations. This talk will describe the use of scoring rules to determine whether a given fishing vessel might be in violation of the regulations, and thus to decide whether or not to board such a vessel to perform an inspection.
Christina Schweikert, St. John's University
Title: Portfolio Management with Combinatorial Fusion
We utilize combinatorial fusion to demonstrate how a combination method can enhance the performance of individual methods for management of a stock portfolio. Combinatorial Fusion Analysis (CFA) is applied to the selection of stocks for a portfolio using multiple financial indicators. Multiple scoring systems are constructed by analyzing the trends of the following financial indicators for a set of real estate companies: net profit ratio, inventory turnover ratio, receivables turnover ratio, assets turnover ratio, self-owned capital ratio, and liabilities ratio. Results demonstrate improved portfolio performance for multiple scoring systems based on a combination of financial indicators. We also analyze the system combinations in terms of the diversity and relative performance of their component systems by computing the RSC diversity and the performance ratio for each combined system. The model provides insights into how portfolio management can be improved using a combination of multiple scoring systems.
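A minimal sketch of the score-combination and diversity ideas in combinatorial fusion, assuming two hypothetical scoring systems over four invented stocks; the normalisation, average score combination, and RSC-based diversity measure below are illustrative stand-ins for the talk's full CFA configuration.

```python
def normalise(scores):
    """Min-max normalise a {item: score} map to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def ranks(scores):
    """Rank items from best (1) to worst by score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {item: r for r, item in enumerate(ordered, start=1)}

def score_combination(a, b):
    """Average score combination of two scoring systems."""
    return {k: (a[k] + b[k]) / 2 for k in a}

def rsc_diversity(a, b):
    """RMS distance between the two rank-score characteristic functions:
    f(r) = normalised score of the item each system places at rank r."""
    fa = {r: a[item] for item, r in ranks(a).items()}
    fb = {r: b[item] for item, r in ranks(b).items()}
    n = len(a)
    return (sum((fa[r] - fb[r]) ** 2 for r in range(1, n + 1)) / n) ** 0.5

# Two scoring systems over four hypothetical stocks.
A = normalise({"s1": 0.90, "s2": 0.70, "s3": 0.40, "s4": 0.10})
B = normalise({"s1": 0.20, "s2": 0.80, "s3": 0.60, "s4": 0.30})
fused = score_combination(A, B)
print(sorted(fused, key=fused.get, reverse=True), rsc_diversity(A, B))
```

The fused ranking promotes the stock both systems agree is strong, and the RSC diversity quantifies how differently the two component systems behave, which CFA uses to judge when combination is likely to help.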
Christina Schweikert is a Clare Boothe Luce Assistant Professor of Computer Science at St. John's University in the Division of Computer Science, Mathematics, and Science. Dr. Schweikert completed her undergraduate degrees in Computer Science and General Science at Fordham University, an M.S. in Computer Science at the New York Institute of Technology, and a Ph.D. in Computer Science from the Graduate Center of the City University of New York. Her research interests include programming languages, bioinformatics, and medical informatics. Dr. Schweikert has also taught at Fordham University, the State University of New York, and the City University of New York. She serves as the Assistant Vice President and Webmaster for the Global Business and Technology Association, as well as track chair for the association's annual international conference. Her professional service activities also include co-guest editing a Special Issue on "Algorithms and Molecular Science," which appeared in Algorithms and the International Journal of Molecular Sciences. Dr. Schweikert is a member of the Association for Computing Machinery (ACM), the IEEE Computer Society, and the IEEE Computational Intelligence Society.
Peter Willett, University of Sheffield, Great Britain
Title: Fusing database rankings in similarity-based virtual screening
Virtual screening plays an important role in the discovery of new drugs and agrochemicals. It involves ranking a database of previously untested chemical molecules in order of decreasing probability of biological activity, e.g., the ability to lower a person's cholesterol level. A common approach to virtual screening involves comparing each database molecule with a reference structure, i.e. a molecule that is known to exhibit the activity of interest, using a measure of inter-molecular structural similarity. There are many ways in which similarity can be computed and hence many different ways in which a database ranking can be produced.
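One standard choice of similarity measure is the Tanimoto coefficient on binary structural fingerprints, sketched below. The five-bit fingerprints are invented toys; real systems use fingerprints with hundreds or thousands of bits.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two same-length bit vectors:
    bits set in both, divided by bits set in either."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def rank_database(reference, database):
    """Rank database molecules by decreasing similarity to the reference."""
    scored = {name: tanimoto(reference, fp) for name, fp in database.items()}
    return sorted(scored, key=scored.get, reverse=True)

reference = [1, 1, 0, 1, 0]                 # the known-active reference structure
database = {
    "mol_a": [1, 1, 0, 1, 0],               # identical fingerprint
    "mol_b": [1, 0, 0, 1, 0],               # shares two of three set bits
    "mol_c": [0, 0, 1, 0, 1],               # shares no set bits
}
print(rank_database(reference, database))
```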
This talk will review work in Sheffield and elsewhere that has used simple arithmetic fusion rules to combine multiple similarity rankings. The resulting fused rankings are generally found to contain larger numbers of high-ranked molecules that prove to be active than do rankings resulting from the use of individual similarity measures. Two main approaches have been studied: combinations based on the use of multiple similarity measures with a single reference structure; and combinations based on the use of multiple reference structures with a single similarity measure. The latter approach in particular has been widely adopted in operational systems for similarity-based virtual screening. The final part of the talk discusses a recent comparison in Sheffield that has demonstrated the general effectiveness of one particular fusion rule, reciprocal rank fusion, that had been used previously to combine rankings in text search engines.
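Reciprocal rank fusion itself is simple to state: each molecule's fused score is the sum of 1/(k + rank) over the input rankings. The sketch below uses k = 60, the constant commonly assumed in the text-retrieval literature; the three toy rankings are invented.

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion. rankings: list of lists, each ordered
    best-first. Returns the fused ordering, best-first."""
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three similarity measures produce three different database rankings.
r1 = ["m1", "m2", "m3", "m4"]
r2 = ["m2", "m1", "m4", "m3"]
r3 = ["m2", "m3", "m1", "m4"]
print(rrf([r1, r2, r3]))
```

Because the contribution of each ranking decays with rank position, RRF rewards molecules that appear near the top of several rankings without being dominated by any single similarity measure.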
Peter Willett obtained an Honours degree in Chemistry from Exeter College, Oxford in 1975 and then went to the Department of Information Studies, University of Sheffield where he obtained an MSc in Information Studies. Following doctoral and post-doctoral research on computer techniques for the processing of databases of chemical reactions, he joined the staff of the University of Sheffield as a Lecturer in Information Science in 1979. He was awarded a Personal Chair in 1991 and a DSc in 1997.
Professor Willett was the recipient of the 1993 Skolnik Award of the American Chemical Society, of the 1997 Distinguished Lecturer Award of the New Jersey Chapter of the American Society for Information Science, of the 2001 Kent Award of the Institute of Information Scientists, of the 2002 Lynch Award of the Chemical Structure Association Trust, of the 2005 Award for Computers in Chemical and Pharmaceutical Research of the American Chemical Society, and of the 2010 Patterson-Crane Award of the American Chemical Society. He is included in Who's Who, is a member of the editorial boards of four international journals, and has been involved in the organisation of many national and international conferences in various aspects of information retrieval.
Professor Willett has over 500 publications describing his research work over the years on the processing of textual, chemical and biological information. His current interests include database applications of cluster analysis and graph theory, ligand-based virtual screening, and applications of bibliometrics.
Xin Yao, University of Birmingham, Great Britain (http://www.cs.bham.ac.uk/~xin)
Title: Evolving, Training and Designing Neural Network Ensembles
Previous work on evolving neural networks has focused on single neural networks. However, monolithic neural networks have become too complex to train and evolve for large and complex problems. It is often better to design a collection of simpler neural networks that work collectively and cooperatively to solve a large and complex problem. The key issue here is how to design such a collection, i.e., an ensemble, automatically so that it has the best generalisation ability. This talk introduces some recent work on evolving neural network ensembles, including negative correlation, constructive negative correlation and multi-objective approaches to ensemble learning.
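The negative correlation idea can be sketched with an ensemble of linear models: each member descends its own error plus a penalty that decorrelates it from the ensemble mean. The data, learning rate, and penalty strength below are invented for illustration; actual negative correlation learning trains neural networks with the same per-model gradient.

```python
import random

random.seed(0)
xs = [i / 20 for i in range(21)]            # inputs in [0, 1]
ys = [2 * x + 1 for x in xs]                # noiseless target y = 2x + 1

M, lam, lr, epochs = 3, 0.5, 0.1, 2000      # ensemble size, penalty, step, passes
models = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(M)]

def predict(w, x):
    return w[0] * x + w[1]                  # linear model f(x) = w*x + b

for _ in range(epochs):
    for x, y in zip(xs, ys):
        outs = [predict(w, x) for w in models]
        mean = sum(outs) / M
        for w, f in zip(models, outs):
            # NCL gradient for member i: (f_i - y) - lambda * (f_i - f_bar);
            # the second term pushes members apart from the ensemble mean.
            g = (f - y) - lam * (f - mean)
            w[0] -= lr * g * x
            w[1] -= lr * g

# Mean squared error of the ensemble average on the training points.
preds = [sum(predict(w, x) for w in models) / M for x in xs]
mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
print(mse)
```

Averaging the per-member gradients shows why this is stable: the penalty terms cancel, so the ensemble mean still descends the plain squared error while the individual members are encouraged to specialise.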
Xin Yao is a Chair (Professor) of Computer Science and the Director of CERCIA (Centre of Excellence for Research in Computational Intelligence and Applications) at the University of Birmingham, UK. He is an IEEE Fellow and a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). He won the 2001 IEEE Donald G. Fink Prize Paper Award, 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award, 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist), 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards at conferences. He won the prestigious Royal Society Wolfson Research Merit Award in 2012 and was selected to receive the 2013 IEEE CIS Evolutionary Computation Pioneer Award. He was the Editor-in-Chief (2003-08) of IEEE Transactions on Evolutionary Computation and is an Associate Editor or editorial board member of more than ten other journals. He has been invited to give 65 keynote/plenary speeches at international conferences. His major research interests include evolutionary computation and neural network ensembles. He has more than 400 refereed publications. According to Google Scholar, his H-index is 58.