
Seminar Program: 2008



GAGA AND MIGAGA: HIERARCHICAL MODELS FOR MICROARRAY ANALYSIS

GUEST: David Rossell. Department of Biostatistics and Bioinformatics, Institute for Research in Biomedicine, Barcelona.

LANGUAGE: Catalan

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, February 22, 2008, 12:30.

ABSTRACT: In recent years experimental biology has developed technologies capable of generating massive amounts of data. Microarrays, which measure the messenger RNA expression levels of tens of thousands of genes simultaneously, are one of the cases that has received the most attention. From a statistical point of view, one of the greatest challenges is that the number of variables (on the order of tens of thousands) far exceeds the number of observations (often fewer than ten). We briefly introduce some of the usual questions in microarray analysis: large-scale hypothesis testing, sample classification, and gene clustering. We review research on the first question, including new definitions of type I error, adjustment methods for multiple comparisons, and mixture and hierarchical models that allow information to be shared across genes. We then briefly present two hierarchical Bayesian models developed by the author: the GaGa and MiGaGa models. These make it possible to test more than two hypotheses for each gene, classify samples, and perform sample-size calculations. After analyzing real and simulated data sets, we conclude that both models have good properties and are a powerful tool for the analysis of massive data.
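The large-scale testing question touched on above is usually handled with multiple-comparison adjustments. As a point of reference (this is the standard Benjamini-Hochberg step-up procedure, not the GaGa model itself), a stdlib-only Python sketch:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha.

    Step-up procedure: sort the p-values, find the largest rank k with
    p_(k) <= (k / m) * alpha, and reject the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Toy example: a few small p-values among 8 tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.34, 0.60, 0.90]
print(benjamini_hochberg(pvals, alpha=0.05))
```

Note that the third-smallest p-value (0.039) survives a per-rank threshold of 0.05 but not the BH threshold 3/8 * 0.05, so only two hypotheses are rejected here.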


APPLICATIONS OF STOCHASTIC INTEGER PROGRAMMING TO LOGISTICS IN CHILE

GUEST: Antonio Alonso Ayuso. Dpto. de Estadística e Investigación Operativa, Universidad Rey Juan Carlos, Móstoles, Madrid.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, April 18, 2008, 12:00.

ABSTRACT: This talk analyzes two strategic planning problems in Chile: prison location and forest harvest management. Deterministic models cannot incorporate uncertainty in the model parameters. In particular, prison location depends on the size of the inmate population in each of the country's 13 regions (recall that Chile is a narrow strip stretching 4,200 km from north to south). On the other hand, the profitability of a forest operation, the country's second-largest export industry after copper, depends strongly on tree yields and on the highly volatile price of timber. In both cases the planning horizon is very long (20 years), and precise information on the true values of the uncertain parameters is not available. Stochastic Programming with integer variables yields solutions in reasonable time whose quality is far superior to those obtained with deterministic Mathematical Programming models. The key to solving both models is the Branch-and-Fix methodology introduced by Alonso-Ayuso, Escudero, and Ortuño in 2003. These works extend deterministic models from the literature and were carried out in collaboration with L.F. Escudero, A. Weintraub, and M. Guignard.
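The gain from modelling uncertainty explicitly can be seen in a toy newsvendor-style example (hypothetical numbers, far simpler than the prison and forestry models discussed): the deterministic approach plans for average demand, while the stochastic approach optimizes the expectation over scenarios.

```python
# Scenario data (hypothetical): demand is 1 or 3 with equal probability.
SCENARIOS = [(1, 0.5), (3, 0.5)]
PRICE, COST = 3.0, 1.0  # unit sale price and unit production cost

def expected_profit(q):
    """Expected profit of producing q units before demand is known."""
    return sum(p * (PRICE * min(q, d) - COST * q) for d, p in SCENARIOS)

# Deterministic approach: plan for the average demand (2 units).
q_det = 2
# Stochastic approach: optimize the expectation over integer decisions.
q_sto = max(range(6), key=expected_profit)

print(q_sto, expected_profit(q_det), expected_profit(q_sto))
```

The difference between the two expected profits is the "value of the stochastic solution": here the stochastic plan produces 3 units and beats the expected-value plan.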


STATISTICAL METHODS FOR ANALYZING ASSOCIATION, STRATIFICATION, AND MULTIPLE COMPARISONS IN GENETIC EPIDEMIOLOGY STUDIES

GUEST: Juan Ramón González. Centre de Recerca en Epidemiologia Ambiental (CREAL), Barcelona.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, April 25, 2008, 13:00.

ABSTRACT: Nowadays, epidemiological studies not only include variables quantifying environmental exposure to certain risk factors, but increasingly also contain genetic information on SNPs (single nucleotide polymorphisms). The statistical analysis of association studies involving SNPs is usually not complicated; nevertheless, several problems must be kept in mind when analyzing this type of data. One of the most debated arises from testing thousands of SNPs in the same study, which introduces a multiple-testing problem. Other problems stem from the genetic nature of the data: which mode of inheritance to assume when assessing the association of a SNP with the disease, and how to handle possible stratification (confounding) due to genetic differences between individuals. This talk presents these three problems, together with statistical methods proposed to address them. The methodology is illustrated with examples from real studies of patients with various complex diseases.
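The choice of inheritance model mentioned above amounts to different collapsings of the three genotype classes into a 2x2 table. A sketch with hypothetical counts (standard dominant/recessive codings, not necessarily the speaker's methods):

```python
def odds_ratio(a, b, c, d):
    """OR of a 2x2 table [[a, b], [c, d]] (cases/controls x exposed/unexposed)."""
    return (a * d) / (b * c)

# Hypothetical genotype counts (AA, Aa, aa) for cases and controls,
# where 'a' is the putative risk allele.
cases    = {"AA": 40, "Aa": 45, "aa": 15}
controls = {"AA": 60, "Aa": 32, "aa": 8}

# Dominant model: carriers of at least one 'a' vs non-carriers.
or_dom = odds_ratio(cases["Aa"] + cases["aa"], cases["AA"],
                    controls["Aa"] + controls["aa"], controls["AA"])
# Recessive model: 'aa' homozygotes vs everyone else.
or_rec = odds_ratio(cases["aa"], cases["AA"] + cases["Aa"],
                    controls["aa"], controls["AA"] + controls["Aa"])
print(round(or_dom, 2), round(or_rec, 2))
```

The two codings give different effect estimates from the same data, which is why the assumed mode of inheritance matters.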


MINIMAL INFEASIBLE SUBSYSTEMS AND BENDERS CUTS

GUEST: Matteo Fischetti. Dipartimento di Elettronica ed Informatica, Università di Padova, Italy.

LANGUAGE: English

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Tuesday, May 13, 2008, 12:00.

ABSTRACT: There are many situations in mathematical programming where cutting planes can be generated by solving a certain "cut generation linear program" whose feasible solutions define a family of valid inequalities for the problem at hand. Disjunctive cuts and Benders cuts are two familiar examples. In this talk we concentrate on classical Benders cuts, as they belong to the basic toolbox for mixed-integer programming. It is a common experience, however, that the use of Benders cuts is not always as effective as hoped, all the more so when the impact of simple yet fundamental design issues is underestimated and the method is implemented "as in its textbook description". The lack of control on the quality of the generated Benders cuts is a main issue of the method. We propose alternative selection criteria for Benders cuts and analyze them computationally. Our approach is based on the correspondence between minimal infeasible subsystems of an infeasible LP and the vertices of the so-called alternative polyhedron. The choice of the "most effective" violated Benders cut then corresponds to the selection of a suitable vertex of the alternative polyhedron, hence a clever choice of the dual objective function is crucial; the textbook Benders approach, by contrast, uses a completely random selection policy, at least when the so-called feasibility cuts are generated. Computational results on a testbed of MIPLIB instances are presented, where the quality of Benders cuts is measured in terms of "percentage of gap closed" at the root node, as customary in cutting plane methods. We show that the proposed methods allow for a speedup of 1 to 2 orders of magnitude with respect to the textbook one.


A GENERALIZED APPROACH TO PORTFOLIO OPTIMIZATION: IMPROVING PERFORMANCE BY CONSTRAINING PORTFOLIO NORMS

GUEST: Francisco Javier Nogales. Dpto. de Estadística, Universidad Carlos III, Madrid.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, May 23, 2008, 12:30.

RESUM: In this work, we provide a general framework for finding portfolios that perform well out-of-sample in the presence of estimation error. This framework relies on solving the traditional minimum-variance problem but subject to the additional constraint that the norm of the portfolio-weight vector be smaller than a given threshold. We show that our framework nests as special cases several well-known (shrinkage and constrained) approaches considered in the literature. We also use our framework to propose several new portfolio strategies. For the proposed portfolios, we provide a moment-shrinkage interpretation and a Bayesian interpretation where the investor has a prior belief on portfolio weights rather than on moments of asset returns. Finally, we compare empirically the out-of-sample performance of the new portfolios we propose to various well-known strategies in the literature across several datasets. We find that the norm-constrained portfolios we propose outperform shortsale-constrained portfolio approaches, shrinkage approaches, the 1/N portfolio, factor portfolios, and also other strategies considered in the literature.
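As a reference point for the minimum-variance problem the framework builds on, the two-asset case has a closed-form solution; a small sketch with hypothetical moments (the paper's contribution, the norm constraint on the weight vector, is not implemented here):

```python
# Two-asset global minimum-variance portfolio (hypothetical inputs).
# w* minimizes w' S w subject to w1 + w2 = 1; in closed form:
#   w1 = (s22 - s12) / (s11 + s22 - 2*s12)
s11, s22, s12 = 0.04, 0.09, 0.006  # variances and covariance

w1 = (s22 - s12) / (s11 + s22 - 2 * s12)
w2 = 1.0 - w1

def variance(a, b):
    """Portfolio variance for weights (a, b)."""
    return a * a * s11 + b * b * s22 + 2 * a * b * s12

print(round(w1, 4), round(variance(w1, w2), 6), round(variance(0.5, 0.5), 6))
```

With estimated (noisy) moments the closed-form weights can be extreme, which is what motivates shrinkage and norm constraints; here the exact solution simply beats the 1/N benchmark in variance.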


STATISTICAL MODELING OF HEALTH-RELATED QUALITY OF LIFE: THE SF-36 QUESTIONNAIRE

GUEST: Inmaculada Arostegui. Universidad del País Vasco UPV/EHU, Bizkaia.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, May 30, 2008, 13:00.

ABSTRACT: (Joint work with Vicente Núñez Antón)
Health-Related Quality of Life (HRQoL) is a prominent outcome measure in health research. It is a concept that is difficult to measure, and its measurement is carried out through questionnaires. The psychometric properties of a questionnaire, its cultural adaptation, the study design, and the analysis of the results raise important statistical issues that must be handled with care. The SF-36 Health Survey is one of the most widely used, validated, and translated instruments in the field of HRQoL measurement. Our work focuses on data analyses whose objective is to study the influence of an individual's characteristics, or of their environment, on their HRQoL. HRQoL scores in general, and SF-36 scores in particular, have a non-normal, skewed, and bounded distribution. Our proposal is based on fitting the SF-36 scores to a beta-binomial distribution and defining a beta-binomial regression (BBR) model adapted to this context. We check the adequacy of the BBR model on real and simulated data, and we compare its results with those obtained from alternative models in a sample of real patients. The selected alternatives are least-squares estimation, bootstrap, and ordinal logistic regression. The BBR model satisfies the model assumptions in all SF-36 dimensions, yielding more coherent and easily interpretable results, especially in dimensions that are strongly skewed or ordinal with few categories. We therefore recommend the BBR model for analyzing and interpreting the influence of explanatory variables on the SF-36.
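The beta-binomial distribution underlying the BBR model is easy to evaluate with log-gamma functions; a stdlib-only sketch with illustrative parameter values:

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def betabinom_pmf(k, n, a, b):
    """P(K = k) for a beta-binomial(n, a, b) score: a binomial whose
    success probability is itself Beta(a, b) distributed."""
    return exp(log_comb(n, k) + log_beta(k + a, n - k + b) - log_beta(a, b))

# A skewed, bounded score on 0..20 (n = 20); a = 0.8, b = 2.5 are
# illustrative values producing the kind of skew seen in HRQoL scores.
probs = [betabinom_pmf(k, 20, 0.8, 2.5) for k in range(21)]
print(round(sum(probs), 6))                                # pmf sums to 1
print(round(sum(k * p for k, p in enumerate(probs)), 4))   # mean n*a/(a+b)
```

The extra Beta layer lets the distribution be both bounded and heavily skewed, which an ordinary binomial or normal model cannot capture.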


ROBUST OPTIMIZATION IN PROJECT SCHEDULING

GUEST: Eduardo Conde. Dpto. de Estadística e Investigación Operativa, Facultad de Matemáticas, Universidad de Sevilla.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, June 13, 2008, 12:00.

ABSTRACT: Executing a project composed of multiple tasks, interrelated through precedence links, poses a challenge when it comes to controlling the available resources and meeting committed delivery dates. This work analyzes the problem under the assumption that the time required to complete each individual task is unknown and only its range of variation can be estimated. With the precedence relations fixed, each possible value of the execution times determines a different scenario under which the project duration will, in all likelihood, change. The minmax regret criterion is used here to obtain a robust approximation to the set of critical tasks that determine the total execution time of the project.
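A basic building block of this analysis is the scenario-wise project duration (the longest path in the precedence DAG). A toy sketch with a hypothetical four-task project, evaluating the two extreme scenarios of the duration intervals (the minmax regret machinery itself is not implemented):

```python
# Toy project DAG (hypothetical): task -> list of predecessors,
# with interval durations (lo, hi) for each task.
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
dur = {"A": (2, 3), "B": (4, 7), "C": (5, 6), "D": (1, 2)}

def project_duration(pick):
    """Longest path through the DAG for one duration scenario.

    `pick` maps each task to its duration in that scenario.
    """
    finish = {}
    for t in ["A", "B", "C", "D"]:  # topological order
        finish[t] = pick[t] + max((finish[p] for p in preds[t]), default=0.0)
    return max(finish.values())

best = project_duration({t: lo for t, (lo, hi) in dur.items()})
worst = project_duration({t: hi for t, (lo, hi) in dur.items()})
print(best, worst)
```

Note that the critical path itself changes between scenarios (A-C-D at the lower bounds, A-B-D at the upper bounds), which is exactly why a single "critical set" is ill-defined under uncertainty and a robust criterion is needed.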



INTEGER LINEAR PROGRAMMING: SMALL PROBLEMS, BIG CHALLENGES

GUEST: Juan José Salazar. Dpto. de Estadística, Investigación Operativa y Computación, Universidad de La Laguna, Tenerife.

LANGUAGE: Spanish.

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, June 20, 2008, 12:00.

ABSTRACT: We will look at examples of discrete optimization problems, defined by a single linear equation and a few (non-negative) integer variables, that modern commercial solvers (such as CPLEX 11) cannot solve in many hours of computation. We will show algorithms, alternatives to Integer Linear Programming, that solve these same examples in fractions of a second. We will analyze why Mathematical Programming fails and present an application where the efficient solution of these tiny optimization examples is essential: deciding whether the toric ideal of a given monomial curve is a complete intersection (and, when it is, computing a generating system and the Frobenius number). This application is of great interest in Algebra and is in turn motivated by Geometry, Cryptography, etc. We believe the talk may interest people working in Mathematical Optimization, Graph Theory, and Number Theory. It will have an expository orientation. Technical details in: I. Bermejo, I. García, J.J. Salazar, "An algorithm to check whether the toric ideal of an affine monomial curve is a complete intersection", Journal of Symbolic Computation 42/10 (2007) 971-991.
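For intuition on one of the objects mentioned, the Frobenius number of a set of coprime generators can be brute-forced for tiny inputs; the talk's point is precisely that specialized algorithms, rather than generic approaches like this sketch or ILP, are needed to compute it efficiently.

```python
from math import gcd
from functools import reduce

def frobenius(gens, limit=10000):
    """Largest integer not representable as a non-negative integer
    combination of `gens` (requires gcd(gens) == 1). Brute force over
    0..limit, usable only for tiny generators."""
    assert reduce(gcd, gens) == 1
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return max(n for n in range(limit + 1) if not reachable[n])

print(frobenius([3, 5]))     # two coprime generators: 3*5 - 3 - 5 = 7
print(frobenius([4, 6, 9]))
```

For two coprime generators a, b there is a closed form, ab - a - b; for three or more generators no such formula exists, which is where the computational difficulty begins.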

PRESENTATION SLIDES: click here



SPATIAL ASSOCIATION BETWEEN SPECIATED FINE PARTICLES AND MORTALITY

GUEST: Montserrat Fuentes. Department of Statistics, North Carolina State University, USA.

LANGUAGE: Spanish.

VENUE: Room 005, FME, Campus Sud, UPC

DATE: Wednesday, June 25, 2008, 13:00. (To be confirmed)

ABSTRACT: Particulate matter (PM) has been linked to a range of serious cardiovascular and respiratory health problems. Recent epidemiologic studies suggest that exposure to PM may result in tens of thousands of excess deaths per year, and many more cases of illness, among the US population. The main objective of our research is to quantify the uncertainties about the impact of fine PM exposure on mortality. We develop a multivariate spatial regression model for estimating the risk of mortality associated with fine PM and its components across all counties of the coterminous US. Our approach adjusts for meteorology and other confounding influences, such as socioeconomic factors, age, gender, and ethnicity; characterizes different sources of uncertainty in the data; and models the spatial structure of several components of fine PM. We consider a flexible Bayesian hierarchical model for a space-time series of counts (mortality) by constructing a likelihood-based version of a generalized Poisson regression model. The model has the advantage of incorporating both over- and underdispersion, in addition to correlations that occur in space and time. We apply these methods to monthly county mortality counts and measurements of total fine PM and several of its components from national monitoring networks in the U.S. Our results suggest an increase by a factor of 2 in the risk of mortality due to fine particles relative to coarse particles. Our study also shows that in the Western U.S. the nitrate and crustal components of speciated fine PM seem to have more impact on mortality than the other components, whereas in the Eastern U.S. sulfate and ammonium explain most of the fine PM effect.



UNIT ROOT TESTS IN TIME SERIES ANALYSIS

GUEST: Sastry G. Pantula. Department of Statistics, North Carolina State University, USA.

LANGUAGE: English.

VENUE: Room 005, FME, Campus Sud, UPC

DATE: Thursday, June 26, 2008, 13:00. (To be confirmed)

ABSTRACT: Unit root tests in time series analysis have received considerable attention since the seminal work of Dickey and Fuller (1976). In this talk, some of the existing unit root test criteria will be reviewed, along with the size, power, and robustness to model misspecification of the various criteria. More recent work on unit root tests in which the alternative hypothesis is a unit root process will be discussed, and tests for trend-stationary versus difference-stationary models will be covered briefly. Current work on unit root criteria for random coefficient models and seasonal series will also be presented, together with examples of unit root time series and future directions in unit root hypothesis testing.
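The basic Dickey-Fuller regression behind many of these criteria fits delta_y_t = rho * y_{t-1} + e_t and compares the t-statistic of rho against nonstandard (Dickey-Fuller) critical values rather than normal ones. A stdlib-only sketch, run on made-up numbers chosen only to exercise the arithmetic:

```python
from math import sqrt

def df_stat(y):
    """OLS slope and t-statistic for the no-constant regression
    delta_y_t = rho * y_{t-1} + e_t (a Dickey-Fuller-style regression)."""
    x = y[:-1]                                   # lagged levels
    d = [b - a for a, b in zip(y[:-1], y[1:])]   # first differences
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, d)) / sxx
    resid = [b - rho * a for a, b in zip(x, d)]
    s2 = sum(e * e for e in resid) / (len(d) - 1)
    return rho, rho / sqrt(s2 / sxx)

rho, t = df_stat([1.0, 2.0, 3.0, 4.0, 5.0])
print(round(rho, 4), round(t, 4))
```

In practice one would use longer series, include a constant and/or trend term, and look up the appropriate Dickey-Fuller critical value; this sketch only shows the regression at the core of the test.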


SHOULD WE USE RELATIVE RISKS OR ODDS RATIOS IN CLUSTER RANDOMISED TRIALS WITH BINARY OUTCOMES WHICH HAVE HIGH PROPORTIONS?

GUEST: Michael Joseph Campbell, Director of Health Services Research, Medical Statistics Group, Sheffield (UK).

LANGUAGE: English.

VENUE: Room 005, FME, Campus Sud, UPC

DATE: Friday, June 27, 2008, 13:00.

ABSTRACT: It is well known that, in cluster randomised trials with a binary outcome and a logistic link, the population-averaged and cluster-specific models estimate different population parameters (Neuhaus and Jewell, Biometrika, 1993). It is less well appreciated that for a log link the population parameters are the same (Campbell et al., Statistics in Medicine, 2007), and a log link leads to a relative risk. This suggests that for a prospective cluster randomised trial the relative risk is easier to interpret. Commonly the odds ratio and the relative risk have similar values and are interpreted similarly. However, when the incidence of events is high they can differ quite markedly, and it is unclear which is the better parameter. We estimate the relative risk through either generalised estimating equations or a random effects model, which are the population-averaged and cluster-specific methods respectively. Although a cluster-specific model for a clinical trial has no realization, the model can be fitted relatively easily using either Gaussian quadrature or Bayesian methods (MCMC). We explore these issues in a cluster randomised trial, the Paramedic Practitioner Older People's Support Trial (Mason et al., BMJ, 2007), which investigated whether paramedic practitioners who assessed and treated patients in the community could reduce emergency admissions to hospital. In this trial the admission rate was high and use of a logistic model was potentially misleading. A relative risk was a better summary measure and gave a different interpretation of some of the data. However, in this case the intraclass correlation coefficient (ICC) was low, so fitting was not a problem. For simulated data with a high ICC there were no problems fitting a logistic model, but there were convergence problems with a log-linear model. We conclude that, notwithstanding the attractions of a log-linear model leading to a relative risk, with high incidence and a high ICC a logistic model is in general the best option.
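The divergence between the two effect measures at high event proportions is easy to see on a hypothetical 2x2 table (made-up counts, not the trial's data):

```python
# Hypothetical 2x2 table with a high event proportion:
#                 event   no event
# intervention      80       20
# control           60       40
a, b, c, d = 80, 20, 60, 40

risk_ratio = (a / (a + b)) / (c / (c + d))  # ratio of event proportions
odds_ratio = (a * d) / (b * c)              # ratio of odds
print(round(risk_ratio, 3), round(odds_ratio, 3))
```

With rare events the two measures nearly coincide, but here the odds ratio (about 2.67) is roughly double the relative risk (about 1.33), so reporting the odds ratio as if it were a relative risk would badly overstate the effect.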


ESTIMATING AND FORECASTING GARCH VOLATILITY IN THE PRESENCE OF OUTLIERS

GUEST: Esther Ruiz, Departamento de Estadística, Universidad Carlos III de Madrid.

LANGUAGE: Spanish.

VENUE: Room to be confirmed, FME, Campus Sud, UPC

DATE: Friday, July 4, 2008, 13:00.

ABSTRACT: The main goal when fitting GARCH models to conditionally heteroscedastic time series is to obtain estimates of the underlying volatilities. It is well known that outliers can bias the ML estimator of the GARCH parameters. However, little is known about their effects on the estimated volatilities, and this is the objective of this paper. We analyze the biases incurred when the volatility is estimated by substituting the GARCH parameters by their ML estimates, and show that they can be very large even when there is a single moderate outlier. Furthermore, the estimated volatilities are biased not only in the period when the outlier appears but throughout the sample period. Obviously, these biases affect the construction of prediction intervals for future observations. It therefore seems important to take into account the potential presence of outliers when estimating volatilities. However, the available procedures for detecting outliers in conditionally heteroscedastic time series are rather complicated. Consequently, in this paper we propose to use robust procedures. In particular, we propose a new, very simple robust estimator and show that its properties are comparable with those of other, more complicated ones available in the literature. We then analyze the properties of the estimated and predicted volatilities obtained using robust filters based on robust estimates of the parameters. All the results are illustrated by applying the alternative estimators and filters to the estimation and prediction of the underlying volatility of daily S&P500 and IBEX35 returns.

(Joint work with M. Angeles Carnero and Daniel Peña).
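The volatility filter at the centre of this discussion is the GARCH(1,1) recursion; a sketch with illustrative parameter values (not estimates from real data) shows how a single outlier inflates the filtered variance for many subsequent periods:

```python
# GARCH(1,1) volatility filter:
#   sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}
# Illustrative parameter values; persistence alpha + beta = 0.9.
omega, alpha, beta = 0.1, 0.1, 0.8

def garch_filter(returns):
    """Conditional variance series implied by the parameters."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

returns = [0.5, -0.3, 5.0, 0.2, -0.1]  # one large outlier at t = 2
vols = garch_filter(returns)
print([round(v, 3) for v in vols])
```

Because the outlier enters through r**2 and then decays only at rate beta, the filtered variance stays inflated long after the outlier itself, which is the contamination mechanism the robust filters are designed to dampen.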


ANOVA TESTS FOR FUNCTIONAL DATA

GUEST: Manuel Febrero, Departamento de Estadística e Investigación Operativa, Universidad de Santiago de Compostela.

LANGUAGE: Spanish.

VENUE: Room to be confirmed, FME, Campus Sud, UPC

DATE: Friday, July 4, 2008, 15:00.

ABSTRACT: This presentation proposes a procedure for handling complex ANOVA designs with functional data. The procedure is based on the analysis of randomly chosen one-dimensional projections, which makes it computationally simple. The presentation briefly discusses some theoretical results, as well as simulations and analyses of real data. Since functional data analysis includes multivariate data as a particular case, the presentation also compares the proposed procedure with some of the usual MANOVA tests on two classical data sets.
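The core step, reducing each curve to a scalar by projection and then applying a classical one-way ANOVA, can be sketched as follows (hypothetical discretized curves; the actual procedure draws the projection direction at random and repeats, whereas a fixed direction is used here for reproducibility):

```python
def one_way_f(groups):
    """Classical one-way ANOVA F statistic for lists of scalar samples."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

def project(curves, direction):
    """Reduce each discretized curve to a scalar via an inner product."""
    return [sum(c * d for c, d in zip(curve, direction)) for curve in curves]

# Two groups of discretized "curves" (hypothetical data) observed at
# four points, and one projection direction.
direction = [0.5, 0.5, 0.5, 0.5]
g1 = [[0.0, 0.1, 0.2, 0.3], [0.1, 0.2, 0.3, 0.4], [0.0, 0.2, 0.1, 0.3]]
g2 = [[1.0, 1.1, 1.2, 1.3], [1.1, 1.0, 1.3, 1.2], [0.9, 1.2, 1.1, 1.4]]
f_stat = one_way_f([project(g1, direction), project(g2, direction)])
print(round(f_stat, 1))
```

After projection the infinite-dimensional comparison becomes an ordinary univariate ANOVA, which is what makes the procedure computationally cheap.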



SEQUENTIAL APPROACH FOR THE LOT SIZING PROBLEM WITH SETUP TIMES

GUEST: Jean-Marie Bourjolly. Université du Québec à Montréal and CIRRELT, Montreal, Canada.

LANGUAGE: English

VENUE: Room C4002, Campus Nord, UPC (see map)

DATE: Tuesday, July 8, 2008, 12:00.

ABSTRACT: The Capacitated Lot-Sizing Problem is a fundamental decision problem at the tactical planning stage of manufacturing, which consists of deciding how many units of each demanded item to produce in each period of the planning horizon. In this talk we compare a "global" metaheuristic (in which the whole planning horizon is considered at once) with a "rolling horizon" approach that uses the same metaheuristic to solve a sequence of sub-problems corresponding to periods (1), then (1,2), (1,2,3), (1,2,3,4), etc. We show that the sequential approach often gives substantially better results than the global one, which suggests that significant gains can be obtained by applying a heuristic to a sequence of smaller sub-problems instead of applying it to the full-scale instance of a problem.
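For the uncapacitated special case (no capacities or setup times modelled), the underlying trade-off between setup and holding costs has a classical dynamic-programming solution due to Wagner and Whitin; a sketch on a tiny hypothetical instance:

```python
def lot_sizing(demand, setup, hold):
    """Minimum cost for the uncapacitated lot-sizing problem (Wagner-Whitin).

    f[t] = min over the last production period j <= t of
           f[j-1] + setup + holding cost of serving demand j..t from j.
    """
    T = len(demand)
    f = [0.0] * (T + 1)
    for t in range(1, T + 1):
        best = float("inf")
        for j in range(1, t + 1):
            holding = sum(hold * (i - j) * demand[i - 1]
                          for i in range(j, t + 1))
            best = min(best, f[j - 1] + setup + holding)
        f[t] = best
    return f[T]

# Hypothetical instance: three periods, setup cost 100, unit holding cost 1.
print(lot_sizing([60, 100, 140], setup=100, hold=1))
```

On this instance the optimum produces in every period (three setups, no holding), beating both a single large batch and a two-batch plan; with capacities and setup times added, the problem becomes NP-hard, which is where the metaheuristics of the talk come in.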



FIRST-IN MAN STUDIES: WHAT HAVE WE LEARNED AND WHAT SHOULD WE DO IN FUTURE

GUEST: Stephen Senn. University of Glasgow, UK.

LANGUAGE: English

VENUE: Room 005, FME, Campus Sud, UPC

DATE: Wednesday, September 3, 2008, 12:00.

RESUM: On the morning of March 13, 2006, eight healthy volunteers forming the first cohort of a planned group escalation study of TeGenero's monoclonal antibody TGN 1412 at a Phase I clinical research facility adjacent to Northwick Park Hospital were given their treatment. Within a short period of time, the six men who had been allocated to TGN 1412 began showing signs of an adverse reaction. In fact, they were all showing the first signs of a severe cytokine storm. By that evening, all six had been admitted to intensive care. Hindsight is an exact science. We are well aware that everybody can be wise in retrospect about what happened at Northwick Park. Unfortunately for the six young men involved, such hindsight is too late. However, it behooves all of us who plan, run or analyze trials to learn the lessons and to plan better in the future. It is in this spirit that we have prepared our report and hope that others will find it useful.



STATISTICS IN INDUSTRY AND BUSINESS: A HISTORICAL PERSPECTIVE

GUEST: Bovas Abraham. Department of Statistics and Actuarial Science, University of Waterloo, Ontario, Canada.

LANGUAGE: English

VENUE: EIO Seminar Room, ETSEIB (Industrial Engineering Building), 6th floor, Campus Sud, Universitat Politècnica de Catalunya, Avda. Diagonal 647

DATE: Monday, October 6, 2008, 12:00.

ABSTRACT: Bovas Abraham is the president of the International Society for Business and Industrial Statistics and the founding president of the Business and Industrial Statistics Section of the Statistical Society of Canada. He is also the former Director of the Institute for Improvement in Quality and Productivity and has been a consultant and teacher with the Institute since its inception. He has given seminars across North America, Europe, Australia, and the Far East. Bovas has been a faculty member at the University of Waterloo in the Department of Statistics and Actuarial Science since 1977. He received his Bachelor of Science from the University of Kerala in India and his Ph.D. from the University of Wisconsin, Madison, U.S.A. He has been involved as a consultant in a wide range of statistical applications in industry in Canada and the United States, with companies including General Motors, Standard Products, Manchester Plastics, Wescast Industries, Imperial Oil, Nortel Networks, and BF Goodrich. In addition to extensive consulting and teaching experience in the automotive and automotive-supplier industry, he has consulted for the Ministry of the Environment. His main areas of interest include quality improvement and the management and implementation of statistical procedures such as designed experiments, SPC, and time series analysis. He is co-author of the books "Statistical Methods for Forecasting" and "Introduction to Regression Modeling", and the editor of the volume "Quality Improvement Through Statistical Methods". Bovas is a Fellow of the American Society for Quality, a Fellow of the Royal Statistical Society, a Fellow of the American Statistical Association, an elected member of the International Statistical Institute, and a member of the International Environmetrics Society and the Statistical Society of Canada.



THE PERSPECTIVE RELAXATION FOR MIXED INTEGER NONLINEAR PROGRAMS WITH SEMICONTINUOUS VARIABLES

GUEST: Claudio Gentile. Institute of Systems Analysis and Computer Science "Antonio Ruberti", Italian National Research Council (IASI-CNR), Rome, Italy.

LANGUAGE: English

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, October 10, 2008, 12:00.

ABSTRACT: In this talk we present different solution methods for a class of Mixed-Integer Programming problems with semicontinuous variables and convex objective functions. We derive the best convex relaxation for the separable case, which is closely related to the perspective function of the continuous part of the objective function (and is thus named the "Perspective Relaxation"). Using a characterization of the subdifferential of the perspective function, we derive "perspective cuts", a family of valid inequalities that can be used to solve this class of problems with a Branch-and-Cut algorithm. We then show an alternative approach for solving the Perspective Relaxation based on a Second-Order Cone Programming (SOCP) reformulation. Moreover, we introduce a general way to apply the Perspective Relaxation to problems with a nonseparable quadratic objective function; in particular, a very effective implementation of the method requires the solution of a Semidefinite Programming problem. Finally, we provide computational results comparing the different implementations of the Perspective Relaxation and the standard continuous relaxation on two relevant test problems with different characteristics: the Unit Commitment problem and the Mean-Variance model in Portfolio Optimization.

PRESENTATION SLIDES: click here
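The tightening delivered by the perspective function can be seen on a single semicontinuous variable with a quadratic cost; a numeric sketch with illustrative values (not the talk's test problems):

```python
# Semicontinuous variable: x = 0 when inactive (z = 0), x in (0, u] when
# active (z = 1), with cost f(x) = x**2 plus a fixed cost c when active.
# For a fractional relaxation value z in (0, 1), compare the two bounds:
#   standard relaxation:    x**2     + c*z   (with x <= u*z)
#   perspective relaxation: x**2 / z + c*z   (i.e. z * f(x / z) + c*z)
c, u, x = 1.0, 1.0, 0.5  # illustrative values

def standard(z):
    return x * x + c * z

def perspective(z):
    return x * x / z + c * z

# Minimize each bound over the feasible fractional z in [x/u, 1].
zs = [x / u + i * (1 - x / u) / 1000 for i in range(1001)]
std_bound = min(standard(z) for z in zs)
persp_bound = min(perspective(z) for z in zs)
print(std_bound, persp_bound)
```

The perspective bound (1.0 here) is strictly above the standard continuous-relaxation bound (0.75), i.e. strictly tighter as a lower bound on the true mixed-integer cost, which is the source of the computational gains reported in the talk.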



COMPARING INCOME DISTRIBUTIONS BY MEANS OF L-STATISTICS

GUEST: José Ramón Berrendero. Departamento de Matemáticas, Universidad Autónoma de Madrid.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, October 24, 2008, 12:30.

ABSTRACT: The seminar presents a condition guaranteeing that two random variables, ordered with respect to the convex stochastic order, have the same distribution. The condition involves the expected values of order statistics and admits an economic interpretation in terms of comparing the resources accumulated by the poorest, or the richest, individuals in random samples drawn from two populations. From this condition, discrepancy measures are derived for testing the null hypothesis that two variables have the same distribution against the alternative that one of them strictly dominates the other in the convex order. Some properties of the proposed tests are illustrated with an empirical example comparing income distributions.

A preliminary version of the article on which the seminar is based (joint work with Javier Cárcamo) can be downloaded here



DESIGN OF OBSERVATIONAL LONGITUDINAL STUDIES WITH TIME-VARYING EXPOSURE

GUEST: Xavier Basagaña. Centre de Recerca en Epidemiologia Ambiental (CREAL), Centre de Recerca Biomèdica (CRB), Barcelona.

LANGUAGE: Catalan

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, December 5, 2008, 12:30.

ABSTRACT: Existing sample size formulas for longitudinal studies assume that the exposure (the independent variable, assumed binary) is constant over time within an individual, or that it varies in a way controlled by the investigator. In observational studies, however, the investigator does not control how exposure varies over time for each individual; a large number of exposure patterns are typically observed, with wide variation in the number of exposed periods per individual and changes in the proportion of exposed individuals in each period. The presentation gives sample size formulas that are valid for observational studies and require only one additional parameter. In addition, the efficiency of studies with time-varying exposure is compared with that of studies with fixed exposure, both when the aim is to estimate the effect of exposure on the mean response and when it is to estimate the effect on the change of the response over time. The large changes in efficiency observed in some cases mean that formulas assuming constant exposure cannot be used when the exposure will in fact vary within individuals.
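For contrast, the classical constant-exposure setting uses the familiar two-sample formula inflated by a compound-symmetry design effect. A sketch of that standard formula (not the talk's new formulas), with hardcoded normal quantiles for 5% two-sided significance and 80% power:

```python
from math import ceil

def n_per_group(delta, sigma, m, rho, z_alpha=1.96, z_beta=0.84):
    """Subjects per group to detect a mean difference `delta` with m
    repeated measures per subject and within-subject correlation rho
    (compound symmetry): the usual two-sample formula times the design
    effect (1 + (m - 1) * rho) / m."""
    base = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ceil(base * (1 + (m - 1) * rho) / m)

# With 4 measurements per subject and rho = 0.5, repeated measures
# reduce the required n relative to one measurement per subject (m = 1).
print(n_per_group(0.5, 1.0, 4, 0.5), n_per_group(0.5, 1.0, 1, 0.0))
```

The talk's point is that when exposure varies within individuals, this constant-exposure calculation can be badly off, and an extra parameter describing the exposure's within-subject variation is needed.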



BUILDING COVARIANCE FUNCTIONS FROM THE MARGINS: SPACE AND SPACE-TIME CHALLENGES

GUEST: Emilio Porcu. Departamento de Matemáticas, Universitat Jaume I de Castelló.

LANGUAGE: Spanish

VENUE: Room C5016, Campus Nord, UPC (see map)

DATE: Friday, December 12, 2008, 12:30.

ABSTRACT: This presentation is oriented to the construction of space and space-time models built from the margins. We argue for the existence of a general class of link functions that, applied to marginal covariances, preserve positive definiteness on higher-dimensional spaces. The straightforward application of this axiomatic view is to space-time geostatistics. We also assess some general properties of positive definite functions through componentwise isotropy and show, as corollaries of a general construction, general criteria for the permissibility of the geometric and harmonic means of covariance functions, a result related to an open problem posed in an early paper by Crum (1930). An application to Irish wind speed data illustrates our findings.

PRESENTATION SLIDES: click here



ANALYSIS OF LOW-COPY NUMBER DNA PROFILES

GUEST: David Balding. Centre for Biostatistics, Imperial College London.

LANGUAGE: English

VENUE: EIO Seminar Room, ETSEIB (Industrial Engineering Building), 6th floor, Campus Sud, Universitat Politècnica de Catalunya, Avda. Diagonal 647

DATE: Friday, December 19, 2008, 16:30.

ABSTRACT: Recently, forensic DNA profiling has been used with far smaller amounts of DNA than was previously thought possible. This "low copy number" profiling enables DNA to be recovered from the slightest traces left by touch or even merely breath, but brings with it serious interpretation problems that courts have not yet adequately addressed. The most important challenge to interpretation arises when either or both of "dropout" and "dropin" create discordances between the crime-scene DNA profile and that expected under the prosecution allegation. Stochastic artefacts affecting the peak heights read from the electropherogram (EPG) are also problematic, in addition to the effects of masking from the profile of a known contributor. We outline a framework for assessing such evidence, based on likelihood ratios that involve dropout, dropin, and masking probabilities as parameters that must be supplied independently, and apply it to two casework examples, revealing serious deficiencies in the reported analyses. In particular, analysis based on exclusion probabilities, widely used in the USA and other countries, can be systematically unfair to defendants, sometimes extremely so. We also show that the LR often depends strongly on the assumed value of the dropout probability, and that there is typically no approximation that is useful for all values. We illustrate that ignoring the possibility of dropin is usually unfair to defendants, and argue that when the prosecution relies on dropout, it is unsatisfactory to ignore any possibility of dropin. Finally, we propose an approach to allowing for uncertain allele calls and uncertain masking, for example due to possible stutter artefacts.

Joint work with John Buckleton, ESR, Auckland, New Zealand.
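The dependence of the LR on the assumed dropout probability can be illustrated with a deliberately simplified single-locus toy model (dropin ignored, Hardy-Weinberg genotype frequencies, a hypothetical allele frequency; real casework models are far richer than this sketch):

```python
# Toy single-locus likelihood ratio with allele dropout (no dropin).
# Crime-scene profile shows allele A only; the suspect's genotype is AB.
# Hp: suspect is the source -> A seen, B dropped out: (1 - d) * d.
# Hd: unknown source with genotype drawn from Hardy-Weinberg frequencies:
#     AA shows {A} unless both copies drop out; AB shows {A} if B drops
#     and A does not; BB cannot show {A} without dropin.
p = 0.1  # hypothetical population frequency of allele A

def likelihood_ratio(d):
    """LR = P(profile | Hp) / P(profile | Hd) at dropout probability d."""
    q = 1.0 - p
    num = (1.0 - d) * d
    den = p * p * (1.0 - d * d) + 2.0 * p * q * (1.0 - d) * d
    return num / den

for d in (0.1, 0.3, 0.6):
    print(round(likelihood_ratio(d), 2))
```

Even in this stripped-down setting the LR moves noticeably as the assumed dropout probability changes, which is why the abstract stresses that no single approximation is useful for all values of that parameter.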
