Seminar Program: 2013

  • Friday, 20 December 2013, 12:30

The statistics of Hardy-Weinberg equilibrium

Jan Graffelman, Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya, Spain.

 

  • Thursday, 12 December 2013, 12:30

Managing Risks with Data

Ron S. Kenett, The KPA Group, Raanana, Israel and Department of Applied Mathematics and Statistics, University of Turin, Turin, Italy and Center for Risk Engineering, NYU, USA.

 

  • Friday, 22 November 2013, 12:30

Longitudinal data analysis with structural equation models
Jesús Rosel, Departamento de Psicología Evolutiva, E, S y Metodología, Universidad Jaume I, Castellón, Spain.

 

  • Thursday, 21 November 2013, 12:30

The Minimum Flow Cost Hamiltonian Cycle Problem: Formulations and Related Problems

Ivan Contreras, Mechanical and Industrial Engineering Dept., Concordia University and CIRRELT, Montreal, Canada.

 

  • Friday, 8 November 2013, 12:30

Classification and unfolding models

José Fernando Vera, Departamento de Estadística e Investigación Operativa, Universidad de Granada.

 

  • Friday, 25 October 2013, 12:30

Matheuristics for arc routing problems with profits

Claudia Archetti, Università degli Studi di Brescia, Italy

 

  • Monday, 7 October 2013, 12:30

Methods for Descriptive Factor Analysis of Multivariate Geostatistical Data: a Case-study Comparison

Samuel D. Oman, Department of Statistics, Hebrew University, Jerusalem, Israel

 

  • Friday, 4 October 2013, 12:30

A location-allocation-local-search algorithm for handling connectivity and multiple balancing constraints in territory design

Roger Z. Ríos, Universidad Autónoma de Nuevo León, México

 

  • Friday, 24 May 2013, 12:30

Contributions of statistics to evaluating the effectiveness and impact of public health interventions

Àngela Domínguez, Preventive Medicine and Public Health, Universitat de Barcelona

 

  • Friday, 8 March 2013, 18:00-19:30


As part of the International Year of Statistics, we present the round table "Trials and Truths", with two talks by Rosa Lamarca and Stephen Senn, organized in collaboration with the Societat Catalana d'Estadística (SCE) and the Facultat de Matemàtiques i Estadística (FME) of the UPC.

Presenters: Jan Graffelman (DEIO Seminar) and Lupe Gómez (SCE)
Moderator: Erik Cobo (DEIO)

Program:

1. Speaker 1: Time to change disclosing clinical trials. Rosa Lamarca, Clinical Statistics, Laboratoris Almirall, Barcelona

2. Speaker 2: Bad JAMA? Are medical journal editors biased in favour of positive studies? Stephen Senn, Competence Center in Methodology and Statistics, Luxembourg.

3. Discussion

4. Surprise

 

  • Wednesday, 20 February 2013, 12:30

Large Scale Optimization with FAIPA, the Feasible Arc Interior Point Algorithm, for nonlinear optimization
José Herskovits Norman, OptimizE - Engineering Optimization Lab, Mechanical Engineering Program, COPPE, Federal University of Rio de Janeiro

  • Tuesday, 22 January 2013, 10:30

A Bilevel Approach for Optimal Location and Contract Pricing of Distributed Generation in Radial Distribution Systems Using Mixed-Integer Linear Programming
Marcos J. Rider, Electrical Engineering Department, Universidade Estadual Paulista, Brazil.



The statistics of Hardy-Weinberg equilibrium

 

SPEAKER: Jan Graffelman

LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 20 December 2013, 12:30
ABSTRACT: The Hardy-Weinberg law is a cornerstone principle of modern genetics. The law is more than a century old, and was independently stated in 1908 by the English mathematician Godfrey Hardy and the German physician Wilhelm Weinberg. The law states that, in the absence of disturbing factors (migration, differential survival and others), allele and genotype frequencies in a biological population will achieve equilibrium values within one generation and remain stable afterwards.

The principle is still relevant today, because modern genetic studies use statistical tests for equilibrium as a device to detect genotyping error. Moreover, in genetic epidemiology the law is assumed in many disease models. The HW law has been a topic of intense research, and there are hundreds of research papers dedicated to it.

Pearson's chi-square test has been the most popular procedure to test genetic markers for equilibrium for decades, though nowadays computer-intensive exact procedures have become more and more popular. Currently, geneticists have several possibilities to investigate equilibrium: the chi-square test, exact procedures, permutation tests, likelihood ratio tests and Bayesian procedures.
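The classical chi-square procedure is simple enough to sketch. Below is a minimal, illustrative Python version for a single bi-allelic marker; the function name and the erfc shortcut for the 1-degree-of-freedom p-value are our own choices, and the full toolkit of tests discussed in the talk is implemented in the HardyWeinberg R package.

```python
import math

def hwe_chisq(n_aa, n_ab, n_bb):
    """Pearson chi-square test for Hardy-Weinberg equilibrium at one
    bi-allelic marker, given the observed genotype counts."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # ML estimate of the A allele frequency
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    stat = sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))
    # chi-square survival function with 1 degree of freedom
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value
```

For a sample in perfect equilibrium, e.g. `hwe_chisq(25, 50, 25)`, the statistic is 0 and the p-value 1; in practice, markers with very small p-values are flagged as possible genotyping errors.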

In this talk I will give an overview of the various statistical tests for Hardy-Weinberg equilibrium, and address some of the related statistical issues in more detail such as ML estimation, power computation, definition of the p-value under a discrete reference distribution, graphical representations and dealing with missing genotype data.

References:

Weir, B. S. (1996) Genetic Data Analysis II, Sinauer Associates, Massachusetts. Chapter 3.
Graffelman, J., Sánchez, M., Cook, S. and Moreno, V. (2013) Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information. To appear in PLOS ONE.
Graffelman, J. (2013) Exploring bi-allelic genetic markers: the HardyWeinberg package. To appear in the Journal of Statistical Software.
Graffelman, J. and Moreno, V. (2013) The Mid p-value in exact tests for Hardy-Weinberg proportions. Statistical Applications in Genetics and Molecular Biology. 12(4): 433-448. DOI: http://dx.doi.org/10.1515/sagmb-2012-0039
Graffelman, J. and Egozcue, J.J. (2011) Hardy-Weinberg equilibrium: a nonparametric compositional approach. In Pawlowsky-Glahn, V. and Buccianti A., editors, Compositional Data Analysis: Theory and Applications, pages 208-217, John Wiley & Sons, Ltd.
Graffelman, J. and Morales-Camarena, J. (2008) Graphical tests for Hardy-Weinberg Equilibrium based on the ternary plot. Human Heredity 65(2): 77-84. doi: http://dx.doi.org/10.1159/000108939



Managing Risks with Data

 

SPEAKER: Ron S. Kenett

LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Thursday, 12 December 2013, 12:30
ABSTRACT: Assessing exposure to potential risk events and initiating proactive risk mitigation actions is currently a clear priority of businesses, organizations and governments worldwide. Managing risks with data is a growing discipline that involves data acquisition and data merging, risk analytics and risk management decision-support systems. The presentation provides an overview of modern integrated risk management, including examples of how qualitative unstructured data, like text, can be combined with quantitative data, like social network dynamics and technical performance time series, to generate integrated risk scores. We suggest that data-based risk analysis is an essential competency complementing and reinforcing the traditional subjective scoring methods used in classical risk management. The examples we use consist of applications of risk scoring models for evaluating risks, Bayesian networks to map cause and effect, ontologies to interpret automated text annotation, ETL to merge various databases, and a follow-up integrated risk management approach. Examples from the FP7 RISCOSS project will also be provided. RISCOSS is about risk management of free open source software (FOSS) adoption and deployment (www.riscoss.eu). Some references are listed below.

References:

Franch, X. et al, 2013, Managing Risk in Open Source Software Adoption, ICSOFT2013, 8th International Joint Conference on Software Technologies, Reykjavik, Iceland, July 29-31st. http://riscoss.sites.ow2.org/bin/download/Events/WebHome/ICSOFT-EA_2013_78_PROCS.pdf
Kenett, R.S. and Zacks, S., 1998. Modern Industrial Statistics: Design and Control of Quality and Reliability, Duxbury Press, San Francisco, Spanish edition, 2000, 2nd edition 2003, Chinese edition, 2004.
Kenett, R.S. and Zacks, S., with contributions by D. Amberti, 2014. Modern Industrial Statistics with applications in R, MINITAB and JMP.
http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118456068.html
Kenett, R.S. and Salini, S., 2008. Relative Linkage Disequilibrium Applications to Aircraft Accidents and Operational Risks. Transactions on Machine Learning and Data Mining, Vol.1, No 2, pp. 83-96.
Kenett, R.S. and Baker, E.M., 2010. Process Improvement and CMMI for Systems and Software, Taylor and Francis, Auerbach CRC Publications.
http://www.crcpress.com/product/isbn/9781420060508
Kenett, R.S. and Raanan, Y., 2010. Operational Risk Management: a practical approach to intelligent data analysis, Wiley and Sons.
http://www.wiley.com/WileyCDA/WileyTitle/productCd-047074748X.html
RISCOSS, Managing Risk and Costs in Open Source Software Adoption, http://www.riscoss.eu




Longitudinal data analysis with structural equation models

 

SPEAKER: Jesús Rosel

LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 22 November 2013, 12:30
ABSTRACT: Different structural equation models for the analysis of longitudinal data will be reviewed: (a) univariate models of observed variables, (b) multivariate models of observed variables, (c) models with latent variables, (d) models unconditional and conditional on other variables, (e) models with variable interactions, (f) models with nonlinear variables, (g) models with a constant, (h) single-level and multilevel models, and (i) other advances in SEM for longitudinal data (the latent growth curve model, the latent difference score, etc.). Particular attention is paid to variable interactions and to nonlinear transformations of the variables, since these are rarely used in empirical research. Such models, however, offer interesting possibilities for researchers wishing to test the relationships among the variables they obtain. Possible applications are described, along with their relation to other models (repeated-measures ANOVA), their advantages (similarity to multilevel models, the ability to compare levels across factors, the notions of invariance and change: configural invariance; invariance of the loadings, or 'weak' invariance; and invariance of the factor levels, or 'strong' invariance) and their drawbacks (ad hoc software, fitting procedures, absence of a global goodness-of-fit criterion, ...).



The Minimum Flow Cost Hamiltonian Cycle Problem: Formulations and Related Problems

 

SPEAKER: Ivan Contreras

LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Thursday, 21 November 2013, 12:30
ABSTRACT:
In this talk we introduce the Minimum Flow Cost Hamiltonian Cycle Problem (FCHC). Given a graph and positive flow between pairs of vertices, the FCHC consists of finding a Hamiltonian cycle that minimizes the total flow cost between pairs of vertices through the shortest path on the cycle. Potential applications of the FCHC arise naturally in telecommunications network design and in rapid transit systems planning, namely in the design of automated guided vehicles (AGV) networks. This problem also appears as a subproblem in complex general network design problems in which a ring topology is sought. We present five different mixed integer programming formulations for the FCHC which are theoretically and computationally compared. We also propose several families of valid inequalities for one of the formulations and perform some computational experiments to assess the performance of these inequalities.
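To make the objective concrete, here is a small illustrative Python function (our own sketch, not material from the talk) that evaluates the FCHC cost of one candidate Hamiltonian cycle: each pair's flow is routed along the cheaper of the two arcs of the cycle connecting the pair.

```python
from itertools import combinations

def cycle_flow_cost(cycle, cost, flow):
    """Total cost of routing every pair's (symmetric) flow along its
    shortest path on the cycle.  `cycle` lists the vertices in tour
    order; `cost[u][v]` is the edge cost, `flow[u][v]` the pair flow."""
    n = len(cycle)
    # cost of each consecutive cycle edge, in tour order
    edge = [cost[cycle[i]][cycle[(i + 1) % n]] for i in range(n)]
    ring = sum(edge)                      # total length of the cycle
    total = 0
    for i, j in combinations(range(n), 2):
        one_way = sum(edge[i:j])          # path following the tour order
        total += flow[cycle[i]][cycle[j]] * min(one_way, ring - one_way)
    return total
```

On a 4-vertex cycle with unit edge costs and unit flow between every pair, the cost is 8 (four adjacent pairs at distance 1, two opposite pairs at distance 2); the FCHC asks for the vertex ordering minimizing this quantity, which is what the formulations compared in the talk encode.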

THE SPEAKER:

Ivan Contreras is a professor in the Department of Mechanical and Industrial Engineering at Concordia University and a regular member of the interuniversity research centre CIRRELT. He holds bachelor's and master's degrees in Industrial Engineering from the Universidad de las Américas, Mexico. In 2009 he obtained his PhD from the Department of Statistics and Operations Research at UPC. From then until joining Concordia University in 2011, he held postdoctoral research positions at the University of Heidelberg and at CIRRELT. His main research lines are hub location, hub network design, discrete location, network design, and decomposition methods for large-scale optimization problems.

More information about Ivan Contreras is available on his web page.



Classification and unfolding models

 

SPEAKER: José Fernando Vera

LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 8 November 2013, 12:30
ABSTRACT: Classification and spatial methods can be used jointly to represent individual information on similar preferences by means of groups. In unfolding, clustering the individuals into classes while the classes are represented in a low-dimensional space is a recommendable procedure that makes the results easier to interpret for large data sets. In the context of latent class models, and using simulated annealing, we present a joint classification and unfolding model for preference data that gives better results than a two-step approach in which the groups are determined first and the classes are then represented by unfolding. The probabilistic nature of the model allows statistically grounded decisions about the number of clusters and the dimensionality of the solution. However, when the computational cost is too high and/or some of the model's assumptions cannot be met, an alternative procedure is proposed in a least-squares framework, in which individuals and/or objects are classified by a k-means procedure while, at the same time, the cluster centres are represented by unfolding.



Matheuristics for arc routing problems with profits

SPEAKER: Claudia Archetti

LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 25 October 2013, 12:30
ABSTRACT:

Matheuristics are heuristic solution methods that make use of mathematical programming models in a heuristic framework. The interest in this solution methodology has remarkably increased in the last ten years. They have been applied successfully to many application domains, proving to be competitive with respect to more classical heuristic or metaheuristic schemes.

Among the main application domains, we find the class of routing problems. Matheuristics have been applied to several different routing problems and include a number of different approaches.

We focus on the application of matheuristics to arc routing problems with profits, a recent and challenging class of problems in which customers are associated with a profit and the carrier has to determine the most convenient subset of customers to serve. We will present two applications dealing with a single-vehicle case and a multiple-vehicle case.



THE SPEAKER:

Claudia Archetti is an Assistant Professor of Operations Research at the Università degli Studi di Brescia (Dipartimento di Economia e Management), where she has worked since 2005. She teaches undergraduate, master's and doctoral courses in operations research and logistics. The main areas of her scientific activity are: models and algorithms for vehicle routing problems; mixed-integer programming models for minimizing the sum of inventory and transportation costs in logistics networks; exact and heuristic methods for supply chain management; and reoptimization of combinatorial optimization problems.

She has carried out her scientific activity in collaboration with Italian and foreign colleagues, and has published papers jointly with some of the leading researchers in the field internationally. She is the author of more than 35 publications in international journals and is currently an associate editor of the journal Networks.

More information about the speaker is available here.

 



Methods for Descriptive Factor Analysis of Multivariate Geostatistical Data: a Case-study Comparison

SPEAKER: Samuel D. Oman

LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Monday, 7 October 2013, 12:30
ABSTRACT: We consider the framework in which vectors of variables are observed at different points in a region. Such data are typically characterized by point-wise correlations among the variables, as well as spatial autocorrelation and cross-correlation. To help understand and model this dependence structure, one may define factors which operate at different spatial scales. We consider four such factor-analytic techniques: the Linear Model of Coregionalization and three recently proposed alternatives. We apply them to the same set of data, concentrations of major ions in water samples taken from springs in a carbonate mountain aquifer. The methods give quite different results for the spring chemistry, with those of the Linear Model of Coregionalization being much more interpretable. We suggest some possible explanations for this, which may be relevant in other applications as well.

 

KEY WORDS: Carbonate dissolution; Linear model of Coregionalization; Major ions; Principal component analysis; Sea water; Spatial correlation; Springs.



A location-allocation-local-search algorithm for handling connectivity and multiple balancing constraints in territory design

SPEAKER: Roger Z. Ríos

LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 4 October 2013, 12:30
ABSTRACT:

Territory design problems consist of grouping small geographic units into larger clusters, called territories or districts, so that the territories satisfy certain relevant planning criteria. These requirements may be economically motivated (sales potential, workload, number of customers) or demographically motivated (number of inhabitants, voting population). In addition, spatial constraints such as compactness or territorial connectivity are frequently imposed. Territory design problems arise in diverse applications such as the design of electoral districts, school districts, sales territories, and recycling schemes for household appliances, to mention a few.

 

In particular, this talk presents a territory design problem motivated by a practical application in the distribution of bottled beverages. The problem consists of finding a given number of territories that satisfy compactness and connectivity criteria and multiple balancing requirements with respect to the number of customers and product demand. We will discuss the description and modelling of the problem as a mixed-integer linear program, and describe in detail a novel solution methodology based on location-allocation-local search that successfully handles the presence of the multiple balancing and connectivity constraints. The talk includes an exhaustive numerical evaluation of the proposed method and its components on a variety of instances, demonstrating its excellent performance.
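The multiple balancing requirement alone can be illustrated with a toy check of our own (nothing like the talk's exact model): a candidate partition passes if, for every balancing attribute, each territory's total stays within a relative tolerance of the average territory total.

```python
def is_balanced(territories, attributes, tol=0.05):
    """`territories` is a list of lists of basic-unit ids; `attributes`
    maps an attribute name (e.g. 'customers', 'demand') to a dict of
    per-unit values.  Returns True if every territory's total, for every
    attribute, lies within `tol` of the mean territory total."""
    for values in attributes.values():
        totals = [sum(values[u] for u in t) for t in territories]
        mean = sum(totals) / len(totals)
        if any(abs(x - mean) > tol * mean for x in totals):
            return False
    return True
```

For example, splitting four units with equal customer counts into two territories of two units each passes the check, whereas a 1-versus-3 split fails it; the actual algorithm must enforce this jointly with connectivity, which is what makes the problem hard.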

THE SPEAKER:

Roger currently works as a full-time professor-researcher in the Graduate Program in Systems Engineering at the Universidad Autónoma de Nuevo León (UANL), Mexico. He received his PhD and MSc degrees in Operations Research and Industrial Engineering from the University of Texas at Austin (USA), and his bachelor's degree in Mathematics from UANL. He has held visiting researcher positions at the University of Texas (USA), UPC (Spain), the University of Colorado (USA) and the University of Houston (USA). His research interests cover the development of heuristics and exact techniques for the efficient solution of decision-making problems, particularly discrete optimization problems arising from diverse industrial applications. In recent years he has addressed applications in territory planning for logistics, location decisions in forest management, efficient operation of natural gas transportation systems, and sequencing problems in manufacturing. He is a member of the Mexican Academy of Sciences and holds the distinction of National Researcher Level 2 in Mexico's Sistema Nacional de Investigadores. More on his scientific and technological work can be found here.

 



Contributions of statistics to evaluating the effectiveness and impact of public health interventions

SPEAKER: Àngela Domínguez

LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 24 May 2013, 12:30
ABSTRACT: Public health is defined as the set of organized efforts of society to protect, promote and restore the health of the population. Among its functions are evaluating health promotion strategies and developing new methodologies for evaluating and researching the health of the population.

The evaluation of preventive interventions can be carried out with measures of effectiveness and measures of impact. Effectiveness measures evaluate the effect an intervention produces in the people who receive it under real-world conditions, which differ from the ideal conditions under which randomized trials are conducted. Effectiveness is usually measured through observational case-control studies (which yield the odds ratio, OR) or cohort studies (which yield the relative risk, RR).

The effectiveness of a preventive intervention (a vaccine, for example) is 1-OR (for a case-control study) or 1-RR (for a cohort study).

The prevented fraction in the exposed is a measure of effectiveness. It is the share of disease that the intervention has prevented among the exposed, and it can be computed from the RR as 1-RR.

The impact of an intervention refers to the beneficial effect a preventive intervention produces in the population as a whole, not only in the people who received it. The prevented fraction in the population and the preventable fraction are measures of impact.

The prevented fraction in the population can be computed from the RR and the prevalence of exposure in the population as: [prevalence x (1-RR)].

The preventable fraction is the proportion of disease that could be prevented if everyone who has not received the preventive intervention were to receive it. It can be estimated from the prevalence of exposure to the preventive intervention in the population and the RR as: [(1-prevalence) x (1-RR)] / [(1+prevalence) x (1-RR)].
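These measures reduce to one-line formulas; the following Python helpers (function names are ours) transcribe them exactly as stated in the abstract:

```python
def effectiveness_cohort(rr):
    """Effectiveness of a preventive intervention from a cohort study,
    1 - RR; for a case-control study use 1 - OR instead."""
    return 1 - rr

def prevented_fraction_population(prevalence, rr):
    """Prevented fraction in the population: prevalence * (1 - RR)."""
    return prevalence * (1 - rr)

def preventable_fraction(prevalence, rr):
    """Preventable fraction, as given in the abstract:
    [(1 - prevalence) * (1 - RR)] / [(1 + prevalence) * (1 - RR)]."""
    return ((1 - prevalence) * (1 - rr)) / ((1 + prevalence) * (1 - rr))
```

For a vaccine with RR = 0.2 and a coverage (prevalence of exposure) of 50%, effectiveness is 0.8 and the prevented fraction in the population is 0.4.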

Estimating valid RRs or ORs (free of information and selection bias, and controlling for possible confounding variables) requires adequate designs and careful analyses, for which statistics is a fundamental tool.

Building, from the available data on diseases and related variables, models that can predict the best preventive strategy is another objective of public health, one that requires very close collaboration between researchers in public health and researchers in statistics.

 



Time to change disclosing clinical trials
SPEAKER: Rosa Lamarca
LANGUAGE: English
VENUE: FME, Building U, Sala d'Actes, Campus Sud, UPC, Pau Gargallo 5, 08028 Barcelona
DATE: Friday, 8 March 2013, 18:00
ABSTRACT:
Multiple efforts have been made in the publishing field to avoid fraud and disseminate relevant information to allow the scientific community to progress. Transparency in the publishing process started with the guidelines on authorship contribution in the early eighties by the International Committee of Medical Journal Editors and the suggestion by R. J. Simes of creating an international registry of clinical trials. In the clinical trials setting, an increasingly demanding environment exists around transparency, with the obligation to register all clinical trials, except Phase I, on clinicaltrials.gov within 21 days after first patient first visit; once the compound is approved by the FDA, results from subsequent trials must be published within one year of last patient last visit, and within 30 days after approval for previously unpublished trials. There is also an EU clinical trials registry (EudraCT & EU-CTR), and the EU guideline on clinical trial data and transparency is expected to be released for consultation in 2013. More recently, the AllTrials campaign is pursuing the disclosure of all Clinical Study Reports, with the BMJ announcing that clinical trials will be published only if the company is committed to "make the relevant anonymised patient level data available on a reasonable request", and GSK further stating that it will give access to detailed anonymised patient-level data after its previous issue with paroxetine.

THE SPEAKER:
Rosa Lamarca holds a PhD in Health and Life Sciences from Pompeu Fabra University (UPF). She graduated in Statistics at the Polytechnic University of Catalonia, and obtained a Master of Science in Applied Statistics at Sheffield Hallam University. She is currently head of Clinical Statistics at Almirall in Barcelona. She worked as an investigator in the Health Services Research Unit at the Institut Municipal d'Investigació Mèdica in Barcelona from 1995 to 2001. She was a professor in the master's program in Public Health at UPF for 7 years, and an associate professor in ESADE's MBA program for 3 years. She has published several articles in specialized journals and was the Spanish delegate at the European Federation of Statisticians in the Pharmaceutical Industry from 2003 to 2005.




Bad JAMA?
SPEAKER: Stephen Senn
LANGUAGE: English
VENUE: FME, Building U, Sala d'Actes, Campus Sud, UPC, Pau Gargallo 5, 08028 Barcelona
DATE: Friday, 8 March 2013, 18:00
ABSTRACT: "But to be kind, for the sake of completeness, and because industry and researchers are so keen to pass the blame on to academic journals, we can see if the claim is true... Here again the journals seem blameless: 745 manuscripts submitted to the Journal of the American Medical Association (JAMA) were followed up, and there was no difference in acceptance for significant and non-significant findings." Bad Pharma, p. 34.

A central argument in Ben Goldacre's recent book Bad Pharma is that although trials with negative results are less likely to be published than trials with positive results, the medical journals are blameless: they are just as likely to publish either. I show, however, that this is based on a misreading of the literature and would rely, for its truth, on an assumption that is not only implausible but known to be false, namely that authors are just as likely to submit negative as positive studies. I show that a completely different approach to analysing the data has to be used: one which compares accepted papers in terms of quality. When this is done, the studies that have been performed do, in fact, show that there is a bias against negative studies. This explains the apparent inconsistency in results between observational and experimental studies of publication bias.

 

THE SPEAKER: Stephen Senn is a researcher at the Competence Center for Methodology and Statistics in Luxembourg. He has extensive experience in both academia and industry. As a former Professor of Statistics at the University of Glasgow, former Professor of Pharmaceutical and Health Statistics at University College London, statistician with the National Health Service in England and within the Swiss pharmaceutical industry, he is recognized worldwide for his studies in statistical methodology applied to drug development. He has been the recipient of national and international awards, including the 1st George C Challis award for Biostatistics at the University of Florida, and the Bradford Hill Medal of the Royal Statistical Society. He is the author of the monographs Cross-over Trials in Clinical Research (1993, 2002), Statistical Issues in Drug Development (1997, 2007) and Dicing with Death (2003), as well as numerous scientific articles published in internationally recognized, peer-reviewed journals. Professor Senn is a Fellow of the Royal Society of Edinburgh, an honorary life member of Statisticians in the Pharmaceutical Industry (PSI) and the International Society for Clinical Biostatistics (ISCB), and holds an honorary chair in statistics at University College London. For more information on his research click here.




Large Scale Optimization with FAIPA, the Feasible Arc Interior Point Algorithm, for nonlinear optimization
SPEAKER: José Herskovits Norman
LANGUAGE: English
VENUE: Campus Nord, UPC
DATE: Wednesday, 20 February 2013, 12:30
ABSTRACT:
Numerical algorithms for real-life engineering optimization must be robust and capable of solving very large problems with a small number of simulations and sensitivity analyses. In this talk we describe some numerical techniques for solving very large problems with the Feasible Arc Interior Point Algorithm (FAIPA) for nonlinear constrained optimization. These techniques include quasi-Newton formulations that avoid storing the approximation matrix, as well as numerical algorithms for solving FAIPA's internal linear systems efficiently. Numerical results on large test problems and on a structural optimization example show that FAIPA is robust and efficient for large-scale optimization.
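The "quasi-Newton without storing the matrix" idea is, in its generic limited-memory form, the familiar two-loop recursion. The sketch below is our own illustration of that generic idea, not FAIPA's actual scheme: it applies an inverse-Hessian approximation to a gradient using only a few stored correction pairs, never forming a matrix.

```python
def lbfgs_direction(grad, s_hist, y_hist):
    """Apply a limited-memory BFGS inverse-Hessian approximation to
    `grad` using stored correction pairs (s_k, y_k), oldest first,
    via the matrix-free two-loop recursion."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    rho = [1.0 / dot(s, y) for s, y in zip(s_hist, y_hist)]
    q = list(grad)
    alpha = [0.0] * len(s_hist)
    for k in reversed(range(len(s_hist))):      # newest to oldest
        alpha[k] = rho[k] * dot(s_hist[k], q)
        q = [qi - alpha[k] * yi for qi, yi in zip(q, y_hist[k])]
    # initial scaling gamma = s.y / y.y from the most recent pair
    gamma = dot(s_hist[-1], y_hist[-1]) / dot(y_hist[-1], y_hist[-1])
    r = [gamma * qi for qi in q]
    for k in range(len(s_hist)):                # oldest to newest
        beta = rho[k] * dot(y_hist[k], r)
        r = [ri + (alpha[k] - beta) * si for ri, si in zip(r, s_hist[k])]
    return r
```

Only the vectors s_k and y_k are kept, so memory grows linearly with the problem dimension, which is the property the talk exploits for very large problems.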

THE SPEAKER:
Prof. José Herskovits Norman works on the development of numerical methods for optimization and their applications in mechanical engineering, mainly in structural optimization and stress analysis involving variational inequalities. He is the author of a general interior point technique for nonlinear constrained optimization and of a series of iterative algorithms based on this technique. These methods are employed by engineers and researchers. He has also developed several methods for structural optimization covering a wide set of problems concerning discrete structures, such as trusses, beams and plates, as well as shape optimization of shells and solids. Herskovits' interior point algorithms have also proved to be robust and efficient in the stress analysis of solids in contact and in nonlinear limit analysis.
Click here to access his personal web page.



A Bilevel Approach for Optimal Location and Contract Pricing of Distributed Generation in Radial Distribution Systems Using Mixed-Integer Linear Programming
SPEAKER: Marcos J. Rider
LANGUAGE: Spanish
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Tuesday, 22 January 2013, 10:30
ABSTRACT: In this work a novel approach for the optimal location and contract pricing of distributed generation is presented. Such an approach is designed for a market environment in which the distribution company can buy energy either from the wholesale energy market or from the distributed generation units within its network. In this scenario, the location and contract pricing of distributed generation is determined by the interaction between the distribution company and the owner of the distributed generators. These agents have different objective functions.
The distribution company intends to minimize the payments incurred in meeting the expected demand, while the owner of the distributed generation intends to maximize the profits obtained from the energy sold to the distribution company. This two-agent relationship is modeled in a bi-level scheme: the upper-level optimization determines the allocation and contract prices of the distributed generation units, while the lower-level optimization models the reaction of the distribution company. The bi-level programming problem is turned into an equivalent single-level mixed-integer linear optimization problem using duality properties, which is then solved using commercially available software. The results, using a 34-node test distribution system, show the robustness and efficiency of the proposed model compared with other existing models. As regards contract pricing, the proposed approach made it possible to find better solutions than those reported in previous work.
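The leader-follower structure can be illustrated with a deliberately tiny enumeration of our own (nothing like the paper's MILP reformulation): the DG owner quotes a contract price, and the distribution company then reacts by buying from whichever source is cheaper.

```python
def best_contract_price(market_price, dg_cost, demand, dg_capacity,
                        candidate_prices):
    """Toy bilevel game.  Upper level: the DG owner picks, from a grid
    of candidate prices, the one maximizing profit.  Lower level: for
    each quoted price, the distribution company minimizes its payment
    by buying from the cheaper source first.  Returns (price, profit)."""
    best_price, best_profit = None, float("-inf")
    for price in candidate_prices:
        # follower's reaction: use DG only if it is not more expensive
        dg_sold = min(demand, dg_capacity) if price <= market_price else 0
        profit = (price - dg_cost) * dg_sold
        if profit > best_profit:
            best_price, best_profit = price, profit
    return best_price, best_profit
```

With a market price of 10, DG production cost 4, demand 100 and DG capacity 50, the owner's best quote among {6, 8, 10, 12} is 10: quoting above the market price would leave the follower buying everything on the market. The paper replaces this enumeration with an exact single-level MILP obtained via duality.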

THE SPEAKER:
Marcos J. Rider (S’97–M’06) received the B.Sc. (Hons.) and P.E. degrees from the National University of Engineering, Lima, Perú, in 1999 and 2000, respectively; the M.Sc. degree from the Federal University of Maranhão, Maranhão, Brazil, in 2002; and the Ph.D. degree from the University of Campinas, Brazil, in 2006, all in electrical engineering.
Currently he is a Professor in the Electrical Engineering Department at the Universidade Estadual Paulista, Ilha Solteira, Brazil. His areas of research are the development of methodologies for the optimization, planning and control of electrical power systems, and applications of artificial intelligence in power systems.
Click here for further information


