Seminar programme: Year 2018

  • Friday, 14 December 2018. Time: 12:00

Christian Blum, Artificial Intelligence Research Institute, CSIC, Bellaterra, Spain

Construct, Merge, Solve & Adapt: A general algorithm for combinatorial optimization

  • Friday, 14 September 2018. Time: 11:30

KyungMann Kim, Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, USA

Complex Time to Event Outcome: Design and Statistical Inference for the INVESTED Trial

  • Friday, 20 July 2018. Time: 12:30

Richard Arnold, Victoria University of Wellington, New Zealand

Statistics of Ambiguous Rotations

  • Wednesday, 11 July 2018. Time: 12:30

Hongzhe Li, Department of Biostatistics and Epidemiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA.

Methods for High Dimensional Compositional Data Analysis in Microbiome Studies

  • Thursday, 5 July 2018. Time: 12:00

Thomas Jaki, Medical and Pharmaceutical Statistics Research Unit, Department of Mathematics and Statistics, Lancaster University, United Kingdom

Dose-escalation trials without monotonicity assumption: A weighted differential entropy approach

  • Friday, 8 June 2018. Time: 12:30

Julie Rendlová, Institute of Molecular and Translational Medicine, Palacký University, Olomouc, Czech Republic.

Bayesian counterpart to t-tests in compositional analysis of metabolomic data

  • Friday, 18 May 2018. Time: 12:30

Jue Lin-Ye, Laboratorio de Ingeniería Marítima, Universitat Politècnica de Catalunya, Barcelona, Spain.

Non-stationary multivariate characterization of wave-storms in coastal areas: copulas and teleconnection patterns

  • Friday, 27 April 2018. Time: 12:30

Irina Gribkovskaia, Molde University College, Norway

Decision support tools for supply vessel planning in offshore oil and gas logistics

  • Friday, 9 March 2018. Time: 12:30

Ivy Liu, School of Mathematics and Statistics, Victoria University of Wellington, Wellington, New Zealand

Diagnostics for ordinal response models

 



 

Construct, Merge, Solve & Adapt: A general algorithm for combinatorial optimization
SPEAKER: Christian Blum
LANGUAGE: English
VENUE: Building C5, Room C506
DATE: Friday, 14 December 2018. Time: 12:00

ABSTRACT: Construct, Merge, Solve & Adapt (CMSA) is a recent hybrid algorithm for combinatorial optimization. It is based on iteratively solving reduced problem instances with an exact method such as, for example, an ILP (integer linear programming) solver. In this talk we first present CMSA in general terms. We then focus on two recent developments. The first concerns the use of learning to generate the reduced problem instances within CMSA. The second concerns a problem-independent version of CMSA, aimed at solving binary ILPs.
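As a rough illustration of the scheme just described, here is a minimal sketch in Python (not the speaker's implementation; the helpers construct_solution, solve_subinstance and cost are hypothetical placeholders, with solve_subinstance standing in for an exact method such as an ILP solver call):

    def cmsa(instance, iterations=100, n_constructions=10, age_limit=3):
        # Components of the reduced (sub-)instance and their ages.
        subinstance, age, best = set(), {}, None
        for _ in range(iterations):
            # CONSTRUCT solutions probabilistically and MERGE their components
            # into the sub-instance.
            for _ in range(n_constructions):
                for component in construct_solution(instance):
                    if component not in subinstance:
                        subinstance.add(component)
                        age[component] = 0
            # SOLVE the reduced instance with an exact method (e.g. an ILP solver).
            solution = solve_subinstance(instance, subinstance)
            if best is None or cost(solution) < cost(best):
                best = solution
            # ADAPT: age the components; drop those unused for too long.
            for component in list(subinstance):
                age[component] = 0 if component in solution else age[component] + 1
                if age[component] > age_limit:
                    subinstance.remove(component)
                    del age[component]
        return best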

ABOUT THE SPEAKER: Christian Blum has been a research scientist at the Artificial Intelligence Research Institute of CSIC since January 2017. Before that, he was an Ikerbasque Research Professor and held various teaching and research positions at UPC. His research has two strands. On the one hand, he works on swarm intelligence, the artificial intelligence discipline inspired by the collective behaviour of, for example, social insects, flocks of birds and schools of fish. On the other hand, he also works on hybridizing metaheuristics with more classical tools from artificial intelligence and operations research.

Regarding the applications of his research, its interdisciplinary character stands out: optimization and control problems arise in many application areas such as telecommunications, bioinformatics, neuroscience and robotics.
You can find more information about Christian Blum's work on his website.

 




Complex Time to Event Outcome: Design and Statistical Inference for the INVESTED Trial

SPEAKER: KyungMann Kim
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 14 September 2018. Time: 11:30
ABSTRACT:
The INfluenza Vaccine to Effectively Stop cardioThoracic Events and Decompensated heart failure (INVESTED) trial (NCT02787044) was designed to determine which of two formulations of influenza vaccine, the standard dose or an investigational higher dose, is more effective in reducing deaths and heart- or lung-related hospital admissions in patients with a recent history of hospitalization due to heart failure or myocardial infarction. In this presentation, I will describe challenges in the design of, and statistical inference in, randomized controlled trials with complex time-to-event data, using the INVESTED trial as an example, together with opportunities for developing novel statistical methods to address these challenges. Specifically, I will first describe how the INVESTED trial is supported, my role as principal investigator of its data coordinating center, and the original design of the trial and its statistical analysis plan. I will then describe the change in design and statistical analysis plan, which raised many challenging issues, including the revised primary endpoint of the trial and statistical analysis plans involving correlated time-to-event data and non-randomized cohorts arising from vaccination over three influenza seasons under a randomize-once design. I will also highlight statistical analysis issues in vaccine trials and mediation analysis. Slides of this seminar are available here.

ABOUT THE SPEAKER: KyungMann Kim is Professor of Biostatistics and Medical Informatics at the University of Wisconsin-Madison. His research interests concern, among others: sequential methods of statistical analysis; clinical trials methodology; categorical data analysis; survival analysis; repeated measures analysis; applications in cancer research; applications in ophthalmology; and the interface between statistics and computer science. For more information on his research interests, see https://www.biostat.wisc.edu/content/kim-kyungmann



Statistics of Ambiguous Rotations

SPEAKER: Richard Arnold
LANGUAGE: English
VENUE: Building C5, Room D6004, Campus Nord, UPC
DATE: Friday, 20 July 2018. Time: 12:30


ABSTRACT:
Orientations of objects in $\mathbb{R}^p$ with symmetry group $K$ cannot be described unambiguously by elements of the rotation group $SO(p)$, but correspond instead to elements of the quotient space $SO(p)/K$.  Specifications of probability distributions and appropriate statistical methods for such objects have been lacking -- with the notable exceptions of axial objects in the plane and in $\mathbb{R}^3$.

We exploit suitable embeddings of $SO(p)/K$ into spaces of symmetric tensors to provide a systematic and intuitively appealing approach to the statistical analysis of the orientations of such objects.  We firstly consider the case of {\em orthogonal $r$-frames} in $\mathbb{R}^p$, corresponding to sets of $r\leq p$ mutually orthogonal axes in $p$ dimensions, which includes the Watson and Bingham distributions.  Using the same approach we then treat the three dimensional case of $SO(3)/K$, where $K$ is one of the point symmetry groups.
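As an elementary illustration of this embedding idea (a standard special case, not taken from the talk): an axis $\pm x$, with $x$ a unit vector in $\mathbb{R}^p$, is described unambiguously by the symmetric rank-one tensor
\[
  T(\pm x) = x x^{\top}, \qquad T(x) = T(-x), \qquad \operatorname{tr} T = 1,
\]
and the sample mean $\bar{T} = \tfrac{1}{n}\sum_{i=1}^{n} x_i x_i^{\top}$ then provides a location estimate (its leading eigenvector) and a dispersion measure (the spread of its eigenvalues), exactly as in the Watson and Bingham settings mentioned above.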

In both cases the resulting tools include measures of location and dispersion, tests of uniformity, one-sample tests for a preferred orientation and two-sample tests for a difference in orientation.  The methods are illustrated using data from earthquake focal mechanisms (fault plane and slip vector orientations) and crystallographic data with orientation measurements of distinct mineral phases.

ABOUT THE SPEAKER: I started out as an astronomer, studying the structure of galaxies and the orbits of stars within them. I moved into statistics in 1995 and have developed interests in a wide range of statistical applications, including some in my original subject of physics. In recent years I have had research projects in Reliability Theory, Directional Statistics, Statistics in Geophysics, and Cluster Analysis. I have a developing interest in fisheries research. I have worked in institutes in Cambridge and London in the UK, Leiden in the Netherlands and Statistics New Zealand in Wellington. I started work at Victoria in 2001. I enjoy working on applied data analysis problems, especially where a novel approach is required to analyse the data correctly.


Research interests: In my research I often take a Bayesian statistical approach. My work in reliability focusses on the description of correlated failures in multiple component systems - such as cars and computers - where the failure of one component can induce ageing or failure of other components. In geophysics my interests are in the determination of the properties of earthquakes, and then using those characteristics to infer properties of the tectonic stresses in the earth's crust. This work in geophysics has led to research into the statistics of oriented objects: earthquakes, stress axes, and crystal orientations - in particular where there are high degrees of symmetry present. In such situations specialised methods of data analysis are needed. In fisheries I am interested in models of population size, and inference from the limited data that surveys and commercial catch reports provide. For more details on Professor Arnold's research projects, visit his personal homepage at https://www.victoria.ac.nz/sms/about/staff/richard-arnold



Methods for High Dimensional Compositional Data Analysis in Microbiome Studies

SPEAKER: Hongzhe Li
LANGUAGE: English
VENUE: Building B6, Sala d'Actes Manuel Martí, Campus Nord, UPC
DATE: Wednesday, 11 July 2018. Time: 12:30
ABSTRACT:
Human microbiome studies using high-throughput DNA sequencing generate compositional data, since the absolute abundances of microbes are not recoverable from sequence data alone. In compositional data analysis, each sample consists of proportions of various organisms subject to a unit-sum constraint. This simple feature can lead traditional statistical methods, when naively applied, to produce errant results and spurious associations. In addition, microbiome sequence data sets are typically high dimensional, with the number of taxa much greater than the number of samples. These important features require further development of methods for the analysis of high-dimensional compositional data. This talk presents several recent developments in this area, including methods for estimating compositions from sparse count data, a two-sample test for compositional vectors and regression analysis with compositional covariates. Several microbiome studies at Penn are used to illustrate these methods, and several open questions will be discussed.
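To make the unit-sum constraint concrete, here is a small illustrative sketch in Python (my own, not one of the methods presented in the talk): counts are closed to proportions, a pseudocount handles the zeros typical of sparse microbiome data, and the centred log-ratio (clr) transform maps the constrained proportions to unconstrained coordinates.

    import numpy as np

    def clr(counts, pseudocount=0.5):
        # Close each row of counts to proportions (unit-sum constraint),
        # then apply the centred log-ratio transform.
        counts = np.asarray(counts, dtype=float) + pseudocount
        proportions = counts / counts.sum(axis=1, keepdims=True)
        log_p = np.log(proportions)
        return log_p - log_p.mean(axis=1, keepdims=True)

    # Three samples, four taxa: each row of clr(x) sums to (numerically) zero.
    x = np.array([[10, 0, 5, 85], [20, 1, 4, 75], [5, 2, 3, 90]])
    print(clr(x).sum(axis=1))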

ABOUT THE SPEAKER: Dr. Hongzhe Li is a Professor of Biostatistics and Statistics at the Perelman School of Medicine at the University of Pennsylvania (Penn). He is Vice Chair of Integrative Research in the Department of Biostatistics, Epidemiology and Informatics, Chair of the Graduate Program in Biostatistics and Director of the Center of Statistics in Big Data at Penn. Dr. Li has been elected a Fellow of the American Statistical Association (ASA), a Fellow of the Institute of Mathematical Statistics (IMS) and a Fellow of the AAAS. Dr. Li served on the Board of Scientific Counselors of the National Cancer Institute of the NIH and regularly serves on various NIH study sections. He is currently an Associate Editor of Biometrics and Statistica Sinica, and co-Editor-in-Chief of Statistics in Biosciences. He served as Chair of the Section on Statistics in Genomics and Genetics of the ASA. Dr. Li's research has focused on developing powerful statistical and computational methods for the analysis of large-scale genetic, genomic and metagenomic data, and on high-dimensional statistics with applications in genomics. He has published papers in Science, Nature, Nature Genetics, Science Translational Medicine, JASA, JRSS, Biometrika, etc.



Dose-escalation trials without monotonicity assumption: A weighted differential entropy approach

SPEAKER: Thomas Jaki
LANGUAGE: English
VENUE: FME, Building U, Room PC1, Campus Sud, UPC, Pau Gargallo 5, 08028 Barcelona
DATE: Thursday, 5 July 2018. Time: 12:00
ABSTRACT:
Methods for finding the highest dose that has an acceptable risk of toxicity in Phase I dose-escalation clinical trials, assuming a monotonic dose-response relationship, have been studied extensively in recent decades. The assumption of monotonicity is fundamental in these methods. As a result, such designs fail to identify the correct dose when the dose-response relationship is non-monotonic. Moreover, many of these approaches cannot be straightforwardly extended to dose-escalation studies of drug combinations, because ordering the dose combinations is difficult. We propose a dose-escalation method that does not require monotonicity or any pre-specified relationship between dose levels. The method is motivated by the concept of weighted differential entropy. More precisely, we propose to use the difference between the weighted and the standard differential entropy as a criterion for dose escalation. For a given weight function, an asymptotically unbiased and consistent estimator of the proposed dose-escalation criterion is found. For the small sample sizes typical of dose-escalation studies, simulations show that the proposed method is comparable to well-studied and widely used methods under the assumption of monotonicity and outperforms them when this assumption is violated. We subsequently extend the basic method to limit the risk of overdosing patients by imposing a time-varying safety constraint controlling the probability of overdosing. Again, simulations are used to show good performance under many different scenarios.
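For reference, the quantities named in the abstract can be written as follows (a textbook form; the authors' exact normalisation of the weight function $w \ge 0$ may differ):
\[
  H(f) = -\int f(x)\,\log f(x)\,dx, \qquad
  H_w(f) = -\int w(x)\,f(x)\,\log f(x)\,dx,
\]
so that the escalation criterion based on their difference is
\[
  \Delta_w(f) = H_w(f) - H(f) = -\int \bigl(w(x)-1\bigr)\,f(x)\,\log f(x)\,dx .
\]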


ABOUT THE SPEAKER: Thomas Jaki is Professor of Statistics at Lancaster University and director of the Medical and Pharmaceutical Statistics Research Unit (MPS, www.mps-research.com), which has a long-standing tradition in the design and analysis of clinical trials. He is an NIHR Senior Research Fellow, Coordinator of the EU-funded IDEAS network (www.ideas-itn.eu) and a Co-Investigator of the MRC's North-West Hub for Trials Methodology Research.

His methodological research to date has focused on adaptive designs and multiplicity, Bayesian methods and estimation with sparse data. He has worked on estimators for pharmacokinetic parameters, developed adaptive designs - in particular for multi-arm studies - and investigated Bayesian methods for dose-escalation.



Bayesian counterpart to t-tests in compositional analysis of metabolomic data

SPEAKER: Julie Rendlová
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 8 June 2018. Time: 12:30


ABSTRACT:
Currently, both targeted and untargeted metabolomic methods aim to find statistically significant differences between the chemical fingerprints of patients with some disease and those of a control group, and to identify biological markers allowing future prediction of the disease. After the raw measurements are pre-processed and interpreted as intensities of chromatographic peaks, the differences between controls and patients are evaluated by both univariate and multivariate statistical methods. The traditional univariate approach relies on t-tests or their nonparametric version (the Wilcoxon test), and the results from multiple testing are still compared merely by p-values using a so-called volcano plot. This presentation introduces metabolomic data, pre-processing methods and their drawbacks, and proposes a Bayesian counterpart to the widespread univariate analysis that takes into account the compositional character of metabolomes. Since each metabolome is a collection of small-molecule metabolites in a biological material, the relative structure of metabolomic data is of main interest; hence, they should be treated within the framework of the logratio methodology. Therefore, the proper choice, construction and interpretation of orthonormal coordinates will be discussed, as well as the fundamental principles of compositional data analysis. The theoretical background of the contribution is illustrated on the analysis of a data set containing dry blood spots of patients suffering from medium-chain acyl-CoA dehydrogenase deficiency (MCADD), and a small simulation evaluating the stability of the methods in the case of sample loss is added.
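As an illustration of one common choice of orthonormal logratio coordinates (pivot coordinates; my sketch in Python, not necessarily the coordinate system used in the talk):

    import numpy as np

    def pivot_coordinates(x):
        # Map a D-part composition x to D-1 orthonormal (pivot / ilr) coordinates.
        x = np.asarray(x, dtype=float)
        D = x.size
        z = np.empty(D - 1)
        for j in range(D - 1):
            gmean_rest = np.exp(np.log(x[j + 1:]).mean())  # geometric mean of remaining parts
            z[j] = np.sqrt((D - j - 1) / (D - j)) * np.log(x[j] / gmean_rest)
        return z

    # Example: a 4-part composition of relative peak intensities.
    print(pivot_coordinates([0.1, 0.2, 0.3, 0.4]))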

ABOUT THE SPEAKER: Already during my master's degree studies in applied mathematics, I started to focus mainly on statistics, which later led me to continue with a Ph.D. at Palacký University, Olomouc, in the Czech Republic. My research topics are quite diverse, but in all cases I work on methods developed within the compositional data framework. Last year, I joined a metabolomics group at the Institute of Molecular and Translational Medicine, where I work as a data analyst. These days, most issues and subjects from my job are nicely linked to my studies, and I can enjoy the challenges that the specifics of medical data analysis sometimes bring.



Non-stationary multivariate characterization of wave-storms in coastal areas: copulas and teleconnection patterns

SPEAKER: Jue Lin-Ye
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 18 May 2018. Time: 12:30

ABSTRACT:
In the characterization of wave-storm events at sea, engineers tend to use only the significant wave height and the peak period at the peak of the wave storm, considering them independent of each other. This assumption leads to a series of computational errors that make the necessary computer simulations unreliable for operational purposes. Jue Lin-Ye has characterized a total of four wave-storm variables (significant wave height and peak period at the peak of the storm, as well as total duration and energy of the storm) through generalized Pareto distribution functions, while their joint probability density has been characterized through a hierarchical Archimedean copula.
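As a minimal sketch of the first step of such a characterization (my illustration with synthetic data, not the speaker's code): a generalized Pareto distribution is fitted to storm excesses over a high threshold, and the fitted CDF maps each excess to the uniform scale, producing the pseudo-observations to which a copula is then fitted.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    hs = 3.0 * rng.weibull(1.5, size=2000)             # synthetic significant wave heights (m)

    threshold = np.quantile(hs, 0.95)                  # storm threshold (95th percentile)
    excess = hs[hs > threshold] - threshold

    shape, loc, scale = genpareto.fit(excess, floc=0)  # GPD fit to the excesses
    u_hs = genpareto.cdf(excess, shape, loc=loc, scale=scale)

    # u_hs (and analogous transforms of duration, energy and peak period)
    # would be the inputs to the hierarchical Archimedean copula fit.
    print(shape, scale, u_hs[:5])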

The next step has been to take the characterization, which assumes stationarity over the period 1994-2014, to the non-stationary level. Wave projections for the period 1950-2100 have been used, and this non-stationarity, achieved through vector generalized additive models, can provide information related to climate change. What is more, climate teleconnection patterns such as the North Atlantic Oscillation, or the lesser-known East Atlantic pattern and Scandinavian pattern, are used as covariates in the vector generalized additive models to establish their relationship to the threshold of the wave storms, the yearly occurrence, and the parameters of the generalized Pareto distributions. The latter can, in turn, inform about the moving mean and variance of each variable.
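Schematically (the notation is mine, not the authors'), the non-stationary model lets the generalized Pareto parameters vary through smooth functions of the teleconnection indices used as covariates, for example
\[
  \sigma(t) = \exp\!\bigl(\beta_0 + f_1(\mathrm{NAO}_t) + f_2(\mathrm{EA}_t) + f_3(\mathrm{SC}_t)\bigr), \qquad
  \xi(t) = \gamma_0 + g_1(\mathrm{NAO}_t) + g_2(\mathrm{EA}_t) + g_3(\mathrm{SC}_t),
\]
with analogous expressions for the storm threshold and the yearly occurrence rate.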

Two study areas have been considered: the Catalan Coast and the northwestern Black Sea. Especially in the Black Sea, where measurements and projections are scarce, the characterization of a set of wave projections for the same period of 1950-2100, under two Climate Change scenarios, has provided new insight into the evolution of wave-storm events in the area.

ABOUT THE SPEAKER: Jue Lin-Ye is a Spanish civil engineer (2013) who is pursuing her Ph.D. in Civil Engineering at the Universitat Politècnica de Catalunya. She currently works at the Laboratory of Maritime Engineering of the same university, conducting research on multivariate statistics applied to maritime engineering. She has authored 6 conference communications as well as 3 articles. Her breakthrough came when she published her first article, on copulas applied to wave storms, in Coastal Engineering, which ranked 1st among the 14 journals in the Ocean Engineering category in 2017.



Decision support tools for supply vessel planning in offshore oil and gas logistics

SPEAKER: Irina Gribkovskaia
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 27 April 2018. Time: 12:30


ABSTRACT:
The supply vessel planning problem arises in upstream offshore oil and gas logistics, where supply vessels deliver necessary materials and equipment from an onshore supply base to a set of offshore installations. Installations require periodic service, due to their limited deck capacity and the need for a continuous supply of various types of cargo. Vessel schedules should be reliable, since installation downtime is very costly, and at the same time cost-efficient, as supply vessels are the most expensive logistics resource. The objective is to develop a least-cost weekly sailing plan used repetitively over a period of several weeks or months. We introduce several models and solution algorithms developed to handle deterministic variants of the supply vessel planning problem, including emissions reduction through speed optimization. Weather conditions are the main source of uncertainty in supply vessel planning, since sailing times and service durations at offshore installations are weather dependent. Bad weather quite often makes it impossible to deliver all the planned cargo according to the planned schedule, resulting in a reduced service level and added costs for extra vessels. Therefore, planners aim to create vessel schedules that are robust enough against weather uncertainty to avoid frequent schedule disruptions. We present several optimization-simulation tools for the construction of robust, green and cost-efficient supply vessel schedules. We also present the results of several simulation studies on strategic fleet sizing under weather uncertainty and on the evaluation of different operational recourse strategies in response to weather.
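A toy voyage-selection model in the spirit of the deterministic problem described above (my illustration only, written with the PuLP library; the voyages, costs and visit requirements are made up, and real models would also include deck capacity, fleet and scheduling constraints): each candidate voyage has a sailing cost and serves a subset of installations, and each installation must be visited a required number of times per week.

    import pulp

    installations = ["A", "B", "C"]
    visits_required = {"A": 2, "B": 1, "C": 2}
    voyages = {  # voyage id -> (weekly cost, installations served)
        "v1": (100, {"A", "B"}),
        "v2": (120, {"A", "C"}),
        "v3": (90, {"B", "C"}),
        "v4": (60, {"C"}),
    }

    model = pulp.LpProblem("weekly_sailing_plan", pulp.LpMinimize)
    x = {v: pulp.LpVariable(f"use_{v}", lowBound=0, cat="Integer") for v in voyages}

    # Minimize the total sailing cost of the selected voyages.
    model += pulp.lpSum(cost * x[v] for v, (cost, _) in voyages.items())

    # Each installation receives its required number of weekly visits.
    for i in installations:
        model += pulp.lpSum(x[v] for v, (_, served) in voyages.items() if i in served) >= visits_required[i]

    model.solve(pulp.PULP_CBC_CMD(msg=False))
    print({v: int(x[v].value()) for v in voyages}, pulp.value(model.objective))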

ABOUT THE SPEAKER:

Irina Gribkovskaia has been a professor in quantitative logistics at Molde University College-Specialized University in Logistics (MUC), Norway, since 1999; for the previous 20 years she worked at the Department of Optimization Methods and Optimal Control of the Belarusian State University. Her current research interests include the development of optimization and simulation models and algorithms for solving vehicle routing problems with applications in offshore oil and gas logistics. To date she has supervised 7 PhD theses, and she is the author of over 30 articles and book chapters on vehicle routing published in EJOR, JORS, Networks, C&OR, ORL, TRD, TRC, Omega, INFOR, IJPDLM, etc. She teaches courses on Mathematical Modelling, Vehicle Routing and Network Logistics for the Master programmes in Logistics Analytics and Petroleum Logistics at MUC, and gives courses on Offshore Petroleum Logistics at MUC, at many other universities, and for oil and gas companies in Norway and abroad. She coordinates a joint double-degree Master in Petroleum Logistics with the Russian University of Oil and Gas, and coordinates research-based academic collaboration projects in Logistics Analytics, Arctic Logistics and Oil, Gas and Renewable Energy Logistics with the largest universities in Russia, Belarus and Brazil. She has established close contacts within the oil and gas industry in Norway, Brazil and Russia, also giving her MSc and PhD students the opportunity to benefit from this network. Students carry out their research on up-to-date problems within the industry and in close cooperation with professionals from oil and gas companies.

More information about Irina Gribkovskaia is available on her website.



Diagnostics for ordinal response models

SPEAKER: Ivy Liu
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC
DATE: Friday, 9 March 2018. Time: 12:30


ABSTRACT:
This talk has two parts. The first part provides some graphical and numerical methods for checking the adequacy of the proportional odds regression model using cumulative residuals. The methods focus on evaluating functional misspecification of specific covariate effects. An example is given using data from the Normative Aging Study, which examines the effect of two markers of oxidative stress in men aged 48-94 years, white blood cell count and C-reactive protein, on FBG measurement (recorded in three clinically defined ordinal categories).
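For reference (this is the standard formulation, not something specific to the talk), the proportional odds model for an ordinal response $Y$ with categories $1,\dots,J$ and covariate vector $x$ is
\[
  \operatorname{logit} P(Y \le j \mid x) = \alpha_j - \beta^{\top} x, \qquad j = 1,\dots,J-1,
\]
with a single slope vector $\beta$ shared by all $J-1$ cumulative logits; the cumulative residuals mentioned above compare observed and fitted cumulative category indicators, accumulated over the values of a chosen covariate.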

The second part presents a new goodness-of-fit test for the ordered stereotype model used for an ordinal response variable. The proposed test is based on the well-known Hosmer-Lemeshow test and its version for the proportional odds regression model. One of the main advantages of the ordered stereotype model is that it allows us to determine a new, possibly uneven spacing of the ordinal response categories, dictated by the data. The proposed test makes use of this new adjusted spacing to partition the data. A simulation study shows good performance of the proposed test under a variety of scenarios.
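For reference, the ordered stereotype model has the standard form
\[
  \log \frac{P(Y = j \mid x)}{P(Y = 1 \mid x)} = \alpha_j + \phi_j\, \beta^{\top} x, \qquad 0 = \phi_1 \le \phi_2 \le \dots \le \phi_J = 1,
\]
and the fitted scores $\phi_j$ provide the data-driven, possibly uneven spacing of the response categories that the proposed test uses to partition the observations.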


ABOUT THE SPEAKER: Ivy Liu received a PhD in Statistics from the University of Florida, Gainesville, Florida, USA. In 2000, she joined the Statistics and Operations Research group as a lecturer. She is now an associate professor in the School of Mathematics and Statistics at Victoria University of Wellington, New Zealand. Her main research area is categorical data analysis, including ordinal response data analysis, longitudinal data analysis, repeated measurements, and their applications in medical science. For more details on her research projects, visit her personal homepage at http://homepages.ecs.vuw.ac.nz/~iliu/