Interns 2011

Here you can read about projects completed by the 2011 interns. A weekly blog written by the 2011 interns is also available.

Lawrence Bardwell   
Lancaster University, BSc Mathematics (2009-2012)
Supervisor: Matt Sperrin  

Breast Cancer Screening

Breast density is a substantial risk factor for breast cancer. It can be estimated from mammograms, which are taken regularly for middle-aged women. A breast density reading can then be used to produce individualised monitoring for women (e.g. screening women with high breast density more frequently). However, breast density is estimated subjectively by radiologists. It is therefore of interest to calibrate the breast density readings, so that each radiologist's scores are on the same scale, and to assess the consistency of each radiologist. Data are available on the readings made by radiologists: we can attempt to exploit the fact that each mammogram is read twice by each radiologist, and that each mammogram is read by two radiologists.
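As a hedged illustration of the paired-reading design (not the project's actual method — the reader biases, spreads and sample size below are invented), simulated data show how repeat reads of the same image gauge a reader's consistency, while same-image comparisons between readers gauge their relative bias:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
true_density = rng.uniform(10.0, 80.0, n)   # hypothetical true % densities
bias = {"A": 5.0, "B": -3.0}                # assumed systematic reader offsets
sd = {"A": 2.0, "B": 4.0}                   # assumed within-reader spreads

# Each mammogram is read twice by each of two radiologists.
reads = {r: true_density[:, None] + bias[r] + rng.normal(0.0, sd[r], (n, 2))
         for r in ("A", "B")}

# Repeat reads of the same image gauge each reader's consistency ...
consistency = {r: np.std(reads[r][:, 0] - reads[r][:, 1]) / np.sqrt(2)
               for r in reads}

# ... while A-vs-B differences on the same image gauge their relative bias,
# allowing B's scores to be shifted onto A's scale.
rel_bias = np.mean(reads["A"].mean(axis=1) - reads["B"].mean(axis=1))
calibrated_B = reads["B"].mean(axis=1) + rel_bias
```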

View Lawrence's presentation and poster.

Elizabeth Buckingham-Jeffery   
University of Warwick, MMath Mathematics (2008-2012)
Supervisor: Konstantinos Kaparis

New Penalty Methods for Bilevel Optimisation
Bilevel problems appear in areas such as economics, engineering, medicine and ecology. These are optimisation problems which include, as part of their constraints, a second optimisation problem. The upper-level (or leader's) problem corresponds to our aim to optimise a certain function, but the notion of optimality must take into account the response to the upper-level decisions. This response is represented by the lower-level (or follower's) problem. This project concerns the linear case of bilevel programs.
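A minimal sketch of the bilevel structure on a toy problem (illustrative only — it enumerates the leader's decisions rather than applying the penalty methods the project develops): the leader's objective is evaluated at the follower's optimal response.

```python
import numpy as np

def follower(x):
    # Lower-level LP: min_y y  subject to  y >= x - 2 and y >= 2 - x.
    # For this toy problem the optimum is available in closed form.
    return max(x - 2.0, 2.0 - x)

# Upper level: the leader minimises x + y*(x) over x in [0, 6],
# where y*(x) is the follower's optimal response to the leader's choice.
xs = np.linspace(0.0, 6.0, 601)
vals = [x + follower(x) for x in xs]
best = min(vals)   # optimum 2, attained for any x in [0, 2]
```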

View Liz's presentation and poster.

David Ewing
University of St Andrews, MMath Mathematics and Statistics (2008-2013)
Supervisors: Dennis Prangle and Paul Fearnhead  

The ABC of model choice
While Approximate Bayesian Computation (ABC) is now well established for estimating parameters, its use for model choice is still in its infancy. Recent papers have disagreed about whether ABC can be used for model choice and, if so, how it should be implemented. This project looks at some simple applications to see whether ABC can give reliable inferences about the underlying statistical model and, if so, how to implement ABC so as to infer the model as accurately as possible.
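A rough sketch of rejection-ABC model choice on a toy problem (the candidate models, priors, summary statistic and tolerance below are all illustrative assumptions): models and parameters are simulated from their priors, and posterior model probabilities are approximated by acceptance proportions.

```python
import numpy as np

rng = np.random.default_rng(1)

obs = rng.poisson(3.0, 100)     # "observed" data, generated from a Poisson model
s_obs = obs.mean()              # summary statistic: the sample mean

N, eps = 20000, 0.1
hits = {1: 0, 2: 0}
for _ in range(N):
    m = int(rng.integers(1, 3))          # model prior: 1 = Poisson, 2 = geometric
    if m == 1:
        lam = rng.uniform(0.0, 10.0)     # illustrative parameter prior
        sim = rng.poisson(lam, 100)
    else:
        p = rng.uniform(0.05, 1.0)
        sim = rng.geometric(p, 100) - 1  # shifted to support {0, 1, ...}
    if abs(sim.mean() - s_obs) < eps:    # accept if summaries are close
        hits[m] += 1

post = {m: hits[m] / (hits[1] + hits[2]) for m in hits}
```

How reliable such approximations are depends heavily on the choice of summary statistic, which is precisely the point of contention in the literature the project examines.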

View Dave's presentation and poster.

Thomas Facer   
University of Edinburgh, BSc Mathematics (2008-2012)
Supervisor: Arne Strauss

Choice Modelling with Links to Optimisation and Compressed Sensing
In many business applications, frequent decisions need to be made that depend on the choice behaviour of customers. For example, e-retailers must decide on the assortment of results to display in response to a customer query; airlines or hotels need to decide on the available booking classes to display in response to a customer request. Similar situations arise for many other firms. A commonly used approach to choice modelling is to identify product attributes that influence the customer's decision, and to select and calibrate a structural model based on these attributes that fits the observed data. Recently, an intriguing way was proposed to learn a choice model from data using concepts from Revenue Management, Inventory Optimisation and Compressed Sensing. This project gives an insight into these respective fields whilst working on a topic that is currently at the forefront of research and has wide applicability.
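As an illustrative aside (the attribute values and taste weights below are invented), the attribute-based structural approach often amounts to a multinomial logit model: utilities are linear in product attributes, and choice probabilities follow by a softmax over the displayed options.

```python
import numpy as np

# Hypothetical attributes (price, quality score) for three displayed options.
X = np.array([[100.0, 3.0],
              [ 80.0, 2.0],
              [120.0, 4.5]])
beta = np.array([-0.02, 0.8])   # assumed taste weights: price hurts, quality helps

u = X @ beta                     # deterministic utilities
p = np.exp(u - u.max())
p /= p.sum()                     # multinomial logit choice probabilities
```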

View Tom's presentation and poster.

Liam Fielder
Lancaster University, MSci Mathematics with Statistics (with a year abroad: Australia) (2008-2012)
Supervisor: Robert Fildes

Forecasting using time series methods
One of the most important applications of statistics is time series forecasting. The key application area is forecasting demand (for a product or service). This project gives an introduction to the area of business forecasting using a newly written textbook. It includes some software testing (and development, if appropriate) as well as the evaluation of different methods on test problems.
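A minimal sketch of one classical demand-forecasting method, simple exponential smoothing, evaluated by mean absolute error (the demand series and smoothing constant below are illustrative assumptions):

```python
def ses(series, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = [series[0]]                       # initialise with the first observation
    for y in series[:-1]:
        f.append(alpha * y + (1 - alpha) * f[-1])
    return f

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]  # made-up demand
fcst = ses(demand, alpha=0.3)
mae = sum(abs(y - f) for y, f in zip(demand, fcst)) / len(demand)
```

Evaluating several methods on held-out test problems, as the project describes, amounts to comparing error measures such as this across methods.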

View Liam's presentation and poster.

George Foulds
Lancaster University, MPhys Physics First Class (2005-2010)
Supervisor: Jonathan Tawn

Portfolio Optimisation
The aim of any investor is to maximise their return. The return must be maximised for a given amount of risk or, equivalently, the risk must be minimised for a given expected return. A mixture of analytical and simulation-based methods will be used to derive the properties of a portfolio and, consequently, the weight of investment given to each individual asset.
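For a hedged two-asset illustration (the expected returns and covariance matrix below are invented numbers), the minimum-variance portfolio has weights proportional to the inverse covariance matrix applied to a vector of ones:

```python
import numpy as np

mu = np.array([0.08, 0.12])           # invented expected asset returns
cov = np.array([[0.04, 0.006],
                [0.006, 0.09]])       # invented covariance matrix

# Minimum-variance portfolio: weights proportional to inv(cov) @ 1,
# normalised so that the weights sum to one.
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()

port_ret = w @ mu                     # expected portfolio return
port_var = w @ cov @ w                # portfolio variance (risk)
```

Diversification shows up directly: the portfolio variance is below that of either asset held alone.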

View George's presentation.

Kaylea Haynes   
Heriot-Watt University, Edinburgh, BSc Mathematics and Statistics (2008-2012)
Supervisor: Matt Nunes

Compressed sensing methods for problems in statistics
Compressed sensing (CS) has recently emerged as an important area of scientific research for efficient signal sensing and compression. The main idea behind CS is that certain signals can be reconstructed entirely, using numerical optimisation algorithms, from a relatively small number of "well-chosen" signal samples. This project is exploratory in nature and it provides the opportunity to learn about and research the area of compressed sensing, focussing on the role of CS in statistical applications for particular signals of interest.
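A sketch of the CS idea using orthogonal matching pursuit, one standard recovery algorithm (the signal length, sparsity and random measurement matrix below are illustrative assumptions, not tied to the project's signals of interest):

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 100, 30, 3            # signal length, number of samples, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)  # k-sparse signal

A = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                       # m << n linear measurements

# Orthogonal matching pursuit: greedily select the column most correlated
# with the residual, then refit all selected coefficients by least squares.
support, resid = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef            # reconstructed signal
```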

View Kaylea's presentation and poster.

Clive Newstead    
University of Cambridge, BA Mathematics (2009-2012)
Supervisors: Giorgos Sermaidis and Paul Fearnhead  

Parametric inference for missing data problems
A typical complication in parametric inference for missing data problems is the intractability of the likelihood. A well established approach to maximum likelihood estimation is the simulated likelihood, where estimation is based on the optimisation of an unbiased Monte Carlo estimate of the likelihood. An important drawback, however, is that parameter consistency is achieved only when the Monte Carlo effort increases as a function of the data sample size, thus leading to computationally expensive algorithms. The aim of this project is to tackle this problem by constructing unbiased estimators of the log-likelihood, in which case consistency can be achieved even for fixed Monte Carlo size. The project involves standard techniques for Monte Carlo simulation and unbiased integral estimation and programming in R.   
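The core difficulty can be seen in a toy latent-variable model (chosen here purely for illustration): a plain Monte Carlo average is unbiased for the likelihood itself, but by Jensen's inequality its logarithm is biased low, and the bias only vanishes as the Monte Carlo size grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy missing-data likelihood: y | z ~ N(z, 1) with latent z ~ N(0, 1),
# so L(y) = E_z[p(y | z)] and, marginally, y ~ N(0, 2).
y = 1.0
true_L = np.exp(-0.25 * y ** 2) / np.sqrt(4 * np.pi)

M, reps = 5, 20000                       # small Monte Carlo size, many replicates
z = rng.normal(0.0, 1.0, (reps, M))
p_y_given_z = np.exp(-0.5 * (y - z) ** 2) / np.sqrt(2 * np.pi)
L_hat = p_y_given_z.mean(axis=1)         # unbiased estimates of L(y)

mean_L = L_hat.mean()                    # close to true_L: unbiased
mean_logL = np.log(L_hat).mean()         # below log(true_L): Jensen bias
```

Constructing estimators that are unbiased on the log scale, as the project aims to do, would remove exactly this gap for fixed Monte Carlo size.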

View Clive's presentation and poster.

Ragnhild Noven
Imperial College London, MSci Mathematics (2008-2012)
Supervisors: Karolina Krzemieniewska and Matt Nunes

Analysing the structure of (multivariate) time series
Time series that are observed in practice are often highly complex in nature, for example, accelerometry signals arising from human movement experiments. The underlying behaviour of these signals is sometimes hidden or difficult to detect in the first instance. This project focuses on applied data analysis for complex time series and using statistical techniques to investigate changes in the underlying structure of time series. The project involves analysing real-world data arising from investigative health studies conducted by external collaborators.    
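As a hedged sketch of detecting a change in underlying structure (the simulated signal and the single-changepoint CUSUM-type statistic below are illustrative, not the project's health-study data or methods):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated signal whose mean shifts from 0 to 1.5 at time 150.
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(1.5, 1.0, 100)])

# Single-changepoint search: maximise the standardised contrast between
# the means of the two segments either side of each candidate split.
n = len(x)
best_t, best_stat = None, -np.inf
for t in range(10, n - 10):
    stat = abs(x[:t].mean() - x[t:].mean()) * np.sqrt(t * (n - t) / n)
    if stat > best_stat:
        best_t, best_stat = t, stat
```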

View Ragnhild's presentation and poster.

Robert Stainforth    
Durham University, MSci Mathematics and Physics (2008-2012)
Supervisor: Stephan Onggo  

Stochastic actor-based models for network dynamics
A stochastic actor-based model is a model for network dynamics that can represent a wide variety of influences on network change, and allow us to estimate parameters expressing such influences, and test corresponding hypotheses. The nodes in the network represent social actors, and the collection of ties represents a social relation. The project involves reading and summarising the relevant research literature on stochastic actor-based models, learning how to use RSiena, preparing a set of data, and applying the technique to the data.       
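A loose sketch of the actor-based idea (the effects and parameter values are invented, and this simplifies the model RSiena actually fits, e.g. it omits the "no change" option): actors repeatedly get opportunities to toggle one outgoing tie, choosing among alternatives with logit probabilities derived from an objective with out-degree and reciprocity effects.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 10
net = np.zeros((n, n), dtype=int)        # directed adjacency matrix, no self-ties

def objective(net, i, b_out=-1.5, b_recip=2.0):
    # Actor i's evaluation of the network: out-degree and reciprocity effects.
    out_degree = net[i].sum()
    reciprocated = int(np.sum(net[i] * net[:, i]))
    return b_out * out_degree + b_recip * reciprocated

# Micro-steps: a random actor may toggle one outgoing tie, choosing among
# the alternatives with multinomial logit probabilities.
for _ in range(2000):
    i = int(rng.integers(n))
    options = [j for j in range(n) if j != i]
    scores = []
    for j in options:
        net[i, j] ^= 1                   # tentatively toggle the tie i -> j
        scores.append(objective(net, i))
        net[i, j] ^= 1                   # undo the toggle
    p = np.exp(np.array(scores) - max(scores))
    p /= p.sum()
    net[i, options[rng.choice(len(options), p=p)]] ^= 1
```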

View Rob's presentation and poster.

Ivar Struijker Boudier    
University of Glasgow, BSc Statistics (2008-2012)
Supervisor: Ioannis Papastathopoulos

Exploring a new class of probability models for tail estimation in extreme value modelling
Statistical modelling of extreme values plays an important role in understanding the behaviour of unusual events such as extreme weather conditions, earthquakes and financial crashes. The most common approach to the modelling of extreme values is to fit an appropriate probability distribution to the tail of the data and extrapolate it to levels above which no data are observed. The appropriate class of tail distributions is the generalised Pareto distribution, which contains the exponential distribution as a special case. However, fits to finite samples are not always adequate and more flexible models might be appropriate. The project explores a new class of probability models that incorporates existing models as special cases. The project involves exposure to the theory of extremes, simulation studies for the applicability of the new models and the statistical analysis of a medical dataset.
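As a brief illustration of the standard approach that the new models generalise (the data and threshold below are simulated assumptions), exceedances over a high threshold can be fitted with a generalised Pareto distribution, here via SciPy:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)

data = rng.exponential(2.0, 2000)   # simulated data with an exponential tail
u = np.quantile(data, 0.9)          # high threshold
exc = data[data > u] - u            # threshold exceedances

# Fit the generalised Pareto distribution to the exceedances, with the
# location fixed at zero; a shape near 0 recovers the exponential case.
shape, loc, scale = genpareto.fit(exc, floc=0)
```

The fitted tail can then be extrapolated beyond the largest observation, which is exactly where finite-sample adequacy, and hence the appeal of more flexible models, becomes the issue.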

View Ivar's presentation and poster.

Lisa Turner   
Durham University, MMath Mathematics (2008-2012)
Supervisors: Yifei Zhao and Stein W. Wallace

Facility layout
Facility layout is, at its simplest, about where to place different machines on a production floor in situations where conveyor belts cannot be used because the different products do not all visit all the machines and, even if they did, not necessarily in the same order. Transporting products from machine to machine can therefore be complicated if the machines are far apart; in fact, it can result in total chaos. The ultimate goal is to place machines close to each other if products are likely to need transporting between them. The problem we study is simply: how should the machines be placed on the production floor?
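A minimal sketch of the underlying combinatorial problem, which takes the form of a quadratic assignment problem (the flow and distance matrices below are invented): each assignment of machines to floor positions is scored by total flow times distance, and the small instance is solved exhaustively.

```python
from itertools import permutations

# Invented flows between 4 machines and distances between 4 positions in a line.
flow = [[0, 8, 1, 0],
        [8, 0, 2, 1],
        [1, 2, 0, 6],
        [0, 1, 6, 0]]
dist = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]

def cost(perm):
    # perm[i] is the position of machine i; cost totals flow x distance.
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(4) for j in range(4))

best = min(permutations(range(4)), key=cost)  # exhaustive search over 4! layouts
best_cost = cost(best)
```

Exhaustive search is only feasible for tiny instances; realistic floors require heuristics or integer-programming formulations, which is what makes the problem hard.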

View Lisa's presentation and poster.
