If you are interested in PhD opportunities with the Optimisation Research Group, refer to the list below to select a potential supervisor and the project area(s) they offer. Feel free to contact them for more information, but please include:
- your area(s) of interest for a PhD
- your CV listing any degrees, awards, prior publications, and work experience
- a copy of your academic results for previous coursework
Supervisor(s): Maria Garcia de la Banda
Title: Debugging and Visualization of Optimization

There is a tremendous lack of integrated development environments for optimization. As a result, modelers and programmers receive little or no help to debug their models or programs, to analyze performance, and to understand the behavior of their applications and the consequences of their modeling choices. This severely restricts access to optimization, since optimization systems tend to have rather complex control and data flows, due to non-deterministic search, constraint propagation, randomization, and sophisticated building blocks such as cuts, nogoods, and global constraints, whose behavior may be hard to understand.

We need to remedy this situation by building a sophisticated development environment for optimization systems. A key aspect of the research is to communicate some of the solver behavior in terms that are understandable to modelers, i.e., in terms of the original model. Another important feature will be a set of tools to “debug” the correctness of models (e.g., by identifying sets of infeasible constraints), to explore the performance of a given optimization technique (e.g., by capturing how the search space is explored and what is performed at every node), to compare the performance of several solving techniques (e.g., by comparing profiling data for each technique on an array of benchmarks), and to automatically tune an optimization model. Developing these tools in a scalable, meaningful and effective way is very challenging, but even modest progress in this direction may make a significant difference to practice in the field.
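The infeasible-constraint tooling mentioned above has a classic core: extracting a minimal unsatisfiable subset (MUS) of constraints, so the modeler sees only the constraints that actually conflict. Below is a minimal sketch of the standard deletion-based algorithm in Python; the `is_satisfiable` oracle is a hypothetical stand-in for a real solver.

```python
def minimal_infeasible_subset(constraints, is_satisfiable):
    """Deletion-based extraction of a minimal unsatisfiable subset (MUS).

    `constraints` is an infeasible collection of constraint objects;
    `is_satisfiable` is a solver oracle (hypothetical here) returning
    True iff the given constraints admit a solution.
    """
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not is_satisfiable(rest):
            core = rest  # still infeasible without c, so c is not needed
    return core          # removing any remaining constraint restores feasibility

# Toy oracle: a constraint is an interval (lo, hi) bounding one variable.
sat = lambda cs: (not cs) or max(lo for lo, _ in cs) <= min(hi for _, hi in cs)
print(minimal_infeasible_subset([(0, 5), (3, 9), (7, 10)], sat))
# -> [(0, 5), (7, 10)]: the two bounds that actually conflict
```

Each oracle call is a full solver run, so practical MUS extraction is engineered around clever reuse of solver state; this sketch only shows the logic.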
Supervisor(s): Peter Stuckey, Thibaut Feydy, Andreas Schutt, Geoffrey Chu
Title: Propagation, Explanation, Nogoods, Search
University to apply to: University of Melbourne

During constraint solving, if we keep track of the reasons for each inference, we can determine why search decisions went wrong and record those reasons as nogoods, with the potential to exponentially reduce search. But building effective algorithms for explaining inference, in a manner that is as general and reusable as possible, is a challenging task. Sometimes it is advantageous to introduce intermediate literals to explain propagation, since these literals can create more reusable nogoods and, again, exponentially reduce search. The questions of how we should best explain propagators, and how this interacts with search, require much more exploration.
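The key idea is that a failure explained by only the relevant assignments prunes every other branch that repeats them. The sketch below is an illustration in Python (the constraint representation is invented for this example): a tiny backtracking search where each failure is explained by just the variables in the failing constraint's scope, so the recorded nogood also prunes subtrees that differ only in irrelevant variables.

```python
def solve(domains, constraints):
    """Chronological backtracking with explanation-based nogood recording.

    `constraints` is a list of (scope, ok) pairs: `scope` names the
    variables involved, `ok` tests their values. When a constraint fails,
    only its scope's assignments are recorded as a nogood. (A sketch:
    real learning solvers derive nogoods from whole implication graphs.)
    """
    order = list(domains)
    nogoods = []  # each nogood is a frozenset of (var, value) pairs

    def explain_failure(assignment):
        for scope, ok in constraints:
            if all(v in assignment for v in scope):
                if not ok(*(assignment[v] for v in scope)):
                    return frozenset((v, assignment[v]) for v in scope)
        return None

    def search(i, assignment):
        if i == len(order):
            return dict(assignment)
        var = order[i]
        for val in domains[var]:
            assignment[var] = val
            items = set(assignment.items())
            if any(ng <= items for ng in nogoods):
                continue                  # pruned by a recorded nogood
            reason = explain_failure(assignment)
            if reason is not None:
                nogoods.append(reason)    # learn the explanation, backtrack
                continue
            result = search(i + 1, assignment)
            if result is not None:
                return result
        assignment.pop(var, None)
        return None

    return search(0, {})

# x0 + x2 == 4 never mentions x1, so the nogoods learned while x1 = 1
# prune the whole x1 = 2 subtree without re-running the constraint.
doms = {"x0": [1, 2], "x1": [1, 2], "x2": [1, 2]}
print(solve(doms, [(("x0", "x2"), lambda a, c: a + c == 4)]))
# -> {'x0': 2, 'x1': 1, 'x2': 2}
```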
Supervisor(s): Peter Stuckey, Kim Marriott
Title: Constrained Optimization for Digital Publishing
University to apply to: University of Melbourne

The internet and electronic reading devices are fundamentally changing the nature of textbooks, newspapers and magazines, transforming how we create and use such documents. These changes are set to accelerate with near-universal access to high-speed broadband and the rise of eBooks and eBook readers. Australian publishers are increasingly concerned about the viability of their current business models.

However, electronic documents potentially provide considerable benefits over print documents:
• Customized content: The content can be customized based on the interests and needs of the viewer. Furthermore, the content can be interactive, alive with embedded data and RSS feeds.
• Customized presentation: Documents can be viewed on a wide variety of electronic and print devices with very different characteristics and capabilities. The document’s presentation should automatically adapt to the viewing device’s characteristics while preserving the basic layout and style.
• Accessibility: There can be better support for blind and vision-impaired users to access on-line material, including graphics.
• Collaborative, continuous authoring: Electronic documents allow collaborative authoring on a scale we have never seen before. Furthermore, documents can now be annotated and modified throughout their lifetime.
However, we still do not know how to provide these benefits. While standards such as ePub 3.0 are a step in the right direction, a key challenge is good automatic layout. Current tools provide good automatic micro-typography: line breaking, kerning, and hyphenation and justification (H&J). They cannot handle the macro-typographic aspects of layout: float placement, table layout, and so on. This is, of course, a combinatorial optimization problem.
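To make the combinatorial flavour concrete, here is a minimal sketch of the simplest layout subproblem, optimal line breaking, as dynamic programming over break points (an illustrative toy, not the group's tooling); macro-typographic decisions such as float placement can be posed as analogous discrete optimization problems, just with far richer constraints.

```python
def break_lines(word_widths, line_width, space=1):
    """Choose line breaks minimizing total squared slack (unused space)
    over all lines except the last, a classic measure of raggedness."""
    n = len(word_widths)
    best = [0.0] + [float("inf")] * n  # best[j] = min cost for words[:j]
    brk = [0] * (n + 1)                # brk[j] = start of the line ending at j
    for j in range(1, n + 1):
        width = -space                 # width of words[i:j] with spaces
        for i in range(j - 1, -1, -1):
            width += word_widths[i] + space
            if width > line_width:
                break                  # words[i:j] no longer fits on one line
            cost = best[i] + (0 if j == n else (line_width - width) ** 2)
            if cost < best[j]:
                best[j], brk[j] = cost, i
    lines, j = [], n                   # recover the chosen break points
    while j > 0:
        lines.append((brk[j], j))
        j = brk[j]
    return lines[::-1]                 # (start, end) word ranges per line

print(break_lines([3, 2, 2, 5, 4, 3], line_width=9))
# -> [(0, 2), (2, 4), (4, 6)]: three lines of widths 6, 8 and 8
```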
Supervisor(s): Peter Stuckey, Maria Garcia de la Banda, Guido Tack
Title: High Level Modelling and Model Transformation
University to apply to: University of Melbourne

High-level modelling languages for concisely describing combinatorial optimization problems make the specification and solving of such problems far easier. The modelling languages Zinc and MiniZinc were developed in this group. But high-level modelling languages bring new challenges: how do we map a high-level model to an effective executable low-level model, and how do we create a complex hybrid solution from the high-level model? Thus high-level modelling requires model analysis and model transformation to be efficiently usable. Furthermore, high-level modelling languages need to be extended to handle stochastic optimization problems, with features such as random variables, common distributions, a variety of uncertainty models such as Markov and graphical models, and a variety of sampling procedures; and to handle dynamic optimization problems. Building a universal modelling language with effective transformations is an important research topic.
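A tiny illustration of the map-to-low-level step: the sketch below (the tuple encoding of constraints is invented for the example; it is not Zinc or MiniZinc internals) flattens a high-level alldifferent constraint into the pairwise disequalities a simple low-level solver understands.

```python
from itertools import combinations

def flatten(constraint):
    """Rewrite one high-level constraint into low-level primitives,
    a toy stand-in for what a compiler such as the MiniZinc-to-FlatZinc
    translator does on a much larger scale."""
    kind, *args = constraint
    if kind == "alldifferent":
        (xs,) = args
        # alldifferent([x1..xn]) decomposes into n*(n-1)/2 disequalities.
        return [("neq", a, b) for a, b in combinations(xs, 2)]
    return [constraint]  # already low-level: pass through unchanged

model = [("alldifferent", ["x", "y", "z"]), ("leq", "x", "y")]
print([low for c in model for low in flatten(c)])
# -> [('neq', 'x', 'y'), ('neq', 'x', 'z'), ('neq', 'y', 'z'),
#     ('leq', 'x', 'y')]
```

The research question is precisely that this choice is not fixed: a good compiler keeps the global constraint for solvers with a dedicated propagator and picks among alternative decompositions otherwise, guided by model analysis.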
Supervisor(s): Peter Stuckey, Pascal Van Hentenryck
Title: Data-Intensive Optimization
University to apply to: University of Melbourne

One fundamental change in the optimization space is the availability of massive amounts of data. This has led to new classes of optimization applications such as data mining, (constraint-based) clustering, and pattern discovery and recognition. These applications arise in many areas including computational biology, social networks and forecasting. Moreover, some of these areas are evolving from very dedicated algorithms, specialized to a given task, to more general frameworks. Indeed, some data-mining projects have shown that constraint programming is a flexible and efficient tool for addressing some traditional data-mining tasks.

It is clear, however, that optimization technology has to evolve to meet the demands of some of these applications. Some feature few variables but extremely large domains (e.g., large DNA or RNA sequences), others feature a huge number of variables and constraints (e.g., mining documents), while still others deal with extremely large graphs that must be clustered subject to a variety of constraints on the clusters. We need to design the data structures and optimization algorithms appropriate for data-intensive optimization.
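As a small taste of constraint-based clustering, the sketch below (illustrative only; the names and encoding are this example's own) performs one assignment step of constrained k-means, where cannot-link pairs must land in different clusters.

```python
def constrained_assign(points, centroids, cannot_link):
    """Assign each point to its nearest feasible centroid, respecting
    cannot-link constraints (pairs of point indices that must end up in
    different clusters). A greedy sketch of one constrained k-means step;
    a constraint-programming formulation would search instead of failing."""
    assign = {}
    for i, p in enumerate(points):
        banned = set()  # clusters of already-placed cannot-link partners
        for a, b in cannot_link:
            if a == i and b in assign:
                banned.add(assign[b])
            elif b == i and a in assign:
                banned.add(assign[a])
        # Try centroids from nearest to farthest (squared Euclidean distance).
        choices = sorted(range(len(centroids)),
                         key=lambda k: sum((pc - cc) ** 2
                                           for pc, cc in zip(p, centroids[k])))
        for k in choices:
            if k not in banned:
                assign[i] = k
                break
        else:
            raise ValueError(f"no feasible cluster for point {i}")
    return assign

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0)]
cents = [(0.0, 0.0), (5.0, 5.0)]
print(constrained_assign(pts, cents, cannot_link=[(0, 1)]))
# -> {0: 0, 1: 1, 2: 1}: point 1 is pushed off its nearest centroid
```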
Supervisor(s): Pascal Van Hentenryck, Peter Stuckey, Maria Garcia de la Banda, Geoffrey Chu
Title: Large-Scale Parallel Optimization
University to apply to: University of Melbourne

The last couple of years have seen a fundamental change in hardware evolution. Processors are no longer doubling in speed every 18 months or so. Rather, the industry has moved from single-core desktops to multi-core computers, large clusters, and clouds of computing resources.

Optimization software has benefited tremendously from advances in processor speed, making it possible to solve increasingly complex applications and problems that must be solved under time constraints. However, improvements in hardware will no longer translate into speedups for optimization software: it will become necessary to exploit parallel computing resources, which may be challenging given the nature of optimization algorithms.

Recent research has demonstrated that parallel computing on multi-core architectures can bring significant benefits, and that communication starts becoming a significant issue when dealing with more than 1,000 processors. One goal for parallel optimization will be to build a novel architecture that scales to larger clouds, which may fundamentally change the scope of highly complex problems and data-intensive optimizations. Another goal should be to study how to parallelize hybrid and decomposition algorithms, since they require more synchronization and communication.
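A common baseline that avoids the communication problem entirely is portfolio parallelism: launch differently configured (or differently seeded) solvers on the same problem and keep whichever finishes first. A minimal sketch, with a fake `solve` worker standing in for a real randomized solver:

```python
import concurrent.futures as cf
import random

def solve(seed):
    """Stand-in for a randomized solver: the seed decides how much fake
    work this run does, mimicking runtime variance between strategies."""
    rng = random.Random(seed)
    effort = rng.randint(100_000, 2_000_000)
    best = min(rng.random() for _ in range(effort))
    return seed, best

def portfolio(seeds):
    """Run one solver per configuration in parallel and return the first
    finisher. No synchronization or communication is needed, which is why
    portfolios scale easily; hybrid and decomposition methods do not."""
    pool = cf.ProcessPoolExecutor(max_workers=len(seeds))
    winner = next(cf.as_completed(
        [pool.submit(solve, s) for s in seeds])).result()
    pool.shutdown(wait=False, cancel_futures=True)  # abandon the losers
    return winner

if __name__ == "__main__":  # required for process pools on some platforms
    print(portfolio([1, 2, 3, 4]))
```

The research challenge starts where this sketch stops: work sharing, learnt-nogood exchange, and parallel decomposition all reintroduce the synchronization and communication costs that the portfolio sidesteps.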