Noon seminar
The informal research seminar of the ALGO group. Talks last roughly 25 minutes, with five extra minutes allocated for discussion. Many talks cover recent conference papers or serve as practice talks for upcoming conferences. New members are often asked to give an overview of their field of research. Talks given by invited speakers may take up to 45–60 minutes, including questions.
To be kept up-to-date about noon seminar presentations, please subscribe to the algoseminar-l mailing list.
COVID-19 note
Some of the talks are currently conducted via Zoom. To avoid abuse, we do not distribute the join links publicly. To make the links appear in the list below, make sure you are signed in. Alternatively, you can find the join link in the announcement email.
This is the schedule for 2016. Schedules are available for the years 2005 through 2024.
- Presentation
- Master's defense
- Midterm
- Invited talk
- External talk
- Practice talk
- PhD defense
- Cancelled
Past talks
-
Dec 13, 12:45—13:15
(MF 3)
Sander Beekhuis: Pseudo one-sided rectangular duals.
- Abstract
SETTING
A rectangular layout is a partition of a rectangle into a finite set of interior-disjoint rectangles. The interior of this rectangle thus contains vertical and horizontal line segments; such a segment is maximal if it cannot be extended any further on either side. A rectangular layout is one-sided if every maximal segment is the side of a single rectangle.
The rectangular dual of a graph G is a rectangular layout whose rectangles have the same adjacencies as the vertices of G. To make such a dual one usually adds 4 corner vertices to G to obtain an extended graph.
Some rectangular duals are more useful than others. For example, area-universal rectangular duals have adjacencies that hold regardless of the areas we assign to each rectangle.
Eppstein et al. have shown that rectangular duals are area-universal exactly when they are one-sided [1]. They also show that one can compute such a rectangular dual with an FPT algorithm if it exists. Unfortunately, not all graphs admit a one-sided dual; an example is given by Rinsma [2].
WORK
Since not all graphs admit a one-sided rectangular dual, we relax this condition slightly and consider pseudo one-sided rectangular duals, in which every maximal segment must be on the same side of at most k adjacent rectangles, for some (hopefully) small constant k.
We show that extended graphs containing a separating 4-cycle in general do not admit pseudo one-sided duals.
We conjecture that all extended graphs without separating 4-cycles can be colored in a pseudo one-sided way, and we will show our progress on this conjecture so far.
REFERENCES
[1] D. Eppstein, E. Mumford, B. Speckmann, and K. Verbeek. “Area-Universal and Constrained Rectangular Layouts”. In: SIAM Journal on Computing 41.3 (2012), pp. 537–564. doi: 10.1137/110834032.
[2] I. Rinsma. “Nonexistence of a certain rectangular floorplan with specified areas and adjacency”. In: Environment and Planning B: Planning and Design 14.2 (1987), pp. 163–166. doi: 10.1068/b140163.
-
Dec 06, 12:45—13:15
(MF 3)
Bart Jansen: A treasure found on the lost continent of polynomial time: Kernelization for Feedback Vertex Set.
- Abstract
When solving a hard computational problem, the running time can often be reduced by using a preprocessing step that throws away irrelevant parts of the data which are guaranteed not to affect the final answer. Until recently, there was no good explanation for the effectiveness of preprocessing. This changed when the notion of kernelization was developed within the field of parameterized complexity. It has been called "the lost continent of polynomial time", since the exploration of the formal model of preprocessing captured by kernelization has led to a surprisingly rich set of techniques that can reduce the size of NP-hard problem inputs in polynomial time, without changing the answer. Using a user-defined complexity parameter, one can also give theoretical guarantees on the amount of data reduction that is achieved. This talk presents a modern exposition of one of the treasures that has been found on the lost continent: an efficient and provably effective preprocessing algorithm for the undirected Feedback Vertex Set problem (FVS), which asks to find a minimum vertex set whose removal makes the graph acyclic. While kernelization algorithms for FVS are not new, the presented approach and its elegant correctness proof were not known before.
* This talk is based on a novel view on kernels for FVS that was developed together with Huib Donkers as part of his Capita Selecta.
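The abstract does not spell out the kernel itself. As a generic illustration of safe preprocessing for Feedback Vertex Set (a textbook reduction rule, not the method presented in the talk), the sketch below repeatedly deletes vertices of degree at most one, which can never lie on a cycle and hence never affect an optimal solution:

```python
def prune_degree_le1(adj):
    """Repeatedly delete vertices of degree at most 1 from a simple
    graph. Such vertices lie on no cycle, so deleting them never
    changes the minimum feedback vertex set.
    adj: dict vertex -> set of neighbours, modified in place."""
    queue = [v for v, nbrs in adj.items() if len(nbrs) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue  # already deleted (a vertex can be queued twice)
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) <= 1:
                queue.append(u)
        del adj[v]
    return adj

# A triangle with a pendant path: the path is pruned, the cycle stays.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(sorted(prune_degree_le1(g)))  # -> [1, 2, 3]
```

Real FVS kernels combine several such rules with more powerful arguments; this rule alone already shrinks tree-like parts of the input.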
-
Nov 28, 12:30—13:00
(MF 3)
Hans Bodlaender: The 2^O(n/log n)-phenomenon and Intervalizing Colored Graphs.
- Abstract
Recent results show that, assuming the Exponential Time Hypothesis, several problems have a complexity that is exponential in n/log n. In this talk, we show that such bounds hold for the following problem: given a vertex-colored graph G with the number of colors bounded by a constant, can we assign to each vertex an interval such that intervals of adjacent vertices overlap and vertices with overlapping intervals have different colors? In other words, is G a subgraph of a properly colored interval graph? The upper bound holds for each constant number of colors, and the lower bound holds even when G is a tree with 5 colors. This is joint work with Jesper Nederlof, Johan van Rooij and Tom van der Zanden.
-
Oct 25, 12:30—13:00
(MF 7.084)
Sudeshna Kolay: Subexponential Algorithms for Rectilinear Steiner Tree and Arborescence Problems.
- Abstract
A rectilinear Steiner tree for a set T of points in the plane is a tree which connects T using horizontal and vertical lines. In the Rectilinear Steiner Tree problem, the input is a set T of n points in the Euclidean plane, and the goal is to find a rectilinear Steiner tree for T of smallest possible total length. A rectilinear Steiner arborescence for a set T of points and root r ∈ T is a rectilinear Steiner tree S for T such that the path in S from r to any point t ∈ T is a shortest path. In the Rectilinear Steiner Arborescence problem, the input is a set T of n points in the plane and a root r ∈ T, and the task is to find a rectilinear Steiner arborescence for T, rooted at r, of smallest possible total length. We present the first subexponential-time algorithms for both problems. Our algorithms are deterministic and run in 2^O(√n log n) time.
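As classical background on the first problem (not part of the talk's subexponential algorithm): by Hanan's theorem, an optimal rectilinear Steiner tree can always be found on the grid induced by the terminals. A minimal sketch of that candidate grid:

```python
from itertools import product

def hanan_grid(terminals):
    """Candidate Steiner points: all intersections of the horizontal
    and vertical lines through the terminals. Hanan's theorem says an
    optimal rectilinear Steiner tree exists using only these points."""
    xs = sorted({x for x, _ in terminals})
    ys = sorted({y for _, y in terminals})
    return [(x, y) for x, y in product(xs, ys)]

# Three terminals with distinct coordinates give a 3 x 3 candidate grid.
print(len(hanan_grid([(0, 0), (2, 1), (1, 3)])))  # -> 9
```

This restriction to a discrete grid is what makes exact algorithms for the problem possible in the first place.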
-
Oct 18, 12:30—13:00
(MF 3.119)
Michael Horton: Compact Flow Diagrams for State Sequences.
- Abstract
We introduce the concept of compactly representing a large number of state sequences, e.g., sequences of activities, as a flow diagram. We argue that the flow diagram representation gives an intuitive summary that allows the user to detect patterns among large sets of state sequences. Simplified, our aim is to generate a small flow diagram that models the flow of states of all the state sequences given as input. For a small number of state sequences we present efficient algorithms to compute a minimal flow diagram. For a large number of state sequences we show that it is unlikely that efficient algorithms to compute a flow diagram of minimum size exist. More specifically, the problem is W[1]-hard if the number of state sequences is taken as a parameter. We thus introduce several heuristics for this problem. We argue about the usefulness of the flow diagram by applying the algorithms to two problems in sports analysis. We evaluate the performance of our algorithms on a football data set and generated data.
This is joint work with:
Kevin Buchin – TU Eindhoven
Maike Buchin and Stef Sijben – Ruhr-Universität Bochum
Joachim Gudmundsson – The University of Sydney
-
Oct 18, 12:00—12:30
(MF 3.119)
Thom Castermans: GlamMap: Geovisualization for e-Humanities.
- Abstract
We present GlamMap, a visualization tool for large, multi-variate georeferenced
humanities data sets. Our approach visualizes the data as glyphs on a zoomable
geographic map, and performs clustering and data aggregation at each zoom level
to avoid clutter and to prevent overlap of symbols. GlamMap was developed for
the Galleries, Libraries, Archives, and Museums (GLAM) domain in cooperation
with researchers in philosophy.
-
Oct 11, 12:00—12:30
(MF 6.202)
Quirijn Bouts: Visual Encoding of Dissimilarity Data via Topology-Preserving Map Deformation.
- Abstract
We present an efficient technique for topology-preserving map deformation and apply it to the visualization of dissimilarity data in a geographic context. Map deformation techniques such as value-by-area cartograms are well studied. However, using deformation to highlight (dis)similarity between locations on a map in terms of their underlying data attributes is novel. We also identify an alternative way to represent dissimilarities on a map through the use of visual overlays. These overlays are complementary to deformation techniques and enable us to assess the quality of the deformation as well as to explore the design space of blending the two methods. Finally, we demonstrate how these techniques can be useful in several quite different applied contexts: travel-time visualization, social demographics research, and understanding energy flow in a wide-area power grid.
-
Sep 13, 12:00—12:30
(MF 14)
Kevin Verbeek: Bundled Crossings in Embedded Graphs.
- Abstract
Any drawing of a non-planar graph in the plane necessarily contains
crossings, but the geometric structure of those crossings can have
a significant impact on the visual quality of the drawing. In
particular, the structure of two disjoint groups of locally parallel
edges (bundles) intersecting in a complete crossbar (a bundled crossing)
is visually simpler even if it involves many individual crossings,
compared to an equal number of random crossings scattered in the plane.
In this paper, we consider the number of bundled crossings in a drawing
as a measure of the drawing's quality, and we introduce the problem of
partitioning the crossings of a given drawing into the minimum number of
bundled crossings. We show that this problem is NP-hard. We also point
out an interesting connection to the problem of dissecting a rectilinear
polygon with holes into rectangular regions. This connection leads to a
constant-factor approximation of the optimal solution for circular
layouts, in which all vertices lie on the outer face.
-
Aug 30, 13:00—13:30
(MF 13)
Roy Berkeveld: Design and Implementation of Unicycle AGVs for Cooperative Control.
- Abstract
Autonomous mobile robot research is a broad field and new technologies are continuously being developed. Practical evaluation of novel algorithms and techniques tends to be expensive and involved due to the hardware, so research frequently resorts to simulation. This work presents the design of a low-cost robotics platform for the purpose of evaluating cooperative motion control algorithms. Much focus is put on sensor performance to reduce localization error, as this is a leading metric for algorithmic performance. This is aided by specific optimizations that take advantage of full-system integration, which is not possible for partial designs. A prototype of the hardware is built, and a partial software implementation addresses many of the integration problems. The platform uses modern technology such as Ultra-wideband radio (UWB), Lidar and computer vision, at a unit cost of around €500. Environmental awareness is shared among collaborating robots via data structures that are well-suited for distributed updates. One use case is collaborative Coverage Path Planning under the constraints of position error. For this purpose, optimized strategies for trajectory tracking, online navigation, contour-based coverage path planning and formation control are proposed. Finally, the same software framework is also usable for simulations, and therefore suits many types of comparative analysis.
-
Aug 22, 11:00—12:00
(MF 12)
Kai Xu: Visual analytics provenance for sensemaking.
- Abstract
Sensemaking is the process of finding meaning in information. It is hardly a single-step process: it requires ETL (Extract, Transform and Load), various computational analyses (such as statistics and machine learning), data visualisation, and finally human interpretation, reasoning and decision making. Existing work mostly targets only part of the process, and important issues such as data quality and human bias are not very well studied. In this talk, I would like to introduce 'analytic provenance' and how it might address some of these issues.
-
Aug 17, 13:00—13:40
(FLX 1.09)
Ankur Tiwari: Range-assignment for broadcasting in mobile ad-hoc networks.
- Abstract
In a range-assignment problem one is given a set of nodes (points) in the plane, and the goal is to assign a communication range to each node such that the resulting communication graph has some desired properties, while the total energy consumption is small. In this thesis project we consider the power assignment problem for moving nodes, where the desired property is that the communication graph contains a broadcast tree from a given source node. We develop a new algorithm for this problem, and show experimentally that the resulting power consumption is smaller than for existing algorithms.
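As generic background on range assignment (an illustrative heuristic, not the algorithm developed in the thesis): given any spanning tree rooted at the source, assigning every node a range equal to the distance to its farthest child yields a communication graph that contains a broadcast tree. A sketch using a Euclidean minimum spanning tree:

```python
import math

def broadcast_ranges(points, source=0):
    """Range assignment whose communication graph contains a broadcast
    tree from `source`: build a Euclidean minimum spanning tree
    (Prim, O(n^2)), orient it away from the source, and give each node
    the distance to its farthest child. A common heuristic, not an
    optimal assignment."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    parent = [None] * n
    # best[j] = (distance, tree vertex) of the cheapest edge into j
    best = {j: (dist(source, j), source) for j in range(n) if j != source}
    while best:
        j = min(best, key=lambda v: best[v][0])
        parent[j] = best.pop(j)[1]
        for k in best:
            d = dist(j, k)
            if d < best[k][0]:
                best[k] = (d, j)
    ranges = [0.0] * n
    for j in range(n):
        if parent[j] is not None:
            ranges[parent[j]] = max(ranges[parent[j]], dist(parent[j], j))
    return ranges

# Three collinear nodes: the source reaches node 1, node 1 reaches node 2.
print(broadcast_ranges([(0, 0), (1, 0), (3, 0)]))  # -> [1.0, 2.0, 0.0]
```

For moving nodes, as in the thesis, such an assignment has to be maintained as the tree changes over time.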
-
Aug 17, 10:00—10:30
(MF 6)
Jacco Snoeren: Workload Distribution in Heterogeneous SMT Assembly Lines.
- Abstract
For my graduation project I am looking into load balancing problems within the Surface Mount Technology (SMT) industry, more specifically within the company Kulicke & Soffa. Their machines apply SMT technology by outfitting printed circuit boards with components. My project involves distributing the workload over different pick-and-place machines (placing those components) within an assembly line. In my presentation I will discuss how I applied simulated annealing to help obtain balance among the machines, the problems encountered during my research, and how these have been tackled.
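Simulated annealing for this kind of balancing problem can be sketched generically as follows (a toy stand-in that minimizes the maximum machine load; the real SMT setting involves machine heterogeneity and many placement constraints not modelled here):

```python
import math
import random

def anneal_balance(jobs, machines, steps=20000, t0=10.0, seed=1):
    """Simulated annealing for distributing job times over machines,
    minimizing the maximum machine load (a toy stand-in for line
    balancing; the real SMT problem has many extra constraints)."""
    rng = random.Random(seed)
    assign = [rng.randrange(machines) for _ in jobs]
    load = [0.0] * machines
    for j, m in enumerate(assign):
        load[m] += jobs[j]
    cost = max(load)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        j = rng.randrange(len(jobs))
        m_old, m_new = assign[j], rng.randrange(machines)
        if m_new == m_old:
            continue
        load[m_old] -= jobs[j]
        load[m_new] += jobs[j]
        new_cost = max(load)
        # Always accept non-worsening moves; accept worsenings with
        # Boltzmann probability exp(-delta / t).
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            assign[j] = m_new
            cost = new_cost
        else:  # revert the move
            load[m_new] -= jobs[j]
            load[m_old] += jobs[j]
    return assign, cost

assignment, makespan = anneal_balance([3, 3, 2, 2, 2], machines=2)
```

The cooling schedule and neighbourhood (here: reassign one job) are the main tuning knobs in practice.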
-
Aug 16, 13:00—13:30
(MF 9)
Sander Kools: Visualizing pre-clustered graphs as maps.
- Abstract
We use graphs to represent relations between objects. Graph visualization allows us to convey the content of a graph. Typically a graph is represented as a node-link diagram. This is still the most widely used visualization for a graph.
The goal of this graduation project is to explore another way to visualize graphs, namely as maps. Maps are used everywhere. They display a region with additional information such as roads and cities. People are taught how to read them at a young age. We can take advantage of this skill by displaying the graph as a map and visualizing information on it such that users can easily identify and read the information without having to get used to the visualization method.
There are several existing techniques that visualize graphs as maps, but these techniques have several shortcomings. In this project I aim to develop a new technique for visualizing graphs as maps that does not have these shortcomings. We will present the design process and the resulting conclusions for turning graphs into maps.
We use pre-clustered graphs: graphs with nodes and edges in which certain nodes are grouped together in the same cluster. The idea of this visualization is to show the importance of clusters and nodes and the relations among the clusters and the nodes. We use pre-clustered graphs because we want to visualize clusters for any possible clustering of a graph.
To turn the graph into a map, we turn nodes and clusters into regions on the map, where the size of a region indicates its importance and the distance between regions indicates the similarity of the corresponding nodes in the graph.
This visualization technique is implemented in Java. This implementation is provided as open source for further research and development.
-
Aug 15, 13:00—13:30
(MF 7.084)
Jeffrey Aben: Network Creation from general Trajectory sets.
- Abstract
For my graduation project I investigate a general network construction method.
If the trajectories have an inherent underlying network, e.g. a road network, the algorithm should find this network. If, however, there is no such underlying network, e.g. for bird trajectories, we still obtain valuable information from the constructed network, such as clues to hotspots, popular routes and obstacles.
-
Aug 15, 12:30—13:00
(MF 7.084)
Willem Sonke: Mapping polygons to the grid with small Hausdorff and Fréchet distance.
- Abstract
We show how to represent a simple polygon P by a (pixel-based) grid polygon Q that is simple and whose Hausdorff or Fréchet distance to P is small. For any simple polygon P, a grid polygon exists with constant Hausdorff distance between their boundaries and their interiors. Moreover, we show that with a realistic input assumption we can also realize constant Fréchet distance between the boundaries. We present algorithms accompanying these constructions, heuristics to improve their output while keeping the distance bounds, and experiments to assess the output. (ESA 2016 practice talk)
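For intuition about the quality measure used here: between finite point samples of two boundaries, the Hausdorff distance reduces to a simple brute-force computation (the paper bounds the continuous distance between polygons; this sketch is only illustrative):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets,
    by brute force: the farthest any point of one set is from its
    nearest neighbour in the other set."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

print(hausdorff([(0, 0), (1, 0)], [(0, 0), (0, 2)]))  # -> 2.0
```

The Fréchet distance additionally requires the correspondence between the two boundaries to be continuous, which is why it better captures resemblance of shapes.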
-
Jul 29, 13:00—13:30
(MF 9)
Roel Jacobs: Constructing Maps by Clustering Trajectories.
- Abstract
We propose a new approach for constructing the underlying map from trajectory data. This algorithm is based on the idea that road segments can be identified as stable subtrajectory clusters in the data. For this, we consider how subtrajectory clusters evolve for varying distance values, and choose stable values for these, in this way avoiding a global proximity parameter. Within trajectory clusters, we choose representatives, which are combined to form the map.
We experimentally evaluate our algorithm on hiking and vehicle tracking data. These experiments demonstrate that our approach can naturally deal with different road widths and differences in density of the data. It can also, to an extent, separate roads that run close to each other and can deal with outliers in the data, two issues that are notoriously difficult in road network reconstruction.
-
Jul 29, 10:30—11:00
(MF 9)
Qi Xiao: On the Number of Cycles and Perfect Matchings in Planar Graphs.
- Abstract
We investigate the number of simple cycles, Hamiltonian cycles and
perfect matchings in planar graphs with n vertices. By analyzing the
way simple paths can be extended several steps at a time using a
face-coloring technique, we show that there can be at most
O(2.870214^n) simple cycles in planar graphs. We look into a result
claimed in a previous work that there are at most O(2.2134^n)
Hamiltonian cycles in planar graphs, and argue that their proof is
likely flawed. Using the transfer matrix technique, we show that there
is a family of planar graphs with Ω(1.535365^n) perfect matchings.
-
Jul 04, 11:00—11:30
(MF 11)
Jules Wulms: Lower bounds for preprocessing algorithms based on protrusion replacement.
- Abstract
We present lower bounds for algorithms that use protrusion replacement to preprocess for optimization problems. Protrusion replacement can be used on a planar graph G to replace a subgraph H of G by an equivalent graph gadget H' called a representative. Replacing a subgraph of G by a representative changes the size of the optimal solution by exactly some constant Delta(H,H'), regardless of the remainder of G. Existing work shows upper bounds on the number of equivalence classes for these representatives, and on the size of small representatives. These quantities depend on the number of vertices t through which the representative is attached to the rest of the graph. We establish lower bounds for Independent Set: for general graphs there are at least 2^{2^t / sqrt(4t)} equivalence classes. We improve on the existing upper bound on the number of equivalence classes by showing there are at most 2^{2^t-1} equivalence classes. These bounds also hold for more restricted graph classes, and we will show this for planar graphs of bounded treewidth/pathwidth. Let R_t be a set of representatives such that there is a representative in R_t for each equivalence class. Using the lower bound on the number of equivalence classes, we prove a lower bound of Ω(2^t / sqrt(4t)) on the size of a representative in R_t.
-
Jun 27, 12:30—13:00
(MF 7.084)
Astrid Pieterse: Optimal Sparsification for Some Binary CSPs Using Low-degree Polynomials.
- Abstract
I will present the results of a paper accepted to MFCS 2016, co-authored by Bart Jansen.
We analyze to what extent it is possible to efficiently reduce the number of clauses in NP-hard satisfiability problems, without changing the answer. Upper and lower bounds are established using the concept of kernelization. Existing results show that if NP is not contained in coNP/poly, no efficient preprocessing algorithm can reduce n-variable instances of CNF-SAT with d literals per clause to equivalent instances with O(n^{d-e}) bits for any e > 0. For the Not-All-Equal SAT problem, a compression to size Õ(n^{d-1}) exists. We put these results in a common framework by analyzing the compressibility of binary CSPs. We characterize constraint types based on the minimum degree of multivariate polynomials whose roots correspond to the satisfying assignments, obtaining (nearly) matching upper and lower bounds in several settings. Our lower bounds show that not just the number of constraints, but also the encoding size of individual constraints plays an important role. For example, for Exact Satisfiability with unbounded clause length it is possible to efficiently reduce the number of constraints to n+1, yet no polynomial-time algorithm can reduce to an equivalent instance with O(n^{2-e}) bits for any e > 0, unless NP is a subset of coNP/poly.
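The n+1 bound for Exact Satisfiability comes from viewing each constraint as a degree-1 polynomial: "exactly one of these literals is true" is a linear equation over the rationals, and keeping a maximal linearly independent subset of the equations preserves the satisfying assignments. A sketch of that idea (an illustration, not code from the paper):

```python
from fractions import Fraction

def reduce_xsat(clauses, n):
    """Each Exact-SAT clause ("exactly one of these literals is true")
    is a linear equation over Q: sum(pos vars) - sum(neg vars) = 1 - #neg.
    Keeping a maximal linearly independent subset of these equations
    (Gaussian elimination) preserves the satisfying 0/1 assignments,
    so at most n+1 clauses survive."""
    basis = {}  # pivot column -> normalized reduced row
    kept = []
    for clause in clauses:  # literal +i means x_i, -i means NOT x_i
        row = [Fraction(0)] * (n + 1)  # n coefficients plus right-hand side
        rhs = 1
        for lit in clause:
            if lit > 0:
                row[lit - 1] += 1
            else:
                row[-lit - 1] -= 1
                rhs -= 1
        row[n] = Fraction(rhs)
        for col, brow in basis.items():  # eliminate existing pivots
            if row[col]:
                f = row[col]
                row = [a - f * b for a, b in zip(row, brow)]
        pivot = next((i for i, a in enumerate(row) if a), None)
        if pivot is not None:  # independent of the kept clauses
            basis[pivot] = [a / row[pivot] for a in row]
            kept.append(clause)
    return kept

# The third clause repeats the first; the fourth is their difference.
print(reduce_xsat([[1, 2], [2, 3], [1, 2], [1, -3]], n=3))  # -> [[1, 2], [2, 3]]
```

The lower bound in the paper shows the limits of this approach: the count of constraints drops to n+1, but the total encoding size cannot be compressed below roughly n^2 bits.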
-
Jun 01, 12:30—13:00
(MF 7.084)
Sander Kools: Graph to map.
- Abstract
Information visualization is essential for understanding information hidden within datasets. Map visualization can be used to display geographical information about a region or area, such as routes to destinations. It is then also possible to superimpose other information about the region, such as crime rates or demographics, indicated by colouring. When the data is not geographical, this method is not used; instead, more traditional graph visualization techniques are applied.
In this project, we propose a visualization technique that visualizes relational data as geographical maps. The map can then be used to convey information within the graph, and the user can easily identify this information, because people learn how to read maps at a young age. We show a practical algorithm and illustrate the effectiveness of this approach.
-
Jun 01, 12:00—12:30
(MF 7.084)
Max Sondag: Stability of treemap algorithms.
- Abstract
The stability of a treemap visualisation can be intuitively understood as how much a treemap changes when the underlying data changes.
In this presentation it will be explained why we should consider the stability of treemap algorithms and why the current definitions of stability are not complete enough.
An algorithmic approach will be presented that can be used to modify an existing rectangular treemap in such a way that the impact on the stability of the treemap can be controlled when optimizing the layout of the rectangles in the treemap according to any metric.
Finally, a new rectangular treemap algorithm will be presented that uses this approach to guarantee upper bounds on the aspect ratio of the rectangles in the treemap while controlling for stability.
-
May 30, 12:30—13:00
(MF 9)
Arthur van Goethem: Grouping Time-varying Data for Interactive Exploration.
- Abstract
We present algorithms and data structures that support the interactive analysis of the grouping structure of one-, two-, or higher-dimensional time-varying data while varying all defining parameters. Grouping structures characterise important patterns in the temporal evolution of sets of time-varying data. We follow Buchin et al., who define groups using three parameters: group size, group duration, and inter-entity distance. We give upper and lower bounds on the number of maximal groups over all parameter values, and show how to compute them efficiently. Furthermore, we describe data structures that can report changes in the set of maximal groups in an output-sensitive manner. Our results hold in R^d for fixed d.
-
May 26, 15:00—16:00
(MF 13)
Anne Driemel: Two decades of algorithms for the Fréchet distance.
- Abstract
Time series data is one of the most common forms of big data. More
generally, we can think of sequence data such as DNA-sequences, strings,
trajectories of moving objects, geometric curves and speech recordings.
The edit distance stands as an unchallenged distance measure for
strings. Dynamic time warping is a variation of the edit distance that
was developed for speech recognition and is heavily used in the data
mining community for various types of time series data, alongside LCS (longest common subsequence). The Fréchet distance is the geometric counterpart, which was conceived independently as a mathematical metric for curves.
In 1995, Alt and Godau described a quadratic-time algorithm to decide if two polygonal curves are similar under the Fréchet distance. For almost two decades, faster algorithms seemed out of reach and it was conjectured that this algorithm is optimal. In 2014, Bringmann gave complexity-theoretical evidence for this conjecture by proving that no O(n^{2-ε})-algorithm can exist (for any ε>0), unless the Strong Exponential Time Hypothesis is false.
The talk will put these results into context with related distance measures such as dynamic time warping, LCS and edit distance, which share a similar story. We will review the body of work that has been developed to deal with the quadratic hardness, making near-linear time algorithms possible after all, and we will go beyond single distance computation towards important topics such as data structures and clustering under the Fréchet distance.
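For intuition about the quadratic running time, the closely related discrete Fréchet distance between point sequences admits a simple O(nm) dynamic program (Eiter and Mannila's variant, sketched below; Alt and Godau's continuous-case algorithm has the same quadratic flavour):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """O(len(P) * len(Q)) dynamic program for the discrete Fréchet
    distance between point sequences P and Q (Eiter and Mannila):
    the smallest leash length needed when two walkers traverse the
    sequences monotonically, one point at a time."""
    @lru_cache(maxsize=None)
    def d(i, j):
        cost = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return cost
        if i == 0:
            return max(d(0, j - 1), cost)
        if j == 0:
            return max(d(i - 1, 0), cost)
        # advance on P, on Q, or on both, whichever is cheapest
        return max(min(d(i - 1, j), d(i - 1, j - 1), d(i, j - 1)), cost)

    return d(len(P) - 1, len(Q) - 1)

# Two parallel horizontal segments at vertical distance 1.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)],
                       [(0, 1), (1, 1), (2, 1)]))  # -> 1.0
```

Bringmann's lower bound says that, conditionally, no algorithm can beat this quadratic behaviour by a polynomial factor.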
-
May 23, 13:00—13:30
(MF 9)
Jacco Snoeren: Workload Distribution in Heterogeneous SMT Assembly Lines.
- Abstract
For my graduation project I am looking into load balancing problems within the Surface Mount Technology (SMT) industry, more specifically within the company Kulicke & Soffa. Their machines apply SMT technology by outfitting printed circuit boards with components. My project involves distributing the workload over different pick-and-place machines (outfitting those components) within an assembly line. In my presentation I will discuss how I applied simulated annealing and other techniques to help obtain balance among the machines, the problems I have encountered during my research, and how I am planning to overcome those issues in the upcoming months.
-
May 23, 12:30—13:00
(MF 9)
Tim Ophelders: The Complexity of Snake.
- Abstract
Snake and Nibbler are two well-known video games in which a snake slithers through a maze and grows as it collects food. During this process, the snake must avoid any collision with its tail. Various goals can be associated with these video games, such as avoiding the tail as long as possible, or collecting a certain amount of food, or reaching some target location. Unfortunately, like many other motion-planning problems, even very restricted variants are computationally intractable. In particular, we prove the NP-hardness of collecting all food on solid grid graphs; as well as its PSPACE-completeness on general grid graphs. Moreover, given an initial and a target configuration of the snake, moving from one configuration to the other is PSPACE-complete, even on grid graphs without food.
Our results make use of the nondeterministic constraint logic framework by Hearn and Demaine, which has been used to analyze the computational complexity of many games and puzzles. We extend this framework for the analysis of puzzles whose initial state is chosen by the player.
Authors: Marzio De Biasi and Tim Ophelders
-
May 17, 15:00—15:30
(MF 9)
Gert van der Plas: 3D cache-oblivious multi-scale traversals of meshes using 8-reptile tetrahedra.
- Abstract
In the field of numerical simulation, a multi-scale grid and its traversal can have a huge impact on the cache-efficiency of the calculations performed. We present the Haverkort element traversal and refinement scheme, which is based on the bisection of 8-reptile tetrahedra. We developed a stack-assignment scheme that allows the Haverkort traversal to use stacks for storing the input data, the output data, and the temporary data of vertices and elements. Our parallel-plane approach to assigning stacks to vertices yields a solution that requires at most 8 stacks to store temporary vertex data for uniform and multi-scale refined grids, independent of the refinement level. Furthermore, 8 is also a lower bound for the number of stacks in our parallel-plane approach. Additionally, we show that a general lower bound of 5 temporary stacks exists for storing the vertex data during a traversal.
The combination of the Haverkort traversal with our Constant Stack algorithm is cache-oblivious, suitable for multi-level cache hierarchies, maintains its performance for multi-level refined grids, and allows for fast and efficient space-filling-curve-based (re)partitioning. The Constant Stack algorithm has a time complexity of O(1) for stack assignment and access and can achieve a cache-miss ratio of less than 5.2%. For all of these reasons, we expect that a constant-number-of-stacks solution can compete with and outperform numerical simulations using cache-optimization techniques based on loop blocking when running on CPUs or GPUs. We also apply the approach used to obtain a constant-number-of-stacks solution for the Haverkort element traversal and refinement scheme to two other suitable 8-reptile polyhedra. Traversal and refinement schemes are given for an 8-reptile bisected cube and an 8-reptile bisected triangular prism, needing 9 and 7 stacks respectively.
-
May 17, 13:00—13:30
(MF 9)
Mark de Berg: Geodesic Spanners for Points on a Polyhedral Terrain.
- Abstract
Let S be a set of n points on a polyhedral terrain T in R^3, and let eps>0 be a fixed constant. We prove that S admits a (2+eps)-spanner with O(n log n) edges with respect to the geodesic distance. This is the first spanner with constant spanning ratio and a near-linear number of edges for points on a terrain. On our way to this result, we prove that any set of n weighted points in R^d admits an additively weighted (2+eps)-spanner with O(n) edges; this improves the previously best known bound on the spanning ratio (which was 5+eps), and almost matches the lower bound.
Joint work with M. Abam and M.-J. Rezaei Seraji
-
Apr 12, 13:00—13:30
(MF 7.084)
Jules Wulms: Geo Word Clouds.
- Abstract
Word clouds are a popular method to visualize the frequency of
words in textual data. Nowadays many text-based data sets, such as
Flickr tags, are geo-referenced, that is, they have an important spatial
component. However, existing automated methods to generate
word clouds are unable to incorporate such spatial information. We
introduce geo word clouds: word clouds which capture not only the
frequency but also the spatial relevance of words.
-
Mar 22, 12:30—14:00
(MF 8)
Aleksandar Markovic: Colouring Contact Graphs of Squares and Rectilinear Polygons.
- Abstract
In a parallel universe, the Earth is flat and every country is a square. Cartographers in this universe would like to provide maps such that no two countries sharing even a single point are assigned the same colour. In our universe, we can always colour our maps with at most four colours, as the graph resulting from our map is planar. In that universe, however, this graph is not.
In this talk, we will show that these graphs are indeed not always planar, and that they can nevertheless be coloured with at most 6 colours. We will also show an instance which cannot be coloured with fewer than 5 colours.
(EuroCG practice talk)
Authors: Mark de Berg, Aleksandar Markovic and Gerhard Woeginger
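As a generic baseline (not the 6-colour argument from the talk), greedy colouring bounds the number of colours by the maximum degree plus one: give each vertex the smallest colour not already used by a neighbour.

```python
def greedy_colouring(adj):
    """Greedy graph colouring over an adjacency dict. Uses at most
    max_degree + 1 colours; a generic sketch, not the talk's proof."""
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Three mutually touching "countries" force three colours.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

The interesting part of the talk is showing that contact graphs of squares never need more than 6 colours, which greedy colouring alone does not give.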
-
Mar 22, 12:30—14:00
(MF 8)
Sandor Kisfaludi-Bak: Connected dominating set on unit disk graphs is W[1]-hard.
- Abstract
Connected dominating set on unit disk graphs (CDS-UDG) is a problem related to wireless broadcasting. We explore the parameterized complexity of this problem: we show that CDS-UDG is W[1]-hard by a reduction from the grid tiling problem. Our proof is based on a method developed by Daniel Marx.
(EuroCG practice talk)
Authors: Mark de Berg, Hans Bodlaender, Sandor Kisfaludi-Bak
-
Mar 22, 12:30—14:00
(MF 8)
Tim Ophelders: Computing the Fréchet Distance between Real-Valued Surfaces.
- Abstract
The Fréchet distance is a well-studied measure for the similarity of shapes. While efficient algorithms for computing the Fréchet distance between curves exist, there are only a few results on the Fréchet distance between surfaces. Previous work has shown that the Fréchet distance is upper semi-computable between piecewise linear surfaces f and g from [0,1]² to ℝᵏ; however, it is not even known to be computable. We focus on the case k=1. Intuitively, we measure the distance between terrains based solely on the height function. Our main result is that the Fréchet distance between f and g is computable, and even in NP.
We also prove that, even for k=1, computing a factor 2-ε approximation of the Fréchet distance is NP-hard, showing that the problem is in fact NP-complete. We achieve our results by defining an intermediate distance between contour trees, which we also show to be NP-complete to compute.
(EuroCG practice talk)
Authors: Kevin Buchin, Tim Ophelders, and Bettina Speckmann
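For contrast with the surface case, the curve case mentioned in the abstract is classical: the discrete Fréchet distance between point sequences follows the Eiter-Mannila dynamic program, sketched here purely as a reference point.

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves given
    as point sequences (Eiter-Mannila dynamic program). Shown only
    as a contrast to the surface problem in the talk."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # The "dog and owner" may advance on either curve or on both.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```

The quadratic table here has no analogue for surfaces, which is one reason even computability was open there.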
-
Mar 15, 13:00—14:00
(MF 13)
Willem Sonke: Mapping polygons to the grid with small Hausdorff and Fréchet distance.
- Abstract
We consider the problem of representing shapes on a pixel grid. More formally, we want to represent a simple polygon P as a grid polygon Q with small distance between them. We provide two algorithms that produce such grid polygons. The first algorithm produces grid polygons with small Hausdorff distance to P. Those grid polygons do not necessarily intuitively resemble P, however. Therefore, we present a second algorithm that produces grid polygons with small Fréchet distance to P. We also provide corresponding lower bounds.
(EuroCG practice talk)
Authors: Quirijn W. Bouts, Irina Kostitsyna, Marc van Kreveld, Wouter Meulemans, Willem Sonke, and Kevin Verbeek
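The Hausdorff distance used as a quality measure above has a direct definition-level sketch for finite point sets (e.g. dense samples of the two polygon boundaries); this illustrates the measure, not the paper's algorithms.

```python
from math import dist

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets.
    A sketch of the distance measure itself, applied to point
    samples rather than full polygons."""
    def directed(P, Q):
        # Largest distance from a point of P to its nearest point of Q.
        return max(min(dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

Unlike the Fréchet distance, this ignores the order of points along the boundary, which is why grid polygons with small Hausdorff distance may still not resemble the input.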
-
Mar 15, 13:00—14:00
(MF 13)
Irina Kostitsyna: Beaconless geocast protocols are interesting, even in 1D.
- Abstract
Beaconless geocast protocols are routing protocols used to send messages in mobile ad-hoc wireless networks, in which the only information available to each node is its own location. Packets get routed in a distributed manner: each node uses local decision rules based on the packet source and destination, and its own location. In this paper we analyze some of the most relevant existing protocols in a formal and structured way, focusing on two relevant 1D scenarios.
(EuroCG practice talk)
Authors: Joachim Gudmundsson, Irina Kostitsyna, Maarten Löffler, Vera Sacristán, Rodrigo I. Silveira
-
Mar 08, 13:00—13:30
(MF 13)
Max Konzack: Fine-Grained Analysis of Problems on Curves.
- Abstract
We provide conditional lower bounds on two problems on polygonal curves. First, we generalize a recent result on the (discrete) Fréchet distance to k curves. Specifically, we show that, assuming the Strong Exponential Time Hypothesis, the Fréchet distance between k polygonal curves in the plane with n edges cannot be computed in O(n^{k−ε}) time, for any ε > 0. Our second construction shows that, under the same assumption, a polygonal curve with n edges in dimension Ω(log n) cannot be simplified optimally in O(n^{2−ε}) time.
-
Feb 29, 11:30—12:00
(MF 12)
Tesshu Hanaka: Graph Optimization Problems for Economic Network Analyses.
- Abstract
In this talk, I introduce two examples of applications of graph optimization to analyze economic networks. One is about the input-output model proposed by Leontief, which is one of the most well-known models representing relations between several economic sectors (e.g., industries). The model is a linear system that represents transactions between industries as a matrix. Our approach is to consider such a transaction matrix as a weighted graph, which enables us to apply graph optimization; I show an example analysis in this approach. The other is a supply-chain network analysis, for which I propose a graph optimization problem named the maximum weighted minimal separator problem. I will show several first-step results, such as NP-hardness.
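The Leontief model mentioned above asks for a total-output vector x satisfying x = Ax + d, i.e. (I − A)x = d, where A holds the inter-industry coefficients and d the final demand. A minimal two-sector sketch with hypothetical numbers (not data from the talk):

```python
def leontief_output(A, d):
    """Total output x solving x = A x + d, i.e. (I - A) x = d, for a
    two-sector economy via Cramer's rule. The coefficients used in
    the demo below are hypothetical, not data from the talk."""
    (a00, a01), (a10, a11) = A
    m00, m01 = 1 - a00, -a01     # I - A
    m10, m11 = -a10, 1 - a11
    det = m00 * m11 - m01 * m10
    x0 = (d[0] * m11 - m01 * d[1]) / det
    x1 = (m00 * d[1] - d[0] * m10) / det
    return [x0, x1]

# Hypothetical coefficients: sector 0 consumes 0.2 units of its own
# output and 0.3 units of sector 1's output per unit produced, etc.
x = leontief_output([[0.2, 0.3], [0.4, 0.1]], [10.0, 20.0])
```

Reading the columns of A as weighted edges between sectors is exactly the step that turns this linear system into the weighted graph analyzed in the talk.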
-
Feb 23, 13:00—13:30
(cancelled)
Ali Mehrabi: TBA.
-
Feb 22, 11:30—12:00
(MF 14)
Jules Wulms: Lower bounds for preprocessing algorithms based on protrusion.
- Abstract
Meta-kernelization is a general framework to obtain kernels for NP-hard graph problems. It works by replacing subgraphs by smaller 'gadgets' that behave similarly with respect to global solutions. Initially only the existence of such preprocessing algorithms was proved, while follow-up work established upper bounds with high multiplicative constants. In this project we investigate whether these large constants are needed, by looking for lower bounds. By analyzing how many different types of behaviour can be exhibited by a graph with a small boundary, we can find lower bounds on the size of the gadgets used in meta-kernelization.
-
Feb 09, 13:00—13:30
(MF 13)
Kevin Buchin: On the number of substructures in planar graphs.
- Abstract
We are interested in the maximum number of cycles, spanning trees, etc. that a planar graph may have. I will discuss some older bounds for these problems and how (and how not) we want to improve them.
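While the talk concerns the maximum number of spanning trees over all planar graphs, counting them for one given graph is classical: by Kirchhoff's matrix-tree theorem, the count equals any cofactor of the graph Laplacian. A sketch with exact arithmetic:

```python
from fractions import Fraction

def count_spanning_trees(n, edges):
    """Matrix-tree theorem: the number of spanning trees of a graph
    on vertices 0..n-1 equals any cofactor of its Laplacian. Exact
    rational elimination avoids floating-point error."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Delete row/column 0, then take the determinant by elimination.
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for i in range(n - 1):
        p = next((r for r in range(i, n - 1) if M[r][i] != 0), None)
        if p is None:
            return 0  # Singular minor: the graph is disconnected.
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n - 1):
            f = M[r][i] / M[i][i]
            for c in range(i, n - 1):
                M[r][c] -= f * M[i][c]
    return int(det)
```

For K4 this gives 16, matching Cayley's formula n^{n-2}.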
-
Jan 26, 11:30—12:00
(cancelled)
Bettina Speckmann: TBA.
-
Jan 19, 11:30—12:00
(cancelled)
Irina Kostitsyna: TBA.
-
Jan 12, 11:30—12:00
(MF 12)
Mikkel Abrahamsen: Common Tangents of Two Simple Polygons in Linear Time and Constant Workspace.
- Abstract
We describe algorithms for computing the common tangents of two disjoint simple polygons using linear time and only constant workspace. A tangent of a polygon is a line touching the polygon such that all of the polygon lies on the same side of the line. A common tangent is a tangent of both polygons. If the polygons are on the same side of the tangent, it is an outer common tangent. Otherwise, it is a separating common tangent. Each polygon is given as a read-only array of its corners in cyclic order. The algorithms detect if outer or separating tangents do not exist. This implies that it is possible in linear time and constant workspace to detect if the convex hulls of the polygons are disjoint, and if not, whether one convex hull is contained in the other. The algorithms are short and simple, but the proofs that they work are non-trivial.
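The tangent definition in the abstract translates directly into an orientation test: all vertices must lie weakly on one side of the line, and at least one must touch it. This is a definition-level sketch (checking vertices only), not the paper's linear-time constant-workspace algorithm.

```python
def cross(o, a, b):
    """Orientation of point b relative to the directed line o -> a:
    positive = left of the line, negative = right, zero = on it."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_tangent(p, q, polygon):
    """Check that the line through p and q touches the polygon (at a
    vertex) while every vertex lies weakly on one side of it. A
    sketch of the definition, not the paper's algorithm."""
    signs = [cross(p, q, v) for v in polygon]
    touches = any(s == 0 for s in signs)
    one_side = all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
    return touches and one_side
```

For a unit square, the line along the bottom edge is a tangent, while the diagonal is not, since vertices lie on both sides of it.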