Exploiting multi-core systems for parallel network simulation
About the book
Discrete event simulation constitutes a fundamental methodology in the design, development, and evaluation of communication systems. Despite their abstract nature, simulation models often exhibit considerable computational complexity, resulting in extensive simulation runtimes. To counteract the runtime demand of complex simulation models, parallel discrete event simulation distributes the workload of a simulation model across multiple processing units. Traditionally, parallel discrete event simulation focused on investigating large-scale system models using distributed computing clusters.

In the last decade, however, two developments have fundamentally changed the established state of the art in parallel discrete event simulation. First, multi-core systems have become the de facto standard hardware platform for desktop and server computers. In contrast to distributed computing clusters, multi-core systems provide different hardware characteristics, most notably shared memory. Second, the focus of interest in the research community has shifted from wired to wireless communication systems. In contrast to wired networks, the simulated network entities are tightly coupled due to detailed modeling of physical layer and wireless channel effects, which hinders efficient parallelization.

This thesis addresses the challenges resulting from these two developments by designing algorithms and tools that enable and support efficient parallel simulation of tightly coupled systems on multi-core systems. In particular, we make four distinct contributions.

Our first contribution is parallel expanded event simulation, a modeling paradigm that extends discrete events with durations spanning a period in simulated time. The resulting expanded events form the basis for a conservative synchronization scheme that considers overlapping expanded events eligible for parallel processing (sketched below). We furthermore put these concepts into practice by implementing Horizon, a parallel expanded event simulation framework specifically tailored to multi-core systems.

The durations carried by expanded events provide deeper insight into event dependencies. Yet, they typically do not accurately represent the true dependencies among events. Hence, our second contribution, probabilistic synchronization, exploits the globally shared memory of multi-core systems to observe the behavior of a simulation at runtime and to learn accurate dependencies between events. Three different heuristics then exploit this dependency information to guide speculative event execution (also sketched below).

While the previous two contributions focus on speeding up individual simulation runs, our third contribution exploits the massively parallel processing power of GPUs to reduce the runtime demand of entire parameter studies. To this end, we develop a multi-level parallelism scheme that bridges the gap between the fundamentally different processing paradigms underlying expanded event simulation and GPUs.

Finally, the performance of any parallelization scheme depends heavily on the structure of a given simulation model. We therefore specify a performance analysis methodology that enables model developers to identify and eliminate performance bottlenecks in their simulation models.

In combination, our four contributions provide the means for efficient parallel simulation on multi-core systems.
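To illustrate the core idea behind expanded events, here is a minimal C++ sketch: each event carries a start time and a duration in simulated time, and two events whose intervals overlap are treated as eligible for parallel processing. The struct and function names are illustrative assumptions, not the actual Horizon API.

```cpp
#include <cstdio>

// Hypothetical representation: an expanded event occupies the
// interval [start, start + duration) in simulated time.
struct ExpandedEvent {
    double start;     // scheduled start time in simulated time
    double duration;  // modeled processing duration
};

// Two expanded events overlap if neither ends before the other begins;
// overlapping events are considered independent and therefore
// eligible for parallel processing under the conservative scheme.
bool eligibleForParallelProcessing(const ExpandedEvent& a, const ExpandedEvent& b) {
    double aEnd = a.start + a.duration;
    double bEnd = b.start + b.duration;
    return a.start < bEnd && b.start < aEnd;
}

int main() {
    ExpandedEvent frameRx{10.0, 2.5};  // e.g., a frame reception lasting 2.5 time units
    ExpandedEvent decode{11.0, 4.0};   // e.g., a decoding step starting while the first is ongoing
    std::printf("parallel-eligible: %s\n",
                eligibleForParallelProcessing(frameRx, decode) ? "yes" : "no");
    return 0;
}
```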
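The second sketch illustrates, under similar assumptions, one plausible way to realize the runtime observation underlying probabilistic synchronization: a table records how often an observed ordering of two event types actually produced a conflict, and a simple threshold heuristic uses the learned probability to decide whether to execute an event speculatively. This is not the thesis' actual set of heuristics, only an illustration of the general idea.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Hypothetical estimator: tracks, per ordered pair of event types, how
// often an actual dependency (e.g., a shared-state conflict) was observed.
class DependencyEstimator {
    // key: (earlier event type, later event type); value: (conflicts, observations)
    std::map<std::pair<std::string, std::string>, std::pair<long, long>> stats_;
public:
    void record(const std::string& earlier, const std::string& later, bool conflicted) {
        auto& s = stats_[{earlier, later}];
        s.second += 1;
        if (conflicted) s.first += 1;
    }

    // Estimated probability that 'later' depends on 'earlier'.
    double dependencyProbability(const std::string& earlier, const std::string& later) const {
        auto it = stats_.find({earlier, later});
        if (it == stats_.end() || it->second.second == 0) return 1.0;  // conservative when unknown
        return static_cast<double>(it->second.first) / it->second.second;
    }

    // Simple heuristic: speculate only if the learned dependency probability is low.
    bool speculate(const std::string& earlier, const std::string& later, double threshold = 0.05) const {
        return dependencyProbability(earlier, later) < threshold;
    }
};

int main() {
    DependencyEstimator est;
    for (int i = 0; i < 100; ++i)
        est.record("channelUpdate", "frameRx", i % 50 == 0);  // 2% observed conflicts
    std::printf("speculate: %s\n", est.speculate("channelUpdate", "frameRx") ? "yes" : "no");
    return 0;
}
```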