The research group for Parallel Computing is concerned with means and methods for efficiently utilizing real parallel computing architectures (typically clusters of SMP nodes, shared-memory systems, highly parallel multi-core processors such as the Intel Xeon Phi, and, to a limited extent, GPUs) as well as idealized ones (PRAM, communication networks) for the solution of given computational problems. One focus is the design, development, and implementation of parallel algorithms and data structures for fundamental problems, both under different model assumptions and on real parallel systems. A second focus is the design, implementation, and evaluation of interfaces and frameworks for expressing and executing parallel computations on different types of systems. A third focus is understanding and designing architectural features that support efficient parallel computation. Systematic, well-founded, and reproducible evaluation of algorithms, frameworks, interfaces, and languages on real distributed- and shared-memory parallel systems is integral to the research area.

Some concrete themes that we are pursuing are:

If you would like to visit us and give a talk, please do not hesitate to contact us. We are always open to interesting discussions, problems, and collaborations in any of the above areas.