
Automatic Parallelization

Automatic parallelization has apparently been abandoned due to its lack of general applicability. However, it is still alive and thriving in the form of auto-vectorization, which leverages the vector instructions on modern CPUs! GPUs have also caused a resurgence of auto-parallelization, powered by renewed interest in polyhedral models, especially since the code that performs best on GPUs is almost exactly the code that is also most readily handled by automatic means. In addition, array-based high-level languages provide compelling application scenarios where automatic parallelization could be fruitful and practical.
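
As an illustration (a generic sketch, not code from any particular publication), the C loop below is the kind of computation both trends favor: every iteration is independent, so a compiler such as GCC or Clang at -O3 can map it onto SIMD vector instructions, and the same structure translates directly into a one-thread-per-element GPU kernel.

    #include <stddef.h>

    /* Elementwise update y[i] = a*x[i] + y[i]. The restrict
     * qualifiers assert that x and y do not alias, so the
     * compiler can prove the iterations independent and
     * vectorize the loop; the same property makes the loop a
     * natural fit for a GPU kernel with one thread per element. */
    void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
    {
        for (size_t i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }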

My research on automatic parallelization targets array languages, such as MATLAB and R, and specific execution targets, such as GPUs and task-parallel libraries. The results show that automatic parallelization continues to be useful, and highly practical, for specific application domains and targets.
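
For instance, a whole-array statement in an array language already expresses its parallelism explicitly. The sketch below is only illustrative (it is not the transformation implemented in the publications listed below); it shows the kind of scalarized C loop that a MATLAB statement such as C = A .* B + s could be lowered to, and that loop in turn can be vectorized, offloaded to a GPU, or split into tasks for a task-parallel library.

    #include <stddef.h>

    /* Illustrative lowering of the MATLAB-style statement
     *     C = A .* B + s;
     * into a scalarized loop. Each element is computed
     * independently, so the loop can be vectorized, offloaded
     * to a GPU, or partitioned into coarser-grained tasks. */
    void array_mul_add(size_t n, const double *restrict A,
                       const double *restrict B, double s,
                       double *restrict C)
    {
        for (size_t i = 0; i < n; i++) {
            C[i] = A[i] * B[i] + s;
        }
    }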

Related publications:

  1. Pushkar Ratnalikar and Arun Chauhan. Automatic Parallelism through Macro Dataflow in MATLAB. In Proceedings of the 27th International Workshop on Languages and Compilers for Parallel Computing (LCPC), 2014.
  2. Chun-Yu Shei, Pushkar Ratnalikar and Arun Chauhan. Automating GPU Computing in MATLAB. In Proceedings of the International Conference on Supercomputing (ICS), pages 245–254, 2011.
  3. Chun-Yu Shei, Adarsh Yoga, Madhav Ramesh and Arun Chauhan. MATLAB Parallelization through Scalarization. In Proceedings of the 15th Workshop on the Interaction between Compilers and Computer Architectures (INTERACT), pages 44–53, 2011. Held in conjunction with the 17th IEEE International Symposium on High Performance Computer Architecture (HPCA).

Related open-source software releases:
