The overarching themes of my research are compilers, programmer productivity, and heterogeneous parallel computing, with attention to performance (why else would you run your code in parallel?).
My research interests include:
High-level Programming Languages
Compiler techniques to eliminate performance bottlenecks in high-level programming languages, with a focus on domain-specific languages such as MATLAB and R.
Heterogeneous Parallel Computing with GPUs
Developing high-level programming techniques for heterogeneous parallel systems, such as those involving embedded processors, GPUs, and FPGAs. This includes compiler techniques for computation partitioning and memory-space allocation, as well as new ways of thinking about algorithms to make them amenable to execution on heterogeneous parallel platforms.
Declarative Parallel Programming
Developing declarative embedded domain-specific languages for expressing the trickier parts of parallel computations, such as communication. The goal is to encourage programmers to develop novel parallel algorithms (leveraging the creative abilities of humans), while leaving the error-prone and tedious tasks that can be automated to compilers.
Automatic parallelization techniques for high-level programming systems, including high-level languages and applications written using high-level component libraries (e.g., generic libraries).
- R2C (Now part of the High-level Programming Languages project.)
- ParaM (Now part of the High-level Programming Languages project.)
- Past research from graduate school
Important: Prospective students, please read this first.