Research Supervisor: Steven D. Johnson
Postdoctoral Researcher: Eric Jeschke
Since this project does not involve a change in supervision or locale, it does not appear to qualify under the CISE Post-Doctoral Research Associates program. However, we believe that program's general objectives are well served by this project. In supporting his spouse's career development, Jeschke is remaining in Bloomington for the next two years. He is uniquely qualified to do this work. In addition, this project makes use of an NSF-funded research vehicle ideally suited to its purpose. The work spawns new opportunities for collaboration with researchers at Indiana investigating parallel symbolic processing. Finally, the proposed funding would enable a new generation of students to train for an experimental effort that has been ongoing since 1979.
The DSI programming system is a model of concurrent demand-oriented processing in which the only synchronizing mechanism is a suspending constructor. Newly allocated data cells are initialized with transparent computations that produce their field values. Field access blocks until these suspensions run and replace themselves with concrete values.
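The mechanism can be illustrated with a small sketch. This is not DSI's actual code; the names (`Suspension`, `scons`, `Cell`) are ours, and a Python lock stands in for DSI's synchronization, but the behavior matches the description above: fields start as transparent suspended computations, and any access blocks until the suspension has replaced itself with a concrete value.

```python
import threading

class Suspension:
    """A transparent delayed computation stored in a cell field."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._lock = threading.Lock()
        self._done = False
        self._value = None

    def force(self):
        # The first task to touch the field runs the suspension; any
        # concurrent task blocks here until the value is concrete.
        with self._lock:
            if not self._done:
                self._value = self._thunk()
                self._thunk = None  # the suspension replaces itself
                self._done = True
        return self._value

class Cell:
    """A data cell whose fields are initialized to suspensions."""
    def __init__(self, head_thunk, tail_thunk):
        self._head = Suspension(head_thunk)
        self._tail = Suspension(tail_thunk)

    @property
    def head(self):
        return self._head.force()

    @property
    def tail(self):
        return self._tail.force()

def scons(head_thunk, tail_thunk):
    """The suspending constructor: neither field is computed yet."""
    return Cell(head_thunk, tail_thunk)

cell = scons(lambda: 6 * 7, lambda: None)
print(cell.head)  # 42 -- computed only now, on first access
```

Because the suspension discards its thunk after running once, repeated accesses return the cached concrete value, which is what makes the mechanism safe to share among concurrent tasks.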
Suspensions are ultra-lightweight tasks. The DSI system is a vehicle for experimenting with various approaches to concurrent symbolic computation, including output-driven parallelism, demand propagation, delayed list ordering, and speculative scheduling. In twenty years of exploring these aspects, we have also developed general perspectives on architecture and resource management for parallel heap memories. Our motive is not to promote a particular computation model, but to contribute to the evolution of general purpose computers toward better symbolic processing capabilities.
Our experimentation has focused on moderate-scale, shared-memory multiprocessing models involving tens of processors and gigabyte-scale memories. This architecture represents the coming generation of general-purpose workstations. We also believe that symbolic processing will prevail on these machines, owing to the heterogeneity of network computing and, hence, the need to coordinate disparate local and remote tasks.
The proposed work is to instrument a DSI implementation on the Indiana University Computer Science Department's SGI Power Challenge computer. The Power Challenge, funded in 1993 by ARI grant CDA93-03189 (PD David S. Wise), is an instance of the architecture class for which DSI was originally targeted. The implementation would serve as a test bed for performance studies of storage management, task management, and process synchronization.
As in the past, we would continue to explore applicative surface languages for DSI, particularly system-modeling languages geared toward stream processing. The current surface language for DSI, a quasi-functional language called Daisy, is especially agile for building and observing stream networks. We are interested in techniques for improving the granularity of such programs. However, our primary intent is to investigate underlying architecture and resource-management issues for parallel heap-based processing.
The original DSI model emerged from the idea of a "lazy cons" proposed by Friedman and Wise in 1976. Johnson and Kohlstaedt refined a concurrent virtual machine model in 1979-1981. A stream-oriented programming paradigm arises naturally from this approach, especially when I/O devices are introduced to the heap as pseudo-suspensions, using the same basic synchronization mechanics.
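The stream paradigm that falls out of a lazy cons can be sketched briefly. The following is an illustrative Python sketch, not DSI or Daisy code; the memoized `Delay` stands in for a suspension, and the helper names (`scons`, `head`, `tail`, `smap`, `take`) are our own:

```python
class Delay:
    """A memoized thunk standing in for a suspension."""
    def __init__(self, thunk):
        self._thunk, self._forced = thunk, False
    def force(self):
        if not self._forced:
            self._value, self._thunk, self._forced = self._thunk(), None, True
        return self._value

def scons(h, t):
    # A lazy cons: both fields are suspended until demanded.
    return (Delay(h), Delay(t))

def head(s): return s[0].force()
def tail(s): return s[1].force()

def ints_from(n):
    # An unbounded stream: the tail stays suspended until demanded.
    return scons(lambda: n, lambda: ints_from(n + 1))

def smap(f, s):
    # Mapping over a stream yields another stream of suspensions.
    return scons(lambda: f(head(s)), lambda: smap(f, tail(s)))

def take(n, s):
    # Demand drives evaluation: only the first n cells are forced.
    out = []
    while n > 0:
        out.append(head(s))
        s, n = tail(s), n - 1
    return out

print(take(5, smap(lambda x: x * x, ints_from(1))))  # [1, 4, 9, 16, 25]
```

The point of the sketch is that no special stream machinery is needed: demand propagation through ordinary suspended constructors is enough, which is also why devices introduced to the heap as pseudo-suspensions compose with ordinary streams.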
Based on this work, Johnson embarked on research (funded through MIPS) adapting applicative programming methodology to hardware specification and verification (he has 30 refereed publications in this area). Although this formal-methods funding did not support continued language-implementation work, DSI and its surface language were used very effectively to animate hardware specifications. Thus, while DSI was used extensively in support of methodological research, providing a novel collection of features for stream models, its implementation details were never the focus of published articles.
With assistantship support from the Computer Science Department, Jeschke took over DSI development in 1987. He created a highly portable C implementation and also a version running on the department's 32-node BBN Butterfly. Informed by performance measurements on the Butterfly, Jeschke completely redesigned DSI for shared-memory parallel implementation. Components of the design included a parallel mark-sweep collector, a distributed allocation system, several variants of a fine-grained parallel task scheduler, and a virtual processor model.
Another important aspect of Jeschke's work is a multi-window I/O server, which, among other things, facilitated Johnson's system-modeling methodology. However, the two most significant contributions of Jeschke's work may be its investigation of the role of I/O processes in speculative task scheduling and its exploration of data recreation as a load-balancing mechanism.
Johnson will administer the project at no cost.
The Computer Science Department will provide access to and support for the Power Challenge and front-end workstations. It will waive its standard computer services fees.
There are several related research projects at Indiana. The largest is a parallel Scheme implementation effort headed by Kent Dybvig. David Wise, who heads the ARI project, has done research in parallel matrix decomposition, parallel heap-memory architectures, and applicative file systems, all of which reflect interests in common with Johnson's. Indeed, the DSI/Daisy model arose through collaborations between Johnson and Wise.