The current implementation of CEAL consists of a compiler, cealc, as well as a suite of CEAL benchmarks that we use for performance evaluation. This manual briefly describes how to download and build the cealc compiler. We also briefly describe how to use the CEAL language itself, using examples from our benchmark suite.

Your questions and comments are appreciated and should be directed to hammer@mpi-sws.org.
Building the system requires standard Unix tools such as sh, make, gcc, etc. Once you've got the prerequisites installed, download the source distribution:
http://mpi-sws.org/~hammer/pldi09/ceal-0.1.1.tar.gz
Extract the source distribution to a local directory and change to this directory.
To start building the system, begin by building CIL as follows:
cd cil
./configure
make
cd ..
Next, build the CEAL runtime library and the benchmark suite as follows:
make
Now the bin directory should contain:

- cealc, a script that invokes the CEAL compiler
- libceal.a, the runtime library
- the benchmark test programs, compiled with cealc

Each testing program is named test-appname, where appname is the name of a benchmark application.
Optionally, you can also re-generate this manual (if you have doxygen
installed on your machine), as follows:
cd doc
doxygen
cd ..
Then doc/html/ will contain this manual in HTML format.
cealc is a Perl-script wrapper for cilly, and cilly is itself a Perl-script wrapper for CIL and gcc. Check out http://cil.sourceforge.net/ for information about both CIL and cilly. The purpose of the cealc script is to invoke cilly with the right options to translate CEAL source code into C target code and, by default, compile it to a native binary via gcc. Its use is analogous to that of cilly, whose use is itself analogous to gcc: supply one or more source files as arguments and it will compile them into a binary a.out.
Many gcc options work here too. The -o option can be used to redirect the output, and the -c option can be used to create an object file only. As a general rule, any flag accepted by either gcc or cilly can be passed to cealc, which will pass it along down the line as necessary. This is largely because cilly accepts options recognized by gcc and "does the right thing" with them.
Special Options:
-nolibceal
Instead of the default behavior of linking against libceal, this option attempts to compile (and/or link) without libceal. It is used when compiling libceal itself via cealc.

At a high level, a typical meta-program is structured as follows. First, it performs an initial run of a core program:
1. construct initial core input
2. run core program
3. inspect core output
In the initial run (also referred to as the from-scratch run), the self-adjusting framework transparently creates an execution trace. The trace describes the dynamic data and control dependencies of the core program's run, and records how the core program produced its output (and intermediate results, etc.) from its input.
After the initial run, the meta-program makes one or more changes to the core input. To get the output associated with the changed input in a conventional programming paradigm, the meta-program would re-invoke the core program from scratch to recompute the new output. However, in the self-adjusting setting the meta-program uses change propagation to automatically update the core output with respect to the input changes. Behind the scenes, change propagation uses the core program's execution trace to selectively re-execute the execution steps that are affected by the changes.
4. repeat:
   4.A change core input
   4.B propagate changes
   4.C inspect updated core output
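As a rough C sketch of this pattern (not taken from the distribution: the input_t and output_t types, and the build_input, change_input, inspect_output, more_changes and core_sort functions, are hypothetical placeholders; the propagation call refers to the runtime's change-propagation entry point described later in this section):

int main() {
  input_t*  in  = build_input();      /* 1. construct initial core input     */
  output_t* out = core_sort(in);      /* 2. run core program (traced)        */
  inspect_output(out);                /* 3. inspect core output              */
  while (more_changes()) {            /* 4. repeat:                          */
    change_input(in);                 /*    4.A change core input            */
    ceal_propagate();                 /*    4.B propagate changes            */
    inspect_output(out);              /*    4.C inspect updated core output  */
  }
  return 0;
}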
The meta-program benefits from a self-adjusting core program when it is faster to use change propagation to update the output than to recompute it from scratch. There is a sizeable class of programs for which this performance benefit is asymptotic (e.g., for small changes, change propagation can sometimes be faster by a linear factor over recomputing from scratch). Although it may also be slower in some cases, it is only ever slower by a constant factor (again, when compared to running from scratch).
We often say that a core program is stable for a certain class of input changes if we can show that, for any such change, change propagation will always update the core output in a desirable amount of time. Whether a core program is stable is usually relative to the kinds of input changes we want to consider, and possibly to the size of the input itself. As an example, consider a core program that produces a sequence of sorted elements given an unsorted input sequence. We might consider an input change to be an insertion or deletion of an input element. Then we might say the core program is stable if change propagation can respond to such a change in time O(log n), where n is the total number of input elements. In this case, the time for handling a small change via change propagation is consistent with the optimal time for maintaining a sorted list of elements under insertions and deletions.
The stability of a core program is determined in part by how it's written, and core programs that do not exhibit good stability can sometimes be rewritten as stable ones.
From a core program's perspective, a modref is like any mutable (pointer-sized) location in memory: it can be read from (using read) and written to (using write). In both cases, the read or written value is of type void*. The modref function creates fresh modrefs that are initially empty.
Modrefs are typically used to hold the inputs and outputs of core programs, as well as their intermediate results. Typically, after the meta-program runs the core program initially, it (1) mutates the core input by writing one or more modrefs with new values (via write), and (2) forces the changes to propagate to the core output (via ceal_propagate).
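For illustration only, here is a minimal sketch of how a meta-program might use these primitives; the modref_t type name and the exact signatures are assumptions based on the description above, and core_compute is a hypothetical core program that reads the in modref and writes its result into the out modref:

modref_t* in  = modref();            /* fresh, initially-empty modrefs       */
modref_t* out = modref();
write(in, (void*) 3L);               /* construct the initial core input     */
core_compute(in, out);               /* initial (from-scratch) run, traced   */
long result = (long) read(out);      /* inspect the core output              */

write(in, (void*) 7L);               /* change the core input                */
ceal_propagate();                    /* propagate the change to the output   */
result = (long) read(out);           /* inspect the updated core output      */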