time, given that we had most of the material already. This
is how the whole thing started.
Before we actually tried to implement it, the initial plan
seemed to make perfect sense: The major portion of the
work had already been completed by putting all the
technical material on the slides – or so we thought. We
would only have to convert bullets into sentences, redo
some of the figures, add a chapter drawing all the details
into the grand picture of transaction-oriented systems,
compile a list of references – and be done! It was the kind
of plan that everybody will enthusiastically agree to at the
end of a meeting, so they can get on with their real work. In
our case it was the review meeting after the course, and
neither Jim nor I had a clear idea of how to implement it
after we got back from Berlin. However, with the best of intentions we agreed to produce text from the slides sometime soon.
With no deadline at all and many other things to do, we
made very little progress in turning the foils into prose – in
fact, we did not make any progress at all. I used the
material for a variety of courses I taught at the university,
extending and changing it as new algorithms, new systems, etc. became available. Jim did the same, teaching transaction processing at Stanford, but we were still just using and updating the slides; no prose was being produced
as a result of the teaching activities. The only new type of
content that proved useful when – much later – we actually
wrote the book was a rapidly growing number of problems
and exercises related to the various topics that were
covered in the foils. Those problems were specifically
created for the university courses; they had not been part of
the Berlin seminar.
That was the situation in 1987, and it did not change in
1988, or in 1989. In the fall of 1989 we discussed the
project and found that the original plan had been a failure.
It was obvious that if we wanted to get anything written, we
would have to hide in some remote, quiet and pleasant
place, equipped with PCs, a printer, and toner, with easy access
to good food, and spend all our time typing – well, most of
it. We figured that three months should be enough to
produce a complete first draft of the book, the polishing of
which could be done later, when we were back in our
normal habitats. After some lengthy and careful
deliberation it was decided to rent a house in a small village
in Tuscany named Ripa (near Carrara) and spend February
through April of 1990 there.
This time we got it partially right: At the end of April we
had about 600 pages of text, thanks to Jim’s strict regime
that required each of us to produce 2,000 words per day, no
matter which day. 600 pages were very close to our estimate – but they covered less than half the topics we wanted to discuss. So in order to preserve the
investment, we had to plan for a second hideaway, which
took place one year later in Bolinas (north of San
Francisco), again from February to April. At the end of this
period, we had about 1,000 pages of text, plus a number of lessons learned²:
- We would not be able to cover all the material that
was contained in the foils of the course.
- We would still have to do a lot of work in order to get
the book to the printer (glossary, index, and proofreading).
- Writing a book is hard work; we would never do it
again.
So Robert Burns was right indeed: The best laid plans …
3. ORGANIZING THE MATERIAL
In the years between the Berlin course and the time of
finishing the book, technologies related to transaction
processing, distributed computing, parallel databases, etc. developed at a rapid pace. There is nowhere near enough space to list them all, but I will mention some that had a major influence on the way the book was structured.
First, transaction technology for distributed systems started
to be used seriously on non-proprietary operating system
platforms, i.e., Unix [4]. This was partly the result of
transferring research results from academia into real
products via start-ups. A particularly notable example of this was Transarc’s Encina system [7], the result of a multi-year research effort at CMU led by Alfred Spector. Since Jim had been in regular discussions with this group, he had very detailed insight into both the architecture and the implementation of the system.
Second, the ideas for making transactions a fundamental mechanism for reliable execution at all levels of a system, rather than just using them for database applications, were being transformed into real systems. The most advanced example
in that category was Tandem’s TMF [6], which
demonstrated the use of transactions in the operating
system and featured real transactional RPCs, among other
interesting things. Of course, Jim was particularly familiar
with that system, so we often used it as a reference when
discussing how the elements of a “good” transaction
processing system (for teaching purposes) should play
together.
Third, C. Mohan of IBM had started to systematize and clarify many techniques for implementing transactional execution that had been around in various systems for many years, and to present them in a coherent framework. This
resulted in a famous series of papers on the ARIES design
[5], which had a significant influence on how we presented
recovery mechanisms and methods for synchronization on
B-tree structures, among other things.
In several companies and many research labs people were
working on new synchronization protocols, disaster
² There was yet another lesson, having to do with the deeply rooted connection between transaction processing and the level of precipitation in the area one writes about it – but that is beyond the scope of this short article.