object-map, but instead of assigning it a fresh OID, we'll re-use the OID from the original object as well. The object-map, as you may recall, is also a pure functional object. Adding an entry does not mutate any part of the object-map but instead makes a new object-map that shares as much structure as possible with the old one. Any OID dereferencing that happens under the old object-map will continue to see the original object with its original contents. Dereferencing an OID under the new object-map will see the new version of the object with the ‘modified’ field. We mimic side effects by creating a whole new world where everything is identical to the original except for the thing that appears to change.

Normally, one would expect that creating a new world every time you wish to mutate an object would be prohibitively expensive. But since the objects and the object-map itself are immutable, we can re-use them freely. This makes it reasonably efficient: creating a new world takes time proportional to the logarithm of the number of objects persistently stored. Even with a huge number of objects, simulating mutation adds very little time and space overhead.
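To make the structure sharing concrete, here is a minimal sketch of a purely functional object-map, written as a plain unbalanced binary tree keyed by OID. The names (object-map/add, object-map/lookup) and the representation are illustrative assumptions, not the store's actual code; a real implementation would use a balanced tree to get the logarithmic bound mentioned above. The point is that ‘rebinding’ an OID rebuilds only the path from the root to that OID, so the new map shares every untouched subtree with the old one.

;; Sketch only: a purely functional object-map as an unbalanced binary
;; search tree keyed by OID.  (A real store would use a balanced tree
;; to guarantee the O(log n) bound.)

(define-record-type node
  (make-node oid value left right)
  node?
  (oid   node-oid)
  (value node-value)
  (left  node-left)
  (right node-right))

(define empty-object-map '())

;; Return a new object-map in which OID maps to VALUE.  Only the nodes
;; on the path from the root to OID are rebuilt; every other subtree
;; is shared with the old map.
(define (object-map/add map oid value)
  (cond ((null? map) (make-node oid value '() '()))
        ((< oid (node-oid map))
         (make-node (node-oid map) (node-value map)
                    (object-map/add (node-left map) oid value)
                    (node-right map)))
        ((> oid (node-oid map))
         (make-node (node-oid map) (node-value map)
                    (node-left map)
                    (object-map/add (node-right map) oid value)))
        (else
         ;; Re-use the OID: bind it to the new version of the object.
         (make-node oid value (node-left map) (node-right map)))))

(define (object-map/lookup map oid default)
  (cond ((null? map) default)
        ((< oid (node-oid map)) (object-map/lookup (node-left map) oid default))
        ((> oid (node-oid map)) (object-map/lookup (node-right map) oid default))
        (else (node-value map))))

;; ‘Mutation’ creates a new world; the old world is untouched.
(define world-0 (object-map/add empty-object-map 42 "original contents"))
(define world-1 (object-map/add world-0 42 "modified contents"))

(object-map/lookup world-0 42 #f)   ;=> "original contents"
(object-map/lookup world-1 42 #f)   ;=> "modified contents"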
You may think this is an awfully roundabout way to do something simple. After all, why not do the traditional thing of simply maintaining a snapshot of the current consistent state of the world and actually mutating it? This would require modifying the logging strategy slightly. A special mutation record would have to be written to the log to echo the mutation being performed in memory. If the machine crashes, the log could be read forward to re-do the mutations that were in progress at crash time. Alternatively, it could be read backwards to undo the mutations that ought to be undone if the transaction didn't commit.
The rationale for this roundabout solution is simplicity. It is hard to make transactions work correctly every single time, given that a hard failure can occur at any moment. Use Google to search for “transaction bug” and take a look at how easy it is to make subtle mistakes. By making it impossible to mutate existing persistent store records and indexes, we guarantee that bugs in the transaction layer simply cannot affect earlier, successful transactions. Cleanup and recovery of the persistent store remain as trivial as before: we simply find the most recent valid commit record. There are no undo or redo logs to process. The transaction algorithm can be proven correct by inspection.
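To show just how little recovery involves under this scheme, here is a hedged sketch. The record layout (a tag, a payload, and a numeric checksum) is a made-up stand-in for whatever the store actually writes; the point is that recovery is a single scan for the most recent commit record that validates, with no undo or redo pass.

;; Hypothetical log record: (tag payload checksum).  Recovery ignores
;; everything after the most recent commit record that checks out.
(define (valid-commit-record? record compute-checksum)
  (and (eq? (car record) 'commit)
       (= (caddr record) (compute-checksum (cadr record)))))

;; Scan the log (oldest record first) and return the last valid commit
;; record, or #f if there is none.  Records written after that point
;; are simply unreachable garbage; nothing needs to be undone or redone.
(define (recover log compute-checksum)
  (let loop ((records log) (last-commit #f))
    (cond ((null? records) last-commit)
          ((valid-commit-record? (car records) compute-checksum)
           (loop (cdr records) (car records)))
          (else (loop (cdr records) last-commit)))))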
Once we have ‘mutation’, we'll need to ensure that we can maintain consistency when we use it. We want the persistent store to behave as if the mutations had some serial order to them. We augment each transaction object with some more bookkeeping information: each transaction must record which objects it dereferences, allocates, and ‘mutates’ before it commits. We'll also add a transaction manager object which examines transactions before they are committed to determine if a serial ordering can still be computed. If there is a single thread performing transactions, then a serial ordering is virtually guaranteed. But if multiple concurrent threads run transactions in parallel, we have to make sure that each transaction runs in isolation from the others, and that the sets of locations it reads and writes do not intersect with those of other transactions. If the transaction manager finds a conflict, it aborts one of the transactions.
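Here is a sketch of what that commit-time check might look like. The record type and the conflict rule below are illustrative choices, not the store's actual bookkeeping: a transaction conflicts with a concurrently committed one if the latter wrote any OID the former read or wrote. Freshly allocated OIDs are ignored here, since fresh OIDs cannot collide with anything.

;; Illustrative bookkeeping: the OIDs a transaction dereferenced and
;; the OIDs it ‘mutated’.  (Field names are made up for this sketch.)
(define-record-type transaction
  (make-transaction read-oids written-oids)
  transaction?
  (read-oids    transaction-read-oids)
  (written-oids transaction-written-oids))

(define (exists? pred lst)
  (and (pair? lst)
       (or (pred (car lst))
           (exists? pred (cdr lst)))))

(define (intersects? oids1 oids2)
  (exists? (lambda (oid) (memv oid oids2)) oids1))

;; TXN conflicts with an already-committed concurrent transaction if
;; that transaction wrote anything TXN read or wrote.
(define (conflicts? txn committed)
  (or (intersects? (transaction-written-oids committed)
                   (transaction-read-oids txn))
      (intersects? (transaction-written-oids committed)
                   (transaction-written-oids txn))))

;; Commit-time check: if TXN conflicts with any transaction that
;; committed while TXN was running, abort TXN; otherwise commit it.
(define (try-commit txn concurrent-commits commit! abort!)
  (if (exists? (lambda (other) (conflicts? txn other)) concurrent-commits)
      (abort! txn)
      (commit! txn)))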
Again, we note that this implementation can easily handle conflicts because it is very simple. Aborting a transaction is easier than committing it: you simply discard the transaction and never write a commit record for it. Allowing transactions to proceed and only checking for conflicts at commit time is an ‘optimistic’ strategy. In many situations an ‘optimistic’ concurrency strategy can perform better than a ‘pessimistic’ one. With a slight change it is easy to go to a ‘pessimistic’ model, in which conflicting transactions are detected when a field is mutated: if the mutated field has been read within another transaction, then that other transaction may be working with stale data if it commits before this one. A pessimistic model is slightly simpler to understand because you check the validity of each operation just before performing it. However, an optimistic model performs very well if concurrent changes to the exact same objects do not often occur.
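For contrast, here is a minimal sketch of the pessimistic check just described, again with hypothetical bookkeeping: before applying a ‘mutation’, we ask whether any other live transaction has already read that OID, and report the conflict immediately instead of waiting for commit time.

;; Pessimistic variant: detect the conflict at mutation time rather
;; than at commit time.  Each live transaction is represented here
;; simply by the list of OIDs it has read (a stand-in for the real
;; bookkeeping).
(define (exists? pred lst)
  (and (pair? lst)
       (or (pred (car lst))
           (exists? pred (cdr lst)))))

(define (mutate-checked oid new-value other-read-sets mutate conflict)
  (if (exists? (lambda (read-oids) (memv oid read-oids))
               other-read-sets)
      (conflict oid)            ; another transaction would see stale data
      (mutate oid new-value)))  ; safe: no other transaction read this OID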
While trying to bring this code back to life I encountered two bugs in MIT/GNU Scheme. The first was in a rare corner case of structure definition. The second was caused by the floating-point layout changing between the 32-bit and 64-bit releases. I've fixed these, but I have a more interesting problem.
The MIT Scheme object system, SOS, is derived from many of the ideas in Tiny CLOS. It is missing, however, a fully functional MOP. In addition, method dispatch is done only on the types of arguments. This makes it tricky to do ‘EQL specializers’. EQL specializers are necessary to smoothly integrate object instantiation. You can work around this limitation in some ways, but it would be much better if the implementation had this as a built-in feature.
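To illustrate what an EQL specializer buys you, here is a self-contained toy dispatcher; it is explicitly not SOS or Tiny CLOS code. A method can be specialized not just on the type of an argument but on one particular object, which is what you want when make-instance-style instantiation should dispatch on the specific class object being instantiated.

;; Toy single-argument generic function supporting both type
;; specializers (predicates) and EQL specializers (a particular
;; object).  This illustrates the idea, not SOS's dispatch mechanism.
(define (make-eql-specializer object)
  (lambda (argument) (eqv? argument object)))

;; A generic function is a mutable list of (applicable? . method)
;; pairs, checked most-recently-added first.
(define (make-generic)
  (list 'generic))

(define (add-method! generic applicable? method)
  (set-cdr! generic (cons (cons applicable? method) (cdr generic))))

(define (apply-generic generic argument . rest)
  (let loop ((methods (cdr generic)))
    (cond ((null? methods) (error "No applicable method:" argument))
          (((caar methods) argument) (apply (cdar methods) argument rest))
          (else (loop (cdr methods))))))

;; Example: instantiation dispatches on the specific class object.
(define <point> (list 'class 'point))        ; stand-in for a class object
(define make-an-instance (make-generic))

;; Type-specialized default: applies to any class-like object.
(add-method! make-an-instance pair?
  (lambda (class . initargs) (list 'instance-of class initargs)))

;; EQL-specialized method: applies only when the argument IS <point>.
(add-method! make-an-instance (make-eql-specializer <point>)
  (lambda (class x y) (list 'point x y)))

(apply-generic make-an-instance <point> 3 4)
;; => (point 3 4)
(apply-generic make-an-instance (list 'class 'line) 'a 'b)
;; => (instance-of (class line) (a b))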
1 comment:
What do you mean by "smoothly integrating object instantiation", and why are EQL specializers convenient for this? (I like EQL specializers a lot myself, but I would like to hear more about how other people use them...)