Way back in the 70's, when solid-state computing involved chisels and rock walls, the concept of the abstract data type (ADT) was developed. The ADT encapsulated its representation as thoroughly as possible. Each ADT had a set of representation invariants that were supposed to be maintained under all circumstances. It was considered an embarrassing bug if a caller could drive the representation into violating an invariant using only API calls, no matter how unusual the sequence. Additionally, each ADT was equipped with an abstraction function, which described how to map concrete representations to the abstract values they represented.
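For the youngsters, here is a minimal sketch of what one of these looked like, in Python for concreteness (the `IntSet` class and all of its names are made up for illustration, not from any actual 70's codebase):

```python
# A classic ADT: a set of integers, represented as a sorted,
# duplicate-free list hidden behind an API.

class IntSet:
    # Representation invariant: self._rep is sorted and has no duplicates.
    # Abstraction function: self._rep == [a, b, c, ...] represents
    # the abstract set {a, b, c, ...}.

    def __init__(self):
        self._rep = []

    def insert(self, x):
        if x not in self._rep:
            self._rep.append(x)
            self._rep.sort()
        self._check_rep()

    def contains(self, x):
        return x in self._rep

    def _check_rep(self):
        # The "embarrassing bug" test: no sequence of API calls should
        # ever leave the representation violating the invariant.
        assert self._rep == sorted(set(self._rep))

s = IntSet()
s.insert(3); s.insert(1); s.insert(3)
print(s.contains(3))  # True
```

Note that nothing about the API tells you the set is a sorted list underneath; you could swap in a hash table, rewrite the abstraction function, and no caller would notice.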
It was, quite frankly, a pain in the butt to document all this. Especially because the documentation made no difference — it was just text that could say anything it wanted. In theory, you could replace the representation of any object with an equivalent representation that obeyed the invariants, adjust the abstraction function appropriately, and no one would be the wiser. In practice, no one ever did this. Not even once. So the documentation pointed out that the representation was simply the built-in Cartesian product that any decent language provided, and the abstraction function was the obvious enumeration of the elements of the product.
These days, of course, things are different. Drawing upon lessons learned in the 80's and 90's, people are using sophisticated IDLs to ensure their abstract data types maintain invariants across processes and languages and ... ha ha ha ha ha. Sorry.
No, these days people don't bother with abstract data types. They just use semi-abstract JSON objects that don't even provide the basic encapsulation of their 1970's counterparts. The representation isn't hidden at all: it is a string. You can make string-to-string mappings, aggregate them and nest them in arrays, but you cannot hide the fact that you've got a string. So much for abstraction.
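To see the point, a quick sketch using Python's standard `json` module (the account payload is invented for illustration):

```python
import json

# Nest string-to-string mappings inside arrays all you like; the
# "representation" you actually hand around is still just a string.
account = {"id": "42", "tags": ["admin", "beta"],
           "owner": {"name": "alice"}}
payload = json.dumps(account)

print(type(payload))  # <class 'str'>

# And any caller can see, and tamper with, the representation directly:
tampered = payload.replace("alice", "mallory")
print(json.loads(tampered)["owner"]["name"])  # mallory
```

No invariant check, no abstraction function, no encapsulation — just characters.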
It isn't even worth parsing the string to determine if it satisfies invariants. Chances are, you're not going to process it yourself, but instead are just going to hand it off to another process in string form and let it deal with it. And why not? At best you'll simply find that it is indeed well-formed. At worst, you'll raise an error now that will probably be raised shortly anyway. And is well-formed data really that important? It isn't like you're going to store it persistently...
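Concretely, the choice looks something like this sketch (Python again; the `forward` helpers are hypothetical):

```python
import json

def forward(payload: str) -> str:
    # Option 1: just pass the string along and let the next
    # process deal with whatever it contains.
    return payload

def forward_checked(payload: str) -> str:
    # Option 2: parse now, purely to learn the string is well-formed,
    # then throw the parse away and forward the string anyway.
    json.loads(payload)  # raises ValueError now instead of later
    return payload

forward('{"oops": ')  # happily forwards garbage
try:
    forward_checked('{"oops": ')
except ValueError as e:
    print("caught early:", e)
```

All the check buys you is moving the error one hop earlier, at the cost of a parse whose result you discard.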
2 comments:
Should this trend be called an Abstraction Heresy?
+42