> relational databases are basically
> 2-dimensional matrices with pointers...
> high-class spreadsheets. object and xml
> databases are more likely to accept the
> architectures and data models you're
> working with every day, and are more
> adaptable, flexible and customizable:
> you'll be able to apply extreme
> programming principles and add fields as
> you go, connect to other systems and
> maintain focus on your data model and
> your project, not that of the database
> and the best possible model that fits
> into that database.
> before choosing to ignore this
> suggestion and go back to relational
> databases, struggling to fit your
> designs into their data model, consider
> xml and object databases. import/export
> to and from RDBMSs is straightforward
> and can run in real time, so you truly
> garner the design/maintenance benefits
> of OODBMSs while retaining atomicity
> and existing relational tools, without
> sacrificing data quality or currency.
At the risk of starting a flame war: "object databases
are basically edge-labelled graphs with pointers,
high-class linked lists". It is simply a mistake to
suggest that object databases are inherently better
than relational databases, or the reverse. For some
problems the data is most conveniently modeled as
relations, for other problems the data is most
conveniently modeled as a graph. For other
problems the data is most conveniently modeled as
an n-dimensional array.
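To make that point concrete, here is a minimal sketch (the data and all names are invented for illustration) of the same information modeled as a relation, as a graph, and as a 2-dimensional array. Each view makes some queries easy and others awkward; none is the "real" one.

```python
# Hypothetical "who reports to whom" data, modeled three ways.

# 1. As a relation: a set of (employee, manager) tuples.
reports_to = {("alice", "carol"), ("bob", "carol"), ("carol", "dana")}

# 2. As a graph: adjacency lists, convenient for traversal.
graph = {}
for emp, mgr in reports_to:
    graph.setdefault(mgr, []).append(emp)

# 3. As a 2-D array: an adjacency matrix, convenient for bulk operations.
people = sorted({p for pair in reports_to for p in pair})
index = {p: i for i, p in enumerate(people)}
matrix = [[0] * len(people) for _ in people]
for emp, mgr in reports_to:
    matrix[index[emp]][index[mgr]] = 1

# "Who does alice report to?" is a lookup in the relation;
# "who are carol's reports?" is one step in the graph;
# "how many reporting edges exist?" is a sum over the matrix.
```
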
And a relation is not "just" a 2-dimensional matrix,
any more than an object oriented database is "just"
a collection of relations. What is true is that a
relation can be viewed as a two-dimensional array
for some purposes, just as a set of objects can be
viewed as a collection of relations for some
purposes, but there is no such thing as a
fundamental abstraction for which all other
abstractions are properly viewed as just a special
case of that fundamental one.
Here is an example of why such reductions are
absurd: A set is "just" a bag (or multiset) where the
count of each element is always <= 1. A bag is
"just" a function mapping its domain to the natural
numbers, a function is "just" a binary relation
satisfying xRy and xRz implies y=z, a binary relation
is "just" a set of pairs. Each reduction can be made
to seem quite logical, but the result is a cycle.
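The cycle of reductions above can be sketched in a few lines of plain Python; the representations and function names here are illustrative, not from any library.

```python
# A bag (multiset) viewed as a function from elements to the naturals,
# represented here as a dict with positive counts.
bag = {"a": 3, "b": 1}

# A set is "just" a bag whose counts never exceed 1.
def is_set(b):
    return all(count <= 1 for count in b.values())

# A function is "just" a binary relation R where xRy and xRz imply y == z.
def is_function(relation):
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

# And a binary relation is "just" a set of pairs -- so the bag above,
# viewed as its graph, becomes a set of pairs again, closing the cycle.
bag_as_relation = set(bag.items())
```

Each step looks like a harmless simplification, but chaining them brings you back to sets, which is exactly why none of these reductions identifies a privileged fundamental abstraction.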
Another way of saying this is that the relation "an X
can be viewed as a Y with blah" is not an order.
It isn't a functional relation either. You can
start with general graphs, restrict them to DAGs, then
to trees, then to sequences, in order to view a list as
a special kind of graph. Or you can view a list as a
1-dimensional array. Either view is legitimate.
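Both views of a list can be sketched directly; again, the data and names are invented for illustration.

```python
items = ["x", "y", "z"]

# View 1: a list as a restricted graph -- each node has at most
# one outgoing "successor" edge.
successor = {items[i]: items[i + 1] for i in range(len(items) - 1)}

# Walking the successor edges from the head recovers the sequence.
def walk(head, edges):
    out = [head]
    while out[-1] in edges:
        out.append(edges[out[-1]])
    return out

# View 2: a list as a 1-dimensional array -- positional indexed access.
def nth(array, i):
    return array[i]
```

Traversal-heavy problems fit the graph view; random access fits the array view. Neither representation is more fundamental than the other.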
There is no such thing as "an X is 'just' a Y with
blah"; it is an illusion created by an over-simplified
view of how abstractions relate.
Oh, and XML sucks.
Reprise from the author
A few comments to the comments:
First, thanks to all the helpful people who have pointed out systems similar to the one I described. I've been planning to do the survey work on this idea for a while now and you all have given me some useful pointers.
Second, I wish I had ended the article with a more concrete statement of my goals for writing the article. The point wasn't so much to introduce a new idea, the idea was old even 6 years ago when I was doing compiler research, rather it was to try to provoke some interest in the project in the open source community. The technical difficulty of the project would be large, but much larger is the political difficulty of generating interest in developers for using a new VM. Yet I think the open software community has shown enormous power in influencing software development, and has a vested interest in this sort of project.
Third, some comments to my critics:
Todd Fast wishes to berate me for not mentioning runtime code specialization. In the first place, I don't think any production JIT compilers do this; in the second place, this technique benefits only a small minority of programs; and in the third place, static optimization is still critical in systems that do implement this technique. All of which leaves my original position unaffected by your response. And may I suggest, Mr. Fast, that you are inviting flames by taking a quote out of context, introducing a completely new subject, and then urging the author to "learn more about such technologies before dismissing their advantages out of hand". I could hardly have dismissed these technologies when I never mentioned them.
jetson123 does not know what I mean by a "Java-style JIT compilation" and suggests different implementation strategies for Java. What I mean is a Just-In-Time compiler meant to improve the speed of a normal interpreter. You are probably correct when you say "...if you were to spend the enormous effort of coming up with a new VM ... you probably wouldn't do significantly better than the best current Java environments", but that is not my purpose anyway. My purpose
is to develop a strategy for distributing applications written in any language at all, Java, C++, C, Python, Perl, Sather, ML, Unicon, Janus, and other languages that have not even been invented yet, and to do so efficiently. And I have to disagree with you that the success of the current Java VM says anything at all about the technical merits of the approach. What it tells us is that Sun had a good marketing strategy and that the anti-Microsoft forces have considerable power when they focus their efforts.
In fact, I have a rather low opinion of Java technology in general (the language, the APIs, and the VM), and it is my hope that some day the open source community will embrace something better for a machine-independent platform. If you want to know why anyone might possibly not like Java, you may want to look at http://www.azstarnet.com/~dgudeman/javacrit.htm, although it is somewhat out of date.
Sesse suggests that a VM is not enough; one also needs to define an API and address the issue of revisions. He is correct on both points, but I didn't want to get too ambitious in one editorial... :)