SimpleDBM is an Open Source Embeddable Database Engine written entirely in Java.
Friday, September 30, 2005
Design choices - flexibility versus performance
A benefit of choosing this design is flexibility. For example, although there are different types of log records in the system, the Log Manager has no knowledge of them. Similarly, the Transaction Manager interacts with disk pages without knowing much about the structure of those pages. Even the Buffer Manager, which is responsible for caching disk pages in memory, knows nothing about the content of those pages, so it is possible to implement new page types without impacting the Buffer Manager. As a final example, the module I am currently working on, the BTree manager, has no knowledge of the types of index keys and rowids it has to handle. It relies upon a couple of externally provided factory classes to generate and manage these objects. I am therefore able to develop the BTree module before defining the table row structure, the field types, or even the field operations. From the BTree module's perspective, all that is needed is that keys should be sortable and support the form of serialization defined by SimpleDBM.
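To make this concrete, the contract the BTree module relies upon might look something like the sketch below. This is only an illustration; the interface names (Storable, IndexKey, IndexKeyFactory) and their methods are hypothetical stand-ins, not the actual SimpleDBM API.

```java
// Hypothetical sketch of the factory-based decoupling described above.
// The BTree module sees only these interfaces; concrete key types are
// supplied externally, possibly long after the BTree code is written.

/** Anything that can serialize itself to and from a byte buffer. */
interface Storable {
    void store(java.nio.ByteBuffer bb);
    void retrieve(java.nio.ByteBuffer bb);
}

/** A key is opaque to the BTree: it only needs ordering and serialization. */
interface IndexKey extends Storable, Comparable<IndexKey> {
}

/** Externally provided factory; the BTree never constructs keys directly. */
interface IndexKeyFactory {
    IndexKey newIndexKey();   // an empty key, ready for deserialization
}
```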
The downside to this flexibility is increased overhead. There is extra memory and processing overhead, as various object types must be created and managed dynamically using reflection. There is also the overhead of maintaining and looking up type information. For instance, when log records are retrieved during restart recovery and we need to replay changes that occurred within a BTree page, we must first obtain instances of the index key generation factory and the rowid generation factory. The log records must carry extra type information to allow this.
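The lookup might work roughly as follows. This is a hedged sketch: the ObjectRegistry class and the idea of a numeric type code stored in each log record are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry mapping persisted type codes to classes. Every log
// record must carry a type code, which is the "extra type information"
// overhead mentioned above; the reflective instantiation is the processing
// overhead.
class ObjectRegistry {
    private final Map<Short, Class<?>> types = new HashMap<>();

    void register(short typeCode, String className) throws ClassNotFoundException {
        types.put(typeCode, Class.forName(className));
    }

    Object newInstance(short typeCode) throws ReflectiveOperationException {
        Class<?> clazz = types.get(typeCode);
        if (clazz == null) {
            throw new IllegalStateException("Unknown type code: " + typeCode);
        }
        return clazz.getDeclaredConstructor().newInstance();
    }
}
```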
Testing
Writing test cases for a module often takes more time than coding the module itself. I do not think this is unusual, and any project that is serious about testing must factor in additional time and effort for this activity. In the company I work for, we never write automated tests; all testing is carried out by running the program, feeding it input, and checking the output manually. This form of testing is important as well, but it is totally inadequate on its own. A unit test written by the developer can exercise aspects of the code that a user or operator can never reach.
Of course, not all test cases are created equal. We often hear about projects that use a JUnit-based testing framework, but this in itself says nothing about how good the testing is. There are several things to look for:
- Test coverage - how much of the program has been subjected to testing?
- Test scenarios - a program is meant to be used in many different scenarios. How many of the possible scenarios have been tested?
- Test data - has the program been tested for variations in data?
For a project like SimpleDBM, testing is additionally complicated by two things:
- Liberal use of threads and multi-threading constructs complicates testing.
- Recovery operations require the system to be crashed in various ways.
It is not always possible to write test cases that are non-intrusive. Sometimes the program being tested must cooperate with the test framework. For example, in the BTree page split test case, I need to be able to crash the system at specific points in order to test recovery. To do this, I use flags within the BTree module that the test framework can manipulate to trigger this behaviour. For normal operations, the flags are switched off.
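A minimal sketch of such a test hook follows; the flag, the crash-point numbering, and the exception are invented for illustration, assuming the test and the BTree code run in the same JVM.

```java
// Hypothetical crash hook inside the BTree module. In normal operation the
// flag stays at 0 and the check costs almost nothing.
class BTreeImpl {
    /** 0 = off; non-zero values identify specific crash points. Tests only. */
    static volatile int crashPoint = 0;

    void splitPage(/* page, key, ... */) {
        // ... write the log record describing the split ...
        if (crashPoint == 1) {
            // Simulated crash: abandon processing after logging but before
            // the split is applied, forcing restart recovery to finish it.
            throw new SimulatedCrashException("crash point 1: after split logged");
        }
        // ... apply the split to the pages ...
    }
}

class SimulatedCrashException extends RuntimeException {
    SimulatedCrashException(String msg) { super(msg); }
}

// In the test:
//   BTreeImpl.crashPoint = 1;
//   try { /* force a page split */ } catch (SimulatedCrashException expected) { }
//   BTreeImpl.crashPoint = 0;
//   /* restart the system and verify that recovery completes the split */
```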
Thursday, September 15, 2005
Page oriented redo/undo versus Logical Undo
The term Logical Undo describes logging where the undo action may not always be applied to the page that generated the log record. A Logical Undo can still be page-oriented; the difference is that the undo action may be applied to a different page from the one originally modified. It is also possible for a logical undo operation to modify more than one page, and to generate additional log records.
Since a Logical Undo can be page-oriented, the term page-oriented undo is ambiguous.
Monday, September 05, 2005
In favour of Checked Exceptions
A good discussion of the problems with Checked Exceptions is in this article by Brian Goetz. The main problems can be summarized as:
1. Unnecessary exposure of implementation details.
2. Unstable method signatures.
3. Unreadable code.
4. Exception swallowing.
5. Too much exception wrapping.
Problems 1 and 2 are caused by poor design. By ensuring that exceptions are thrown at the right level of abstraction, these problems can be avoided. Of course, this means greater effort on the part of the programmer, and it also leads to problem 5, i.e., too much exception wrapping.
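Throwing at the right level of abstraction might look like the hedged sketch below; BTreeException and the methods shown are illustrative, not actual SimpleDBM code.

```java
// The caller sees a BTree-level checked exception rather than an
// IOException leaking up from the storage layer.
class BTreeException extends Exception {
    BTreeException(String msg, Throwable cause) { super(msg, cause); }
}

class BTree {
    private void writePage() throws java.io.IOException {
        // ... lower-level page I/O ...
    }

    void insert(byte[] key) throws BTreeException {
        try {
            writePage();
        } catch (java.io.IOException e) {
            // Wrap, adding context; this is the "exception wrapping" cost.
            throw new BTreeException("failed to insert key into index", e);
        }
    }
}
```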
As for problem 3, I think that any piece of code with proper error handling will always be less readable than code with no error handling. This is why so much textbook code has little or no error handling. In a production system, unlike in a textbook, readability is important, but proper error handling is even more important.
Since so many articles talk about the problems with Checked Exceptions, perhaps a novel approach to this issue is to look at the problems caused by Unchecked Exceptions. These, in my view, can be summarized as:
- Lack of information in method signatures: by looking at a method signature, you cannot tell which exceptions are likely to be thrown.
- The Java compiler will not warn you if you ignore exceptions. Therefore, rather than thinking about every possible exception, you will be blissfully unaware of potential exceptional conditions. The result is likely to be an unstable system, one that crashes with indecipherable errors.
- Loss of information. Since information about exceptional conditions is fairly local within a system, catching an exception at a lower level and wrapping it at a higher level is an opportunity to supply additional information for diagnosing the problem. Since unchecked exceptions pass through silently, all you get is the cryptic error from the lowest level of abstraction, with little contextual information other than the stack trace.
- Exception swallowing. Let us not forget that the most common method of swallowing exceptions, i.e., catching all Exceptions, will swallow Unchecked Exceptions too. I have seen programmers swallow exceptions because they find this an easy and simple way to fix that annoying NullPointerException (see the sketch after this list).
- Whether you like it or not, Java uses Checked Exceptions throughout its libraries. Therefore, even if you throw Unchecked Exceptions, chances are that programmers will still swallow exceptions, because they still need to deal with the Checked Exceptions thrown by the Java libraries.
- As it is, programmers do not like to think about error handling. Encouraging the use of Unchecked Exceptions encourages this behaviour, as there will be even less inclination to think about exceptional conditions.
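Here is a short sketch of the swallowing anti-pattern; the class and methods are invented for illustration.

```java
class Record {
    void validate() throws Exception { /* may throw a checked exception */ }
    void save() { /* a bug here may throw NullPointerException */ }
}

class Processor {
    // Anti-pattern: the catch-all swallows checked AND unchecked exceptions
    // alike, including the NullPointerException that signals a real bug.
    void processRecord(Record record) {
        try {
            record.validate();
            record.save();
        } catch (Exception e) {
            // Swallowed: the unchecked exception disappears along with the
            // checked one the programmer meant to ignore.
        }
    }
}
```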
If you are still not convinced, read this excellent interview with Bruce Lindsay, where he talks about the type of error handling that goes into building a DBMS.
Friday, September 02, 2005
BTrees and Logical Undos
Certain BTree operations require Logical Undo to be performed during rollbacks and restart recovery. This is because, in a BTree, keys can move from one page to another, and therefore an undo operation may have to be applied to a different page from the one where the original action took place.
To take an example, suppose transaction T1 deletes the key D from page P1. Another transaction T2 then comes along and inserts the key B into page P1. This causes a page split, after which all keys greater than B end up in a new page P2. Now, if the first transaction T1 aborts, it needs to reinstate the deleted key D. However, if it looks at page P1, it will find that the deleted key no longer belongs there. It must therefore locate the new page P2 and perform the undo action there.
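In code, the undo path might look roughly like the hedged sketch below. All the types and method names are invented, and the real logic (latching, writing compensation log records) is omitted.

```java
// Hypothetical types; the real structures will differ.
interface IndexKey { }

interface Page {
    boolean hasRoomFor(IndexKey key);
    void reinsert(IndexKey key);
}

interface Tree {
    Page findPageForKey(IndexKey key);        // re-traverse from the root
    void splitAndInsert(Page page, IndexKey key);
}

interface UndoLogRecord {
    IndexKey getDeletedKey();
}

class LogicalUndo {
    private final Tree tree;
    LogicalUndo(Tree tree) { this.tree = tree; }

    // Logical undo of a key delete: instead of trusting the page ID saved
    // in the log record, re-locate the page where the key now belongs.
    void undoKeyDelete(UndoLogRecord undoRec) {
        IndexKey key = undoRec.getDeletedKey();
        Page page = tree.findPageForKey(key); // may not be the original page
        if (page.hasRoomFor(key)) {
            page.reinsert(key);
        } else {
            // With physical deletes, the undo itself may trigger page splits.
            tree.splitAndInsert(page, key);
        }
        // A compensation log record would be written here (omitted).
    }
}
```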
There are a couple of design decisions that affect the complexity of the logical undo actions:
If the BTree is designed to physically remove keys when they are deleted, then undo actions require the key to be physically re-inserted. This means that during the undo of a key delete, page splits can occur if the insert cannot be performed because the page is full. The splits can recurse up the tree, causing further page splits.
The advantage of the physical removal option is that the BTree never contains superfluous data. If a large number of rows are deleted, the BTree is compacted immediately, as the pages used by deleted keys are freed. Keeping the BTree dense and small is good for performance, as fewer pages need to be scanned during traversals.
The other approach is to use logical key deletes, where the key is not physically removed but merely marked as deleted. To undo the delete, only the mark needs to be reset. Logical deletes allow undo operations to proceed without page splits, and therefore Logical Undos can be page-oriented.
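A hedged sketch of what a logically deleted entry might look like, with invented names:

```java
// Hypothetical slot layout for a logical key delete. The key stays in the
// page; only a flag changes, so the undo can never force a page split.
class IndexEntry {
    final byte[] key;
    boolean deleted;

    IndexEntry(byte[] key) {
        this.key = key;
        this.deleted = false;
    }

    void logicalDelete() { deleted = true; }   // "delete" = set the mark
    void undoDelete()    { deleted = false; }  // undo = reset the mark
}
```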
The problem with logical deletes, of course, is that these keys must be garbage collected at some point, and key traversals need to obtain instant locks on the deleted keys to determine whether they are truly deleted or not. The presence of a large number of deleted keys can also make the BTree less dense and lengthen the path for traversals.
It seems to me that the logical key delete approach is better suited to scenarios where rollbacks are frequent.
The issue of logical deletes is not discussed at any great length anywhere. There is a brief discussion of it in Mohan's paper Commit_LSN: A Novel and Simple Method for Reducing Locking and Latching in Transaction Processing Systems.
A BTree implementation that uses logical key deletes is in Apache Derby.
Thursday, September 01, 2005
Space Management
I am a firm believer in Checked Exceptions. I think Unchecked Exceptions should be used very rarely, and only after considerable thought. It seems to me that those who champion Unchecked Exceptions are primarily looking for clean code that is aesthetically pleasing and not cluttered with a lot of error handling code. While this may be desirable for improving readability, correct error handling is more important than readability, especially in a mission critical system. Unchecked Exceptions have the big problem that the Java compiler will not flag code that ignores them. You can therefore write code that happily ignores exceptions, and if this is part of a large system, believe me, bad things will happen.
I will write more on this subject in a separate post. The reason I bring it up is that in SimpleDBM almost all exceptions are checked exceptions, and while implementing the Space Manager module I encountered a problem that would not have occurred had I used unchecked exceptions. The Space Manager module is transactional, and therefore needs to implement the transactional redo/undo interfaces defined by the Transaction Manager. While implementing these interfaces, I found that because I was using checked exceptions, the existing redo/undo interfaces did not allow the Space Manager to throw exceptions unknown to the Transaction Manager. At first I thought I should expand the interfaces to allow this, but then I realized that doing so would break the modular structure of the system, as it would create a cycle in the dependency graph between the two modules. Moreover, the Transaction Manager has to be able to manage any number of transactional modules, and therefore must not know about the specifics of any one of them. If I allowed the TM to know about exceptions thrown by the Space Manager, the same would have to be done for every other module implemented in future!
The solution is to specify the redo/undo interfaces in such a way that modules implementing them can wrap any undeclared exceptions in a specific exception that is meant for this purpose. This way, we can continue to use and benefit from checked exceptions, without having to resort to unchecked exceptions.
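A hedged sketch of the arrangement follows. The interface and exception names are invented for illustration; the actual SimpleDBM interfaces will differ.

```java
// Hypothetical checked exception declared by the Transaction Manager's
// redo/undo interface. Modules wrap their own checked exceptions in it,
// so the TM never needs to know module-specific exception types.
class TransactionException extends Exception {
    TransactionException(String msg, Throwable cause) { super(msg, cause); }
}

interface TransactionalModule {
    void redo(/* Page page, Redoable logRec */) throws TransactionException;
    void undo(/* Transaction trx, Undoable logRec */) throws TransactionException;
}

// Module-specific checked exception, unknown to the Transaction Manager.
class SpaceMgrException extends Exception {
    SpaceMgrException(String msg) { super(msg); }
}

class SpaceManager implements TransactionalModule {
    private void applySpaceMapChange() throws SpaceMgrException {
        // ... module-specific work ...
    }

    public void redo() throws TransactionException {
        try {
            applySpaceMapChange();
        } catch (SpaceMgrException e) {
            // Wrap the undeclared exception instead of widening the TM's
            // interface, avoiding a dependency cycle between the modules.
            throw new TransactionException("error redoing space map update", e);
        }
    }

    public void undo() throws TransactionException {
        // ... similar wrapping ...
    }
}
```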