I am currently working on a problem for a customer that involves the selective backup and restore of rows in a relational database. Not easy. The application in question requires selected rows in 38 tables (think: project, phases, milestones, resources, etc) to be exported to a file (about 20,000 table rows per export), then that same data imported as a NEW dataset. That last bit is tricky, and I'll explain why later.
Built into Tornado is a class called pmaSerial, which basically turns a JDBC ResultSet into a lump of XML. So the export is quite easy; we just need to make sure we export the data in the correct order, i.e. parent tables first, followed by children. The result is a compressed XML file - actually the same basic format as .pmx (used to export Tornado apps).
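To give a feel for the export side, here is a minimal sketch of the idea: tables written parent-first, each row becoming an XML element. The class name, table names, and XML element names below are all invented for illustration; the real code goes through pmaSerial and a live JDBC ResultSet, and the real file format is Tornado's .pmx-style XML.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the export step. Rows are modelled as
// column-name -> value maps; in the real code this data would come
// from a JDBC ResultSet via pmaSerial.
public class ExportSketch {

    static String export(LinkedHashMap<String, List<Map<String, Object>>> tables) {
        StringBuilder xml = new StringBuilder("<export>\n");
        // LinkedHashMap preserves insertion order, so the caller lists
        // parent tables before their children - the ordering that matters
        // when the file is later imported.
        for (Map.Entry<String, List<Map<String, Object>>> t : tables.entrySet()) {
            xml.append("  <table name=\"").append(t.getKey()).append("\">\n");
            for (Map<String, Object> row : t.getValue()) {
                xml.append("    <row>");
                for (Map.Entry<String, Object> col : row.entrySet()) {
                    xml.append("<").append(col.getKey()).append(">")
                       .append(col.getValue())
                       .append("</").append(col.getKey()).append(">");
                }
                xml.append("</row>\n");
            }
            xml.append("  </table>\n");
        }
        return xml.append("</export>").toString();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, List<Map<String, Object>>> tables = new LinkedHashMap<>();
        tables.put("project", List.of(Map.of("id", 1)));       // parent first
        tables.put("phase", List.of(Map.of("project_id", 1))); // then children
        System.out.println(export(tables));
    }
}
```

In the real app the output is then compressed before being written to disk.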
The import is much more of a challenge. We use Tornado's XMLImportParser, which does most of the heavy lifting. When we import a row, we need to generate a NEW PRIMARY KEY so that the imported data is added alongside any data that might be in the database already. Of course, once a new primary key for a row is generated, all the child rows of that table break, because they refer to a primary key that no longer exists (or worse, the imported rows end up connected to existing rows in another project). To get around this, a map of translated keys is kept, and just prior to each row insertion the foreign keys in the row are translated through it.
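The key-translation idea can be sketched in a few lines. This is a hypothetical stand-in, not Tornado's actual code: the class and method names are invented, and the real logic hooks into XMLImportParser just before each row is inserted. It relies on the parent-first export order, which guarantees a parent's new key is already in the map by the time any child row referencing it arrives.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the key-translation map used during import.
public class KeyTranslator {

    // Maps "table:oldKey" -> freshly generated key, one entry per imported row.
    private final Map<String, Long> translated = new HashMap<>();
    private long nextKey;

    KeyTranslator(long firstFreeKey) { this.nextKey = firstFreeKey; }

    // Called when a row of `table` arrives with primary key `oldKey`:
    // allocate a new key and remember the old->new mapping.
    long newPrimaryKey(String table, long oldKey) {
        long fresh = nextKey++;
        translated.put(table + ":" + oldKey, fresh);
        return fresh;
    }

    // Called for each foreign key column just prior to insertion. Because
    // parent tables were exported (and so imported) first, the parent's
    // new key is already in the map.
    long translateForeignKey(String parentTable, long oldKey) {
        Long fresh = translated.get(parentTable + ":" + oldKey);
        if (fresh == null) {
            throw new IllegalStateException(
                "Foreign key into " + parentTable + " seen before its parent row");
        }
        return fresh;
    }

    public static void main(String[] args) {
        KeyTranslator t = new KeyTranslator(1000);
        long newProjectKey = t.newPrimaryKey("project", 1);  // parent row first
        long childFk = t.translateForeignKey("project", 1);  // child's FK follows
        System.out.println(newProjectKey + " " + childFk);
    }
}
```

The same trick is what keeps the imported rows from accidentally attaching themselves to an existing project that happens to share the old key values.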
The good news is that it works well, even between different database types (e.g. PostgreSQL to MySQL); the bad news is that it is sloooooow. And the testing is not fun :-(