Forum: Building VoltDB Applications

Post: Multi- vs single-partitioned procedures for large updates

pettermahlen
Jan 22, 2013
Hi,

I was wondering which is likely to be faster if you want to update, say, a couple of million rows out of a couple of hundred million: a straightforward multi-partitioned procedure (passing in the couple of million primary keys), or sending the whole set of primary keys to each partition with a single-partitioned procedure and letting it simply skip the keys that aren't present on that partition?
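
To make the second option concrete, here's roughly what I have in mind; just a sketch, with the table, columns, and routing parameter all made up. Each partition would run this with the full key array, and an UPDATE for a key stored on another partition simply matches zero rows:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Hypothetical single-partition procedure: assumed to be declared
// single-partition on items.id and routed by its first parameter.
public class UpdateStatusByKeys extends VoltProcedure {

    // Assumed schema: table "items", partitioned on its primary key "id".
    public final SQLStmt update = new SQLStmt(
            "UPDATE items SET status = ? WHERE id = ?;");

    public long run(long routingKey, int newStatus, long[] ids) {
        long updated = 0;
        int queued = 0;
        for (long id : ids) {
            voltQueueSQL(update, newStatus, id);
            if (++queued == 100) {                   // flush in modest batches
                for (VoltTable t : voltExecuteSQL()) {
                    updated += t.asScalarLong();     // rows modified per statement
                }
                queued = 0;
            }
        }
        if (queued > 0) {
            for (VoltTable t : voltExecuteSQL(true)) {
                updated += t.asScalarLong();
            }
        }
        return updated;                              // rows actually updated here
    }
}
```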

Obviously, the transactional characteristics are different, but I'm wondering primarily about performance. I'm guessing the update could take a fairly significant amount of time, which means the multi-partition version would block other queries while it runs.

Not sure if the question is clear enough; let me know if more detail is needed. I suppose the answer depends on the ratio of multi- to single-partition queries in the rest of the workload?

Cheers,
Petter
aweisberg
Jan 23, 2013
Hi,

The fastest thing to do is to hash the keys and direct single-partition calls to the right partitions, each carrying only the keys for its partition. That actually sounds like a good idea for the API: a single-partition transaction with an array partition parameter, where the server hashes all the partition keys and coalesces them into one single-partition call per partition.
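
Roughly, the client-side version of that looks like the sketch below. partitionOf() is only a stand-in for whatever hashing actually matches the server's partitioning of the key column, and UpdateStatusByKeys is the hypothetical procedure from the first post (routed by its first parameter):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.voltdb.client.Client;
import org.voltdb.client.ClientFactory;

public class GroupedSinglePartUpdates {

    // Stand-in only: this has to match how the server hashes the key column.
    static int partitionOf(long key, int partitionCount) {
        return (int) (Math.abs(key) % partitionCount);
    }

    public static void main(String[] args) throws Exception {
        Client client = ClientFactory.createClient();
        client.createConnection("localhost");

        int partitionCount = 8;               // assumed cluster topology
        long[] keysToUpdate = loadKeys();     // the couple of million keys
        int newStatus = 1;

        // Group the keys by the partition they should land on.
        Map<Integer, List<Long>> byPartition = new HashMap<Integer, List<Long>>();
        for (long key : keysToUpdate) {
            int p = partitionOf(key, partitionCount);
            List<Long> group = byPartition.get(p);
            if (group == null) {
                group = new ArrayList<Long>();
                byPartition.put(p, group);
            }
            group.add(key);
        }

        // One single-partition call per partition, routed by any key in its group.
        for (List<Long> group : byPartition.values()) {
            long[] ids = new long[group.size()];
            for (int i = 0; i < ids.length; i++) {
                ids[i] = group.get(i);
            }
            client.callProcedure("UpdateStatusByKeys", ids[0], newStatus, ids);
        }
        client.drain();
        client.close();
    }

    static long[] loadKeys() { return new long[0]; } // placeholder for the real key source
}
```

Because every key in a group hashes to the same partition as ids[0], each call stays single-partition and touches only local rows.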

The multi-partition version will be slower because it takes several round trips to run the SQL. How much slower is a good question; it really depends on whether the batch size is large enough that round-tripping the batches isn't the dominant cost. Cluster size also affects where the break-even point falls, since coordination costs go up as the number of partitions increases.
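
To make the contrast concrete, the multi-partition variant is the same update loop with no routing parameter (again just a sketch against the same made-up schema); each voltExecuteSQL() below is a batch that has to be coordinated across every partition, which is where the extra round-trip cost shows up:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Multi-partition counterpart of the single-partition sketch: same assumed
// schema, but no routing parameter, so each batch below executes as part of
// one cluster-wide transaction.
public class UpdateStatusMulti extends VoltProcedure {

    public final SQLStmt update = new SQLStmt(
            "UPDATE items SET status = ? WHERE id = ?;");

    public long run(int newStatus, long[] ids) {
        long updated = 0;
        for (int i = 0; i < ids.length; i++) {
            voltQueueSQL(update, newStatus, ids[i]);
            boolean lastKey = (i == ids.length - 1);
            if ((i + 1) % 100 == 0 || lastKey) {      // flush in modest batches
                for (VoltTable t : voltExecuteSQL(lastKey)) {
                    updated += t.asScalarLong();      // rows modified per statement
                }
            }
        }
        return updated;
    }
}
```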

What you should do as an application developer depends on whether you are running a batch workload or a latency sensitive workload.

In a batch workload you should do the single-parts-everywhere trick (soon to be formalized in an API) with all the keys. I am working on elastically adding/removing partitions right now, and once that is enabled, hashing the keys at the client to do less work in the DB will not work 100% reliably: if the cluster adds a new partition while you are still using the old hash function, you may fail to reach all rows in the database (or reach some rows twice). You can either tolerate that, or detect it by tracking the number of rows updated, checking that it matches what you expect, and taking corrective action if it doesn't.
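
The detect-and-correct idea might look something like this sketch, assuming the hypothetical UpdateStatusByKeys procedure returns the number of rows it actually touched on its partition:

```java
import java.util.Map;

import org.voltdb.client.Client;
import org.voltdb.client.ClientResponse;

public class VerifiedUpdate {

    // keysByPartition is the client-hashed grouping (partition -> its keys);
    // the procedure name and its return value (rows updated) are assumptions.
    static void updateWithVerification(Client client,
                                       Map<Integer, long[]> keysByPartition,
                                       int newStatus,
                                       long expectedRows) throws Exception {
        long updated = 0;
        for (long[] ids : keysByPartition.values()) {
            ClientResponse resp = client.callProcedure(
                    "UpdateStatusByKeys", ids[0], newStatus, ids);
            if (resp.getStatus() != ClientResponse.SUCCESS) {
                throw new RuntimeException(resp.getStatusString());
            }
            updated += resp.getResults()[0].asScalarLong();
        }
        if (updated != expectedRows) {
            // Likely a stale hash function (e.g. a partition was added mid-run):
            // re-hash against the current topology and retry the shortfall, or
            // fall back to the multi-partition version for the remainder.
            System.err.println("expected " + expectedRows
                    + " rows, updated " + updated);
        }
    }
}
```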

In a latency-sensitive workload you should do each update individually so the work can be interleaved with the rest of the workload. You can split the difference for better throughput by batching a few thousand keys at a time (you also want to hash them so they route directly to the right partition).
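
For the latency-sensitive case, the simplest shape is one asynchronous call per key, something like the sketch below (UpdateStatus is an assumed single-row procedure partitioned on the key column). Batching a few thousand keys per call is the same pattern with the array-based procedure above instead of the single-row one:

```java
import org.voltdb.client.Client;
import org.voltdb.client.ClientResponse;
import org.voltdb.client.ProcedureCallback;

// Latency-friendly variant: each key becomes its own single-partition
// transaction, so the updates interleave with the rest of the workload.
public class InterleavedUpdates {

    public static void updateAll(Client client, long[] ids, int newStatus)
            throws Exception {
        for (long id : ids) {
            client.callProcedure(new ProcedureCallback() {
                @Override
                public void clientCallback(ClientResponse resp) {
                    if (resp.getStatus() != ClientResponse.SUCCESS) {
                        System.err.println("update failed: " + resp.getStatusString());
                    }
                }
            }, "UpdateStatus", id, newStatus);       // one row per transaction
        }
        client.drain();                              // wait for outstanding calls
    }
}
```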

When in doubt, doing single-row updates is the way to go. You can always add more nodes to get more throughput.

Ariel
pettermahlen
Jan 25, 2013
Cool, I'm looking forward to the API support for 'single partitions everywhere'. In the meantime, doing single-row updates sounds like a pretty good option too; that should actually work quite well for us.

Thanks!