Jan 13, 2014
I doubt that any of this will work out of the box (though I could be wrong). VoltDB provides a JDBC interface, a set of native drivers in various languages, and a bulk CSV loader that can all load data into VoltDB. What advantages of Sqoop are you looking to leverage when loading data? If you tell us what you're trying to achieve, perhaps we can help figure out a good way to approach this.
Similarly, if Sqoop can query over JDBC, it might be able to do some of its work running queries against VoltDB, but I would worry about its efficiency. Again, what are you trying to accomplish? Are you using Sqoop to move data into HDFS? Is VoltDB replacing a system that already uses Sqoop to connect to HDFS?
Jan 15, 2014
Thanks much for the reply. My problem statement is as follows:
We have an application that keeps appending a huge number of rows to a file in HDFS every 2 seconds. I want a mechanism to bulk load this data into specific partitions in VoltDB. Since Sqoop internally runs MapReduce jobs to process this data in parallel and load it into the underlying database, I thought it would be an ideal choice. Is there a better way to do this?
Similarly, for the querying part, I have a KPI Engine that runs queries on VoltDB (in a cluster). I need to be able to run the query on each of the partitions in VoltDB, collate the results, and give them back to my Engine. Here again I felt Sqoop would be the right choice, as it runs MapReduce jobs and collates results.
Further queries on integrating sqoop and voltdb
Jan 17, 2014
In addition to my above queries, I came across the below 2 links on integrating Sqoop and VoltDB.
These 2 links talk about exporting data in near real time from VoltDB to a file system or database. However, I am looking for a similar mechanism to load data from external sources into VoltDB. Is there such a mechanism available?
Thanks in advance,
Jan 17, 2014
> I need to be able to run the query on each of the partitions in voltdb, collate the results and give back to my Engine.
You have 2 choices. You can run a multi-partition transaction, which VoltDB automatically executes on all partitions, automatically handling the results from each and returning a single result set to the caller. Or you can run your query as a single-partition transaction and invoke it once for each partition; in this strategy your client is responsible for processing each result from each partition.
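A rough sketch of the second strategy, client-side fan-out and collation. Here `query_partition` is a placeholder returning canned data, not a real VoltDB API; in real code it would be a single-partition stored-procedure call routed by a partition-key value:

```python
# Sketch: fan a single-partition query out to every partition and
# collate the per-partition results on the client side.
# query_partition() is a stub standing in for a VoltDB
# client call keyed to one partition.

def query_partition(partition_key):
    # Placeholder: a real implementation would invoke a
    # single-partition stored procedure with a key that
    # routes the call to this partition.
    sample_data = {
        0: [("sensor-a", 42)],
        1: [("sensor-b", 17), ("sensor-c", 99)],
        2: [("sensor-d", 8)],
    }
    return sample_data[partition_key]

def collate(partition_keys):
    # Merge the per-partition result sets into one list, which is
    # what a multi-partition transaction does for you server-side.
    results = []
    for key in partition_keys:
        results.extend(query_partition(key))
    return results

print(collate([0, 1, 2]))
```

With the multi-partition strategy, this merge step disappears from your client entirely; VoltDB returns the combined result set.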
As for bulk loading, we currently have our command-line utility, the CSV loader, and our sqlcmd utility, which can be fed a script. Of course we have JDBC as well, but for the high throughput you are looking for, you may want to write a custom loader for your specific needs. If you do, consider open sourcing it (and contributing it); we'd love to add an HDFS bulk loader utility to our list of utilities.
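The core loop of such a custom loader might look like the following sketch: read rows from a file, group them into fixed-size batches, and hand each batch to an insert callback. `send_batch` is a stub; a real loader would issue a VoltDB insert procedure call or a JDBC batch there:

```python
# Sketch of a custom bulk loader's batching loop. send_batch() is a
# stub for the actual VoltDB insert (stored procedure or JDBC batch).

BATCH_SIZE = 1000

def load_file(lines, send_batch, batch_size=BATCH_SIZE):
    batch = []
    total = 0
    for line in lines:
        # Parse one CSV-ish row; a real loader would validate types here.
        batch.append(line.rstrip("\n").split(","))
        if len(batch) >= batch_size:
            send_batch(batch)
            total += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        send_batch(batch)
        total += len(batch)
    return total

# Example with a stub that just collects batches:
batches = []
total = load_file(("row%d,%d\n" % (i, i) for i in range(2500)),
                  batches.append, batch_size=1000)
print(total, len(batches))
```

The batch size is a tuning knob: larger batches amortize round trips, smaller ones bound memory and latency per call.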
Further queries on bulk loader for voltdb and queries on multiple partitions
Jan 19, 2014
Thanks a lot for your reply.
You have mentioned that I have 2 choices: either run a multi-partition transaction, which VoltDB automatically executes on all partitions, automatically handling the results from each and returning a single result set to the caller; or run the query as a single-partition transaction and invoke it for each partition, in which case my client would be responsible for processing each result from each partition.
Which approach would you recommend, i.e. which one will give me the best performance? My system is very performance intensive. If I run a multi-partition transaction, will it be very network intensive and chatty?
I am also very eager to develop a bulk loader utility to load from HDFS or the local file system; however, I don't know how to go about doing this. Can you please give me some pointers to documentation on how to develop custom loaders for VoltDB?
Jan 21, 2014
Multi-partition queries that execute a single batch of SQL are efficient and not chatty. You can return result sets of up to 50MB from VoltDB this way. Another, more scalable way to do this with VoltDB is to use the export framework. In the export pattern, you define a table to export, and as you find candidate data that needs to be extracted from the system (this can include roll-ups, view-calculated rows, and SQL-processed data), you write that data to the export table (using SQL INSERT).
Data written to the export table will be pushed to a JDBC connection or written to a flat file by VoltDB.
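The export pattern above can be sketched as follows. `insert_into_export` is a stub standing in for the SQL INSERT into the export table (e.g. issued from a stored procedure); the names and the roll-up shape are illustrative assumptions, not part of the VoltDB API:

```python
# Sketch of the export pattern: compute candidate data (here a
# simple per-key roll-up) and write each aggregate row to the
# export table. insert_into_export() is a stub for
# "INSERT INTO my_export_table VALUES (...)".

exported = []

def insert_into_export(row):
    # Placeholder: a real stored procedure would INSERT this row
    # into the export table; VoltDB then pushes it downstream
    # (to a JDBC connection or flat file).
    exported.append(row)

def roll_up(readings):
    # Aggregate (key, value) pairs and export one row per key.
    totals = {}
    for key, value in readings:
        totals[key] = totals.get(key, 0) + value
    for key, total in sorted(totals.items()):
        insert_into_export((key, total))
    return totals

roll_up([("a", 1), ("b", 2), ("a", 3)])
print(exported)
```

The point of the pattern is that the client never pulls the large result set; VoltDB streams the exported rows out asynchronously.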
The CSVLoader is open source. You can look at how it does bulk loading into VoltDB as an example for writing a loader.