Forum: VoltDB Architecture

Handling terabytes of data
raich
Nov 9, 2010
Hi,


We have an application that grows by roughly 31.5 GB per month, with about 15 million rows of data collected every day. VoltDB's white paper mentions 'in-memory processing'. As you can see, our data may grow to the order of terabytes within 3-4 years. In such a scenario it's not clear how VoltDB would manage such a large volume of data, even if we assume that we scale out rapidly.
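

Just to put numbers on it, here is a rough back-of-the-envelope projection from the figures above (the ~70 bytes per row is only inferred from those figures, not measured):

# Back-of-the-envelope projection from the figures above (illustrative only).
ROWS_PER_DAY = 15_000_000
GB_PER_MONTH = 31.5

avg_row_bytes = GB_PER_MONTH * 1e9 / (ROWS_PER_DAY * 30)  # roughly 70 bytes/row
gb_per_year = GB_PER_MONTH * 12                           # roughly 378 GB/year

print(f"average row size: ~{avg_row_bytes:.0f} bytes")
for years in (3, 4):
    print(f"after {years} years: ~{gb_per_year * years / 1000:.1f} TB")
# after 3 years: ~1.1 TB
# after 4 years: ~1.5 TB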


Thanking you all in advance.


raich
re: Handling terabytes of data
tcallaghan
Nov 17, 2010
Raich,


VoltDB is an in-memory database so you'll need enough collective RAM in your cluster to hold all the table and index data for your application. You can build a cluster with lots of RAM and we will use it effectively (we do not have a limit on how much memory we can use).
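

For a rough sense of scale, here is an illustrative sizing sketch. The average row size, index overhead factor, and retention window below are assumptions you would replace with measurements from your own schema; k-safety is our replication setting, where the cluster keeps K+1 copies of each partition.

# Illustrative cluster RAM sizing (assumed numbers, not measurements of a real schema).
AVG_ROW_BYTES  = 70                   # assumed average row size
INDEX_OVERHEAD = 1.5                  # assumed factor for indexes + per-tuple overhead
K_SAFETY       = 1                    # cluster keeps K+1 copies of each partition
RETAINED_ROWS  = 15_000_000 * 365     # e.g. keep one year of rows in memory

data_and_indexes = RETAINED_ROWS * AVG_ROW_BYTES * INDEX_OVERHEAD
cluster_ram      = data_and_indexes * (K_SAFETY + 1)

print(f"data + indexes: ~{data_and_indexes / 1e9:.0f} GB")
print(f"total cluster RAM at k-safety={K_SAFETY}: ~{cluster_ram / 1e9:.0f} GB")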


Having said that, our focus is OLTP performance, and we provide a means, via our "export" functionality, to move static data into other data storage solutions.
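

To make that concrete, the hot/cold split that export facilitates usually looks something like the sketch below. This is a generic illustration, not our export API; the store objects and their methods are placeholders.

# Generic sketch of a hot/cold data split: recent rows stay in the in-memory
# OLTP tier, older rows move to bulk storage for analytics or archival.
# The store objects and their methods are placeholders, not VoltDB APIs.
from datetime import datetime, timedelta, timezone

HOT_WINDOW = timedelta(days=90)  # e.g. keep the most recent 90 days in memory

def age_out(oltp_store, archive_store):
    """Copy rows older than HOT_WINDOW to bulk storage, then drop them from OLTP."""
    cutoff = datetime.now(timezone.utc) - HOT_WINDOW
    old_rows = oltp_store.fetch_older_than(cutoff)   # placeholder method
    archive_store.bulk_insert(old_rows)              # placeholder method
    oltp_store.delete_older_than(cutoff)             # placeholder method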


I need to better understand your use case to advise on whether VoltDB is a complete or partial solution for you. Specifically:


1. Is all the data required for the OLTP component of your application?
2. If not, are there OLAP requirements for older data?
3. How do you manage both types of data right now?


If you are more comfortable discussing this privately, feel free to email me at tcallaghan@voltdb.com.


-Tim