Million-dollar performance for less than $100K – impossible, right?
Not impossible. Pervasive.
With Pervasive RushAnalytics™ you can access data; check, cleanse, transform, and persist it; and then analyze it, all in one workflow with one product. Then you can execute that workflow on virtually any hardware environment, including Hadoop clusters, without redesigning it.
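To make that concrete, here's a minimal sketch of what a single access-check-cleanse-analyze pass looks like. It uses plain Java streams as a stand-in for the workflow's dataflow operators; the names below are illustrative placeholders, not the actual RushAnalytics API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: plain Java streams standing in for dataflow operators.
// A real RushAnalytics workflow would read from Flume/HBase/files and run
// these stages in parallel across the cluster.
public class WorkflowSketch {
    record Event(String source, long bytes) {}   // toy record type

    public static void main(String[] args) {
        // Access: hard-coded here; a real source would be Flume or HBase.
        List<Event> raw = List.of(
                new Event("10.0.0.1", 512),
                new Event("", 128),              // dirty record: missing source
                new Event(" 10.0.0.2 ", 2048));

        // Check -> cleanse/transform -> analyze, all in one pass over the data.
        double avgBytes = raw.stream()
                .filter(e -> !e.source().isBlank())                // check: drop invalid rows
                .map(e -> new Event(e.source().trim(), e.bytes())) // cleanse: normalize fields
                .collect(Collectors.averagingLong(Event::bytes));  // analyze: aggregate

        System.out.printf("Average bytes per valid event: %.1f%n", avgBytes);
    }
}
```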
How long do workflows like that take to execute?
Performance varies wildly, of course, based on your data, your hardware, your cluster or server configuration, and a hundred other factors. But here are some numbers to get you into the right ballpark. On less than $100K of industry-standard hardware in a 3-node cluster with HBase:
- Ingest (Flume), parse, persist (HBase), check, transform, and analyze NetFlow data for network optimization and cybersecurity at 3 million events/sec.
- Run a simple query on log files for operational intelligence at >40 million recs/sec. (No, that’s not a typo.)
- Run an ETL load of 18 billion rows of TPC-H lineitem data at 3 TB/hour; then run Query 1, a full-table scan and aggregation, for business intelligence at >30 million recs/sec.
- Analyze MalStone B weblog data at 4 TB/hour, aka >10 million recs/sec (see the quick arithmetic below).
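That last conversion depends on record size. As a sanity check, here's the back-of-envelope arithmetic; the ~100-byte average record is our assumption for illustration, not a figure from the results above:

```java
public class ThroughputMath {
    public static void main(String[] args) {
        // Assumption: MalStone B weblog records average ~100 bytes each.
        double bytesPerHour = 4e12;                 // 4 TB/hour
        double bytesPerSec  = bytesPerHour / 3600;  // ~1.1e9 bytes/sec
        double recsPerSec   = bytesPerSec / 100;    // records per second
        System.out.printf("~%.1f million recs/sec%n", recsPerSec / 1e6);
        // Prints ~11.1 million recs/sec, consistent with ">10 million" above.
    }
}
```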
This is the level of performance that customers of DataRush-powered solutions like RushAnalytics take for granted, while the rest of the world says, "That's impossible." What could you accomplish with power like this at your disposal?