A new kind of Hadoop database was born to meet the need to scale an RDBMS to the petabyte level on Hadoop infrastructure. It keeps the features of an RDBMS while scaling out like a NoSQL store. The product, Splice Machine, has released its major version 1.0 as the first Hadoop RDBMS on the market. This means code changes can be greatly minimised when moving existing SQL-based applications onto Hadoop infrastructure. The two-year-old startup, led by CEO and co-founder Monte Zweben, has raised US$19 million from Mohr Davidow Ventures and InterWest Partners. It claims to be a real-time transactional SQL-on-Hadoop database and a direct competitor to the SQL giant Oracle.
A trial version can be downloaded by filling out the form at http://www.splicemachine.com/product/download/.
The download link will be emailed to your inbox and expires after two hours.
Splice Machine can be installed in two modes:
- Standalone: a commodity machine should have at least 8 GB of RAM, with 4+ GB available, and at least 3x as much free disk as the data you intend to load. It can be installed on Mac, Windows with Cygwin, or Linux.
- Cluster: each commodity machine should have at least 15 GB of RAM and at least 3x as much free disk as the data you intend to load. It can only be installed on Linux, on platforms such as Cloudera CDH 4.x/5.x, Hortonworks HDP 2.x and MapR 4.x.
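The requirements above boil down to a simple arithmetic check. A minimal sketch of a pre-install sanity check for the standalone mode, where the function name and its inputs are illustrative (the 8 GB and 3x thresholds come from the requirements listed above):

```shell
#!/bin/sh
# Hypothetical helper: does a host meet the standalone-mode requirements?
# Args: total RAM (GB), free disk (GB), data you intend to load (GB).
meets_standalone_reqs() {
  ram_gb=$1; free_disk_gb=$2; data_gb=$3
  # At least 8 GB RAM, and at least 3x the data size in free disk.
  [ "$ram_gb" -ge 8 ] && [ "$free_disk_gb" -ge $((3 * data_gb)) ]
}

# Example: 16 GB RAM, 300 GB free disk, planning to load 50 GB of data.
meets_standalone_reqs 16 300 50 && echo "ok: host qualifies" || echo "insufficient resources"
# prints "ok: host qualifies"
```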
Version 1.0 brings a number of new features; for details, see http://doc.splicemachine.com:
- Native Backup and Recovery
- User/Role Authentication and Authorization
- Parallel, Bulk Export
- Data Upsert
- Management Console for Explain Trace
- MapReduce Integration
- HCatalog Integration
- Analytic Window Functions
- Log Capture Capability
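Of the features above, analytic window functions follow standard ANSI SQL syntax. A minimal sketch of the kind of query this enables, demonstrated here against SQLite (3.25+) purely to show the syntax; the employees table and its data are illustrative, not taken from Splice Machine's documentation:

```python
import sqlite3

# Build a throwaway in-memory table to run a window-function query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (dept TEXT, name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("eng", "alice", 120), ("eng", "bob", 100), ("sales", "carol", 90)],
)

# Rank employees by salary within each department -- the classic
# PARTITION BY ... ORDER BY analytic window function.
rows = conn.execute(
    """
    SELECT dept, name,
           RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS salary_rank
    FROM employees
    ORDER BY dept, salary_rank
    """
).fetchall()

for dept, name, rank in rows:
    print(dept, name, rank)
```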