Estimated reading time: 2 minutes  |  in Business Technology

There has been a lot of talk about in-memory databases in recent months, initially as a way to speed up analytical applications (not an entirely new idea – most databases already offer in-memory OLAP analysis, for example). Recently the discussion has broadened, with SAP trying to position its in-memory HANA database as a revolutionary replacement for traditional relational databases (RDBMS) – in particular arch-enemy Oracle's database.

Good time for a reality check. Relational databases are used in virtually every commercial application that exists today, so they are not going away anytime soon. A more likely development is that RDBMS will start to adopt some of the technologies of in-memory databases, allowing a large share of existing applications to get most of the benefit of in-memory databases without a big investment in moving to a database with completely different characteristics and therefore likely incompatibilities. We have seen this before, when object, OLAP, and native XML databases were hot: regular RDBMS got some of the same capabilities, such as the ability to index XML content and query it through XQuery. My money is on a similar development this time as well. At the same time, in order to get widespread usage, in-memory databases will have to adopt a lot of the characteristics of relational databases, such as persistence for backup and recovery purposes, ACID transactions, SQL support and more.
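
As an aside, here is a minimal sketch of what that absorbed XML capability looks like in practice. The connection details, the orders table and its doc XML column are illustrative assumptions, not something from a specific product documentation, and the exact SQL/XML syntax varies by vendor.

    # Illustrative sketch only: the "orders" table, its XML column "doc" and the
    # connection details are assumptions made up for this example.
    import oracledb  # python-oracledb driver, assumed to be installed

    conn = oracledb.connect(user="demo", password="demo", dsn="localhost/XEPDB1")
    cur = conn.cursor()

    # XMLQuery/XMLCast come from the SQL/XML standard; the engine evaluates an
    # XQuery expression against XML stored in an ordinary relational column.
    cur.execute("""
        SELECT XMLCast(
                 XMLQuery('/order/customer/text()' PASSING doc RETURNING CONTENT)
                 AS VARCHAR2(100))
        FROM   orders
    """)
    for (customer,) in cur:
        print(customer)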

So, should we be happy about in-memory databases? Yes, we should. If nothing else, they will push vendors to introduce new capabilities (like the ability to load parts of a database, or the entire database, into memory) into the databases we all use today. Will the in-memory databases emerging today replace the existing widespread relational databases? No, they won't. Although I would not exclude the possibility that one or two of them, over time, and provided they become more similar to relational databases, manage to establish at least a noticeable market share.
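
To make that "load into memory" idea concrete, here is a minimal sketch using SQLite, which ships with Python. The file name app.db is an illustrative assumption, and real vendor implementations of in-memory operation differ considerably.

    import sqlite3

    # Open (or create) an ordinary on-disk database plus an empty in-memory one.
    disk_db = sqlite3.connect("app.db")   # "app.db" is an assumed example file
    mem_db = sqlite3.connect(":memory:")

    # Copy the entire on-disk database into RAM; subsequent queries are served
    # from memory, which is the basic promise of in-memory databases.
    disk_db.backup(mem_db)
    disk_db.close()

    objects = mem_db.execute("SELECT count(*) FROM sqlite_master").fetchone()[0]
    print(f"{objects} schema objects now held entirely in memory")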

4 Responses

  1. James Webber

    Sorry Dan, but we take all of our IFS data out of the RDBMS and populate in-memory OLAP cubes (IBM Cognos TM1). Unless Oracle gets a lot more flexible, I can’t see this changing anytime soon.

  2. Ben Hill

    Dan, that’s a very common point of view from the majority working solely with RDBMS solutions (including for analytics). But users of true in-memory technology such as IBM Cognos TM1, Jedox Palo and Daptech Keystone, which couple “in-memory” with “cube” technology (not imitations like MS PowerPivot), will disagree on the whole. These systems serve a different purpose from an RDBMS by focusing on analytics as a whole function. They do it damn well and are implemented in a fraction of the time it takes using traditional relational technology, as proven by OLAP surveys (The OLAP Report, The BI Verdict) time and time again.

  3. Dan Matthews

    Ben, James – I fully agree that for analytical applications in-memory technology works very well. However useful in that context, I don’t think in-memory databases will replace RDBMS for generic use anytime soon. They will likely increase competition and thus push the existing RDBMSes in a good direction, for the benefit of us all.

  4. Robert Young

    Ummm. In-memory (or on-SSD-as-primary-store) databases are useful, and efficient, only if the schemas are fully normalized. Otherwise, you’re still doing flatfile-in-engine, coding-centric applications as has been done from COBOL to Java. When random I/O is (nearly) as cheap as sequential, sequential has nothing to offer but bloat. The cost, expressed as $$$/database not bytes, for in-memory and SSD will never match spinning rust. You have to reduce the footprint, and Dr. Codd has shown us how to do that.

