Introduction to NoSQL and Cassandra, part 2

In part 1 of this talk I presented a few of the theoretical concepts behind NoSQL and Cassandra.

In this talk we take a deep dive into the Cassandra API and implementation. The video is again in Hebrew, but the slides are multilingual ;-)

  • Started with a short recap of some RDBMS and SQL properties, such as ACID, and why SQL is very programmer friendly but also limited in its support for large-scale systems.
  • Short recap of the CAP theorem
  • Short recap of what N/R/W are
  • Cassandra Data Model: Cassandra is a column-oriented DB that follows a data model similar to Google’s BigTable
  • Do you know SQL? Then you’d better start forgetting it; Cassandra is a different game.
  • Vocabulary:
    • Keyspace – a logical namespace for application data. For example – a Billing keyspace, a Statistics keyspace, an appX keyspace, etc.
    • ColumnFamily – similar to SQL tables. Aggregates columns and rows
    • Keys (or Rows). Each set of columns is identified by a key. A key is unique per Column Family
    • Columns – the actual values. Columns are represented by triplets – (name, value, timestamp)
    • Super-Columns – Facebook’s addition to the BigTable model. SuperColumns are columns whose value is a list of Columns (but this is not recursive; you can only have one level of super-columns)
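The vocabulary above maps fairly naturally onto nested maps. Here is a minimal sketch in plain Python of how the pieces relate — this is only an illustration of the data model, not Cassandra’s actual storage engine, and all the keyspace/CF/key names are made up:

```python
import time

# A column is a (name, value, timestamp) triplet.
def make_column(name, value):
    return (name, value, int(time.time() * 1e6))  # microsecond timestamp

# Keyspace -> ColumnFamily -> row key -> {column name: (name, value, ts)}
keyspace = {
    "Billing": {                      # Keyspace: logical namespace per app
        "Invoices": {                 # ColumnFamily: similar to an SQL table
            "invoice-1001": {         # row key, unique within the CF
                "amount": make_column("amount", "250"),
                "currency": make_column("currency", "USD"),
            },
        },
    },
}

col = keyspace["Billing"]["Invoices"]["invoice-1001"]["amount"]
print(col[0], col[1])  # -> amount 250
```

A SuperColumn would add exactly one more level of nesting under the row key — a name mapping to a list of such triplets — and no deeper.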
  • One way to think of Cassandra is as a key-value store, but with extra functionality:
    • Each key has multiple values. In Cassandra jargon those are Columns
    • When reading or writing data it’s possible to read/write a set of columns for one specific key (row) atomically. This set of columns may be specified either by a list of column names or by a slice predicate, assuming the columns are sorted in some way (that’s a configuration parameter)
    • In addition, a multi-get operation and a row-range-read operation are supported as well.
    • Row-range-read operations are supported only if a partitioner that supports them is defined (a configuration parameter)
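To make the slice-predicate idea concrete, here is a toy model of reading a range of columns out of one sorted row. The function name and row contents are mine, not the real API; the point is only that, because column names are kept in comparator order, a slice is just “everything between a start name and a finish name”:

```python
from bisect import bisect_left, bisect_right

# One row, modeled as column name -> value; names are kept sorted.
row = {"a1": "v1", "b2": "v2", "c3": "v3", "d4": "v4"}

def get_slice(row, start, finish, count=100, reverse=False):
    """Model of a slice-predicate read: all columns whose name falls
    in [start, finish] under the comparator order, up to `count`."""
    names = sorted(row)                     # comparator-defined order
    lo = bisect_left(names, start)
    hi = bisect_right(names, finish)
    selected = names[lo:hi]
    if reverse:
        selected = selected[::-1]
    return [(n, row[n]) for n in selected[:count]]

print(get_slice(row, "b", "c9"))  # -> [('b2', 'v2'), ('c3', 'v3')]
```

The `count` and `reverse` knobs mirror the general shape of a slice predicate: you page through columns in sorted order rather than querying by value.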
  • Key concept: in SQL you add your data first and then retrieve it in an ad-hoc manner using select queries and where clauses; in Cassandra you can’t do that. Data can only be retrieved by its row key, so you have to think about how you’re going to read your data before you insert it. This is a conceptual difference between SQL and Cassandra.
  • I covered the Cassandra API methods:
    • get
    • get_slice
    • multiget
    • multiget_slice
    • get_count
    • get_range_slice
    • insert
    • batch_insert
    • delete
    • (these are the 0.4 API methods; in 0.5 it’s a little different)
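The read methods above behave, roughly, like lookups against a nested map. Here is a minimal model of their semantics in plain Python — the real 0.4 Thrift calls also take a keyspace, a ColumnPath/ColumnParent and a consistency level, so only the method names here are borrowed from the API; everything else is an illustrative simplification:

```python
# An in-memory stand-in for one ColumnFamily: row key -> {column: value}.
cf = {
    "user:1": {"name": "ada", "email": "ada@example.com"},
    "user:2": {"name": "alan"},
}

def get(cf, key, column):          # one column of one row
    return cf[key][column]

def get_count(cf, key):            # number of columns in a row
    return len(cf[key])

def multiget(cf, keys, column):    # the same column across several rows
    return {k: cf[k].get(column) for k in keys}

print(get(cf, "user:1", "name"))    # -> ada
print(get_count(cf, "user:1"))      # -> 2
print(multiget(cf, ["user:1", "user:2"], "name"))
# -> {'user:1': 'ada', 'user:2': 'alan'}
```

The `_slice` variants follow the same pattern but take a slice predicate instead of a single column name, and `get_range_slice` additionally ranges over row keys (which is the partitioner-dependent part).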
  • Of N/R/W, N is set per keyspace; R is defined per read operation (get/multiget/etc.) and W is defined per write operation (insert/batch_insert/delete)
  • Applications play with their R/W values to get different effects; for example, they use QUORUM for high consistency levels, DC_QUORUM for a balance of high consistency and performance, or W=0 for asynchronous writes with reduced consistency.
  • Cassandra defines different sorting orders on its columns. Sort order may be defined at the ColumnFamily level and is used to get a slice of columns — for example, read all columns that start with a… and end with z…
  • There are several out-of-the-box sort types, such as ascii, utf, numeric and date; applications may also add their own sorters. This is, as far as I recall, the only place where Cassandra allows external code to be hooked in.
  • Thrift is a protocol and a library for cross-process communication and is used by Cassandra. You define a thrift interface and then compile it to the language of your choosing – C++, Java, Python, PHP etc. This makes it very easy for cross-language processes to talk to each other.
  • Thrift is also very efficient at serializing and deserializing objects, and is space-efficient as well (much more so than Java serialization).
  • I did not have enough time to cover the Gossip protocol used by Cassandra internally to learn about the health of its hosts.
  • I also did not have enough time to cover the Repair-on-reads algorithm used by Cassandra to repair data inconsistencies lazily.
  • I did not have time to talk about consistent hashing, which Cassandra implements internally to reduce the overhead of hosts joining or leaving the cluster.
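To make the N/R/W point above concrete: in a Dynamo-style system a read is guaranteed to overlap the latest write whenever R + W > N, which is why QUORUM on both reads and writes gives strong consistency, while W=0 (fire-and-forget writes) gives up that guarantee. A small sketch of the arithmetic, with names of my own choosing:

```python
def is_strongly_consistent(n, r, w):
    """Dynamo-style quorum rule: every read replica set intersects
    every write replica set whenever R + W > N."""
    return r + w > n

N = 3                     # replicas per key (set per keyspace)
quorum = N // 2 + 1       # majority: 2 out of 3

print(is_strongly_consistent(N, quorum, quorum))  # QUORUM/QUORUM -> True
print(is_strongly_consistent(N, 1, 1))            # fast, possibly stale -> False
print(is_strongly_consistent(N, N, 0))            # W=0 async writes -> False
```

This is why R and W being tunable per operation is so useful: the same cluster can serve both strongly consistent reads and cheap fire-and-forget writes, depending on what each call asks for.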
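To illustrate the pluggable sort order mentioned above: the comparator configured on a ColumnFamily decides the order in which slices see column names, and the same names slice very differently under different comparators. A toy model in Python (the comparator names here are illustrative, not Cassandra’s actual type names):

```python
# Two "comparators": each one maps a column name to a sort key.
def ascii_comparator(name):
    return name           # plain lexicographic order

def numeric_comparator(name):
    return int(name)      # treat names as numbers

columns = ["10", "2", "1", "30"]

print(sorted(columns, key=ascii_comparator))    # -> ['1', '10', '2', '30']
print(sorted(columns, key=numeric_comparator))  # -> ['1', '2', '10', '30']
```

Under the ascii order, "10" sorts before "2"; under the numeric order it doesn’t — so the comparator you pick at CF-definition time determines which ranges of columns a slice predicate can meaningfully express.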

So, as you can see, this was an overloaded, 1h+ talk with a lot to grasp. Wish me luck rolling out Cassandra at Outbrain!

4 Responses to “Introduction to NOSQL and cassandra, part 2”

  1. Hi Ran

    I got some idea about Cassandra after reading the above.

    We had some requirement like below

    I have a table which has columns like Category_name, Section_name, article, is_published_by, with multiple records in the table.

    I want to run a query with a condition like category_name = ‘X’, so that all values belonging to category X are retrieved along with the other 3 columns.

    For E.g.,

    select * from Table where category_name = 'Category1';

    Here we are using category_name as key and retrieving all the records

    Please let me know if it would be possible.

    Can you please help me on this and kindly share your views.

    By Mehar Chaitanya on Jan 29, 2010

  2. Hi Mehar, one of the differences between SQL and column-oriented stores such as Cassandra is that you have to think very carefully, when inserting your data, about how you’re going to retrieve it. Unlike SQL, in Cassandra you only have primary keys, no secondary indexes. What you’re describing cannot simply be achieved in Cassandra unless you plan for it ahead of time.
    So if, for example, you create a CF Categories which is keyed by “Category1” etc. and holds the list of all records belonging to Category1, then that would be possible. But the usual case is that this scheme also calls for a lot of data denormalization and repetition, so there’s a tradeoff.
    You may also be interested in lazyboy, which sort of implements a secondary-index scheme over Cassandra.

    By Ran Tavory on Jan 29, 2010
