Wednesday, August 9, 2017

Cassandra Performance Tips - Batch inserts, Materialized views, Fallback strategy

Introduction

During a recent project we ran into multiple issues with Cassandra's performance: queries that were slow or timed out on only one specific environment (even though the environments should have the same setup), inconsistently stored results, and slow batch inserts from Scala.

This blogpost describes how we solved, or attempted to solve, these issues.


Setup: Cassandra running in a cluster with three nodes.

Performance related lessons learned

  1. On certain environments of the DTAP street, slow queries (taking seconds) and weird consistency results appeared. Not all four environments were identical, although Acceptance and Production were kept as similar as possible.

    We found the following causes:

    Slow queries and timeouts: the Cassandra driver was logging at both the OS level and the driver level.
    Inconsistently stored results: the clocks of the different clients accessing C* were not in sync; some were off by minutes. Since protocol v3 of the DataStax/Cassandra driver defaults to client-side generated timestamps, you can get in trouble: the write with the most recent timestamp simply always wins. Moving to server-side timestamps is not an obvious fix either, since different C* coordinators can hand out timestamps that differ by milliseconds. A sketch of switching the timestamp generator is shown below.
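
    For reference, a minimal sketch of switching to server-side timestamps, assuming the 3.x DataStax Java driver (the contact point and keyspace are made up; ServerSideTimestampGenerator.INSTANCE delegates timestamp assignment to the coordinator):

    import com.datastax.driver.core.{Cluster, ServerSideTimestampGenerator}

    // Let the coordinator assign write timestamps instead of trusting client clocks.
    // Note: coordinators' clocks can still differ by milliseconds, as mentioned above.
    val cluster = Cluster.builder()
      .addContactPoint("172.20.25.101") // example contact point
      .withTimestampGenerator(ServerSideTimestampGenerator.INSTANCE)
      .build()
    val session = cluster.connect("my_keyspace") // hypothetical keyspace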

  2. For Gatling performance tests written in Scala, we first needed to insert 50K records into a Cassandra database, simulating users already registered to the system. To make this perform, several options were tried:

    a- Plain string-concatenated or prepared statements were taking over 5 minutes in total.
    b- Inserting as one batch (BEGIN BATCH ... APPLY BATCH) runs into Cassandra's 50 KiB batch size limit (the batch_size_fail_threshold_in_kb setting). That limit is far too low for us: 50K records add up to almost 5 MB, and splitting them up was too much of a hassle.
    c- Making the calls async, as done here: https://github.com/afiskon/scala-cassandra-example
    But then we were getting:

    17:20:57.275 [ERROR] [pool-1-thread-13] TestSimulation - ERROR while inserting row nr 13007, exception =
    com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.20.25.101:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/172.20.25.101] Pool is busy (no available connection and the queue has reached its max size 256)))
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
    at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:336)
    at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
    at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
    at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:102)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1184)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1120)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:295)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
    at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
    at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
    at UserDAO.insert(UserDao.scala:58)
    ...


    It turns out it is the driver's local acquisition queue that fills up. You can increase its size via PoolingOptions.setMaxQueueSize, see: http://docs.datastax.com/en/developer/java-driver/3.1/manual/pooling/#acquisition-queue
    We set it to 50000, to be safe that it would simply queue all 50K records. For a production environment this is probably not a good idea; tune it to your needs.
    The ExecutionContext used by the DAO (from the GitHub example above) we gave 20 threads. You can configure such a thread pool as in this example: http://stackoverflow.com/questions/15285284/how-to-configure-a-fine-tuned-thread-pool-for-futures. A combined sketch is shown below.
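
    Putting it together, a minimal sketch (the contact point, keyspace, table and column names are made up; PoolingOptions.setMaxQueueSize and executeAsync are the driver calls discussed above):

    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext
    import com.datastax.driver.core.{Cluster, PoolingOptions}

    // Let the driver queue (up to) all 50K pending requests instead of failing fast
    val poolingOptions = new PoolingOptions().setMaxQueueSize(50000)

    val cluster = Cluster.builder()
      .addContactPoint("172.20.25.101") // example contact point
      .withPoolingOptions(poolingOptions)
      .build()
    val session = cluster.connect("my_keyspace") // hypothetical keyspace

    // 20 threads backing the DAO's Futures (used when wrapping the driver's
    // ListenableFutures into Scala Futures, as the example DAO does)
    implicit val ec: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

    // Fire the 50K inserts asynchronously; the driver queues what it cannot send yet
    val prepared = session.prepare("INSERT INTO users (user_id, name) VALUES (?, ?)")
    (1 to 50000).foreach { i =>
      session.executeAsync(prepared.bind(Int.box(i), s"user-$i"))
    }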

  3. Increasing the number of CPUs from 4 to 8 did seem to improve performance: there was less CPU saturation.

  4. Each materialized view you add decreases insert performance by roughly 10% (see here); a sketch of what such a view looks like follows below.
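
    For context, every materialized view is in effect an extra table that Cassandra must update on each write to the base table, which is where that overhead comes from. A hypothetical example (the users table and its columns are made up):

    // One extra write to users_by_email now accompanies every insert into users
    session.execute(
      """CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_email AS
        |  SELECT * FROM users
        |  WHERE email IS NOT NULL AND user_id IS NOT NULL
        |  PRIMARY KEY (email, user_id)""".stripMargin)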

  5. For consistency and availability when one of the nodes is down or unreachable due to network problems, we set up Cassandra writes to first try EACH_QUORUM and, if that fails, to fall back to LOCAL_QUORUM; see the sketch below.
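
    A minimal sketch of that fallback strategy (the helper name is ours; UnavailableException is what the driver raises when too few replicas are reachable):

    import com.datastax.driver.core.{ConsistencyLevel, Session, Statement}
    import com.datastax.driver.core.exceptions.UnavailableException

    // Hypothetical helper: prefer a quorum in every datacenter, but settle for
    // a quorum in the local one when another DC cannot be reached.
    def writeWithFallback(session: Session, stmt: Statement): Unit =
      try {
        session.execute(stmt.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM))
      } catch {
        case _: UnavailableException =>
          session.execute(stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
      }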
