Wednesday, August 16, 2017

Lessons learned - Jackson, API design, Kafka

Introduction

This blogpost describes a number of lessons learned during a recent project I worked on.
They are grouped together here because each is too small to "deserve" its own separate post :)
Items discussed are: Jackson, JSON, API design, Kafka, Git.


Lessons learned

  • Pretty print (nicely format) JSON in a Linux shell prompt:

    cat file.json | jq .

    You might have to 'apt-get install jq' first.

  • OpenOffice/LibreOffice formulas:

    To increase the year and month by one from cell H1031: =DATE(YEAR(H1031)+1; MONTH(H1031)+1; DAY(H1031))

    Count how many times a range of cells A2:A4501 has the value 1: =COUNTIF(A2:A4501; 1)

  • For monitoring a system, a split between a health-status page and performance details is handy. The first one shows issues/metrics that require immediate action; performance is for informational purposes and usually does not require immediate action.

  • API design: even if you have a simple method that just returns, say, a date as a string, always return a JSON object (not just a bare string value). Useful for backwards compatibility: more fields can easily be added later.
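So rather than returning the bare string "2017-08-16", wrap it in an object (the field name here is just an example):

```json
{"date": "2017-08-16"}
```

A later version of the API can then add, say, a timezone field without breaking existing callers.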

  • When upgrading GitLab, it renamed a repository named 'users' to 'users0'. Turns out 'users' is a reserved repository name in GitLab since version 8.15.

    To point your local git clone to the new users0, perform these steps to update your remote origin:

    # check current setting
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users (fetch)
    origin  https://gitlab.local/gitlab/backend/users (push)

    # change it to the new one
    $ git remote set-url origin https://gitlab.local/gitlab/backend/users0

    # see it got changed
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users0 (fetch)
    origin  https://gitlab.local/gitlab/backend/users0 (push)

  • Jackson JSON generating (serializing): it is probably good practice to not use @JsonInclude(JsonInclude.Include.NON_EMPTY) or NON_NULL, since that means a key will simply be absent from the JSON when its value is empty or null. That can be confusing to the caller: sometimes the key is there, sometimes not. So just leave it in, so it will be set to null. Unless it concerns a dependent field like amount and currency: if amount is null, currency (maybe) doesn't make sense, so then it could be left out.
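For example, with the default inclusion a caller always sees a stable set of keys (values here are made up):

```json
{"amount": 10.50, "currency": null}
```

With NON_NULL, the "currency" key would silently disappear from this payload whenever its value is null.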

  • Java:

    Comparator<UserByPhoneNumber> userComparator = (o1, o2) -> o1.getCreated().compareTo(o2.getCreated());

    can be replaced in Java 8 by:

    Comparator<UserByPhoneNumber> userComparator = Comparator.comparing(UserByPhoneNumber::getCreated);

  • Kafka partitioning tips: http://blog.rocana.com/kafkas-defaultpartitioner-and-byte-arrays

  • Kafka vs RabbitMQ:

    - Kafka is optimized for producers producing lots of data (batch-oriented producers) and for consumers that are usually slower than the producers.
    - Performance: RabbitMQ handles about 20K messages/s; Kafka up to 150K messages/s.
    - Unlike other messaging systems, Kafka brokers are stateless: the consumer has to keep track of how much it has consumed.

Friday, August 11, 2017

Java: generate a random Date between now minus X months and now plus Y months

Introduction

This blogpost shows a Java 8+ code example on how to generate a timestamp between two months relative to today ("now").

The code

This example code creates a random java.util.Date between 12 months ago from today and 1 month ahead of today, which will be available in the variable randomDate.

// These two fields are used by getRandomTimeInMillisBetweenTwoDates() below
private static long beginTimeInMilliseconds;
private static long endTimeInMilliseconds;

LocalDateTime nowMinusYear = LocalDateTime.now().minusMonths(12);
ZonedDateTime nowMinusYearZdt = nowMinusYear.atZone(ZoneId.of("Europe/Paris"));
beginTimeInMilliseconds = nowMinusYearZdt.toInstant().toEpochMilli();

LocalDateTime nowPlusMonth = LocalDateTime.now().plusMonths(1);
ZonedDateTime nowPlusMonthZdt = nowPlusMonth.atZone(ZoneId.of("Europe/Paris"));
endTimeInMilliseconds = nowPlusMonthZdt.toInstant().toEpochMilli();

System.out.println("System.currentTimeMillis() = " + System.currentTimeMillis() + ", beginTimeInMilliseconds = " + beginTimeInMilliseconds + ", endTimeInMilliseconds = " + endTimeInMilliseconds);

Date randomDate = new Date(getRandomTimeInMillisBetweenTwoDates());
...

private static long getRandomTimeInMillisBetweenTwoDates() {
   long diff = endTimeInMilliseconds - beginTimeInMilliseconds + 1;
   return beginTimeInMilliseconds + (long) (Math.random() * diff);
}



How do Kubernetes and its pods behave regarding SIGTERM, SIGKILL and HTTP request routing

Introduction

During a recent project we saw that HTTP requests were still arriving in pods (Spring Boot MVC controllers) even though Kubernetes' kubelet had told the pod to exit by sending it a SIGTERM.
Not nice, because those HTTP requests that still get routed to the (shutting down) pod will most likely fail, since the Spring Boot Java process, for example, has already closed all its connection pools.

See this post (also shown below) for an overview of the Kubernetes architecture, e.g. regarding kubelets.


Analysis

The process for Kubernetes to terminate a pod is as follows:
  1. The kubelet always sends a SIGTERM before a SIGKILL.
  2. Only when a pod does not finish within the grace period (default 30 sec) after the SIGTERM does the kubelet send a SIGKILL.
  3. Kubernetes keeps routing traffic to a pod until the readiness probe fails, even after the pod received a SIGTERM.
So for a pod there is always an interval between receiving the SIGTERM and the next readiness-probe request for that pod. In that period requests can (and most likely will) still be routed to that pod, and (business) logic can still be executed in the terminated pod.

This means that after the SIGTERM is sent, the readiness probe must start failing as soon as possible to prevent the SIGTERMed pod from receiving more HTTP requests. But still there will be a (small) period of time in which requests can be routed to the pod.

A solution would be to shut down the webserver inside the pod's process (in this case Spring Boot's embedded webserver) immediately but gracefully after receiving a SIGTERM. That way, any requests still routed to the pod before the readiness probe fails are rejected outright: no more requests are accepted.
So you would still have some failing requests being passed on to the pod, but at least no business logic will be executed anymore.
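A minimal sketch of the readiness-flag idea (class and method names are mine, not from the project): the JVM runs shutdown hooks when it receives a SIGTERM, so a hook can flip a flag that makes the readiness endpoint return 503 immediately. In Spring Boot this flag would back a custom readiness controller.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ReadinessFlag {
    private static final AtomicBoolean ready = new AtomicBoolean(true);

    // What the readiness endpoint would return: 200 while ready, 503 after SIGTERM
    static int readinessHttpStatus() {
        return ready.get() ? 200 : 503;
    }

    // Invoked from a shutdown hook; the JVM runs hooks on SIGTERM before exiting
    static void beginShutdown() {
        ready.set(false);
    }

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(ReadinessFlag::beginShutdown));
        System.out.println(readinessHttpStatus()); // 200 until a SIGTERM arrives
    }
}
```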

This and other options/considerations are discussed here.





Wednesday, August 9, 2017

Cassandra Performance Tips - Batch inserts, Materialized views, Fallback strategy

Introduction

During a recent project we ran into multiple issues with Cassandra's performance. For example, queries being slow or timing out on only one specific environment (though all environments should have the same setup), inconsistently stored results, and the question of how to optimize batch inserts when using Scala.

This blogpost describes how we solved, or attempted to solve, these issues.


Setup: Cassandra running in a cluster with three nodes.

Performance related lessons learned

  1. On certain environments of the DTAP build-street, slow queries (taking seconds) and weird consistency results appeared. Not all 4 environments were the same, though Acceptance and Production were kept as similar as possible.

    We found as causes:

    Slow queries and timeouts: the Cassandra driver was logging at both OS and driver level.
    Inconsistently stored results: the clocks of the different clients accessing C* were not in sync, some off by minutes. Since the default in v3 of the Datastax/Cassandra driver protocol is client-side generated timestamps, you can get into trouble of course: the write with the most recent timestamp simply always wins. But implementing server-side timestamps won't be an obvious fix either, since different C* coordinators can hand out timestamps that differ by milliseconds.
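The client-side timestamp problem can be shown with a small last-write-wins simulation (plain Java, not the driver API; the values are made up): a write from a client whose clock runs ahead beats a write that actually happened later.

```java
public class LastWriteWins {
    static long cellTimestamp = 0;
    static String cellValue = null;

    // Cassandra-style conflict resolution: the write with the newest
    // client-side timestamp wins, regardless of arrival order
    static void write(String value, long clientTimestamp) {
        if (clientTimestamp > cellTimestamp) {
            cellTimestamp = clientTimestamp;
            cellValue = value;
        }
    }

    static String demo() {
        write("from B", 1_000_000); // client B's clock is minutes ahead
        write("from A", 400_000);   // client A writes later in real time, but its clock lags
        return cellValue;           // "from B": A's newer write is silently lost
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```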

  2. For Gatling performance tests written in Scala, we first needed to insert 50K records into a Cassandra database, simulating users already registered to the system. To make this perform, several options were tried:

    a- Plain string-concatenated or prepared statements were taking over 5 minutes in total.
    b- Inserting as a batch (APPLY BATCH) has a limit of 50KiB in text size. That limit was too low for us: 50K records is almost 5MB, and splitting them up was too much of a hassle.
    c- Making the calls async, as done here: https://github.com/afiskon/scala-cassandra-example
    But then we were getting:

    17:20:57.275 [ERROR] [pool-1-thread-13] TestSimulation - ERROR while inserting row nr 13007, exception =
    com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.20.25.101:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/172.20.25.101] Pool is busy (no available connection and the queue has reached its max size 256)))
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
    at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:336)
    at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
    at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
    at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:102)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1184)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1120)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:295)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
    at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
    at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
    at UserDAO.insert(UserDao.scala:58)
    ...


    Turns out it is the driver's local acquisition queue that fills up. You can increase its size via poolingOptions.setMaxQueueSize, see: http://docs.datastax.com/en/developer/java-driver/3.1/manual/pooling/#acquisition-queue
    We set it to 50000 so it would simply queue all 50K records. For a production environment this might not be a good idea of course; tune it to your needs.
    The number of threads in the ExecutionContext (used by the DAO from the github example above) we set to 20. You can configure it as in this example: http://stackoverflow.com/questions/15285284/how-to-configure-a-fine-tuned-thread-pool-for-futures
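The BusyPoolException above is essentially a bounded queue overflowing. A plain-Java analogy (not the Datastax driver itself; the sizes are made up) with a ThreadPoolExecutor shows the same symptom: once the bounded queue is full, further submissions are rejected rather than queued.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {

    // 1 worker with a queue of capacity 2: with 4 quick submits at least one
    // is rejected, just like the driver's acquisition queue rejecting requests
    static int submitAndCountRejections() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(2));
        int rejected = 0;
        for (int i = 0; i < 4; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException e) { }
                });
            } catch (RejectedExecutionException e) {
                rejected++; // queue full: same symptom as BusyPoolException
            }
        }
        pool.shutdownNow();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(submitAndCountRejections() > 0); // true
    }
}
```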

  3. Increasing the number of CPUs from 4 to 8 did seem to improve performance: less CPU saturation.

  4. Each materialized view you add decreases insert performance by about 10% (see here).

  5. For consistency and availability when one of the nodes is down or unreachable due to network problems, we set up Cassandra writes such that EACH_QUORUM is tried first and, if that fails, LOCAL_QUORUM is used as a fallback strategy.
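A sketch of that fallback strategy (all names are hypothetical, and the failing write is simulated; real code would set the consistency level on the driver's statement and catch the driver's unavailability exception):

```java
public class QuorumFallback {
    enum ConsistencyLevel { EACH_QUORUM, LOCAL_QUORUM }

    interface Write { void execute(ConsistencyLevel cl); }

    // Try the strict cross-datacenter level first; degrade to the
    // local-datacenter level when the strict write fails
    static ConsistencyLevel writeWithFallback(Write write) {
        try {
            write.execute(ConsistencyLevel.EACH_QUORUM);
            return ConsistencyLevel.EACH_QUORUM;
        } catch (RuntimeException remoteDcUnavailable) {
            write.execute(ConsistencyLevel.LOCAL_QUORUM);
            return ConsistencyLevel.LOCAL_QUORUM;
        }
    }

    static ConsistencyLevel demo() {
        // Simulate a remote datacenter being unreachable
        return writeWithFallback(cl -> {
            if (cl == ConsistencyLevel.EACH_QUORUM)
                throw new RuntimeException("remote DC unreachable");
        });
    }

    public static void main(String[] args) {
        System.out.println(demo()); // LOCAL_QUORUM
    }
}
```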

The articles below helped to analyse the problems further:

Tuesday, August 8, 2017

Gatling Lessons learned

Introduction

This post describes a couple of best practices when using Gatling, the Scala based performance and load-testing tool.


Also one or two Scala related tips will be shown.

Lessons learned

  1. You can't set a header field like this in a scenario:

    .header("request-tracing-id", UUID.randomUUID().toString())

    This is because the scenario is built only once; unless you use session variables, everything in it is static (a function).

    To solve this one can use a feeder, like this:

    val feeder = Iterator.continually(Map("traceHeader" -> UUID.randomUUID().toString))

    And then replace the UUID.randomUUID().toString() line with:

    .header("request-tracing-id", "${traceHeader}")
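The build-once-vs-per-request distinction can be sketched in plain Java (names are mine): a value computed once at build time is frozen for all requests, while a supplier, like a feeder, yields a fresh value per call.

```java
import java.util.UUID;
import java.util.function.Supplier;

public class HeaderDemo {
    // Evaluated once, at "scenario build" time: every request reuses this value
    static final String FIXED_ID = UUID.randomUUID().toString();

    // Evaluated per call, like a feeder: a fresh value for every request
    static final Supplier<String> PER_REQUEST_ID = () -> UUID.randomUUID().toString();

    public static void main(String[] args) {
        System.out.println(FIXED_ID.equals(FIXED_ID));                         // same value every time
        System.out.println(PER_REQUEST_ID.get().equals(PER_REQUEST_ID.get())); // fresh value per call
    }
}
```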

  2. A Scala 2.11 example of a ternary expression (maybe not the best solution Scala-wise, but readable :)

    .value(availableBalanceDecimal, if (dto.availableBalanceDecimal.isEmpty) null else dto.availableBalanceDecimal.get)

  3. Connecting to a service with a Gatling test in one environment (Nginx, ingress, Kubernetes) for some reason did not work, while the same test could connect to the service under test correctly in another environment. Apparently it had something to do with a proxy in between, because after adding .proxy() it worked:

    val httpConf = http
    .baseURL("http://172.20.33.101:30666") // Here is the root for all relative URLs
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .header(HttpHeaderNames.Accept, HttpHeaderValues.ApplicationJson)
    .proxy(Proxy("performance.project.com", 30666))   // note it is the *same* machine as the baseURL,but specified by name... 


  4. A .check() with a .saveAs() in it will *not* happen when the earlier expression evaluates to false, or when the conversion fails.
    That kind of makes sense for a false result, but you might miss it; and maybe you don't even want this .is(), because all it means in the example below is that isFinished will be set when the status is FINISHED, and won't be set at all otherwise.

    someText is always found in the session, but the other one, isFinished, is not.

    .check(jsonPath("$[0].status").saveAs("someText"))
    .check(jsonPath("$[0].status").is("FINISHED").saveAs("isFinished"))
    ...

    .exec(session => {
       val isFinished = session.get("isFinished").asOption[String]
       logger.debug("Generated isFinished = {}", isFinished.getOrElse("Could not find expected isFinished..."))
       session
    })

    .doIf(session =>
       (!session.get("someText").as[String].equals("FINISHED") ||
       session.get("isFinished").as[Boolean].equals(false)
    ))(
    ...


    When running at DEBUG level the above logs:
       Session(Map(someText -> TIMED_OUT, ...)  /// So .saveAs() occurred
       13:45:56.983 [DEBUG] SomeSimulation - Generated isFinished = Could not find expected isFinished...  


    So the second .saveAs() did not occur for isFinished at all: it is not set to true or false; it is not set at all!

Wednesday, April 12, 2017

Lessons learned Docker microservices architecture with Spring Boot

Introduction

During my last project, which consisted of a Docker microservices architecture built with Spring Boot and using RabbitMQ as the communication channel, I learned a bunch of lessons. Here's a summary of them.

Architecture

Below is a high level overview of the architecture that was used.


Docker

  • Run 1 process/service/application per docker container (or put stuff in init.d, but that's not the intended use of docker).

  • Starting background processes in the CMD causes the container to exit. So either have a script waiting at the end (e.g. tail -f /dev/null) or keep the process (i.e. the one prefixed with CMD) running in the foreground. Other useful Dockerfile tips you can find here.

  • As far as I can tell Docker checks if Dockerfile has changed, and if so, creates a new image instance (diffs only?)

  • Basic example to start RabbitMq docker image, as used in the build tool:

    $ docker pull 172.18.19.20/project/rabbitmq:latest
    $ docker rm -f build-server-rabbitmq
    $ # Map the RabbitMQ regular and console ports
    $ docker run -d -p 5672:5672 -p 15672:15672 --name build-server-rabbitmq 172.18.19.20/project/rabbitmq:latest

  • If there's no docker0 interface (check by running command ifconfig) then probably there are ^M characters in the config file at /etc/default/docker/docker.config. To fix it, perform a dos2unix on that file.

  • Check for errors at startup of docker in /var/log/upstart/docker.log

  • If your docker push <image> asks for a login (and you don't expect that) or it returns some weird html like "</html>" then you're probably missing the host in front of the image name, e.g: 172.18.19.20:6000/projectname/some-service:latest

  • Stuff like /var/log/messages is not visible in a Docker container, but is in its host! So look there for example to find out why a process is not starting/gets killed at startup without any useful logging (like we had with clamd)

  • How to remove old dangling unused docker images: docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

Spring Boot

  • Some jackson2 dependencies were missing from the generated Spring Initializr project, noticed when creating unittests. These dependencies were additionally needed in scope test:

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.5.0</version>
    </dependency>


    Not sure anymore why these didn't get <scope>test</scope> then... Guess they were also needed in some regular code... :)

  • In the Spring Boot AMQP Quick Start, the last param during binding is named .with(queueName), but it is actually the topic key (which is related to the binding key used at sending), not the queue name!

  • Spring Boot Actuator's /health will check all related dependencies! So if you have a dependency in your pom.xml on a project which uses spring-boot-starter-amqp, /health will now check for an AMQP broker being up! So disable the corresponding health indicator if you don't want that.
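A sketch of how to switch such an indicator off in application.properties, assuming it is the RabbitMQ health indicator you want to disable (check the exact property name for your Spring Boot version):

```properties
management.health.rabbit.enabled=false
```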

  • Spring Boot's generated ApplicationIT probably needs a @DirtiesContext for your tests, otherwise the tests might re-use or create more beans than you think (we saw that in our message receiver test helper class).

  • @Transactional in Spring: by default it only rolls back for unchecked exceptions!! It's documented, but still a thing to watch out for.

  • And of course: Spring's @Transactional does not work on private methods (due to proxy stuff it creates)

  • To see in Spring Boot the transaction logging, put this in application.properties:

    logging.level.org.springframework.jdbc=TRACE
    logging.level.org.springframework.transaction=TRACE


    Note that by default @Transactional just rolls back, it does not log anything, so if you don't log your runtime exceptions, you won't see much in your logs.

  • mockMvc from Spring does not really invoke from the "outside": our Spring Security context filter (for which you can use @Secured(role)) was allowing calls even though no authentication was provided. RestTemplate does seem to work from "the outside".

  • Scan order can mess up @ControllerAdvice error handler it seems. Had to change the order sometimes:

    Setup:
    - Controller is in: com.company.request.web.
    - General error controller is in com.company.common package.

    Had to change
    @ComponentScan(value = {"com.company.security", "com.company.common", "com.company.cassandra", "com.company.module", "com.company.request"})

    to

    @ComponentScan(value = {"com.company.security", "com.company.cassandra", "com.company.module", "com.company.request", "com.company.common"})

    Note that the general error controller is now listed last.

  • Spring Boot's footprint seems relatively big, especially for microservices: at least 500MB or so per service, so we needed quite big machines for about 20 services. Maybe plain Spring (instead of Spring Boot) would be more lightweight...

Bamboo build server

  • When Bamboo gets slow while the CPU seems quite busy and memory availability on its server seems fine, increase the -Xms and -Xmx (or related) JVM settings. Found this out because the Bamboo java process was sometimes running out of heap; increasing the heap also fixed performance.

  • To have Bamboo builds fail on quality gates not met in SonarQube, install in Sonar the build breaker plugin. See the plugin docs and Update Center. This FAQ says so.

Stash

  • The Stash (now called Bitbucket) API: in /rest/git/1.0/projects/{projectKey}/repos/{repositorySlug}/tags a 'slug' is just a repository name. 

Microservices with event based architecture

  • When you do microservices, IMMEDIATELY take into account during coding + reviews that multiple instances can access the database concurrently.

    This affects your queries. The most likely correct implementation for a uniqueness check on inserts:
    1- add a unique constraint
    2- run the insert
    3- catch the uniqueness exception --> you know the row already exists. A solution with SELECT NOT EXISTS is not guaranteed to be unique under concurrent access.
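A toy version of that insert pattern (plain Java, with a Set standing in for the table's unique constraint, and the duplicate signalled by a return value rather than an exception): the insert and the uniqueness check are a single atomic operation, so there is no race window like with SELECT-then-INSERT.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class UniqueInsert {
    // Stands in for a table with a unique constraint on the key
    private static final Set<String> uniqueIndex = ConcurrentHashMap.newKeySet();

    // Attempt the insert and detect "already exists" from the result,
    // in one operation: no separate check-then-insert race window
    static boolean insert(String key) {
        return uniqueIndex.add(key); // false = uniqueness violation
    }

    public static void main(String[] args) {
        System.out.println(insert("user@example.com")); // inserted
        System.out.println(insert("user@example.com")); // duplicate detected
    }
}
```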

  • Also take deletion of data (e.g. a user deleting himself) into account from the start, especially when using events and/or eventual consistency in combination with an account balance or similar. Because what if one service in the whole chain of things to execute for a delete fails? Does the user still have money left in his/her account then? In short: take care of CRUD.

  • Multiple services are sending the same event? That can indicate 2 services are doing the same thing --> Not good probably.

  • Microservices advantages:

    - Forces you to think better about where to put stuff, in comparison to a monolith where you can more often be tempted to "just do a quick fix".
    - Language independence for service implementation: choose the best language for the job.

    Disadvantages:
    - more time needed for design
    - eventual consistency is quite tough to understand & work with, also conceptually
    - infrastructure is more complex including all communication between services

    More cons can be found here.

Tomcat

  • Limiting the maximum size of what can be posted to a servlet is not as easy as it seems for REST services:

    - maxPostSize in Tomcat is enforced only for a specific content type: Tomcat only applies that limit when the content type is application/x-www-form-urlencoded.

    - And the other 3 below XML options are for multipart only:

    <multipart-config>
      <!-- 52MB max -->
      <max-file-size>52428800</max-file-size>
      <max-request-size>52428800</max-request-size>
      <file-size-threshold>0</file-size-threshold>
    </multipart-config>


    So that one won't work for uploading just a byte[]. The only solution is to check the limit you want to allow yourself in the servlet (e.g. a Spring @Controller).

  • maxThreads (worker threads) seems to be set very high or unlimited by default; 50 seemed to perform better for us.

Security

  • To securely generate a random number: SecureRandom randomGenerator = SecureRandom.getInstance("NativePRNG");

  • Good explanation of secure use of a salt to use for hashing can be found here

Cassandra

  • Unique constraints are not possible in Cassandra, so there you will even have to implement unique constraints in the business logic (and make it eventually consistent)

  • CassandraOperations query for one field:

    Select select = QueryBuilder.select(MultiplePaymentRequestRequesterEntityKey.ID).from(MultiplePaymentRequestRequesterEntity.TABLE_NAME);
    select.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    select.where(QueryBuilder.eq(MultiplePaymentRequestRequesterEntityKey.REQUESTER, requester));
    return cassandraTemplate.queryForList(select, UUID.class);


    See also here.

  • Note that the two keys below don't seem to get picked up by Cassandra in Spring Data Cassandra version 1.1.4.RELEASE:

    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-cassandra</artifactId>

    @PrimaryKeyColumn(name = OTHER_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meUserId;

    @PrimaryKeyColumn(name = ME_USER_ID, ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meId;

    This *does* get picked up: put it into a separate class:

    @Data
    @AllArgsConstructor
    @PrimaryKeyClass
    public class HistoryKey implements Serializable {

      @PrimaryKeyColumn(name = HistoryEntity.ME_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID meUserId;

      @PrimaryKeyColumn(name = HistoryEntity.OTHER_USER_ID, ordinal = 1, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID otherUserId;

      @PrimaryKeyColumn(name = HistoryEntity.CREATED, ordinal = 2, type = PrimaryKeyType.CLUSTERED, ordering = Ordering.DESCENDING)
      private Date created;

    }

  • Don't use Cassandra for all types of use cases. An RDBMS still has its value, e.g. for ACID requirements; Cassandra is eventually consistent.

Kubernetes

Miscellaneous

  • Use dig for DNS resolving problems

  • Use pgAdmin III for PostgreSQL GUI

  • To stop SonarQube complaining about unused private fields when using Lombok @Data annotation: add to each of those classes @SuppressWarnings("PMD.UnusedPrivateField")

  • Managed to avoid needing transactions or XA transactions across message publishing, message reading, db storage and message sending, by using the confirm + ack mechanism
    and allowing a message to be read again. The DB then sees: oh, already stored (or it does an upsert).
    So, when processing a message from the queue:
    1- store in db
    2- send message on queue
    3- only then ack back to the queue that the read was successful

  • Performance: instantiate the Jackson2 ObjectMapper once as static, not in each call, so:
    private static final ObjectMapper mapper = new ObjectMapper();
  • Javascript: when an exception occurs in a callback and it is not handled, processing just ends. Promises have better error handling.

  • clamd would not start correctly; it would try to start but then show 'Killed' when started via the commandline. Turns out it runs out of memory when starting up.  Though we had enough RAM (16G total, 3G free), it turns out clamd needs swap configured!

  • Linux bash shell script to loop through projects for tagging, including projects with spaces in their name:

    PROJECTS="
      project1
      project space2
    ";
    IFS=$'\n'
    for PROJECT in $PROJECTS
    do
      TRIM_LEADING_SPACE_PROJECT="$(echo -e "${PROJECT}" | sed -e 's/^[[:space:]]*//')"
      echo "Cloning '$TRIM_LEADING_SPACE_PROJECT'"
      git clone --depth=1 http://$USER:$GITPASSWD@github.com/projects/$TRIM_LEADING_SPACE_PROJECT.git
    done

  • OpenVPN in Windows 10: sometimes it hangs on "Connecting..." and doesn't show the popup to enter username/password. Go to View logs; when you see "Enter management password:" in the logs, you have to kill the OpenVPN Daemon process under the Processes tab (Windows task manager). The service is stopped when exiting the app, but that's not enough!

  • javascript/nodejs log every call that comes in:

    app.use(function (req, res, next) {
      console.log('Incoming request = ' + new Date(), req.method, req.url);
      logger.debug('log at debug level');
      next();
    });

  • If ever your mouse suddenly stops working in your VirtualBox guest, kill the process in the guest machine mentioned in comment 5 here. After that the mouse works again in your vbox guest.

  • Fix Firefox to version 45.0.2 for selenium driver tests:

    sudo apt-get install -y firefox=45.0.2+build1-0ubuntu1
    sudo apt-mark hold firefox

  • Setting the cookie attribute Secure (indicating the cookie should only be sent over HTTPS) can be observed by using curl to request the URL(s) that should send that cookie plus the new attribute, even when using plain HTTP. See also my previous post.

    But when using a browser over HTTP, you probably won't see the secure cookie appear in the cookie store. This is (probably) because the browser knows not to store it in that case, since plain HTTP is being used.

  • Idempotency within services is key for resilience, and for being able to resend an event or perform an API call again.

Wednesday, January 18, 2017

OpenVPN how to route all IPV4 traffic through the OpenVPN tunnel

Introduction

Originally I was connected from a Windows 10 machine via OpenVPN to a network (segment?) for "our" project. I could access all servers and websites related to it. But when switching to another project (using the same OpenVPN settings), I could only access the new project's servers when on the premises of that project. From home or any other place I could not get to the servers, e.g. Jenkins. The error shown in Chrome was "This site can't be reached". See the screenshot below for the exact error:



But I could get to the microservices pods directly by IP address, e.g. 172.18.33.xyz (xyz is not the same in the example IP addresses below, just obfuscated). So quite strange.

The administrator of the OpenVPN server didn't know how to fix the problem either. It was suggested to make sure "to route all IPV4 traffic through the VPN". That made me search the interwebs, and I found the solution below to work without having to change any server settings. (I did not even have access to those server settings.)

Analyzing the problem

A) Trying the website with the hostname:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.xyz] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    20 ms    19 ms    20 ms  d13.xs4all.com [195.109.5.xyz]
  3    22 ms    22 ms    22 ms  3d13.xs4all.com [195.109.7.xyz]
...


B) Trying the well-known google gateway:
C:\Users\moi>tracert 8.8.8.8
Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    21 ms    20 ms    21 ms  d12.xs4all.com [195.109.5.xyz]
...

Hmm so its route goes via the same initial gateway for both external IPs and the hostname, so not via the VPN.

C) Trying with the IP that works (note not the IP for the hostname from above!):
C:\Users\moi>tracert 172.18.33.xyz
Tracing route to 172.18.33.xyz over a maximum of 30 hops
  1    97 ms    21 ms    20 ms  192.169.200.xyz
  2    45 ms    98 ms    29 ms  172.16.11.xyz
  3   130 ms    65 ms    68 ms  172.16.11.xyz
...

As you can see, the first entrypoint gateway is a different one, and most likely the wrong one.

The solution

The solution was to add this to the .ovpn OpenVPN configuration file:

route-method exe
route-delay 2
redirect-gateway def1

For me, only the last line (redirect-gateway def1) was sufficient, but for others the other two lines had to be added too.

D) After adding the setting, you can see the IP of the first gateway changed to what turns out to be the correct one:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.yyy] over a maximum of 30 hops:
  1   143 ms    31 ms    21 ms  192.169.200.xyz
  2    21 ms    20 ms    21 ms  static.services.de [88.20.160.xyz]
  3    21 ms    21 ms    25 ms  10.31.17.xyz
  4    25 ms    21 ms    91 ms  10.31.17.xyz
...

References used:
- http://superuser.com/questions/120069/routing-all-traffic-through-openvpn-tunnel
- http://askubuntu.com/questions/665394/how-to-route-all-traffic-through-openvpn-using-network-manager

Wednesday, December 14, 2016

Nodejs Express enabling HSTS and cookie secure attribute


To enable HSTS (adds Strict-Transport-Security header to the response) for Node + Express using Helmet, this should be sufficient to add to your app.js:

var hsts = require('hsts')

app.use(hsts({
  maxAge: 15552000  // 180 days in seconds
}))

But a few things didn't work for me. First of all I had to add the force: true attribute. Also I was using an older version of hsts (0.14.0 instead of the current latest release 2.0.0), in which maxAge was not in seconds but in milliseconds. It says so in the README on GitHub, but I hadn't checked which version we were using...

To enable the 'secure' attribute for cookies (so they are only sent over HTTPs), I added the cookie-session module. But there is an issue with the secure flag. See the code below for more details.

Below is a full viewcounterhelmet.js application that shows the result of enabling HSTS and secure cookies. Note also the comments in the code.

Start it with: node viewcounterhelmet.js

viewcounterhelmet.js:

var cookieSession = require('cookie-session')
var helmet = require('helmet');
var express = require('express')

var app = express()

app.use(helmet());
// Works, makes in response show Cache-Control, Pragma, Expires: app.use(helmet.noCache());
var oneYearInSeconds = 31536000;
app.use(helmet.hsts({
  maxAge: oneYearInSeconds,
  includeSubDomains: true,
  force: true
}));

var expiryDate = Date.now() + 60 * 60 * 1000; // Caution: this is an absolute timestamp, but cookie-session treats maxAge as a duration in ms
app.use(cookieSession({
  name: 'session',
  secret: '10dfaf09-cf6f-43a9-b40b-4eaacbcceb8a',
  maxAge: expiryDate,
  // Not setting secure attr does not show it Secure
  // secure : true, // Set to true and no cookies at all in response. Created ticket for this for v1.2.0
  // secure : false, // Set to false it indeed doesn't show the Secure value with the cookie; it does show the cookies
  secureProxy: true, // Since this is running behind a proxy. But deprecated when using 2.0.0-alpha! Says to use secure option but that stops passing on cookies. When set to true, the cookie is set to Secure. If commented out, cookie not set to Secure
  httpOnly: true  // Don't allow javascript to access the cookie
}))

app.get('/', function (req, res, next) {
  // Update something in the session, needed for a cookie to appear
  req.session.views = (req.session.views || 0) + 1

  // Write response
  res.end(req.session.views + ' views')
})

app.listen(3000)

So when issuing the following curl command, this is the result:

vagrant$ curl -c - -v http://localhost:3000/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Download-Options: noopen
< X-XSS-Protection: 1; mode=block
< Surrogate-Control: no-store
< Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=31536; includeSubDomains
* Added cookie session="eyJ2aWV3cyI6MX0=" for domain localhost, path /, expire 2961374488
< Set-Cookie: session=eyJ2aWV3cyI6MX0=; path=/; expires=Sun, 04 Nov 2063 04:01:28 GMT; secure; httponly
* Added cookie session.sig="DJaPtrG-tmTnVr33fOWXqWGnVlw" for domain localhost, path /, expire 2961374488
< Set-Cookie: session.sig=DJaPtrG-tmTnVr33fOWXqWGnVlw; path=/; expires=Sun, 04 Nov 2063 04:01:28 GMT; secure; httponly
< Date: Fri, 02 Dec 2016 13:30:45 GMT
< Connection: keep-alive
< Content-Length: 7
<
* Connection #0 to host localhost left intact
1 views# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.

#HttpOnly_localhost     FALSE   /       TRUE    2961374488      session eyJ2aWV3cyI6MX0=
#HttpOnly_localhost     FALSE   /       TRUE    2961374488      session.sig     DJaPtrG-tmTnVr33fOWXqWGnVlw

Notice the Strict-Transport-Security header and the set cookies with the 'secure' attribute set.
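One oddity in the output above: the cookies expire in 2063. That is because cookie-session treats maxAge as a duration in milliseconds, not an absolute timestamp, so passing Date.now() + 1 hour pushes the expiry decades into the future. A quick sanity check (a sketch; the exact number of years depends on the current date):

```javascript
// What viewcounterhelmet.js passes as maxAge: a timestamp, not a duration.
var passedMaxAge = Date.now() + 60 * 60 * 1000;
// What was probably intended: one hour, as a duration in ms.
var intendedMaxAge = 60 * 60 * 1000;

// Interpreted as a duration, the passed value is decades long,
// which explains the far-future Expires date in the curl output.
var yearsAhead = passedMaxAge / (365.25 * 24 * 60 * 60 * 1000);
console.log('cookie would expire ~' + Math.round(yearsAhead) + ' years from now');
```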

But when setting secure: true (also with the currently latest version 1.2.0), this is the result:

vagrant$ curl -c - -v http://localhost:3000/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Download-Options: noopen
< X-XSS-Protection: 1; mode=block
< Surrogate-Control: no-store
< Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=31536; includeSubDomains
< Date: Tue, 13 Dec 2016 11:27:37 GMT
< Connection: keep-alive
< Content-Length: 7
* Connection #0 to host localhost left intact

So no cookies anymore, while they should be there!  Reported issue here.

Versions used:
  • node js: 6.7.0
  • express: 4.13.30
  • cookie-session: 1.2.0
  • helmet: 0.14.0




Saturday, December 3, 2016

Kubernetes Software Components Architecture Diagram

For a recent innovative project we were able to work with Kubernetes 1.3 as a tool for automating deployment, scaling, and management of containerized applications, in our case (micro)services.


A lot of documentation is available, but I was missing an as-complete-as-possible architecture picture of all components within k8s. Sometimes the link between certain components was not shown, or a related but very relevant component was missing (e.g. Flannel). Below is the result: a full component architecture diagram of Kubernetes. It was hard to fit everything in, but I think I have everything for the level of detail I wanted to capture.

The following components can be found in the diagram: kubectl, etcd, master node, worker node, controller manager, scheduler, etcd-watch, API server, Docker, kubeproxy, deployments, container, pod, service, kubelet, label selectors, config maps, secrets, daemon sets, jobs, flannel (for subnet), client app accessing the services.
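To show how a few of those components relate in practice, here is a minimal manifest sketch (names and image are hypothetical) of a deployment managing pods via labels, exposed by a service through a label selector:

```yaml
# Deployments were still beta in Kubernetes 1.3
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-service          # label applied to every pod
    spec:
      containers:
      - name: my-service
        image: registry.local/my-service:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service              # label selector linking the service to the pods
  ports:
  - port: 80
    targetPort: 8080
```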

Let me know in the comments if I'm missing anything! 😊


Inspiration for this diagram came from (amongst other things):

Wednesday, February 10, 2016

Tomcat 7 request processing threading architecture and performance tuning

Tomcat has many parameters for performance tuning (see http://tomcat.apache.org/tomcat-7.0-doc/config/http.html), but for some attribute descriptions it is not 100% clear how the properties affect each other.

A pretty good explanation regarding acceptCount and maxThreads can be found here: http://techblog.netflix.com/2015/07/tuning-tomcat-for-high-throughput-fail.html 
But that article is missing a ... picture! That's what I tried to create here, with the numbers 1-5 indicating the steps described in the above article:



Just in case the Netflix URL ever gets lost, and for easy reference, here are the 5 steps:




The Tomcat attribute maxThreads worked best in my case with a value of 50, as higher values saturated the machine/CPU (too many worker threads cause many context switches).
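In a standalone Tomcat, these attributes go on the HTTP Connector in server.xml. A sketch (port and timeout are just illustrative defaults):

```xml
<!-- server.xml: cap worker threads at 50, queue up to 100 pending connections -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="50"
           acceptCount="100"
           connectionTimeout="20000" />
```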


To set maxThreads when using Spring Boot + embedded Tomcat: http://stackoverflow.com/questions/31432514/how-to-modify-tomcat8-acceptcount-in-spring-boot
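For reference, with Spring Boot 1.x and the embedded Tomcat this boils down to a single property in application.properties (assuming no programmatic container customization is needed):

```
# application.properties
server.tomcat.max-threads=50
```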


Other remarks


Referenced material