Friday, March 30, 2018

Gitlab maven:3-jdk-8 Cannot get the revision information from the scm repository, cannot run program "git" in directory error=2, No such file or directory

Introduction

In a GitLab project setup, the most recent Docker image for Maven was always retrieved from the internet before each run, because image: maven:3-jdk-8 was specified in the gitlab-ci.yml. The image details can be found here.

Of course this is not a best practice: your build can suddenly start failing because an update to the image changed something internally.
What you want is controlled updates. That way you can anticipate builds failing and plan the upgrades in your schedule.

The issue and workarounds/solutions

And indeed, on March 29, 2018 our builds suddenly started failing with this error:

[ERROR] Failed to execute goal org.codehaus.mojo:buildnumber-maven-plugin:1.4:create (useLastCommittedRevision) on project abc: Cannot get the revision information from the scm repository :
[ERROR] Exception while executing SCM command.: Error while executing command. Error while executing process. Cannot run program "git" (in directory "/builds/xyz"): error=2, No such file or directory

That message is quite unclear: is git missing? Is the directory wrong? Or could the Maven buildnumber plugin not find the SCM repository?
After lots of investigation it turned out the maven:3-jdk-8 image had indeed changed about 18 hours earlier.
And after running the Maven command in a local copy of that Docker image, the same error occurred! Awesome, the error was reproducible.
And after installing git again in the image with:

- apt-get update
- apt-get install git -y


the error disappeared!  But a new one appeared:

[ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
This also hadn't happened before. After some searching it turned out the cause might be outdated surefire and failsafe plugins.
So I updated them to 2.21.0, and indeed the build succeeded.

Here's the issue reported in the Docker Maven GitHub repository. UPDATE: it is caused by an openjdk issue (openjdk is the image maven:3-jdk-8 is based on).

This issue made us realize we really need an internal Docker repository. And so we implemented that :)

One disadvantage of Docker images is that you can't specify something like a commit hash to use. Yes, you can specify a digest instead of a tag, but that digest is just a unique hash: you can no longer tell the (related) tag name from it.
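For illustration, pinning in the .gitlab-ci.yml could look roughly like this (the tag and digest below are placeholders, not the ones we actually used):

# pin a specific, non-moving tag instead of the floating maven:3-jdk-8 alias
image: maven:3.5.3-jdk-8

# or pin the exact image by digest; this is immutable, but the related tag name
# is no longer visible from the reference itself:
# image: maven:3-jdk-8@sha256:<digest>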







Saturday, February 3, 2018

How to prevent Chromium from rebooting a Raspberry Pi 3 model B Rev 1.2




I tried to use a Raspberry Pi 3 model B Rev 1.2 as a dashboard for monitoring a couple of systems using Chromium as browser.

Tip: use  this to have it never turn off the display:
sudo xset s off
sudo xset -dpms
sudo xset s noblank

I had only two tabs open all the time and was using the Revolver browser extension to rotate the tabs. One tab had the default Datadog page open, another a custom dashboard within Kibana that refreshed every 15 minutes.
Using all default settings, within a few hours the Pi would reboot on its own! So something was causing it to do that.

It seemed the browser tabs, or the JavaScript in them, were leaking so much memory that the Pi ran out of memory. I tried multiple times with the same default setup, and the behavior was the same each time.

So I tried a couple of other things:
  1. Have the Revolver plugin fully reload the page. Still a reboot of the Pi, though it took a bit longer

  2. Added --process-per-site to the startup shortcut of Chromium. This causes Chrome to create fewer processes, which should reduce the memory usage a bit. But still a reboot of the Pi, though again it took a bit longer.
    Note that this also comes with its own weaknesses.

  3. Added --disable-gpu-program-cache to the startup shortcut of Chromium. Again still rebooted the Pi after a while.

  4. Tried other browsers like Midori and Firefox Iceweasel.  Midori does not have a Revolver-like plugin, so it didn't fit the requirements. Firefox's only add-on that should work gave some kind of "invalid format" error (don't remember exactly) when trying to install it. The other add-ons for Firefox were not compatible with Iceweasel.

So in the end I did not find a solution :(  I just built a cron-job that would restart the browser every 5 hours.
If you found a way to fix this problem, let the world know in the comments!


Friday, December 29, 2017

Upgrading Dell M4700 from 500G HDD to 1T Samsung Evo 850 SSD v-nand

Below is a list of things encountered when upgrading a Dell Precision M4700 laptop with Windows 7 with a 500G HDD to a 1 terabyte Samsung Evo v-nand 850 SSD.

  • Used HDClone 7 Free Edition to make a copy of the current harddisk, copied onto a Samsung T5 external SSD. HDClone very nicely copies over everything, even from a live (running) Windows 7 machine. It also creates all the partitions on the external SSD; these are visible as separate drives when reconnecting the USB drive. It can even be made bootable, but I didn't need that.

  • Swapped in the SSD as shown here: https://www.youtube.com/watch?v=D6Cn3bONxEo
    Note how the SDD has to "click" with the notches of the bracket (metal frame).

  • Put in the Dell Windows 7 SP1 DVD and installed Windows. It shows "Windows is loading files..." for quite a while, but it got through after about 5-10 minutes.

  • After logging in, the wireless device was not detected, so no internet. Other drivers were also not installed yet or had failed:

  • Namely: "Ethernet controller", "Network controller", "PCI Simple Communications Controller", "SM Bus Controller", "Universal Serial Bus (USB) Controller" and "Unknown device".

  • Then tried to use the Dell 'Resource Media' DVD to install the drivers. But the program it starts is just plain impossible to understand. E.g. see the screenshot below:


    The touchpad is marked as installed (I think that's what the checkbox indicates). But when I then installed any other driver, no checkbox appeared on the left of any of them.
    Plus, in what order should the drivers be installed? Found this post, but that seems like a lot of manual work. You also have to know which devices are in your machine to know which driver matches. E.g. this post shows which driver to look for for one specific error. You should be able to find that device list on the invoice from when you ordered your M4700.

    In the end managed to get the ethernet driver installed (filter on the word 'ethernet' on the Dell drivers page, you should find "Intel I2xx/825xx Gigabit Ethernet Network Controller Drivers")

  • Then, after installing the above ethernet driver, the internet connection worked via a wired cable. Then used Dell's 'Analyze Detect Drivers' option to update the correct drivers in the right order.
    All but the 'Unknown device' errors were then gone in the Device Manager. I didn't dare to update the BIOS, since that was working fine before.

  • After that, about 180 Windows updates to install, and then all worked fine. The machine's score is now 7.3 on a scale with a maximum of 7.9 (no idea what it was before I upgraded):



    But I do notice the difference in startup for example: complete Windows 7 startup from powered off state is about 10-15 seconds. Not bad :)

  • And then the tedious job of installing all non-OS software began...
Lessons learned for next time:
  • HDClone is very handy
  • Export all browsers' favorites before taking out the old disk. You can still recover the favorites from the old disk, but not in an easily importable format.
  • The Dell Resource Media DVD is impossible to understand.
  • Keep the service tag of your M4700 ready.

Thursday, December 28, 2017

Logback DBAppender sometimes gives error on AWS Aurora: IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

LOGBack DBAppender IllegalStateException


Sometimes when starting a Spring Boot application with Logback DBAppender configured for PostgreSQL or AWS Aurora in logback-spring.xml, it gives this error:

java.lang.IllegalStateException: Logback configuration error detected: ERROR in ch.qos.logback.core.joran.spi.Interpreter@22:16 - RuntimeException in Action for tag [appender] java.lang.IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

The error can be quite confusing. According to the documentation, Logback should be able to detect the dialect from the driver class.

But apparently it doesn't, sometimes. After investigating, it turns out this error is also given when the driver can't connect to the database at all: in that case Logback can't read the metadata it uses to detect the dialect either, and so you get the same error.
A confusing error message indeed.

A suggestion in some post was to specify the <sqlDialect> tag, but that is not needed anymore in recent Logback versions. Indeed, it now gives these errors when you put it in the logback-spring.xml file, either below <password> or below <connectionSource>:

ERROR in ch.qos.logback.core.joran.spi.Interpreter@25:87 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][connectionSource][dataSource][sqlDialect]]
or
ERROR in ch.qos.logback.core.joran.spi.Interpreter@27:79 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][sqlDialect]]
To get a better error message, it's better to set up the Logback DBAppender in code instead of in logback-spring.xml. See here and here for examples.
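As an illustration, here is a minimal sketch of such a programmatic setup. This is an assumption-laden example rather than our actual code: it assumes logback-classic's DBAppender and DriverManagerConnectionSource, and the driver class, URL and credentials are placeholders.

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.db.DBAppender;
import ch.qos.logback.core.db.DriverManagerConnectionSource;
import org.slf4j.LoggerFactory;

public class DbAppenderConfig {

    public static void attachDbAppender() {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // If the connection details are wrong, a clear JDBC error surfaces here,
        // instead of the confusing "dialect" message from the XML-based setup.
        DriverManagerConnectionSource connectionSource = new DriverManagerConnectionSource();
        connectionSource.setContext(context);
        connectionSource.setDriverClass("org.postgresql.Driver");            // placeholder
        connectionSource.setUrl("jdbc:postgresql://localhost:5432/logging"); // placeholder
        connectionSource.setUser("logger");                                  // placeholder
        connectionSource.setPassword("secret");                              // placeholder
        connectionSource.start();

        DBAppender dbAppender = new DBAppender();
        dbAppender.setContext(context);
        dbAppender.setConnectionSource(connectionSource);
        dbAppender.start();

        // attach to the root logger so all log events end up in the database
        Logger rootLogger = context.getLogger(Logger.ROOT_LOGGER_NAME);
        rootLogger.addAppender(dbAppender);
    }
}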




Thursday, November 2, 2017

What's not so good about my new Dell XPS 15 laptop (+ a bunch of good things)

Recently I got a new laptop, again a Dell. I decided to go for a thin "ultrabook", the XPS 15, with 16G RAM and a 512G SSD. You can find a good review here. I deliberately didn't take the 4K version, since some reviews say the machine struggles a bit with it. Plus some software, like Remote Desktop, just can't handle it, so you have to scale down anyway.



Here is an overview of the pros and cons I found while using it.

Pros

  • Sleek design

  • Thin

  • A lot lighter than the M4700

  • Smaller power-supply

  • Fast; no problem with 3-4 IntelliJ workspaces open, over 50 Chrome tabs, DBeaver, Firefox with about 10 tabs

Cons

  • Some backlight bleeding in the bottom right corner of the screen. Most visible when showing a black screen; not really noticeable during daylight. Here's an example of a really bad case.

  • Crappy keyboard: the Page Up/Down, Home and End keys can only be used by pressing the Fn key. If you are a coder this is really annoying, since you use those keys a lot. Hopefully next time they'll make separate keys again.

  • Corners get scratched easily if you put it in your bag without a protective sleeve

  • Sometimes a flicker (screen turns completely black) on the externally connected screen. Not sure yet if it's the cable.

  • The screen can't fold back fully flat.

  • For some reason the default display scaling is set to 125% right after using it for the first time.

  • The HDMI connector is on the side! And it sometimes sits a bit in the way when using the mouse.

  • The professional version of the XPS 15, the Precision 5520, should give you better-quality components. But the warranty period has been reduced from 3 years to 1 year. Is the Precision 5520, which costs over 500 euros more, then still worth buying? Apparently they no longer dare to give a longer warranty on the better-quality components...


Just got a sleeve for the XPS-15, a CushCase. Ordered via Amazon. Took about 3 weeks to arrive. Fits in a regular mailbox. Fits the XPS-15 nicely; no need to really push.

Wednesday, August 16, 2017

Lessons learned - Jackson, API design, Kafka

Introduction

This blogpost describes a bunch of lessons learned during a recent project I worked on.
They are just a bunch grouped together, too small to "deserve" their own separate post :)
Items discussed are: Jackson, JSON, API design, Kafka, Git.


Lessons learned

  • Pretty print (nicely format) JSON in a Linux shell prompt:

    cat file.json | jq .

    You might have to 'apt-get install jq' first.

  • OpenOffice/LibreOffice formulas:

    To add one year and one month to the date in cell H1031: =DATE(YEAR(H1031)+1; MONTH(H1031)+1; DAY(H1031))

    Count how many times a range of cells A2:A4501 has the value 1: =COUNTIF(A2:A4501; 1)

  • For monitoring a system a split between a health-status page and performance details is handy. The first one is to show issues/metrics that would require immediate action. Performance is for informational purposes, and usually does not require immediate action.

  • API design: even if you have a simple method that just returns, for example, a date (string), always return JSON (and not just a String with that value). Useful for backwards compatibility: more fields can easily be added later. See the first sketch below this list.

  • When upgrading GitLab, it had renamed a repository named 'users' to 'users0'. It turns out 'users' is a reserved repository name in GitLab since version 8.15.

    To change your local git settings to the new users0 perform these steps to update your remote origin:

    # check current setting
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users (fetch)
    origin  https://gitlab.local/gitlab/backend/users (push)

    # change it to the new one
    $ git remote set-url origin https://gitlab.local/gitlab/backend/users0

    # see it got changed
    $ git remote -v
    origin  https://gitlab.local/gitlab/backend/users0 (fetch)
    origin  https://gitlab.local/gitlab/backend/users0 (push)

  • Jackson JSON generation (serializing): it is probably good practice not to use @JsonInclude(JsonInclude.Include.NON_EMPTY) or NON_NULL, since that means a key is simply absent from the JSON when its value is empty or null. That can be confusing to the caller: sometimes the key is there, sometimes not. So just leave it in, so it will be serialized with a null value. Unless it is a field that only makes sense together with another one, like amount and currency: if amount is null, currency (maybe) doesn't make sense either, so then it could be left out. See the second sketch below this list.

  • Java:

    Comparator<UserByPhoneNumber> userComparator = (o1, o2) -> o1.getCreated().compareTo(o2.getCreated());

    can be replaced in Java 8 by:

    Comparator<UserByPhoneNumber> userComparator = Comparator.comparing(UserByPhoneNumber::getCreated);

  • Kafka partitioning tips: http://blog.rocana.com/kafkas-defaultpartitioner-and-byte-arrays

  • Kafka vs RabbitMQ:

    - Kafka is optimized for producers producing lots of data (batch-oriented producers) and consumers that are usually slower than the producers.
    - Performance: RabbitMQ does roughly 20K messages/s, Kafka up to 150K messages/s.
    - Unlike other messaging systems, Kafka brokers do not keep track of what each consumer has read. This means that the consumer has to maintain how much (up to which offset) it has consumed.
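To illustrate the API-design item above: instead of returning the bare string "2017-08-16", wrap it in a small JSON object so fields can be added later. A minimal sketch (the class and field names are made up for illustration):

import com.fasterxml.jackson.databind.ObjectMapper;

// Wrapping the value in a DTO keeps the API extensible: extra fields can be
// added later without breaking existing callers.
public class BusinessDateResponse {

    private String businessDate;

    public BusinessDateResponse() { }

    public BusinessDateResponse(String businessDate) {
        this.businessDate = businessDate;
    }

    public String getBusinessDate() { return businessDate; }
    public void setBusinessDate(String businessDate) { this.businessDate = businessDate; }

    public static void main(String[] args) throws Exception {
        // prints {"businessDate":"2017-08-16"} instead of just "2017-08-16"
        System.out.println(new ObjectMapper().writeValueAsString(new BusinessDateResponse("2017-08-16")));
    }
}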

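And to illustrate the Jackson @JsonInclude item above, a small sketch (the Payment classes are made up) showing how NON_NULL makes a key disappear from the JSON:

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonIncludeDemo {

    // With NON_NULL the 'currency' key is simply absent when it is null,
    // which can confuse callers: sometimes the key is there, sometimes not.
    @JsonInclude(JsonInclude.Include.NON_NULL)
    static class PaymentNonNull {
        public java.math.BigDecimal amount;
        public String currency;
    }

    // Default behaviour: the key is always present, with value null.
    static class PaymentDefault {
        public java.math.BigDecimal amount;
        public String currency;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        PaymentNonNull p1 = new PaymentNonNull();
        p1.amount = new java.math.BigDecimal("9.99");
        System.out.println(mapper.writeValueAsString(p1)); // {"amount":9.99}

        PaymentDefault p2 = new PaymentDefault();
        p2.amount = new java.math.BigDecimal("9.99");
        System.out.println(mapper.writeValueAsString(p2)); // {"amount":9.99,"currency":null}
    }
}
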
Friday, August 11, 2017

Java: generate random Date between now minus X months plus Y months

Introduction

This blogpost shows a Java 8+ code example of how to generate a random timestamp between two dates relative to today ("now").

The code

This example code creates a random java.util.Date between 12 months before today and 1 month after today; the result ends up in the variable randomDate.

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

// class-level fields holding the begin and end of the range in epoch milliseconds
private static long beginTimeInMilliseconds;
private static long endTimeInMilliseconds;

LocalDateTime nowMinusYear = LocalDateTime.now().minusMonths(12);
ZonedDateTime nowMinusYearZdt = nowMinusYear.atZone(ZoneId.of("Europe/Paris"));
beginTimeInMilliseconds = nowMinusYearZdt.toInstant().toEpochMilli();

LocalDateTime nowPlusMonth = LocalDateTime.now().plusMonths(1);
ZonedDateTime nowPlusMonthZdt = nowPlusMonth.atZone(ZoneId.of("Europe/Paris"));
endTimeInMilliseconds = nowPlusMonthZdt.toInstant().toEpochMilli();

System.out.println("System.currentTimeMillis() = " + System.currentTimeMillis() + ", beginTimeInMilliseconds = " + beginTimeInMilliseconds + ", endTimeInMilliseconds = " + endTimeInMilliseconds);

Date randomDate = new Date(getRandomTimeInMillisBetweenTwoDates());
...

// picks a random millisecond value in the interval [begin, end]
private static long getRandomTimeInMillisBetweenTwoDates() {
   long diff = endTimeInMilliseconds - beginTimeInMilliseconds + 1;
   return beginTimeInMilliseconds + (long) (Math.random() * diff);
}



How do Kubernetes and its pods behave regarding SIGTERM, SIGKILL and HTTP request routing

Introduction

During a recent project we saw that HTTP requests are still arriving in pods (Spring Boot MVC controllers) even though Kubernetes' kubelet told the pod to exit by sending it a SIGTERM.
Not nice, because that means that those HTTP requests that still get routed to the (shutting down) pod will most likely fail, since the Spring Boot Java process, for example, has already closed all its connection pools.

See this post (also shown below) for an overview of the Kubernetes architecture, e.g regarding kubelets.


Analysis

The process for Kubernetes to terminate a pod is as follows:
  1. The kubelet always sends a SIGTERM before a SIGKILL.
  2. Only when a pod does not finish within the grace period (default 30 seconds) after the SIGTERM does the kubelet send a SIGKILL.
  3. Kubernetes keeps routing traffic to a pod until the readiness probe fails, even after the pod received a SIGTERM.
So for a pod there is always an interval between receiving the SIGTERM and the next readiness probe request for that pod. In that period requests can (and most likely will) still be routed to that pod, and even (business) logic can still be executed in the terminating pod.

This means that after the SIGTERM is sent, the readiness probe must fail as soon as possible to prevent the SIGTERMed pod from receiving more HTTP requests. But there will still be a (small) window of time in which requests can be routed to the pod.

A solution would be to shut down the webserver inside the pod's process (in this case Spring Boot's embedded webserver) gracefully, immediately after receiving the SIGTERM. That way any requests that are still routed to the pod before the readiness probe fails are rejected outright, i.e. no more requests are accepted.
So you would still have some failing requests being passed to the pod, but at least no business logic will be executed anymore.

This and other options/considerations are discussed here.
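As an illustration of the 'make the readiness probe fail immediately' part, here is a minimal sketch for Spring Boot. It is just one possible approach, not the project's actual code; the endpoint path and class name are made up, and the Kubernetes readinessProbe is assumed to point at /ready with a short period:

import java.util.concurrent.atomic.AtomicBoolean;

import javax.annotation.PreDestroy;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Readiness endpoint that starts failing as soon as the application begins shutting
// down (Spring Boot's shutdown hook runs @PreDestroy when the SIGTERM arrives).
@RestController
public class ReadinessController {

    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    @GetMapping("/ready")
    public ResponseEntity<String> ready() {
        if (shuttingDown.get()) {
            // 503 makes the readiness probe fail, so Kubernetes stops routing traffic here
            return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body("shutting down");
        }
        return ResponseEntity.ok("ready");
    }

    @PreDestroy
    public void onShutdown() {
        shuttingDown.set(true);
        // Optionally wait a few seconds here so in-flight requests can finish
        // before the rest of the application context is closed.
    }
}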





Wednesday, August 9, 2017

Cassandra Performance Tips - Batch inserts, Materialized views, Fallback strategy

Introduction

During a recent project we ran into multiple issues with Cassandra's performance: for example, queries being slow or timing out in only one specific environment (even though the environments should have the same setup), inconsistently stored results, and the question of how to optimize batch inserts when using Scala.

This blogpost describes how they were solved or attempted to be solved.


Setup: Cassandra running in a cluster with three nodes.

Performance related lessons learned

  1. On certain environments of the DTAP street (development, test, acceptance, production), slow queries (taking seconds) and weird consistency results appeared. Not all 4 environments were the same, though Acceptance and Production were kept as similar as possible.

    We found as causes:

    Slow queries and timeouts: the Cassandra driver was logging at both OS and driver level.
    Inconsistently stored results: the clocks of the different clients accessing C* were not in sync, some off by minutes. Since the default in v3 of the DataStax/Cassandra driver protocol is client-side generated timestamps, you can of course get into trouble: the write with the most recent timestamp simply always wins. But switching to server-side timestamps isn't trivial either, since different C* coordinators can generate timestamps that differ by milliseconds.

  2. For Gatling performance tests written in Scala, we first needed to insert 50K records into a Cassandra database, simulating users already registered in the system. To make this perform, several options were tried:

    a- Plain string-concatenated or prepared statements were taking over 5 minutes in total.
    b- Inserting as a batch (APPLY BATCH) has a limit of 50KiB in text size. That limit is too low for us: 50K records is almost 5MB, and splitting up was too much of a hassle.
    c- Making the calls async, as done here: https://github.com/afiskon/scala-cassandra-example
    But then we were getting:

    17:20:57.275 [ERROR] [pool-1-thread-13] TestSimulation - ERROR while inserting row nr 13007, exception =
    com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.20.25.101:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [/172.20.25.101] Pool is busy (no available connection and the queue has reached its max size 256)))
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
    at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:336)
    at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
    at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
    at com.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:102)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1184)
    at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1120)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:295)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
    at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
    at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
    at UserDAO.insert(UserDao.scala:58)
    ...


    Turns out it is the driver's local acquisition queue that fills up. You can increase it via poolingOptions.setMaxQueueSize, see: http://docs.datastax.com/en/developer/java-driver/3.1/manual/pooling/#acquisition-queue
    We set it to 50000 so it would simply queue all 50K records. For a production environment this is of course not necessarily a good idea; you might need to tune it to your needs.
    And we set the number of threads to 20 in the ExecutionContext (used by the DAO from the GitHub example above). You can set it as in this example: http://stackoverflow.com/questions/15285284/how-to-configure-a-fine-tuned-thread-pool-for-futures  A sketch of this pooling configuration is shown below, after this list.

  3. Increasing the number of CPUs from 4 to 8 did seem to improve performance; there was less CPU saturation.

  4. Each materialized view you add decreases insert performance by about 10% (see here).

  5. For consistency and availability when one of the nodes is gone or unreachable due to network problems, we set up Cassandra writes such that EACH_QUORUM is tried first, and if that fails, LOCAL_QUORUM is used as the fallback strategy.
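As referenced in point 2 above, here is a sketch of the pooling/queue-size configuration with the DataStax Java driver. The contact point and numbers are just examples, and the Scala DAO/ExecutionContext wiring from the GitHub example is omitted:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.Session;

public class CassandraSessionFactory {

    public static Session connect() {
        // Allow up to 50,000 requests to wait for a connection, so that firing
        // 50K async inserts does not immediately hit BusyPoolException.
        // For production you would tune this to a more sensible value.
        PoolingOptions poolingOptions = new PoolingOptions()
                .setMaxQueueSize(50000);

        Cluster cluster = Cluster.builder()
                .addContactPoint("172.20.25.101")   // example contact point
                .withPoolingOptions(poolingOptions)
                .build();
        return cluster.connect();
    }

    public static void main(String[] args) {
        Session session = connect();
        // The async inserts themselves were submitted from a fixed pool of 20 threads
        // (the ExecutionContext used by the Scala DAO); a Java equivalent:
        ExecutorService pool = Executors.newFixedThreadPool(20);
        // ... submit insert tasks to 'pool', each calling session.executeAsync(...) ...
        pool.shutdown();
        session.getCluster().close();
    }
}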

Below articles did help to analyse the problems further:

Tuesday, August 8, 2017

Gatling Lessons learned

Introduction

This post describes a couple of best practices when using Gatling, the Scala based performance and load-testing tool.


Also one or two Scala related tips will be shown.

Lessons learned

  1. You can't set a header-field in the header like this in a scenario:

    .header("request-tracing-id", UUID.randomUUID().toString())

    This is because the scenario is only built once; unless you use session variables, everything in it is static (it is a function definition).

    To solve this one can use a feeder, like this:

    val feeder = Iterator.continually(Map("traceHeader" -> UUID.randomUUID().toString))

    And then replace the UUID.randomUUID().toString() line with:

    .header("request-trace-id", "${traceHeader}")

  2. A Scala 2.11 example of a ternary (maybe not the best solution Scala-wise but readable :)

    .value(availableBalanceDecimal, if (dto.availableBalanceDecimal.isEmpty) null else dto.availableBalanceDecimal.get)

  3. Connecting to a service with a Gatling test in one environment (Nginx, ingress, Kubernetes) for some reason did not work, while connecting to the service under test in another environment worked fine. Apparently it had something to do with a proxy in between, because after adding .proxy() it worked:

    val httpConf = http
    .baseURL("http://172.20.33.101:30666") // Here is the root for all relative URLs
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .header(HttpHeaderNames.Accept, HttpHeaderValues.ApplicationJson)
    .proxy(Proxy("performance.project.com", 30666))   // note it is the *same* machine as the baseURL,but specified by name... 


  4. A .saveAs() inside a .check() will *not* happen when the preceding expression evaluates to false, or when the conversion fails.
    That kind of makes sense for the false case, but you can easily miss it; or maybe you don't even want the .is() here at all, because in the example below it means isFinished will only be saved (with the value "FINISHED") when the check passes, and otherwise it won't be set at all.

    So someText is always found in the session, but the other one, isFinished, is not.

    .check(jsonPath("$[0].status").saveAs("someText"))
    .check(jsonPath("$[0].status").is("FINISHED").saveAs("isFinished"))
    ...

    .exec(session => {
       val isFinished = session.get("isFinished").asOption[String]
       logger.debug("Generated isFinished = {}", isFinished.getOrElse("Could not find expected isFinished..."))
       session
    })

    .doIf(session =>
       (!session.get("someText").as[String].equals("FINISHED") ||
       session.get("isFinished").as[Boolean].equals(false)
    ))(
    ...


    When running at DEBUG level the above logs:
       Session(Map(someText -> TIMED_OUT, ...)  // So the first .saveAs() occurred
       13:45:56.983 [DEBUG] SomeSimulation - Generated isFinished = Could not find expected isFinished...  


    So the second .saveAs() did not happen for isFinished at all: it is not set to true or false; it is not set at all!