
Synchronizing before/after methods in Scala specifications?


To me, using Await seems okay here: if you don't run your before call synchronously, you indeed have no guarantee that it will finish before your inner tests start (after all, being able to continue the flow before a given method finishes is the whole point of using Futures, right? :) )

And if you need to ensure a certain order of things, then Await is one of the standard ways to do this AFAIK. Perhaps you could add recoverWith to handle the timeout more gracefully.
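A minimal sketch of that idea (the setupIndex helper and the 5-second timeout are my assumptions, not from the question). Note that Await.result throws its TimeoutException synchronously, so wrapping the call in Try catches the timeout as well as a failed Future:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success, Try}

object AwaitBeforeTests extends App {
  // Hypothetical asynchronous fixture setup, e.g. creating an index.
  def setupIndex(): Future[String] = Future { """{"acknowledged":true}""" }

  // Block until the setup has finished, so the tests cannot start too early.
  // The TimeoutException from Await.result is thrown synchronously, so Try
  // (rather than recoverWith on the Future alone) also covers the timeout.
  Try(Await.result(setupIndex(), 5.seconds)) match {
    case Success(body)  => println(s"Fixture ready: $body")
    case Failure(error) => sys.error(s"Fixture setup failed: $error")
  }
}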


Update: The code below is tested now.

Don't use bare waiting for synchronization in test code

The code you have written is fine and will work without problems. Synchronization using waiting solves your problem.

But this approach will not scale as you add more and more tests with different fixtures. And in concurrent situations, some of the tests will usually fail from time to time.

A little bit of theory

In a bigger project, you only want to see the real errors. Problems that don't have their roots in the unit under test tie up developers who have to analyze them, and they may hide real problems.

Different test outcomes

When you are running a bigger project with several developers over a long time, you should be able to distinguish between the following reasons why a test failed:

  • A wrong test result (proves that there is an error)
  • A problem in the test setup, e.g. the fixture was not correct, the environment was too slow, etc. (you don't know if there is an error)

In concurrent situations you get an additional source of errors:

  • Your test got a wrong result (proves that there is an error)
  • You did not get a result at all, e.g. because of a timeout (you don't know if there is an error)

How to deal with test problems

For problems with the environment and with timeouts, there is an easy strategy: restart your tests several times (perhaps after a reboot, or even on a different machine) and check whether the error goes away.

Tell the test tool about the problems

Modern test tools can help you to some degree with the problems described above. But you have to tell the test framework what you are up to. With this additional information, modern test and integration frameworks can do a lot of amazing things, e.g.:

  • Rerun tests that probably failed because of temporary problems (environment, concurrency); see the sketch after this list
  • Show you exactly which change introduced a real bug
  • Run first the tests that will probably find problems in the recently changed code
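ScalaTest, for instance, supports the first point directly through its Retries trait; a minimal sketch (the flaky assertion is just a stand-in for an environment-dependent check):

import org.scalatest.{FlatSpec, Retries}
import org.scalatest.tagobjects.Retryable

class FlakySpec extends FlatSpec with Retries {
  // Rerun tests tagged as Retryable once more if they fail the first time.
  override def withFixture(test: NoArgTest) =
    if (isRetryable(test)) withRetry { super.withFixture(test) }
    else super.withFixture(test)

  "A flaky interaction" should "be retried on failure" taggedAs Retryable in {
    assert(math.random < 0.9) // stands in for an environment-dependent check
  }
}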

For this to work, the test and integration framework must know whether the fixture or the test failed, and whether there was a wrong result or no result at all.
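In ScalaTest this distinction can be expressed with assume versus assert: a failed assume cancels a test (fixture problem), while a failed assert fails it (real error). A minimal sketch:

import org.scalatest.FlatSpec

class DistinctionSpec extends FlatSpec {
  val fixtureReady = false // imagine this flag was derived from the real fixture

  "A test with a broken fixture" should "be reported as canceled, not failed" in {
    // assume throws TestCanceledException: the report shows a fixture
    // problem, not a failure of the unit under test.
    assume(fixtureReady, "fixture was not set up correctly")
    // assert throws TestFailedException: a real wrong result.
    assert(1 + 1 == 2)
  }
}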

Fixes for your situation

Fail in the setup

There is an easy fix: check your fixture and fail early in the setup method when there is a problem with the fixture. Your test framework will then see that the problem is in the fixture and skip the dependent tests.

For this, waiting alone is not sufficient. You also have to check the results after waiting and fail if there is a problem.
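A minimal sketch of such a setup, assuming a hypothetical createIndex() helper that returns the response body as a Future[String]:

import org.scalatest.{BeforeAndAfter, FlatSpec, Matchers}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

class FailEarlySpec extends FlatSpec with Matchers with BeforeAndAfter {
  // Hypothetical helper returning the body of the index-creation response.
  def createIndex(): Future[String] = Future { """{"acknowledged":true}""" }

  before {
    // Wait for the setup Future, then check its result. An exception thrown
    // here is reported as a setup problem, not as a wrong test result.
    val body = Await.result(createIndex(), 5.seconds)
    require(body == """{"acknowledged":true}""", s"fixture setup failed: $body")
  }

  "The unit under test" should "only run against a verified fixture" in {
    "some test" should not be empty
  }
}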

Chain the concurrency

In most code I have seen, most test methods had their own fixture. While one big fixture is easy to set up, it is very difficult to maintain. Interwoven test data is really hard to refactor and to change. It is often cheaper in the long run to give each test its own test data.

When doing this, you can chain the Futures for creating and cleaning up the fixture with the Futures doing the tests. You can check the fixtures using preconditions (e.g. assume) and wait for the test result using methods from your test framework. This allows a clear distinction between the different reasons why a test fails.

Example code

Unit under test

I created the following dummy implementation for the key-value-store:

import io.netty.handler.codec.http.HttpHeaders
import io.netty.util.CharsetUtil
import netcaty.http.server.Server
import scala.util.Random

object HttpServer {
  var server = Option.empty[Server]
  val random = new Random()
  val port: Int = 9200

  def start() {
    stop()
    server = Option(netcaty.Http.respond(port, { (req, res) =>
      // Simulate a slow environment: with probability 1/8, delay the
      // response long enough to trigger timeouts on the client side.
      if (random.nextInt(8) <= 0) Thread.sleep(5000)
      // Simulate a broken fixture: with probability 1/8, answer with an error.
      val responseText =
        if (random.nextInt(8) <= 0) """{error: "Something went wrong"}"""
        else """{"acknowledged":true}"""
      val responseBytes = responseText.getBytes(CharsetUtil.UTF_8)
      res.content().writeBytes(responseBytes)
      res.headers.set(HttpHeaders.Names.CONTENT_LENGTH, responseBytes.length)
    }))
  }

  def stop() {
    server.map(_.stop())
    server = None
  }
}

This dummy causes failing fixture setup and timeout problems with a certain probability.

Test class

The test class uses chaining of fixture and test code. It waits for the test result using methods from the test framework. The framework can detect (and rerun) tests that ran into a timeout. Problems in the fixture code are identified in the preconditions and can hence be distinguished from errors in the unit under test:

import org.scalatest.concurrent.AsyncAssertions.Waiter
import org.scalatest.{Matchers, FlatSpec, BeforeAndAfter}
import org.scalatest.concurrent.{ScalaFutures, Futures, PatienceConfiguration}
import play.api.libs.ws.WS
import scala.concurrent.ExecutionContext.Implicits.global
import play.api.test._

class HttpServerTest extends FlatSpec with Matchers with BeforeAndAfter with ScalaFutures {
  val elasticSearchTestHost = "http://localhost:9200/"
  val elasticSearchTestIndex = elasticSearchTestHost + "test_index"
  implicit val application = FakeApplication()

  before {
    HttpServer.start()
  }

  after {
    HttpServer.stop()
  }

  def initTokens() = WS.url(elasticSearchTestIndex).put("")
  def cleanTokens() = WS.url(elasticSearchTestIndex).delete()

  "Async assertions" should "work in futures" in {
    val w = new Waiter
    initTokens().map { response =>
      // Check the fixture first: a failing assume marks the test as canceled.
      w { assume(response.body === """{"acknowledged":true}""") }
      // The actual test code goes here.
      w { "some test" should not be empty }
      w.dismiss() // Signal that all asynchronous assertions have run.
    }.flatMap(res => cleanTokens().map(Function.const(res)))
    w.await()
  }
}

Instead of a waiter you can also check the final result:

  "Waiting for a future" should "work in concurrent situations" in {    val f=initTokens.map(Function.const("do some tests here")).flatMap(res => cleanTokens().map(Function.const(res)))    f.futureValue should equal("""do some tests here""")    whenReady(f){response : String =>      response should equal("""do some tests here""")    }  }

Attention: the check for the precondition is missing here.

In case of a timeout, you will get the following test result:

org.scalatest.concurrent.Futures$FutureConcept$$anon$1: A timeout occurred waiting for a future to complete. Queried 11 times, sleeping 15 milliseconds between each query.

In my eyes this is a clear and concise error message. It can be processed automatically, and the framework can differentiate the timeout problem from a bug in the unit under test.

SBT file

scalaVersion := "2.11.1"scalacOptions ++= List("-feature","-deprecation", "-unchecked", "-Xlint")resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"libraryDependencies ++= Seq(  "org.scalatest" %% "scalatest" % "2.1.6" % "test",  "com.typesafe.play" %% "play-ws" % "2.3.1",  "com.typesafe.play" %% "play-integration-test" % "2.3.1",  "tv.cntt" %% "netcaty" % "1.3")

Yeah Scala!

As a side note: I am really impressed by the rich ecosystem in Scala. Above is a complete, runnable example of concurrent tests, together with a simple HTTP server.