Integrate JobRunr into Play Framework [Update Sep 2025]
- The admin dashboard can run on the same instance as the background server. We don't need to deploy a separate instance for the admin dashboard. This is generally okay because the admin dashboard doesn't use many resources.
- Regarding the above, if your traction is low enough, running the web service, the background job server, and the admin dashboard on a single instance (e.g. a Render instance) should be possible. It'd save even more cost.
- case object cannot be used as a job parameter due to the limitation of its JSON serialisation. JobRunr uses Jackson for serialization.
- We don't need to use Future in the background job. In fact, we should actively avoid Future in the background job code. We can avoid it by using Await.result(..) at the top level (see the sketch after this list). In general, using Future with Thread.sleep(..) may cause a thread starvation problem.
- We've upgraded to the Pro version because we need to use priority queues, mutexes, and reserved workers. We've learned that the queue configuration needs to be set up for both the background server and the job scheduler. Otherwise, a scheduled job wouldn't run on the correct queue.
- We've "unprivated" the background server member in order to support running all the pending background jobs in tests. We'll show it in a later section.
- Setting up cron jobs is only needed on the instance where we run the background server.
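To illustrate the point about avoiding Future, here's a minimal sketch of blocking at the top level of a job handler. The SendEmailRequest, SendEmailRequestHandler, and Mailer names are hypothetical, and the 5-minute timeout is arbitrary:

import javax.inject.{Inject, Singleton}
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import org.jobrunr.jobs.lambdas.{JobRequest, JobRequestHandler}

// Hypothetical job request and mailer, only here to illustrate the Await.result(..) pattern.
case class SendEmailRequest(emailId: String) extends JobRequest {
  def getJobRequestHandler = classOf[SendEmailRequestHandler]
}

trait Mailer {
  def send(emailId: String): Future[Unit]
}

@Singleton
class SendEmailRequestHandler @Inject() (
  mailer: Mailer
)(implicit ec: ExecutionContext) extends JobRequestHandler[SendEmailRequest] {

  def run(req: SendEmailRequest): Unit = {
    // Block at the top level. The handler runs on a JobRunr worker thread,
    // and the job should only be marked as succeeded when the work has actually finished.
    Await.result(mailer.send(req.emailId), 5.minutes)
  }
}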
This blog shows you how you can integrate JobRunr with Play Framework.
JobRunr is one of the popular background processing frameworks for JVM applications, but it is geared toward Java and Spring Boot. Since there's no popular Scala background processing framework, and I don't want to roll my own, I've decided to use JobRunr.
This is exactly what Scala intends: being able to utilise Java libraries and frameworks. It's one of the main reasons why Scala runs on the JVM and is interoperable with Java.
JobRunr's quick architecture overview
JobRunr has 4 main components: the job itself, the background job runner (for some reason, it is called "backgroundJobServer"), the scheduler (which adds new job requests), and the admin dashboard.
- Job requests and their handlers are the jobs that you want to queue and have JobRunr run.
- The scheduler is an instance in your application where you can schedule a job request. When doing so, a job request would be added to JobRunr's database table.
- The admin dashboard exposes the web interface for debugging and for other controls (e.g. triggering a recurring job).
- The background job runner (or "backgroundJobServer") runs on its own process. It would poll for new jobs from the database and run the jobs when appropriate.
JobRunr persists its jobs in a database and supports Postgres, so you have to set up the database connection properly. However, it completely manages its own database tables out of the box; you don't need to do anything else.
Set up a job request and its handler
Setting up a job request is straightforward but there are 2 things to look out for:
- A job request should be serializable because it will be serialized and deserialized in and out of a database.
- A job request handler needs to play well with Play's injection framework because it might need to inject other important things like WSClient or Slick-related code for reading from and writing to the database.
Here's a simple request and its handler:
case class ProcessUpdateRequest(updateId: String) extends JobRequest {
  def getJobRequestHandler = classOf[ProcessUpdateRequestHandler]
}

@Singleton
class ProcessUpdateRequestHandler @Inject() (
  wsClient: WSClient,
  app: Application
)(implicit ec: ExecutionContext)
  extends JobRequestHandler[ProcessUpdateRequest] {

  def run(req: ProcessUpdateRequest): Unit = {
    // do something...
  }
}
Notice that ProcessUpdateRequestHandler works within Play's injection framework and is able to acquire the instance of Application.
Initialisation using Play Modules
If you depend on Play Framework a lot, you will need to learn to master Play Modules. A Play module can be used to initialise instances of classes and register them with Play's injection framework, which is backed by Guice by default.
The main instance that we want to initialise is JobRequestScheduler, which is used for scheduling a job request to be worked on.
Let's set up a module first by making JobRunrModule and adding it to application.conf:
play.modules.enabled += "modules.JobRunrModule"
Next, we want to make an injectable JobRequestScheduler, so you will be able to use it in various places inside your Play application. JobRequestScheduler requires StorageProvider, which in turn requires a DataSource.
I'd recommend separating out StorageProvider into its own injectable instance because it'll be used by the background job runner as well.
First, we make StorageProviderProvider:
@Singleton
class StorageProviderProvider @Inject() (
  config: Configuration,
  lifecycle: ApplicationLifecycle
)(implicit
  ec: ExecutionContext
) extends Provider[StorageProvider] {

  lazy val provider: StorageProvider = {
    val dbConfig = new HikariConfig()
    dbConfig.setDataSource({
      val d = new slick.jdbc.DatabaseUrlDataSource()
      d.setDriverClassName(config.get[String]("slick.dbs.default.db.properties.driver"))
      d.setUrl(config.get[String]("slick.dbs.default.db.properties.url"))
      d.setDeregisterDriver(true)
      d
    })

    // Render and others put a limit on the Postgres connections. Please be mindful because:
    // - During a zero-downtime deployment, there'll be 2 instances running.
    //   Therefore, the number of connections used will be doubled.
    // - In the background, where the parallelization is low, we don't need that many connections.
    dbConfig.setMaximumPoolSize(3)
    dbConfig.setMinimumIdle(3)
    dbConfig.setConnectionTimeout(10000)
    dbConfig.setValidationTimeout(5000)

    val dataSource = new HikariDataSource(dbConfig)
    lifecycle.addStopHook(() => Future(dataSource.close()))

    val provider = SqlStorageProviderFactory.using(dataSource)
    lifecycle.addStopHook(() => Future { provider.close() })
    provider
  }

  def get(): StorageProvider = provider
}
Then, we make JobRunrBaseConfiguration. Remember, we mentioned earlier that the scheduler and the background server need to share the exact same config in order to work correctly. This provider achieves that:
@Singleton
class JobRunrBaseConfiguration @Inject() (
  app: Application,
  storageProvider: StorageProvider
)(implicit
  ec: ExecutionContext
) extends Provider[JobRunrConfiguration] {

  import JobRunrBaseConfiguration._

  private[this] val logger = Logger(this.getClass)

  // It is important that the scheduler and the background runner use the same queue config.
  // It's also important that this is a def, so it works in tests.
  def get(): JobRunrConfiguration = JobRunrPro
    .configure()
    .useStorageProvider(storageProvider)
    .useQueues("normal", "high", "normal", "low")
    .withJobFilter(new DefaultRetryFilter(3))
    .useJobActivator(new JobActivator {
      def activateJob[T](tpe: Class[T]): T = app.injector.instanceOf[T](tpe)
    })
    .useBackgroundJobServer(
      BackgroundJobServerConfiguration
        .usingStandardBackgroundJobServerConfiguration()
        .andWorkerCount(12)
        .andDynamicQueuePolicy(
          new FixedSizeWorkerPoolDynamicQueuePolicy(
            "tenant:",
            Map("dedicated" -> 3.asInstanceOf[Integer]).asJava
          )
        )
        .andInterruptJobsAwaitDurationOnStopBackgroundJobServer(
          app.mode match {
            case Mode.Dev | Mode.Test => Duration.ofSeconds(1)
            case Mode.Prod => Duration.ofSeconds(200)
          }
        ),
      // Notice that the background server isn't set to start automatically.
      // This is because this config is also used in the web service,
      // which only queues jobs (it doesn't run them).
      false
    )
}
The above code also gives an example of how to set up priority queues and a dynamic queue.
Please notice that JobActivator is responsible for retrieving a job request handler through Play's injection framework. In our case, T would be ProcessUpdateRequestHandler. This is how everything is tied together.
Now we can set up the scheduler:
@Singleton
class JobRequestSchedulerProvider @Inject() (
  lifecycle: ApplicationLifecycle,
  baseConfiguration: JobRunrBaseConfiguration
)(implicit
  ec: ExecutionContext
) extends Provider[JobRequestScheduler] {

  private[this] val logger = Logger(this.getClass)

  lazy val scheduler: JobRequestScheduler = {
    val scheduler = baseConfiguration
      .get()
      .initialize()
      .getJobRequestScheduler()
    lifecycle.addStopHook(() => Future { scheduler.shutdown() })
    scheduler
  }

  def get(): JobRequestScheduler = scheduler
}
Finally, we set up the root module for Play to initialise as shown below:
class JobRunrModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[StorageProvider])
      .toProvider(classOf[StorageProviderProvider])
    bind(classOf[JobRequestScheduler])
      .toProvider(classOf[JobRequestSchedulerProvider])
      .asEagerSingleton()
  }
}
FYI, we are using Slick, so we reuse the database URL that is already configured for Slick.
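For reference, here's a minimal sketch of what the relevant Slick entries in application.conf might look like, assuming a Heroku/Render-style database URL (the data source class, credentials, and host below are placeholders you should adapt):

slick.dbs.default {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
    properties {
      driver = "org.postgresql.Driver"
      url = "postgres://user:password@localhost:5432/my_database"
    }
  }
}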
Now you should be able to inject JobRequestScheduler anywhere you'd like to use it. For example, here's how you can use it in a controller:
class TestController @Inject() (
  jobScheduler: JobRequestScheduler,
  cc: ControllerComponents
) extends AbstractController(cc) {

  def index = Action {
    jobScheduler.enqueue(ProcessUpdateRequest("some_id"))
    Ok("It works!")
  }
}
Or you can use a more comprehensive version that allows you to configure a multitude of things:
jobScheduler.create(
  JobBuilder
    .aJob()
    .withJobRequest(ProcessUpdateRequest("some_id"))
    .withQueue("high")
    .withMutex("mutex/some_id")
    .withLabels("tenant:dedicated")
)
Initialise JobRunr Background Server & Admin Dashboard
Using a 3rd-party framework like JobRunr is nicer than building my own framework because it also offers an admin dashboard, which would have been tedious to build myself.
I'd recommend putting the background server and the admin dashboard on the same instance. This instance will run a main class named JobRunrMain:
object JobRunrMain {
  def main(args: Array[String]): Unit = {
    val app = GuiceApplicationBuilder(Environment.simple(mode = Mode.Prod)).build()
    Play.start(app)

    new JobRunrMain(app).initialize()

    // Block forever to keep the process alive.
    Thread.currentThread().join()
  }
}

class JobRunrMain(app: Application) {
  lazy val jobRunrConfig: JobRunrConfiguration = app.injector
    .instanceOf[JobRunrBaseConfiguration]
    .get()
    .useDashboard(
      JobRunrDashboardWebServerConfiguration
        .usingStandardDashboardConfiguration()
        .andPort(8000)
        .andDynamicQueueConfiguration("Tenants", "tenant:")
    )

  // This will be used in tests.
  lazy val backgroundJobServer: BackgroundJobServer = {
    val field = jobRunrConfig.getClass.getDeclaredField("backgroundJobServer")
    field.setAccessible(true)
    field.get(jobRunrConfig).asInstanceOf[BackgroundJobServer]
  }

  def initialize(): Unit = {
    val _ = jobRunrConfig.initialize()
    setupCronJobs()

    // Start the background server.
    backgroundJobServer.start()
  }

  def setupCronJobs(): Unit = {
    val scheduler = app.injector.instanceOf[JobRequestScheduler]
    scheduler.createRecurrently(
      RecurringJobBuilder
        .aRecurringJob()
        .withJobRequest(YourRecurringJob())
        .withInterval(Duration.ofSeconds(60))
        .withQueue("high")
    )
  }
}
You can run it in the dev mode with: sbt 'runMain JobRunrMain'. If you use sbt-native-packager, you can run it with: ./bin/<your-app> -main JobRunrMain.
A JobRunr gotcha about recurring jobs
If you set up a recurring job, do not use Duration. We've found that the interval clock is reset on every restart, and a deploy requires a restart.
Think about the implication: if you set the duration to every hour, and you deploy every 30 minutes, your recurring job will never run.
If you want a job to run every hour, you should use a cron expression instead.
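For example, here's what the recurring job from setupCronJobs above could look like with a cron expression instead of an interval. This is a sketch assuming RecurringJobBuilder's withCron; "0 * * * *" runs at the top of every hour:

scheduler.createRecurrently(
  RecurringJobBuilder
    .aRecurringJob()
    .withJobRequest(YourRecurringJob())
    .withCron("0 * * * *") // at the top of every hour; not reset by a restart, unlike withInterval
    .withQueue("high")
)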
Run enqueued background jobs in test
What we need to do is to start the background server, wait for all jobs to be run, then stop the background server.
Here's the code that does exactly that:
// First of all, disable retrying.
storageProvider
  .getJobList(
    JobSearchRequestBuilder.aJobSearchRequest().build(),
    Paging.AmountBasedList.ascOnCreatedAt(100000)
  )
  .asScala
  .toList
  .foreach { job =>
    job.setAmountOfRetries(0)
    storageProvider.save(job)
  }

val jobRunrMain = new JobRunrMain(app)
jobRunrMain.backgroundJobServer.start()

// You will need to implement the waitUntil mechanism yourself (a sketch follows after this block).
waitUntil(timeoutSeconds = 30) {
  storageProvider
    .getJobList(
      JobSearchRequestBuilder
        .aJobSearchRequest()
        .withStateNames(Seq(
          StateName.ENQUEUED,
          StateName.PROCESSING
        ).asJava)
        .build(),
      Paging.AmountBasedList.ascOnCreatedAt(1000)
    )
    .asScala
    .toList
    .isEmpty
}

jobRunrMain.backgroundJobServer.stop()
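The waitUntil helper above isn't part of JobRunr or Play; here's a minimal sketch of what it could look like, polling the condition every 200ms until it holds or the timeout elapses:

def waitUntil(timeoutSeconds: Int)(condition: => Boolean): Unit = {
  val deadline = System.currentTimeMillis() + (timeoutSeconds * 1000L)

  while (!condition) {
    if (System.currentTimeMillis() > deadline) {
      throw new IllegalStateException(s"The condition was not met within $timeoutSeconds seconds")
    }
    Thread.sleep(200)
  }
}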
That's it. Now you can ensure all the pending background jobs finish before checking your assertions.
Parting thoughts
In a previous life, I tried to build my own background job framework for my startup, but I never had time to build all the bells and whistles (e.g. an admin dashboard) and battle-test it. That was expected because building a background job framework was never the main job. I'm not gonna make that mistake again. So far I'm happy with JobRunr.
Wow, this is a long blog post. But I hope this helps you learn how to integrate with JobRunr and how to use Play Modules to initialise the things you want to use.