Speed up R scripts – Use your PROCESSOR (future.apply)

Most of us already have hardware that can speed up our R scripts. The first place to look is the processor: we can use its multiple cores to parallelize the operations we perform. In today's post, we will show how to speed up a script using the future.apply library. First, we load the library:
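A minimal setup, assuming future.apply is installed from CRAN:

```r
# future.apply provides parallel drop-in replacements for the base
# *apply() family; attaching it also attaches the future package
library(future.apply)
```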

The availableCores() function from the future package shows us how many cores our processor has:
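For example (the exact printed format may differ between versions of the package):

```r
library(future)

# availableCores() returns the number of CPU cores available
# to this R session
n_cores <- availableCores()
n_cores  # e.g. 8 on the machine used in this post
```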

My computer has 8 cores, which I can use for parallel processing.

Let’s also load the data we already used in the previous post about speeding up scripts:
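The original files are not reproduced here, so below is a hypothetical stand-in built with the terra package (the use of terra, the object names beyond `r` and `grid`, and the synthetic values are all assumptions for illustration):

```r
library(terra)

# Stand-in for the data from the previous post: a value raster `r`
# and a regular grid of 1000 polygons
r <- rast(nrows = 1000, ncols = 1000, xmin = 0, xmax = 100, ymin = 0, ymax = 100)
values(r) <- runif(ncell(r))

# build the polygon grid from a coarser raster template (25 x 40 = 1000 cells)
template <- rast(nrows = 25, ncols = 40, xmin = 0, xmax = 100, ymin = 0, ymax = 100)
values(template) <- 1:ncell(template)  # unique value per cell
grid <- as.polygons(template)          # one polygon per unique value
```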

This time our grid is larger and has 1000 polygons.

Let’s write a function that we want to execute in a loop. This function will:

  • select a particular polygon from the grid layer,
  • crop a section of the raster r to the area of the selected polygon,
  • calculate the mean value of the pixels from the cropped section that fall inside the polygon.

Our function looks like this:
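A sketch of such a function, assuming terra and the `grid` and `r` objects from the data-loading step; the name `extract_mean` and the exact implementation are illustrative:

```r
library(terra)

# assumes `grid` (SpatVector of polygons) and `r` (SpatRaster) are loaded
extract_mean <- function(i) {
  poly <- grid[i, ]                    # 1. select the i-th polygon
  cropped <- crop(r, poly)             # 2. cut the raster to the polygon's extent
  masked <- mask(cropped, poly)        # 3. drop pixels outside the polygon
  mean(values(masked), na.rm = TRUE)   # 4. mean of the remaining pixels
}
```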

Let’s run the function in a loop using the R base function sapply() and see how long it takes using system.time():
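Something along these lines, assuming the hypothetical helper `extract_mean()` and the `grid` object sketched earlier are in the workspace:

```r
# sequential run over all 1000 polygons, timed with system.time()
timing <- system.time(
  means <- sapply(seq_len(nrow(grid)), extract_mean)
)
timing
```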

The sapply() version finished in 155.4 seconds.

Now we will use the capabilities of our processor to speed up the computation. First, we need to define an execution plan for the future framework:
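For example, multisession workers (separate background R processes), one per core:

```r
library(future)

# multisession = independent background R sessions; a safe default
# that works on Windows, macOS and Linux alike
plan(multisession, workers = availableCores())
```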

And replace sapply with the future_sapply function:
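Assuming the `extract_mean()` helper and `grid` object sketched earlier, the only change in the call is the function name; future_sapply() splits the indices into chunks and sends them to the workers defined by plan():

```r
library(future.apply)

timing_parallel <- system.time(
  means_parallel <- future_sapply(seq_len(nrow(grid)), extract_mean)
)
timing_parallel
```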

The future_sapply function ran in 39.3 seconds, almost 4x faster.

The future.apply library has many other features that we can use to speed up our scripts. We recommend reading their documentation.
