Performance impact of "... on MirrorValue"

Hey all,

I’m currently trying to figure out the fastest way to load all board items with the new API versions (2023-10 and above). One thing I realized during my tests is that using ... on MirrorValue has quite a negative impact on loading performance. Here are two screenshots of queries that fetch a slice of 500 items from a board. In the first example, ... on MirrorValue isn’t used, and the API takes about 8 seconds to respond. In the second example, ... on MirrorValue is used, and the API takes roughly 48 seconds to respond. So it’s about 6 times slower.

Is this something that could be improved? The main board view loads very fast in comparison.

Without “… on MirrorValue”:

With “… on MirrorValue”:
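For reference, the two query shapes I’m comparing look roughly like this (a sketch only; the exact field selections, and the display_value field on MirrorValue, are from my setup and may differ from yours):

```javascript
// Sketch of the two query shapes being compared. Field names follow the
// 2023-10 monday GraphQL API; "display_value" on MirrorValue is what I
// use in my setup and may need adjusting for yours.
const withoutMirror = `
  query ($boardId: [ID!]) {
    boards(ids: $boardId) {
      items_page(limit: 500) {
        cursor
        items { id name column_values { id text value } }
      }
    }
  }`;

const withMirror = `
  query ($boardId: [ID!]) {
    boards(ids: $boardId) {
      items_page(limit: 500) {
        cursor
        items {
          id
          name
          column_values {
            id
            text
            value
            ... on MirrorValue { display_value }
          }
        }
      }
    }
  }`;
```

The only difference between the two is the MirrorValue fragment, so it alone accounts for the 8 s vs. 48 s gap above.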


Hello there @xatxat,

I checked this with the team, and they shared that you should avoid fetching MirrorValue for a large number of items if possible.

This was one of the reasons we added an explicit MirrorValue fragment, so that developers don’t request mirrored columns by default in every API request.

I hope that helps!



Hi Matias,

Thanks for checking with the team. Appreciate it.

The app I’m working on turns monday boards into spreadsheets. So the options I now have are:

a) The export will take very long but will contain the values of mirror columns (for example, a board with 5000 items and two mirror columns would take roughly 8 minutes to export).

b) The export will not contain mirror columns, but it will be faster (for example, a board with 5000 items would take about 100 seconds to export).
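The arithmetic behind those two estimates, assuming sequential fetches of 500-item pages at roughly the response times from my screenshots:

```javascript
// Rough arithmetic behind the two export-time estimates above, assuming
// sequential page fetches of 500 items each. The per-page timings are
// from my earlier screenshots (~48 s with MirrorValue, ~8-10 s without;
// I use 10 s here to account for a bit of overhead).
const items = 5000;
const pageSize = 500;
const pages = items / pageSize;             // 10 pages

const secsPerPageWithMirror = 48;           // measured with "... on MirrorValue"
const secsPerPageWithout = 10;              // measured without it, plus overhead

console.log(pages * secsPerPageWithMirror); // 480 seconds, i.e. roughly 8 minutes
console.log(pages * secsPerPageWithout);    // 100 seconds
```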

Neither option is great. And even the 100 seconds from case b) is quite slow, tbh. So I’ll have to figure out how to deal with it. There are quite a few apps in the marketplace that require loading all board items.

Maybe some sort of dump endpoint would be nice, one that just returns a dump of an entire board without the GraphQL overhead, so that third-party apps can load as fast as the main board view :slight_smile:

– Simon

One suggestion I might make is to use items_page to retrieve the item IDs and some basic information that returns fast (maybe just the IDs).

Then use a simple items(ids: $itemIds) { id, column_values { ... on ColumnType } } query, batching the IDs you got in the first step, to get the actual information.

Execute a few batches at a time (using Promise.all() in JavaScript, or the equivalent in your language). You can then process them into the final output you want.

I’ll leave the exact algorithm to you. But I suspect you can overlap the work nicely: use items_page to quickly get the pages of item IDs, queue them for retrieval, and fire an event when each retrieval completes to process the results into the output (while other batches are still being fetched).

Keep your batches reasonably sized: you don’t want to starve your event loop waiting on I/O, but you also don’t want to block anything with long processing times.
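A rough sketch of that two-phase flow (the endpoint and query shapes follow the monday GraphQL API, but fetchGraphQL, the field selections, and the batch/concurrency numbers are placeholders to adapt):

```javascript
// Split an array of IDs into fixed-size batches (pure, easy to test).
function chunk(ids, size) {
  const batches = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size));
  }
  return batches;
}

// Hypothetical helper: POST a query to the monday GraphQL endpoint.
async function fetchGraphQL(query, variables) {
  const res = await fetch("https://api.monday.com/v2", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: process.env.MONDAY_TOKEN, // your API token
    },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data;
}

// Phase 1: page through the board cheaply, collecting only item IDs.
async function collectItemIds(boardId) {
  const ids = [];
  let cursor = null;
  do {
    const data = await fetchGraphQL(
      `query ($boardId: [ID!], $cursor: String) {
         boards(ids: $boardId) {
           items_page(limit: 500, cursor: $cursor) {
             cursor
             items { id }
           }
         }
       }`,
      { boardId: [boardId], cursor }
    );
    const page = data.boards[0].items_page;
    ids.push(...page.items.map((item) => item.id));
    cursor = page.cursor; // null when there are no more pages
  } while (cursor);
  return ids;
}

// Phase 2: fetch full column values for the IDs, a few batches at a time.
async function fetchAllItems(ids, batchSize = 100, concurrency = 3) {
  const batches = chunk(ids, batchSize);
  const items = [];
  for (let i = 0; i < batches.length; i += concurrency) {
    const inFlight = batches.slice(i, i + concurrency).map((batch) =>
      fetchGraphQL(
        `query ($ids: [ID!]) {
           items(ids: $ids) {
             id
             column_values { id text value }
           }
         }`,
        { ids: batch }
      )
    );
    for (const data of await Promise.all(inFlight)) {
      items.push(...data.items);
    }
  }
  return items;
}
```

The concurrency loop is the knob from the paragraph above: raise it and you spend less wall-clock time waiting on I/O, lower it and you leave more headroom for processing between waits.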


Thanks! What you’re describing is more or less how I did it before 2023-10. What changed is that it’s no longer possible (or soon won’t be) to fetch more than 100 items by ID at once. Before, I used batches of 250 items.

I’m aware that there are some tricks to speed it up a bit by running requests in parallel: for example, you can fetch the cursor of each page upfront and then run the more complex query for each page in parallel. However, it’s not really nice, and I’m sure the monday devs are also not happy about third-party apps DoSing their API like this :wink:
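To illustrate the trick (a sketch only; the query-running functions are injected as parameters to keep it self-contained, where in practice they would POST a cheap ids-only query and the expensive MirrorValue query to the monday endpoint):

```javascript
// Walk the pages once with a cheap ids-only query to collect every
// cursor, then fire the expensive MirrorValue query for all pages at
// once. "getNextCursor" and "fetchPage" are hypothetical placeholders
// for the actual monday API calls.
async function collectCursors(getNextCursor) {
  const cursors = [null]; // null requests the first page
  let cursor = null;
  do {
    cursor = await getNextCursor(cursor); // cheap query, no MirrorValue
    if (cursor) cursors.push(cursor);
  } while (cursor);
  return cursors;
}

async function fetchPagesInParallel(cursors, fetchPage) {
  // Every page requested simultaneously -- fast, but exactly the kind
  // of burst the API team probably isn't thrilled about.
  return Promise.all(cursors.map((cursor) => fetchPage(cursor)));
}
```

Note the sequential part doesn’t disappear: the cursors still have to be walked in order, only the expensive per-page queries get parallelized.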

Hey @xatxat, just a clarification – we increased the number of items that you can return using items_page to 500.


  • If you’re returning items from a specific board using items_page, the limit is 500 items.
  • If you’re returning items across multiple boards using the root items query, the limit is 100 items.

EDIT: apologies for any confusion; I found that some of our docs still said the limit was 100 items. I have updated them.


Hi @dipro, thanks for hopping in!

Yep, I’m aware of those limitations. Considering a board with 5000 items where I need the values of the mirror columns, would you suggest running 10 queries in parallel with 500 items each?