Rate limits are preventing my query

I am using the Monday API to extract data from a single board. The problem is that while the board has only a handful of rows, it has a large number of columns, so I keep running into rate limits.
I have tried adding limit and page arguments, but I need the query to finish in under 9 minutes if I want to run it in a cloud function. The query is very simple:
#graphql
    {
      boards (ids: ) {
        items (limit: 20, page: 1) {
          name
          column_values { title id text }
        }
      }
    }
Can anyone help? Or is it possible to lift the rate limits for this query?
FYI, I also tried replacing the text field with the JSON value object, but sadly it does not contain the full information I need for every column.

Also, following up on this: I calculated the complexity of my query and it’s a tiny 34. I don’t know why I am unable to fetch the data. This is urgent; can somebody please help?

Hi @data, welcome to our community!

Happy to take a look here.

To clarify: our rate limits are applied on a per-minute basis, so even if a single query has a low complexity, calling it in quick succession can still exceed the rate limit.
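
In the meantime, you can check what a call costs and how much budget remains by adding the complexity object to the query itself. Here is a minimal sketch (the board ID is a placeholder, and the field names are per our v2 API, so do double-check them against the docs):

#graphql
    # Ask the API what this call costs and how much budget is left.
    {
      complexity {
        before              # budget remaining before this query ran
        after               # budget remaining after this query ran
        query               # cost of this query
        reset_in_x_seconds  # seconds until the budget resets
      }
      boards (ids: 1234567) {   # placeholder board ID
        items (limit: 20, page: 1) { name }
      }
    }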

Do you mind providing the account name of the monday.com account you’re testing on? It should look something like EXAMPLE.monday.com. I can then check your account and see if there’s something up there.

Thanks!
Helen

100%, it’s uniteddwelling

I would appreciate any help, as this is fairly urgent! I need to be able to pull at least 20 items with this query, and it is just not working.

Hello @Helen, any update on this? I would love to get help as quickly as possible!

Hello @Helen, any update on this? It’s a really urgent issue. If there is somebody else you could point me to who could help, that would be great as well.

Hey there @data,

I’m so sorry this issue persists! Thanks for providing the account slug. I’ve looked it up in our records, and your account should indeed have access to 10,000,000 complexity per minute. That said, could you please provide the exact query you are currently using to pull the data? I’d love to take a closer look and see what might be causing this for you!

-Alex

Hey @data,

I wanted to keep you posted on this and explain what has been causing this behavior for you. Recently, a 1,000,000 complexity limit per query was introduced. This means that if a query with a complexity rating above 1,000,000 was called, it would fail because its complexity was too high.

We’ve now adjusted the limits to be a bit more lenient, and the current complexity limit per query is 5,000,000 (5M). We are going to post an announcement with more details soon.
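
If a single query still comes in above the per-query cap, splitting it into smaller pages keeps each individual request under the limit. Here is a sketch based on the query at the top of this thread (the board ID is a placeholder):

#graphql
    # Fetch the board in small pages; each page is a separate request
    # whose complexity is measured against the per-query cap on its own.
    {
      boards (ids: 1234567) {   # placeholder board ID
        items (limit: 5, page: 1) {
          name
          column_values { title id text }
        }
      }
    }
    # Repeat with page: 2, page: 3, ... until a page returns fewer
    # than `limit` items.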

I hope this helps!

-Alex

This is good to know, Alex. I was wondering why our integrations suddenly started failing with complexity-limit errors.

It would have been really good if monday.com had let paying customers know before reducing service levels, at the same time as increasing prices! ROI is damaged both ways.

Absolutely!

Thank you for this feedback; we’ll be sure to forward it along for internal discussion.

Hi all,

On a related note, I’m getting some confusing errors from the API. Using the complexity API, it looks like we get 10,000,000 complexity points per minute, and I can use that to throttle my updates to Monday. However, I appear to be hitting another limit and getting the following error message:

    Monday API Error Complexity budget exhausted, query cost 1020 budget remaining 28 out of 50000000 reset in 164 seconds

So, a few questions regarding this:

  • What’s this 50,000,000 limit? It appears to reset less often than every minute.
  • Is there a way to query for the status of this limit?

I have been getting the same thing since early this morning. I have tried to contact support and my enterprise account rep for insight into what changed. Support has escalated my ticket to the ‘Developer Success’ team, but their SLA is 2 days.

@Helen can you please help shed some light on what changed today? This is causing me a lot of problems with my custom integrations today.

Between the major outage this morning and now this, it has not been a good day for my internal teams using Monday.

Hi @mmoulton!

Getting an answer for you now :smiley:.

Hi @mmoulton!

Apologies, it’s been a rather busier day than anticipated.

It seems the 50 million limit you’re running into is a new limitation that applies per 10-minute window (5 times your per-minute complexity budget).

For the full details, I would definitely recommend checking out Dipro’s post here.

I hope this helps!

Thank you for the follow up @Helen.

@dipro So that I’m clear: the new limit for all account types, including enterprise accounts such as mine, is 50M complexity units per 10 minutes?

That is half of the old limit, which was 10M complexity units per minute, i.e. 100M per 10 minutes.

This is a material change to your service offerings without due notice to your customers. How does a change like this get implemented without any consideration of the impact on your users, let alone proper notice to those who would be affected?

This change is causing many of our automations to fail regularly, significantly impacting my day-to-day business operations.

I’ve escalated this problem in every way available to me. It’s disappointing that I have to make a post like this on your public forums to ask for the change to be rolled back or revised to restore the original service level we signed our contract for.

Please reach out directly so we can find a resolution to this problem and I can bring some stability back to my organization’s use of Monday.

Hi @Helen,

What’s the reason behind the 50M limit? Is Monday having performance problems internally?

It would be best if Monday simplified its limits. In some ways, I would prefer that the 10M limit were reduced to 5M and the 50M limit removed; at least we can query for our remaining complexity in the 10M case.

Hi @amarsden and @mmoulton,

Thank you both for the candid feedback. I can definitely understand where you’re coming from and empathize with your situation!

The change in rate limit resulted from our Product team’s belief that, for efficient queries, the 5 million/query limit would be enough. For instance, the same data set can be returned at a complexity of either 10 million or 10 thousand, depending on how the query is structured.

For example, a query for boards, then groups, then items has a complexity orders of magnitude higher than a query for boards, then items, then groups. A simplified explanation for this behavior: because an item can only belong to a single group, the complexity of the second shape is calculated for that one group, whereas when you query a board’s groups first, they are measured as a collection and not as a single unit.
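
To illustrate (the board ID is a placeholder, and the exact complexity numbers will vary with your board):

#graphql
    # More expensive shape: items are nested under every group, so the
    # item collection is costed per group.
    {
      boards (ids: 1234567) {   # placeholder board ID
        groups {
          items { name }
        }
      }
    }

#graphql
    # Cheaper shape: items and groups are queried as siblings of the
    # board, so each collection is only costed once.
    {
      boards (ids: 1234567) {   # placeholder board ID
        items { name }
        groups { title }
      }
    }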

Do you mind sharing an example of some of the queries you’re making that are causing things to break? I’d be more than happy to take a look and see how we can make them more efficient. One thing to keep in mind: mutations are more costly than queries, so this could potentially be the issue you’re facing. Let me know if so.

Also, I’m more than happy to submit this as feedback for our team’s consideration. We’re here to listen and to support you all!

Thank you for the continued dialog @Helen, @dipro.

There are several issues with this change, but for the purpose of this conversation we will set aside the fact that there was no notice of the change and that it fundamentally altered contractual terms.

The primary reason we are exhausting this new budget has nothing to do with overly complex queries; it is the simple fact that you charge a minimum of 100,000 units for an items_by_column_values query.

Since Monday is missing several key pieces of functionality, such as cross-board rollups or the ability to look up a value from another board (VLOOKUP), we have had to implement these features ourselves through custom integrations. By their nature, these automations rely on finding the appropriate items within a board when we don’t know the item’s ID; hence the items_by_column_values query. Multiply that by the hundreds of items our various teams manipulate each minute, and it adds up.
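
For reference, a single lookup of this shape (the board ID, column ID, and value are placeholders) is billed the 100,000-unit minimum regardless of how little it returns:

#graphql
    # One lookup by column value; board_id, column_id, and column_value
    # are placeholders. Each call like this costs the 100,000-unit
    # minimum, no matter how few items come back.
    {
      items_by_column_values (board_id: 1234567, column_id: "status", column_value: "Done") {
        id
        name
      }
    }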

I hope this helps explain why this problem cannot be solved by simply writing more “efficient” queries, as your team postulates; it is rooted in the fact that a significant tax is placed on areas of the API that are critical for integrations that actually use Monday as a Work OS.

I’ve attached a screenshot of the impact this is having on my business: one of our automations in Integromat, showing all the jobs that failed because the query budget was exhausted. These jobs are now queued to retry. The problem is that people are still working, so once budget frees up, the queued jobs race against new ones coming in from current work. This is a thundering-herd problem that doesn’t resolve until load drops as people stop working, or the retry attempts are exhausted and the jobs fail completely.

Hi @Helen,

The main concern on my end is that each mutation costs 30K, so we can only do about 333 mutations per minute. Is there any way to reduce the cost of a mutation? Here’s the GraphQL we are using:

#graphql
    mutation ($boardId: Int!, $itemId: Int!, $columnValues: JSON!) {
      change_multiple_column_values (board_id: $boardId, item_id: $itemId, column_values: $columnValues) {
        id
      }
    }

Moving an item to a different group also costs 30K:

#graphql
    mutation ($itemId: Int!, $groupId: String!) {
      move_item_to_group (item_id: $itemId, group_id: $groupId) {
        id
      }
    }

It would be great if we could do the group and item mutations in a single query without additional cost, e.g.:

#graphql
    mutation ($boardId: Int!, $itemId: Int!, $columnValues: JSON!, $groupId: String) {
      change_multiple_column_values (board_id: $boardId, item_id: $itemId, column_values: $columnValues, group_id: $groupId) {
        id
      }
    }

Hi @mmoulton and @amarsden,

Thank you both for sharing your experience. Our team is more than willing and happy to hear all of your candid thoughts and opinions!

Feel free to check out the announcement I made regarding the rollback of our recent complexity limitation change here.

Let me know if you have any questions at all!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.