Regular re-creation of the big board

I have a kind of ecommerce solution. Some of my customers have accounts in Monday. I want to upload data from my solution to my customers’ accounts in Monday. Let’s say I have a customer’s token from Monday.

I need to create a board which has 8 columns and about ~300 items.
The items might be different each time. Not all of them, but a lot.
So first I want to delete all the items, then create new ones.
The problem is that when I’m deleting or adding items, I can only process the first ~20 items before I get an error:

{"errors":[{"message":"Complexity budget exhausted, query cost 30001 budget remaining 22851 out of 1000000 reset in 47 seconds"}],"account_id":123456789}

All that I’m testing now on my free account.

Will I face the same limitation on a real Monday account? If so, what should I do in my scenario?

It depends on the kind of queries/mutations you’re performing. You will definitely face the same limitation on a real Monday account, so make sure that you resolve these issues before releasing your app. For apps the complexity budget is increased to 5M.

Adding the complexity object to your GraphQL query or mutation can give you a lot of insight into which calls are heavy on your complexity budget.
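For example, a query along these lines (the board id is a placeholder) returns the remaining budget alongside the data:

```graphql
query {
  complexity {
    before             # budget before this query ran
    after              # budget remaining after this query
    query              # cost of this query
    reset_in_x_seconds # seconds until the budget resets
  }
  boards(ids: 1234567890) {
    name
  }
}
```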

These two links will help on complexity and rate limits:

Thanks for the answer!
Another question: is my approach correct? I mean, I want to delete and create a lot of items: ~300 deletes and ~300 creates. Maybe there are some bulk operations for that, rather than deleting/creating one element at a time, or some approach that allows doing this more efficiently?

There is no bulk delete or create at the moment. You’ll have to create/delete one after the other.


Thanks for the answer. It is sad.

If I want to reload data into a board, is deleting/creating items the only way, or is there a more efficient approach that would decrease the complexity cost of my queries?

Hello @Volodymyr!

You can either delete everything and create new items, or, if the changes are updates to existing items, use the column value changing mutations instead.
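For instance, updating several columns of an existing item in a single call is cheaper than a delete plus a create. A sketch with placeholder ids and column ids:

```graphql
mutation {
  change_multiple_column_values(
    board_id: 1234567890,
    item_id: 9876543210,
    column_values: "{\"status\": {\"label\": \"Done\"}, \"text\": \"Updated from my solution\"}"
  ) {
    id
  }
}
```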

You can also delete entire groups using a mutation like this one:

mutation {
  delete_group(board_id: 1234567890, group_id: "topics") {
    id
  }
}

Hi! Thanks for the answer!

I noticed that if you delete a group, the items also disappear visually.

But if you then try to get the items via the API, you still get them. It seems the items still exist on the board, they are just hidden. Is that so?

If yes, doesn’t it lead to trouble in the future related to board limits on the number of items? I mean, at some point you would no longer be able to create new items.

Hello again,

Matias here!

Which query are you using for retrieving the items after they have been deleted?

Hello, Matias!

I get items with this query:
query request($id: Int, $itemsLimit: Int, $page: Int) {
  boards(ids: [$id]) {
    items(limit: $itemsLimit, page: $page) {
      id
    }
  }
}

The board id is the same one I used when deleting the groups.

I delete a group with this mutation:

mutation request($boardId: Int!, $groupId: String!) {
  delete_group(board_id: $boardId, group_id: $groupId) {
    id
  }
}

My use case is to fully clear the board, so first I delete all groups, then I delete all remaining items.
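To make the “fully clear the board” flow concrete, here is a minimal sketch of the item half of it. The helpers `fetchPage` and `deleteItem` are hypothetical wrappers around the items query and the delete_item mutation (names are assumptions, not monday.com API calls); only the paging and deletion logic is shown:

```javascript
// Collect all item ids page by page, then delete them one at a time
// (there is no bulk delete, so each deletion is a separate mutation).
async function clearBoardItems(fetchPage, deleteItem, itemsLimit = 25) {
  const allIds = [];
  for (let page = 1; ; page++) {
    const ids = await fetchPage(page); // array of item ids for this page
    if (ids.length === 0) break;       // empty page: nothing left
    allIds.push(...ids);
    if (ids.length < itemsLimit) break; // short page: this was the last one
  }
  for (const id of allIds) {
    await deleteItem(id); // one delete_item mutation per item
  }
  return allIds.length;
}
```

Collecting ids first, then deleting, avoids paging over a board whose contents are shifting underneath you as items disappear.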

Hello @Volodymyr!

That is odd. I can’t reproduce the issue.

Could you please send us an email to appsupport@monday.com with screenshots of this issue in the API Playground from monday, showing the query and the response?

From there, we can follow up and look for the source of the issue.

Cheers,
Matias

Hey Volodymyr,
Our app, Moneylogz, faces the same challenge.
We sync data from an external API to monday.com boards.
The initial sync can be around ~5000 items, and daily updates vary from dozens to hundreds of items each, depending on the customer’s size.

As there’s no batch item creation operation, we had to add a layer of rate limit protection to our API calls.
It looks something like this (Node.js code):

(Bear in mind that we might sync multiple boards simultaneously, so we set our minimum available complexity to 500k, and not 0)

const executeWithRateLimit = async (req, operationName, callApi) => {
  const res = await callApi();
  log({ operationName: operationName, res: JSON.stringify(res) });

  // `complexity` is the complexity object requested alongside the query/mutation
  if (res.data.complexity.after <= 500000) {
    const sleepDuration = res.data.complexity.reset_in_x_seconds + 2; // extra 2 seconds for safety
    console.log(`complexity sleep ${sleepDuration} secs`);

    await new Promise(resolve => setTimeout(resolve, sleepDuration * 1000));
  }
  return res;
};

And the actual invocation:

  await items.reduce(async (a, item) => {
    // Wait for the previous item to finish processing
    await a;
    // Process this item
    const res = await executeWithRateLimit(req, 'create_item', async () => await createItem(item));
  }, Promise.resolve());

For now it operates well in production, but it is very slow…
e.g. 6000 items at ~2 items/sec, with the sleep mechanism, take at least 1 hour to complete.

Hope this helps,
And really looking forward to a batch update API call :pray: .

Etay.

Hi @Matias.Monday !
It’s strange, but now I can’t reproduce this. Maybe I did something wrong.
Thanks!


Hi @etaytch
I implemented a solution very similar to yours.
I determine when I can make the next request and wait until then. I even added 2 seconds, as you did :slight_smile:
In my case, a board with a couple hundred groups and ~15 items inside each group takes about ~30 minutes to upload. Very similar timing to yours.
