Updates sent to monday via the GraphQL API are not always saved

Hi,

I’m updating monday using GraphQL, and some of my updates inconsistently fail to be saved, and I don’t know why.

Here’s a log of an update that I’ve made; in the column_values we can see the result from monday saying it was saved with the value “45”,

but the data in monday did not reflect this change.

See Monday change log:

When I ran the same code again today, the change was saved, and I got the same result from monday.

Here’s my graphql:

```graphql
mutation ($id: Int!, $value: JSON!, $board: Int!, $column_id: String!) {
  change_column_value(
    item_id: $id,
    column_id: $column_id,
    board_id: $board,
    value: $value
  ) {
    id
    name
    column_values(ids: [$column_id]) {
      id
      title
      value
    }
  }
}
```
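For reference, here is a minimal Python sketch of how a mutation like the one above might be posted to monday’s v2 endpoint. The endpoint URL and `Authorization` header follow monday’s GraphQL conventions; the token, board, and column IDs are placeholders, and serializing the `JSON!` value with `json.dumps` is an assumption about that scalar, not official client code:

```python
import json

API_URL = "https://api.monday.com/v2"  # monday's GraphQL endpoint

MUTATION = """
mutation ($id: Int!, $value: JSON!, $board: Int!, $column_id: String!) {
  change_column_value(item_id: $id, column_id: $column_id, board_id: $board, value: $value) {
    id
    name
    column_values(ids: [$column_id]) { id title value }
  }
}
"""

def build_request(api_token, item_id, board_id, column_id, value):
    """Return (headers, body) for a change_column_value call.

    The JSON! scalar is assumed to expect a JSON-encoded *string*,
    so the column value is serialized with json.dumps before sending.
    """
    headers = {"Authorization": api_token, "Content-Type": "application/json"}
    body = {
        "query": MUTATION,
        "variables": {
            "id": item_id,
            "board": board_id,
            "column_id": column_id,
            "value": json.dumps(value),
        },
    }
    return headers, body

# Usage (placeholders, not real IDs):
# headers, body = build_request("MY_TOKEN", 3868224425, 123456, "numbers", "45")
# requests.post(API_URL, headers=headers, json=body)
```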

95% of these updates are saved, but I’m hoping to get to 100%. Can you investigate?

Hello @noam.honig,

I can’t find that specific case in the logs, but I see a few similar ones for that board that show that you are reaching your rate limit.

If you reach the rate limit, no calls will be made until the complexity points quota resets (as explained in the link above).

That link also explains how to avoid hitting the limit.

Since I cannot see that specific call, I am not 100% sure this is the problem, but because I see a few calls with the same mutation for the same board hitting that issue, it is likely the source.

Please let me know if that helps!

Cheers,
Matias

Hi @Matias.Monday, Thanks for getting back to me

I see the multiple rate limit errors, but as I’ve tested, they are not for this specific update, where I’ve included the result that I got back from monday that confirms the update and even shows the updated value (as indicated in the screenshot above).

Do you have any way of tracing it?

Since you’ve mentioned rate limits, I have an additional question: I’ve followed the article and added the complexity info to my query, and it seems that this very simple one-item, one-value mutation (detailed above) costs 30,015 out of my 1M rate limit.
In the article it says that “specific objects (like updates) have a default complexity of 25.”

Can you give me a tip on how to optimize this single row, single column mutation to cost less?

Hello again @noam.honig,

Oh! My bad! I did not realize that where it says “result” it is coming directly from our end.

In that case, I can take a look if you give me a timestamp of the moment when you use the mutation and it fails. I would need the location (country), date, hour, minute, and the mutation itself (with the IDs so I can check using them). It should be no older than a week.

Regarding the mutation: you are also querying for information (id, name, and column values), and inside the column values you are getting id, title, and value, which means you are using nesting here. I don’t think you can optimize it further if you want to get that information in the response.

Let me know about the timestamp with the requested information please so we can take a look!

Cheers,
Matias

Hi again @Matias.Monday,

You can find the timestamp, the values and the mutation at the start of this thread.

I’m located in Israel, but the server is a Heroku server located in Ireland (I think) with the timezone of San Francisco (I think).

If you can trace it by the item id - it’s 3868224425 - let me know if there is more information I can provide to help.

As for the cost - even if I remove the result values (which I don’t think I should, since I want to verify the update), it still costs 30,000 - is that the minimal cost for an update?

Here’s my graphql:

```graphql
mutation ($id: Int!, $value: JSON!, $board: Int!, $column_id: String!) {
  change_column_value(
    item_id: $id,
    column_id: $column_id,
    board_id: $board,
    value: $value
  ) {
    id
  }

  complexity {
    query
  }
}
```

And here is the result I got:

```json
{
  "data": {
    "change_column_value": {
      "id": "3881749608"
    },
    "complexity": {
      "query": 30001
    }
  },
  "account_id": 11603941
}
```

How can I get it down to 25, as detailed in the docs?

Hello again @noam.honig,

You are correct. I believed the nesting in the response would have increased the complexity points usage, but I was incorrect. There is nothing you can do there to lower the complexity usage.

You will have to monitor the usage with complexity { query before after reset_in_x_seconds } and make your app avoid hitting the limit based on that information.
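The throttling described above can be sketched in a few lines of Python. The field names (`after`, `reset_in_x_seconds`) come from the complexity object mentioned in this thread; the 30,000-point cost per mutation is the figure observed earlier, and the budget logic itself is an assumption, not an official client:

```python
import time

# Observed cost of a change_column_value mutation in this thread.
MUTATION_COST = 30_000

def seconds_to_wait(complexity):
    """Given the complexity object from the last response, return how long
    to sleep so the next mutation will not exceed the quota."""
    remaining = complexity.get("after", 0)
    if remaining >= MUTATION_COST:
        return 0.0  # enough budget left for another call
    # Not enough points left: wait until the quota resets.
    return float(complexity.get("reset_in_x_seconds", 60))

def throttled_call(send, last_complexity):
    """Sleep if needed, then invoke `send` (a callable performing the mutation)."""
    delay = seconds_to_wait(last_complexity)
    if delay > 0:
        time.sleep(delay)
    return send()
```

The idea is simply to read the remaining budget from each response and pause for `reset_in_x_seconds` whenever another 30K-point call would overrun it.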

About the error you got: that is strange; I cannot find any errors at that time in our logs.

Would you be able to provide a new timestamp when you get this error again? Without any errors being shown in our logs, it is hard to check the source of the issue.

If you can, please send us another timestamp with the full query, full response from our server, date, hour, minute and location and we will see if it appears in the logs.

Please send this to appsupport@monday.com so that we can follow this and (if needed) create a report about it for further investigation.

Cheers,
Matias

Hi Matias,

Thanks for getting back to me.

Regarding the update - there wasn’t an error - that’s what confused me - I got a confirmation from Monday that the value was updated, and the result reflected that - but the data eventually didn’t reflect that update.

Next time it happens, I’ll let you know - but it’s a hard issue to pin down.

As for the complexity points, am I correct in understanding that updating a column value will cost at least 30K points per update? Does that mean I can make only 33 updates per minute?
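(For reference, the arithmetic behind that 33, using the 1M-per-minute quota and the 30K-per-mutation cost from earlier in this thread:)

```python
QUOTA_PER_MINUTE = 1_000_000   # complexity budget mentioned in this thread
COST_PER_MUTATION = 30_000     # observed cost of change_column_value

updates_per_minute = QUOTA_PER_MINUTE // COST_PER_MUTATION
print(updates_per_minute)  # 33
```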

That seems correct @noam.honig,

We do not have a list of complexity points usage per mutation/query, but I just tested this and it looks like changing a column value using

  • change_simple_column_value
  • change_column_value
  • change_multiple_column_values

uses at least 30k complexity points.

Cheers,
Matias