Python async await - Internal Server Error

It was suggested that I look into Python's async/await functionality to help speed up a tool I am building that makes thousands of API requests. I understand I will likely run into some limits that monday.com sets, but my example below only makes 41 small POST requests, so I shouldn't be hitting any limitations.

Below I have three functions. Not explicitly shown in the main function, I have a list of tuples containing (board id, column name, column type, default values) - col_func_vars. The default values are used when making status columns. Because I am still new to async/await, I have two different methods: one runs the requests concurrently ("parallelism" below), the other sequentially. Running the code with the sequential method, everything works and it takes ~23 seconds. Running the code concurrently, most of the columns are created and it takes ~10 seconds, but then I run into a server error:

aiohttp.client_exceptions.ClientResponseError: 500, message='Internal Server Error', url=URL('https://api.monday.com/v2')

Post Request Async

async def post_request(data, url=apiUrl, headers=headers, timeout=timeout):
    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(url=url, headers=headers, data=json.dumps(data), timeout=timeout) as response:
                response.raise_for_status()
                return await response.json(), response.status
    # Note: the request is made with aiohttp, so the matching exceptions come
    # from aiohttp/asyncio; requests.exceptions handlers would never fire here.
    except aiohttp.ClientResponseError as errh:
        print("HTTP Error:", errh)
        return None
    except asyncio.TimeoutError as errrt:
        print("Timeout Error:", errrt)
        return None
    except aiohttp.ClientConnectionError as conerr:
        print("Connection Error:", conerr)
        return None
    except aiohttp.ClientError as errex:
        print("Error Request:", errex)
        return None
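One refinement worth considering: reuse a single ClientSession (one connection pool) across all the requests instead of opening a new session per call. A minimal sketch, assuming placeholder URL, headers, and timeout values (the real ones live elsewhere in the tool):

```python
import asyncio
import json

import aiohttp

# Placeholder values; the real apiUrl/headers/timeout are defined elsewhere.
API_URL = "https://api.monday.com/v2"
HEADERS = {"Authorization": "YOUR_TOKEN", "Content-Type": "application/json"}

async def post_request(session, data, url=API_URL, timeout=30):
    """Send one POST through a session shared by the whole batch."""
    try:
        async with session.post(url, data=json.dumps(data),
                                timeout=aiohttp.ClientTimeout(total=timeout)) as response:
            response.raise_for_status()
            return await response.json(), response.status
    except aiohttp.ClientResponseError as err:    # non-2xx status (e.g. 500)
        print("HTTP Error:", err)
    except asyncio.TimeoutError as err:           # total timeout exceeded
        print("Timeout Error:", err)
    except aiohttp.ClientConnectionError as err:  # DNS/connect failures
        print("Connection Error:", err)
    except aiohttp.ClientError as err:            # anything else from aiohttp
        print("Error Request:", err)
    return None

async def run_all(payloads):
    # One ClientSession == one connection pool reused by every request.
    async with aiohttp.ClientSession(headers=HEADERS) as session:
        return await asyncio.gather(*(post_request(session, p) for p in payloads))
```

This avoids the per-request TCP/TLS setup cost of creating a fresh session inside every call.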

Create Column Data

def create_col_payload(board_id, col_name, column_type, default_values=None):
    if default_values:
        vals_dict = {}
        for cnt, vals in enumerate(default_values):
            # label index rules:
            # 5 == status default (empty)
            # 0-19, 101-110, 151-160
            if cnt == 5:
                cnt += 1
            if cnt > 19:
                hold_val = 81 # 101 - 20
                cnt += hold_val
            if cnt > 110:
                hold_val = 40 # 151 - 111
                cnt += hold_val
            if cnt > 160:
                print('TOO MANY LABELS')
                sys.exit()
            
            vals_dict[cnt] = vals
        status_values = {"labels": vals_dict}
    else:
        status_values = ''

    query = """
    mutation ($boardId: ID!, $titleName: String!, $columnType: ColumnType!, $defaultValues: JSON) {
        create_column(
            board_id: $boardId
            title: $titleName
            column_type: $columnType
            defaults: $defaultValues
        ) {
            id
            title
        }
    }
    """

    variables = {
        'boardId': board_id,
        'titleName': col_name,
        'columnType': column_type,
        'defaultValues': json.dumps(status_values, separators=(',', ':'))
    }

    data = {
        'query': query,
        'variables': variables
    }

    return data
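One detail worth noting in the payload above: the defaults value is itself a JSON-encoded string nested inside the variables dict, which the HTTP layer then encodes again. A small illustration with hypothetical labels:

```python
import json

# Hypothetical labels, just to show the shape of the serialized value.
status_values = {"labels": {0: "Not started", 1: "In progress", 2: "Done"}}

# json.dumps stringifies the integer keys, and the result is itself a string
# that gets nested inside `variables` (the HTTP layer JSON-encodes it again).
defaults = json.dumps(status_values, separators=(',', ':'))
print(defaults)  # {"labels":{"0":"Not started","1":"In progress","2":"Done"}}
```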

Main

async def create_main():
    ...

    ### Parallelism
    tasks = [asyncio.create_task(post_request(create_col_payload(*vars))) for vars in col_func_vars]
    await asyncio.gather(*tasks)


    ### Sequence
    for vars in col_func_vars:
        await post_request(create_col_payload(*vars))

Run Code

if __name__ == "__main__":
    asyncio.run(create_main())

EDITS
After further investigation, I found one of the columns that was returning the server error (there are 9 columns in total that return it). I tested it both in the API playground and by singling out that column in my list of tuples; it works in both cases. So the only other reasonable explanation is a limitation I am hitting.

EDITS 2
After reading through some additional monday.com posts about sending multiple requests at the same time, I found that batching works. I used asyncio.Semaphore(x), where x is an integer. This method took ~16 seconds. I'd still like to understand why/what this limitation is that I am hitting.

sem = asyncio.Semaphore(5)
async def post_request(data, url=apiUrl, headers=headers, timeout=timeout):
    try:
        async with sem:
            async with aiohttp.ClientSession() as session:
    ...
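For anyone landing here later, a self-contained sketch of the semaphore-bounded pattern (the payload list, headers, and the limit of 5 are placeholders; return_exceptions=True is an optional extra so one failed request does not cancel the rest of the batch):

```python
import asyncio
import json

import aiohttp

API_URL = "https://api.monday.com/v2"  # placeholder
MAX_CONCURRENT = 5                     # at most 5 requests in flight at once

async def post_request(session, sem, data):
    # Acquire the semaphore before sending; it is released when the request
    # finishes, so no more than MAX_CONCURRENT requests run at a time.
    async with sem:
        async with session.post(API_URL, data=json.dumps(data)) as response:
            response.raise_for_status()
            return await response.json()

async def run_batch(payloads, headers):
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    async with aiohttp.ClientSession(headers=headers) as session:
        return await asyncio.gather(
            *(post_request(session, sem, p) for p in payloads),
            return_exceptions=True,  # a single 500 no longer aborts the batch
        )
```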

Hello there @Binx96,

Would you be able to please fill out this form so our team can look into this? Please add as much detail about your case as possible: the error messages you are getting and, if you can, a timestamp (date, hour, minute) along with the exact queries/mutations you are using (including IDs and values).

Looking forward to hearing from you via the form!

Cheers,
Matias

Thanks @Matias.Monday. I am already in contact with the support team regarding similar limitation issues. Hopefully we can get them resolved soon.


Happy to help @Binx96, we will do our best!

After a week of working with the support team, it seems the root cause is a “race condition”. For others: async/await processes requests concurrently, and when one request runs too close in time to another, the result can be incorrect.

An example: we want to create two items, so we POST the data to monday's database. The first request goes through and updates the database, but the second request also updates the database at nearly the same moment, so we end up with two conflicting versions of the database. The item creations are happening too fast for the database to update and refresh between them.

The easiest solution would be to add some “sleep” behavior between requests. Unfortunately, depending on the data size, this can drastically increase your processing time.
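A middle ground, not from this thread but a common pattern: instead of sleeping before every request, retry only the requests that fail, with exponential backoff plus jitter. A sketch, where send stands in for the actual POST coroutine:

```python
import asyncio
import random

async def with_retries(send, payload, attempts=4, base_delay=0.5):
    """Await send(payload), retrying failures with exponential backoff.

    Only failing requests pay the waiting cost, unlike a global sleep.
    """
    for attempt in range(attempts):
        try:
            return await send(payload)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            # 0.5s, 1s, 2s, ... plus jitter so retries do not collide again
            await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Combined with a semaphore, this keeps the happy path fast while smoothing over transient 500s.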