One anecdote: I was working on a feature for an internal web app where the user could upload a 60K-line CSV file; the file was saved to S3, and a Lambda then loaded it into a Postgres database.
The naive way to do it is to submit the file to the API, have the API save it to S3, and then do bulk INSERTs.
Claude got the first part right: create a pre-signed S3 URL, send it to the browser, and have the browser upload the file directly to S3.
But it got the second part wrong: a bulk INSERT after fetching the file from S3.
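In real code that first part is usually one SDK call (e.g. boto3's `generate_presigned_url`), but the signing itself is just local HMAC work. As a sketch of what a pre-signed PUT URL contains, here is a hand-rolled SigV4 version using only the standard library; the bucket, key, and credentials are made-up, and the timestamp is passed in so the output is deterministic:

```python
import hashlib
import hmac
from urllib.parse import quote

def presign_put_url(bucket, key, access_key, secret_key,
                    region="us-east-1", expires=3600,
                    timestamp="20240101T000000Z"):
    """Build a SigV4 pre-signed PUT URL for S3 without an SDK.

    Real code would use boto3's generate_presigned_url and the
    current UTC time; the fixed timestamp here keeps the example
    reproducible.
    """
    host = f"{bucket}.s3.{region}.amazonaws.com"
    datestamp = timestamp[:8]
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": timestamp,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Query parameters, sorted and percent-encoded per the SigV4 spec.
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    # Canonical request: method, path, query, headers, signed-header
    # list, and payload hash (UNSIGNED-PAYLOAD for presigned uploads).
    canonical_request = "\n".join([
        "PUT", quote(f"/{key}"), canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", timestamp, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: secret -> date -> region -> service.
    signing = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        signing = _hmac(signing, part)
    signature = hmac.new(signing, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"
```

Anyone holding that URL can PUT the file straight to S3 until it expires, so the API server never touches the upload bytes.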
The correct way was to use the AWS extension that lets Postgres import the file from S3 directly into a table. The difference is 40 minutes vs. 2 minutes; and of course Lambda times out at 15.
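On RDS/Aurora Postgres that extension is `aws_s3`, and the whole import is a single server-side SQL call. A minimal sketch of the statement the Lambda would issue (the table, bucket, and key names are made up, and real code would run this through a driver like psycopg2 with proper quoting):

```python
def build_import_sql(table, bucket, key, region):
    """Build the aws_s3.table_import_from_s3 call that loads a CSV
    from S3 straight into a table, server-side -- no row-by-row
    INSERTs and no CSV bytes passing through the Lambda."""
    return (
        "SELECT aws_s3.table_import_from_s3("
        f"'{table}', '', '(FORMAT csv, HEADER true)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'))"
    )
```

Postgres itself pulls the object from S3 and runs the equivalent of a COPY, which is why it finishes in minutes instead of grinding through tens of thousands of INSERTs over the wire.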
If it took 40 minutes to insert a 60K-line CSV into native Postgres using bulk INSERTs, there is something seriously wrong with the code.
Isn’t that what I just said?