Was trying to use Copilot (ChatGPT-based?) to automate MDX remediation. The idea was to feed it a text file, have it fix the MDX where it could, and get back a file of corrected records.
Used the same, identical file of 61 records for two days. Got back counts of 35, 38, 43, 65, 64, 51, etc., on various iterations. When it eventually counted the records correctly after one particularly aggravating run, I asked why it had counted 65, 64, and 38 before finally getting 61.
The answer? "I guessed." I kid you not.
Switched my ChatGPT account and apparently it quit guessing so much.
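The takeaway for anyone trying the same kind of remediation pipeline: don't ask the model to count; verify the record count yourself before and after. A minimal sketch, assuming one record per line (filenames are hypothetical):

```python
def count_records(path):
    """Count non-empty lines in a newline-delimited record file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# Hypothetical usage: make sure the model didn't drop or invent records.
# before = count_records("mdx_input.txt")
# after = count_records("mdx_fixed.txt")
# assert before == after, f"record count changed: {before} -> {after}"
```

A cheap deterministic check like this catches the miscounts immediately, instead of relying on the model's own tally.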