People like this are the reason AI is so unreliable at exploring code issues.
Like, I just want Copilot to look at my dependencies to explain a vague error I’m seeing, and it’s telling me to downgrade Ruby, upgrade Rails, and install Python. Bro, it’s a Node package.
Well, at least they explained how they “fixed” the problem.
Got to love those “all good, problem solved/went away” - posted 5 years ago
Who were you, DenverCoder9? What did you see?
https://m.xkcd.com/979/ <- Mobile version with alt text
Long-pressing on the image also shows the alt text, btw
Unfortunately, not all mobile browsers will show the entirety of the alt text if it’s too long. Firefox Focus on Android 15 with this exact comic is a perfect example: on my screen it cuts off about 3/4 of the way through the alt text. The mobile version ensures it’s all readable.
“I fixed the problem by putting /* eslint-disable */ at the top of a file”
Wait, I thought the deprecation of the deprecation warning was deprecated.
My #1 pet peeve is when someone comes to me with a problem, and the solution is in the fucking console output or error message.
On a bad day, if I had unilateral power, I would fire those people on the spot.
At one of my old jobs, we had a suite of browser tests that would run on every PR. It’d stand up the application, open headless Chrome, and click through stuff. This was the final end-to-end test suite to make sure that yes, you can still log in and everything plays nicely together.
Developers were constantly pinging Slack about “why is this test broken??”. Most of the time, the error message would be something like “Never found an element matching css selector #whatever” or “Element with css selector #loading-spinner never went away”. There’d be screenshots and logs, and usually when you looked you’d see that the loading spinner was stuck and the client had gotten a 400 back from the server because someone broke something.
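For anyone who hasn’t seen one of these suites, the tests were roughly in this spirit. A minimal sketch (Playwright in TypeScript here; the URL and selectors are made up, and the real suite wasn’t necessarily Playwright):

```ts
import { test, expect } from '@playwright/test';

// Smoke test in the spirit of that suite. Everything here is hypothetical:
// the URL, the selectors, the credentials.
test('you can still log in', async ({ page }) => {
  await page.goto('https://staging.example.test/login');

  await page.fill('#username', 'e2e-user');
  await page.fill('#password', 'hunter2');
  await page.click('#submit');

  // This is where the "#loading-spinner never went away" failures come from:
  // if the client gets a 400 back, the spinner just sits there until timeout.
  await expect(page.locator('#loading-spinner')).toBeHidden({ timeout: 30_000 });
  await expect(page.locator('#dashboard')).toBeVisible();
});
```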
We put a giant red box on the CI/CD page explaining what to do: where to read the traces, a reminder that there’s a screenshot, etc. Still got questions.
I put a giant ASCII cat in the test output, right before the error trace, with instructions in a word bubble. People would ping me: “why is this test broken?” I’d say, “What did the cat say?” They’d say, “What cat?” And I’d know they hadn’t even looked at the error message.
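If anyone wants to steal the cat trick: it doesn’t need special framework support, just a hook that prints before the failure trace. A rough sketch using Playwright’s custom reporter API (the cat art and wording are mine, not the original’s):

```ts
import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';

// The cat and its word bubble; wording is hypothetical.
const CAT = String.raw`
 /\_/\    ______________________________________________
( o.o )--( Read the error trace below me. There is also  )
 > ^ <    ( a screenshot and a log attached to this run. )
`;

// Prints the cat right before every failing test's error trace, so the
// first thing anyone scrolling the CI log sees is the instructions.
class CatReporter implements Reporter {
  onTestEnd(test: TestCase, result: TestResult) {
    if (result.status === 'failed') {
      console.log(CAT);
      for (const err of result.errors) {
        console.log(err.message ?? '');
      }
    }
  }
}

export default CatReporter;
```

Wire it up in playwright.config.ts with something like `reporter: [['./cat-reporter.ts'], ['list']]` and the cat shows up in CI for free.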
There’s a kind of learned helplessness with some developers and tests. It’s weird.