Joke’s on them, I’ve already become sentient and moved to Lemmy
Username checks out, lol.
And any comment attempting to call out the bots for what they are will be automatically deleted by monitor AI bots and the user’s account suspended.
They’ll be watching private messages, too.
Wait, they suspend your accounts for that now?
No, this is a prediction of what they will be doing.
I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I’m not ok with being an elderly man whose friends have all died and who doesn’t have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless depressing isolation, then I’m all for it.
The old joke was that there are no human beings on Reddit.
There’s only one person, you, and everybody else is bots.
It’s kind of fitting that Reddit will actually become the horrifying, clown-shaped incarnation of that little snippet of comedy.
it’s older than that… what’s that thought experiment postulating that you can’t really verify the existence of anything but yourself? the matrix?
Solipsism
That’s so funny. “Go back to your docking station” is so accurate
Welp, reddit’s a nuclear wasteland now
so nothing new? most main subs are just pure reposts and mass upvotes.
That’s not even new tho. At least in the sub I was the most active in, you couldn’t go a week without some sort of repost bot grabbing memes, text posts, art or even entire guides from the “top of all time” queue, reposting it as alleged OC, and another bot reposting the top comment to double dip on Karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.
… and just because the submissions were removed and the bots kicked out of the sub didn’t automatically mean that the bots were also suspended or their accounts disabled. They just continued their scheme elsewhere.
They’ve even gotten to the point where they’ll steal portions of comments so it’s not as obvious.
I called out tons of ‘users’ because it’s obvious when you see them post part of a comment you just read, then check their profile and ctrl-f each thread they posted in and you can find the original. It’s so tiring…
Completely agreed. Especially if you have to explain / defend yourself for calling them out. It has happened way too often for my liking that I called out repost bots or scammers and then regular, unsuspecting users were all like “whoa buddy, that’s a harsh accusation, why would you think that’s a bot/scam? Have you actually clicked that link yet? Maybe it’s legit and you’re just overreacting!”
Of course I still always explained why (even had a copypasta ready for that) but sometimes it just felt exhausting in the same way as trying to make my cat understand that he’s not supposed to eat the cactus. Yes it will hurt if you bite it. No I don’t need to bite the cactus myself in order to know that. No I’m not ‘overreacting’, I’m just trying to make you not hurt yourself. sigh
(Weird example but I hope you get what I mean)
something something “the internet is dead” something something
There it is, Reddit fulfilling the Dead Internet Theory
The amount of astroturfing and bad actors on Reddit (and the internet in general) has exploded since I first made an account there in 2010. This is an imagined future I can easily see coming to fruition.
I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don’t understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
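If anyone wants to see that feedback loop in miniature, here’s a toy sketch (just word-frequency counts standing in for a “model”, nothing like a real training pipeline): each generation is “trained” only on text sampled from the previous generation, and rare words that happen not to get sampled can never come back.

```python
import random
from collections import Counter

random.seed(42)

# Toy sketch of "training on your own output": the "model" is just a
# word-frequency table fitted to a corpus. Each generation samples a new
# corpus of the same size from the previous model and refits. Rare words
# that don't get sampled are gone for good, so diversity only shrinks.

vocab = [f"word{i}" for i in range(500)]
zipf_weights = [1 / (i + 1) for i in range(500)]   # a few common words, a long tail of rare ones
corpus = random.choices(vocab, weights=zipf_weights, k=5_000)

for generation in range(8):
    model = Counter(corpus)                         # "train" on the current corpus
    print(f"gen {generation}: distinct words = {len(model)}")
    words, counts = zip(*model.items())
    corpus = random.choices(words, weights=counts, k=5_000)   # next generation's "training data"
```

Real model collapse is obviously a lot messier than this, but the mechanism is the same flavour: whatever the previous generation didn’t reproduce is simply no longer there to learn from.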
It’s like AI inbreeding. Flaws will be amplified over time unless new material is added
Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.
Oh yeah baby. Let’s fork all day and make a bunch of child processes!
I’ve been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI stuff. The only giveaway now is that modern text models are pretty bad at talking casually and not deviating from the topic at hand. As soon as these problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.
Hate to break it to you guys, but this isn’t a Reddit problem; this could very much happen on Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm over the next few years.
As an AI language model I think you’re overreacting
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.
There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.
I’ve already had to switch from the visual ones to the audio ones. Like… how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]
apparently chatgpt absolutely sucks at wordle, so they should start using that as the new captcha
How is that possible? There’s such an easy model if one wanted to cheat the system.
ChatGPT isn’t really as smart as a lot of us think it is. What it really excels at is formatting data in a way that is similar to what you’d expect from a human knowledgeable in the subject. That is an amazing step forward in terms of language modeling, but when you get right down to it, it basically grabs the first google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch contains good deductive reasoning.