They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point, writes Guardian columnist Arwa Mahdawi
I don’t use A.I. because I’ve had nothing but negative interactions with it. Customer service bots that fail to give adequate responses, unhelpful and incorrect search result summaries, and “art” that looks like shit haven’t made me want to sign up for ChatGPT or Gemini. For most people this isn’t a moral stance; it’s just that the product isn’t worth paying for. Stop framing people who don’t use A.I. as luddites with an ax to grind just because tech bros spent billions on a product that isn’t good yet.
It’s fair to say that the environmental and ethical concerns are significant, and I wouldn’t look down on anyone refusing to use AI for those reasons. I don’t look down on vegetarians or vegans either - I don’t have to agree with someone’s moral stance or choices to respect them.
But you’re right, LLMs are full of crap.
LLMs definitely are full of crap. But that isn’t the point of them (even if some corporations make it seem like it is)
They are supposed to be used for text generation. And you are supposed to read through everything afterwards to correct any hallucinations.
It can’t work on its own, and it makes mistakes about 30% of the time.
But there are use cases where that isn’t a problem. Use them as inspiration for creative writing prompts for example. They are crazy good at that.
Truth is definitely a bit of a blind spot for LLMs.
Wait till you see the price of a burger in another five years.
Yea, it’s often really fucking cheap for the value, just like streaming services to an extent
Customer service AI sucks, I think we can all agree to this
But if you really believe that ChatGPT and Gemini are mainly for generating art, then you’re completely wrong
You only notice AI-generated content when it’s bad/obvious, but you’d never notice the AI-generated content that’s so good it’s indistinguishable from something generated by a human.
I don’t know what percentage of the “good” content we see is AI-generated, but it’s probably more than 0 and will probably go up over time.
Maybe, but that doesn’t change the fact that it was trained on stolen artwork and is being used to put artists out of work. I think that, and the environmental effect, are better arguments against AI than some subjective statement about whether or not it’s good.
Shit take. The more AI-made media there is online, the harder it is for the companies developing AI to improve on previous models.
It won’t be indistinguishable from media made with human effort. Unless you enjoy wasting your time on cheap, uninteresting man-made slop, you won’t be fooled by cheap, uninteresting and untrue AI-made slop.
deleted by creator
They all use each other’s data to improve. That’s federated learning!
In a way, it’s good because it encourages more competition
I was talking about AI training on AI output. AI requires genuine data; a feedback loop makes models regress. See how AI makes yellow-tinted pictures because of the Ghibli AI trend.
Sure, but that mainly applies when it’s the same model training on its own output. If a model trains on a different one’s output, it might pick up some good features from it, but the bad sides as well.
AI requires genuine data, period. Go read about it instead of spewing nonsense.
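The feedback-loop regression described above is often called model collapse. Here is a minimal toy sketch of the idea, not any real training pipeline: a fitted normal distribution stands in for a “model”, and each generation is refit only on samples drawn from the previous generation’s fit, with no fresh real data.

```python
import random
import statistics

def collapse_demo(generations=300, n=5, seed=0):
    # Generation 0 is the "real data" distribution: a standard normal.
    # Every later generation is "trained" (fitted) only on a handful of
    # synthetic samples drawn from the previous generation's fit.
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    history = [sigma]
    for _ in range(generations):
        synthetic = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(synthetic)
        sigma = statistics.stdev(synthetic)  # refit on synthetic data only
        history.append(sigma)
    return history

spread = collapse_demo()
print(f"initial spread: {spread[0]:.3f}, after 300 generations: {spread[-1]:.3g}")
```

With no genuine data entering the loop, the fitted spread drifts toward zero: the distribution loses diversity, which is the regression the comment describes.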
Even if they weren’t trained on the same data, the result ends up similar.
Training inferior models on a superior model’s output can narrow the gap between the two. It won’t be optimal by any means, and you might fuck up its future learning, but it works to an extent.
The data you feed it should be good quality though
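Training a weaker model on a stronger model’s outputs, as described above, is usually called knowledge distillation. A minimal sketch under made-up assumptions: the “teacher” is a hypothetical one-parameter classifier with weight 2.0, and the “student” fits the teacher’s soft output probabilities by gradient descent on cross-entropy.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "superior" teacher: a 1-parameter classifier with weight 2.0.
TEACHER_W = 2.0
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
soft_targets = [sigmoid(TEACHER_W * x) for x in xs]  # teacher's probabilities

# "Inferior" student starts untrained and fits the teacher's soft outputs.
# Gradient of cross-entropy for a sigmoid model: sum((prediction - target) * x).
w = 0.0
lr = 0.1
for _ in range(500):
    grad = sum((sigmoid(w * x) - p) * x for x, p in zip(xs, soft_targets))
    w -= lr * grad

print(f"student weight after distillation: {w:.3f}")  # approaches the teacher's 2.0
```

The student recovers the teacher’s behavior from its outputs alone, but it can only ever approach the teacher, never exceed it, which is why this narrows the gap rather than closing it.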