They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point, writes Guardian columnist Arwa Mahdawi
I was talking about AI training on AI output. AI needs genuine data; a feedback loop makes models regress. Look at how AI images have drifted yellow-tinted since the Ghibli-style trend.
Sure, but that mainly applies when it's the same model training on its own output. If a model trains on a different model's output, it might pick up some of the good features, but it inherits the bad ones as well.
Even when they weren't trained on the same data, they end up similar.
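To make the feedback-loop point concrete, here's a toy sketch in plain Python with NumPy. The Gaussian "model", sample sizes and generation count are made-up assumptions for illustration, not any real training pipeline:

```python
# Toy illustration of model collapse: a "model" repeatedly refit to its
# own generated samples tends to lose variance over generations.
# All numbers here are arbitrary, chosen only to make the effect visible.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "genuine" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 201):
    # "Train" the model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    # "Generate" the next training set purely from the model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f} std={sigma:.3f}")

# The printed std tends to drift downward over generations (exact
# trajectory depends on the seed): rare values stop being generated,
# so later generations never see them again.
```

Rare values drop out first, so the distribution narrows with each generation; that's the regression the feedback loop causes.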
Training an inferior model on a superior model's output can narrow the gap between the two. It won't be optimal by any means, and you might mess up its future learning, but it works to an extent.
The data you feed it should be good quality though
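On the weaker-model-learns-from-a-stronger-one point, here's a minimal sketch of the idea using a toy scikit-learn setup. The synthetic dataset, the RandomForest "teacher" and the LogisticRegression "student" are arbitrary placeholders, not anyone's actual models:

```python
# Minimal sketch: train a weaker "student" model only on a stronger
# "teacher" model's outputs (distillation-style), never on real labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# "Superior" model trained on genuine labels.
teacher = RandomForestClassifier(n_estimators=200, random_state=0)
teacher.fit(X_train, y_train)

# "Inferior" model trained only on the teacher's predictions.
student = LogisticRegression(max_iter=1000)
student.fit(X_train, teacher.predict(X_train))

print("teacher accuracy:", teacher.score(X_test, y_test))
print("student accuracy:", student.score(X_test, y_test))

# The student usually closes some of the gap, but it inherits the
# teacher's mistakes along with its strengths.
```

The student never sees the genuine labels, which is why the quality of the teacher's output matters so much.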
They all use each other's outputs to improve anyway. That's basically distillation in both directions!
In a way, that's good, because it drives more competition.
AI requires genuine data, period. Go read about it instead of spewing nonsense.