The only reason you’d read AI content is that it might be fun to see how good or bad it is. But reading it to learn more about a subject is a no-go. The amount of proof that it just guesses or makes things up has ruined any trust I have in it.
AIs are trained to fool humans into thinking a human wrote the text, not to actually write accurate/good/novel content. I think that’s called an alignment issue in AI safety terms.