Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical ...
I suppose they could add the source URL for the information, so you can verify correctness.
But then I don't get why we need a lying AI if we can get the URL in the first place. At that point it would work just like any other good search engine.
Sorry if I sound salty, but I still don't get why companies put fake AI engines everywhere.