alphacyberranger@lemmy.world to Programmer Humor@programming.dev · 1 year ago
It's not wrong though (image)
GBU_28@lemm.ee · 1 year ago
Again, you aren't seeing this because these models are being developed for private-enterprise purposes. Regarding deep machine-code analysis, sure, that's going to take work, but the whole hallucination thing is an off-the-shelf, rookie problem these days.

Rikudou_Sage@lemmings.world · 1 year ago
It's not, though. Hallucinations are inherent to the technology; they're not a matter of training. Good training can greatly reduce the likelihood, but it cannot eliminate them.

GBU_28@lemm.ee · 1 year ago
Training doesn't solve hallucination. I never said it did.
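The "inherent to the technology" point can be made concrete: an LLM samples each next token from a softmax distribution over its vocabulary, and softmax is strictly positive, so a factually wrong continuation always keeps some probability mass; training can shrink that mass but cannot zero it out. Below is a minimal Python sketch with made-up logits (the candidate tokens and numbers are illustrative, not from any real model), assuming plain softmax sampling.

```python
import math
import random

# Minimal sketch (hypothetical numbers): a language model scores candidate
# next tokens with logits, then samples from a softmax distribution.
# softmax is strictly positive, so every candidate -- including a factually
# wrong one -- keeps nonzero probability; better training can shrink that
# mass but never zero it out.

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy continuations of "The capital of Australia is ...".
# "Canberra" is correct; the others are plausible-sounding hallucinations.
candidates = ["Canberra", "Sydney", "Melbourne"]
logits_ok = [6.0, 2.0, 1.0]      # decently trained model (made-up)
logits_great = [12.0, 2.0, 1.0]  # much better trained model (made-up)

for name, logits in [("ok", logits_ok), ("great", logits_great)]:
    probs = softmax(logits)
    p_wrong = sum(p for tok, p in zip(candidates, probs) if tok != "Canberra")
    print(f"{name}: P(hallucination) = {p_wrong:.6f}")  # smaller, never 0

# Sampling a million tokens from the better model still yields some wrong
# answers in expectation -- reduced likelihood, not elimination.
random.seed(0)
probs = softmax(logits_great)
draws = random.choices(candidates, weights=probs, k=1_000_000)
print("wrong draws:", sum(tok != "Canberra" for tok in draws))
```

Pushing the correct token's logit higher makes the wrong-answer mass arbitrarily small but never exactly zero, which is the sense in which training can "greatly reduce the likelihood" without solving the problem.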