• Veraticus@lib.lgbt (OP) · 1 year ago

    Or you’ve simply misunderstood what I’ve said despite your two decades of experience and education.

    If you train a model on a bad dataset, will it give you correct data?

    If you ask a model a question it doesn’t have enough data to answer confidently, will it still confidently give you a correct answer?

    And, more importantly, is it trained to offer CORRECT data, or is it trained to return plausible words regardless of whether that data is correct?
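
    To make that concrete, here’s a minimal sketch (plain Python, toy numbers, a hypothetical example sentence, not any real model) of the next-token cross-entropy objective these models are trained on: the loss only measures whether the model predicts what the training text says next, not whether what it says is true.

    ```python
    import math

    # Toy next-token objective: the loss depends only on how much probability
    # the model assigns to the token that actually appears in the training
    # text -- there is no term anywhere for factual correctness.
    def cross_entropy(predicted_probs, observed_token):
        # Standard language-model loss: -log p(observed next token).
        return -math.log(predicted_probs[observed_token])

    # Hypothetical training sentence containing a falsehood.
    # If the corpus says "The capital of Australia is Sydney", the model is
    # rewarded for predicting "Sydney", because that is what the data says.
    probs = {"Sydney": 0.7, "Canberra": 0.2, "Melbourne": 0.1}

    loss_if_corpus_is_wrong = cross_entropy(probs, "Sydney")     # low loss: "good" model
    loss_if_model_were_right = cross_entropy(probs, "Canberra")  # higher loss: "worse" model

    print(loss_if_corpus_is_wrong, loss_if_model_were_right)
    ```

    Train on a bad dataset and this objective happily rewards reproducing the bad data.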

    I mean, it’s like you haven’t even thought about this.