Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • rab@lemmy.ca

    I can’t fathom why google would force diversity into AI.

    People use AI as tools. If the tool doesn’t work correctly, people will not use it, full stop. It’s that simple.

    There are many other AIs out there that don’t behave this way, and people will be quick to move on to one of those instead.

    Surprisingly stupid even for google.

  • yildolw@lemmy.world

    Oh no, not racial impurity in my Nazi fanart generator! /s

    Maybe you shouldn’t use a plagiarism engine to generate Nazi fanart. Thanks

  • NotJustForMe@lemmy.ml

    It’s okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people? ٩(。•́‿•̀。)۶

    • rottingleaf@lemmy.zip

      WDYM?

      Only their new SW trilogy comes to mind, but in SW, racism among humans was limited to very backwards (savage by SW standards) planets; racism between humans and other spacefaring races was more of an issue, so a villain of any human race is normal there.

      It’s rather the purely cinematographic part that clearly made skin color more noticeable, for whatever reason, and there would be some racists among the viewers.

      Probably they knew they couldn’t reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later complain about fans being racist.

      • NotJustForMe@lemmy.ml

        Have you read the article? It was about misrepresenting historical figures; racism was just a small part.

        It was about favoring diversity, even if it’s historically inaccurate or even impossible. Something Disney is very good at.

        • GiveMemes@jlai.lu

          Are you referring to The Little Mermaid? If so, get tf over yourself… it’s literally a fictional children’s story.

  • kaffiene@lemmy.world

    Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy.

    • UlrikHD@programming.dev

      I’m pretty sure it’s generating racially diverse Nazis because companies tinker with the prompts under the hood to counterbalance biases in the training data. A naive implementation of generative AI wouldn’t output black or Asian Nazis.

      it doesn’t have EQ or sociological knowledge.

      It sort of does (in a poor way), but they call it bias and try to dampen it.

      • kaffiene@lemmy.world

        I don’t disagree. The article complained about the lack of nuance in generating responses, and I was responding to the ability of LLMs and generative AI to exhibit that. Your points about bias I agree with.

      • Echo Dot@feddit.uk

        At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.

        At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.

        • maynarkh@feddit.nl

          Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
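
          To make that mechanism concrete, here is a minimal Python sketch of the kind of blanket prompt rewriting being described. Google hasn’t published its actual implementation, so the trigger words and the appended text here are purely illustrative assumptions, not the real system.

          ```python
          # Hypothetical sketch of behind-the-scenes prompt rewriting; not Google's code.
          DIVERSITY_SUFFIX = ", depicting people of diverse races and genders"

          # Illustrative trigger words; a real system would be far more elaborate.
          PEOPLE_WORDS = ("person", "people", "man", "woman", "soldier", "scientist")

          def rewrite_prompt(user_prompt: str) -> str:
              """Blindly append a diversity modifier whenever the prompt mentions people."""
              if any(word in user_prompt.lower() for word in PEOPLE_WORDS):
                  return user_prompt + DIVERSITY_SUFFIX
              return user_prompt

          # The failure mode discussed in this thread: the rewrite ignores historical context.
          print(rewrite_prompt("a german soldier in 1943"))
          # -> "a german soldier in 1943, depicting people of diverse races and genders"
          ```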

    • stockRot@lemmy.world

      Why shouldn’t we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world

      • kaffiene@lemmy.world

        I DO expect better from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.

  • xantoxis@lemmy.world

    I don’t know how you’d solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

    But that’s because the AI doesn’t know how to solve the problem.

    Because the AI doesn’t know anything.

    Real intelligence simply doesn’t work like this, and every time you point it out someone shouts “but it’ll get better”. It still won’t understand anything unless you teach it exactly what the solution to a prompt is. It won’t, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

    • random9@lemmy.world

      You don’t do what Google seems to have done - inject diversity artificially into prompts.

      You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For “german 1943 soldier” the accurate historical images are obviously far less likely to contain racially diverse people in them.

      If Google has indeed already done that, and then still had to artificially force racial diversity, then their AI training model is bad and unable to handle the fact that a single input can map to many different images, rather than just the most prominent or average image in its training set.

      • xantoxis@lemmy.world

        Ultimately this is futile though, because you can do that for these two specific prompts until the AI appears to “get it”, but it’ll still screw up a prompt like “1800s Supreme Court justice” or something because it hasn’t been trained on that. Real intelligence requires agency to seek out new information to fill in its own gaps; and a framework to be aware of what the gaps are. Through exploration of its environment, a real intelligence connects things together, and is able to form new connections as needed. When we say “AI doesn’t know anything” that’s what we mean–understanding is having a huge range of connections and the ability to infer new ones.

        • kromem@lemmy.world

          Oh really? Here’s Gemini’s response to “What would the variety of genders and skin tones of the supreme court in the 1800s have been?”

          The Supreme Court of the United States in the 1800s was far from diverse in terms of gender and skin tone. Throughout the entire 19th century, all the justices were white men. Women were not even granted the right to vote until 1920, and there wasn’t a single person of color on the Supreme Court until Thurgood Marshall was appointed in 1967.

          Putting the burden of contextualization on the LLM would have avoided this issue.

        • TheGreenGolem@lemmy.dbzer0.com

          That’s why I hate that they started to call them artificial intelligence. There is nothing intelligent in them at all. They work on probability based on a shit ton of data, that’s all. That’s not intelligence, that’s basically brute force. But there is no going back at this point, I know.

    • TORFdot0@lemmy.world

      Edit: further discussion on the topic has changed my viewpoint on this. It’s not that it’s been trained wrong on purpose and is now confused; it’s that everything it’s being asked is secretly being changed. It’s like a child being told to make up a story by their teacher when the principal asked for the right answer.

      Original comment below


      They’ve purposefully overridden its training to make it create more PoCs. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused. It’s the same as if you lied to a child during their education and then asked them for real answers; they’d tell you the lies they were taught instead.

      • TwilightVulpine@lemmy.world

        This result is clearly wrong, but it’s a little more complicated than saying that adding inclusivity is purposely training it wrong.

        Say, if “entrepreneur” only generated images of white men, and “nurse” only generated images of white women, then that wouldn’t be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing that AI does a lot, because AI is a pattern recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples for every single thing. Human biases affect how many images we have of each group of people.

        It’s not even just limited to image generation AIs. Black people often bring up how facial recognition technology is much spottier for them because the training data and even the camera technology were tuned and tested mainly on white people. Usually that’s not even done deliberately, but it happens because of who gets to work on it and where it gets tested.

        Of course, secretly adding “diverse” to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, clearly, the AI is not able to determine these things by itself.

        • TORFdot0@lemmy.world

          I agree with your comment. As you say, I doubt the training sets are reflective of reality either. I guess that leaves tampering with the prompts to gaslight the AI into providing results it wasn’t asked for as the method we’ve chosen to fight this bias.

          We expect the AI to give us text or image generation that is based in reality, but the AI can’t experience reality and only has the knowledge of the training data we provide it, which is just an approximation of reality, not the reality we exist in. I think maybe the answer would be teaching users of the tool that the AI is doing the best it can with the data it has. It isn’t racist, it is just ignorant. Let the user add “diverse” to the prompt if they wish, rather than tampering with the request to hide the insufficiencies in the training data.

          • TwilightVulpine@lemmy.world

            I wouldn’t count on the user realizing the limitations of the technology, or on the companies openly admitting to it at the expense of their marketing. As far as art AI goes this is just awkward, but it worries me about LLMs, and about people using them expecting accurate, applicable information, only to come out of it with very skewed worldviews.

        • cheese_greater@lemmy.world

          Why couldn’t it be tuned to simply randomize the skin tone where it’s not otherwise specified? Like, if it’s all completely arbitrary, just randomize stuff, problem solved?
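
          For what it’s worth, a crude version of that suggestion is easy to sketch in Python. This is purely hypothetical; the term lists are made up for illustration, and no vendor is known to do exactly this.

          ```python
          import random

          # Hypothetical sketch of "randomize only where not otherwise specified".
          SKIN_TONE_TERMS = ("white", "black", "asian", "pale", "dark-skinned", "olive-skinned")
          RANDOM_POOL = ["light-skinned", "dark-skinned", "olive-skinned"]

          def maybe_randomize(prompt: str) -> str:
              if any(term in prompt.lower() for term in SKIN_TONE_TERMS):
                  return prompt  # the user already specified; leave the prompt alone
              return f"{prompt}, {random.choice(RANDOM_POOL)}"

          print(maybe_randomize("portrait of a nurse"))                  # gets a random descriptor
          print(maybe_randomize("portrait of a pale red-haired nurse"))  # left unchanged
          ```

          It still wouldn’t fix the historically constrained cases discussed above, since randomizing is just as context-blind as always appending “diverse”.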

    • FooBarrington@lemmy.world

      I’ll get the usual downvotes for this, but:

      Because the AI doesn’t know anything.

      is untrue, because current AI fundamentally is knowledge. Intelligence fundamentally is compression, and that’s what the training process does - it compresses large amounts of data into a smaller size (and of course loses many details in the process).

      But there’s no way to argue that AI doesn’t know anything if you look at its ability to recreate a great number of facts etc. from a small amount of activations. Yes, not everything is accurate, and it might never be perfect. I’m not trying to argue that “it will necessarily get better”. But there’s no argument that labels current AI technology as “not understanding” without resorting to a “special human sauce” argument, because the fundamental compression mechanisms behind it are the same as behind our intelligence.

      Edit: yeah, this went about as expected. I don’t know why the Lemmy community has so many weird opinions on AI topics.

          • barsoap@lemm.ee

            A book is a physical representation of knowledge.

            Knowledge is something possessed by an actor capable of employing it. One way I can employ a textbook about Quantum Mechanics is by throwing it at you, for which any book would suffice, but I can’t put any of the knowledge represented within into practice. Throwing is purely Newtonian; I have some learned knowledge about that and plenty of innate knowledge as a human (we are badass throwers). Also, I played handball when I was a kid. All of that is plenty of knowledge, plus an object to throw, but none of it concerns spin states. It also won’t hit you any differently than a cookbook.

            • FooBarrington@lemmy.world

              What exactly are you trying to argue? Yes, I wasn’t incredibly precise, a book isn’t literal knowledge, but I didn’t think that somebody would nitpick this hard. Do you really think this is in any way a productive line of argumentation?

              Knowledge is something possessed by an actor capable to employ it.

              Technically this is not correct, as e.g. a fully paralyzed and mute person can’t employ their knowledge, yet they still possess it.

              One way I can employ a textbook about Quantum Mechanics is by throwing it at you, for which any book would suffice, but I can’t put any of the knowledge represented within into practice.

              Why can’t you put any of the knowledge represented in the book into practice? You can still pick the book up and extract the knowledge.

              See how these are technically correct arguments, yet they are absolutely stupid?

              • barsoap@lemm.ee

                Technically this is not correct, as e.g. a fully paralyzed and mute person can’t employ their knowledge, yet they still possess it.

                You’d have to be past Hawking levels of paralysis to not be able to employ that knowledge to come up with new physical theories. Now that was a nitpick.

                You can still pick the book up and extract the knowledge.

                That would be employing my knowledge of maths, of my general education, not of the QM knowledge represented in the book: I cannot employ the knowledge in the book to pick up the knowledge in the book because I haven’t picked it up yet. Causality and everything, it’s a thing.

                • FooBarrington@lemmy.world

                  I have no idea what you’re getting at, and I don’t think you’re writing in good faith. I’ll stop here. Have a good day!

      • kromem@lemmy.world

        Lemmy hasn’t met a pitchfork it doesn’t pick up.

        You are correct. The most cited researcher in the space agrees with you. There have been half a dozen papers over the past year replicating the finding that LLMs generate world models from the training data.

        But that doesn’t matter. People love their confirmation bias.

        Just look at how many people think it only predicts what word comes next, thinking it’s a Markov chain and completely unaware of how self-attention works in transformers.

        The wisdom of the crowd is often idiocy.
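
        For anyone unfamiliar with the distinction being drawn here, below is a toy numpy sketch with made-up sizes, not the internals of any particular model: a Markov-chain-style predictor conditions only on the previous token, while self-attention mixes information from every position in the context window.

        ```python
        import numpy as np

        def bigram_next(prev_token, table):
            # Markov-chain style: the next-token distribution depends only on prev_token.
            return table[prev_token]

        def self_attention(X, Wq, Wk, Wv):
            # Scaled dot-product self-attention: every output row is a weighted blend
            # of value vectors from *all* positions in the sequence X (seq_len x dim).
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(K.shape[-1])
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ V

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 8))                 # 5 tokens, 8-dim embeddings
        Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
        ```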

        • FooBarrington@lemmy.world

          Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don’t have knowledge because, when asking it for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don’t matter as long as you get to be angry about stupid AIs.

          It’s hard to tell a difference between these people and Trump supporters sometimes.

          • kromem@lemmy.world

            It’s hard to tell a difference between these people and Trump supporters sometimes.

            To me it feels a lot like when I was arguing against antivaxxers.

            The same pattern of linking and explaining research but having it dismissed because it doesn’t line up with their gut feelings and whatever they read when “doing their own research” guided by that very confirmation bias.

            The field is moving faster than any I’ve seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

            A lot of outstanding assumptions have been proven wrong.

            It’s a bit like the early 19th century in physics, where everyone assumed things that turned out wrong over a very short period where it all turned upside down.

            • FooBarrington@lemmy.world

              Exactly. They have very strong feelings that they are right, and won’t be moved - not by arguments, research, evidence or anything else.

              Just look at the guy telling me “they can’t reason!”. I asked whether they’d accept they are wrong if I provide a counter example, and they literally can’t say yes. Their world view won’t allow it. If I’m sure I’m right that no counter examples exist to my point, I’d gladly say “yes, a counter example would sway me”.

              • GiveMemes@jlai.lu

                Yall actually have any research to share or just gonna talk about it?

            • GiveMemes@jlai.lu

              Yall actually have any research to share or just gonna talk about it?

    • Jojo@lemm.ee

      Real intelligence simply doesn’t work like this

      There’s a certain point where this just feels like the Chinese room. And, yeah, it’s hard to argue that a room can speak Chinese, or that the weird prediction rules that an LLM is built on can constitute intelligence, but that doesn’t mean it can’t be. Essentially boiled down, every brain we know of is just following weird rules that happen to produce intelligent results.

      Obviously we’re nowhere near that with models like this now, and it isn’t something we have the ability to work directly toward with these tools, but I would still contend that intelligence is emergent, and arguing whether something “knows” the answer to a question is infinitely less valuable than asking whether it can produce the right answer when asked.

      • fidodo@lemmy.world

        I really don’t think that LLMs can be considered intelligent any more than a book can be intelligent. LLMs are basically search engines at the word level of granularity; they have no world model or world simulation, they’re just using a shit ton of relations to pick highly relevant words based on the probability of the text they were trained on. That doesn’t mean that LLMs can’t produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text that has intelligence encoded into it, because it was written by intelligent humans. If you break down a book into its sentences, those sentences will have intelligent content, and if you start to measure the relationships between the order of words in that book, you can produce new sentences that still have intelligent content. That doesn’t make the book intelligent.

        • intensely_human@lemm.ee

          What do you mean it has no world model? Of course it has a world model, composed of the relationships between words in language that describes that world.

          If I ask it what happens when I drop a glass onto concrete, it tells me. That’s evidence of a world model.

          • fidodo@lemmy.world

            A simulation of the world that it runs to do reasoning. It doesn’t simulate anything, it just takes a list of words and then produces the next word in that list. When you’re trying to solve a problem, do you just think, well I saw these words so this word comes next? No, you imagine the problem and simulate it in both physical and abstract terms to come up with an answer.

          • EpeeGnome@lemm.ee

            I can see the argument that it has a sort of world model, but one that is purely word relationships is a very shallow sort of model. When I am asked what happens when a glass is dropped onto concrete, I don’t just think about what I’ve heard about those words and come up with a correlation; I can also think about my experiences with those materials and with falling things and reach a conclusion about how they will interact. That’s the kind of world model it’s missing. Material properties and interactions are well enough written about that it ~~simulates~~ emulates doing this, but if you add a few details it can really throw it off. I asked Bing Copilot “What happens if you drop a glass of water on concrete?” and it went into excruciating detail about how the water would splash, mentioned how it could absorb into or affect uncured concrete, and completely failed to notice that the glass itself would strike the concrete, instead describing the chemistry of how using “glass (such as from the glass of water)” as aggregate could affect the curing process. Having a purely statistical/linguistic world model leaves some pretty big holes in its “reasoning” process.

        • Jojo@lemm.ee

          But you don’t really “know” anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited just the right way when I ask it a question, and it wasn’t set up that way by any “intelligence”, the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you’ve received over your life).

          Thinking about how a thing works is, imo, the wrong way to think about if something is “intelligent” or “knows stuff”. The mechanism is neat to learn about, but it’s not what ultimately decides if you know something. It’s much more useful to think about whether it can produce answers, especially given novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.

          And again, I’m not trying to argue that an LLM is intelligent, just that whether it is or not won’t be decided by talking about the mechanism of its “thinking”

          • intensely_human@lemm.ee

            We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.

            I agree, and I formalize it like this:

            Those who claim LLMs and AGI are distinct categories should present a text processing task, ie text input and text output, that an AGI can do but an LLM cannot.

            So far I have not seen any reason not to consider these LLMs to be generally intelligent.

            • GiveMemes@jlai.lu

              Literally anything based on opinion or creating new info. An AI cannot produce a new argument. A human can.

              It took me 2 seconds to think of something LLMs can’t do that AGI could.

  • RGB3x3@lemmy.world

    A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

    This is honestly fascinating. It’s putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

    There’s a lot of learning to be done here and it would be sad to miss that opportunity.

    • kromem@lemmy.world

      It’s putting human biases on full display at a grand scale.

      Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.

      But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

      Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
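
      For illustration, that split looks roughly like the sketch below. Gemini’s Imagen stack isn’t public, so this uses the open-source diffusers library and Stable Diffusion purely as a stand-in; the point is only that the chat model’s job ends at handing a text prompt to a separate diffusion pipeline.

      ```python
      # Sketch only: Stable Diffusion via the diffusers library stands in for
      # "the diffusion model" here, since Google's image stack is not public.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # The (possibly rewritten) text prompt is all the LLM layer passes along.
      prompt = "a portrait of an american woman, photorealistic"
      image = pipe(prompt).images[0]   # the diffusion model does the actual generation
      image.save("output.png")
      ```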

        • kromem@lemmy.world

          If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

          Data can be biased in a number of ways, that don’t always reflect broader social biases, and even when they might appear to, the cause vs correlation regarding the parallel isn’t necessarily straightforward.

          • VoterFrog@lemmy.world

            I mean “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

            I get what you’re saying in specific circumstances. Sure, a dataset that is built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we’ve built a culture around.

            • kromem@lemmy.world

              Except these kinds of data-driven biases can creep in from all sorts of places.

              Is there a bias in what images have labels and what don’t? Did they focus only on English labeling? Did they use a vision based model to add synthetic labels to unlabeled images, and if so did the labeling model introduce biases?

              Just because the sampling is broad doesn’t mean the processes involved don’t introduce procedural bias distinct from social biases.

    • Buttons@programming.dev

      It’s putting human biases on full display at a grand scale.

      The skin color of people in images doesn’t matter that much.

      The problem is these AI systems have more subtle biases, ones that aren’t easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.

      • intensely_human@lemm.ee

        In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.

  • FinishingDutch@lemmy.world

    Honestly, this sort of thing is what’s killing any sort of enjoyment and progress of these platforms. Between the INCREDIBLY harsh censorship that they apply and injecting their own spin on things like this, it’s nigh on impossible to get a good result these days.

    I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

    It’s even more annoying that you can’t even PAY to get rid of these restrictions and filters. I’d gladly pay to use one if it didn’t censor any prompt to death…

    • Thorny_Insight@lemm.ee

      I couldn’t agree more. I recently read an article that criticized “uncensored AI” because it was capable of coming up with a plan for a Nazi takeover of the world or something similar. Well duh, if that’s what you asked for then it should. If it truly is uncensored then it should be capable of plotting a similar takeover for gay furries too, as well as counter-measures for both of those plans.

      • intensely_human@lemm.ee

        This points at a very crucial and deep divide in people’s social philosophy, which is how to ensure bad things are minimized.

        One major branch of this theory goes like:

        Make sure people are good people, and punish those who do wrong

        And the other major branch goes like:

        Make sure people don’t have the power needed to do wrong

        Very deep, very serious divide in our zeitgeist, and we never talk about it directly but I think we really should.

        (Or maybe we shouldn’t, because the conversation could be dangerous in the wrong hands)

        I’m in the former camp. I think people should have power, even if it enables them to do bad things.

    • crimsonpoodle@pawb.social

      Just run Ollama locally and download uncensored versions. It runs on my M1 MacBook no problem and is at the very least comparable to ChatGPT 3. Unsure about images though, but there should be some open source options. Data is king here, so the more you use a platform the better its AI gets (generally), so don’t give the corporations the business.
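
      If it helps anyone get started, here’s a minimal Python sketch of talking to a locally running Ollama server. It assumes Ollama is installed and a model has already been pulled; the /api/generate endpoint on port 11434 is Ollama’s documented local API, and the model name is just one example from its library.

      ```python
      # Minimal sketch: query a local Ollama server from Python.
      # Assumes Ollama is installed and a model has been pulled, e.g.:
      #   ollama pull llama2-uncensored
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",   # Ollama's default local endpoint
          json={"model": "llama2-uncensored", "prompt": "Why is the sky blue?", "stream": False},
          timeout=120,
      )
      print(resp.json()["response"])
      ```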

      • FinishingDutch@lemmy.world

        I’ve never even heard of that, so I’m definitely going to check that out :D I’d much prefer running my own stuff rather than sending my prompts to god knows where. Big tech already knows way too much about us anyway.

      • intensely_human@lemm.ee

        How powerful is ollama compared to say GPT-4?

        I’ve heard GPT-4 uses an enormous amount of energy to answer each prompt. Are the models runnable on personal equipment once they’re trained?

        I’d love to have an uncensored AI

        • crimsonpoodle@pawb.social

          Llama 2 is pretty good, but there are a ton of different models with different pros and cons; you can see some of them here: https://ollama.com/library. However, I would say that on the whole these models are generally slightly less polished compared to ChatGPT.

          To put it another way: when things are good they’re just as good, but when things are bad the AI will start going off the rails, for instance holding both sides of the conversation, refusing to answer, or just saying goodbye. More “wild westy”, but you can also save the chats and go back to them, so there are ways to mitigate, and things are only getting better.

    • mellowheat@suppo.fi

      I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

      The thing is, if it’s injecting diversity into a place where there shouldn’t have been diversity, this can usually be fixed by specifying better in the next prompt. Not by writing ragebait articles about it.

      But yeah, I’d also be happy to be able to use an unhinged LLM once in a while.

  • Underwaterbob@lemm.ee

    This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis ironically preaching racial purity and attempting to take over the world.

    • AstridWipenaugh@lemmy.world

      Dave Chappelle did that with a blind black man who joined the Klan (back in the day, before he went off the deep end).

  • Jeom@lemmy.world

    Inclusivity is obviously good, but what Google’s doing just seems all too corporate and plastic.

    • guajojo@lemmy.world

      It’s trying so hard not to be racist that it’s being even more racist than other AIs. It’s hilarious.

  • BurningnnTree@lemmy.one

    No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don’t specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.

    • UnderpantsWeevil@lemmy.world

      No matter what Google does, people are going to come up with gotcha scenarios to complain about.

      American using Gemini: “Please produce images of the KKK, historically accurate Santa’s Workshop Elves, and the board room of a 1950s auto company”

      Also Americans: “AH!! AH!!! Minorities and Women!!! AAAAAHHH!!!”

      I mean, idk, man. Why do you need AI to generate an image of George Washington when you have thousands of images of him already at your disposal?

      • FinishingDutch@lemmy.world

        Because sometimes you want an image of George Washington, riding a dinosaur, while eating a cheeseburger, in Paris.

        Which you actually can’t do on Bing anyway, since its ‘content warning’ stops you from generating anything with George Washington…

        Ask it for a Founding Father though, it’ll even hand him a gat!

        https://lemmy.kya.moe/imgproxy?src=lemmy.world%2fpictrs/image/dab26e07-34c8-422e-944f-83d7f719ea2e.jpeg

        • pirat@lemmy.world

          The random lettuce between every layer is weirdly off-putting to me. It seems like it’s been growing on the burger for quite some time :D

          • FinishingDutch@lemmy.world

            Funnily enough, he’s not eating one in the other three images either. He’s holding an M16 in one, with the dinosaur partially as a hamburger (?). In the other two he’s merely holding the burger.

            I assume if I change the word order around a bit, I could get him to enjoy that burger :D

            • VoterFrog@lemmy.world

              This is the thing. There’s an incredible number of inaccuracies in the picture, several of which flat out ignore the request in the prompt, and we laugh it off. But the AI makes his skin a little bit darker? Write the Washington Post! Historical accuracy! Outrage!

              • FinishingDutch@lemmy.world

                Well, the tech is of course still young. And there’s a distinct difference between:

                A) User error: a prompt that isn’t as good as it could be, for example because the user doesn’t yet understand the ‘order of operations’ that the AI model likes to work in.

                B) The tech flubbing things because it’s new and constantly in development

                C) The owners behind the tech injecting their own modifiers into the AI model in order to get a more diverse result.

                For example, in this case I understand the issue: the original prompt was ‘image of an American Founding Father riding a dinosaur, while eating a cheeseburger, in Paris.’ Doing it in one long sentence with several commas makes it harder for the AI to pin down the ‘main theme’, in my experience. Basically, it first thinks ‘George on a dinosaur’ with the burger and Paris as afterthoughts. But if you change the prompt around a bit to ‘An American Founding Father is eating a cheeseburger. He is riding on a dinosaur. In the background of the image, we see Paris, France.’, you end up with the correct result:

                Basically the same input, but by simply swapping the wording around it got the correct result. Other ‘inaccuracies’ are of course to be expected, since I didn’t really give the AI much to go on. I didn’t give it a timeframe, for one, so it wouldn’t ‘know’ not to have the Eiffel Tower and a modern handgun in it. Or that that flag would be completely wrong.

                The problem is with C), where you simply have no say in the modifiers that they inject into any prompt you send. Especially when the companies state that they are doing it on purpose so the AI will offer a more diverse result in general. You can write the best, most descriptive prompt and there will still be an unexpected outcome if their modifiers get injected in the right place in your prompt. That’s the issue.

                • VoterFrog@lemmy.world

                  C is just a workaround for B and for the fact that the technology has no way to identify and overcome harmful biases in its data set and model. This kind of behind-the-scenes prompt engineering isn’t even unique to diversifying image output, either. It’s a necessity for creating a product that is usable by the general consumer, at least until the technology evolves enough that it can incorporate those lessons directly into the model.

                  And so my point is, there’s a boatload of problems that stem from the fact that this is early technology and the solutions to those problems haven’t been fully developed yet. But while we are rightfully not upset that the system doesn’t understand that lettuce doesn’t go on the bottom of a burger, we’re for some reason wildly upset that it tries to give our fantasy quasi-historical figures darker skin.

    • fidodo@lemmy.world

      It’s silly to point at brand new technology and not expect there to be flaws. But I think it’s totally fair game to point out the flaws and try to make it better; I don’t see why we should just accept the technology in its current state and not try to improve it. I totally agree that nobody should be mad at this. We’re figuring it out, an issue was pointed out, and they’re trying to see if they can fix it. Nothing wrong with that part.

    • OhmsLawn@lemmy.world

      It’s really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students, but never diverse Nazis.

      If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it’s rightfully theirs.

      • PopcornTin@lemmy.world

        The issue seems to be that the underlying code tells the AI: if some data set has too many white people or men (Nazis, ancient Vikings, popes, Rockwell paintings, etc.), then make them diverse races and genders.

        What do we want from these AIs? Facts, even if they might be offensive? Or facts as we wish they would be for a nicer world?

  • Harbinger01173430@lemmy.world

    …white is a color. Also white people usually look pink, cream, orange or red. Only albinos look the closest to white though not white enough.

    • roofuskit@lemmy.world

      So what you’re saying is that a white actor should always be cast to play any character that was originally white whether they are the best actor or not?

      Keep in mind that historical figures are largely white because of systemic racism, and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people in order to meet your requirements.

      I’m not defending Google’s ham-fisted approach. But at the same time it’s a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information. LLMs are just as ham-fisted about accurate information as Google’s approach to diversity in LLMs is.

        • roofuskit@lemmy.world

          Someone who is half white would have to play him, right? So you’d have to exclude any truly dark-skinned black people from the role. You know, because the American public would have never put someone dark-skinned into the presidency.