• 89 Posts
  • 279 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • Thanks! It helps to have a lot more background, and I haven’t looked too deeply into this.

    I was trying to keep my reply simple and focused on the point that they didn’t create their own launcher just because they wanted to.

    I didn’t know about the first point, and now I’m wondering if both sides wanted it dismissed, in the U.S. at least. From the article I read, it sounded like this was being pushed from Ironmace’s side.

    I had mentioned the founder’s involvement before, but only in a different reply on this same post.

    On the second point, at least as far as U.S. law is concerned, I’m not so sure that this is such a straightforward case. We’ve already seen in previous cases involving video games that it’s okay for games to share the same game rules, mechanics, ideas, and principles. That’s why anyone can create a game like Tetris, Monopoly, or Pokemon (such as Palworld). As long as they don’t copy assets directly (sprites/models/verbatim text for the game rules, etc.), it’s okay to create a very similar game or even to be openly inspired by other games. This is mostly what I understood after listening to some YouTube attorneys who discussed this matter for Palworld (Hoeg Law and Attorney Tom).

    The difference here is that one of the founders did work for Nexon, so it seems that a lot of the work was likely plagiarized (which is unethical, but not by itself illegal in the U.S.). It would have been interesting to see how this would play out in U.S. courts.

    Do you have any idea how the courts in South Korea view cases like this?

    On the third point, I had heard about how they recruited other employees, but I hadn’t heard that the founder agreed to destroy the company info and failed to do so. Do you have a link/source for that?

    Thanks for the reply!

    Edit: asking for source, not because I’m doubting you, I just want to read up more on it.

  • SD? SD 3? The weights? All the above?

    Stable Diffusion is an open-source image-generation machine learning model (it serves the same purpose as Midjourney).

    Stable Diffusion 3 is the next major version of the model and, in a lot of ways, it looks better to work with than what we currently have. However, up until recently we were wondering if we would even get the model, since Stability AI ran out of funding and is in the midst of being sold off.

    The “weights” refer to the values that make up the neural network. By releasing the weights, they are effectively making the model open-source, so that the community can retrain/fine-tune the model as much as we want.
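
    A toy sketch of what “weights” means (this is not Stable Diffusion itself, just a two-layer illustration with made-up numbers): the model’s behavior is entirely determined by arrays of numbers like these, so releasing them lets anyone run or fine-tune the model.

```python
import numpy as np

# Toy illustration (NOT Stable Diffusion): a model is essentially its
# architecture plus its weights. "Releasing the weights" means publishing
# arrays like these so anyone can run or modify the model.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.7]])    # weights of layer 1 (made-up values)
W2 = np.array([[1.0],
               [-1.0]])         # weights of layer 2 (made-up values)

def forward(x):
    # The network's output is entirely determined by W1 and W2.
    h = np.maximum(0.0, x @ W1)  # ReLU activation
    return h @ W2

x = np.array([[1.0, 2.0]])
y = forward(x)                   # output with the "released" weights

# "Fine-tuning" just means adjusting those same weight values:
W2_tuned = W2 + 0.1
y_tuned = np.maximum(0.0, x @ W1) @ W2_tuned
print(y[0, 0], y_tuned[0, 0])    # the output shifts once the weights change
```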

    They made a wait list for those who are interested in getting notified once the model is released, and they turned it into a pun by calling it a “weights list”.


  • If you’re trying to compare “AI” and electricity use, you need to compare, for each use case, how we traditionally do things versus how any sort of “AI” does it. Even then, we need to ask ourselves if there’s a better way to do it, or if it’s worth the increase in productivity.

    For example, take a rain sensor on your car.
    You could set up an AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
    But why do that when you could use a little sensor that shines a low-power laser at the windshield and activates the wipers when it detects a drop in the energy that’s normally reflected back?
    The dedicated sensor with a low-power laser will use far less energy and be far more efficient for this use case.
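
    The sensor’s logic amounts to a single threshold check, which is why it’s so cheap compared to running a vision model. A hypothetical sketch (the baseline and threshold values here are made up for illustration):

```python
# Hypothetical sketch of the dedicated-sensor approach: no ML, just a
# threshold on how much of the emitted laser light is reflected back.
# Rain on the glass scatters light away, so less energy returns.
DRY_BASELINE = 1.0    # reflected energy on a dry windshield (assumed units)
RAIN_FRACTION = 0.8   # below this fraction of baseline -> assume rain

def wipers_on(reflected_energy: float) -> bool:
    # Activate the wipers when the return signal drops noticeably.
    return reflected_energy < DRY_BASELINE * RAIN_FRACTION

print(wipers_on(0.95))  # dry glass -> False
print(wipers_on(0.50))  # rainy glass -> True
```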

    On the other hand, I could spend time/electricity watching a video over and over again trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to transcribe and translate what was said in a matter of seconds. In this case, Whisper uses less electricity.

    In the context of this article we’re talking about DLSS where Nvidia has trained a few different ML models for upscaling, optical flow (predicting where the pixels/objects are moving to next), and frame generation (being able to predict what the in-between frames will look like to boost your FPS).

    This can potentially save energy because it puts less of a load on the GPU: most of the rendering work is done at a lower resolution before being upscaled at the end. But honestly, I haven’t seen anyone compare the energy-use differences on this yet… and either way, you’re already using a lot of electricity just by gaming.