![](https://lemmy.world/pictrs/image/0af10ed3-a982-492e-8aec-a765d3e43eee.png)
![](https://lemmy.zip/pictrs/image/77ad0608-acf8-4508-bf30-fb3df37c6bac.webp)
$20 for these 7 games:
- Nickelodeon All-Star Brawl 2
- Penny's Big Breakaway
- The Elder Scrolls III: Morrowind Game of the Year Edition
- Arzette: The Jewel of Faramore
- Dishonored
- Hyperbolica
- Blazing Chrome
$20 for these 7 games:
Epoch fail!!!
(Relevant xkcd):
https://xkcd.com/376/
I don’t think you can add any extra text when crossposting with Lemmy, but I’m not sure what instances like kbin allow.
Either way, I would expect that treating it the same way that the browser view currently does would be a good start for this specific case.
Is this how we want it to work though?
There’s a lot of context missing on one post, while it’s visible on the other. And this isn’t a problem when viewing these posts through a browser instead of Voyager.
That GitHub “archive here” link leads to a page where it hasn’t been archived… (or was the archive removed??).
Thanks! It helps to have a lot more background, since I haven’t looked too deeply into this.
I was trying to keep my reply simple and directly to the point that they didn’t create their own launcher just because they wanted to.
I didn’t know about the first point; now I’m wondering if both sides wanted it dismissed, at least in the U.S. From the article I read, it sounded like this was being pushed from Ironmace’s side.
I had mentioned the founder’s involvement before, but only in a different reply on this same post.
On the second point, at least as far as U.S. law is concerned, I’m not so sure that this is such a straightforward case. We’ve already seen in previous cases with video games that it’s okay for games to have the same game rules, mechanics, ideas, and principles. That’s why anyone can create a game like Tetris, Monopoly, or Pokémon (such as Palworld). As long as they don’t copy over assets directly (sprites/models/verbatim text for the game rules, etc.), it’s okay to create a very similar game or even to be inspired by other games. This is mostly what I understood after listening to some YouTube attorneys who were discussing this matter for Palworld (Hoeg Law and Attorney Tom).
The difference here is that one of the founders did work for Nexon, so it seems that a lot of the work was likely plagiarized (which is not illegal in the U.S., but it is unethical). It would have been interesting to see how this would play out in U.S. courts.
Do you have any idea how the courts in South Korea view cases like this?
On the third point, I had heard how they had recruited other employees, but I hadn’t heard about the founder agreeing to destroy the company info and failing to do so. Do you have a link/source for that?
Thanks for the reply!
Edit: asking for source, not because I’m doubting you, I just want to read up more on it.
Joke’s on us for trusting them to do what?
It was on Steam, up until Nexon sued them because they suspected stolen assets were used.
So far it doesn’t look like that was true, and the case that was filed in the U.S. was eventually dismissed (since it should be handled by the courts in South Korea).
So hopefully we’ll see it back in the Steam store eventually.
Yeah, it looks like Nexon was trying to crush their competition (a lead developer left Nexon and went to work on Dark and Darker).
The police didn’t find anything obvious when they investigated Nexon’s allegations, and Ironmace had already had an audit conducted by an external group:
> Our code was built from scratch. Most of our assets are purchased from the Unreal marketplace. All other assets and all game designs docs were created inhouse. This has already been audited by an outside agency. As far as we know you cannot copyright a game genre.
https://www.vg247.com/dark-and-darker-devs-raided-by-police-following-accusations-of-stolen-assets
The lawsuit that Nexon filed in the U.S. was eventually dismissed, but Steam pulled the game from their store, so the damage was already done:
https://gamerant.com/dark-and-darker-nexon-lawsuit-dismissed/
PvPvE
Different teams are dropped around a map; you then work with your team to try to survive and make it out at the end.
You don’t have to engage other players on other teams, but chances are they’ll engage you.
Seriously?!
While you’re absolutely correct, for those who don’t know, Microsoft does offer an IoT version of Windows that removes most of the bloatware.
Not just the U.S.
Avalanche Studios is headquartered in Sweden, and they’re closing their studio in Canada (per this article).
Additionally, Phoenix Labs (Dauntless & Fae Farm) is a Canadian game developer and they just let go of a significant number of developers and cancelled all future projects (about 3 weeks ago):
https://www.pcgamer.com/gaming-industry/dauntless-developer-phoenix-labs-lays-off-employees-and-cancels-in-development-projects-says-its-the-last-resort-to-ensure-phoenix-labs-can-survive/
While Microsoft was the one shutting down multiple game studios last month, those studios are also based all over:
- Tango Gameworks - Japan
- Alpha Dog Games - Canada
- Arkane Studios - (headquarters in France, but shutting down their studio in the U.S.)
- Roundhouse Studios - U.S.
Edit: formatting
How many game studios is that within a 2 month period?!
SD? SD 3? The weights? All the above?
Stable Diffusion is an open-source image-generation machine learning model (similar to Midjourney).
Stable Diffusion 3 is the next major version of the model and, in a lot of ways, it looks better to work with than what we currently have. However, up until recently we were wondering if we would even get the model, since Stability AI ran out of funding and is in the midst of being sold off.
The “weights” are the values that make up the neural network. By releasing the weights, they are effectively making the model open source, so the community can retrain/fine-tune it as much as we want.
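As a toy illustration of what “weights” means in general (this has nothing to do with Stable Diffusion’s actual architecture; the numbers and function here are made up):

```python
# A "model" is ultimately just arithmetic driven by stored numbers (the weights).
# Toy single neuron: output = w1*x1 + w2*x2 + bias
weights = [0.8, -0.3]
bias = 0.1

def neuron(inputs):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(round(neuron([1.0, 2.0]), 6))  # 0.3

# "Releasing the weights" means publishing numbers like these for every layer,
# so anyone can run the model locally or nudge the numbers further (fine-tuning).
weights[0] += 0.05  # one tiny fine-tuning step
```

A real model like SD3 just has billions of these numbers instead of three.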
They made a wait list for those who are interested in getting notified once the model is released, and they turned it into a pun by calling it a “weights list”.
Yes, but with DLSS we’re adding ML models to the mix, where each one has been trained on a different aspect:
- Interpolating between frames - normally you might get 30 FPS, but the ML model has an idea of what everything should look like between frames (based on what it has been trained on), so it can insert additional frames to boost your framerate to 60 FPS or more.
- Upscaling (making the picture larger) - the GPU and other hardware can do their work at a smaller resolution, which makes their job easier, while the ML model has been trained to enlarge the image while filling in the correct pixels so that everything still looks good.
- Optical flow - this ML model has been trained on motion (which objects/pixels go where), so that better frame-generation predictions can be achieved.
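For a crude point of comparison, here’s what frame “generation” looks like without any model (this is NOT what DLSS does; it’s the naive baseline, with made-up toy frames):

```python
# Naive frame interpolation: average each pixel of two neighboring frames.
# A trained model instead predicts where objects are moving, which avoids
# the ghosting this simple blend produces.

def blend_frames(frame_a, frame_b, t=0.5):
    """Linearly interpolate two grayscale frames (lists of pixel rows)."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

f0 = [[0, 0], [0, 255]]   # bright pixel in the bottom-right
f1 = [[0, 0], [255, 0]]   # it moved to the bottom-left
mid = blend_frames(f0, f1)
print(mid)  # [[0.0, 0.0], [127.5, 127.5]] - both positions half-lit (ghosting)
```

The learned model’s job is to output the pixel fully lit somewhere in between instead of smeared across both spots.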
Not only that, but Nvidia can push out the latest ML models that have been trained on specific game titles through their driver updates.
While each of these could be accomplished with older techniques, I think the results we’re already seeing speak for themselves.
Edit: added some sources below and fixed up optical flow description.
https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
https://www.youtube.com/watch?v=pSiczcJgY1s
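Same idea for upscaling: the non-ML baseline just duplicates pixels (again, not what DLSS does; this toy function is only here to show what the model improves on):

```python
# Naive 2x upscaling by nearest neighbor: copy each pixel into a 2x2 block.
# A DLSS-style upscaler instead uses a trained model to fill the new pixels
# with plausible detail rather than blocky duplicates.

def upscale_2x(frame):
    out = []
    for row in frame:
        wide = [p for p in row for _ in range(2)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                     # duplicate vertically
    return out

print(upscale_2x([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```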
If you’re trying to compare “AI” and electricity use, you need to compare, for every use case, how we traditionally do things vs. how any sort of “AI” does it. Even then, we need to ask ourselves if there’s a better way to do it, or if it’s worth the increase in productivity.
For example, a rain sensor on your car.
Now, you could set up some AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
But why do that when you could use a little sensor that shoots a small laser at the window, and when it detects a difference in the energy that’s normally reflected back, it activates the windshield wipers?
The dedicated sensor with a low-power laser will use far less energy and be way more efficient for this use case.
On the other hand, I could spend time/electricity watching a video over and over again, trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to quickly transcribe and translate what was said in a matter of seconds. In this case, Whisper uses less electricity.
In the context of this article we’re talking about DLSS where Nvidia has trained a few different ML models for upscaling, optical flow (predicting where the pixels/objects are moving to next), and frame generation (being able to predict what the in-between frames will look like to boost your FPS).
This can potentially save energy because it puts less of a load on the GPU, since most of the work is being done at a lower resolution before upscaling at the end. But honestly, I haven’t seen anyone compare the energy-use differences on this yet… and either way, you’re already using a lot of electricity just by gaming.
Correction: “Weight List” ;)
Interstellar “Inception”, dream training scene:
https://www.youtube.com/watch?v=0b-H8oQUs1A
Edit: Freudian slip.
Can he swing
From a web?
No he can’t
he’s a pig…
https://www.youtube.com/watch?v=BARjPuUN36Y&t=20s