Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.
I help maintain Nixpkgs.
https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)
With efficient cpus and lack of dedicated gpus I doubt the 4W of RAM is really that much of a battery drain.
What? If anything, it’d be more drain relatively speaking.
4W is quite a lot if you consider that a decently efficient laptop should draw 5-8W at idle max.
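To put rough numbers on it (the battery capacity and idle draw below are assumed round figures, not measurements):

```python
# Back-of-envelope impact of 4 W of extra draw on idle battery life.
# 50 Wh battery and 6 W idle draw are assumed round numbers.
battery_wh = 50.0
idle_w = 6.0   # decently efficient laptop at idle
ram_w = 4.0    # the hypothetical RAM draw under discussion

hours_before = battery_wh / idle_w            # ~8.3 h
hours_after = battery_wh / (idle_w + ram_w)   # 5.0 h
print(f"{hours_before:.1f} h -> {hours_after:.1f} h idle runtime")
```

At those assumed figures, 4 W cuts idle runtime by 40%, which is anything but negligible.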
The usual; check the server and client logs.
Archive.is/.ph/.today etc. are playing dirty with DNS, actively lying to Cloudflare and, I believe, some others too. None of them work.
This should allow average non-technical users to keep up with development without reading GitHub comments or knowing how to program.
;)
Mastodon’s UI for groups is terrible. This community is indistinguishable from an account named “@Firefox” with thousands of followers unless you open its page and notice it says “Group” and understand what that means.
Your browser cannot block server-side abuse of your personal data. These consent forms are not about cookies; they’re about fooling users into consenting to abuse of their personal data. Cookies are just one of the many, many technological measures required to carry out said human rights abuse.
I’d look further into that bug because it’s not happening on my end.
Do you have a better source than a 5 y/o comment in an issue?
FreeTube won’t have anything to do with H.265, as YouTube does not serve that format in any way.
Drive is under a different org:
Steam is its own package manager, and native games usually assume that an FHS-conformant environment is present. Neither of those meshes well with Nix, which notoriously has nothing comparable to an FHS and usually requires everything to be defined in its terms.
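For the curious, the usual workaround is nixpkgs’ `buildFHSEnv`, which fakes an FHS layout inside a sandbox. A rough sketch (the package list is illustrative only; what a given game actually needs varies):

```nix
# Sketch of an FHS-like sandbox via nixpkgs' buildFHSEnv.
# The library set here is an illustrative assumption, not a known-good list.
{ pkgs ? import <nixpkgs> {} }:

pkgs.buildFHSEnv {
  name = "fhs-game-env";
  # These packages get mounted at conventional FHS paths (/usr/lib etc.)
  # inside the environment, so dynamically linked binaries can find them.
  targetPkgs = pkgs: with pkgs; [ zlib libGL openal ];
  runScript = "bash";
}
```

This is essentially what the Steam package in nixpkgs does under the hood, on a much larger scale.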
Right from the horse’s mouth ;)
https://mastodon.social/users/protonprivacy/statuses/112162248226735964
He had the last 6 months or so to work on it. He resigned from the Nouveau project and RH in September and likely joined Nvidia a little while later where he would have had plenty of time to work on this patch series.
more and more customary that (for some reason) they want your photo
Gotta keep the people with different skin colour out
What does this have to do with privacy? It’s just a userscript to modify the regular Twitter website with all its human rights abuse.
Realtek LAN is usually not too bad.
For WiFi, you want MediaTek or Intel though.
They’re in the middle of a rollout of a rewrite and have promised to publish the source soon.
While I wouldn’t put it past tech bros to use such unethical measures for their latest grift, it’s not a given that it’s actually claudebot. Anyone can claim to be claudebot, googlebot, boredsquirrelbot or anything else. In fact, it could very well be a competitor aiming to harm Claude’s reputation.
v3 is worth it though
[citation needed]
Sometimes the improvements are not apparent by normal benchmarks, but would have an overall impact - for instance, if you use filesystem compression, with the optimisations it means you now have lower I/O latency, and so on.
Those would show up in any benchmark that is sensitive to I/O latency.
Also, again, [citation needed] that march optimisations measurably lower I/O latency for compressed I/O. For that to happen it is a necessary condition that compression is a significant component in I/O latency to begin with. If 99% of the time was spent waiting for the device to write the data, optimising the 1% of time spent on compression by even as much as 20% would not gain you anything of significance. This is obviously an exaggerated example but, given how absolutely dog slow most I/O devices are compared to how fast CPUs are these days, not entirely unrealistic.
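The exaggerated 99/1 split above can be sketched with an Amdahl-style calculation (the numbers are illustrative, not measurements):

```python
# Amdahl-style sketch: if the device accounts for 99% of I/O latency,
# even a 20% faster compressor barely moves the total. Illustrative numbers.
def total_latency_ms(device_ms, compress_ms, compress_speedup=1.0):
    return device_ms + compress_ms / compress_speedup

baseline = total_latency_ms(9.9, 0.1)                        # ~10.0 ms total
optimised = total_latency_ms(9.9, 0.1, compress_speedup=1.2)
gain = 1 - optimised / baseline
print(f"overall gain from 20% faster compression: {gain:.2%}")  # ~0.17%
```

A 20% win on the compression step shrinks to well under 1% end to end, far below the noise floor of most benchmarks.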
Generally, the effect of such esoteric “optimisations” is so small that the length of your unix username has a greater effect on real-world performance. I wish I was kidding.
You have to account for a lot of variables and measurement biases if you want to make factual claims about them. You can observe performance differences on the order of 5-10% just due to slight memory layout changes with different compile flags, without any actual performance improvement from the change in code generation.
That’s not my opinion, that’s rather well established fact. Read here:
So far, I have yet to see data that shows a significant performance increase from march optimisations which either controlled for the measurement bias or showed an effect that couldn’t be explained by measurement bias alone.
There might be an improvement and my personal hypothesis is that there is at least a small one but, so far, we don’t actually know.
More importantly, if you’re a laptop user, this could mean better battery life since using more efficient instructions, so certain stuff that might’ve taken 4 CPU cycles could be done in 2 etc.
The more realistic case is that an execution that would have taken 4 CPU cycles on average would then take 3.9 CPU cycles.
I don’t have data on how power scales with varying cycles/task at a constant task/time but I doubt it’s linear, especially with all the complexities surrounding speculative execution.
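For scale, here is what the two scenarios work out to (the cycle counts are the illustrative numbers from above, not measurements):

```python
# Relative speedup from reducing average cycles per task. Illustrative numbers.
def speedup(old_cycles, new_cycles):
    """Fractional speedup: how much more work gets done per unit time."""
    return old_cycles / new_cycles - 1

optimistic = speedup(4, 2)    # the "4 cycles done in 2" claim: 100% faster
realistic = speedup(4, 3.9)   # ~2.6% faster
print(f"optimistic: {optimistic:.0%}, realistic: {realistic:.1%}")
```

Halving cycle counts across the board would be a 100% speedup; shaving a tenth of a cycle is a ~2.6% one, and only on the code paths where it applies at all.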
In my own experience on both my Zen 2 and Zen 4 machines, v3/v4 packages made a visible difference.
“visible” in what way? March optimisations are hardly visible in controlled synthetic tests…
It really doesn’t make sense that you’re spending so much money buying a fancy CPU, but not making use of half of its features…
These features cater towards specialised workloads, not general purpose computing.
Applications which facilitate such specialised workloads and are performance-critical usually have hand-made assembly for the critical paths where these specialised instructions can make a difference. Generic compiler optimisations will do precisely nothing to improve performance in any way in that case.
I’d worry more about your applications not making any use of all the cores you’ve paid good money for. Spoiler alert: Compiler optimisations don’t help with that problem one bit.
As always, stable releases are about how frequently breaking changes are introduced. If breaking changes potentially happening every day is fine for you, you can use unstable. For many use-cases, however, you want some agency over when exactly breaking changes land; point releases à la NixOS give you a one-month window to migrate for each release.