In a wide-ranging conversation with Verizon open-source officer Dirk Hohndel, ‘plodding engineer’ Linus Torvalds discussed where Linux is today and where it may go tomorrow.
…
As for the release numbers, Torvalds reminded everyone yet again, they mean nothing. Hohndel said, “You typically change the major number around 19 or 20, because you get bored.” No, replied Torvalds, it’s because, “when I can’t count on my fingers and toes anymore it’s time for another ‘major’ release.”
…
So, what should you do about the constant weekly flow of Linux security bug fixes? Greg Kroah-Hartman, the maintainer of the Linux stable kernel, thinks you should constantly update to the newest, most secure stable Linux kernel. Torvalds agrees but can see the case for sticking with older kernels and relying on less frequent security patch backports.
…
Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn’t going faster. “I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don’t know Rust. They’re not exactly excited about having to learn a new language that is, in some respects, very different. So there’s been some pushback on Rust.”
…
The pair then moved on to the hottest of modern tech topics: AI. While Torvalds is skeptical about the current AI hype, he is hopeful that AI tools could eventually aid in code review and bug detection.
In the meantime, though, Torvalds is happy about AI’s side effects. For example, he said, “When AI came in, it was wonderful, because Nvidia got much more involved in the kernel. Nvidia went from being on my list of companies who are not good to my list of companies who are doing really good work.”
Nvidia has been a big kernel contributor for a long time, even before the “fuck you nvidia” thing. They hold their graphics driver close to their chest, but have done a lot of other work for the kernel.
What’s an example? I would have thought, back then especially, their driver (and maybe nvapi) was most of the software they shipped.
My memory is fuzzy, but they have had their Tegra SoC since the 2000s, and somewhat more recently they have been a big player in data center networking.
And ever since CUDA became a thing they have been a big name in HPC and supercomputers, which are usually Linux-based.
So they have done a lot of behind the scenes Linux work (and possibly BSD?).
Yeah, afaik Tegra was only used for embedded, closed-source devices though, no? Did they submit any non-proprietary Tegra support upstream?
And afaik CUDA has also always been shipped as proprietary binaries. Maybe you mean they had to submit upstream fixes here and there to get their closed-source stuff working properly?