Linux is a branch of development of the old Unix class of systems. Unix is not necessarily open and free; FOSS is what is classified as open and free software. Unix, since its inception, was deeply linked to specific private industrial interests; let’s not forget all this while we examine the use of Linux by left-minded activists. FOSS is nice and cool, but nearly 99.99% of it runs on non-open, non-free hardware. Apolitical proposals for crowd-funding and DIY construction have led to ultra-expensive idealist solutions, reserved for the very few and the eccentric affluent experimenter.

Linux vs. Windows is cool and trendy, isn’t it? But does that choice alone contain any political content? If there is such content, what is it? Let’s examine it from the base.

FOSS: people, as small teams or individuals, producing “as much as they can and want” and offering what they produce to be shared, used, and modified by anyone “as much as they need”. This is as close to a communist system of production and consumption as we have experienced in the entirety of modern history: no exchange whatsoever, collective production according to ability and collective consumption according to need.

BUT we also have corporations, some of them mega-corps, multinationals that nearly monopolize sectors of the computing market, creating R&D departments specifically to produce and offer open and free (or conditionally free) code. Why? First, because others will join their projects and contribute further development (labor) for “free”, while the corporations retain the leadership and ownership of the project. Somehow it is considered cool to use their code, without asking why they were willing to offer it in the first place, as long as we can say we are anti-MS and Windows-free.

Like a false class consciousness, we have fanboys of IBM, Google, Facebook, Oracle, Qt, HP, Intel, AMD, … products set against MS.

Back when Unix would only run on ultra-expensive enterprise large-scale systems and costly workstations (remember DEC, Sun, SGI, … workstations, each priced like two brand-new fast sports cars) and the PC market was restricted to MS or the alternative Apple crap, people tried and tried to port forms of Unix to the PC. Some truly gifted hackers achieved such marvels, but the results were so hardware-specific that they could not be generalized and adopted massively.

Suddenly this genius Finn and his friends devised a kernel that could make most available PC hardware work, and a Unix-like system with the Linux kernel could boot and run.

IBM eventually saw a way back into the PC market it had lost by handing DOS out to a subcontractor (MS), and it saw an opportunity to take over and steer this “project” by promoting Red Hat. After two decades of behind-the-scenes guidance, once the projected outcome had succeeded in cornering the market, IBM stepped forward and bought Red Hat.

Are we all still anti-MS and pro-IBM, Google, Oracle, FB, Intel/AMD?

The bait thrown to the dumb fish was an automated desktop that looked and behaved just like the latest MS-Windows edition.

What is the resistance?

Linus Torvalds and the few others who sign off on the kernel today make six-figure salaries, ALL paid by a handful of computing giants that, by offering millions to the foundation, control what it does. Traps like Rust, telemetry, … and other “options” are shoved into the kernel daily to satisfy the paying clients’ demands and wishes.

And we, on the left, are fans of a multimillionaire’s “team” against a trillionaire’s “team”. This is not football, cricket, or F1. This is your data in the hands of multinationals and their fellow customers/agencies. Don’t forget which welfare system maintains the hierarchy of those industries, whether the market is rosy or gray. Do I need to spell out the connection?

Beware of multinationals bearing gifts.

Yes, there are healthier alternatives, requiring a little more work and study to employ; the quick and easy has a “cost” even when it is FOSS.


  • Prologue7642@lemmygrad.ml · 1 year ago

    I always wonder whether that is actually an issue. Apart from some duplicated effort with things like packaging for different distros (which is something distro maintainers do anyway), I don’t really get this point. For me, it only makes sense for proprietary packages, not for open source.

    Apart from some small differences in how you install packages, using most distros is basically the same.

    I am always confused by this point because I see it repeated everywhere, but never with a good argument supporting it.

    • FuckBigTech347@lemmygrad.ml · 1 year ago

      I only ever see people who work on proprietary software make this argument. For FOSS this is a non-issue. If you have the source code available, you can just compile it against the libs on your system and it will just work in most cases, unless there was a major change in some lib’s API. And even then you can make some adjustments yourself to make it work. Distro maintainers tend to do this.

    • debased@lemmygrad.ml · 1 year ago

      For many, admittedly smaller, apps it’s always a bit of a pain to have to install them manually because the dev simply gave up trying to package them for “the big 3” and distro maintainers can’t care about every small program, although the current system works well enough for most software.

      However, I am not a developer, so I can’t speak firsthand about the difficulty of packaging and maintaining an app on different distros across the years, and I’m not sure the brunt of maintaining all these apps should fall onto distro maintainers.

      As for users, I can agree that using distros is roughly the same either way, with the only real difference most of the time being “do you use apt or pacman to install packages”.

      • Prologue7642@lemmygrad.ml · 1 year ago

        Fair enough, but I only see that for some niche projects. And at that point you are probably not a regular user and can do it yourself.

        There is an issue on the other side: if you only provide an AppImage/Flatpak, it is much less customizable. You can’t optimize the software for your CPU, and you can’t mix and match which versions of the libraries it uses. Personally, I think it is always a good idea to provide a Flatpak alternative for those who want it, but I don’t see it as a replacement for regular packaging.

        Edit: I would much rather see something like Nix being used to describe the dependencies. That is, in my opinion, the best solution, and it also allows you to more easily port software to other systems.

        • debased@lemmygrad.ml · 1 year ago

          Ideally, it’d be good enough to simply have, say, an AppImage/Flatpak plus the source code, and then let distro maintainers/end users build it how they want or need to. I had the pleasure of trying to get NVENC working in OBS under Debian 10, and that was a massive pain: due to outdated nvidia drivers I had to recompile ffmpeg with the right flags, and that would break after every update. The easiest way was to get an OBS Flatpak that came prebuilt with it all, IIRC. I guess my problems were mainly because I used Debian stable at the time; it’s probably not as much of a pain now that I’m on sid.

          I don’t know anything about Nix. I’ve heard a lot of good things about it and how it’s “all config files” or something, but the prospect of learning a whole new world scares me. Still, I trust your judgment on that. I’ll stick to what I know on my boring-ass Debian sid :D

          • Prologue7642@lemmygrad.ml · 1 year ago

            I would imagine that if you weren’t on Debian stable, it would be much better. From what I’ve seen, dealing with anything Nvidia on stable distros is pain.

            I just recently started working with it, and it is really nice. You have NixOS, where you can define basically everything with just Nix config files. You want to run MPD on some port? Sure, just add the option, and it will create the config file and put it in the right place. It is really easy to define your entire system, with all its options, in one place. I don’t think I’ve ever had to change anything in /etc; I just change an option in my system config. I think something like this is probably the future of Linux. A sketch of what that MPD example looks like is below.
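
            For illustration, a minimal sketch of the MPD example in a NixOS configuration.nix, using the NixOS MPD module options; the music directory path and the port are just example values, not anything from this thread:

                # configuration.nix -- minimal sketch: enable MPD declaratively.
                # NixOS generates the mpd config file and systemd unit from this.
                { config, pkgs, ... }:
                {
                  services.mpd = {
                    enable = true;
                    musicDirectory = "/home/user/music";  # hypothetical path
                    network.port = 6600;                  # example port
                  };
                }

            Running nixos-rebuild switch after editing the file is what actually applies the change; nothing in /etc needs to be touched by hand.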

            Nix by itself is just a language that is used to configure things. You can, for example, define all the dependencies of your project with it, so that anyone with Nix (which you can install basically anywhere) can easily build it. By doing it like this you can be sure all the dependencies are written down, so it is really easy to port the software to other distros even if they aren’t using Nix. See the sketch below.
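
            To illustrate that second use, a minimal sketch of a conventional shell.nix for a hypothetical project (the package names are only examples): anyone with Nix installed can run nix-shell in the project directory and get an environment with exactly these dependencies available:

                # shell.nix -- declares the build environment for a hypothetical project
                { pkgs ? import <nixpkgs> {} }:

                pkgs.mkShell {
                  # everything the build needs, taken from whatever nixpkgs provides
                  buildInputs = [
                    pkgs.gcc
                    pkgs.pkg-config
                    pkgs.ffmpeg   # e.g. the libs the project links against
                  ];
                }

            Because the dependency list lives in the repo itself, it doubles as exact documentation for packagers on any other distro.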