• 9488fcea02a9@sh.itjust.works · 1 year ago

    If there is such a big push for power efficiency from RISC CPUs, why ARM instead of RISC-V?

    Companies would save a ton on licensing costs.

    • Superb@lemmy.blahaj.zone · 1 year ago

      I’m gonna bet a lot of it is business. They could use a RISC-V core, but that could require a lot more in-house expertise. Paying Arm for a license also means you get a lot of support from Arm on integration, performance, etc.

    • alessandro@lemmy.ca (OP) · 1 year ago

      The people who control companies, the CEOs, are mostly hired by the company’s shareholders (e.g. Steve Jobs). Because of their position (they don’t really “own” the company), they do whatever it takes to keep their own job: make the company money as quickly as possible, on a short or medium-term horizon.

      RISC-V requires foresight: the company spends more money now, but gets things for free in the future. The problem lies with the CEOs themselves: they would have to let the company “bleed” money (and risk being fired) just to, hopefully, give the company a royalty-free RISC-V and the freedom that comes with it… all without knowing whether they’d still have the job by then.

  • RightHandOfIkaros@lemmy.world · 1 year ago (edited)

    Why? Planned obsolescence, I imagine?

    EDIT: Title was changed, used to say something about ARM chips taking over.

      • RightHandOfIkaros@lemmy.world · 1 year ago

        If by ARM you mean “phasing out all x86 chips and forcing everyone to buy ARM chips because they’re cheaper to produce than x86 chips,” then I guess.

        • Superb@lemmy.blahaj.zone · 1 year ago

          Well yes, but not just because they’re cheaper. x86 is ancient and bloated. Computers could be just as fast but use way less power with a more modern ISA like Arm.

          • echo64@lemmy.world · 1 year ago

            I just gotta pour water on this. I’m sorry. It bothers me.

            Apple did some amazing marketing around their chip to make people think it’s Arm that made it so good. I’m sorry, it’s not. The Intel chips that came out the next year were even better.

            Do you know what the secret sauce is? TSMC. They constantly buy the latest and greatest chip fab tech, and if you use them, your stuff is gonna be next level by default. Intel’s fabs upgraded their tech the year after TSMC did, and well, that solved that problem: suddenly just as good or better.

            Apple’s secret sauce wasn’t Arm. It was buying TSMC an entire factory. They literally bought the company an entire new factory for a deal that would guarantee Apple a minimum amount of fab time per year in TSMC fabs.

            And of course the kicker is that none of these CPUs actually run x86 or Arm. Haven’t done for decades; the machine code is compiled down to a chip-specific bytecode at execution time. Bloat isn’t a problem because the CPU doesn’t run x86.

            • Superb@lemmy.blahaj.zone · 1 year ago

              Oh boy!

              Yes, there are a lot of factors that make the M-series chips so impressive, and their incredibly small node size (which is what they get from TSMC) is one of them. The choice of Arm is another huge one.

              > And of course the kicker is that none of these CPUs actually run x86 or Arm. Haven’t done for decades; the machine code is compiled down to a chip-specific bytecode at execution time. Bloat isn’t a problem because the CPU doesn’t run x86.

              Are you talking about microcode? Because that is not at all analogous to compilation. I don’t think you have a good grasp of the hardware you’re talking about.

              At the end of the day, the processor does still “run x86”. The implementation detail of most instructions being microcoded doesn’t change that. The x86 ISA is large, complex, and old. It has compatibility decisions that date back all the way to the Datapoint 2200.
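
              A minimal sketch of one concrete cost of that complexity, assuming made-up instruction lengths rather than real x86 encoding rules: with a fixed-width ISA a decoder can find instruction N at offset 4*N and fetch several instructions in parallel, while with variable-length encoding each instruction’s start depends on the length of every instruction before it.

              ```c
              /* Toy illustration of fixed-width vs. variable-length decode.
               * The byte values and length rule are invented for demonstration;
               * real x86 length decoding involves prefixes, ModRM, SIB, etc. */
              #include <stdio.h>
              #include <stddef.h>

              /* Fixed-width ISA: instruction n starts at a known offset,
               * so several decoders can work in parallel. */
              static size_t fixed_offset(size_t n) {
                  return n * 4; /* every instruction is 4 bytes */
              }

              /* Variable-length ISA: we must walk every earlier instruction
               * before we know where instruction n begins. */
              static size_t variable_offset(const unsigned char *code, size_t n) {
                  size_t off = 0;
                  for (size_t i = 0; i < n; i++) {
                      /* invented rule: low 2 bits of the first byte give length 1..4 */
                      off += (code[off] & 0x3u) + 1;
                  }
                  return off;
              }

              int main(void) {
                  unsigned char code[] = {0x02, 0, 0, 0x01, 0, 0x03, 0, 0, 0};
                  printf("fixed-width:     insn 2 starts at offset %zu\n", fixed_offset(2));
                  printf("variable-length: insn 2 starts at offset %zu\n", variable_offset(code, 2));
                  return 0;
              }
              ```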

              • echo64@lemmy.world · 1 year ago (edited)

                the choice of arm is not impactful at all. you can try to explain why you think it is; i suggest avoiding the terms “large”, “complex”, “old” because none of that means anything. arm isn’t a spring chicken itself, you know.

                it also does nothing to explain why intel cpus suddenly, magically became just as fast or faster as soon as they upgraded their chip fabs. are you :O suggesting that arm is as “large”, “complex”, “old” as x86 and that’s why it wasn’t able to compete with the young upstart x86 cpus that year?!

                • Superb@lemmy.blahaj.zone · 1 year ago (edited)

                  x86 could always compete in raw performance, but never in efficiency. If we were to compare two hypothetical CPUs on the same node size, one Arm and one x86, that can both run a program at the same speed, I guarantee you the Arm one would use less power.

                  We can argue the pros and cons of x86 vs. Arm all day long, but suggesting that the choice isn’t impactful is just wrong.

            • voxel@sopuli.xyz · 1 year ago (edited)

              arm IS more efficient at the instruction level (conditions encoded directly in the instructions, better prediction; it’s overall more efficient).
              even armv4 is technically more efficient than modern x86, assuming identical node size.
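
              A minimal C sketch of that conditional-execution point (the assembly in the comments is illustrative, not exact compiler output): 32-bit ARM could predicate most instructions, and AArch64 keeps a branchless conditional select, so a simple min() needs no branch at all.

              ```c
              #include <stdio.h>

              /* A conditional select that ARM can execute without a branch:
               *   32-bit ARM: CMP r0, r1 ; MOVLT r2, r0 ; MOVGE r2, r1  (predicated moves)
               *   AArch64:    CMP x0, x1 ; CSEL x2, x0, x1, LT          (conditional select)
               * x86-64 has CMOV for this particular pattern, but it cannot
               * predicate arbitrary instructions the way classic ARM could. */
              static int min_int(int a, int b) {
                  return (a < b) ? a : b;
              }

              int main(void) {
                  printf("min(3, 7) = %d\n", min_int(3, 7));
                  return 0;
              }
              ```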

              • echo64@lemmy.world · 1 year ago

                oh yeah, and x86 has a billion extensions that require multiple arm instructions to execute. but none of this matters, as none of the arm or x86 chipsets actually execute arm or x86 machine code; it’s all transformed (sorry, i can’t use the word “compile” here, people get mad) into processor-specific microcode, making the whole thing moot

                • Superb@lemmy.blahaj.zone · 1 year ago

                  You don’t understand what microcode is; it’s not a magic spell that can hide all the problems of an instruction set.

          • RightHandOfIkaros@lemmy.world · 1 year ago

            It’s 100% because they’re cheaper and the company can make more profit by forcing everyone to switch. Any perceived benefit is only a consequence. Nobody can convince me otherwise.

            • JDubbleu@programming.dev · 1 year ago

              It is strictly due to power efficiency. ARM is insanely power efficient when put up against x86. Our phones run it, laptops are starting to run it (ever wonder why MacBooks have 20+ hour battery lives now?), hell, AWS is switching their data centers to ARM because of the energy savings. It’ll save the world a lot of energy, since around 10% of our electricity is used for computing.

              No one is forcing you to run out and buy an ARM system, and x86 is gonna be supported for a very long time. Software will be developed for both platforms in parallel, as it’s going to be at least a decade before ARM reaches dominance.

              Did you feel this way when we went from 32-bit to 64-bit computers? If so, we still write software for them, even though many people, myself included, haven’t used a 32-bit computer since the 2000s.
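
              On that point about developing for both platforms in parallel: portable C mostly doesn’t care which ISA it targets, and where it does, compile-time detection is routine. A minimal sketch using GCC/Clang’s predefined architecture macros:

              ```c
              #include <stdio.h>

              int main(void) {
                  /* These macros are predefined by GCC and Clang for the target
                   * architecture; the same source builds unchanged for each one. */
              #if defined(__x86_64__)
                  printf("built for x86-64\n");
              #elif defined(__aarch64__)
                  printf("built for 64-bit ARM\n");
              #elif defined(__i386__) || defined(__arm__)
                  printf("built for a 32-bit target\n");
              #else
                  printf("built for some other architecture\n");
              #endif
                  return 0;
              }
              ```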

  • alessandro@lemmy.ca (OP) · 1 year ago

    CISC vs. RISC.

    Apples vs. oranges: yeah, it’s an unfruitful discussion that can go on forever, but we can set terms that apply equally to both. For example: which one provides more protein per kg, or costs less labor or environmental impact?

    So, for CISC and RISC: assume that which is “best” depends on what they do and how they do it.

    I think the best analogy is comparing an F1 car vs. a rally car… it’s all about the kind of road: a few big bumps on the road, and the F1 car has no chance. On a flat, straight road? Now, there’s the challenge for the rally car.

    The road we choose basically sets the winner. CISC and RISC follow the same kind of logic: CISC is the heavily equipped CPU (like the rally car), good for nearly any kind of environment. They basically always win on scientific calculation and on evolution… where nobody can predict what “kind of power” will be needed in the future. To some this is bloat, but in truth CISC CPUs are meant for rapid evolution, where you don’t know what to expect next. Minecraft is one example in the gaming industry: no one expected that future games would have to generate worlds from scratch with computation.

    RISC CPUs are the F1 cars: if you don’t change the rules all of a sudden, they can deliver enormous, yet VERY SIMPLE, processing power, really cheaply and quickly… so long as you don’t plan to build supercomputers to discover new things (supercomputers that do predictable jobs are fine, though).

    • T4V0@lemmy.pt · 1 year ago

      I would argue that CISC vs. RISC was mostly relevant 20–30 years ago. Today’s CPUs are a different kind of beast; for example, they implement decoders that break instructions down into micro-ops, a RISC-like behavior.

      For further reading.
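
      A toy sketch of what “breaking instructions down into micro-ops” means (the micro-op names here are invented; real micro-op formats are proprietary and vary per microarchitecture): a CISC-style memory-to-register add is split by the decoder into the simpler load/add/store steps the core actually executes.

      ```c
      #include <stdio.h>

      /* Invented micro-op kinds for illustration only. */
      typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

      typedef struct {
          uop_kind kind;
          const char *desc;
      } uop;

      /* A CISC-style "add [mem], reg" touches memory and does arithmetic in
       * a single instruction; the decoder splits it into RISC-like steps. */
      static int decode_add_mem_reg(uop out[], int max) {
          if (max < 3) return 0;
          out[0] = (uop){UOP_LOAD,  "load  tmp <- [mem]"};
          out[1] = (uop){UOP_ADD,   "add   tmp <- tmp + reg"};
          out[2] = (uop){UOP_STORE, "store [mem] <- tmp"};
          return 3;
      }

      int main(void) {
          uop uops[8];
          int n = decode_add_mem_reg(uops, 8);
          printf("\"add [mem], reg\" decodes into %d micro-ops:\n", n);
          for (int i = 0; i < n; i++)
              printf("  %s\n", uops[i].desc);
          return 0;
      }
      ```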