• 0 Posts
  • 99 Comments
Joined 1 year ago
Cake day: August 7th, 2023



  • Yes. But what if the world was 1/3rd Linux, 1/3rd windows, 1/3rd OSX?

    The 1/3 running macOS (they haven’t called it OS X in many years now) wouldn’t have to worry, because Apple provides kernel event access to security tools running in user space. On macOS, the CrowdStrike Falcon Sensor runs as a System Extension, 100% in user space (“Ring 3” in Intel parlance) — so if it misbehaves, the kernel can just shut it down and continue on its merry way.

    The problem with Windows (and to a certain extent Linux) is that Falcon Sensor needs to run in kernel mode (Ring 0) on those OSes, and if it fucks up you lose all guarantees that the kernel and the apps running on the system haven’t been fucked with, hence the need for a full system crash/shutdown. The driver can (and did) put these systems into an indeterminate state. But that can’t happen on modern macOS with modern System Extensions.
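    As a rough user-space analogy (this is not the actual macOS mechanism, and the function name is mine): a supervisor process can notice a crashed child and simply carry on, whereas a crash inside the supervisor itself would take everything down with it.

```python
import subprocess
import sys

def run_sensor_with_restarts(max_restarts=3):
    """Run a deliberately-crashing child 'sensor' process and restart it,
    counting how many crashes the supervisor survives."""
    crashes = 0
    for _ in range(max_restarts):
        # The child exits non-zero, simulating a misbehaving user-space agent.
        result = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])
        if result.returncode != 0:
            crashes += 1  # supervisor notices the failure and loops to restart
        else:
            break
    return crashes

print(run_sensor_with_restarts())  # the supervisor survives every crash
```

    The point of the sketch: when the faulting code runs in its own address space, the failure is contained and observable from outside — which is exactly what Ring 0 code forfeits.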



  • …until the CrowdStrike agent updated, and you wind up dead in the water again.

    The whole point of CrowdStrike is to detect and prevent security vulnerabilities, including zero-days. As such, they can release updates multiple times per day. Rebooting into a known-safe state is great, but unless you follow that up by preventing the agent from redownloading the broken sensor configuration update, you’re just going to wind up in a BSOD loop again.

    A better architectural solution would have been to have Windows drivers run in Ring 1, giving the kernel the ability to isolate those that misbehave. But that risks a small decrease in performance, and Microsoft didn’t want that, so we’re stuck with a Ring 0/Ring 3-only architecture in Windows that can cause issues like this.


  • That company had the power to destroy our businesses, cripple travel and medicine and our courts, and delay daily work that could include some timely and critical tasks.

    Unless you have the ability and capacity to develop your own ISA/CPU architecture, firmware, OS, and every tool you use from the ground up, you will always be, at some point, “relying on others’ stuff”, which can break on you at a moment’s notice.

    That could be Intel, or Microsoft, or OpenSSH, or CrowdStrike^0. Very, very, very few organizations can exist in the modern computing world without relying on others’ code/hardware (the main two that come to mind that could, outside smaller embedded systems, are IBM and Apple).

    I do wish that consumers had held Microsoft more to account over the last few decades to properly use the Intel protection rings (if the CrowdStrike driver had been able to run in Ring 1, it’s possible the OS could have isolated it and prevented a BSOD; instead it runs in Ring 0 alongside the kernel, with access to damage anything and everything) — but that horse appears to be long out of the barn (enough so that X86S proposes only having Ring 0 and Ring 3 in future processors).

    But back to my basic thesis: saying “it’s your fault for relying on other people’s code” is unhelpful and overly reductive, as in the modern day it’s virtually impossible to avoid doing so. Even fully auditing your stack is prohibitive. There is a good argument to be made about not living in a compute monoculture^1, and lots of good arguments against ever using Windows^2 (especially in the cloud) — but those aren’t the arguments you’re making. Saying “this is your fault for relying on other people’s stuff” is unhelpful — and I somehow doubt you designed your own ISA, CPU architecture, firmware, OS, network stack, and application code to post your comment.

    — ^0 — Indeed, all four of these organizations/projects have let us down like this: Intel with Spectre/Meltdown, Microsoft with the 28-day 32-bit Windows reboot bug, OpenSSH with the just-announced regreSSHion, and now CrowdStrike with Falcon Sensor.
    ^1 — My organization was hit by the Falcon Sensor outage — our app tier layers running on Linux and developer machines running on macOS were unaffected, but our DBMS is still a legacy MS SQL box, so the outage hammered our stack pretty badly. We’ve fortunately been well funded to remove our dependency on MS SQL (and Windows in general), but that’s a multi-year effort that won’t pay off for some time yet.
    ^2 — my Windows hate is well documented elsewhere.


  • Along came Creative Labs with their AWE32, a synthesizer card that used wavetable synthesis instead of FM.

    Creative Labs did wavetable synthesis well before the AWE32 — they released the Wave Blaster daughter board for the Sound Blaster 16, two full years before the AWE32 was released.

    (FWIW, I’m not familiar with any motherboards that had FM synthesis built-in in the mid 90’s. By this time, computers were getting fast enough to be able to do software-driven wavetable synthesis, so motherboards just came with a DAC).

    Where the Sound Blaster really shined was that the early models were effectively three cards in one — an Adlib card, a CMS card, and a DAC/ADC card (with models a year or two later also acting as CD-ROM interface cards). Everyone forgets about CMS because Adlib was more popular at the time, but it was capable of stereo FM synthesis, whereas the Adlib was only ever mono.
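    For the curious, the two-operator FM synthesis that the Adlib/CMS-era chips did in hardware can be sketched in a few lines — a sine wave whose phase is modulated by a second sine. (This is a toy illustration, not the OPL2’s actual operator/envelope model; the names and parameters are mine.)

```python
import math

def fm_sample(t, carrier_hz=440.0, mod_hz=220.0, mod_index=2.0):
    """One sample of two-operator FM: a carrier sine whose phase is
    modulated by a second (modulator) sine."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + mod_index * math.sin(2 * math.pi * mod_hz * t))

# Render a tenth of a second at an 8 kHz sample rate
rate = 8000
buf = [fm_sample(n / rate) for n in range(rate // 10)]
```

    Varying the modulator frequency ratio and the modulation index is what gives FM its characteristic range of timbres on so little silicon — versus wavetable synthesis, which plays back stored samples of real instruments.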

    (As publisher of The Sound Blaster Digest way back then, I had all of these cards and more. For a few years, Creative sent me virtually everything they made for review. AMA).


  • I certainly wouldn’t run to HR right away — but unfortunately, it’s sometimes true that people just aren’t a good fit, for whatever reason. Deadweight that can’t accomplish the tasks that need to be done doesn’t do you any favours. If you’re doing your job and their jobs because they can’t handle the work, that’s hardly fair to you, and it isn’t doing the organization any good: eventually you’ll burn out, nobody will pick up the slack, and everyone will suffer for it.

    My first instinct in your situation, however, would be that everyone has got used to the status quo, including the staff you have to constantly mentor. Hopefully coaching them into doing the work for themselves, and keeping them accountable to tasks and completion dates, will help change the dynamic.


  • I’m a tech manager with a 100% remote team of seven employees. We’re a very high performing team overall, and I give minimal hand-holding while still fostering a collaborative working environment.

    First off, you need to make outcomes clear. Assign tasks, and expect them to get done in a reasonable timeframe. But beyond that, there should be no reason to micro-manage actual working hours. If a developer needs some time during the day to run an errand and wants to catch up in the evening, fine by me. I don’t need them glued to their desk 9-5/10-6 or for some set part of the day — so long as the tasks are getting done in reasonable time, I let my employees structure their working hours as they see fit.

    Three times a week we have regular whole-team checkins (MWF), where everyone can give a status update on their tasks. This helps keep up accountability.

    Once a month I reserve an hour for each employee to just have a general sync-up. I allow the employee to guide how this time is used — whether they want to talk about issues with outstanding tasks, problems they’re encountering, their personal lives, or just “shoot the shit”. I generally keep these meetings light and employee-directed, and it gives me a chance to stay connected with them on both a social level and understand what challenges they might be facing.

    And that’s it. I’ve actually gone as far as having certain employees who were being threatened with back-to-office mandates converted to “remote employee” in the HR database so the threats would stop — only 2 of my 7 employees are even in the same general area of the globe (they’re spread across 3 different countries at the moment), and I don’t live anywhere near an office, so having some employees forced to report to an office doesn’t help me in the slightest (I can’t be in 6 places at once, and I live far enough away that I can’t be in any of those places on a regular basis!).

    Your employees may have got used to you micro-managing them. Changing this won’t happen overnight. Change from a micro-manager into a coach, and set them free. And if they fail…then it’s time to talk to HR and to see about making some changes. HTH!




  • To put things into context, IBM didn’t get ripped off in any way (at least not over DOS — the whole IBM/Microsoft OS/2 debacle is a different story). The earliest PCs (IBM PC, IBM PC XT, IBM PCjr, and associated clones) didn’t really have the hardware capabilities needed for a more advanced operating system. There was no flat memory model, no protection rings, and no Translation Lookaside Buffer (TLB). The low maximum unpaged memory addressing limit (1MB) made it difficult to run more than one process at a time, and really limits how much OS you can have active on the machine (modern Windows, by way of example, reserves 1GB of virtual address space per process just for kernel memory mapping).
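    That 1MB figure falls straight out of real mode’s segment:offset addressing, where a 16-bit segment is shifted left by 4 bits and added to a 16-bit offset. A quick sketch of the arithmetic (the helper name is mine):

```python
def real_mode_addr(segment: int, offset: int) -> int:
    """Physical address in 8086 real mode: segment * 16 + offset."""
    return (segment << 4) + offset

# The highest reachable address is FFFF:FFFF — just past 1 MiB.
# (On an actual 8086, with only 20 address lines, this wraps around mod 2**20.)
top = real_mode_addr(0xFFFF, 0xFFFF)
print(hex(top))  # 0x10ffef
```

    With every address computed this way and no MMU in between, there is nothing for an OS to use to keep one process out of another’s memory.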

    These things did exist on mainframe and mini computers of the day — so the ideas and techniques weren’t unknown — but the cheaper IBM PCs had so many limitations that those techniques were mostly detrimental (there were some pre-emptive OSs for 8086/8088 based PCs, but they had a lot of limitations, particularly around memory management and protection), if not outright impossible. Hence the popularity of DOS in its day — it was simple, cheap, didn’t require a lot of resources, and mostly stayed out of the way of application development. It worked reasonably well given the limitations of the platforms it ran on, and the expectations of users.

    So IBM did just fine from that deal — it was when they went in with Microsoft to replace DOS with a new OS that did feature pre-emptive multitasking, memory protection, and other modern techniques that they got royally screwed over by Microsoft (vis: the history of OS/2 development).


  • As someone who has done some OS dev, it’s not likely to be of much help. DOS had hardly any of the defining features of modern OSes — it barely had a kernel; there was no multitasking, no memory management, no memory protection, no networking, and everything ran at the same privilege level. What little API existed was exposed through a handful of software interrupts — otherwise, it was up to your code to talk to nearly all the hardware directly (or to whatever bespoke device driver your hardware required).

    This is great for anyone that wants to provide old-school DOS compatibility, and could be useful in the far future to aid in “digital archaeology” (i.e.: being able to run old 80’s and early 90’s software for research and archival purposes on “real DOS”) — but that’s about it. DOS wasn’t even all that modern for its time — we have much better tools to use and learn from for designing OS’s today.

    As a sort of historical perspective this is useful, but not likely for anything else.


  • AWS already had to effectively do this. AWS only exists in two regions in China because they licensed much of the AWS software to be run by a pair of Chinese-government affiliated ISPs inside China (that is, Amazon doesn’t run AWS in either of its China zones — it’s run by a pair of Chinese companies who license AWS’s software).

    This is why the China AWS regions are often quite far behind in terms of functionality from every other region (they either haven’t licensed all the functionality, they don’t keep up-to-date at the same cadence as Amazon, or Amazon is holding certain functions back), and why you can’t really access them from the standard AWS console.

    So in effect, Amazon did have to give their software to Chinese-government affiliated companies in order to continue operating in China.
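    You can even see the wall in how AWS resource names (ARNs) are partitioned — the Chinese regions live in a separate `aws-cn` partition, isolated from the global `aws` one. A toy illustration of the mapping (the helper is mine; real code should use an SDK’s partition metadata rather than prefix matching):

```python
def aws_partition(region: str) -> str:
    """Map an AWS region name to its ARN partition. China regions are
    walled off in 'aws-cn'; US GovCloud similarly uses 'aws-us-gov'."""
    if region.startswith("cn-"):
        return "aws-cn"
    if region.startswith("us-gov-"):
        return "aws-us-gov"
    return "aws"

print(aws_partition("cn-north-1"))  # aws-cn
print(aws_partition("eu-west-1"))   # aws
```

    Separate partitions mean separate credentials, separate endpoints, and a separate console — which is why the China regions feel like a different cloud that merely resembles AWS.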



  • It’s mostly improved chemistries and manufacturing processes. What we call “lithium ion batteries” aren’t the same today as they were even just in the 2010s. We have newer chemistries (lithium cobalt oxide, lithium manganese oxide, lithium iron phosphate, lithium nickel manganese cobalt oxide, lithium nickel cobalt aluminium oxide, etc.), newer solid state battery technologies, better cell packaging, and overall better manufacturing processes.

    Will these cells still have 100% capacity after 15 years? Likely not — but even if they’re only at 80 - 90% of their original capacity that’s still quite a lot of driving capacity for most EVs.

    Here is one non-peer-reviewed study on Tesla battery deterioration, which shows that at the ~10 year mark, battery capacity loss is at around 17%. However, it’s worth noting that cars hitting the 8 through 10 year marks were more likely to be using older battery chemistries and construction techniques; newer cars at the 7 year mark showed only a roughly 7% capacity loss.
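    For a rough sense of what those figures imply per year, assuming fade compounds annually (the helper name and the compounding assumption are mine):

```python
def annual_fade(total_loss: float, years: float) -> float:
    """Average per-year capacity fade implied by a total loss over some
    number of years, assuming compounding:
    retention_per_year = (1 - total_loss) ** (1 / years)."""
    return 1.0 - (1.0 - total_loss) ** (1.0 / years)

# The two figures from the study above
print(f"{annual_fade(0.17, 10):.1%} per year for ~10-year-old packs")  # ~1.8%
print(f"{annual_fade(0.07, 7):.1%} per year for ~7-year-old packs")    # ~1.0%
```

    So even the older packs lose under 2% per year on average, and the newer chemistries roughly half that.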

    Time will tell, but the situation is significantly less bleak than naysayers (and the oil industry) want you to believe.


  • The batteries on modern EVs don’t wear anywhere near the rate that people think they do. A properly cared for battery (which doesn’t require much care other than keeping it charged properly) will easily last 15+ years — likely beyond the lifetime of the car it was installed into. Manufacturers already offer 8+ year battery warranties on new EVs, because they know they can easily beat that (barring a manufacturing defect of some kind).

    (In Japan, Nissan has been taking cells out of old Leafs that have at least 80% remaining capacity and making them into home power packs. The Nissan Leaf was one of the first EVs and used an older battery chemistry — and even there, the batteries are typically outliving the cars they were originally installed into).

    It’s a little difficult to say with certainty what the lifetime of an EV battery is going to be like right now, as EVs with modern chemistries aren’t yet 15 years old (they’re more like 5 to 7 years old at most). Anecdotally, those I know with EVs in that age range typically have less than 1% capacity loss (an OBD-II reader can typically check this for you, so it’s not difficult to determine).

    Now of course it’s possible that someone has abused the hell out of their vehicle in ways that reduce the battery life (like routinely driving it to completely pull-over-to-the-side-of-the-road empty before recharging) — but as mentioned above, an OBD-II reader will quickly show what the battery capacity is like. Hopefully used car sellers would check this themselves and provide it to buyers — but if not, OBD-II readers on Amazon aren’t terribly expensive to buy to check for oneself.
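    The arithmetic an OBD-II tool does with those readings is trivial — current capacity versus nominal. (A toy helper of my own; the actual capacity PIDs are manufacturer-specific and vary by make, so a real tool needs the right decoder for your car.)

```python
def state_of_health(current_capacity_kwh: float,
                    nominal_capacity_kwh: float) -> float:
    """Battery state of health as a percentage of the pack's nominal
    (as-new) capacity. The capacity figures themselves come from
    manufacturer-specific OBD-II PIDs; this only does the arithmetic."""
    return 100.0 * current_capacity_kwh / nominal_capacity_kwh

# e.g. a hypothetical 62 kWh pack currently reading 57 kWh usable
print(f"{state_of_health(57.0, 62.0):.1f}%")  # 91.9%
```

    Anything in the high-80s-or-above range on a used EV of that age is consistent with the degradation figures discussed above.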

    Battery wear concerns are going to be more for “classic” EV collectors in 30+ years time, and won’t be for your typical EV driver.


  • The levers available to the Federal Government in this area are few; the Provinces hold most of the cards when it comes to housing, and it disgusts me that all too many of my fellow Canadians have so little clue as to how the system works that they blame the Federal Government (and/or “Trudope”), while letting their Provincial leaders (the majority of which are Conservatives) off the hook.

    Just today we’re seeing the Premier of Alberta attempting to halt some of the Federal Government’s deals with municipalities to enhance housing supply — purely because if they let the Feds provide assistance, they won’t have a cudgel to hold against them anymore. It’s “you don’t do enough to help” and “we won’t let you help!” at the same time.

    The only policy solution to the current housing woes is more housing supply. And that’s ultimately in the hands of the Provinces.


  • Truly “poor people” (to use your words) typically don’t buy a lot of new cars in the first place. People on the lower end of the income scale are the main drivers of the used vehicle market.

    Incentivizing EV purchases and infrastructure ultimately helps everyone. It will bring efficiencies to the supply chain, and will drive investment into resources that should help drive prices down. At the same time, within the next 5 years or so you should see growth in the used EV market, which as more stock becomes available and used EVs become more normalized should make them more economical to purchase (as they’re already more economical to run and maintain).

    More new EVs now means more used EVs down the road, which will allow people to get into a better car for less money.