• 0 Posts
  • 91 Comments
Joined 1 year ago
Cake day: June 16th, 2023




  • SLS is on track to be more expensive per moon mission, adjusted for inflation, than the Apollo program. It is wildly too expensive and should be cancelled.

    This is coupled with the fact that the rocket is incapable of sending a crewed capsule to low lunar orbit, which is why the Lunar Gateway is planned for a Near-Rectilinear Halo Orbit instead.

    Those working in the space industry know that SpaceX’s success is due not to Elon but to Gwynne Shotwell. She is the President and COO of SpaceX and responsible for all things SpaceX. The best outcome after the election is to remove Elon from the board and revoke his ownership of what is effectively a defense company, as a consequence of his political interference in this election. Employees at SpaceX would be happy, the government would be happy, and the American people would be happy.


  • The technical definition of AI in academic settings is any system that can perform a task on its own with reasonably good performance.

    The field of AI is absolutely massive and includes super basic algorithms like Dijkstra’s Algorithm for finding the shortest path in a graph or network. Dijkstra’s is provably optimal, but harder routing problems, like the Traveling Salesman Problem, are NP-Complete, with no known polynomial-time solution. For those, AI algorithms use programmed heuristics to approximate optimal solutions, and it’s entirely possible that the path generated is in fact not optimal, which is why your GPS doesn’t always give you the guaranteed shortest path.

    To help distinguish fields of research, we use extra qualifiers to narrow the focus, such as “classical AI” and “symbolic AI”. Even “Machine Learning” is too ambiguous, as it was originally a statistical process to find trends in data, i.e. “statistical AI”. Ever used Excel to find a line of best fit for a graph? That’s “machine learning”.

    Admittedly, “statistical AI” does accurately encompass all the AI systems people commonly think about, like “neural AI” and “generative AI”. But without getting into more specific qualifiers, “Deep Learning” and “Transformers” are probably the best ways to narrow down what most people think of when they hear AI today.
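    For the curious, Dijkstra’s Algorithm is short enough to sketch. Here’s a minimal Python version over a made-up graph; the node names and edge weights are purely illustrative.

```python
# Minimal sketch of Dijkstra's algorithm: repeatedly expand the closest
# unvisited node using a priority queue. Graph and weights are made up.
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, edge_weight), ...]}
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(pq, (new_d, neighbor))
    return dist

roads = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 6)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

    Unlike heuristic routing, this exhaustive version is guaranteed optimal; real navigation systems trade that guarantee for speed on continent-sized graphs.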


  • This is truly a terrible accident. Given the flight tracking data and the cold winter weather at the time, structural icing is likely to have caused the crash.

    Ice will increase an aircraft’s stall speed, and especially when an aircraft is flown with autopilot on in icing conditions, the autopilot pitch trim can end up being set to the limits of the aircraft without the pilots ever knowing.

    Eventually the icing becomes so severe that the stall speed of the ice-laden wing and elevator exceeds the current cruising speed and results in an aerodynamic stall, which, if not immediately corrected with the right control inputs, will develop into a spin.

    The spin shown in several videos is a terrifying flat spin. Flat spins develop from normal spins after just a few rotations. It’s very sad and unfortunate that we can hear both engines producing power while the plane is in a flat spin towards the ground. The first thing to do when a spin is encountered is to eliminate all sources of power, as power will aggravate a spin into a flat spin.

    Once a flat spin is encountered, recovery from that condition is not guaranteed, especially in multi-engine aircraft where the outboard engines create a lot of rotational inertia.


  • CodeInvasion@sh.itjust.works to Science Memes@mander.xyz · Humor · 6 months ago

    It took Hawking minutes to create some responses. Without the use of his hands due to his disease, he relied on the twitch of a few facial muscles to select from a list of available words.

    As funny as it is, that interview, like any interview with Hawking, contained pre-drafted responses from Hawking and followed a script.

    But the small facial movements showing his emotion made clear that Hawking had fun doing it.




  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+, with some randomness injected so the output isn’t exactly the same for any given prompt.

    The probability of the next word comes from what was in the model’s training data, in combination with a very complex mathematical method, called self-attention, that computes the impact of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.

    This relatedness factor is very computationally expensive and grows quadratically with the number of words considered, so models are limited in how many previous words can be used to compute relatedness. This limitation is called the Context Window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So literally, the model builds entire responses one word at a time, from left to right.

    Because all future words are predicated on the previously stated words in either the prompt or subsequent generated words, it becomes impossible to apply even the most basic logical concepts, unless all the components required are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer to be correct.

    ---

    +More specifically, these words are tokens, which usually represent some smaller part of a word. For instance, “understand” and “able” could be represented as two tokens that, when put together, become the word “understandable”.
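    The generate-until-stop loop described above can be sketched in a few lines of Python. The “model” here is just a hard-coded probability table that only looks at the last token, so it illustrates the loop structure, not the math (a real LLM computes these probabilities over the entire context with self-attention); all tokens and probabilities are made up.

```python
# Toy sketch of the LLM generation loop: predict a next-token distribution,
# pick a token, append, repeat until the special stop token appears.
import random

# Hypothetical next-token probabilities; a real model would compute these.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "<stop>": 0.1},
    "dog": {"sat": 0.9, "<stop>": 0.1},
    "sat": {"<stop>": 1.0},
}

def generate(prompt_tokens, temperature=0.0):
    tokens = list(prompt_tokens)
    while True:
        # NOTE: this toy conditions only on the last token; real LLMs
        # condition on ALL previous tokens in the context window.
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        if temperature == 0.0:
            nxt = max(probs, key=probs.get)  # greedy: most likely token
        else:
            nxt = random.choices(list(probs), weights=probs.values())[0]
        if nxt == "<stop>":
            return tokens  # the stop token ends generation
        tokens.append(nxt)

print(generate(["the"]))  # ['the', 'cat', 'sat']
```

    With temperature above zero, the random sampling is the “injected randomness” mentioned above: the same prompt can yield different outputs.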



  • I am a pilot and this is NOT how autopilot works.

    There are some autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing-leveler.

    The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except when on a programmed instrument approach, and only once it captures the glideslope (so you need to be in the correct general area in 3D space for it to work).

    An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.

    Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released 6 years ago, it still did more than most aircraft autopilots.





  • I’m convinced that we should use the same requirements to fly an airplane as driving a car.

    As a pilot, there are several items I need to log at regular intervals to remain proficient so that I can continue to fly with passengers or fly under certain conditions. The biggest one is the need for a Flight Review every two years.

    If we did the bare minimum and implemented a Driving Review every two years, our roads would be a lot safer, and far fewer people would die. If people cared as much about driving deaths as they do about flying deaths, the world would be a much better place.


  • You are absolutely right on all counts. I’m sorry you’ve had shitty landlords. I wish there were a better way to weed those people out, because as it stands, the balance of power is heavily in favor of the landlord due to the micro-monopolistic nature of renting a place for years at a time.

    Renting vs. buying is very dependent on your local market. I have friends in Ottawa for whom I’ve run the numbers, and it would literally never be profitable to purchase a home compared to continuing to rent. In some areas, two years is the break-even point. These days, with high interest rates, the break-even on buying vs. renting comes after about 5 or 6 years. I encourage anyone to check it out for themselves! :)

    https://www.nytimes.com/interactive/2014/upshot/buy-rent-calculator.html

    (For anyone stuck behind the paywall, install this chrome extension to get past it: https://github.com/iamadamdev/bypass-paywalls-chrome)

    I could have been clearer, but my situation has a very slight net benefit for me, and since my tenants only plan to live in the area for two years, they are getting the better end of the deal. In the end though, there is a mutual benefit, and that’s what a competitive market should tend towards (as opposed to the monopolistic nature of corporate apartment housing, which encourages the opposite).

    My point is that the people who hate all landlords, instead of just the bad ones, don’t understand the economic realities of housing. It’s actually the mom-and-pops who rent out their homes for a short period that make renting cheaper on average for the market as a whole, mostly because they are imperfect businessmen/women and don’t understand the full cost of being a landlord until it’s too late. Instead, most mom-and-pop landlords are just hoping to break even.
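    As a crude illustration of how such a break-even calculation works, here’s a sketch with entirely made-up numbers. The real calculators (like the NYT one linked above) model far more: taxes, mortgage amortization, and the opportunity cost of the down payment.

```python
# Crude rent-vs-buy break-even sketch. All rates and prices are made up,
# and the model ignores taxes, amortization, and investment returns.
def years_to_break_even(price, rent_monthly, interest_rate=0.05,
                        upkeep_rate=0.01, appreciation=0.03,
                        buy_fee_pct=0.03, sell_fee_pct=0.10, max_years=40):
    """First year in which buying comes out cheaper than having rented."""
    for year in range(1, max_years + 1):
        rent_cost = rent_monthly * 12 * year
        value = price * (1 + appreciation) ** year
        own_cost = (price * buy_fee_pct                         # closing costs
                    + price * (interest_rate + upkeep_rate) * year  # carrying costs
                    + value * sell_fee_pct                      # ~10% fees if sold now
                    - (value - price))                          # offset by appreciation
        if own_cost < rent_cost:
            return year
    return None  # renting wins over the whole horizon

print(years_to_break_even(400_000, 2_000))  # 5
```

    Tweak the rent, price, or rates and the break-even year swings wildly, which is exactly why the answer is so market-dependent.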


  • CodeInvasion@sh.itjust.works to memes@lemmy.world · The system is broken · 10 months ago

  • It’s clear there is a fundamental misunderstanding of the amount of capital required to own an investment property without first living in it as a primary residence for a few years.

    If one were to purchase a property with the express intent of immediately renting it out, most banks will require at least 25% down, with no option to pay PMI to cover the difference. That’s an insane amount of money to put down just for the landlord to run a negative cash flow for the first 10 years. If investors have that kind of money and still want to be involved in real estate, they should buy a share in an apartment complex, where the margins are more favorable and the property actually has a positive cash flow.

    Thus nearly every single-family home was purchased initially as a primary residence, with the intent to live there. But then, by some circumstance or another, the owners needed to move away. Selling a home will cost you 10% of the home’s value in fees. So if that person has any intent to return to the home in the future, it’s better to eat the temporary loss and rent out the property.
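    The up-front capital math is easy to make concrete with made-up numbers (the 25% down and 10% selling-fee figures are from the argument above; the price is illustrative):

```python
# Rough illustration of the up-front capital argument; figures are made up.
price = 400_000
down_payment = price * 0.25   # typical minimum down payment on a pure rental
selling_fees = price * 0.10   # agent commissions, transfer taxes, etc.
print(f"Cash needed up front to buy as a rental: ${down_payment:,.0f}")
print(f"Cost of selling instead of renting it out: ${selling_fees:,.0f}")
```

    A six-figure cash outlay on one side, and a five-figure penalty for selling on the other, is why so many accidental landlords choose to rent the home out.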