• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • That’s not how it works at all. If it were as easy as adding a line of code that says “check for integrity” they would’ve done that already. Fundamentally, the way these models all work is you give them some text and they try to guess the next word. It’s ultra autocomplete. If you feed it “I’m going to the grocery store to get some” then it’ll respond “food: 32%, bread: 15%, milk: 13%” and so on.
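
    To make that concrete, here’s a toy sketch of what the model actually hands back for that prompt (plain Python, made-up numbers):

    ```python
    # Toy illustration: the model's only job is to score possible next words.
    # The numbers here are made up; a real model scores its whole vocabulary.
    next_word_probs = {"food": 0.32, "bread": 0.15, "milk": 0.13, "eggs": 0.08}

    prompt = "I'm going to the grocery store to get some"
    best_guess = max(next_word_probs, key=next_word_probs.get)
    print(prompt, best_guess)  # -> "... to get some food"
    ```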

    They get these results by crunching a ton of numbers, and those numbers, called a model, were tuned by training. During training, they collect every scrap of human text they can get their hands on, feed bits of it to the model, then see what the model guesses. They compare the model’s guess to the actual text, tweak the numbers slightly to make the model more likely to give the right answer and less likely to give the wrong answers, then do it again with more text. The tweaking is an automated process, just feeding the model as much text as possible, until eventually it gets shockingly good at predicting. When training is done, the numbers stop getting tweaked, and it will give the same answer to the same prompt every time.
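
    A drastically simplified stand-in for that loop (a word-pair counter rather than a neural network and its automated number-tweaking, but the “see some text, nudge the numbers toward the right answer” shape is the same):

    ```python
    from collections import defaultdict

    # Tiny made-up "training set"; real training uses every scrap of text available.
    corpus = "the cat sat on the mat because the cat was tired".split()

    # The "model" here is a table of counts: for each word, how often each
    # other word followed it. Every example seen nudges the numbers a little.
    counts = defaultdict(lambda: defaultdict(int))
    for current_word, next_word in zip(corpus, corpus[1:]):
        counts[current_word][next_word] += 1  # the "tweak the numbers" step

    # Once training stops, the numbers are frozen: same prompt, same guesses.
    after_the = counts["the"]
    total = sum(after_the.values())
    print({word: n / total for word, n in after_the.items()})
    # -> {'cat': 0.67, 'mat': 0.33} (roughly)
    ```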

    Once you have the model, you can use it to generate responses. Feed it something like “Question: why is the sky blue? Answer:” and if the model has gotten even remotely good at its job of predicting words, the next word should be the start of an answer to the question. Maybe the top prediction is “The”. Well, that’s not much, but you can tack one of the model’s predicted words to the end and do it again. “Question: why is the sky blue? Answer: The” and see what it predicts. Keep repeating until you decide you have enough words, or maybe you’ve trained the model to also be able to predict “end of response” and use that to decide when to stop. You can play with this process, for example, making it more or less random. If you always take the top prediction you’ll get perfectly consistent answers to the same prompt every time, but they’ll be predictable and boring. You can instead pick based on the probabilities you get back from the model and get more variety. You can “increase the temperature” of that and intentionally choose unlikely answers more often than the model expects, which will make the response more varied but will eventually devolve into nonsense if you crank it up too high. Etc, etc. That’s why even though the model is unchanging and gives the same word probabilities to the same input, you can get different answers in the text it gives back.
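
    The generation loop itself is equally unmagical. A minimal sketch, where `next_word_probs` is a hypothetical stand-in for the frozen model:

    ```python
    import random

    # Stand-in for the trained model: same text in, same scores out.
    # A real model would score every word in its vocabulary based on the prompt.
    def next_word_probs(text):
        return {"The": 0.5, "sky": 0.2, "Because": 0.15, "scatters": 0.1, "<end>": 0.05}

    def pick_word(probs, temperature=1.0):
        # temperature < 1 sharpens toward the top picks (more predictable),
        # temperature > 1 flattens the distribution (more varied, eventually nonsense).
        words = list(probs)
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(words, weights=weights)[0]

    text = "Question: why is the sky blue? Answer:"
    for _ in range(20):  # or stop when the model predicts "end of response"
        word = pick_word(next_word_probs(text), temperature=0.8)
        if word == "<end>":
            break
        text += " " + word
    print(text)
    ```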

    Note that there’s nothing in here about accuracy, or sources, or thinking, or hallucinations, or anything like that. The model doesn’t know whether it’s saying things that are real or fiction. It’s literally a gigantic unchanging matrix of numbers. It’s not even really “saying” things at all. It’s just tossing out possible words; something else picks from that list, and the result gets fed back in for more words. To be clear, it’s really good at this job, and can do some eerily human things, like mixing two concepts together, in a way that computers have never been able to do before. But it was never trained to reason, it wasn’t trained to recognize that it’s saying something untrue, or that it has little knowledge of a subject, or that it is saying something dangerous. It was trained to predict words.

    At best, what they do with these things is prepend your questions with instructions, trying to guide the model to respond a certain way. So you’ll type in “how do I make my own fireworks?” but the model will be given “You are a chatbot AI. You are polite and helpful, but you do not give dangerous advice. The user’s question is: how do I make my own fireworks? Your answer:” and hopefully the instructions make the most likely answer something like “that’s dangerous, I’m not discussing it.” It’s still not really thinking, though.
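
    In code terms that wrapping is little more than string pasting; a rough sketch (the hidden instruction wording is made up for illustration):

    ```python
    def build_prompt(user_question):
        # The user never sees this wrapper; it just biases which words the
        # model is likely to predict next. Hypothetical wording.
        return (
            "You are a chatbot AI. You are polite and helpful, "
            "but you do not give dangerous advice.\n"
            f"The user's question is: {user_question}\n"
            "Your answer:"
        )

    print(build_prompt("how do I make my own fireworks?"))
    ```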



  • Archive Team often uses the Internet Archive to share the things they save and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. The Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is going to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have some other projects, like trying to brute force all those awful link shorteners so that when they inevitably die, people can still figure out where each link should’ve pointed.
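
    The shortener work is conceptually simple, something along these lines (a rough sketch using the `requests` library; the domain is a placeholder, and the real projects coordinate which ranges of codes each volunteer checks):

    ```python
    import itertools
    import string

    import requests

    # Placeholder domain for illustration; the real projects target specific
    # dying shorteners and split the keyspace among volunteers.
    SHORTENER = "https://example-shortener.invalid/"

    def resolve(code):
        # Ask where the short link points without actually following it.
        resp = requests.head(SHORTENER + code, allow_redirects=False, timeout=10)
        return resp.headers.get("Location")

    # Walk every possible 2-character code and record the mapping before it dies.
    alphabet = string.ascii_lowercase + string.digits
    for chars in itertools.product(alphabet, repeat=2):
        code = "".join(chars)
        target = resolve(code)
        if target:
            print(code, "->", target)
    ```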




  • I know TiddlyWiki quite well but have only poked at Logseq, so maybe Logseq is more similar to this than I think, but TiddlyWiki is almost entirely implemented in itself. There’s a very small core that’s JavaScript but most of it is implemented as wiki objects (they call them “tiddlers,” yes, really) and almost everything you interact with can be tweaked, overridden, or imitated. There’s almost nothing that “the system” can do but you can’t. It’s idiosyncratic, kind of its own little universe of concepts to be learned and understood, but if you do learn it, it’s insanely flexible.

    Dig deep enough, and you’ll discover that it’s not a weird little wiki — it’s a tiny, self-contained object database and web frontend framework that they have used to make a weird little wiki, but you can use it for pretty much anything else you want, either on top of the wiki or tearing it down to build your own thing. I’ve used it to make a prediction tracker for a podcast I follow, I’ve made my own todo list app in it, and I made a Super Bowl prop bet game for friends to play that used to be spreadsheet-based. For me, it’s the perfect “I just want to knock something together as a simple web app” tool.

    And it has the fun party trick (this used to be the whole point of it but I’d argue it has moved beyond this now) that your entire wiki can be exported to a single HTML file that contains the entire fully functional app, even allowing people to make their own edits and save a new copy of the HTML file with new contents. If running a small web server isn’t an issue, that’s the easiest way to do it because saving is automatic and everything is centralized; otherwise you need to jump through some hoops to get your web browser to allow writing to the HTML file on disk, or just save new copies every time.




  • It’s not a fantasy because they’re bad ideas (they’re not) or we shouldn’t fight for them (we should), it’s a fantasy because you’re skipping over any of the actual work that needs to be done to make them happen: convincing more people to join you and demand more. Ask 100 people if the Senate and Supreme Court should be abolished and 99 of them are going to look at you like you have two heads. You can insist that you’re right and they’re all wrong all you want, but unless you work to get more people on your side, you’ll just be complaining into the void and setting impossible standards for politicians so that you can feel smug when they fail to meet them.


  • If a minority group is being oppressed or is otherwise motivated to create change and is voting in large numbers, but the majority is apathetic and not bothering to vote, then this system would prevent the minority from changing their representation as “punishment” for something they’re not doing.

    It’s also a bit of a “the beatings will continue until morale improves” solution to the problem, if it even is actually a problem. Low turnout is bad, but not because it’s inherently bad not to vote. It’s a symptom of the fact that people don’t think it matters, or that it will change anything, and unfortunately they’re not exactly wrong much of the time. Instead of putting effort into punishing people for not being engaged enough, it’d be better to make systemic changes that empower people and make the government more representative of their interests.


  • OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
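
    The format is simple enough to poke at yourself; a quick sketch (file names are placeholders):

    ```python
    import xml.etree.ElementTree as ET

    def feed_urls(opml_path):
        # Each subscription is just an <outline> element with an xmlUrl attribute;
        # no posts or articles are stored in the file.
        tree = ET.parse(opml_path)
        return {o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")}

    # Merging a second export is effectively a set union, so importing it
    # should only ever add subscriptions.
    merged = feed_urls("old-reader.opml") | feed_urls("new-reader.opml")
    print(len(merged), "unique feeds")
    ```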



  • If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

    There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren’t just associating the sounds they hear, they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

    LLMs aren’t nearly at that level. That’s not to say what they do isn’t impressive, because it really is. They can also synthesize unrelated concepts together in a stunningly human way, even things that they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to think that something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at, and although they’ve clearly developed some impressive skills to accomplish that task, it’s not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that”, but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions and a desire to avoid causing harm, and they don’t have that. And how could they? Their training didn’t include any of that, it was mostly about words.

    One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t just doing statistics, but you don’t have to go too far down that spectrum before the answers start seeming thoughtful.


  • In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

    OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

    “We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

    The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

    https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

    The thing is, it doesn’t really matter if you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a lot weaker than the original argument, which was that nothing of the original material really remains after training, that it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.


  • The problem is the jokes aren’t funny. Or even really jokes. It’s just the same hateful garbage that you’ll find in any right wing comment section with no clever twist or respect for the humanity of the people being made fun of. It’s all variations on “haw haw, these people are pretending to be something they’re not, ew gross”. It’s not true, it’s not “keeping it real”, it’s not insightful, and anyone who actually knows or cares about the trans community knows that hearing that all the time will drive some people to kill themselves. Maybe even worse than that, it’ll foster that attitude in people even less compassionate than Dave Chappelle, who I don’t think has any particular malice toward individual trans people, but he’s telling those who do that they’re right.

    There’s definitely humor to be had about the trans community, just visit any trans meme board and you’ll find it. There are stereotypes and self-deprecation and tons of really dark humor going on. What’s coming out of Chappelle’s mouth isn’t that, it’s just undercooked right wing bigotry.



  • “There was a particular bad guy near them” and “they all probably have bad opinions about Jews” are not sufficient justifications for indiscriminately bombing innocent people. What if there had been an Israeli leader at that rave? People in both refugee camps and at a music event should be able to exist without fear that they’ll die because they were near the wrong person. One seems to provoke a different reaction than the other for some reason though, and that might be worth thinking about.


  • These models aren’t great at tasks that require precision and analytical thinking. They’re trained on a fairly simple task, “if I give you some text, guess what the next bit of text is.” Sounds simple, but it’s incredibly powerful. Imagine if you could correctly guess the next bit of text for the sentence “The answer to the ultimate question of life, the universe, and everything is” or “The solution to the problems in the Middle East is”.

    Recently, we’ve been seeing shockingly good results from models that do this task. They can synthesize unrelated subjects, and hold coherent conversations that sound very human. However, despite doing some things that up until recently only humans could do, they still aren’t at human-level intelligence. Humans read and write by taking in words, converting them into rich mental concepts, applying thoughts, feelings, and reasoning to them, and then converting the resulting concepts back into words to communicate with others. LLMs arguably might be doing some of this too, but they’re evaluated solely on words and therefore much more of their “thought process” is based on “what words are likely to come next” and not “is this concept being applied correctly” or “is this factual information”. Humans have much, much greater capacity than these models, and we live complex lives that act as an incredibly comprehensive training process. These models are small and trained very narrowly in comparison. Their excellent mimicry gives the illusion of a similarly rich inner life, but it’s mostly imitation.

    All that comes down to the fact that these models aren’t great at complex reasoning and precise details. They’re just not trained for it. They got through “life” by picking plausible words and that’s mostly what they’ll continue to do. For writing a novel or poem, that’s good enough, but math and physics are more rigorous than that. They do seem to be able to handle code snippets now, mostly, which is progress, but in general this isn’t something that you can be completely confident in them doing correctly. They make silly mistakes because they aren’t really thinking it through. To them, there isn’t really much difference between answers like “that date is 7 days after Christmas” and “that date is 12 days after Christmas.” Which one it thinks is more correct is based on things it has seen, not necessarily an explicit counting process. You can also see this in things like that case where someone tried to use it to write a legal brief, where it came up with citations that seemed plausible but were in fact completely made up. It wasn’t trained on accurate citations, it was trained on words.
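
    Compare that with an explicit counting process, which is trivial for ordinary code and never just “sounds plausible”:

    ```python
    from datetime import date, timedelta

    # Explicit counting: the answer falls out of arithmetic, not out of
    # which phrasing showed up more often in the training text.
    christmas = date(2023, 12, 25)
    print(christmas + timedelta(days=7))   # 2024-01-01
    print(christmas + timedelta(days=12))  # 2024-01-06
    ```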

    They also have a bad habit of sounding confident no matter what they’re saying, which makes it hard to use them for things you can’t check yourself. Anything they say might or might not be right/accurate/good/not plagiarized, but the model won’t have a good sense of that, and if you don’t know either, you’re opening yourself up to the risk of being misled.



  • Yes, we get the names of the candidates on our ballots. The whole idea of “the unwashed masses vote for electors who then vote for President” never really got off the ground, and states pretty quickly abandoned it in favor of a more straightforward popular vote to determine who gets the state’s votes.

    (A digression: States can technically choose almost any method of allocating electors that they want, the main restriction is that if they do choose to hold an election, it has to be fair, no discriminating based on race, etc. A couple states use a system where Congressional districts each allocate 1 electoral vote independently and the state’s remaining 2 electoral votes go to the statewide winner, so the vote for that state can be split a bit, but for most it’s simply “most votes in the state gets all the electors”.)

    (And another digression related to the previous digression: The fact that states technically have very wide leeway for choosing their electors is also the basis of a scheme called the National Popular Vote Interstate Compact, the idea being that if a majority of electoral votes’ worth of states band together, they can just decide who the President is, and so they could choose to agree to pick the national popular vote winner regardless of how any individual state voted and effectively use the electoral college to eliminate the electoral college. They haven’t gotten that majority, though it has gotten closer over the last few decades, and the Constitution does say that states banding together into compacts have to get Congressional approval, and also anything this drastic and unconventional would certainly get challenged in court. But still, a very weird and interesting idea.)

    When a candidate gets on the ballot for President in most states, they also submit a slate of electors, basically saying “if I win, make these N people from your state the electors” and of course you choose people who will 100% vote for you, usually party leaders or other people you want to honor with a special ceremonial role. In some states, the electors are actually required to vote for the winner, making it entirely ceremonial.

    (One of Trump’s indictments is about this as well. They organized a scheme where his slates of electors in a few came-close-but-lost states sent in their own unauthorized electoral votes to DC insisting that Trump actually won their state and they were the correct electoral voters, the idea being that when Mike Pence is presiding over the official counting of electoral votes on January 6th, he can pull them out and say “there’s some controversy here, let’s vote on what to do” and try to either throw out the legitimate votes as disputed or even count the illegitimate ones instead. Luckily Pence was not on board with this at all.)

    Since presidential elections are, weirdly, conducted at the state level, it’s a valid question to ask Minnesota’s Secretary of State, as he’s in charge of conducting Minnesota’s election. He’s the one that would also be in charge of making sure that each candidate is a natural-born citizen, is 35 years old, and… well, that’s basically all the requirements. The clause preventing insurrectionists from holding office is a Civil War era one, so it’s very unclear how it would apply to anything today, especially when Trump hasn’t actually been convicted of anything.