We already know from TOS that Multitronic computers are able to develop sapience, with the M-5 computer being specifically designed to “think and reason” like a person, and built around Dr. Daystrom’s neural engrams.
However, we also know from Voyager that the holomatrix of their Mk 1 EMH also incorporates Multitronic technology, and from DS9 that it’s also used in mind-reading devices.
Assuming that the EMH is designed to be more or less a standard hologram with some medical knowledge added in, it shouldn’t have come as a surprise that holograms were either sapient themselves or capable of developing sapience. It would only be a logical possibility once technology that allowed human-like thought and reasoning was incorporated into a hologram.
If anything, it is more of a surprise that sapient holograms like the Doctor or Moriarty hadn’t happened earlier.
I’m a computational linguist working on LLMs and, sorry, but I really despise it when people ascribe any kind of intention, intelligence, self-awareness or sapience/sentience to one of our algorithms.
It’s a text generator. It’s literally just a text generator. It strings words together one after another, each drawn from a complicated probability distribution over what is likely to follow what has already been said, learned across huge amounts of text. There is no sentience or sapience or intention or context or anything. It is just a text generator. Crudely put, it’s a more sophisticated version of your phone keyboard’s auto-suggest feature.
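To make that mechanism concrete, here is a minimal, purely illustrative sketch of the generation loop being described: a toy bigram model that repeatedly samples the next word from a probability distribution conditioned on the previous one. The corpus and the function name `next_word` are invented for this example; a real LLM conditions on a long context window with a neural network instead of a lookup table, but the loop has the same shape.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; a real model is trained on vast amounts of text.
corpus = "the doctor is a hologram and the hologram is a program".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word from the distribution of words seen after `prev`."""
    candidates = follows[prev]
    if not candidates:  # dead end: nothing ever followed this word in training
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The generation loop: append one sampled word at a time.
text = ["the"]
for _ in range(8):
    nxt = next_word(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```

Every step in this sketch is transparent mechanism: count, normalise, sample, append. Scaling the conditional distribution up makes the output vastly more fluent, but the loop itself is the same.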
I swear, to anyone who actually works with them and knows how they work internally, people who worship “AI” feel like a cult around a pocket calculator. And OpenAI and their “we are so afraid of our own creation” marketing do not help in the slightest. They of all people should know better.
I recommend the writings of Emily M. Bender from the University of Washington, also a computational linguist, on this topic:
https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd
https://medium.com/@emilymenonbender/talking-about-a-schism-is-ahistorical-3c454a77220f
If you trap a person in a room with a keyboard and tell them you’ll give them an electric shock if they stop producing text, or if the text reveals they’re a person trapped somewhere rather than software, the result is also just a text generator. But it’s clearly sentient, sapient and conscious, because it’s got a human in it. It’s naive to assume that something couldn’t have a mind just because there’s only a limited interface to interact with it, especially when neuroscience and psychology can’t pin down what gives rise to those same qualities in humans.
This isn’t to say that current large language models are any of these things; it’s just that the reason you’ve presented for dismissing the possibility isn’t very good. It might just be bad paraphrasing of the material you linked, but I keep seeing people present “it just predicts text” as a massive gotcha that stands on its own.
A calculator is not sentient, sapient or conscious, and it doesn’t have intentions or morals or make decisions, simply because there could theoretically be a human doing those same calculations inside it. Claiming that it is would be rightfully ridiculed.
Extraordinary claims require extraordinary evidence. It’s not my job to concisely debunk the idea that a mathematical formula that predicts text based on probability pattern matching is actually sentient or sapient. It is not a black box! We know what it does! We wrote it!
The “we simply cannot know” agnosticism is just as ridiculous with LLMs as it would be if you claimed that a “smart TV” might be sentient, or an NPC in a video game. It is not. And we know it is not. We know how it works. To claim that we don’t, and that it is, borders on a cult.