Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Cloaks are actually quite historical, they’re very easy to make and useful in a variety of conditions.
When you delve into the details of what those bullet points actually entailed, they were all far far worse in medieval times.
I suppose Biden could have him officially assassinated. That’s legal now.
The specific subject that Triton is telling Ariel about is where babies come from.
The problem isn’t stuff going in, it’s the baby coming out.
Wait until she finds out how she’ll be doing it once she’s human. I suspect she’ll prefer this approach.
Didn’t take companies long to stop pretending like they care.
Of course they care: they care about what their customers think, because that’s where their money comes from. This is just how corporations work, and they’d behave the opposite way if that’s what their customer base wanted.
If you want corporations to change then convince them that they’ll make more money that way, by whatever means. Through customer preferences, regulations, etc. Don’t expect a corporation to “do what’s right because it’s right,” any more than you should expect a shark to “do what’s right.” It’s not designed that way.
Oh, neat. The first one blew up the door, and then the second one literally flew inside and went down the hallway to reach the cache.
And sometimes that’s exactly what I want, too. I use LLMs like ChatGPT when brainstorming and fleshing out fictional scenarios for tabletop roleplaying games, for example, and in those situations coming up with plausible nonsense is specifically the job at hand. I wouldn’t want to go “ChatGPT, I need a description of what the interior of a wizard’s tower is like” and get the response “I don’t know what the interior of a wizard’s tower is like.”
Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.
Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they’ve spotted, I guess.
Don’t sweat downvotes, they’re especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I’ve certainly collected quite a few of those and I’m still doing okay. :)
There was a politics subreddit I was on that had a “downvoting is not allowed” rule. There’s literally no way to tell who’s downvoting on Reddit, or even if downvoting is happening if it’s not enough to go below 0 or trigger the “controversial” indicator.
I got permabanned from that subreddit when someone who’d said something offensive asked “why am I being downvoted???” and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don’t get to enforce that rule often and so leapt at the opportunity to find an excuse.
Downvotes for not getting it right, I presume.
Which makes me concerned that the “Hole for Pepnis” answer has so many upvotes.
Those holes look open to me.
I recall reading once upon a time that the original idea for this exemption was that it was for literal scholars - a few hundred priestly intellectual sorts that were professional serious full-time Torah-studiers. But the exemption didn’t have any specific criteria listed for what that meant, so the ultra-orthodox all wound up saying “yeah, I study the Torah all day too, so I qualify.”
You communicate with co-workers using natural languages but that doesn’t make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.
Sure, in those situations. I find that it doesn’t take that much effort to write a prompt that gets me something useful in most situations, though. You just need to make some effort. A lot of people don’t put in any effort, get a bad result, and conclude “this tech is useless.”
It also isn’t telepathic, so the only thing it has to go on when determining “what you want” is what you tell it you want.
I often see people gripe about how ChatGPT’s essay writing style is mediocre and always sounds the same, for example. But that’s what you get when you just tell ChatGPT “write me an essay about X.” It doesn’t know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.
“Just give me this and I’ll do the rest” is actually a pretty great workflow, in my experience. AI isn’t at the point where you can just set it loose to work on its own but as a collaborator it saves me a huge amount of hassle and time.
Funny how we’re big into privacy here, and then money comes up and lots of people are “wait no, not that kind of privacy.”