How Do Lawyers Work in the [Twitter] Thread: now with (failed) Bolivian Coups
Because in these corners they already want to stick their dicks into things they don't understand on a conceptual or practical level.
Edit: perfect totp to ya
Ready to kick some.... ready to kick some ass....
It's all fantasy, none of the women in porn are real and if they are real I'll never meet them, and if I do meet them they won't look, sound, or act like that, and they definitely won't deserve to be treated like that.
So, whatever strokes your boat.
twitch.tv/Taramoor
@TaramoorPlays
Taramoor on Youtube
We all get this I'm sure, but I'd just like to add that LLMs can't even "know" anything; they are models with no capacity for knowledge. A quadratic equation doesn't know what a line is, because it doesn't know anything at all.
Essentially: most LLMs can't even correctly handle a brainteaser as simple as something in the format "Alice has 5 brothers and she has 6 sisters. How many sisters does Alice's brother have?"
They don't know anything.
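For reference, the arithmetic the riddle is actually asking for is trivial; a quick sketch (pure arithmetic, nothing model-specific):
```python
# Ground truth for the brainteaser above.
alice_brothers = 5  # a distractor; irrelevant to the question
alice_sisters = 6

# Any brother of Alice has all of Alice's sisters as sisters, plus Alice herself.
sisters_per_brother = alice_sisters + 1
print(sisters_per_brother)  # 7
```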
I'm "kupiyupaekio" on Discord.
Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.
yeah if it's a solution that exists, or has quantifiable permutations on an existing solved problem, then it will just modify and repeat that answer
This can become extremely obvious when you pull small twists on the common input data, so e.g. for text if you ask a common question/riddle but change some important factor it'll likely repeat the common answer to the unaltered riddle. Ask if 1kg of steel is heavier than 5kg of feathers and you'll likely get the response that they're the same weight, for instance.
This also means that you can't judge an LLM's "intelligence" by asking it questions it's been trained on. It quite literally has the cheat sheet built into the math if it's a common enough question, which a lot of the ones published in psychology papers are! You need to make your own novel questions, except of course if you publish those online, they only work until your work is trained into the next version of the model...
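If you want to run that kind of perturbation test yourself, here's a minimal sketch, assuming the OpenAI Python client (the model name is a placeholder; any chat endpoint works the same way):
```python
# Sketch of the "twisted riddle" probe described above: ask the classic
# version and a perturbed version, and see whether the model just
# pattern-matches the memorized answer. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

riddles = [
    "Which is heavier, a kilogram of steel or a kilogram of feathers?",  # the classic
    "Is 1kg of steel heavier than 5kg of feathers?",                     # the twist
]

for riddle in riddles:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever you're testing
        messages=[{"role": "user", "content": riddle}],
    )
    print(riddle)
    print("->", resp.choices[0].message.content)
    print()
```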
maybe I'm just lucky, but, ah, hoo boy have I been blessed by the Jehovah's Thicknesses or something
but also if you keep futzing with it enough you can convince it to give you the wrong answer sooner or later, even after getting it right the first time
The settlers there have wood for sheep.
https://steamcommunity.com/profiles/76561197970666737/
I just tried it, and it gets the right answer, but is insane along the way:
Final answer is correct. First point is fine. Second point is fine. Third point, uh, no, "pound" is not used in the metric system. Fourth point: no, "a pound of feathers and a kilogram of lead" do _not_ weigh the same. But somehow it ignored the first half of that sentence and gave me the right answer in the end?
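For the record, the unit mix-up is easy to sanity-check (the pound's metric definition is exact):
```python
# 1 avoirdupois pound is defined as exactly 0.45359237 kg, so a kilogram
# of lead outweighs a pound of feathers by more than a factor of two.
POUND_IN_KG = 0.45359237
print(1.0 > POUND_IN_KG)   # True
print(1.0 / POUND_IN_KG)   # ~2.2046 pounds per kilogram
```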
And it still can't do math, even though it uses some sort of more-serious serif-y font for the answer:
You’d have to design an entirely different kind of system to reason things out. It’s a mildly fucked and oft-hallucinatory equivalent of memorization (yet not even quite that, really)
Yeah, you can tell it the right answer is wrong and it'll "apologize" and generate something else instead of being capable of going "no, this is the right answer, and here's why"
The thing that fucked me up about this is that there's technically a wrapper on the whole thing. It's essentially all a story to be auto-completed; the LLM starts with something like (very simplified): "What follows is a conversation between a user and a very helpful AI. The user says:" (you fill in this blank) "And the AI replies..." and it auto-completes that section, rinse and repeat.
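A toy version of that wrapper, just to make the shape concrete (illustrative only; real chat deployments use fancier templates, but the autocomplete loop is the same idea):
```python
# The chat UI keeps extending one text document and asks the model to
# autocomplete the next "AI:" line; the completion gets appended to the
# history, rinse and repeat.
TEMPLATE = (
    "What follows is a conversation between a user and a very helpful AI.\n"
    "{history}"
    "User: {user_message}\n"
    "AI:"
)

def build_prompt(history: str, user_message: str) -> str:
    """Serialize the whole conversation into a single autocomplete prompt."""
    return TEMPLATE.format(history=history, user_message=user_message)

print(build_prompt("", "Is 1kg of steel heavier than 5kg of feathers?"))
```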
It just feels so... clearly a sham as a framework; the amount of shamelessness required to argue "oh yeah, this is a person and it's talking to you, plus it really knows how to answer, believe me" is galling.
Nobody knows the answer anymore.
Because of woke
During Pride month?
We're better than this
Obviously gender can be more complicated than that, but there's an implicit gender given by referring to Alice as "she."
Lulti-Level Marketing.
1. AI is getting better all the time, and soon there will be no problem it can't solve. Get on the AI train right now, or be left behind. All business use cases must move over to AI, and every person needs to stay up with the biggest releases or be rendered obsolete.
2. AI doesn't reason, doesn't know anything, doesn't create knowledge, and is therefore useless. Look here, I have a handful of use cases that AI isn't very good at; watch it fail. Also, look at these hallucinations. This is literally, completely, and utterly useless.
In my experience, the utility really is in between those two extremes. Like, being able to produce answers from a vast store of knowledge, in response to natural language prompts, and to "reason" out patterns from that store for new use cases is very useful for certain things. For me, I pair program with an AI much of the day. I tell it what functions I want it to write, it writes them, I tell it how to modify them to reflect my use case more specifically, and I do manual modifications if need be, but it certainly accelerates my coding by a lot. No, it's not correct all of the time; neither is Google, neither is asking your professor, neither is learning from your dear Granny. Critical thinking and testing for oneself are still needed.
Like most new technologies, it's not a solution to every problem; a large part of being able to use it is actually knowing the relevant use cases and whether it fits (also, not making up a use case just so you have an excuse to use AI).
And here I slightly change the riddle and it fucks it up:
It produces a very verbose, apparently well-reasoned explanation that is completely incorrect and devoid of understanding, but simply has the APPEARANCE of thought.
Expansive Oration Dispensing
Every single one of these "AI" algorithms is being pursued as a new avenue for management to have yet more ways to exploit and undercompensate labor.
Never mind the incredibly negative impact these algorithms have on the environment, in both water consumption and power consumption. So even if the transparently false narratives these snake oil peddlers are trying to manufacture consent with had some truth to them, there would still be a real and deleterious cost that is simply not worth paying.
Rock Band DLC | GW:OttW - arrcd | WLD - Thortar
https://bsky.app/profile/theuglymachine.bsky.social/post/3kv2zhksxnb2j
edit: oh bluesky doesn't embed, lemme post the image
lin-manuel miranda's large language model multi-level marketing for men-loving-men
https://jacobin.com/2024/06/ai-data-center-energy-usage-environment
anyway just something to think about as yous keep "testing" ai to say how much it sucks/doesn't suck
https://gofund.me/fa5990a5