so was the "we do not understand the concept of love" sample (from a Black Panther Party speech) on a CD or something, or did Hideki Naganuma just like to sample civil rights activist speeches (see also: Wrapped in Black sampling Malcolm X)
only the finest hand-drawn hentai for me, all of Methonium's stuff was drawn pre-AI anyways
It's all fantasy, none of the women in porn are real and if they are real I'll never meet them, and if I do meet them they won't look, sound, or act like that, and they definitely won't deserve to be treated like that.
hell they can't even understand the concept of truth, they literally don't know what that means
We all get this I'm sure, but I'd just like to add that LLMs can't even "know" anything, they are models that do not have a capacity for knowledge. A quadratic equation does not know what a line is because it does not know at all.
Essentially: most LLMs can't even correctly handle a brainteaser as simple as something in the format "Alice has 5 brothers and she has 6 sisters. How many sisters does Alice's brother have?"
They don't know anything.
For the record, I just put that brain teaser in GPT4o, and it correctly got the answer. So while it is true that they don't "know" things, the patterns they can recognize are getting broader and more complex.
Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.
yeah if it's a solution that exists, or has quantifiable permutations on an existing solved problem, then it will just modify and repeat that answer
Modern "AI" is, in some ways, just an incredibly space-efficient compression method. It's also incredibly lossy, though, so generally the only times it's going to be spitting out exact copies of its training data is if the inputs are common enough in that data - e.g. the Mona Lisa for image generation, specific Bible verses for text. They're "overtrained" and those specific patterns get burned into the math driving the whole thing.
This can become extremely obvious when you pull small twists on the common input data, so e.g. for text if you ask a common question/riddle but change some important factor it'll likely repeat the common answer to the unaltered riddle. Ask if 1kg of steel is heavier than 5kg of feathers and you'll likely get the response that they're the same weight, for instance.
This also means that you can't judge an LLM's "intelligence" by asking it questions it's been trained on. It quite literally has the cheat sheet built into the math if it's a common enough question, which a lot of the ones published in psychology papers are! You need to make your own novel questions, except of course that if you publish those online, they only work until your work is trained into the next version of the model...
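The "memorized answer" failure mode described above can be sketched with a toy stand-in: a fake "model" that answers by fuzzy-matching the question against a canned training set. This is purely an illustration of the behavior, not how a real LLM is implemented:

```python
# Toy illustration of the "memorized answer" failure mode: a fake "model"
# that answers by fuzzy-matching the question against its "training data".
# Purely a sketch of the behavior, not a real LLM.
import difflib

# "Training data": common riddles burned in with their canned answers.
MEMORIZED = {
    "which is heavier 1kg of steel or 1kg of feathers": "They weigh the same.",
    "how many months have 28 days": "All twelve of them.",
}

def fake_llm(question: str) -> str:
    # Normalize, then find the closest memorized question.
    q = question.lower().strip("?").replace(",", "")
    best = difflib.get_close_matches(q, MEMORIZED, n=1, cutoff=0.6)
    return MEMORIZED[best[0]] if best else "I don't know."

# The classic riddle gets the canned (correct) answer...
print(fake_llm("Which is heavier, 1kg of steel or 1kg of feathers?"))
# ...and the twisted variant fuzzy-matches the same memorized pattern,
# so the same canned answer comes back even though it's now wrong.
print(fake_llm("Which is heavier, 1kg of steel or 5kg of feathers?"))
```

Both calls print "They weigh the same." - the one-character twist isn't enough to escape the memorized pattern, which is exactly the steel-vs-feathers behavior described above.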
So, whatever strokes your boat.
maybe I'm just lucky, but, ah, hoo boy have I been blessed by the Jehovah's Thicknesses or something
but also if you keep futzing with it enough you can convince it to give you the wrong answer sooner or later, even after getting it right the first time
I just tried it, and it gets the right answer, but is insane along the way:
(my question: ) which is heavier, a pound of feathers, or a kilogram of lead
ChatGPT
A kilogram of lead is heavier than a pound of feathers.
Here’s the reasoning:
* A kilogram is a unit of mass in the metric system, equivalent to approximately 2.2 pounds.
* A pound (lb) is a unit of weight used in the imperial system.
* Despite the confusion that can arise from the word "pound" being used in both systems, a kilogram is a larger unit of mass than a pound.
* Therefore, even though a pound of feathers and a kilogram of lead both weigh the same (approximately 2.2 pounds), the kilogram of lead has a greater mass.
Final answer is correct. First point is fine. Second point is fine. Third point, uh, no, "pound" is not used in the metric system. Fourth point: no, "a pound of feathers and a kilogram of lead" do _not_ weigh the same. But somehow it ignored the first half of that sentence and gave me the right answer in the end?
And it still can't do math, even though it uses some sort of more-serious serif-y font for the answer:
I mean the ultimate problem is an LLM isn’t reasoning anything out, it’s just weighing what seems to be the most likely response from its training set; it’s doing a math, but it’s not the math you asked for, it’s the math required to do “Received query; given dataset, what is the most likely response? [calculate] [generate and submit likeliest response given input as limited by data set]”
You’d have to design an entirely different kind of system to ration things out. It’s a mildly fucked and oft-hallucinatory equivalent of memorization (yet not even quite that, really)
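The "most likely response given the training set" loop can be illustrated with a toy next-token model that just counts word bigrams in a tiny corpus and always emits the most frequent continuation. Real LLMs use learned weights over huge contexts, not raw counts - this is only the shape of the idea:

```python
# Toy next-token predictor: count word bigrams in a tiny "training set",
# then always emit the most frequent continuation. Shape of the idea only;
# real LLMs use learned weights, not raw counts.
from collections import Counter, defaultdict

corpus = "the answer is yes . the answer is yes . the answer is no .".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(prompt_word: str, n: int = 3) -> list:
    out = [prompt_word]
    for _ in range(n):
        nxt = follows[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return out

# "the" -> "answer" -> "is" -> "yes": the majority pattern wins,
# regardless of whether "yes" is actually the right answer.
print(generate("the"))
```

Note that "is" is followed by "no" once in the corpus, but the model will never say so: "yes" appears twice, so "yes" it is. That's the likeliest-response math, not the math you asked for.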
Yeah, you can tell it the right answer is wrong and it'll "apologize" and generate something else instead of being capable of going "no, this is the right answer, and here's why"
I know that because we often discuss AI matters in here we casually throw in a lot of jargon, but if anyone is new or trying to catch up, "LLM" is short for Lin-Lanuel Miranda
The thing that fucked me up about this is that there's technically a wrapper on the whole thing. It's essentially all a story to be auto-completed; the LLM starts with something like (very simplified): "What follows is a conversation between a user and a very helpful AI. The user says:" (you fill in this blank) "And the AI replies..." and it auto-completes that section, rinse and repeat.
It just feels so... clearly a sham as a framework; the amount of shamelessness required to argue that "oh yeah, this is a person and it's talking to you, plus it really knows how to answer, believe me" is galling.
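The wrapper described above is, in very simplified form, just string templating around the completion call. The template wording here is illustrative, not any real vendor's actual format:

```python
# Very simplified sketch of the chat "wrapper": the conversation is
# rendered into one big text blob that the model is asked to continue.
# Template wording is illustrative, not any real vendor's format.
SYSTEM = "What follows is a conversation between a user and a very helpful AI.\n"

def render_transcript(turns: list) -> str:
    """Flatten (speaker, text) turns into the story-to-be-autocompleted."""
    text = SYSTEM
    for speaker, msg in turns:
        text += f"The {speaker} says: {msg}\n"
    text += "The AI replies:"  # the model autocompletes from here
    return text

prompt = render_transcript([("user", "Is 1kg of steel heavier than 5kg of feathers?")])
print(prompt)
```

Each new user message gets appended and the whole growing "story" is fed back in for another round of autocomplete - rinse and repeat, exactly as described.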
Nobody knows the answer anymore.
Because of woke
Did you just assume Alice's gender?
During Pride month?
Indeed it will. I think that's part of the problem with AI right now: there are basically two camps (beyond the ethical problems, which are vast). They are:
1. AI is getting better all the time, and soon there will be no problem it can't solve. Get on the AI train right now, or be left behind. All business use cases must move over to AI, and every person needs to keep up with the biggest releases or be rendered obsolete.
2. AI doesn't reason, doesn't know anything, doesn't create knowledge, and is therefore useless. Look here, I have a handful of use cases that AI isn't very good at, watch it fail. Also, look at these hallucinations. This is literally, completely, and utterly useless.
In my experience, the utility really is in between those two extremes. Like, being able to produce answers from a vast store of knowledge, in response to natural language prompts, and to "reason" out patterns from that store for new use cases is very useful for certain things. For me, I pair program with an AI much of the day: I tell it what functions I want it to write, it writes them, I tell it how to modify them to reflect my use case more specifically, and I do manual modifications if need be, but it certainly accelerates my coding by a lot. No, it's not correct all of the time, but neither is Google, neither is asking your professor, neither is learning from your dear Granny. Critical thinking and testing for oneself is still needed.
Like most new technologies, it's not a solution to every problem; a large part of being able to use it is to actually know the relevant use cases, and to know if it fits (also, not inventing a use case just so you have an excuse to use AI).
Here's another example of it not actually understanding anything:
On my way to St. Ives I saw a man with 7 wives. Each wife had 7 sacks. Each sack had 7 cats. Each cat had 7 kittens. Kitten, cats, sacks, wives. How many were going to St. Ives?
ChatGPT
The riddle "On my way to St. Ives" is a classic puzzle that plays on wordplay and misdirection. Here's the breakdown:
You encountered a man with 7 wives.
Each wife had 7 sacks.
Each sack had 7 cats.
Each cat had 7 kittens.
The riddle asks how many were going to St. Ives. The trick lies in the initial statement "On my way to St. Ives," which suggests that you are the one going to St. Ives. The rest of the details about the man, his wives, sacks, cats, and kittens are simply distractions meant to confuse.
Therefore, the answer to how many were going to St. Ives is 1 (you, the person telling the story).
And here I slightly change the riddle and it fucks it up:
On my way to St. Ives I saw a man going to St. Ives with 7 wives. Each wife had 7 sacks. Each sack had 7 cats. Each cat had 7 kittens. Kitten, cats, sacks, wives. How many were going to St. Ives?
ChatGPT
Let's break down the riddle step by step:
You saw a man going to St. Ives.
The man had 7 wives.
Each wife had 7 sacks.
Each sack had 7 cats.
Each cat had 7 kittens.
The riddle asks how many were going to St. Ives. This time, the key detail is that you saw the man going to St. Ives, but there is no mention of whether the wives, sacks, cats, or kittens were also going there.
Since the riddle specifies that you saw only the man going to St. Ives, and there is no indication that anyone else mentioned (wives, sacks, cats, or kittens) were also going to St. Ives, the answer remains:
1 person (the man you saw going to St. Ives).
It produces a very verbose, apparently well-reasoned explanation that is completely incorrect and devoid of understanding, but simply has the APPEARANCE of thought.
Every single one of these "AI" algorithms is being pursued as a new avenue for management to have yet more ways to exploit and undercompensate labor.
Never mind the incredibly negative impact on the environment these algorithms have, both in water consumption and power consumption. So even if the transparently false narratives these snake oil peddlers are trying to manufacture consent with had some truth to them, there is a real and deleterious cost that is simply not worth paying.
That guy does some fun blacksmithing projects. He also recreated the Elden Ring throwing daggers recently, and seeing the real version of them helped make a lot of sense of how they are supposed to function.
lin-manuel miranda's large language model multi-level marketing for men-loving-men
The thing about the power usage (along with the environmental stuff) is that it represents a cost the companies are tanking with the plan of eventually breaking even somehow. Therefore, when imagining the "future" of AI, you have to imagine a cost per inquiry. So the question becomes not just "how useful is this information?" or "how truthful is this information?" but also "how much would people be willing to spend for this?" I don't see the individual person paying the per-inquiry cost.
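The per-inquiry framing above can be made concrete with back-of-envelope arithmetic. Every figure below is a made-up placeholder assumption, not a measured or reported value - the point is only the shape of the calculation:

```python
# Back-of-envelope per-query cost model. Every number here is a
# hypothetical placeholder assumption, NOT a real measured figure.
capex_per_year = 1_000_000_000        # hypothetical: amortized hardware/datacenter $/yr
energy_per_query_kwh = 0.003          # hypothetical: energy per inference
price_per_kwh = 0.10                  # hypothetical: electricity price $/kWh
queries_per_year = 100_000_000_000    # hypothetical: query volume

cost_per_query = (capex_per_year / queries_per_year
                  + energy_per_query_kwh * price_per_kwh)
print(f"${cost_per_query:.4f} per query")  # under these assumptions: $0.0103
```

Fractions of a cent per query sounds tiny until it's multiplied by billions of queries with no per-query revenue attached - which is the break-even problem being described.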
Because in these corners they already want to stick their dicks into things they don't understand on a conceptual or practical level.
Edit: perfect totp to ya
We're better than this
Obviously gender can be more complicated than that, but there's an implicit gender given by referring to Alice as she.
Lulti-Level Marketing.
Expansive Oration Dispensing
https://bsky.app/profile/theuglymachine.bsky.social/post/3kv2zhksxnb2j
edit: oh bluesky doesn't embed, lemme post the image
https://jacobin.com/2024/06/ai-data-center-energy-usage-environment
anyway just something to think about as yous keep "testing" ai to say how much it sucks/doesn't suck