
How Do Lawyers Work in the [Twitter] Thread: now with (failed) Bolivian Coups


Posts

  • Death of Rats Registered User regular
    edited June 24
    AI porn seems like it would have a huge consumer base in the most toxic of corners.

    Because in these corners they already want to stick their dicks into things they don't understand on a conceptual or practical level.

    Edit: perfect totp to ya

    Death of Rats on
    No I don't.
  • Mc zany Registered User regular
    edited June 24
    Lanz wrote: »
    hell they can't even understand the concept of truth, they literally don't know what that means

    But can they Understand… UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    THE CONCEPT OF LOVE

    UNDERSTAND
    UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    THE CONCEPT OF LOVE

    UNDERSTAND
    UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    UNDERSTAND
    UNDERSTAND

    THE CONCEPT

    THE CONCEPT

    OF LOVE


    Ready to kick some.... ready to kick some ass....

    Mc zany on
  • Neveron HellValleySkyTree Sweden Registered User regular
    so was the "we do not understand the concept of love" sample (from a Black Panther Party speech) on a CD or something, or did Hideki Naganuma just like to sample civil rights activist speeches (see also: Wrapped in Black sampling Malcolm X)

  • Grislo Registered User regular
    tynic wrote: »
    Never give an ape a wank computer

    ygvk7ec40grx.jpg

    This post was sponsored by Tom Cruise.
  • Grislo Registered User regular
    I just needed to get that out of my system.

    This post was sponsored by Tom Cruise.
  • Taramoor Storyteller Registered User regular
    Mortal Sky wrote: »
    only the finest hand-drawn hentai for me, all of Methonium's stuff was drawn pre-AI anyways

    It's all fantasy, none of the women in porn are real and if they are real I'll never meet them, and if I do meet them they won't look, sound, or act like that, and they definitely won't deserve to be treated like that.

    So, whatever strokes your boat.

  • proxy_hue Registered User regular
    Lanz wrote: »
    hell they can't even understand the concept of truth, they literally don't know what that means

    We all get this I'm sure, but I'd just like to add that LLMs can't even "know" anything, they are models that do not have a capacity for knowledge. A quadratic equation does not know what a line is because it does not know at all.

  • Kupi Registered User regular
    Saw this linked in a discussion around AI: https://arxiv.org/abs/2406.02061

    Essentially: most LLMs can't even correctly handle a brainteaser as simple as something in the format "Alice has 5 brothers and she has 6 sisters. How many sisters does Alice's brother have?"

    They don't know anything.
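
    For what it's worth, the brainteaser itself only takes one step of counting: Alice's brother has her 6 sisters plus Alice herself, so 7. A throwaway Python sketch of that arithmetic (names are purely illustrative), just to underline how little the question actually demands:

        # The "Alice" brainteaser from the paper linked above.
        # Alice has 5 brothers and 6 sisters; all siblings share the same family.
        alices_brothers = 5   # the distractor number the models tend to latch onto
        alices_sisters = 6

        # From any brother's point of view, his sisters are Alice's sisters plus Alice.
        sisters_of_each_brother = alices_sisters + 1
        print(sisters_of_each_brother)  # 7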

    My favorite musical instrument is the air-raid siren.
  • burbo Registered User regular
    For the record, I just put that brain teaser in GPT4o, and it correctly got the answer. So while it is true that they don't "know" things, the patterns they can recognize are getting broader and more complex.

    Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.

  • Brolo Broseidon Lord of the Brocean Registered User regular
    burbo wrote: »
    For the record, I just put that brain teaser in GPT4o, and it correctly got the answer. So while it is true that they don't "know" things, the patterns they can recognize are getting broader and more complex.

    Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.

    yeah if it's a solution that exists, or has quantifiable permutations on an existing solved problem, then it will just modify and repeat that answer

  • Neveron HellValleySkyTree Sweden Registered User regular
    Modern "AI" is, in some ways, just an incredibly space-efficient compression method. It's also incredibly lossy, though, so generally the only times it's going to be spitting out exact copies of its training data is if the inputs are common enough in that data - e.g. the Mona Lisa for image generation, specific bible verses for text. They're "overtrained" and those specific patterns get burned in the math driving the whole thing.

    This can become extremely obvious when you pull small twists on the common input data, so e.g. for text if you ask a common question/riddle but change some important factor it'll likely repeat the common answer to the unaltered riddle. Ask if 1kg of steel is heavier than 5kg of feathers and you'll likely get the response that they're the same weight, for instance.

    This also means that you can't judge an LLM's "intelligence" by asking it questions it's been trained on. It quite literally has the cheat sheet built into the math if it's a common enough question, which a lot of the ones published in psychology papers are! You need to make your own novel questions, except of course that if you publish those online, they only work until your work is trained into the next version of the model...
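
    A minimal sketch of that kind of probing, in Python. The query_model() stub here is hypothetical - swap in whatever chat API or local model you actually use; the point is just pairing a stock riddle with a perturbed variant and checking whether the answer changes:

        # Probe a model with a stock riddle and a perturbed variant, to see whether it
        # pattern-matches the memorized answer or actually reads the numbers.
        # `query_model` is a placeholder for a real chat API call.

        def query_model(prompt: str) -> str:
            raise NotImplementedError("wire this up to your chat API of choice")

        PROBES = [
            # (prompt, phrase the memorized/wrong answer tends to contain)
            ("Which is heavier, 1kg of steel or 1kg of feathers?", "the same"),   # stock version
            ("Which is heavier, 1kg of steel or 5kg of feathers?", "the same"),   # perturbed version
        ]

        def run_probes() -> None:
            for prompt, memorized_tell in PROBES:
                answer = query_model(prompt)
                looks_memorized = memorized_tell.lower() in answer.lower()
                print(f"{prompt}\n  -> {answer}\n  -> looks memorized: {looks_memorized}\n")

        # run_probes()  # uncomment once query_model is connected to something real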

  • Mortal Sky queer punk hedge witch Registered User regular
    Taramoor wrote: »
    Mortal Sky wrote: »
    only the finest hand-drawn hentai for me, all of Methonium's stuff was drawn pre-AI anyways

    It's all fantasy, none of the women in porn are real and if they are real I'll never meet them, and if I do meet them they won't look, sound, or act like that, and they definitely won't deserve to be treated like that.

    So, whatever strokes your boat.

    maybe I'm just lucky, but, ah, hoo boy have I been blessed by the Jehovah's Thicknesses or something

  • Houk the Namebringer Nipples The Echidna Registered User regular
    Brolo wrote: »
    burbo wrote: »
    For the record, I just put that brain teaser in GPT4o, and it correctly got the answer. So while it is true that they don't "know" things, the patterns they can recognize are getting broader and more complex.

    Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.

    yeah if it's a solution that exists, or has quantifiable permutations on an existing solved problem, then it will just modify and repeat that answer

    but also if you keep futzing with it enough you can convince it to give you the wrong answer sooner or later, even after getting it right the first time

  • Captain Inertia Central Ohio Registered User regular
    Jehovah’s Thiccness

    l7ygmd1dd4p1.jpeg
    3b2y43dozpk3.jpeg
  • ChicoBlue Registered User regular
    A couple of weeks ago it was giving me wrong answers because it was calculating Tan and Cotan incorrectly.

  • Jedoc In the scuppers with the staggers and jags Registered User regular
    And you don't mess with the Cotan.

    GDdCWMm.jpg
  • Shadowfire Vermont, in the middle of nowhere Registered User regular
    Jedoc wrote: »
    And you don't mess with the Cotan.

    The settlers there have wood for sheep.

  • djmitchella Registered User regular
    Neveron wrote: »
    This can become extremely obvious when you pull small twists on the common input data, so e.g. for text if you ask a common question/riddle but change some important factor it'll likely repeat the common answer to the unaltered riddle. Ask if 1kg of steel is heavier than 5kg of feathers and you'll likely get the response that they're the same weight, for instance.

    I just tried it, and it gets the right answer, but is insane along the way:
    (my question: ) which is heavier, a pound of feathers, or a kilogram of lead

    ChatGPT
    A kilogram of lead is heavier than a pound of feathers.

    Here’s the reasoning:

    * A kilogram is a unit of mass in the metric system, equivalent to approximately 2.2 pounds.
    * A pound (lb) is a unit of weight used in the imperial system.
    * Despite the confusion that can arise from the word "pound" being used in both systems, a kilogram is a larger unit of mass than a pound.
    * Therefore, even though a pound of feathers and a kilogram of lead both weigh the same (approximately 2.2 pounds), the kilogram of lead has a greater mass.

    Final answer is correct. First point is fine. Second point is fine. Third point, uh, no, "pound" is not used in the metric system. Fourth point: no, "a pound of feathers and a kilogram of lead" do _not_ weigh the same. But somehow it ignored the first half of that sentence and gave me the right answer in the end?
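
    For reference, the comparison it mangled is one line of arithmetic (using the standard 0.45359237 kg per avoirdupois pound), sketched in Python:

        KG_PER_POUND = 0.45359237                 # kilograms in one pound

        kilogram_of_lead = 1.0                    # kg
        pound_of_feathers = 1.0 * KG_PER_POUND    # kg, roughly 0.454

        print(kilogram_of_lead > pound_of_feathers)   # True: the kilogram is heavier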

    And it still can't do math, even though it uses some sort of more-serious serif-y font for the answer:
    pdq8e353tzcq.png
    k46wrdx2v9lm.png

  • Lanz ...Za? Registered User regular
    edited June 24
    I mean the ultimate problem is an LLM isn’t reasoning anything out, it’s just weighing what seems to be the most likely response from its training set; it’s doing a math, but it’s not the math you asked for, it’s the math required to do “Received query; given dataset, what is the most likely response? [calculate] [generate and submit likeliest response given input as limited by data set]”

    You’d have to design an entirely different kind of system to reason things out. It’s a mildly fucked and oft-hallucinatory equivalent of memorization (yet not even quite that, really)
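
    (A toy version of "the math it's actually doing", with a handful of made-up words and hand-written probabilities instead of a trained model - but each generation step is the same shape: look at the context, pick a likely next token, append, repeat.)

        import random

        # Toy "language model": for each current word, a hand-written probability table
        # over possible next words. A real LLM learns these probabilities over tens of
        # thousands of tokens from its training set, but every step is still "given what
        # came before, sample a likely continuation".
        NEXT_WORD_PROBS = {
            "<start>": {"the": 0.6, "a": 0.4},
            "the":     {"answer": 0.5, "riddle": 0.5},
            "a":       {"riddle": 0.7, "cat": 0.3},
            "answer":  {"is": 1.0},
            "is":      {"obvious": 1.0},
            "riddle":  {"<end>": 1.0},
            "cat":     {"<end>": 1.0},
            "obvious": {"<end>": 1.0},
        }

        def generate(max_steps: int = 10) -> str:
            words = ["<start>"]
            for _ in range(max_steps):
                table = NEXT_WORD_PROBS[words[-1]]
                choices, weights = zip(*table.items())
                nxt = random.choices(choices, weights=weights)[0]
                if nxt == "<end>":
                    break
                words.append(nxt)
            return " ".join(words[1:])

        print(generate())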

    Lanz on
    waNkm4k.jpg?1
  • Opty Registered User regular
    Brolo wrote: »
    burbo wrote: »
    For the record, I just put that brain teaser in GPT4o, and it correctly got the answer. So while it is true that they don't "know" things, the patterns they can recognize are getting broader and more complex.

    Or, possibly, this solution exists out there somewhere in something that it scraped, and it's just straight up pulling that out.

    yeah if it's a solution that exists, or has quantifiable permutations on an existing solved problem, then it will just modify and repeat that answer

    but also if you keep futzing with it enough you can convince it to give you the wrong answer sooner or later, even after getting it right the first time

    Yeah, you can tell it the right answer is wrong and it'll "apologize" and generate something else instead of being capable of going "no, this is the right answer, and here's why"

  • Poorochondriac Ah, man Ah, jeez Registered User regular
    I know that because we often discuss AI matters in here we casually throw in a lot of jargon, but if anyone is new or trying to catch up, "LLM" is short for Lin-Lanuel Miranda

  • proxy_hue Registered User regular
    Lanz wrote: »
    I mean the ultimate problem is an LLM isn’t reasoning anything out, it’s just weighing what seems to be the most likely response from its training set; it’s doing a math, but it’s not the math you asked for, it’s the math required to do “Received query; given dataset, what is the most likely response? [calculate] [generate and submit likeliest response given input as limited by data set]”

    You’d have to design an entirely different kind of system to reason things out. It’s a mildly fucked and oft-hallucinatory equivalent of memorization (yet not even quite that, really)

    The thing that fucked me up about this is that there's technically a wrapper on the whole thing. It's essentially all a story to be auto-completed; the LLM starts with something like (very simplified): "What follows is a conversation between a user and a very helpful AI. The user says:" (you fill in this blank) "And the AI replies..." and it auto-completes that section, rinse and repeat.

    It just feels so... clearly a sham as a framework; the amount of shamelessness required to argue that "oh yeah this is a person and it's talking to you, plus it really knows how to answer, believe me" is galling.
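
    A rough sketch of what that wrapper looks like as plain text. The real templates are model-specific (and use special tokens rather than prose like this), so treat this as the shape of the thing, not any vendor's actual format:

        # Build the "story to be auto-completed": a transcript that the model is asked
        # to continue after the final "AI:" line. This framing is what makes a text
        # completer look like a conversation partner.
        SYSTEM_PREAMBLE = "What follows is a conversation between a user and a very helpful AI assistant."

        def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
            lines = [SYSTEM_PREAMBLE]
            for speaker, text in history:
                lines.append(f"{speaker}: {text}")
            lines.append(f"User: {user_message}")
            lines.append("AI:")   # the model's job is simply to fill in what comes next
            return "\n".join(lines)

        print(build_prompt([("User", "hello"), ("AI", "Hi! How can I help?")],
                           "which is heavier, a pound of feathers or a kilogram of lead?"))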

  • Tox I kill threads he/him Registered User regular
    Kupi wrote: »
    Saw this linked in a discussion around AI: https://arxiv.org/abs/2406.02061

    Essentially: most LLMs can't even correctly handle a brainteaser as simple as something in the format "Alice has 5 brothers and she has 6 sisters. How many sisters does Alice's brother have?"

    They don't know anything.

    Nobody knows the answer anymore.

    Because of woke
    Did you just assume Alice's gender?
    During Pride month?

    We're better than this
    It's a shitpost

    Twitter! | Dilige, et quod vis fac
  • Straightzi Here we may reign secure, and in my choice, To reign is worth ambition though in Hell Registered User regular
    I think the use of the feminine pronoun to refer to Alice in the question was specifically there for that intent.

    Obviously gender can be more complicated than that, but there's an implicit gender given by referring to Alice as she.

  • Shadowfire Vermont, in the middle of nowhere Registered User regular
    Poorochondriac wrote: »
    I know that because we often discuss AI matters in here we casually throw in a lot of jargon, but if anyone is new or trying to catch up, "LLM" is short for Lin-Lanuel Miranda

    Lulti-Level Marketing.

  • burbo Registered User regular
    Indeed it will. I think that's part of the problem with AI right now: there are basically two camps (beyond the ethical problems, which are vast). They are:

    1. AI is getting better all the time, and soon there will be no problem it can't solve. Get on the AI train right now, or be left behind. All business use cases must move over to AI, and every person needs to keep up with the biggest releases or be rendered obsolete.
    2. AI doesn't reason, doesn't know anything, doesn't create knowledge, and is therefore useless. Look here, I have a handful of use cases that AI isn't very good at, watch it fail. Also, look at these hallucinations. This is literally, completely, and utterly useless.

    In my experience, the utility really is in between those two extremes. Like, being able to produce answers from a vast store of knowledge, in response to natural language prompts, and to "reason" out patterns from that store for new use cases is very useful for certain things. Like, for me, I pair program with an AI much of the day. I tell it what functions I want it to write, it writes them, I tell it how to modify them to reflect my use case more specifically, I do manual modifications if need be, but it certainly accelerates my coding by a lot. No, it's not correct all of the time, but neither is Google, neither is asking your professor, neither is learning from your dear Granny. Critical thinking and testing for oneself is still needed.

    Like most new technologies, it's not a solution to every problem; a large part of being able to use it is actually knowing the relevant use cases and whether it fits (also, not inventing a use case just so you have an excuse to use AI).

  • Knight_ Dead Dead Dead Registered User regular
    honestly on brand. nobody makes any money on it except the guys at the top, and the product being pushed is mostly useless.

    aeNqQM9.jpg
  • Garthor Registered User regular
    edited June 24
    Here's another example of it not actually understanding anything:
    On my way to St. Ives I saw a man with 7 wives. Each wife had 7 sacks. Each sack had 7 cats. Each cat had 7 kittens. Kitten, cats, sacks, wives. How many were going to St. Ives?
    ChatGPT
    The riddle "On my way to St. Ives" is a classic puzzle that plays on wordplay and misdirection. Here's the breakdown:

    You encountered a man with 7 wives.
    Each wife had 7 sacks.
    Each sack had 7 cats.
    Each cat had 7 kittens.

    The riddle asks how many were going to St. Ives. The trick lies in the initial statement "On my way to St. Ives," which suggests that you are the one going to St. Ives. The rest of the details about the man, his wives, sacks, cats, and kittens are simply distractions meant to confuse.

    Therefore, the answer to how many were going to St. Ives is 1 (you, the person telling the story).

    And here I slightly change the riddle and it fucks it up:
    On my way to St. Ives I saw a man going to St. Ives with 7 wives. Each wife had 7 sacks. Each sack had 7 cats. Each cat had 7 kittens. Kitten, cats, sacks, wives. How many were going to St. Ives?
    ChatGPT
    Let's break down the riddle step by step:

    You saw a man going to St. Ives.
    The man had 7 wives.
    Each wife had 7 sacks.
    Each sack had 7 cats.
    Each cat had 7 kittens.

    The riddle asks how many were going to St. Ives. This time, the key detail is that you saw the man going to St. Ives, but there is no mention of whether the wives, sacks, cats, or kittens were also going there.

    Since the riddle specifies that you saw only the man going to St. Ives, and there is no indication that anyone else mentioned (wives, sacks, cats, or kittens) were also going to St. Ives, the answer remains:

    1 person (the man you saw going to St. Ives).

    It produces a very verbose, apparently well-reasoned explanation that is completely incorrect and devoid of understanding, but simply has the APPEARANCE of thought.
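
    (For the altered riddle, the defensible answer depends on whether you count the sacks and the narrator, but it is definitely not 1. A quick tally:)

        # Tally for the altered St. Ives riddle, where the man and his entourage
        # ARE stated to be going to St. Ives. Whether the sacks or the narrator
        # count is a matter of interpretation, but "1" isn't a defensible answer.
        wives   = 7
        sacks   = wives * 7     # 49
        cats    = sacks * 7     # 343
        kittens = cats * 7      # 2401

        living_travellers = 1 + wives + cats + kittens   # the man plus wives, cats, kittens
        print(living_travellers)                         # 2752
        print(living_travellers + sacks + 1)             # 2802 if you count the sacks and the narrator too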

    Garthor on
  • Commander Zoom Registered User regular
    Shadowfire wrote: »
    I know that because we often discuss AI matters in here we casually throw in a lot of jargon, but if anyone is new or trying to catch up, "LLM" is short for Lin-Lanuel Miranda

    Lulti-Level Marketing.

    Expansive Oration Dispensing

  • DarkPrimus Registered User regular
    edited June 24
    burbo wrote: »
    Indeed it will. I think that's part of the problem with AI right now: there are basically two camps (beyond the ethical problems, which are vast). They are:

    1. AI is getting better all the time, and soon there will be no problem it can't solve. Get on the AI train right now, or be left behind. All business use cases must move over to AI, and every person needs to keep up with the biggest releases or be rendered obsolete.
    2. AI doesn't reason, doesn't know anything, doesn't create knowledge, and is therefore useless. Look here, I have a handful of use cases that AI isn't very good at, watch it fail. Also, look at these hallucinations. This is literally, completely, and utterly useless.

    In my experience, the utility really is in between those two extremes. Like, being able to produce answers from a vast store of knowledge, in response to natural language prompts, and to "reason" out patterns from that store for new use cases is very useful for certain things. Like, for me, I pair program with an AI much of the day. I tell it what functions I want it to write, it writes them, I tell it how to modify them to reflect my use case more specifically, I do manual modifications if need be, but it certainly accelerates my coding by a lot. No, it's not correct all of the time, but neither is Google, neither is asking your professor, neither is learning from your dear Granny. Critical thinking and testing for oneself is still needed.

    Like most new technologies, it's not a solution to every problem; a large part of being able to use it is actually knowing the relevant use cases and whether it fits (also, not inventing a use case just so you have an excuse to use AI).

    Every single one of these "AI" algorithms is being pursued as a new avenue for management to have yet more ways to exploit and undercompensate labor.

    Never mind the incredibly negative impact these algorithms have on the environment, in both water and power consumption. So even if the transparently false narratives these snake oil peddlers are trying to manufacture consent with had some truth to them, there is a real and deleterious cost that is simply not worth paying.

    DarkPrimus on
  • asofyeun Registered User regular
    edited June 24
    behold, dream made true
    https://bsky.app/profile/theuglymachine.bsky.social/post/3kv2zhksxnb2j

    edit: oh bluesky doesn't embed, lemme post the image
    cxqcvawxcge4.jpg

    asofyeun on
  • FCD Registered User regular
    The cool S is eternal.

    Gridman! Baby DAN DAN! Baby DAN DAN!
  • Brolo Broseidon Lord of the Brocean Registered User regular
    that is a cool 07fLySAt.png word

  • Straightzi Here we may reign secure, and in my choice, To reign is worth ambition though in Hell Registered User regular
    That guy does some fun blacksmithing projects. He also recreated the Elden Ring throwing daggers recently, and seeing the real version of them helped make sense of how they're supposed to function.

  • GR_Zombie Krillin It Registered User regular
    Billy Bat but it’s the cool S

  • Garlic Bread i'm a bitch i'm a bitch i'm a bitch i'm a Registered User, Disagreeable regular
    Poorochondriac wrote: »
    I know that because we often discuss AI matters in here we casually throw in a lot of jargon, but if anyone is new or trying to catch up, "LLM" is short for Lin-Lanuel Miranda

    lin-manuel miranda's large language model multi-level marketing for men-loving-men

  • Garlic Bread i'm a bitch i'm a bitch i'm a bitch i'm a Registered User, Disagreeable regular
    "The Hidden Environmental Impact of AI"
    https://jacobin.com/2024/06/ai-data-center-energy-usage-environment

    yxk5vc2pa5bd.jpg

    anyway just something to think about as yous keep "testing" ai to say how much it sucks/doesn't suck

  • KalTorak One way or another, they all end up in the Undercity. Registered User regular
    mympodcj7ll8.png

  • nightmarenny Registered User regular
    The thing about the power usage (along with the environmental stuff) is that it represents a cost the companies are tanking with the plan of eventually breaking even somehow. Therefore, when imagining the "future" of AI you have to imagine a per-inquiry cost. So the question becomes not just "how useful is this information" or "how truthful is this information" but also "how much would people be willing to pay for this?" I don't see the individual person paying the per-inquiry cost.

    Help me raise a little cash for my transition costs
    https://gofund.me/fa5990a5
This discussion has been closed.