Yeah but what if they harvest bot names from actual users?
Worse, what if they specifically harvest bot names from actual users who were recently in bot matches? Then if you do see the name again, not only is it a real person, they're not good enough for it to be completely obvious which iteration was real and which was a bot.
There's an easy way to tell - are they spamming obscenities and slurs in general chat? Probably human unless bot makers start programming bots to do that too.
Good news! You don't really have to program that. Just plop an LLM on the wider Internet and the training data takes care of it for you.
Oh sweet. Man-made horrors within comprehension!
Our grandfathers made the nuclear bomb. We made a computer that tells you to eat rocks.
The part that still gets me is that people trust a souped-up predictive text algorithm to give accurate answers when its training data is a polluted cesspit called the internet.
Really drives home just how naive and lazy a large chunk of humanity is. May as well ask 4chan for life-planning advice.
And when the current name finally gets nuked, they can work their way through various incarnations of "Diddys1000BabyOil".
Nice to see Microsoft Tay still getting work.
Over decades, with constant feedback, no resets, and (at least for most people) consequences.
But then they whine that you banned TikTok.
Also, humans use that training data to actually understand concepts, not just linkages between words.