
Elizabeth Warren’s proposal to break up [Tech Monopolies]


Posts

  • ElJeffe Registered User, ClubPA regular
    redx wrote: »
    Cool. Deep learning really should die. "we don't have any fucking clue what the algorithm actually is" isn't really an acceptable answer for thing that involve people.

    Fuck no, deep learning is amazing and can give us new avenues for exploration and understanding. The problem isn't that deep learning and other fancy algorithms exist; it's that they're used on human populations without understanding the side effects.

    Would you say I had a plethora of pinatas?
  • mrondeau Montréal, Canada Registered User regular
    Well, you can use deep learning to see if bios supposedly scrubbed of information about any sensitive classes (gender identity, race, etc.) have actually been scrubbed. They usually have not.

    You can also talk to experts in that specific area. They even have entire journals on the topic.
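    Concretely, you can treat it as a probe: if a classifier can recover the protected class from the "scrubbed" text at better than the majority-class baseline, the scrubbing failed. A minimal sketch, assuming scikit-learn and your own texts/labels (the function name here is made up):

        from sklearn.dummy import DummyClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        def scrubbing_leak_test(texts, protected):
            """Probe accuracy vs. majority baseline on held-out scrubbed texts."""
            X_tr, X_te, y_tr, y_te = train_test_split(
                texts, protected, test_size=0.25, random_state=0)
            vec = TfidfVectorizer()
            probe = LogisticRegression(max_iter=1000)
            probe.fit(vec.fit_transform(X_tr), y_tr)
            base = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
            probe_acc = accuracy_score(y_te, probe.predict(vec.transform(X_te)))
            base_acc = accuracy_score(y_te, base.predict(X_te))
            # probe_acc well above base_acc means the protected class is
            # still recoverable from the "scrubbed" text.
            return probe_acc, base_acc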

  • ElJeffe Registered User, ClubPA regular
    edited April 2019
    HamHamJ wrote: »
    Phyphor wrote: »
    schuss wrote: »
    That I'm largely fine with, as AI/ML needs to be explainable.

    It really can't be, as nobody understands why something does or does not happen. And if AI ever does come along, it will be so complex as to be entirely unexplainable.

    You should be able to evaluate the results, however, and check things like whether your hiring AI hires any women.

    See, this is the sort of thing that machine learning can help with. If you create a hiring algorithm that doesn't hire women, that offers a chance to figure out why. Did you build inherent biases into the algorithm? Are you selecting for something that favors men over women?

    It would seem like this situation would present an excellent opportunity to explore what kind of biases exist in the system and possibly more effective ways to counter them, potentially at the root level.
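    Even a crude audit gets you started: compare selection rates by group, then compare each screening feature across groups to see where the gap is coming from. A sketch, assuming pandas and hypothetical column names:

        import pandas as pd

        def selection_rates(df, group_col="gender", decision_col="hired"):
            """First pass: how often does each group get selected?"""
            return df.groupby(group_col)[decision_col].mean()

        def feature_gap(df, feature, group_col="gender"):
            """Second pass: does a feature the screen relies on differ by
            group? A big gap here is a candidate explanation for the bias."""
            return df.groupby(group_col)[feature].mean()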

    ElJeffe on
    Would you say I had a plethora of pinatas?
  • mrondeau Montréal, Canada Registered User regular
    ElJeffe wrote: »
    HamHamJ wrote: »
    Phyphor wrote: »
    schuss wrote: »
    That I'm largely fine with, as AI/ML needs to be explainable.

    It really can't be, as nobody understands why something does or does not happen. And if AI ever does come along, it will be so complex as to be entirely unexplainable.

    You should be able to evaluate the results, however, and check things like whether your hiring AI hires any women.

    See, this is the sort of thing that machine learning can help with. If you create a hiring algorithm that doesn't hire women, that's an excellent opportunity to figure out why. Did you build inherent biases into the algorithm? Are you selecting for something that favors men over women?

    It would seem like this situation would present an excellent opportunity to explore what kind of biases exist in the system and possibly more effective ways to counter them, potentially at the root level.

    There's a paper at NAACL you should probably read: https://arxiv.org/abs/1904.05233

  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    mrondeau wrote: »
    Well, you can use deep learning to see if bios supposedly scrubbed of information about any sensitive classes (gender identity, race, etc.) have actually been scrubbed. They usually have not.

    You can also talk to experts in that specific area. They even have entire journals on the topic.

    In one of the examples, the algorithm selected for verbiage used more by men; good luck erasing that.

  • mrondeau Montréal, Canada Registered User regular
    Phyphor wrote: »
    mrondeau wrote: »
    Well, you can use deep learning to see if bios supposedly scrubbed of information about any sensitive classes (gender identity, race, etc.) have actually been scrubbed. They usually have not.

    You can also talk to experts in that specific area. They even have entire journals on the topic.

    In one of the examples, the algorithm selected for verbiage used more by men; good luck erasing that.

    Yes, so you can test the procedure you use to remove biases from the information you are using to make hiring decisions.

    Instead of the status quo, where any conscious or unconscious biases are fully informed.

    Removing names from CVs during filtering does not work. This will help identify all the remaining features that predict protected classes.
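    One way to identify them: fit a simple probe that predicts the protected class from the "anonymized" CVs and read off which terms carry the signal. A sketch, assuming scikit-learn (the variable names are placeholders):

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        def leaking_terms(texts, protected, top_k=20):
            """Terms that still predict the protected class after scrubbing."""
            vec = CountVectorizer(min_df=5)
            X = vec.fit_transform(texts)
            probe = LogisticRegression(max_iter=1000).fit(X, protected)
            terms = np.asarray(vec.get_feature_names_out())
            order = np.argsort(np.abs(probe.coef_[0]))[::-1]
            # these are the markers to remove or neutralize next
            return list(terms[order[:top_k]])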

  • This content has been removed.

  • schuss Registered User regular
    Oghulk wrote: »
    Deep learning has the same issues that a lot of systems do: garbage in, garbage out

    Yep, it isn't magic. It's only as good as the quality, volume, and dimensionality of the contextual data, combined with proper tuning.
    As for CVs: you could theoretically lemmatize and retranslate, but that would be crappy. Really, we should hire people differently, as one page of qualifications doesn't have any chance of properly representing someone.

  • ElJeffe Registered User, ClubPA regular
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.
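    Something like this, give or take stop-word filtering. A deliberately crude sketch of what a lot of screening boils down to:

        import re

        def buzzword_score(resume, job_description):
            """Fraction of job-description terms that also appear in the resume."""
            words = lambda text: set(re.findall(r"[a-z]+", text.lower()))
            jd = words(job_description)
            return len(jd & words(resume)) / max(len(jd), 1)

        # e.g. rank applicants by buzzword_score and cut the bottom 80%.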

    Would you say I had a plethora of pinatas?
  • Clipse Registered User regular
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    I would also point out here that "debiasing hiring" sounds like an unalloyed good on paper, but reality is not so clear. From first-hand experience: if my company lists an engineering position (and puts serious effort into debiasing the wording of the listing, etc.), we typically get maybe 10-15% female applicants. A truly unbiased filtering would end up propagating that percentage up to the applications reviewed by humans (HR, hiring managers, etc.). The pretense that biased hiring practices are the sole cause of under-representation of certain groups in tech is adored by major tech companies, because it means they can throw a bunch of, essentially, high-tech bullshit at the problem and make it go away (or, more importantly, make people think it has gone away). "Let's just run resumes through an algorithm and claim we've solved discrimination" is a lot more attractive than actually trying to solve discrimination, and actually solving discrimination is a lot more complicated than just improving how we review resumes.

  • Zek Registered User regular
    Clipse wrote: »
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    I would also point out here that "debiasing hiring" sounds like an unalloyed good on paper, but reality is not so clear. From first-hand experience: if my company lists an engineering position (and puts serious effort into debiasing the wording of the listing, etc.), we typically get maybe 10-15% female applicants. A truly unbiased filtering would end up propagating that percentage up to the applications reviewed by humans (HR, hiring managers, etc.). The pretense that biased hiring practices are the sole cause of under-representation of certain groups in tech is adored by major tech companies, because it means they can throw a bunch of, essentially, high-tech bullshit at the problem and make it go away (or, more importantly, make people think it has gone away). "Let's just run resumes through an algorithm and claim we've solved discrimination" is a lot more attractive than actually trying to solve discrimination, and actually solving discrimination is a lot more complicated than just improving how we review resumes.

    There is no hiring practice that will create a 50/50 balance in tech; the root of that problem lies elsewhere. I don't think anybody would claim otherwise. But that doesn't mean there isn't gender discrimination happening in hiring as well.

  • ElJeffe Registered User, ClubPA regular
    Zek wrote: »
    Clipse wrote: »
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    I would also point out here that "debiasing hiring" sounds like an unalloyed good on paper, but reality is not so clear. From first-hand experience: if my company lists an engineering position (and puts serious effort into debiasing the wording of the listing, etc.), we typically get maybe 10-15% female applicants. A truly unbiased filtering would end up propagating that percentage up to the applications reviewed by humans (HR, hiring managers, etc.). The pretense that biased hiring practices are the sole cause of under-representation of certain groups in tech is adored by major tech companies, because it means they can throw a bunch of, essentially, high-tech bullshit at the problem and make it go away (or, more importantly, make people think it has gone away). "Let's just run resumes through an algorithm and claim we've solved discrimination" is a lot more attractive than actually trying to solve discrimination, and actually solving discrimination is a lot more complicated than just improving how we review resumes.

    There is no hiring practice that will create a 50/50 balance in tech; the root of that problem lies elsewhere. I don't think anybody would claim otherwise. But that doesn't mean there isn't gender discrimination happening in hiring as well.

    Right, and automating some of this process allows us to isolate the various sources of discrimination and underrepresentation, which strikes me as a good thing. If we want to make the hiring process truly gender neutral and focus efforts on developing more qualified female candidates at the root level, a gender-neutral automated pruning process can help with that. If we want to implement quotas, we can tweak some parameters and do that, too.

    The problem with using humans is that their biases, intentional or not, will come through whenever they're involved. Which means that a gender neutral hiring process governed by humans will be derailed whenever one of those humans is biased.

    If you automate the process, you just need the team building the algorithms to be unbiased, and then you've removed the possibility for infection, so to speak, at many more points.

    Basically, you're building one ideal, unbiased human, and replicating it.

    Would you say I had a plethora of pinatas?
  • mrondeau Montréal, Canada Registered User regular
    ElJeffe wrote: »
    Zek wrote: »
    Clipse wrote: »
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    I would also point out here that "debiasing hiring" sounds like an unalloyed good on paper, but reality is not so clear. From first-hand experience: if my company lists an engineering position (and puts serious effort into debiasing the wording of the listing, etc.), we typically get maybe 10-15% female applicants. A truly unbiased filtering would end up propagating that percentage up to the applications reviewed by humans (HR, hiring managers, etc.). The pretense that biased hiring practices are the sole cause of under-representation of certain groups in tech is adored by major tech companies, because it means they can throw a bunch of, essentially, high-tech bullshit at the problem and make it go away (or, more importantly, make people think it has gone away). "Let's just run resumes through an algorithm and claim we've solved discrimination" is a lot more attractive than actually trying to solve discrimination, and actually solving discrimination is a lot more complicated than just improving how we review resumes.

    There is no hiring practice that will create a 50/50 balance in tech; the root of that problem lies elsewhere. I don't think anybody would claim otherwise. But that doesn't mean there isn't gender discrimination happening in hiring as well.

    Right, and automating some of this process allows us to isolate the various sources of discrimination and underrepresentation, which strikes me as a good thing. If we want to make the hiring process truly gender neutral and focus efforts on developing more qualified female candidates at the root level, a gender-neutral automated pruning process can help with that. If we want to implement quotas, we can tweak some parameters and do that, too.

    The problem with using humans is that their biases, intentional or not, will come through whenever they're involved. Which means that a gender neutral hiring process governed by humans will be derailed whenever one of those humans is biased.

    If you automate the process, you just need the team building the algorithms to be unbiased, and then you've removed the possibility for infection, so to speak, at many more points.

    Basically, you're building one ideal, unbiased human, and replicating it.

    More importantly, you can confirm that your processes are truly removing biases where possible, so that you don't amplify existing biases.
    For example, removing names and obvious markers from CVs is not enough because enough non-obvious markers remain to provide the information.
    Those markers can be used by humans too, either maliciously or unconsciously.

    Obviously, deep learning is not going to replace humans in hiring, and the interview process is also a major issue, one where deep learning cannot help.

    CV filtering, job searches, and applicant searches are places where deep learning is useful, and where we might be able to reduce human biases.
    For example, when picking the keywords, we can match the ones used by all populations, rather than the ones used by white males from a major school.

    Another example, for recommendation, is to add a fairness component to the model's objective, such that it not only recommends popular groups of a specific genre, but also recommends every group of that genre at least some number of times.
    There's obviously a relevance/fairness trade-off here.
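    The same trade-off can be sketched at re-ranking time instead of in the training objective: score each item by relevance minus a penalty on how much exposure its group has already received. A toy greedy re-ranker, where lam is the knob (lam=0 is pure relevance):

        def rerank_with_fairness(candidates, k=10, lam=0.5):
            """candidates: list of (item, group, relevance) tuples."""
            exposure = {}   # group -> how many slots it has already won
            ranked, pool = [], list(candidates)
            for _ in range(min(k, len(pool))):
                # relevance, discounted by the group's exposure so far
                best = max(pool, key=lambda c: c[2] - lam * exposure.get(c[1], 0))
                pool.remove(best)
                exposure[best[1]] = exposure.get(best[1], 0) + 1
                ranked.append(best)
            return ranked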

  • AngelHedgie Registered User regular
    Facebook co-founder: "Break up Facebook":
    When Mark Zuckerberg started Facebook in his Harvard dorm room, Chris Hughes was one of his roommates and became a Facebook cofounder. Hughes left Facebook more than 10 years ago, but his time at Facebook earned him a fortune in the hundreds of millions of dollars.

    Now Hughes says that Facebook has grown too big and powerful. In a lengthy opinion piece for the New York Times, he argues that the company gives too much power to founder Mark Zuckerberg.

    "Mark is a good, kind person," Hughes writes. "But I’m angry that his focus on growth led him to sacrifice security and civility for clicks.

    "I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them."

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • shryke Member of the Beast Registered User regular
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    This literally already happens. If you go to places that help people get hired, they talk about it a ton. Part of what they try to teach is how to structure your resume to get past the automated resume-checking systems.

  • Foefaller Registered User regular
    I believe this is the right thread for this:

    SCOTUS ruled yesterday that an antitrust lawsuit against Apple's App Store can proceed

    Basically, consumers filed a class-action antitrust lawsuit saying that Apple has a monopoly on iPhone apps, which it uses to raise prices by charging high fees to the creators of third-party apps for each purchase, forcing them to raise prices to compensate. Apple tried to argue that only "direct purchasers" of a service can file an antitrust suit, and that since it was just an intermediary between users and third-party software bought on the App Store, it couldn't be sued under antitrust, and lost. However, the decision didn't decide whether the lawsuit was valid, only whether the suit could be filed in the first place.

    Most of Silicon Valley is freaking out about this ruling regardless, because the "direct purchasers" defense was the fig leaf that pretty much every app and online store uses to protect themselves from this kind of lawsuit.

    And there is an unexpected twist, too! This was a 5-4 decision where Kavanaugh sided with the liberal justices.

  • zepherin Russian warship, go fuck yourself Registered User regular
    ElJeffe wrote: »
    The places I've worked where I've been involved in the hiring process, the selection process used by actual humans could absolutely be performed by machines. And not even especially smart machines.

    It's generally just scanning resumes and seeing if the buzzwords on the resume match the buzzwords in the job description. And weeding out the 80% of applications that are a complete mess.

    This is one of the reasons why it is surprisingly common to put every possible keyword in white text at the bottom of a resume.

  • Couscous Registered User regular
    A basic problem with deep learning is when an algorithm finds a correlation, the company using it decides the correlation must be meaningful, and now you are rejecting people for having the wrong zodiac sign.

  • CelestialBadger Registered User regular
    Couscous wrote: »
    A basic problem with deep learning is when an algorithm finds a correlation, the company using it decides the correlation must be meaningful, and now you are rejecting people for having the wrong zodiac sign.

    Algorithms are prone to making the same easy leaps as bigots. An algorithm can find that people on the poor side of town are more likely to have a criminal record, and that people from the poor side of town are more likely to be black, and end up autorejecting black people as job candidates due to statistical correlation.
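    You can watch that happen in a toy simulation with made-up numbers. The screen never sees race, only the neighborhood proxy, and the acceptance rates still come out skewed:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        black = rng.random(n) < 0.3                             # synthetic population
        poor_side = rng.random(n) < np.where(black, 0.6, 0.2)   # race -> neighborhood
        record = rng.random(n) < np.where(poor_side, 0.3, 0.1)  # neighborhood -> record

        # "race-blind" screen keyed off the record/neighborhood correlation
        accept = ~poor_side

        for name, grp in (("black", black), ("white", ~black)):
            print(f"{name}: accepted {accept[grp].mean():.0%}")
        # prints roughly 40% vs. 80%, with race never used as a feature.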

  • AngelHedgie Registered User regular
    Couscous wrote: »
    A basic problem with deep learning is when an algorithm finds a correlation, the company using it decides the correlation must be meaningful, and now you are rejecting people for having the wrong zodiac sign.

    Algorithms are prone to making the same easy leaps as bigots. An algorithm can find that people on the poor side of town are more likely to have a criminal record, and that people from the poor side of town are more likely to be black, and end up autorejecting black people as job candidates due to statistical correlation.

    The problem, unfortunately, is that too few people realize that algorithms, being the product of people, can also be biased. And yes, this is used to intentionally whitewash bigotry.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • CelestialBadger Registered User regular
    It's not that they are "biased"; it's just that they pick up irrelevant cultural signifiers and trends. For instance, coders are more likely to have tattoos, so an algorithm might pick up on that and bring everyone with a lot of cool tattoos in for a coding interview. Or, less innocently, it might figure out that most of the best coders in the world are men, so it will autoreject women for senior coding jobs.

  • HamHamJ Registered User regular
    edited May 2019

    Is it irrelevant if it actually correlates? Like, with your example above, the objection seems to be ethical or legal in nature, not that rejecting black applicants is an ineffective strategy for reducing the number of applicants with criminal records.

    HamHamJ on
    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    Filtering resumes is an exercise in bias. You have to choose one person from a group with almost nothing to go on, so you just hope to apply "good" biases like previous work experience, high marks, etc., and not "bad" ones. But a system of linear equations has no real concept of a "bad" solution, so it will use whatever you give it.

  • HamHamJ Registered User regular
    On the other hand, if you have human bias that has created a bias in a dataset, like a male-dominated profession, that bias can be propagated if you use that dataset in machine learning.

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • daveNYC Why universe hate Waspinator? Registered User regular
    On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    -Charles Babbage

    Shut up, Mr. Burton! You were not brought upon this world to get it!
  • Phyphor Building Planet Busters Tasting Fruit Registered User regular
    HamHamJ wrote: »
    On the other hand, if you have human bias that has created a bias in a dataset, like a male-dominated profession, that bias can be propagated if you use that dataset in machine learning.

    Well sure, these systems are designed to give you back what you train them on. So you either have to scrub your dataset of all possible references, no matter how oblique (which could include school names, verb choice, etc.), or train it on a dataset that is representative of your desired output, to tell it that gender correlations are meaningless. The latter is a bit tricky if you have few women and desire an even split, since you need lots of data for these things.
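    One standard trick for the second option is reweighing (Kamiran & Calders): instead of collecting a perfectly balanced dataset, weight each training example so that group and label look statistically independent. A sketch, assuming pandas and placeholder column names:

        import pandas as pd

        def reweighing_weights(df, group_col="gender", label_col="hired"):
            """w(g, y) = P(g) * P(y) / P(g, y), so group and label become
            independent in the weighted training sample."""
            p_g = df[group_col].value_counts(normalize=True)
            p_y = df[label_col].value_counts(normalize=True)
            p_gy = df.groupby([group_col, label_col]).size() / len(df)
            return df.apply(
                lambda row: p_g[row[group_col]] * p_y[row[label_col]]
                / p_gy[(row[group_col], row[label_col])],
                axis=1,
            )  # pass the result as sample_weight to most sklearn estimators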

  • Incenjucar VChatter Seattle, WA Registered User regular
    You want to have diversity in hiring, not just unbiased hiring. If dumb luck gets you a narrow range of people then you're still worse off for the loss of perspective.

  • Couscous Registered User regular
    A ton of things that correlate should be considered pretty irrelevant to what is being sought. Spurious correlations end up being justified by unsupported just-so stories, rather than being treated as starting points for further investigation.

    If black employees "perform poorly" because many of the workers are racist, give the black employees poor marks, and work to get them shoved out regardless of real performance, an algorithm might decide to reject black applicants because of a spurious correlation.

  • HamHamJ Registered User regular
    So it seems to me that part of the point of a protected class is to say that even if there is a real correlation with regard to that class, you are not allowed to use it, because of higher concerns of equality and so forth. And ultimately, people and algorithms both should be judged based on disparate impact, not on attempting to establish intent. It doesn't matter if you say they just didn't fit the office culture or whatever; if your hiring is wildly out of sync with the demographics of your applicants, that should be illegal.
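    Disparate impact even has a standard operationalization: the EEOC "four-fifths" rule. A sketch of the check in plain Python, with hypothetical inputs:

        def disparate_impact_ratio(selected, group):
            """Lowest group selection rate divided by the highest.
            A ratio below 0.8 is the usual red flag under the EEOC
            four-fifths rule. selected: 0/1 decisions; group: labels."""
            counts = {}
            for s, g in zip(selected, group):
                sel, tot = counts.get(g, (0, 0))
                counts[g] = (sel + s, tot + 1)
            rates = [sel / tot for sel, tot in counts.values()]
            return min(rates) / max(rates)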

    While racing light mechs, your Urbanmech comes in second place, but only because it ran out of ammo.
  • Couscous Registered User regular
    edited May 2019
    There is usually no real way to tell whether there just happens to be a correlation that they are using in a racist manner, or whether the racism is part of the point. The only effective thing is to ban most things that act primarily as a proxy for race, intentionally or unintentionally.

    Pretending the motive is not racist is how you get things like prosecutors just happening to strike all the potential black jurors in a case for ostensibly non-race-based reasons.

    Couscous on
  • mrondeau Montréal, Canada Registered User regular
    We have multiple problems with bias and machine learning. First, yes, the models are very good at picking up correlations and biases in the training data (i.e., all the previous hiring decisions), in particular correlations that are not obvious to humans. This is obviously bad.

    Second, since the biases are a clear and simple signal, fixating on them is easy, and learning algorithms will use the simplest way to fit the data. This amplifies the bias.

    Third, lots of people don't understand what machine learning can and cannot do, so models end up being used where they really should not.

    In other words, ask actual experts before deploying. They are the ones who scream in horror when you tell them what you want to do.

  • Paladin Registered User regular
    mrondeau wrote: »
    We have multiple problems with bias and machine learning. First, yes, the models are very good at picking up correlations and biases in the training data (i.e., all the previous hiring decisions), in particular correlations that are not obvious to humans. This is obviously bad.

    Second, since the biases are a clear and simple signal, fixating on them is easy, and learning algorithms will use the simplest way to fit the data. This amplifies the bias.

    Third, lots of people don't understand what machine learning can and cannot do, so models end up being used where they really should not.

    In other words, ask actual experts before deploying. They are the ones who scream in horror when you tell them what you want to do.

    Being an actual expert is hard, and there will always be a shortage of competent data scientists.

    Marty: The future, it's where you're going?
    Doc: That's right, twenty five years into the future. I've always dreamed on seeing the future, looking beyond my years, seeing the progress of mankind. I'll also be able to see who wins the next twenty-five world series.
  • schuss Registered User regular
    Yep, correlations are often silly, as theoretically lots of ordinal data correlates perfectly when normalized. That said, you generally want to investigate why something has a close tie, not scream "eureka!"
    Often it's system capture or existing work-process patterns, which you should probably ignore.

  • AngelHedgie Registered User regular
    Zuckerberg argues that breaking up Facebook wouldn't fix things:
    “The question that I think we have to grapple with is that breaking up these companies wouldn’t make any of those problems better,” Zuckerberg said. “Right? So the ability to work on election or content systems... We have an ability, now because we’re a successful company and we’re large, to be able to go build the systems that I think are unprecedented.”

    “The systems, in many cases, are more sophisticated than any one a lot of governments have,” Zuckerberg continued, without explaining precisely what he meant by that. “And we can build that once, and we can have it apply to Facebook, and Instagram, and WhatsApp, and Messenger.”

    The only problem with Zuckerberg’s argument? Facebook is pretty bad at deploying systems that help solve today’s problems. The only thing Facebook seems to be good at is creating public relations events that make it look like it’s doing something. And it’s a lot easier to set up a “War Room” with people staring at screens than it is to actually address things like election interference by foreign governments. Especially when you don’t even let journalists talk to anyone in your so-called war room.

    He really cannot help himself, can he?

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Martini_Philosopher Registered User regular
    Zuckerberg argues that breaking up Facebook wouldn't fix things:
    “The question that I think we have to grapple with is that breaking up these companies wouldn’t make any of those problems better,” Zuckerberg said. “Right? So the ability to work on election or content systems... We have an ability, now because we’re a successful company and we’re large, to be able to go build the systems that I think are unprecedented.”

    “The systems, in many cases, are more sophisticated than any one a lot of governments have,” Zuckerberg continued, without explaining precisely what he meant by that. “And we can build that once, and we can have it apply to Facebook, and Instagram, and WhatsApp, and Messenger.”

    The only problem with Zuckerberg’s argument? Facebook is pretty bad at deploying systems that help solve today’s problems. The only thing Facebook seems to be good at is creating public relations events that make it look like it’s doing something. And it’s a lot easier to set up a “War Room” with people staring at screens than it is to actually address things like election interference by foreign governments. Especially when you don’t even let journalists talk to anyone in your so-called war room.

    He really cannot help himself, can he?

    I don't know which is worse, that he's out there trying to defend his company and the way he does business or that YouTube is neck deep in what's essentially the same problem but doesn't get anywhere near the same amount of pressure from politicians and the public.

    All opinions are my own and in no way reflect that of my employer.
  • AngelHedgie Registered User regular
    Zuckerberg argues that breaking up Facebook wouldn't fix things:
    “The question that I think we have to grapple with is that breaking up these companies wouldn’t make any of those problems better,” Zuckerberg said. “Right? So the ability to work on election or content systems... We have an ability, now because we’re a successful company and we’re large, to be able to go build the systems that I think are unprecedented.”

    “The systems, in many cases, are more sophisticated than any one a lot of governments have,” Zuckerberg continued, without explaining precisely what he meant by that. “And we can build that once, and we can have it apply to Facebook, and Instagram, and WhatsApp, and Messenger.”

    The only problem with Zuckerberg’s argument? Facebook is pretty bad at deploying systems that help solve today’s problems. The only thing Facebook seems to be good at is creating public relations events that make it look like it’s doing something. And it’s a lot easier to set up a “War Room” with people staring at screens than it is to actually address things like election interference by foreign governments. Especially when you don’t even let journalists talk to anyone in your so-called war room.

    He really cannot help himself, can he?

    I don't know which is worse, that he's out there trying to defend his company and the way he does business or that YouTube is neck deep in what's essentially the same problem but doesn't get anywhere near the same amount of pressure from politicians and the public.

    The main thing there is that neither Susan Wojcicki (CEO, YouTube) nor Sundar Pichai (CEO, Google) has made themselves as publicly present as Zuckerberg. He's a lightning rod of his own making.

    XBL: Nox Aeternum / PSN: NoxAeternum / NN:NoxAeternum / Steam: noxaeternum
  • Zek Registered User regular
    "Please don't break us up, we can use our size to build things to influence elections" is not a compelling argument to be making to the GOP.

  • Foefaller Registered User regular
    Zek wrote: »
    "Please don't break us up, we can use our size to build things to influence elections" is not a compelling argument to be making to the GOP.

    Well, to be fair, it would be if the GOP saw them the same way they see the NRA.

  • surrealitycheck lonely, but not unloved dreaming of faulty keys and latches Registered User regular
    The focus on deep learning is in any case a little confused. It is fair to say that a really big complex system is going to be very hard to analyse, but the exact same sins have been committed in every single area where classifiers of any kind have been used.

    You can smuggle your bias into the input set; you can smuggle it into the output weighting; you can have it baked into the history of the system without it even being a feature of the analysis structure itself.

    Big automated systems of classification are just a fact of life now. The solution is less about the details of "how do I get this ridiculously overcomplex fuckfest of an RNN to tell me what it's actually doing" (although analysis of this stuff is fascinating, and the black-box rep is a bit of a meme at this point) and more the extremely boring process of slowly building a series of independent supervisory heuristics for the deployment, analysis, training, and use of such classifiers in general. This will take a while, and there will be lots of naive approaches that turn out to be bad, but it is very worth considering how many of the errors now being identified have been features of public-policy data analysis for literally decades in other forms!

  • Fuzzy Cumulonimbus Cloud Registered User regular
    There is a very good podcast that just came out about the ethics and impact of machine learning on the world. It is called Sleepwalkers.
    The podcast does a very good job of covering all the issues people are grasping at here.

    https://www.sleepwalkerspodcast.com/
