Poll: Should there be laws against A.I's hiring and firing?

Be advised that this is a public poll: other users can see the choice(s) you selected.

  1. #1
    Void Lord Doctor Amadeus's Avatar
    10+ Year Old Account
    Join Date
    May 2011
    Location
    In Security Watching...
    Posts
    43,753

    Should there be laws against A.I's hiring and firing?

    Algorithms will soon be in charge of hiring and firing. Not everyone thinks this is a good idea

    Better rules need to be urgently drafted to keep control of the AI systems that are already making decisions about our jobs, says the TUC.

    https://www.zdnet.com/article/algori...CAD-00-10aag7e
    Should there be laws against A.I's hiring and firing?

I would say no, provided human bias wasn't allowed. But as of right now, yes.
    Last edited by Doctor Amadeus; 2021-03-29 at 03:46 PM.
    Milli Vanilli, Bigger than Elvis

  2. #2
    Banned Yadryonych's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    Матушка Россия
    Posts
    2,006
An absolutely rational, inhumane, mechanistic mind: the ideal capitalist.

That should give you a glimpse of how the votes in this thread will go.

  3. #3
    Pro-people, or not pro-people...easy choice to make.

  4. #4
    I know Amazon tried AI for potential hiring a few years ago, but it developed a bias against female candidates.

    https://www.reuters.com/article/us-a...-idUSKCN1MK08G

  5. #5
    This would be great if it was not a tool of exploitation. By itself, the idea and technology that could place workers in an ideal position for their skill and education are brilliant. But it will likely be used to just make a buck and exploit labor.

  6. #6
    Banned Yadryonych's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    Матушка Россия
    Posts
    2,006
    Quote Originally Posted by Fencers View Post
    This would be great if it was not a tool of exploitation. By itself, the idea and technology that could place workers in an ideal position for their skill and education are brilliant. But it will likely be used to just make a buck and exploit labor.
I'm pretty sure the AI would like to populate every position with ideal candidates instead, and those who didn't meet those requirements would be laid off.

  7. #7
    Pfft...the ideal wouldn't happen. The crap would get coded exactly as the company wants it.

  8. #8
    Banned Yadryonych's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    Матушка Россия
    Posts
    2,006
    Quote Originally Posted by Shadowferal View Post
    Pfft...the ideal wouldn't happen. The crap would get coded exactly as the company wants it.
What makes you feel the company wouldn't want it ideal?

  9. #9
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,235
    Algorithms have been responsible for hiring and firing for decades already, they're just "run" through a person. This is why dumping keywords into a resume will get you a lot more callbacks, why formatting is so important, why performance metrics are so critical to so many jobs, etc.

That said, I don't see this as an improvement. Such systems tend to just reinforce existing biases. That's why AI has so consistently shown indications of racism, like facial-recognition software having trouble distinguishing between Black faces. It isn't that a deliberate bias was inserted; it's that implicit assumptions and societal inequities are not accounted for. In the case of the facial-recognition AI, it was designed and tested using white models, and they simply never considered testing different skin tones at all. "It isn't us, it's the machine" is just a lazy excuse.


  10. #10
    Quote Originally Posted by Yadryonych View Post
    what makes you feel the company wouldn't want it ideal?
Define the company's ideal. I'm almost certain it'll be very different from what labor laws allow.

  11. #11
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,235
    Quote Originally Posted by Yadryonych View Post
    what makes you feel the company wouldn't want it ideal?
    It's more that the "ideal" is that of the shareholders, not that of either employees or consumers. Capitalism is an inherently predatory system and the only people it benefits are the capitalists.

    Which means "the people who own the means of production", not "people who support a capitalist model".


  12. #12
    AI Medical Doctor first.
    Then AI drivers.
    AI Lawyer, AI Jury and Judge in one software.

You wouldn't have to worry about randomly getting selected for jury duty.

  13. #13
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by Doctor Amadeus View Post
    Should there be laws against A.I's hiring and firing?

I would say no, provided human bias wasn't allowed. But as of right now, yes.
    Human biases decide what the AI algorithm should be trying to maximize in the first place. That can't change until there is an AGI in the far future.

If I were going to be fired, I'd rather be fired by a traditional algorithm, because at least I could look at the program and figure out why. If the AI is based on something like machine learning, there is no accountability: everything becomes an unexplained black box where the explanation is always "historical data/patterns said I should make this decision if I want to maximize a certain metric," and the metric could be anything the company or the programmer cares about.
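A toy sketch of that contrast, with every field name, weight, and threshold invented purely for illustration:

```python
# Hypothetical sketch: a transparent rule vs. an opaque learned score.
# All names and numbers here are made up; no real system is depicted.

def rule_based_decision(employee):
    # Traditional algorithm: every branch is inspectable, so a fired
    # employee can point to the exact rule that triggered.
    if employee["missed_deadlines"] > 5:
        return "terminate", "more than 5 missed deadlines"
    if employee["sales"] < 10:
        return "terminate", "sales below quota of 10"
    return "retain", "all rules passed"

# "Learned" weights: in a real ML system these come out of training
# and carry no human-readable meaning on their own.
learned_weights = {"missed_deadlines": -0.8, "sales": 0.3, "tenure_years": 0.1}

def black_box_decision(employee, threshold=0.0):
    # The only "explanation" available is the opaque score itself.
    score = sum(learned_weights[k] * employee[k] for k in learned_weights)
    reason = f"score {score:.2f} vs threshold {threshold}"
    return ("retain" if score >= threshold else "terminate"), reason

worker = {"missed_deadlines": 2, "sales": 8, "tenure_years": 4}
print(rule_based_decision(worker))   # reason names the violated rule
print(black_box_decision(worker))    # reason is just a number
```

The two functions can even disagree, and only the first one can tell the worker why.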

  14. #14
    Banned Yadryonych's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    Матушка Россия
    Posts
    2,006
    Quote Originally Posted by Endus View Post
    It's more that the "ideal" is that of the shareholders, not that of either employees or consumers.
Yeah, that's the point of business: to earn money for its owners. An AI purged of human error and human bias would enable optimal earnings.

    - - - Updated - - -

    Quote Originally Posted by PC2 View Post
    Human biases decide what the AI algorithm should be trying to maximize in the first place.
    Yes, it would be profit and profit alone

  15. #15
    Quote Originally Posted by Yadryonych View Post
Yeah, that's the point of business: to earn money for its owners. An AI purged of human error and human bias would enable optimal earnings.
Regardless of laws and regulations? We had that years ago, and people died as a result.

  16. #16
    Quote Originally Posted by Darth Phayde View Post
    I know Amazon tried AI for potential hiring a few years ago, but it developed a bias against female candidates.

    https://www.reuters.com/article/us-a...-idUSKCN1MK08G
Saying that it "developed" a bias makes it sound like it was thinking too much on its own; I would rather say that it copied the bias against female candidates already present in the hiring process, and likely other biases as well. What is unknown is whether the human HR process it copied had real reasons to prefer males, and specifically the candidates they chose, or whether they rejected candidates based purely on gender.

And it's not only a problem with gender: some AI systems have copied biases based on zip codes. There might be other biases as well. In the case above, the system penalized a women's college; what if the human recruiters favored their own college and avoided their archenemy in some college sport? The AI would likely copy that too. For the individual, it sucks to be turned down for no good reason even if you aren't in a "protected class," and it would be bad for the company as well: they presumably want the best candidates, not to stroke the ego of the recruiter.

In one sense, the advantage of an AI is that it's easier to expose such biases: you can send in many applications, vary only the school or the women's chess club membership or whatever, and clearly see the difference. Human recruiters show larger variation, and might notice two near-identical applications sitting next to each other.

Obviously it would be better if the AI systems could explain their choices, but such "expert systems" haven't worked out well in practice.
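The paired-application audit described above can be sketched in a few lines. Everything here is invented: `score_resume` is a stand-in for whatever opaque screening system is being probed, and the hidden school penalty is the bias the audit is supposed to expose.

```python
# Hypothetical audit sketch: probe a screening model with paired
# applications that differ in exactly one field.

def score_resume(resume):
    # Stand-in for the black box under audit; it secretly penalizes
    # one school, mimicking a copied recruiter bias.
    score = resume["years_experience"] * 2
    if resume["school"] == "Rival State":
        score -= 5   # the hidden bias the audit should expose
    return score

def audit(base_resume, field, value_a, value_b):
    # Identical applications except for one varied field; any score
    # gap can only come from that field.
    a = {**base_resume, field: value_a}
    b = {**base_resume, field: value_b}
    return score_resume(a) - score_resume(b)

base = {"years_experience": 6, "school": "State U"}
gap = audit(base, "school", "State U", "Rival State")
print(gap)  # a nonzero gap exposes the school bias
```

A human recruiter would add noise to every comparison; the model gives the same gap every time, which is exactly what makes the bias measurable.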

    - - - Updated - - -

    Quote Originally Posted by Yadryonych View Post
Yeah, that's the point of business: to earn money for its owners. An AI purged of human error and human bias would enable optimal earnings.
    But for whom?

Most companies will likely outsource the AI to another company, as it isn't efficient to develop it in-house. So which company does the AI try to make money for?

  17. #17
    Banned Yadryonych's Avatar
    10+ Year Old Account
    Join Date
    Jul 2011
    Location
    Матушка Россия
    Posts
    2,006
    Quote Originally Posted by Forogil View Post
Saying that it "developed" a bias makes it sound like it was thinking too much on its own; I would rather say that it copied the bias against female candidates already present in the hiring process, and likely other biases as well. What is unknown is whether the human HR process it copied had real reasons to prefer males, and specifically the candidates they chose, or whether they rejected candidates based purely on gender.

And it's not only a problem with gender: some AI systems have copied biases based on zip codes. There might be other biases as well. In the case above, the system penalized a women's college; what if the human recruiters favored their own college and avoided their archenemy in some college sport? The AI would likely copy that too. For the individual, it sucks to be turned down for no good reason even if you aren't in a "protected class," and it would be bad for the company as well: they presumably want the best candidates, not to stroke the ego of the recruiter.

In one sense, the advantage of an AI is that it's easier to expose such biases: you can send in many applications, vary only the school or the women's chess club membership or whatever, and clearly see the difference. Human recruiters show larger variation, and might notice two near-identical applications sitting next to each other.
You don't really understand how an A.I. works, do you? It's not quite like a trained monkey that copies the funny tricks it learned from its owner.

  18. #18
    Quote Originally Posted by Yadryonych View Post
You don't really understand how an A.I. works, do you? It's not quite like a trained monkey that copies the funny tricks it learned from its owner.
    A program...that will do what it's programmed to do.

  19. #19
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,235
    Quote Originally Posted by Yadryonych View Post
Yeah, that's the point of business: to earn money for its owners. An AI purged of human error and human bias would enable optimal earnings.
    That's the point of capitalism.

    That doesn't make it the point of business.

    - - - Updated - - -

    Quote Originally Posted by Forogil View Post
Saying that it "developed" a bias makes it sound like it was thinking too much on its own; I would rather say that it copied the bias against female candidates already present in the hiring process, and likely other biases as well. What is unknown is whether the human HR process it copied had real reasons to prefer males, and specifically the candidates they chose, or whether they rejected candidates based purely on gender.

And it's not only a problem with gender: some AI systems have copied biases based on zip codes. There might be other biases as well. In the case above, the system penalized a women's college; what if the human recruiters favored their own college and avoided their archenemy in some college sport? The AI would likely copy that too. For the individual, it sucks to be turned down for no good reason even if you aren't in a "protected class," and it would be bad for the company as well: they presumably want the best candidates, not to stroke the ego of the recruiter.
    It's the root problem with how we're approaching AI. We're trying to build it up from the status quo's data, but that status quo is the result of latent biases. So, in parsing that data, the AI cannot distinguish between relevant factors that would constitute reason to deny a candidate (like a lack of a required certification) and a factor that is nothing but hiring bias (like hiring fewer "ethnic" names under the (indefensible) assumption that they'll be more-poorly-educated). It's all the same to the AI; it does not have any understanding of what those factors are or the context that informs them. So it tends to enshrine those biases as actual choice guidelines, which means it actively starts to enforce those biases, deliberately.

    You'd need to use a data set that contained no bias whatsoever if you wanted to produce an AI that lacked bias, but such a dataset doesn't exist in practice.


  20. #20
    Quote Originally Posted by Yadryonych View Post
You don't really understand how an A.I. works, do you? It's not quite like a trained monkey that copies the funny tricks it learned from its owner.
I do know how it works, sort of. The AI doesn't just copy the funny things; it trains an ANN (well, it's called "machine learning" nowadays) to mimic the outputs in the data-set, so if the data-set had a bias, the AI will likely copy that. Claiming that it "develops" a bias makes it sound as if the AI invents a bias that wasn't there.

Thus, if the human recruiters give preference to one university, the AI will likely match that, regardless of whether it was based on the university objectively being better, the perception of it being better, or the university winning the sports league. It could in theory be that the human recruiters don't have a bias, and that the university simply attracted the good students while the recruiters evaluated everyone on other merits; I find that unlikely.
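The mimicry mechanism can be shown with a deliberately trivial "model". Everything here is invented: the history rows, the school names, and the frequency-table "training" (a real ANN is far more elaborate, but it mimics its data-set the same way).

```python
# Hypothetical sketch: a trivial "model" fit to biased historical
# hiring decisions reproduces the bias rather than inventing one.
from collections import defaultdict

# Invented history in which recruiters favored "Alma U" regardless
# of skill; this is the biased data-set described in the thread.
history = [
    {"school": "Alma U",  "skill": 5, "hired": True},
    {"school": "Alma U",  "skill": 3, "hired": True},
    {"school": "Other U", "skill": 5, "hired": False},
    {"school": "Other U", "skill": 7, "hired": False},
]

def fit(history):
    # "Training": memorize the hire rate per school.
    hires, totals = defaultdict(int), defaultdict(int)
    for row in history:
        totals[row["school"]] += 1
        hires[row["school"]] += row["hired"]
    return {school: hires[school] / totals[school] for school in totals}

model = fit(history)

def predict(resume, model):
    # The learned rule ignores skill entirely; it copied the bias.
    return model.get(resume["school"], 0.0) > 0.5

print(predict({"school": "Other U", "skill": 9}, model))  # False
print(predict({"school": "Alma U",  "skill": 1}, model))  # True
```

Skill never enters the learned rule, because skill never explained the historical decisions; the school bias did.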
