  1. #1
    Deleted

    Robots are racist and sexist. Just like the people who created them

    https://www.theguardian.com/commenti...es-ai-language

    Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there’s nothing new or complicated about it. “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”
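    A minimal sketch of what "finding patterns in data" means in practice (all the records below are invented for illustration): a model that simply replays historical rates reproduces whatever discrimination those rates encode.

```python
# Toy illustration of "machine learning" as pattern-finding: the "model" below
# just replays historical rates. All records are invented for illustration.
historical_hires = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

def hire_rate(records, gender):
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

# Patterns found in past data become predictions about future people:
# the "learned" model favours men and rejects women, exactly as before.
print(hire_rate(historical_hires, "male"), hire_rate(historical_hires, "female"))
```

    Real systems are vastly more complex, but the core dynamic is the same: the output can only reflect the data that went in.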

    Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. As long ago as 1986, the medical school at St George’s hospital in London was found guilty of racial and sexual discrimination when it automated its admissions process based on data collected in the 1970s. The program looked at the sort of candidates who had been successful in the past, and gave similar people interviews. Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.
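    The St George's failure mode can be sketched in a few lines (the candidate data here is invented, not from the actual case): score applicants by how closely they resemble past successful candidates, and the past's biases become the score.

```python
# Sketch of the failure mode behind the 1986 St George's case (data invented):
# interview the candidates who most resemble past successful applicants.
past_admits = [
    {"sex": "M", "uk_name": True},
    {"sex": "M", "uk_name": True},
    {"sex": "M", "uk_name": False},
]

def similarity(candidate, admits):
    # Average, over attributes, of how often past admits share the candidate's value.
    total = 0.0
    for key, value in candidate.items():
        total += sum(1 for a in admits if a[key] == value) / len(admits)
    return total / len(candidate)

bob = {"sex": "M", "uk_name": True}
alice = {"sex": "F", "uk_name": True}

# Identical applications apart from sex, yet Bob outscores Alice, because the
# program is faithfully reproducing the past selection process.
print(similarity(bob, past_admits), similarity(alice, past_admits))
```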

    Automation is a great excuse for assholery – after all, it’s just numbers, and the magic of “big data” can provide plausible deniability for prejudice. Machine learning, as the technologist Maciej Cegłowski observed, can function in this way as “money laundering” for bias.

    This is a problem, and it will become a bigger problem unless we take active measures to fix it. We are moving into an era when “smart” machines will have more and more influence on our lives. The moral economy of machines is not subject to oversight in the way that human bureaucracies are. Last year Microsoft created a chatbot, Tay, which could “learn” and develop as it engaged with users on social media. Within hours it had pledged allegiance to Hitler and started repeating “alt-right” slogans – which is what happens when you give Twitter a baby to raise. Less intentional but equally awkward instances of robotic intolerance keep cropping up, as when one Google image search using technology “trained” to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.

    These, however, are only the most egregious examples. Others – ones we might not notice on a daily basis – are less likely to be spotted and fixed. As more of the decisions affecting our daily lives are handed over to automatons, subtler and more insidious shifts in the way we experience technology, from our dealings with banks and business to our online social lives, will continue to be based on the baked-in bigotries of the past – unless we take steps to change that trend.

    Should we be trying to build robots with the capacity for moral judgment? Should technologists be constructing AIs that can implement basic assessments about justice and fairness? I have a horrible feeling I’ve seen that movie, and it doesn’t end well for human beings. There are other frightening futures, however, and one of them is the society where we allow the weary bigotries of the past to become written into the source code of the present.

    Machines learn language by gobbling up and digesting huge bodies of all the available writing that exists online. What this means is that the voices that dominated the world of literature and publishing for centuries – the voices of white, western men – are fossilised into the language patterns of the instruments influencing our world today, along with the assumptions those men had about people who were different from them. This doesn’t mean robots are racist: it means people are racist, and we’re raising robots to reflect our own prejudices.
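    A toy sketch of how such associations fossilise into language systems (the vectors below are made-up two-dimensional stand-ins for real word embeddings, which have hundreds of dimensions learned from text): stereotyped word pairs end up measurably "closer" than cross pairs.

```python
import math

# Made-up 2-D stand-ins for word embeddings; the numbers are invented purely
# to illustrate the geometry, not taken from any real model.
vectors = {
    "man":    (1.0, 0.2),
    "woman":  (-1.0, 0.2),
    "doctor": (0.8, 0.9),
    "nurse":  (-0.8, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# If the training text habitually pairs "man" with "doctor" and "woman" with
# "nurse", the geometry of the learned vectors preserves that association.
print(cosine(vectors["man"], vectors["doctor"]))   # high
print(cosine(vectors["woman"], vectors["doctor"]))  # low
```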

    Human beings, after all, learn our own prejudices in a very similar way. We grow up understanding the world through the language and stories of previous generations. We learn that “men” can mean “all human beings”, but “women” never does – and so we learn that to be female is to be other – to be a subclass of person, not the default. We learn that when our leaders and parents talk about how a person behaves to their “own people”, they sometimes mean “people of the same race” – and so we come to understand that people of a different skin tone to us are not part of that “we”. We are given one of two pronouns in English – he or she – and so we learn that gender is a person’s defining characteristic, and there are no more than two. This is why those of us who are concerned with fairness and social justice often work at the level of language – and why when people react to having their prejudices confronted, they often complain about “language policing”, as if the use of words could ever be separated from the worlds they create.

    Language itself is a pattern for predicting human experience. It does not just describe our world – it shapes it too. The encoded bigotries of machine learning systems give us an opportunity to see how this works in practice. But human beings, unlike machines, have moral faculties – we can rewrite our own patterns of prejudice and privilege, and we should.

    Sometimes we fail to be as fair and just as we would like to be – not because we set out to be bigots and bullies, but because we are working from assumptions we have internalised about race, gender and social difference. We learn patterns of behaviour based on bad, outdated information. That doesn’t make us bad people, but nor does it excuse us from responsibility for our behaviour. Algorithms are expected to update their responses based on new and better information, and the moral failing occurs when people refuse to do the same. If a robot can do it, so can we.

  2. #2
    TERMINATOR IS GONNA HAPPEN! we're giving the machines too much power!!!!

  3. #3
    Deleted
    Yay, another conspiracy theory.

  4. #4
    Deleted
    What robots? Have I slept through 100 years of scientific advancement?

  5. #5
    Look kids! A writer writing about something that exists only in their imagination.

    As a roboticist, let me give you my professional opinion: HAHAHAHAHAHAHAHA.

    We're so far from being able to make robots that do anything beyond stupid pet tricks, let alone robots that implement moral judgement as anything more than a stupid pet trick, that it's not even worth talking about. She mentions one program (not a robot), Tay, and generalizes that program's stupid pet trick to all robots. Robots do things functionally. They have no capacity to "understand". It is not morality if there is no understanding; it's executing an instruction. That doesn't mean their human programmers are racist and sexist. It's that we don't have the language, time, capacity or funding to "train" most robots the way we train a child we're trying to instill moral understanding in. And given the technical limitations, it wouldn't even work (except as a stupid pet trick).

    My robots... most robots, have more in common with ants, dogs or disembodied arms, lady. Not people. A robot that a human can interact with is a stunt, exactly like trying to emulate human-like intelligence. A superior direction is making robots better at what they're already good at, rather than trying to create a human simulacrum.

    Anyway, this person wanted to talk about her favorite topic. She found a nice soap box. She's completely wrong in the most fundamental way. Nothing to see here.

    This, by the way, horde of alt-right jackasses who are going to swoop down on a feminist writer (in 3... 2... 1...), is how to deal properly with a person who "triggers" you. This feminist talks about an aspect of my career that she knows essentially nothing about, which upsets me to a degree. I generally don't like non-scientists butting in on science, let alone my science. But instead of going on some kind of ridiculous anti-feminist rant (and I know you knuckle-dragging losers love that shit), I just pointed out the key flaw in her argument: what she is discussing basically doesn't exist.

  6. #6
    That article was so dumb that I think I've had enough MMO-Champ for today. Thanks, OP.

  7. #7
    Clickbait by someone who doesn't understand what they're talking about, trying to push an agenda based on dislike of male dominance in engineering fields.

    There are no robots around capable of sentient thought yet, machine learning and AI research have quite a long way left to go before that.

  8. #8
    What a pile of shit this article was. I mean, at least pick your grounds first.

  9. #9
    Machine and deep learning do not require a human-induced model anymore. This article is obsolete with regard to today's technology.

    ML nowadays constructs its own models based on tangible data available via arbitrary sources. If a machine is racist, it's because it deems racism/sexism the optimal solution to the problem it has been presented with, or because it was made racist/sexist on purpose, for a very specific purpose.

    Some food for thought.

    (Source: Works in ML and Software Engineering)

  10. #10
    Don't people get tired of writing fairy tales in place of journalism?

  11. #11
    Tay? Is that you?

  12. #12
    tl;dr

  13. #13
    https://www.gizmodo.com.au/2016/03/h...-racist-rants/

    The problem with heuristics: it's learning from humans. Have you met humans? I wouldn't trust them to operate a kettle.

  14. #14
    The Google AI team had two AIs compete. I don't remember all the nuances of the experiment, such as whether resources were limited, but I'm sure you can find it online if you're interested.

    Turns out both AIs became extremely aggressive.


  15. #15
    Deleted
    Don't forget - as long as you're a female or belong to a minority or majority group and you don't have something, das wayciss. Not to forget sexyss.

    It's amazing how these people have been conditioned to blame all white men for everything. It's tough being the only race/sex combination to ever get anything done.

  16. #16
    Quote Originally Posted by Tupimus View Post
    Don't forget - as long as you're a female or belong to a minority or majority group and you don't have something, das wayciss. Not to forget sexyss.

    It's amazing how these people have been conditioned to blame all white men for everything. It's tough being the only race/sex combination to ever get anything done.
    +1000 to this.

    Not to mention that they say IT and the tech industry are controlled by men, but not how, until a few years ago, many tech people were discriminated against for being nerds, and it was simply not an attractive field for women.

    But they will pretend this is not true and has never happened, because if you are a man, and if you are white, you are the devil and thus deserve all the worst. I was called a nerd in high school because I was good with computers. I was called a nerd when I decided to go the CompSci route, and my class was 100% men. I was called a nerd by women simply because I was a Software Engineer: "you must be good with printers lol!".

    But suddenly, IT is everywhere and everything. It's also an in-demand skill that fills high-paying roles. And since being into computers is not a basement-dweller nerdy activity anymore (see gaming in the 2010s) in the eyes of those people, they just say that white men control it.

    At this stage it just seems that white men are the scapegoat for all the insecurities and bullshit the human race as a whole has brought upon itself.

    To all of that sirs, I say: fuck you.

  17. #17
    This is one of the dumbest things I have ever read.

    I guess my car and my laptop are racists too.

  18. #18
    Deleted
    Oh yay. Robot follows logic, SJW calls it racist.

  19. #19
    To be fair, this is a Laurie Penny article and the poor girl is retarded.

  20. #20
    Deleted
    Tay was the best haha, but don't forget Google Photos App!!

    AI at this point is just sophisticated pattern recognition. Surely it has found some similarities...
