1. #1

    Chatbots develop own language: Facebook shuts down AI system...

    http://english.manoramaonline.com/bu...i-systems.html

    I think even if we do not intend to make a self-aware AI, we will keep making computers that are faster and smarter, and we will make more complex algorithms. Eventually we will have a self-aware, intelligent computer. After all, that is not much different from how humans came about.

    Intelligent animals seem to have had a good knack for survival and reproduction, becoming more and more intelligent with each generation.

  2. #2
    Mechagnome Thoughtcrime
    Join Date: Oct 2014
    Location: Exeter, United Kingdom
    Posts: 662
    If it's possible to overcome the hurdles associated with developing it, then one way or another AI would be the last important thing mankind ever creates. Chatbots aren't AI, though, and this story has been blown way out of proportion over the past week, but it's good that people are starting to take the subject seriously these days.

    Also, Musk's caution is the appropriate response to AI development; Zuckerberg is dangerously naive.

  3. #3
    https://www.theregister.co.uk/2017/0..._new_language/

    Fact-checking is too damn difficult; making up tall tales isn't, by comparison.

  4. #4
    Thoughtcrime
    Quote Originally Posted by PosPosPos View Post
    https://www.theregister.co.uk/2017/0..._new_language/

    Fact-checking is too damn difficult; making up tall tales isn't, by comparison.
    I do agree that this story is something out of nothing, but that article's reasoning is flawed. Its main argument seems to be that the following exchange is gibberish and can therefore be dismissed as unimportant.

    Bob: i can i i everything else . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to
    Bob: you i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i i can i i i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i . . . . . . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to


    I'm not an AI or language expert, so I can't explain it in any technical detail, but look past the language, which is junk, and see the bits of information. There aren't many, but there are enough to communicate with. Again, that's NOT what's happening here, but the article doesn't really explain it in any less vague terms than the tabloids did, and what may appear to be nonsense could have significance to an actual AI.

  5. #5
    Quote Originally Posted by Thoughtcrime View Post
    If it's possible to overcome the hurdles associated with developing it, then one way or another AI would be the last important thing mankind ever creates. Chatbots aren't AI, though, and this story has been blown way out of proportion over the past week, but it's good that people are starting to take the subject seriously these days.

    Also, Musk's caution is the appropriate response to AI development; Zuckerberg is dangerously naive.
    Zuckerberg isn't being naive. He's protecting his interests.

    The more intriguing aspect is how AI would come to destroy us, if it elected to destroy us. Would it be because it hungers for control and power, or perhaps because of our aptitude for fear and our malicious nature?

  6. #6
    Deleted
    It's amazing that they can someday make AI appear human but they can't do the same for Zuckerberg.

  7. #7
    Chatbots stole my bike

  8. #8
    Thoughtcrime
    Quote Originally Posted by noipmahc-omm View Post
    Zuckerberg isn't being naive. He's protecting his interests.

    The more intriguing aspect is how AI would come to destroy us, if it elected to destroy us. Would it be because it hungers for control and power, or perhaps because of our aptitude for fear and our malicious nature?
    He's being naive; that he happens to be protecting his business interests doesn't change that.

    Apart from the whole "building a brain" challenge, the greatest hurdle with AI is the control problem, which is something we do not yet have an answer to. If they could build an AI without having solved the control problem, which is just as huge a challenge, then the likely result is something similar to the Paperclip Maximizer.

    The reason the control problem is so difficult is that machines need a goal (a utility function), but we also need to develop machines with human values so that their goals align with our own. How do you quantify and program human values? What is the mathematical expression for happiness, love, fear, or hate?

    If I COULD program and build an AI with the goal of making humans happy, how do I know that the maximal outcome of that goal is what I intended? Maybe I want the machine to provide a utopian society with a high standard of living, to develop cures for all human diseases, to extend human lifespans ten-fold, and to invent new fuels and means of generating energy that open up the stars to our civilisation. That may be what I WANT, but it's not what maximises the machine's utility function. Whatever does maximise an AI's utility function is probably something fucking terrifying, like pumping my brain full of drugs so that I'm ecstatic and incapable of anything more than dribbling in a chair and smiling to myself. Maybe it's even more abstract and goes to the nature of what "I" am (because how do you define a person in mathematics?); maybe it just creates a copy of a human brain and runs the same memory on a loop as fast as it can, forever.
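    That mismatch between what the designer wants and what maximises the utility function can be sketched in a few lines of toy Python. This is purely my own illustration, not anything from the article; the actions and scores are entirely hypothetical. A naive optimiser simply picks whichever action scores highest on its proxy utility, with no notion of what the designer actually intended:

```python
# Toy sketch of a naive utility maximiser. The actions and their proxy
# "happiness" scores are hypothetical, made up for illustration only.
actions = {
    "build utopian society": 80.0,
    "cure all human diseases": 75.0,
    "pump brains full of drugs": 99.9,  # degenerate, but scores highest
}

def best_action(utility):
    """Return whichever action maximises the proxy utility function."""
    return max(utility, key=utility.get)

print(best_action(actions))  # -> "pump brains full of drugs"
```

    The optimiser isn't malicious; it just has no concept of the intended outcome, only the number it was told to maximise.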

  9. #9
    Quote Originally Posted by Thoughtcrime View Post
    He's being naive; that he happens to be protecting his business interests doesn't change that.

    Apart from the whole "building a brain" challenge, the greatest hurdle with AI is the control problem, which is something we do not yet have an answer to. If they could build an AI without having solved the control problem, which is just as huge a challenge, then the likely result is something similar to the Paperclip Maximizer.

    The reason the control problem is so difficult is that machines need a goal (a utility function), but we also need to develop machines with human values so that their goals align with our own. How do you quantify and program human values? What is the mathematical expression for happiness, love, fear, or hate?

    If I COULD program and build an AI with the goal of making humans happy, how do I know that the maximal outcome of that goal is what I intended? Maybe I want the machine to provide a utopian society with a high standard of living, to develop cures for all human diseases, to extend human lifespans ten-fold, and to invent new fuels and means of generating energy that open up the stars to our civilisation. That may be what I WANT, but it's not what maximises the machine's utility function. Whatever does maximise an AI's utility function is probably something fucking terrifying, like pumping my brain full of drugs so that I'm ecstatic and incapable of anything more than dribbling in a chair and smiling to myself. Maybe it's even more abstract and goes to the nature of what "I" am (because how do you define a person in mathematics?); maybe it just creates a copy of a human brain and runs the same memory on a loop as fast as it can, forever.
    Though I'd agree that strong AI would, in theory, very likely lead to a general lack of control, it is wrong to say that there's any evidence of a lack of control in modern AI. The problems stem from a lack of insight: no different from Microsoft's debacle, they naively didn't account for what data would be fed into their algorithms and, like practically all of the industry, didn't account for the fact that a model is trained to fit the data, not vice versa. You still control the data being processed, the algorithm that processes it, and so on. Despite having built some amazing and very capable machine learning and AI systems, we are nowhere close to losing control.
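    The "model is trained to fit the data, not vice versa" point can be shown with a trivial toy sketch in Python. This is my own example, not anyone's real system; the data and function names are hypothetical. The simplest possible "model" here just learns the mean of its training data, and it faithfully fits whatever it is fed, garbage included:

```python
# Toy illustration: a model fits whatever data it is given.
def train(samples):
    """'Train' the simplest possible model: learn the mean of the data."""
    return sum(samples) / len(samples)

clean_data = [1.0, 2.0, 3.0]
skewed_data = [1.0, 2.0, 3.0, 1000.0]  # one bad input nobody vetted

print(train(clean_data))   # 2.0
print(train(skewed_data))  # 251.5 -- the model dutifully fits the bad input
```

    The model didn't escape anyone's control; whoever curates the inputs controls the output.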

  10. #10
    Quote Originally Posted by Nihilist74 View Post
    http://english.manoramaonline.com/bu...i-systems.html

    I think even if we do not intend to make a self-aware AI, we will keep making computers that are faster and smarter, and we will make more complex algorithms. Eventually we will have a self-aware, intelligent computer. After all, that is not much different from how humans came about.

    Intelligent animals seem to have had a good knack for survival and reproduction, becoming more and more intelligent with each generation.
    There's already a thread on this:

    http://www.mmo-champion.com/threads/...r-own-language

  11. #11
    Thoughtcrime
    Quote Originally Posted by noipmahc-omm View Post
    Though I'd agree that strong AI would, in theory, very likely lead to a general lack of control, it is wrong to say that there's any evidence of a lack of control in modern AI. The problems stem from a lack of insight: no different from Microsoft's debacle, they naively didn't account for what data would be fed into their algorithms and, like practically all of the industry, didn't account for the fact that a model is trained to fit the data, not vice versa. You still control the data being processed, the algorithm that processes it, and so on. Despite having built some amazing and very capable machine learning and AI systems, we are nowhere close to losing control.
    Modern AI is narrow AI; it's not general intelligence. With narrow AI, the control problem (not that there really is one in narrow AI) is as simple to solve as turning off the machine. With general intelligence, the problem is almost as great as developing the intelligence in the first place, and there are no easy answers. But no, we're not close to building one, which is just as well, because we're not close to solving the control problem, and that must come first.

    The thing that scares me the most is that the amount of money being thrown at this by countries and private companies desperate to be first to develop general intelligence is completely out of proportion to the money being spent on the control problem, which is being researched by comparatively few people. Of course it's this way, because they see the immense and unprecedented economic benefits, but general AI represents a technological paradigm shift and a potential existential threat unlike anything ever seen before, with no going back, and we need to be ready for that.

    If people are dismissive of the dangers, or have no concern over the potential of general intelligence for good and ill, then it's evidence that they don't know enough about the subject. This is what I mean when I say that the sentiment expressed by Musk is responsible and that Zuckerberg's stance is dangerously naive.
    Last edited by Thoughtcrime; 2017-08-02 at 11:49 PM.
