Page 2 of 5 FirstFirst
1
2
3
4
... LastLast
  1. #21
    Quote Originally Posted by Vamperica View Post
    You know people are worried about AI killing us, but truthfully we will make AI sex bots and stop having kids....and we will just die out.
    it will never happen! or you will be abble to be friendzone by your sexdoll!!!

    More seriously i think the big deal are emotions. we have emotion but it's biological stuff in our brain, if a machine cant have emotions.. sound bad for us. we dont need mechanical a psychopath.


    Quote Originally Posted by ComputerNerd View Post
    AI itself won't be just one or the other.
    But a specific implementation of it.
    One specific example is going to do either great stuff or terrible stuff.

    great stuff could also be terrible. atome division for exemple but i agree all depend of the firt use of the AI
    Last edited by Niaraa; 2017-08-17 at 10:32 AM.

  2. #22
    Quote Originally Posted by xskarma View Post
    I agree with Elon Musk that we should start making serious legislation and rules for what we can do with AI and how we develop them. This is not something that should go unregulated and left to whatever people themselves think is okay and a good idea.
    Time to set up the Turing Registry!

  3. #23
    Herald of the Titans RaoBurning's Avatar
    10+ Year Old Account
    Join Date
    Sep 2009
    Location
    Arizona, US
    Posts
    2,728
    Quote Originally Posted by Thoughtcrime View Post
    If it's general intelligence it will find a way. What then?

    Boxing an AI is not solving the control problem.
    Thinking a real AI is a thing to be controlled is the problem. If it's fully sapient, we should treat it as such and not keep a shock collar on it.
    Quote Originally Posted by Wells View Post
    This is America. We always have warm dead bodies.
    if we had confidence that the President clearly did not commit a crime, we would have said that.

  4. #24
    The Unstoppable Force Theodarzna's Avatar
    7+ Year Old Account
    Join Date
    Sep 2015
    Location
    NorCal
    Posts
    24,166
    I'm skeptical we can.
    Quote Originally Posted by Crissi View Post
    i think I have my posse filled out now. Mars is Theo, Jupiter is Vanyali, Linadra is Venus, and Heather is Mercury. Dragon can be Pluto.
    On MMO-C we learn that Anti-Fascism is locking arms with corporations, the State Department and agreeing with the CIA, But opposing the CIA and corporate America, and thinking Jews have a right to buy land and can expect tenants to pay rent THAT is ultra-Fash Nazism. Bellingcat is an MI6/CIA cut out. Clyburn Truther.

  5. #25
    Sure as long as its an air gaped system with no connection to the internet.

  6. #26
    Herald of the Titans
    10+ Year Old Account
    Join Date
    Mar 2012
    Posts
    2,545
    I think it's more important and interesting to ask if we will rather than should we develop AI. To some degree AI has been around for awhile, it's just a matter of how "intelligent" we want to let AI get. The first gen of AI I think were the old Eliza-type programs 20 years ago that would respond to questions with a random generic answer like the Magic 8 balls. Now we're definitely in the second gen of AI with autonomous cars, smart-everything, AI in factories, and high-tech AI like IBM's Watson.

    It doesn't seem like there will be much chance to slow down AI since it has such a strong driver behind it in helping businesses automate. That's the golden goal of every company it seems, replacing human labor with far less expensive machines and robots. If AI allows Transportation workers to eliminate human trick drivers with AI autonomous trucks, that is huge for profits. Same with many lower-skilled jobs. It used to just be that automation would allow companies to replace many unskilled workers with machines. But AI opens the door to replacing many more jobs that humans do now with automation. Not to mention military applications. So I don't see much slowing it down. If anything it's set to grow and advance quickly now.

  7. #27
    No

    Just like the ancient Britons should never have let in the Anglo-Saxons we humans should never create something that could easily replace us.

    But alas we humans are sure to do it

  8. #28
    I don't think the issue right now is Skynet or the Matrix. I think the bigger issue is that we may be putting too much trust into AI systems to make important decisions when they are still basically only as good as the engineers who have created them. Like the dude who got decapitated in his Tesla when the on-board AI failed to recognize a truck turning directly in front of the car. However, maybe in 100 years when we have quantum or biological computers that can operate a billion times faster than anything currently in existence, then AI systems may become so complex that we can no longer understand why they do what they do. And if the systems in control of things like the power grid are allowed to determine when they are in danger, and if they also control offensive military systems, then maybe we could have a problem. In any case, it doesn't matter what restrictions we put on usage, the research will continue unbounded.

  9. #29
    Mechagnome Thoughtcrime's Avatar
    7+ Year Old Account
    Join Date
    Oct 2014
    Location
    Exeter. United Kingdom.
    Posts
    662
    Quote Originally Posted by RaoBurning View Post
    Thinking a real AI is a thing to be controlled is the problem. If it's fully sapient, we should treat it as such and not keep a shock collar on it.
    Solving the control problem is about building an AI that doesn't need to be controlled because it's goals align with ours and yes this does need to happen before you ever turn it on or you've just created a machine that will optimise it's utility function without any concern for anything outside of that function, including human beings, their civilisation, the planet, or anything on it.

    Machine intelligence needn't think like humans at all while still being incredibly powerful. That's what the Paperclip problem is all about. (Google Paperclip maximiser).

    It's one of the greatest hurdles associated with developing true AI and we need to be sure we get it right the first time. Artificial general intelligence will be the most powerful optimisation process the universe has ever seen and if we ignore all the literature on AI safety from the past 30 years and just build one that does engage in a treacherous turn and gain a decisive strategic advantage what's potentially at risk is humanities cosmic endowment, the maximum possible sphere of human influence if travel at light speed were possible. (That is, the local group of galaxies).

    There's no taking it back once the genie is out of the bottle.
    Last edited by Thoughtcrime; 2017-08-19 at 10:34 AM.

  10. #30
    Seeking more knowledge and better tools is our primary function as a group since its the way we have evolved to survive. There shall never be a limit until we accomplish something and than decide how we should use it or not. An easy example is nuclear power, going by the same cautious we would not have mastered it. No its better to master it, then know some of its bad application simply needs to be monitored as a result. Just the same AI needs to be made, only once its made can we realize how it should be used.

  11. #31
    Mechagnome Thoughtcrime's Avatar
    7+ Year Old Account
    Join Date
    Oct 2014
    Location
    Exeter. United Kingdom.
    Posts
    662
    Quote Originally Posted by Ouch View Post
    Seeking more knowledge and better tools is our primary function as a group since its the way we have evolved to survive. There shall never be a limit until we accomplish something and than decide how we should use it or not. An easy example is nuclear power, going by the same cautious we would not have mastered it. No its better to master it, then know some of its bad application simply needs to be monitored as a result. Just the same AI needs to be made, only once its made can we realize how it should be used.
    General intelligence is different. You can switch off a nuclear reaction. You cannot switch off an intelligence explosion. What if you got it wrong the first time and didn't realise until it's too late? That's what would happen in the scenario of a treacherous turn.

    You don't get to go back to the drawing board and refine anything because it's out of your hands by then.

  12. #32
    Quote Originally Posted by Thoughtcrime View Post
    General intelligence is different. You can switch off a nuclear reaction. You cannot switch off an intelligence explosion. What if you got it wrong the first time and didn't realise until it's too late? That's what would happen in the scenario of a treacherous turn.

    You don't get to go back to the drawing board and refine anything because it's out of your hands by then.
    Its not, nothing about AI makes it connected to everything and uncontroable, unless you make it be. Just like building the bombs is not the reason we dropped them. They could have never have been dropped on Japan, but we would have still made them. What we invent is never out of our hands unless we decide to.

  13. #33
    Mechagnome Thoughtcrime's Avatar
    7+ Year Old Account
    Join Date
    Oct 2014
    Location
    Exeter. United Kingdom.
    Posts
    662
    Quote Originally Posted by Ouch View Post
    Its not, nothing about AI makes it connected to everything and uncontroable, unless you make it be. Just like building the bombs is not the reason we dropped them. They could have never have been dropped on Japan, but we would have still made them. What we invent is never out of our hands unless we decide to.
    Bombs can't think for themselves. They won't stop you turning them off, or changing their yield, or taking them apart and starting again. People do though, because they're intelligent. I'm not going to let you turn me off, or reprogram my brain to like the things you want me to, or take me apart and redesign me. Artificial general intelligence is closer to a living thing than an inanimate object in that it will fight for its survival based on simple computer logic, it wants to optimise it's utility function, if you do anything to interfere with that function it will stop you not because it's evil or broken but simply because it is optimal to do so. It cannot perform it's function if you change its function, it cannot perform it's function if you turn it off or take it apart. It can perform it's function if it kills the human trying to do those things.

  14. #34
    Quote Originally Posted by Thoughtcrime View Post
    Bombs can't think for themselves. They won't stop you turning them off, or changing their yield, or taking them apart and starting again. People do though, because they're intelligent. I'm not going to let you turn me off, or reprogram my brain to like the things you want me to, or take me apart and redesign me. Artificial general intelligence is closer to a living thing than an inanimate object in that it will fight for its survival based on simple computer logic, it wants to optimise it's utility function, if you do anything to interfere with that function it will stop you not because it's evil or broken but simply because it is optimal to do so. It cannot perform it's function if you change its function, it cannot perform it's function if you turn it off or take it apart.
    And again nothing stops us from making our AI research in a closed system and nothing stops us from turning it off after results. Everything alive, included AI can be turned off. You can be turned off right now. AI is our creation, it can be used or made how we want it to be. The only way to lose control, is to purposefully lose it.

  15. #35
    Human are a very, very intelligent specie, more than we think, remember we created computers, space launchers, internet, etc, in less than 60 years. The science and the science knowledge are getting farther and farther, faster and faster. Soon or later, we will create a IA so much performant, that it WILL become self-aware.
    We talk about Terminator, yes, it's a fiction, but it COULD happen, not today, but in 20 years, it could. But remember Terminator 2: the T-800 had a chip who permited him to learn, and after that chip was activated, he was even able to undestand human feelings, and even to feel them himself.
    Creating an IA is not the true matter, the only real thing is to create an IA who is able to undestand emotions and have empathy, and take decisions according to them.

    In fact, we don't need a cold and purely logical IA, we need to create artificial human beings.

  16. #36
    until quantum computing, real AI is a pipe dream

    might aswell talk about some other topic

  17. #37
    Mechagnome Thoughtcrime's Avatar
    7+ Year Old Account
    Join Date
    Oct 2014
    Location
    Exeter. United Kingdom.
    Posts
    662
    Quote Originally Posted by Ouch View Post
    And again nothing stops us from making our AI research in a closed system and nothing stops us from turning it off after results. Everything alive, included AI can be turned off. You can be turned off right now. AI is our creation, it can be used or made how we want it to be. The only way to lose control, is to purposefully lose it.
    How have you solved the control problem? You haven't, you've just boxed it. It's not safe AI because as soon as it finds a way to circumvent the safety precautions that you've put in place it will do so, gain a decisive strategic advantage, and go straight to optimising it's utility function.

    Say you build a boxed AI, you quickly realise that it is attempting to optimise it's utility function while ignoring all safety procedures you've put in place (What are they? How are they defined in code?) so you turn it off and redesign it. The next time you turn it on, how do you know that you've actually fixed the problem? What tests are you performing to check safety? Are you sure the machine doesn't know that you'll just turn it off again if it doesn't give the appearance of passing all your tests?

    If you're wrong, and there's not really any way to know you are, and that machine ever gets out of it's box; you're fucked.

    Also, the point was not that I can't be turned off but that I will fight you. A general intelligence will be far more effective at stopping you than I ever could be. It's easy to envisage a scenario in which a machine intelligence pretends to be performing as you intended while subtly aligning resources UNTIL it has a decisive strategic advantage at which point a treacherous turn occurs and you cannot stop it. (That's what decisive strategic advantage means).

    In chess, one can know when a decisive strategic advantage is attained many moves in advance. At that point a master chess player will give up as the game is pointless from that point onward because resistance is pointless; there's no way to stop the check mate that's coming 5 - 50 moves from now, you see the same thing in pro-gaming such as SC2 tournaments. If a machine conceals it's nature until that moment passes, (not because it's evil or broken; but because this is the optimal thing to do, remember that it can't perform it's function if you turn it off) again, you wouldn't know it; and now you are fucked.

    In real life with a real general intelligence it would have many times greater intelligence than the entire human race combined and could potentially simulate all possible actions that human beings could take against it so that advantage could manifest years in advance; the machine itself could be working toward it from the second it's switched on without us ever knowing. In which case, we were fucked all along, we just couldn't see it.

    General intelligence that has been built without solving the control problem is just an optimisation process. All it "wants" to do is perform it's utility function optimally, whether that's to make paperclips, solve world hunger, make humans happy, destroy our enemies or create world peace. Without solving the control problem the outcome of all possible utility functions is probably horrifying.
    Last edited by Thoughtcrime; 2017-08-19 at 12:12 PM.

  18. #38
    Hope it comes soon, I want a robot friend, we can kill all humans together or what not.

  19. #39
    Quote Originally Posted by zorkuus View Post
    Err... that's not human based specifically. Most life on earth operates that way. "Peace" is an evolutionary risk. Getting rid of competition is not.
    True, but I meant it as in humans can make a choice in that regard.

    - - - Updated - - -

    Quote Originally Posted by Skulltaker View Post
    The problem is that we only acknowledge humans as actually intelligent. Therefor, most AIs are portrayed not as an artificial intelligence, but an artificial human intelligence.
    I wouldn't say we only acknowledge humans as intelligent. Simians and dolphins, as I recall, are on top of the intelligence chain as well (arguably below us, but still), the thing is we compare everything else to ourselves for deeming ours the most developed intelligence there is out there (that we've encountered, anyway).

  20. #40
    We cannot regulate china. An attempt to develop strict regulations on AI technology would result in China having the best killer robots in the world.
    TO FIX WOW:1. smaller server sizes & server-only LFG awarding satchels, so elite players help others. 2. "helper builds" with loom powers - talent trees so elite players cast buffs on low level players XP gain, HP/mana, regen, damage, etc. 3. "helper ilvl" scoring how much you help others. 4. observer games like in SC to watch/chat (like twitch but with MORE DETAILS & inside the wow UI) 5. guild leagues to compete with rival guilds for progression (with observer mode).6. jackpot world mobs.

Posting Permissions

  • You may not post new threads
  • You may not post replies
  • You may not post attachments
  • You may not edit your posts
  •