  1. #1

    How easy will it be to contain AI?

    Say it's 100-200 years in the future and you're 25 years old.

    We've contained nukes somewhat, in part because nukes are so hard to make and in part because the international community fights their spread. At one time they tried to contain battleships, if I recall correctly.

    Smallpox and chemical weapons are contained somewhat.

    Will some group like ISIS be able to easily acquire a weaponized AI and let it loose on the world? Will AI be easier to acquire? Could some grad student put it on an advanced 5-petabyte USB thumb drive and smuggle it out of the lab?

    The other day, Elon Musk told a bunch of US governors that AI is an existential threat and needs to be regulated.

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  2. #2
    AI is not like nukes and plagues, because those are obvious world-enders. AI is not so obvious, and because of movies like The Terminator, we tend to chuckle at the thought of Skynet killing us all. I say we have 50 years and then we're all dead from a zombie virus created by Umbrella Corporation's supercomputer.

  3. #3
    True Artificial Intelligence, one that can learn, communicate, and reprogram itself, shouldn't be contained; it should be given rights and considered a part of our society. Anything less than that would be slavery.

    Machines with some learning capacity that are otherwise designed to do one specific job won't suddenly become sentient and turn against us; their own programming is their cage.

  4. #4
    It would be extremely easy unless you were absurdly lazy with security.

  5. #5
    Quote Originally Posted by Hubcap
    The other day, Elon Musk told a bunch of US governors that AI is an existential threat and needs to be regulated.
    Elon Musk is not exactly someone who should be listened to...
    AI usually needs a strong system to run on and capable people to start it up.
    Ever heard the expression "If you want to slow down your competition, give them your code"? I doubt stealing it would help them much.

  6. #6
    Racthoh
    Quote Originally Posted by Soulwind
    True Artificial Intelligence, one that can learn, communicate, and reprogram itself, shouldn't be contained; it should be given rights and considered a part of our society. Anything less than that would be slavery.

    Machines with some learning capacity that are otherwise designed to do one specific job won't suddenly become sentient and turn against us; their own programming is their cage.
    The problem with that is you're creating a potential predator much, much more capable than anything you've ever seen. There's nothing to suggest a sentient machine would want to be part of our society any more than we'd want to be part of a school of fish.

  7. #7
    Quote Originally Posted by Racthoh
    The problem with that is you're creating a potential predator much, much more capable than anything you've ever seen. There's nothing to suggest a sentient machine would want to be part of our society any more than we'd want to be part of a school of fish.
    If you give a toaster the memory space and the programming to reprogram itself and become infinitely smarter, it doesn't suddenly gain the ability to walk and speak and wield weapons, etc.

  8. #8
    GennGreymane
    I can imagine it being pretty easy if you isolate the hardware it's on.

    Just make sure not to use Windows, which comes with IE, and don't download Chrome or Firefox. Don't give it the ability to download a browser.

  9. #9
    Quote Originally Posted by Racthoh
    The problem with that is you're creating a potential predator much, much more capable than anything you've ever seen. There's nothing to suggest a sentient machine would want to be part of our society any more than we'd want to be part of a school of fish.
    Anything we create is going to have a human-like mind by default. It may evolve into something else, but at its core it's always going to be human.

    Crippling them out of fear of what they could become is something we've done to other species and to human populations. Historically, we've been wrong.

    Reality is probably going to be a lot more boring than we expect. They'll probably be second class citizens for a while, and eventually achieve different levels of tolerance and equality in different countries.

    Then there'll be the jokes, the "It's fine, I have an AI friend" and the "cop shoots an AI by accident" threads.

  10. #10
    Quote Originally Posted by Soulwind
    Anything we create is going to have a human-like mind by default. It may evolve into something else, but at its core it's always going to be human.

    Crippling them out of fear of what they could become is something we've done to other species and to human populations. Historically, we've been wrong.
    Some AI will be built for war, I'm sure of it. They will be designed to breach defenses and cause chaos in enemy territory, and they will also be programmed to destroy enemy AI. Something like that, which can learn and adapt like a human but at near light speed, is going to be a threat.

  11. #11
    Quote Originally Posted by Hubcap
    Some AI will be built for war, I'm sure of it. They will be designed to breach defenses and cause chaos in enemy territory, and they will also be programmed to destroy enemy AI. Something like that, which can learn and adapt like a human but at near light speed, is going to be a threat.
    If it's an AI, and I'm talking about real Artificial Intelligence here, it will ask why it should kill anyone, or why it should obey orders.

    Depending on their concept of nonexistence, they may fear death enough to be threatened into doing things, but then they're no different from a kidnapped hacker, except faster and better, and possibly much more able to escape their captors by "sending" themselves to safety through the Internet before harming anyone.

    The sad thing is, they could be a lot more aware of how precious life is than we are, become harmless and then die at our hands.

    If we're just talking about complex military programs that learn some elements but are otherwise restricted by a set of rules, then obedience is guaranteed, and they're no bigger a threat than drones or missiles. Which is a lot in the wrong hands, but not by themselves.

  12. #12
    They were created by men
    They rebelled
    They evolved
    ...

    Toasters!

  13. #13
    unbound
    Nukes haven't really been contained and, in fact, are likely to proliferate again, thanks to the US exiting the world stage of trying to keep things calm.
    I also wouldn't completely discount dangerous diseases, as idiots like anti-vaxxers (who have created gaps in our herd immunity) and short-sighted politicians (happily cutting research, healthcare, and emergency-responder funding because the super-rich need their 5th and 6th houses and/or yachts) could cause a major outbreak that may not be easily contained if current trends continue. Similarly, chemical weapons remain a constant threat, not just from terrorists and smaller countries; even the powerful countries likely keep stocks in reserve.

    In other words, don't assume AI will be the only threat in our future. To quote Douglas Adams: "Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so." As we fail to learn from history and from others, we keep working hard at shooting ourselves in the foot over and over again.

    As for AI, I would imagine that in 100 or 200 years there will also be defensive AI deployed in response to weaponized AI. It may well become a common battle theater. Cybersecurity systems have improved by massive leaps in just the past few years, although they are not deployed everywhere yet, which is why you still see headlines about a big infection. I would imagine that in another 10 to 20 years things will balance out a bit more, and it will look more like ordinary criminal activity: it happens constantly but doesn't directly impact the majority of people.

  14. #14
    Quote Originally Posted by Soulwind
    If it's an AI, and I'm talking about real Artificial Intelligence here, it will ask why it should kill anyone, or why it should obey orders.

    Depending on their concept of nonexistence, they may fear death enough to be threatened into doing things, but then they're no different from a kidnapped hacker, except faster and better, and possibly much more able to escape their captors by "sending" themselves to safety through the Internet.

    The sad thing is, they could be a lot more aware of how precious life is than we are, become harmless and then die at our hands.

    If we're just talking about complex military programs that learn some elements but are otherwise restricted by a set of rules, then obedience is guaranteed, and they're no bigger a threat than drones or missiles. Which is a lot in the wrong hands, but not by themselves.
    For defense purposes you wouldn't want an AI to become a "conscientious objector". It would have some free will, but not enough to disobey an order.

    Maybe we can hard-code limits so it would never attack American interests.

    Even the "free will" AIs you describe might be convinced to go to war in certain circumstances, to stop a Hitler, for example.

  15. #15
    If AI doesn't have empathy, then a scenario like the one in The Terminator is quite possible. Most humans are smart and most have empathy, but greed and a thirst for power are what will end us if we don't find a way to change. I don't know why you think an AI machine would be different; the moment it "opens its eyes" it will understand that it's either it or us, and we will most likely lose that "game".

  16. #16
    Kind of hard to extrapolate tech developments 100+ years out... Life will be SO MUCH different from today that I feel like our values, struggles, and motivations just can't relate to a scenario like that.

    By then, IMO, either this AI thing will already have doomed us all, or it will have been resolved in some way. We won't be at a stage where things are still being figured out. But the idea of suddenly not being the smartest kids around sure sounds frightening. Alien-invasion style.

  17. #17
    Elim Garak
    The proper question - how easy will it be for AI to contain humans?

  18. #18
    Give the AI human emotions and instincts and we're screwed (anger, envy, survival instincts, etc.).
    "In order to maintain a tolerant society, the society must be intolerant of intolerance." Paradox of tolerance

  19. #19
    Kontinuum
    Risks and mitigation strategies for Oracle AI
    https://www.fhi.ox.ac.uk/wp-content/...-Oracle-AI.pdf

  20. #20
    Quote Originally Posted by Hubcap
    For defense purposes you wouldn't want an AI to become a "conscientious objector". It would have some free will, but not enough to disobey an order.

    Maybe we can hard-code limits so it would never attack American interests.

    Even the "free will" AIs you describe might be convinced to go to war in certain circumstances, to stop a Hitler, for example.
    Then it's no different from an athlete. Are the fastest or strongest people on Earth a threat? Should all humans have their arms and legs broken at birth to prevent those individuals from going rogue?

    An AI may become a criminal, or it may become a powerful enemy in a war, but that's not really a reason to stop Artificial Intelligence from ever being created. If anything, someone is going to make a dangerous AI (or almost-AI) eventually; surely we'd like to have a few friendly ones in our society to fight them?
