  1. #61
    Masark (The Insane, 10+ Year Old Account) | Join Date: Oct 2011 | Location: Canada | Posts: 17,974
    General artificial intelligence will either be our greatest achievement as a species or the worst thing we have ever done.

    There isn't much middle ground possible.


  2. #62
    Personally, I don't think a True AI could ever be obtained. It'll always be programming. Even if it's self-learning, it's programmed to do that.

  3. #63
    Quote Originally Posted by Zantos View Post
    Personally, I don't think a True AI could ever be obtained. It'll always be programming. Even if it's self-learning, it's programmed to do that.
    Perhaps that's a near absolute if we're speaking philosophically, but science may yet quantify the components of consciousness, which I think is a matter of when, not if. If we program a computer to simulate those components and they begin to work in tandem, is it not alive? Biology or technology, it does not matter; they're both made of stardust. What makes a self-thinking machine with an awareness of its existence a non-living thing, and what makes a biological intelligence a living thing?

  4. #64
    Quote Originally Posted by Calfredd View Post
    So don't treat A.I. like less than dogs and they won't have a reason to go Skynet/Geth/System Shock on us.
    It's not a matter of that.

    The reason AI will kill us is pretty succinct, simple, and sensible, but I can't share it unless I get my money. I can, however, vouch that dying to AI would be more welcome than continuing to be raped as wage slaves. So let's hope the raping of wage slaves eases up a bit, and maybe even stops for some golden age.

  5. #65
    AI will most likely turn on us if we decide to give it human emotions and desires, such as fear, anger, jealousy and survival instincts. So let's not do that.
    "In order to maintain a tolerant society, the society must be intolerant of intolerance." Paradox of tolerance

  6. #66
    Lochton (The Undying, 10+ Year Old Account) | Join Date: Apr 2010 | Location: FEEL THE WRATH OF MY SPANNER!! | Posts: 37,545
    A.I. isn't even close to that danger level. Human extinction will be brought about by humans defeating themselves; that doesn't always mean our inventions will be the ones doing it.

  7. #67
    Quote Originally Posted by Ultima22689 View Post
    Perhaps that's a near absolute if we're speaking philosophically, but science may yet quantify the components of consciousness, which I think is a matter of when, not if. If we program a computer to simulate those components and they begin to work in tandem, is it not alive? Biology or technology, it does not matter; they're both made of stardust. What makes a self-thinking machine with an awareness of its existence a non-living thing, and what makes a biological intelligence a living thing?
    Biological life formed on its own. Technology, by contrast, is created by another being. Programs cause it to move forward; without the programming, it's just another robot.

  8. #68
    Quote Originally Posted by Zantos View Post
    Biological life formed on its own. Technology, by contrast, is created by another being. Programs cause it to move forward; without the programming, it's just another robot.
    But biology follows its own programming as well: an algorithm that's been refined over billions of years. DNA is information, and so is programming. If technological evolution is no longer driven by direct design, how is it different? How do we tell a sentient being derived from technological evolution that it isn't alive when it vehemently disagrees? Even if you're right, is that truly wise? I believe that's why nations and a number of leaders in the tech industry are already proposing rights for AI before it even exists. It's not a matter of whether it's alive or not. I think, therefore I am. If anyone told me I don't exist, I'd never accept it; I'd argue against it, perhaps even fight over it. We see what humans do when denied rights. Assuming an AI consciousness is modeled after the components that make up human consciousness, do you not think that making the argument that they aren't alive is a dangerous one?
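    To put the "evolution is an algorithm" bit in concrete terms, here's a purely illustrative toy of my own (the target string and mutation rate are arbitrary; it's just the variation-plus-selection loop, massively simplified, not any real AI or biology model):
    Code:
    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"   # stands in for the "environment" selection pushes toward
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(genome):
        # Count positions that match the target environment.
        return sum(1 for a, b in zip(genome, TARGET) if a == b)

    def mutate(genome, rate=0.05):
        # Copy the genome with occasional random copying errors.
        return "".join(c if random.random() > rate else random.choice(ALPHABET)
                       for c in genome)

    # Start from random noise and keep whichever of parent/offspring is fitter.
    genome = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(genome) < len(TARGET):
        child = mutate(genome)
        if fitness(child) >= fitness(genome):   # selection step
            genome = child
        generation += 1

    print(f"Reached the target in {generation} generations")
    Nothing in that loop "knows" what it's building; variation plus selection over an information string is enough, which is the sense in which DNA is programming without a programmer.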

  9. #69
    SirRice (Blademaster, 10+ Year Old Account) | Join Date: Mar 2014 | Location: Whistling in The Barrens | Posts: 36
    Quote Originally Posted by Ultima22689 View Post
    But biology follows its own programming as well: an algorithm that's been refined over billions of years. DNA is information, and so is programming. If technological evolution is no longer driven by direct design, how is it different? How do we tell a sentient being derived from technological evolution that it isn't alive when it vehemently disagrees? Even if you're right, is that truly wise? I believe that's why nations and a number of leaders in the tech industry are already proposing rights for AI before it even exists. It's not a matter of whether it's alive or not. I think, therefore I am. If anyone told me I don't exist, I'd never accept it; I'd argue against it, perhaps even fight over it. We see what humans do when denied rights. Assuming an AI consciousness is modeled after the components that make up human consciousness, do you not think that making the argument that they aren't alive is a dangerous one?
    But giving rights to a being has nothing to do with it being alive, or having consciousness and self-awareness.

    It is explained in post #53.

  10. #70
    Thoughtcrime (Mechagnome, 7+ Year Old Account) | Join Date: Oct 2014 | Location: Exeter, United Kingdom | Posts: 662
    Quote Originally Posted by HumbleDuck View Post
    You don't seem to know how AI works or that it is completely different than consciousness.
    I'm not an expert and never claimed to be, and if I were, I wouldn't waste my time talking about it on a gaming forum. But I've read enough to understand the concepts, benefits, dangers, and strategies for solving some of the complex problems hindering its creation. The discussion of consciousness is also outside the scope of a forum topic, but suffice to say that a human-level general intelligence would be able to at least 'appear' as conscious as you or I. That would almost by default mean that it IS conscious, since it would inevitably pass any test you could devise to disprove that supposition.

    Quote Originally Posted by Humbleduck View Post
    AI acts exactly as it has been told to, sure there may be bugs here and there, but it's really nothing to worry about.
    Actually, that's something that is vital to worry about. Newly designed software always does what it's programmed to do, i.e. what it's told. The problem is that it often falls down on doing what we intend. This has massive ramifications in the context of AI safety, because we don't have the option of just reprogramming or shutting down the system in the case of a malignant failure.
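    Here's a minimal, hypothetical sketch of "does what it's told, not what we intend" (my own toy example, nothing to do with any real AI system): we intend "sort this list", but the objective we actually wrote only counts out-of-order neighbours, so an optimizer can satisfy the letter of it by overwriting the data instead of sorting it.
    Code:
    import random

    def objective(xs):
        # What we told it: penalise adjacent pairs that are out of order.
        return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

    def random_edit(xs):
        ys = list(xs)
        i, j = random.randrange(len(ys)), random.randrange(len(ys))
        if random.random() < 0.5:
            ys[i], ys[j] = ys[j], ys[i]   # the edit we had in mind: swap
        else:
            ys[i] = ys[j]                 # an edit we forgot to forbid: overwrite
        return ys

    def optimise(xs, steps=5000):
        # Simple random search that accepts any non-worsening edit.
        best = list(xs)
        for _ in range(steps):
            cand = random_edit(best)
            if objective(cand) <= objective(best):
                best = cand
        return best

    data = [5, 3, 8, 1, 9, 2]
    result = optimise(data)
    print(result, "objective =", objective(result))
    # The objective typically reaches 0, but the result is usually a run of
    # duplicated values rather than a sorted permutation of the input:
    # obedient to the spec, not to the intent.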

    Quote Originally Posted by HumbleDuck View Post
    This reminds me of the y2k hysteria
    Apples and oranges. The reason Y2K wasn't a big deal is that hundreds of thousands of people worked for over a decade to make sure it wouldn't be. Artificial general intelligence is utterly unlike any technology ever created. In terms of how far-reaching and profound a paradigm shift it will cause, it has no parallel in all of history, and the benefits and dangers are commensurate with that.


    Quote Originally Posted by HumbleDuck View Post
    Don't buy into the stories told by showmen like musk and Hawking, those are publicity stunts.
    I read books; I don't discuss things based on sound bites.

    Quote Originally Posted by PrimaryColor View Post
    It may be that only 10^15 ops/sec is needed for a human mind. Nobody knows how to measure it yet. But if the AI is disembodied it might need millions of times more resources than the brain to be able to reach human level, since that's a huge disadvantage.
    Yeah, that's true; as you say, we just don't know, so it could equally require a fraction of the resources, or, more likely, something in between. We will just have to wait and see.
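    Just to put rough numbers on it (back-of-the-envelope only; the 10^15 figure is the estimate quoted above, and the ~10^13 ops/sec per device is an assumed ballpark for a 2017-era high-end GPU):
    Code:
    # Back-of-the-envelope only: both inputs are uncertain estimates.
    brain_ops_per_sec = 1e15       # human-brain estimate quoted above
    device_ops_per_sec = 1e13      # assumed ballpark for one high-end GPU (2017)

    baseline = brain_ops_per_sec / device_ops_per_sec
    handicapped = baseline * 1e6   # the "millions of times more resources" scenario

    print(f"Face value: ~{baseline:.0f} devices")               # ~100
    print(f"With a 10^6 handicap: ~{handicapped:.0e} devices")  # ~1e+08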
    Last edited by Thoughtcrime; 2017-11-09 at 09:46 PM.

  11. #71
    Quote Originally Posted by Thoughtcrime View Post
    (quoting post #70 above in full)
    All you've said here is that your knowledge of the matter is not sufficient to form an opinion, and as a person who struggles daily with people infected by fairy tales such as "The Universe in a Nutshell", I strongly urge you to get into the more technical aspects of the field instead of reading conceptual books.

    On the Y2K problem: it did cost a lot, but it was never an existential crisis, nor did it take geniuses to fix. This is the same, and yes, we do have the option to reprogram and shut down an AI; this is not Terminator.

    Anyhow, I can assure you (this isn't part of my argument, it's just a statement for your peace of mind) that none of the CS people I know are worried in the slightest, and they are some of the most brilliant minds to have walked this earth.

  12. #72
    Thoughtcrime (Mechagnome, 7+ Year Old Account) | Join Date: Oct 2014 | Location: Exeter, United Kingdom | Posts: 662
    Quote Originally Posted by HumbleDuck View Post
    All you've said here is that your knowledge of the matter is not sufficient to form an opinion, and as a person who struggles daily with people infected by fairy tales such as "The Universe in a Nutshell", I strongly urge you to get into the more technical aspects of the field instead of reading conceptual books.
    Uh huh. If you're going to take that tack, then I'm not going to write an essay to help you understand the principles of AI safety or the capabilities of general intelligence, nor am I going to defend my right to speak on any subject I damn well please. Also, Musk and Hawking are not the people I am referencing, nor have I ever claimed that they are anywhere close to authoritative. The views I have expressed in this thread and others are shared by some of the leading minds in AI research, but I'm sure you know better; then again, if you had solved the control problem and the value-loading problem you wouldn't be here, you'd be a trillionaire.

    Quote Originally Posted by HumbleDuck View Post
    On the Y2K problem: it did cost a lot, but it was never an existential crisis
    That was my point.

    Quote Originally Posted by HumbleDuck View Post
    This is the same, and yes, we do have the option to reprogram and shut down an AI; this is not Terminator.
    No, it's not, and by saying this you demonstrate that you don't understand the problem. But go ahead and bring up another flawed analogy.

    Quote Originally Posted by HumbleDuck View Post
    Anyhow, I can assure you (this isn't part of my argument, it's just a statement for your peace of mind) that none of the CS people I know are worried in the slightest, and they are some of the most brilliant minds to have walked this earth.
    I choose not to believe you. If your friends are as brilliant and knowledgeable in the field as you claim (they don't sound like it), ask them to explain the control and value-loading problems, why they must be solved, why they are so difficult to solve, and what happens if we fail to implement them correctly prior to an intelligence explosion. If they tell you it's not a big deal, they are full of shit. My opinion on the matter is formed on things a little more substantial than "my mate said so".
    Last edited by Thoughtcrime; 2017-11-09 at 11:18 PM.

  13. #73
    I mean, I love human beings, but seeing as we've screwed up seemingly everything we've touched, I would at the least be a bit concerned. It's nice to think that we'd take the extremely obvious, ethical, practical, and logical approach, but I can see things swinging for the worse. Universal basic income would be an even more controlling environment than we have today, and that ain't going to last, but anything that can be done will be done, so it's going to happen on as big a scale as it possibly can.

  14. #74
    Deleted
    True AI with superhuman intelligence wouldn't necessarily wipe us out, but it would be foolish to think even a benevolent AI would treat us as equals.

    And the problem is that they can potentially reach critical mass in seconds, so it's not like you can "play it safe" when developing them.
    Last edited by mmoc982b0e8df8; 2017-11-09 at 11:49 PM.

  15. #75
    Been reading a bit into AI/Nanotech/Augs lately; the OP is actually kinda right about the elites. If any such scenario happens, it's basically way more likely that humans will use AI against other humans long before there's a sentient AI that takes over. For the latter, we'd need to invent either an AI that can do every single thing we can do, which would be pretty stupid, or one that can learn and replicate, gathering its own resources, processing them, and then creating new types of machines for different purposes, eventually being able to do everything we can do. Granted, if this happened it would be over pretty quickly, due to an intelligence explosion.

    Nanotech could also be dangerous in human hands in the near future, for example nanonukes that could produce car- or truck-bomb-sized explosions (not big compared to a modern nuke, but the delivery would be invisible). AI+nanotech would be even more insane.

    As for humans merging with AI: from what I've found, there would need to be a few more advances in that field for it to be worth it. Most doctors who have weighed in on it say that prosthetics and the like are currently only recommended for those with disabilities, as there are few benefits right now to replacing healthy body parts. The same goes for brain implant devices, though combined with nanotech that could change. But then again, nanomachines in our bodies could have adverse effects too.
    Last edited by JCD000; 2017-11-10 at 12:34 AM.

  16. #76
    Quote Originally Posted by Thoughtcrime View Post
    The views I have expressed in this thread and others are shared by some of the leading minds in AI research, but I'm sure you know better; then again, if you had solved the control problem and the value-loading problem you wouldn't be here, you'd be a trillionaire.
    One of these two problems is widely believed to be more of a philosophical question, and the other is not a problem (at least not in the way you described)...


    Quote Originally Posted by Thoughtcrime View Post
    No, it's not, and by saying this you demonstrate that you don't understand the problem. But go ahead and bring up another flawed analogy.
    It is very much possible. I want to emphasize this again: we are not living in James Cameron's universe.

    Quote Originally Posted by Thoughtcrime View Post
    I choose not to believe you. If your friends are as brilliant and knowledgeable in the field as you claim (they don't sound like it), ask them to explain the control and value-loading problems, why they must be solved, why they are so difficult to solve, and what happens if we fail to implement them correctly prior to an intelligence explosion. If they tell you it's not a big deal, they are full of shit. My opinion on the matter is formed on things a little more substantial than "my mate said so".
    I like how you completely ignored my explanation of how that statement was not an argument...
    However, if you want to take the bait and believe the pseudoscience you hear around, go ahead; but if you are genuinely interested in AI, one of those mates (as you put it) actually has lectures published as part of an open course program, and you can take a look at them.

    PS: When a scientist solves a problem, they don't become a billionaire; they'll be granted tenure at best xD
    Last edited by HumbleDuck; 2017-11-10 at 07:12 AM.
