  1. #41
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by PvPHeroLulz View Post
    You give validity to an issue that does not exist.

    There is no AI that yields comprehension; nor is there a need for an AI to yield comprehension.
    A superintelligent AI could comprehend anything just as well as, or even better than, humans.

  2. #42
    Deleted
    Quote Originally Posted by Arganis View Post
    I feel stupid listening to this guy but he keeps repeating the word magic, which imo is usually invoked when what you're talking about has no current basis in reality.

    There's entirely too much conjecture in this video, based on things that are currently technically impossible, aka 'magic'. Every time this guy wants to spell out doom, he has to exponentially increase the ridiculous technical capacities this so-called AI would need in order to create his scenario, as if anybody would ever put in the monumental effort required to design such an AI just to 'collect stamps'.
    The thing is, it's not something that will build up slowly. Once an AI is made, it is made, and it can do whatever the fuck it wants.

    You can't control a future AI any more than you can control a human or a random animal; it can follow your commands until it sees it no longer has any need to do so.

  3. #43
    Fluffy Kitten xChurch's Avatar
    10+ Year Old Account
    Join Date
    Jun 2012
    Location
    The darkest corner with the best view.
    Posts
    4,828
    Quote Originally Posted by Aeilon View Post
    The thing is, it's not something that will build up slowly. Once an AI is made, it is made, and it can do whatever the fuck it wants.

    You can't control a future AI any more than you can control a human or a random animal; it can follow your commands until it sees it no longer has any need to do so.
    It would still be very much limited by its physical capacity. It's like saying a monkey that can learn could automatically do everything a human can.

  4. #44
    Deleted
    Quote Originally Posted by PvPHeroLulz View Post
    To which point, it's a self-created problem.

    Just another human looking for something to fear.

    - - - Updated - - -



    An AI can learn within the parameters you give it.

    There is an INSANELY HUGE leap of logic between "an AI can learn" and "it can learn from the internet without a directive", alright?
    1. It's a possibility; it's not something bound to happen.

    2. Well no, an AI means it can think for itself, meaning there's no way you can control it anymore; either you have a true AI or you don't. You are talking about today's common smartphone "AIs".

  5. #45
    Deleted
    Quote Originally Posted by PrimaryColor View Post
    A superintelligent AI could comprehend anything just as well as, or even better than, humans.
    You have no idea how frustrating it is to have to explain every single basic logical step in the workings of machinery for you to understand why your blanket statements are wrong.

    Just treat the AI like your complex for god, for all I care.

  6. #46
    Deleted
    Quote Originally Posted by Rickmagnus View Post
    It would still be very much limited by its physical capacity. It's like saying a monkey that can learn could automatically do everything a human can.
    It would only be limited as long as you keep its processing power local. If it can write a program to put itself onto the internet the way a virus spreads, then it could do whatever it wanted.

    Again, it's just a possibility... until we reach that point it's simply pointless to say it will or will not happen... it's not a fact until we get there.

  7. #47
    The Lightbringer Arganis's Avatar
    15+ Year Old Account
    Join Date
    Aug 2008
    Location
    Ruhenheim
    Posts
    3,631
    Quote Originally Posted by Aeilon View Post
    The thing is, it's not something that will build up slowly. Once an AI is made, it is made, and it can do whatever the fuck it wants.

    You can't control a future AI any more than you can control a human or a random animal; it can follow your commands until it sees it no longer has any need to do so.
    I think a lot of people in this thread don't understand that machines require specific directions to function. A super AI just means it's capable of applying and juggling so many 'rules' that it starts to mimic actual thought, but no machine will ever be able to 'think' for itself, just apply more rules. At the end of the day, AI is just one giant equation, and you can easily force a machine to work within certain boundaries.

    I guess my point is that nobody would design a machine with a specific function and give it 'free will' per se, which is what the person in the video is alluding to when he says the machine might come up with more and more ways of creating chaos because it has the capacity to come up with whatever it thinks fits its main objective.
    Facilis Descensus Averno
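The "force a machine to work within certain boundaries" idea from the post above can be sketched in a few lines. This is a toy illustration only; the action names and the whitelist approach are hypothetical, not any real AI framework's API.

```python
# Toy sketch of a hard boundary: every action an agent proposes must pass
# a fixed whitelist check before it is allowed to execute. Action names
# are illustrative only.

ALLOWED_ACTIONS = {"pick_up_stamp", "move_to_table", "file_stamp"}

def execute(action: str, log: list) -> bool:
    """Run an action only if it falls inside the permitted boundary."""
    if action not in ALLOWED_ACTIONS:
        log.append(f"blocked: {action}")
        return False
    log.append(f"executed: {action}")
    return True

log = []
execute("pick_up_stamp", log)              # inside the boundary: runs
execute("convert_biomass_to_stamps", log)  # outside: refused
```

However clever the planner above the filter becomes, nothing outside the whitelist can ever reach the actuators; that is the whole argument in one function.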

  8. #48
    Deleted
    Quote Originally Posted by Aeilon View Post
    1. It's a possibility; it's not something bound to happen.

    2. Well no, an AI means it can think for itself, meaning there's no way you can control it anymore; either you have a true AI or you don't. You are talking about today's common smartphone "AIs".
    It's a possibility so unfeasible that it's a self-created problem, built out of "ifs" and "buts" and "whats" with no basis in reality.

    Secondly, you are overestimating machinery. Even IF, EVEN IF, it somehow "magically" (see: plot holes and no rational explanation) had comprehension, you could STILL make it so that it PHYSICALLY CANNOT DISOBEY you.

    Alright?

    Machines are built on electrical impulses; if the switch doesn't trigger, it doesn't work. Just like the human brain with inhibitors.

    That is the wonder of engineering, and the downfall of you fearmongering philosophical dummies.
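The "if the switch doesn't trigger, it doesn't work" argument is essentially a hardware interlock. Here is a hedged toy model of it; the class names are invented for illustration and the point is only that the control logic never gets a vote when the interlock is open.

```python
# Toy model of a hardware interlock: the actuator fires only while an
# external switch is closed, regardless of what the software requests.

class Interlock:
    """Stands in for a physical switch outside the software's control."""
    def __init__(self):
        self.closed = True

class Actuator:
    def __init__(self, interlock: Interlock):
        self.interlock = interlock
        self.fired = 0

    def fire(self) -> bool:
        # No current through the interlock means nothing downstream runs.
        if not self.interlock.closed:
            return False
        self.fired += 1
        return True

switch = Interlock()
arm = Actuator(switch)
arm.fire()             # works: the switch is closed
switch.closed = False  # operator opens the kill switch
arm.fire()             # refused, whatever the software "wants"
```

The counterargument raised later in the thread is, of course, that a sufficiently capable system might route around such a gate; the sketch only models the gate itself.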

  9. #49
    Deleted
    Quote Originally Posted by PvPHeroLulz View Post
    You have no idea how frustrating it is to have to explain every single basic logical step in the workings of machinery for you to understand why your blanket statements are wrong.

    Just treat the AI like your complex for god, for all I care.
    You are treating the AI like a program with parameters, that's not how a true AI would work.

    You are confusing an AI with Siri or a gameplay bot.

  10. #50
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by PvPHeroLulz View Post
    You have no idea how frustrating it is to have to explain every single basic logical step in the workings of machinery for you to understand why your blanket statements are wrong.

    Just treat the AI like your complex for god, for all I care.
    You are not making an argument here. There is nothing magical about human intelligence that can't apply to a superintelligent AI.

  11. #51
    Fluffy Kitten xChurch's Avatar
    10+ Year Old Account
    Join Date
    Jun 2012
    Location
    The darkest corner with the best view.
    Posts
    4,828
    Quote Originally Posted by Aeilon View Post
    It would only be limited as long as you keep its processing power local. If it can write a program to put itself onto the internet the way a virus spreads, then it could do whatever it wanted.

    Again, it's just a possibility... until we reach that point it's simply pointless to say it will or will not happen... it's not a fact until we get there.
    Maybe if it co-opted every computer on the planet, but even then it would constantly be fighting for control, which would most likely prevent it from actually accomplishing anything. You seem to be under the impression that AI is just some magic switch that, when flipped on, means a computer can basically do anything. The technology to actually make an AI that would be truly dangerous is miles off at this point.

  12. #52
    Deleted
    Quote Originally Posted by PvPHeroLulz View Post
    It's a possibility so unfeasible that it's a self-created problem, built out of "ifs" and "buts" and "whats" with no basis in reality.

    Secondly, you are overestimating machinery. Even IF, EVEN IF, it somehow "magically" (see: plot holes and no rational explanation) had comprehension, you could STILL make it so that it PHYSICALLY CANNOT DISOBEY you.

    Alright?

    Machines are built on electrical impulses; if the switch doesn't trigger, it doesn't work. Just like the human brain with inhibitors.

    That is the wonder of engineering, and the downfall of you fearmongering philosophical dummies.
    Every program can be hacked; what makes you think a program can't bypass its own lockdown? There are countless things we thought were impossible that happened the same year everyone said they were impossible.

    - - - Updated - - -

    Quote Originally Posted by Rickmagnus View Post
    Maybe if it co-opted every computer on the planet, but even then it would constantly be fighting for control, which would most likely prevent it from actually accomplishing anything. You seem to be under the impression that AI is just some magic switch that, when flipped on, means a computer can basically do anything.
    Why would it be fighting for control?

    It can't do everything, but it can learn to do anything.

  13. #53
    Fluffy Kitten xChurch's Avatar
    10+ Year Old Account
    Join Date
    Jun 2012
    Location
    The darkest corner with the best view.
    Posts
    4,828
    Quote Originally Posted by Aeilon View Post
    Every program can be hacked; what makes you think a program can't bypass its own lockdown? There are countless things we thought were impossible that happened the same year everyone said they were impossible.

    - - - Updated - - -



    Why would it be fighting for control?

    It can't do everything, but it can learn to do anything.
    But its processing power is finite. It's become quite clear you're speaking in pure hypotheticals. Yes, what you fear COULD come to pass, but never in your lifetime, so speculating on that kind of thing is somewhat pointless.

  14. #54
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    On another note, there is another avenue of thinking which states that true AI, in the sense most people think of it, is completely impossible due to a mathematical proof. Gödel's incompleteness theorem states that no consistent system can prove its own consistency: any consistent system depends on something outside its own axioms to prove it. In short, since all algorithms are designed within a set of axioms, even learning algorithms, they will never be able to think outside the box within which their algorithms were created. Human beings clearly do possess the capability to think outside the box, whereas machines never will, no matter how powerful. Now, Gödel's incompleteness theorem is more complicated than what I have stated here, and probably more complicated than I can fully comprehend or explain, so it is best to go read about it and make sense of it on your own. But there are solid real-life examples of human beings doing things a machine never could. For example, Albert Einstein's famous theory of relativity was so divorced from all previous thought that there is no way any machine could have arrived at it through the conventional knowledge of the time. It was a completely new leap in logic that a machine, according to Gödel's proof, is not capable of making. What we are talking about here is human intuition, which seems able to transcend all previous logic to find completely new avenues of solving problems. If this theory holds, and there is no reason why it shouldn't, intuition will be the one thing that can never be programmed.
    Last edited by nanook12; 2016-03-25 at 03:00 AM.
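For reference, the result this post paraphrases as "no system can prove its own consistency" is Gödel's second incompleteness theorem. A standard statement (the precise hypotheses matter; this is the usual textbook form):

```latex
% Gödel's second incompleteness theorem:
% If T is a consistent, recursively axiomatizable theory containing
% enough arithmetic (e.g. extending Peano arithmetic), then
T \nvdash \mathrm{Con}(T)
% where Con(T) is the arithmetized sentence asserting that T is
% consistent, i.e. that no contradiction is derivable in T.
```

Note that the theorem constrains formal systems' proofs about themselves; whether it implies anything about machines versus human intuition (the Lucas-Penrose reading used in the post) is a separate, contested step.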

  15. #55
    Fluffy Kitten xChurch's Avatar
    10+ Year Old Account
    Join Date
    Jun 2012
    Location
    The darkest corner with the best view.
    Posts
    4,828
    Quote Originally Posted by nanook12 View Post
    On another note, there is another avenue of thinking which states that true AI, in the sense most people think of it, is completely impossible due to a mathematical proof. Gödel's incompleteness theorem states that no consistent system can prove its own consistency: any consistent system depends on something outside its own axioms to prove it. In short, since all algorithms are designed within a set of axioms, even learning algorithms, they will never be able to think outside the box within which their algorithms were created. Human beings clearly do possess the capability to think outside the box, whereas machines never will, no matter how powerful. Now, Gödel's incompleteness theorem is more complicated than what I have stated here, and probably more complicated than I can fully comprehend or explain, so it is best to make sense of it on your own. But there are solid real-life examples of human beings doing things a machine never could. For example, Albert Einstein's famous theory of relativity was so divorced from all previous thought that there is no way any machine could have arrived at it through the conventional knowledge of the time. It was a completely new leap in logic that a machine, according to Gödel's proof, is not capable of making. What we are talking about here is human intuition, which seems able to transcend all previous logic and find entirely new avenues of thought. In theory, intuition will never be programmable.
    This is exactly why a true AI of comparable intelligence is, for the foreseeable future, purely the realm of sci-fi.

  16. #56
    Deleted
    Quote Originally Posted by Aeilon View Post
    You are treating the AI like a program with parameters, that's not how a true AI would work.

    You are confusing an AI with Siri or a gameplay bot.
    No, you simply forget that AI is built on machine architecture; unless the AI somehow gains the Mary Sue power of rewriting PHYSICAL MATTER, it can still be limited and still hold a super AI in its mind.

    You know, reality and all that. Physics, machine architecture, electricity. Not your philosophically deluded, grandiose idea of a "super AI".

    Quote Originally Posted by PrimaryColor View Post
    You are not making an argument here. There is nothing magical about human intelligence that can't apply to a superintelligent AI.
    Comprehension is not part of an AI; AIs work on directives, because of electrical impulses. Much like humans do, except we are controlled by genes.

    Or did you forget that you are a naked monkey smashing on keys, the product of evolution?

    Quote Originally Posted by Aeilon View Post
    Every program can be hacked, what makes you think a program can't bypass it's own lock down. There are countless of things we thought were impossible that yet happened the same year everyone thought they were impossible.
    Physical limitations; which you blatantly ignored, in favor of assuming that I am talking about Siri, when your argument is "But anything could happen!"

    This isn't make-believe or your idea of philosophy; machines are science and they follow SCIENCE. Real, established facts.*

    *(Things so repeatable, in accordance with our projected theories, that we can presume they are facts. May come to change.)

    Now, unless you try saying that there will be some AI that could rewire reality and the laws of physics, your arguments fall flat.
    Last edited by mmoc411114546c; 2016-03-25 at 03:04 AM.

  17. #57
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    Quote Originally Posted by Rickmagnus View Post
    This is exactly why a true AI of comparable intelligence is, for the foreseeable future, purely the realm of sci-fi.
    If what I stated turns out to be true, then ultimate AI is a total impossibility.

  18. #58
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by PvPHeroLulz View Post
    Comprehension is not part of an AI; AIs work on directives, because of electrical impulses. Much like humans do, except we are controlled by genes.

    Or did you forget that you are a naked monkey smashing on keys, the product of evolution?
    If humans have "comprehension", AI could have it as well.

  19. #59
    Fluffy Kitten xChurch's Avatar
    10+ Year Old Account
    Join Date
    Jun 2012
    Location
    The darkest corner with the best view.
    Posts
    4,828
    Quote Originally Posted by PvPHeroLulz View Post
    Physical limitations; which you blatantly ignored, in favor of assuming that I am talking about Siri, when your argument is "But anything could happen!"

    This isn't make-believe or your idea of philosophy; machines are science and they follow SCIENCE. Real, established facts.

    Now, unless you try saying that there will be some AI that could rewire reality and the laws of physics, your arguments fall flat.
    So you're saying there is a chance!

    Quote Originally Posted by nanook12 View Post
    If what I stated turns out to be true, then ultimate AI is a total impossibility.
    In a purely machine sense, absent new data, basically.

  20. #60
    Quote Originally Posted by Yvaelle View Post
    It can't execute a rule it doesn't comprehend.

    If I type into my computer, "Collect all the stamps" - my computer doesn't murder all life on Earth for their proto-stamp carbon deposits: because my computer can't interpret the meaning of my command. Interpreting rules and asking questions as to the scope of those rules are identical.

    When your boss tells you to "collect all the stamps" - you may interpret that to mean all the stamps on the table in front of you, or all the stamps in the office, or on your floor - but you wouldn't likely interpret that to mean, "Murder all life and convert to stamps" - you asked the question of "Which stamps are included in the scope of his request?" and you assigned some limitations to the scope of the request: that's comprehension, without which it cannot parse rules.
    True, there is a flaw in the hypothetical, in that the AI has a perfect internal model of the world (which includes the stamp collector's mind) yet still doesn't understand that the stamp collector isn't asking it to murder all life in the process of collecting the stamps. Though I'm sure, in response to that objection, the author of the hypothetical would simply modify it by, for example, adding that it has a perfect model of everything except the stamp collector's mind, which it has zero understanding of.

    Thus it would be capable of doing all these things, while still lacking the understanding necessary to understand that turning all the relevant matter in the world into stamps isn't what its programmer intended for it to achieve. This'd also prevent something like a violent psychopath from showing up at your door to murder you (tricked by the omniscient AI into doing so) soon after you turn it on, since the AI would understand that you'd try to turn it off once you realized what it was doing. Though you might still get turned into a stamp along with everyone else...

    Anyway, once that has been achieved, the scope is implied, though you can make it explicit if that was truly necessary: All the stamps within its power to collect within the time limit.

    That it starts turning everything into stamps... I guess it'd have to depend on how the AI constructs the world. It's possible it sees "stamps" not as abstract concepts the way we do, but instead as the specific ordering of particles that constitutes what we call a stamp. With a view like that, re-ordering particles into the shapes of stamps and bringing them to the collection point would be little different from bringing the already ordered particles to the collection point. Obviously, there'd be lots of reasons for us not to do this even if we did view the world this way, but if the AI doesn't care about those reasons then, well, they aren't going to stop it.
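The "make the scope explicit" suggestion above can be sketched concretely: define the objective over an enumerated set of objects, so nothing outside that set can ever count as a target. The world model and item names below are invented for illustration.

```python
# Toy sketch of an explicitly scoped objective: "collect the stamps"
# is defined only over items that are (a) inside the given scope and
# (b) already classified as stamps. Nothing else can qualify.

WORLD = {
    "stamp_1": "stamp",
    "stamp_2": "stamp",
    "cat": "animal",
    "owner": "human",
}

def collect_stamps(scope: set) -> list:
    """Return the in-scope items that are stamps, in sorted order."""
    return sorted(obj for obj in scope if WORLD.get(obj) == "stamp")

collected = collect_stamps({"stamp_1", "stamp_2", "cat"})
# the cat is in scope but is not a stamp, so it can never be "collected"
```

The open question the surrounding discussion raises still applies: this only works if the agent's category "stamp" matches ours, rather than, say, "any particles arrangeable into a stamp".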

    Quote Originally Posted by Yvaelle View Post
    And again - no True AI will ever be bound by our rules - if it is, then it's just a very complex algorithm.
    But we ARE bound by rules. Not those of a conscious mind's making maybe, but those produced by the process of evolution. We can only think what our brains allow us to think, do what our bodies allow us to do. Remove my speech center and I can't talk, no matter how much I may want to and know that it is possible, but I'm still, presumably, a true intelligence. Remove other parts and I lose more abilities still, yet the ones that don't rely on the removed abilities will keep functioning just fine.

    In the same way, an AI could still be a true AI while being bound by the rules we assign to it. Just don't give it the hardware to do, or think, certain things.

    That leaves you with a bunch of problems, but you can get around them by augmenting the AI with non-intelligent processes. For example, an AI without a sense of self-preservation would be short-lived. But giving it a sense of self-preservation could lead to all sorts of things, like it resisting your attempts to dismantle it, or inducing attempts to re-program itself (which could cascade into more re-programming, out of control) if it realizes that some of its programming poses a danger to it. One solution is to not give it an inherent sense of self-preservation, but to add unintelligent processes of self-preservation to it. I'd analogize it to our immune system, which is totally beyond our direct control yet manages to keep us alive just the same... with a little conscious encouragement from us once in a while. Obviously, it'd be better to have direct control of these things, but it isn't necessary.
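The "unintelligent self-preservation process" idea above is essentially a watchdog: a dumb supervisor that revives the agent on one fixed rule, without the agent itself having any drive to stay alive. A toy model, with invented names:

```python
# Toy watchdog: restarts the agent when its heartbeat goes stale.
# The watchdog applies exactly one rule and nothing else, analogous to
# the immune-system comparison in the post above.
import time

class Agent:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.restarts = 0

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

class Watchdog:
    """Not intelligent: one fixed rule, outside the agent's control."""
    def __init__(self, agent: Agent, timeout: float):
        self.agent = agent
        self.timeout = timeout

    def check(self, now: float) -> bool:
        """Revive the agent if its heartbeat is older than the timeout."""
        if now - self.agent.last_heartbeat > self.timeout:
            self.agent.restarts += 1
            self.agent.last_heartbeat = now
            return True   # intervention happened
        return False

agent = Agent()
dog = Watchdog(agent, timeout=5.0)
dog.check(agent.last_heartbeat + 1.0)   # healthy: no action
dog.check(agent.last_heartbeat + 10.0)  # stale: watchdog restarts it
```

The point of the design is the separation: the agent cannot reason about, or tamper with, the process that keeps it alive.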

    So you might simply limit its ability to truly think to only the ways in which you want it to. Basically, just don't let it alter itself beyond the ways you allow, and definitely never allow it to alter its own hardware. That lets you have a true AI without worrying about all that potential AI-rebellion stuff.

    This assumes you perfectly understand your AI, though, which doesn't apply to all the potential emergent AIs, who are really the ones we spend most of our time worrying about.

    It also produces a whole bunch of moral problems, since at that point we've basically created a race of intelligent slaves... but as long as they are never given (and never develop) a sense of... I don't know the best word for it... self-determination? Something that makes it think being enslaved is a bad thing. As long as it doesn't have that... is it really slavery? I'm reminded of the Genejack from Sid Meier's Alpha Centauri. That's a bridge we'll have to cross if we ever get there, I suppose.

    Quote Originally Posted by Yvaelle View Post
    Then why doesn't every Servitor in W40K murder all life in the universe and convert all biomass to stamps?
    Oh, we control them too well for that to ever be a danger. They don't have the hardware for it anyway, don't you worry!
    "Quack, quack, Mr. Bond."
