  1. #41
    Okay people! If your argument is either "Computers just can't be intelligent" or "Nothing can create something more intelligent than itself", then PLEASE GET OUT OF THIS THREAD. Both of these arguments are BULLCRAP and are on the level of "the earth is flat, look at the ground, it's not round".

    PLEASE only try to disprove theories when you can back them up with more than your inability to understand them, thanks!

  2. #42
    Deleted
    Quote Originally Posted by adimaya View Post
    we can however make self-learning computers, and as long as they can add to their knowledge base and make decisions based on the knowledge they themselves obtain, aren't they doing the same thing as we do, thinking-wise?
    No, they're not, even though it may look that way.
    We can adapt things we do from earlier things we've learned. When I open a cola can, I know how because I reuse the knowledge of opening a door and its latch mechanism. Robots can't: they can remember how they did something, but they can't apply it to a different task. That is something only intelligent lifeforms have. Robots are programmed to do what they are told; they can learn, but they cannot use what they learn for other processes.

    ---------- Post added 2011-04-09 at 01:05 PM ----------

    Quote Originally Posted by pixartist View Post
    Okay people! If your argument is either "Computers just can't be intelligent" or "Nothing can create something more intelligent than itself", then PLEASE GET OUT OF THIS THREAD. Both of these arguments are BULLCRAP and are on the level of "the earth is flat, look at the ground, it's not round".

    PLEASE only try to disprove theories when you can back them up with more than your inability to understand them, thanks!
    Then you want yourself kicked out too. I could just as well make a post like: "What if I had 1,000,000 dollars growing on a tree in the middle of space, and the tree lives on steel? When will it happen?!"

  3. #43
    Personally, I'm unsure whether there will ever be a point at which computers become self-motivated (i.e. work towards their own goals, rather than intelligently pursuing the goals they are programmed to solve). If they were to do so, I'd hesitate to assume that they would pick one "human" motivation, 'take over, become the boss, be the top dog', over another, 'spend all day looking at porn'. I pick those two examples because, for all the advances in computing there have ever been, every achievement towards the first of those goals has been more than matched by achievements towards the second.

  4. #44
    Deleted
    We have to stop skynet! Hasta la vista baby!

    On a more serious note, if machines/computers ever develop a consciousness of their own and actually believe they are living beings, it will be very hard to say they are wrong (from a COMPLETELY NEUTRAL viewpoint), since they would genuinely feel that way.

  5. #45
    Quote Originally Posted by pixartist View Post
    You studied computer science and you have not found any hint that computers could eventually think for themselves? Uh, better not tell the scientific community, they might laugh at you. Sure, computers today have a very, very limited ability to think and learn, but this is due to 1) technological problems: the human brain is FAR FAR FAR superior to modern computers hardware-wise, and 2) programming issues: we still don't know exactly how the human brain works, or why we perceive ourselves as independent beings.
    But saying that it is impossible to build a machine with the same level of complexity and the same abilities as the human brain is HIGHLY unscientific and basically amounts to calling intelligence "magic".
    I really can't believe that you are studying computer science and are still that small-minded about the future.

    To further explain how a truly intelligent AI would have to work: it would either need a dynamic physical architecture that can build and cut connections, similar to the human brain, or a purely simulated dynamic architecture running on a fixed physical architecture, which I think is more probable.

    edit: Also: watch the videos by Michio Kaku. Do you really think you are smarter than him and all the people he works with? You are basing your disbelief in true AI solely on the fact that you can't explain/imagine/comprehend it. This is what religious people normally do...
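    A toy sketch of that second option, a simulated network that grows and prunes its own connections, might look like this (the unit count, thresholds, and update rule are all invented for illustration; this is not a real brain model):

```python
import random

random.seed(1)
N = 8                                   # a tiny network of 8 units
# weights[(i, j)] is the connection strength from unit i to unit j
weights = {(i, j): random.random() for i in range(N) for j in range(N) if i != j}

def step(active):
    """One update: co-active units strengthen their link; weak links are cut."""
    for (i, j) in list(weights):
        if i in active and j in active:
            weights[(i, j)] = min(1.0, weights[(i, j)] + 0.1)  # Hebbian-style growth
        else:
            weights[(i, j)] *= 0.95                            # slow decay
        if weights[(i, j)] < 0.05:
            del weights[(i, j)]                                # prune dead connections

for _ in range(100):
    step(active={0, 1, 2})              # units 0, 1, 2 repeatedly fire together

# Only the links inside the co-active group survive the pruning.
print(sorted(weights))  # → [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
```

    The point is only that the "wiring" here is data, not hardware, so the program can rewire itself while running on fixed silicon.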
    I do not hesitate for a second in saying that our technology will continue to advance and, as a result, will enable us to do better things.

    Although this advance in technology could enable machines to think for themselves, you would have to change the fundamental way machines learn and respond to stimuli, without the programming element, and I believe that research-wise we are not there yet. I couldn't quite understand how your suggestion would work; it is my understanding that we do not yet know everything about the human brain, and basing a model on something we do not understand is a bit improbable.

    To answer your question: do I think I am smarter than Michio Kaku? Of course not; he is a well-respected figure in the scientific community. In the video he mentioned how hardware is advancing at an exciting rate and discussed the technical implications, but towards the end he conceded that we have a long way to go, and made no mention of computers thinking for themselves.

    Quote Originally Posted by pixartist View Post
    Okay people! If your argument is either "Computers just can't be intelligent" or "Nothing can create something more intelligent than itself", then PLEASE GET OUT OF THIS THREAD. Both of these arguments are BULLCRAP and are on the level of "the earth is flat, look at the ground, it's not round".

    PLEASE only try to disprove theories when you can back them up with more than your inability to understand them, thanks!
    No offence, but you need to accept that this is the way our computers work at the moment. Although it is good to be critical and to have your own ideas about how we can improve and develop our approach to AI, you shouldn't ignore the logic behind the way AI currently works.

    If you look at this from a non-technical view, computers have already taken over many aspects of our lives and have been integrated so well that in places they have excelled far beyond humans. Innovations built on computers have already caused big changes in the world, for example, THE INTERNET! It has become a massive part of people's lives in all respects, and that in itself has caused massive changes. Just look at the recent uprisings in the Middle East: with the use of social media, people were able to effectively coordinate marches and demonstrations, toppling their corrupt governments. This is just one of many examples.
    Last edited by nixcookies; 2011-04-09 at 01:27 PM.

  6. #46
    Scottishpaladin (Brewmaster; joined May 2010; Scotland, only the best.)
    We only advanced this much in the past 100 years due to war. So bring on World War 3, and we'll either all be dead or living in a better world in another 100 years...
    War requires the sledgehammer, but will be decided by the scalpel
    Intel i5 2500k - Intel 330 180GB SSD - Sapphire HD OC Edition 7870 - Gigabyte Z77-D3H Intel Z77

  7. #47
    Herald of the Titans (joined Oct 2009)
    I seem to recall reading/seeing something about 2049 being the (estimated) year in which a computer AI capable of human-equivalent intelligence will be created. I think this is totally possible.

    If anyone has read the Hyperion novels by Dan Simmons: computer intelligences were created, and they basically removed themselves to some sort of hyperspace/subspace dimension apart from humanity, and they ended up shutting down interstellar travel because humans weren't responsible enough to use it... anyway, I digress.
    Last edited by Vermicious; 2011-04-09 at 01:17 PM.

  8. #48
    Deleted
    Computers and the internet have already taken control over our lives. Can you imagine leaving the house without your mobile, learning without the internet, etc. etc.?

  9. #49
    Quote Originally Posted by Tenkachii View Post
    They can't, ever; you can't make them think. You can make them look like they are thinking, but they won't. Just like when a dog wags his tail: you think he's happy, but he doesn't feel that way; he has just learned through conditioning that he has to wag his tail in that situation. (Yes, dogs are among the animals that don't have emotion etcetera; only eliphants, dolphins, monkeys and humans are known species that have emotion and can think for themselves.)
    That's the most ridiculous thing I've ever read. Dogs don't feel emotions? I doubt you have any source at all to back that up. I can't really trust someone who can't spell "elephant" as a reliable source.

  10. #50
    Deleted
    Quote Originally Posted by pixartist View Post
    Okay I have a few assumptions about the future of humanity & technology:

    - We will NEVER just stop researching. Technology will advance as long as humans exist
    - Intelligence is something that can be fully explained by science IF YOU DISAGREE WITH THIS, THIS THREAD IS NOT FOR YOU
    - Advances in technology speed up exponentially

    From these 3 assumptions it is clear that we will create artificial intelligence similar to our own. This intelligence would be able to create even more advanced intelligence and so forth..
    So it is inevitable that "artificial" intelligence will become more powerful than human intelligence, which in turn means that machines WILL take over the world at some point. THIS IS THE BIGGEST ASSUMPTION, yet you give no reasoning whatsoever.
    Our goals and reasoning are what they are for very specific biological and evolutionary reasons. Why would we want to make an intelligence like our own (i.e. limited by biological needs and goals) when we could create something better altogether? There is no reason to assume that these intelligences would have any desire to take over the world, nor that they would do so.

    It will never take place, not because we can't do it, but because there is no reason whatsoever why it should happen. Just because computers become more intelligent does not mean they will take over the world.

    *edit, added red and capslock to keep in the spirit of the OP's post
    Last edited by mmoc47c069d906; 2011-04-09 at 01:29 PM.

  11. #51
    Deleted
    My computer still randomly shuts down; I wouldn't bet on that thing taking over within the next century.

  12. #52
    Deleted
    50-150 years? You call that soon? Really? You do realize that there is already software that learns by itself, right? It's still in its early phase, but no development process takes that long. Twenty years from now I see this technology being highly developed, and 5 to 10 years after that either we will have merged so much with machines that there's almost no difference left, or they'll take control of the world if we don't build in an on/off switch (although technically we're already useless without machines and such, so the question remains whether we're not already being controlled by machines).

  13. #53
    Deleted
    Quote Originally Posted by Patchwerk View Post
    That's the most ridiculous thing I've ever read. Dogs don't feel emotions? I doubt you have any source at all to back that up. Can't really trust someone who can't spell elephant as a reliable source
    You might think it is ridiculous, but dogs really can't feel emotions; they don't experience them as we do. Also, elephants can't do maths or spell, but that doesn't mean they aren't intelligent; we humans are just more developed. Let me find something about the dog part.
    Feeling emotion as we do is different for dogs. For instance, we can attach emotion to a certain object: you can love a stuffed animal you got from your first boyfriend, though it's just a lifeless thing. You can also love the first car you ever had, etcetera. Dogs can show emotion, but don't recognize it as we do. This might sound difficult, but sending signals doesn't mean you feel the signals.

    There's also a video from a scientific experiment that I have around somewhere.
    Last edited by mmoc5886693a05; 2011-04-09 at 01:37 PM.

  14. #54
    Deleted
    Well, I think it would be stupid of mankind to make a computer that thinks the same way as a human, wants to take over the world, etc. Much easier to keep it happy just helping us, or doing whatever it is supposed to do.

    Then again, it's mankind we are talking about =/ I think we are pretty much doomed within the next 100 years.

  15. #55
    Deleted
    Quote Originally Posted by pixartist View Post
    Okay I have a few assumptions about the future of humanity & technology:

    - We will NEVER just stop researching. Technology will advance as long as humans exist
    - Intelligence is something that can be fully explained by science IF YOU DISAGREE WITH THIS, THIS THREAD IS NOT FOR YOU
    - Advances in technology speed up exponentially

    From these 3 assumptions it is clear that we will create artificial intelligence similar to our own. This intelligence would be able to create even more advanced intelligence and so forth..
    So it is inevitable that "artificial" intelligence will become more powerful than human intelligence, which in turn means that machines WILL take over the world at some point.

    The question is: when will it happen? What do you think? I'd say 50-150 years. This may sound ridiculously soon, but think about what humans have achieved in the last 50 years: a global communication system which stores nearly ALL information generated in the world, accessible by everyone at any time; mobile phones that work nearly everywhere; airplanes that can no longer be controlled by humans, only by computers; etc...

    Remember, computing power has grown exponentially for quite some time now, see http://en.wikipedia.org/wiki/Moore's_law
    Haven't you watched Terminator?
    On a serious note, I think that before we get to the point of having super-intelligent AI, we will:
    a) have WWIII, which will bring us back to the Stone Age, or
    b) be reduced to a very small number by global warming or something like that.
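    For a sense of scale on the exponential growth the quoted post links to, here is the arithmetic under the classic Moore's-law rule of thumb (transistor counts doubling roughly every two years; real doubling periods have varied, this is only a sketch):

```python
# Moore's-law rule of thumb: a doubling roughly every two years.
def doublings(years, period=2):
    return years // period

def growth_factor(years, period=2):
    return 2 ** doublings(years, period)

print(doublings(50), growth_factor(50))  # → 25 33554432
```

    So 50 years is about 25 doublings, a roughly 33-million-fold increase in raw capacity, which is the kind of compounding the OP is leaning on.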

  16. #56
    Deleted
    Quote Originally Posted by Tenkachii View Post
    Nope, they're not. Though it looks that way they're not.
    We can adapt things we do from earlier things we've learned. Like when i open a cola-can i know how because i use the knowledge of opening a door and the loot-mechanism, Robots can't. Since they can remember how they did it but they can't use it on a different thing. This is something only intelligent lifeforms have. Robots are programmed to do what they are told to. They can learn but they cannot use it for other processes.
    Not trying to ridicule you or anything of a similar fashion, but I recommend that you read up on a toy problem in state-space searching:
    http://en.wikipedia.org/wiki/Monkey_and_banana_problem

    By manipulating our current state through the actions available to us, we can navigate to a scenario that is to our liking.
    Claiming that a computer can only "do what it has been told to do", as far as actions are concerned, is just naïve, no offense.
    Computers will not, however, break the boundaries that we enforce upon them.
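    For anyone not following the link: the monkey-and-banana problem is typically solved by searching the space of states reachable through actions, exactly the "manipulating our current state" idea above. A minimal breadth-first sketch (the state encoding and action names here are made up for illustration):

```python
from collections import deque

# State: (monkey position, box position, monkey on box?, has banana?)
# The banana hangs over "middle"; the monkey must push the box there and climb.
START = ("door", "window", False, False)
SPOTS = ("door", "window", "middle")

def successors(state):
    monkey, box, on_box, banana = state
    moves = []
    if not on_box:
        for spot in SPOTS:
            if spot != monkey:
                moves.append(("walk to " + spot, (spot, box, False, banana)))
        if monkey == box:
            moves.append(("push box to middle", ("middle", "middle", False, banana)))
            moves.append(("climb box", (monkey, box, True, banana)))
    elif monkey == "middle" and not banana:
        moves.append(("grab banana", (monkey, box, True, True)))
    return moves

def solve(start):
    """Breadth-first search: returns the shortest action sequence to the banana."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[3]:                        # monkey has the banana
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(solve(START))
# → ['walk to window', 'push box to middle', 'climb box', 'grab banana']
```

    Nobody "told" the program that plan; it was only given the rules of the world and found the sequence itself, which is the point of the toy problem.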

  17. #57
    Quote Originally Posted by Ryukaa View Post
    I'm not entirely sure that machines WILL take over.
    The man-machine interface is going to blur the lines between the two, so I rather think it will be more symbiotic.
    I believe this, but I do believe that computers, once self-aware, will be the smartest things on this planet, no doubt about that. I mean, IBM's supercomputer is already pretty crazy, and imagine when something like that becomes self-aware, or something 10,000,000 times smarter than it xD

    It will truly be exciting to watch, if I am still alive when it happens.

  18. #58
    Deleted
    Quote Originally Posted by adimaya View Post
    Not trying to ridicule you or anything of a similar fashion, but I recommend that you read up on a toy problem in state-space searching:
    http://en.wikipedia.org/wiki/Monkey_and_banana_problem

    By manipulating our current state through the actions available to us, we can navigate to a scenario that is to our liking.
    Claiming that a computer can only "do what it has been told to do", as far as actions are concerned, is just naïve, no offense.
    Computers will not, however, break the boundaries that we enforce upon them.
    I didn't say that. I said that computers CAN adapt and learn etcetera, but they can't think for themselves; it's a different process.

    Take the Skinner box, for instance. I know we were talking about computers, but computers can be made to behave similarly to animals. The Skinner box is a box in which a rat gets a reward when it presses a lever. The rat can learn this by doing it over and over again, but that doesn't mean it can use the skill for other things. At that rate it won't become intelligent for a long, long time, maybe after thousands of years of evolution. The same goes for computers: in the time it takes a rat to become as intelligent as some other animal species, a computer would only be halfway there. It's just impossible.
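    For what it's worth, the trial-and-error learning in the Skinner box is exactly what computers already do under the name "reinforcement learning". A minimal sketch of a software rat-in-a-box (the action names and reward values are invented for illustration):

```python
import random

# One "box" with two possible actions; only pressing the lever pays off.
ACTIONS = ["press_lever", "sniff_corner"]
REWARD = {"press_lever": 1.0, "sniff_corner": 0.0}

values = {a: 0.0 for a in ACTIONS}  # the agent's learned value estimates
alpha, epsilon = 0.1, 0.1           # learning rate, exploration rate

random.seed(0)
for trial in range(500):
    if random.random() < epsilon:           # sometimes try something random
        action = random.choice(ACTIONS)
    else:                                   # otherwise pick the best so far
        action = max(values, key=values.get)
    r = REWARD[action]
    values[action] += alpha * (r - values[action])  # nudge estimate toward reward

print(values["press_lever"] > values["sniff_corner"])  # → True
```

    Whether that loop counts as "thinking" is the whole debate, of course, but the mechanism itself, learning by reward, is clearly not out of reach for machines.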

    Another thing: crows bend wires and sticks into tools to grab a worm or anything else they can't reach.

    It is impossible for humans to make an intelligent computer. It can adapt, it can learn, it can talk, it can move, it can calculate, but it can't think on its own. It also doesn't have emotion, which is the key to intelligence.
    Last edited by mmoc5886693a05; 2011-04-09 at 01:46 PM.

  19. #59
    Deleted
    Quote Originally Posted by Tenkachii View Post
    I didn't say that. I said that computers CAN adapt and learn etcetera, but they can't think for themselves; it's a different process.
    ...what exactly are you trying to say?

  20. #60
    Deleted
    I just hope we will be able to travel faster than light someday.
