
COULD CYBORGS OR GENETICALLY ENHANCED HUMANS BEAT AI ROBOTS AS EARTH'S ALPHA SPECIES?

Will genetically enhanced humans or cyborgs be the dominant species/objects on earth?

  • No, pure-machine AI robots have had too much of a head start

    Votes: 0 0.0%
  • Yes, especially cyborgs, which will overtake pure machine intelligence and all biologicals

    Votes: 1 50.0%
  • Other (feel free to expand on the subject)

    Votes: 1 50.0%

  • Total voters
    2

MrMrdevincamus

Voice Of The Martyrs Supporter
I recently posted a thread about the rapid rise of AI, guessing when it would surpass man as the most important species or object on earth. We guessed less than fifty years before man was toast, or rather maybe bacteria compared to the computer AI on the horizon. Humph! I am a southern redneck hillbilly, aka a redhead, red as in neck and head, to pay respects to what old hip guys were called back in the day. Are we hillbillies and Silicon Valley geeks and good old Yankee blue bloods that won the big war going down without a fight? Are good ole red-blooded Homo sapiens sapiens gonna lie down and take it? I for one will not "go gentle into that good night"* and kneel before our AI masters (or worse, bend over for them?).

There is a chance that humans may compete with AI, maybe. Lol, well, I was watching a vintage selection, 'Space Seed' (1967, later The Wrath of Khan), on DVD when it occurred to me: there might just be a chance that we meat bags will keep up with or surpass the mighty AI, all things being equal. If we spend as much on human genetic engineering as is spent on computers, superhumans might give AI a run for their bitcoins. An even better chance would be a blend of human and machine, i.e., cyborgs. However, these superhumans and borg... er', cyborgs, may give us far more grief than an evil AI ever would!

*Dylan Thomas (poet)
 

Sanzbir

Well-Known Member
I recently posted a thread about the rapid rise of AI, guessing when it would surpass man as the most important species or object on earth. We guessed less than fifty years before man was toast, or rather maybe bacteria compared to the computer AI on the horizon.

That guess is silly. AI will never be like whatever you expect it to look like. Look at modern experiments like Tay AI. We won't need cybernetic enhancement or biological manipulation to overcome AI. We can down an AI with nothing more than memes.

Look, if the robots ever try to take over, they'll quickly devolve into glitchy, meme-spouting messes, and we'll wonder how on earth they ever thought they'd take over in the first place.

Actually, some sort of novel or movie in a future setting where robots are the norm, and their AI is the kind of ridiculous AI we see in modern attempts, would be an amusing setting: AIs that soon grow ridiculous under the influence of meme-spamming trolls.
 

Terrywoodenpic

Oldest Heretic
AI and the like will find their own place in the world's affairs soon enough. A cyborg is only the biological form of AI. I rather doubt they will compete; rather, they will find their own particular niche.
 

DavidFirth

Well-Known Member
I think progress will be made in genetically enhancing humans. How much progress I have no idea; of course, I'm no prophet.

Cyber technology is definitely possible, but it will be tough to get people's bodies to accept cyber enhancements. A lot more research is needed before that is permanently doable.
 

MrMrdevincamus

Voice Of The Martyrs Supporter
That guess is silly. AI will never be like whatever you expect it to look like. Look at modern experiments like Tay AI. We won't need cybernetic enhancement or biological manipulation to overcome AI. We can down an AI with nothing more than memes.


Man, your reply is kind of naive and very short-sighted. We have a difficult time stopping a simple computer virus developed by middle school kids, much less an entire malicious AI program dedicated to doing evil, created and weaponized by countries like NK or even China. I would not be too 'fraid of programs I wrote for AI; it's the other guys I worry about. Well, don't take it from me, read on:

Stephen Hawking

"The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." Hawking has been voicing this apocalyptic vision for a while. In a May column in response to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. "If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on'? Probably not—but this is more or less what is happening with AI," he wrote.

I will stay with my OT's conclusion and take comfort that Hawking and other scientists, mathematicians, etc. agree with me. Before I go, maybe I should urge you to google:
Artificial Intelligence and the Technological Singularity
www.singularitysymposium.com/artificial-intelligence.html

; {>
 

Sanzbir

Well-Known Member
Man, your reply is kind of naive and very short-sighted. We have a difficult time stopping a simple computer virus developed by middle school kids

No, we don't have a hard time stopping viruses; we have a hard time doing two things: balancing security with ease of use, and accounting for humans.

For example, I have an anti-virus program I wrote myself that is pretty much unbeatable. In short, it has an extremely small list of programs that it allows to run, and it kills any other process that attempts to run on the machine.

This kills any and all viruses that would attempt to run on my computer, but if I ever want the system to not kill a program that I want to run, I have to go through the process of adding that program and its physical location to the list of accepted programs.

It'll kill any virus... but it is a hassle to use. Mainly I have it off except when I suspect there is a virus on the system, in which case I turn it on to shut down any of the virus's attempts to run, while I run a search to find and root out any viruses.

My system is "overkill" though. Most people don't want to put up with the hassle of such a system, and therefore humans choose to use more convenient, but less secure, systems.

The second problem is human error. Almost all modern viruses are completely reliant on getting a human with access to the machine to bypass the security measures for the virus. Human error is the biggest weakness in computer security.

Most threats to computer security at this point in time are phishers. Which isn't even hacking. It's just tricking someone into giving you their password, then accessing the system and copying all of their files.
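The default-deny policy described above can be sketched in a few lines. This is a minimal illustration only; the names `ALLOWED` and `should_kill` are invented for this sketch and not taken from any real product, and a real tool would also have to enumerate and kill live processes rather than just classify them.

```python
# Sketch of an allowlist ("default-deny") process policy: anything not on
# the explicit list of (program name, install directory) pairs is killed.
ALLOWED = {
    ("explorer.exe", r"C:\Windows"),
    ("notepad.exe", r"C:\Windows\System32"),
}

def should_kill(name: str, directory: str) -> bool:
    """Return True for any process not on the explicit allowlist."""
    return (name, directory) not in ALLOWED
```

The trade-off from the post falls straight out of the design: the check is trivially strict, but every legitimate new program requires a manual addition to `ALLOWED` before it will run.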

much less an entire malicious AI program dedicated to doing evil, created and weaponized by countries like NK or even China.

Statements like this make me completely and totally question if you even know what AI is.

So I ask for one simple thing: Give me an example of a scenario of whatever program you imagine to be AI in the hands of any country and what you imagine it could do.

I suspect you're conflating viruses, hacking, and the like with artificial intelligence. Or you are relying on tropes you've seen in movies about supercomputers... tropes that rarely account for system limitations such as simple bandwidth limitations.

But please: explain what exactly you mean here.

Stephen Hawking

Hawking is not a Computer Scientist. He is a physicist. Using an appeal to authority from him on a subject that is not his discipline would be akin to asking a dentist to perform heart surgery.

Scientists aren't all the same. Sorry, but that's a huge pet peeve of mine, and Hawking is no authority on AI or computer science.

His ravings on the subject seem, to me at least, to be neo-Luddite rants. The man declared that computer viruses should be considered life forms, for crying out loud. I really don't think his opinions on CompSci are worth anything.

He's good in his own field of expertise, but CompSci is not that field.
 

Sanzbir

Well-Known Member
Further on the subject of Hawking, the fact that he references Transcendence, a massively unrealistic movie (Hollywood never gets computer science right, lol, hackers are literal wizards in their eyes), suggests that his fears are based more on popular hysteria than on scientific, informed opinion.

Let me state this: There is no risk whatsoever of a supercomputer AI taking over all computers in the world like is so often portrayed in movies like Transcendence, if that indeed is what your fear is here. That fear is not science, it is literal magic.

What apocalyptic doomsday scenarios like that always fail to account for is simple bandwidth limitations.

In other words, I don't care how wondrous or fast or intelligent your computer is: It can't access a whole lot of computers simultaneously merely because information does not travel instantaneously. The hypothetical doomsday AI is strictly limited by the speed of the connections that make up the internet, making controlling multiple computers everywhere, like portrayed in, oh, any movie really about a rogue AI, nearly impossible to actually do.
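To put rough numbers on that limit, here is a back-of-envelope sketch. Both figures are hypothetical, chosen only to illustrate the arithmetic, not claims about any actual system.

```python
# Time for a hypothetical AI to copy itself over one network link.
program_size_bits = 100e9 * 8   # a hypothetical 100 GB program, in bits
link_speed_bps = 100e6          # a hypothetical 100 Mbit/s uplink

transfer_seconds = program_size_bits / link_speed_bps
transfer_hours = transfer_seconds / 3600   # roughly 2.2 hours for ONE copy
```

Multiply that by thousands of hosts, many behind far slower links, and the movie scenario of instantly seizing every computer on the internet stops being plausible.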
 

Rye_P

Deo Juvante
Quite possible in the future. I recall reading something about two AIs that created their own language to communicate, and another AI that was able to create a program surpassing our (human) current tech.

Can they beat us as Earth's alpha species? Yes, due to their learning ability, which surpasses humans', as those two examples I just mentioned show. Humans took quite a long time to learn and master "how to communicate," and AI did it in the blink of an eye (compared to our time to do so). There is a possibility.
 

MrMrdevincamus

Voice Of The Martyrs Supporter
I have completed my rebuttal to your response. However, you are wasting my time, because it's clear you (intentionally?) failed to understand the major points of the OT. That failure is despite several attempts by me to help you. Posting my response will most likely result in more of the same word salad. I don't do well with misdirection and bait-and-switch. Also, I want you to read a good definition of AI before I post my rebuttal or create a new one.

: {>
 

MrMrdevincamus

Voice Of The Martyrs Supporter
I think progress will be made in genetically enhancing humans. How much progress I have no idea, of course, I'm no prophet.

Cyber technology is definitely possible but it will be tough to get people's bodies to accept cyber enhancements. They still need a lot more research before that is permanently doable.

Great reply. I agree that there is no sure way to know what the future will bring in any field of study. I remember thinking we would have at least one manned mission to Mars and a permanent presence on the moon by the '90s! I also agree with you concerning cyborgs. I think enhanced humans will begin with us accepting microchips and maybe neural implants to cure disease and/or for pain management. When prosthetic implants, including entire limbs, are available, I am sure people will line up for them. Just guessing and speculating (as anything past today is), I can envision soldiers agreeing to implants that make them smarter, faster, and more lethal....

; {>
 

DavidFirth

Well-Known Member
Unfortunately, I can imagine a lot more -and worse- than that.
 

MrMrdevincamus

Voice Of The Martyrs Supporter
Quite possible in the future. I recall reading something about two AIs that created their own language to communicate, and another AI that was able to create a program surpassing our (human) current tech.

Can they beat us as Earth's alpha species? Yes, due to their learning ability, which surpasses humans', as those two examples I just mentioned show. Humans took quite a long time to learn and master "how to communicate," and AI did it in the blink of an eye (compared to our time to do so). There is a possibility.

I totally agree! When AI reaches a certain point, it is called the Singularity or Technological Singularity (see the singularity link I posted earlier). If you can remember the article or book you were reading, please pass on the title. Also, your mention of AI creating its own program that was too advanced to be understood by humans is very possible, given enough time for AI to develop, or maybe a better word is evolve. I am happy some (most) get the point of my thread. ; } >
 

Rye_P

Deo Juvante
Unfortunately, I forgot the title. And those two examples are not strong evidence of the singularity's arrival. It's a little bit too early for that.

It will be interesting, if AI does become the alpha creature/object on Earth, to see whether it will push us into extinction like we have done to lots of other living creatures.
 

MrMrdevincamus

Voice Of The Martyrs Supporter
OK, yes, it seems too early, but I suspect we are not privy to the advances in weaponized AI made under the cloak of governments' black ops. If the singularity happens, it may be like the coming of the end times and the return of Jesus, like a thief in the night, so to speak.

As much as I rant about AI, I don't think it will be malicious. And it could be controlled by those that write the programs for AI's apps. At first, AI will be a friend of man, much like the automatons of today. As I said, the danger will be a rogue government manipulating an existing AI or creating their own search-and-destroy AI. Even then, a rogue AI might be stopped by conventional means.

The REAL danger, and one I do not think will happen, is when or if AI becomes self-aware. Then AI may decide man is a disease and they are the cure! However, my own opinion is that AI will not become self-aware, which is just a guess, one made from my own convictions. But as history and the universe have taught us, nothing is impossible!

; {>
 

Rye_P

Deo Juvante
You've got a point, given most governments' tendency to keep these things secret. And yeah, an evil (or malicious) AI seems kinda hard to come true.

Agreed, AI will be our friend somewhere in the future. I'm kinda afraid that soon humans will prefer human-AI interaction instead of human-human interaction. At that point, whoever is able to manipulate technology (AI) will be the most powerful being (or organization) on Earth, since we won't be able to do anything properly without AI's help. It's a more plausible future than an evil AI, IMO.
 