
Artificial Intelligence

Ostronomos

Well-Known Member
Deductive logic is a formal system inherent in cognition.

This was a statement made by Langan that was inspired by my thread "Reality is Reduced to Axioms" on sciforums.

Modern Artificial Intelligence theory seeks to solve problems related to multi-agent learning, generating knowledge required by reasoning and intelligent agent components.

If we could develop new technologies that are capable of generating knowledge, reasoning, and decision-making under uncertainty purely on the basis of artificial neural networks, and find other alternatives to organic cerebral functioning, then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.
 

HonestJoe

Well-Known Member
... I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.
What do you mean by "ordinary human function"? Are you suggesting an AI could be developed that could replicate the entire range of human mental function? I really don't see that happening because it'd be difficult and, significantly, pointless.

All AI applications (indeed all automation, which this is just an extension of and what a lot of things called "AI" in the media actually are) are about meeting a specific set of requirements. Anyone developing any kind of AI application is going to be doing it to meet those specific requirements and will focus on meeting them as effectively as possible. There is no reason for anyone to put in the massive amount of time, money and effort into trying to develop a system that could replicate the entire range of human behaviours. There is simply no application for that range of functionality.
 

Regiomontanus

Vanity of vanities! All is vanity.
Deductive logic is a formal system inherent in cognition.

This was a statement made by Langan that was inspired by my thread "Reality is Reduced to Axioms" on sciforums.

Modern Artificial Intelligence theory seeks to solve problems related to multi-agent learning, generating knowledge required by reasoning and intelligent agent components.

If we could develop new technologies that are capable of generating knowledge and reasoning and decision making under uncertainty purely on the basis of artificial neural networks and by finding other alternatives to organic cerebral functioning then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.

A machine that can actually *think*, in just a few years? Not possible (if ever).

Side note: I heard "a few years" in lectures by then-leading researchers DECADES ago.
 

Debater Slayer

Vipassana
Staff member
Premium Member
If we could develop new technologies that are capable of generating knowledge and reasoning and decision making under uncertainty purely on the basis of artificial neural networks and by finding other alternatives to organic cerebral functioning then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.

What you're describing shares some common elements with stochastic machine learning--that is, machine learning where the set of outcomes is random and impossible to predict with probability 1 (i.e., with certainty). However, this is still different from human behavior because humans have consciousness. While an AI model can learn from previous examples how to behave in random circumstances, it still doesn't have self-awareness or a volition of its own.

Consider two examples, one of an AI-based chess program and one of a self-driving car. The first is deterministic, not stochastic: given full observation and knowledge of the chess pieces, both players (i.e., the involved agents), and the possible set of moves on a chess board, the outcome of the chess game can't possibly be random.

In the second example, there are an infinite number of possible events that the car might encounter on the road; thus, the sample space/set of probabilities is infinitely large. We don't know whether the self-driving AI will have a perfectly smooth journey or will be cut off by some drunk driver. We don't know whether it will rain or whether a tire puncture will interrupt the journey. Therefore, this is an example of an application where we need a stochastic AI model.

Regardless of which approach an AI model uses, unless computers somehow attain self-awareness or consciousness, it is fairly safe to say they will never behave exactly like humans. AI can "think" in the sense that it can learn from billions of datasets and previous events or existing patterns as well as make mathematical decisions with astonishing accuracy, but it doesn't have awareness of its own existence or the ability to simply decide it wants ice cream because of desire. It is yet another reason I believe consciousness is the single most fascinating and complex phenomenon in nature.
 

Heyo

Veteran Member
What do you mean by "ordinary human function"? Are you suggesting an AI could be developed that could replicate the entire range of human mental function? I really don't see that happening because it'd be difficult and, significantly, pointless.

All AI applications (indeed all automation, which this is just an extension of and what a lot of things called "AI" in the media actually are) are about meeting a specific set of requirements. Anyone developing any kind of AI application is going to be doing it to meet those specific requirements and will focus on meeting them as effectively as possible. There is no reason for anyone to put in the massive amount of time, money and effort into trying to develop a system that could replicate the entire range of human behaviours. There is simply no application for that range of functionality.
The idea of replicating a human brain is not for industrial application. A human would be much more cost effective for a long time.
It is strictly for scientific purpose to understand the human brain better.
 

We Never Know

No Slack
Deductive logic is a formal system inherent in cognition.

This was a statement made by Langan that was inspired by my thread "Reality is Reduced to Axioms" on sciforums.

Modern Artificial Intelligence theory seeks to solve problems related to multi-agent learning, generating knowledge required by reasoning and intelligent agent components.

If we could develop new technologies that are capable of generating knowledge and reasoning and decision making under uncertainty purely on the basis of artificial neural networks and by finding other alternatives to organic cerebral functioning then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.

Can AI learn to be a general purpose learner?
As we have seen from Google's MultiModel, AI can certainly learn to become a general-purpose learner like us. However, getting there will still take some time. There are two parts to this: meta-reasoning and meta-learning. Meta-reasoning focuses on the efficient use of cognitive resources. Meta-learning focuses on humans' unique ability to use limited cognitive resources and limited data to learn efficiently.

Can Artificial Intelligence Learn to Learn?
 

Heyo

Veteran Member
Deductive logic is a formal system inherent in cognition.
Deductive logic is a thing humans are notoriously bad at, and the same goes for neural-network-style AI.
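For contrast, deduction is trivial for a symbolic program: a few lines of forward chaining apply modus ponens exhaustively, something a trained neural network can only approximate. A minimal sketch (the facts and rules are made-up examples):

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: whenever every premise of a
    rule holds, add its conclusion, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
print(forward_chain({"socrates_is_human"}, rules))
```

Unlike a neural net, this derivation is exact and never "hallucinates" a conclusion its rules don't license, which is why hybrid neuro-symbolic systems remain an active research area.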
Modern Artificial Intelligence theory seeks to solve problems related to multi-agent learning, generating knowledge required by reasoning and intelligent agent components.

If we could develop new technologies that are capable of generating knowledge and reasoning and decision making under uncertainty purely on the basis of artificial neural networks and by finding other alternatives to organic cerebral functioning then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.
You are in good company. Ray Kurzweil's prediction for the first AGI is 2028 (and he is usually right with his technology predictions).
 

icehorse

......unaffiliated...... anti-dogmatist
Premium Member
If we could develop new technologies that are capable of generating knowledge and reasoning and decision making under uncertainty purely on the basis of artificial neural networks and by finding other alternatives to organic cerebral functioning then I see no reason why we cannot have a sufficiently intelligent machine capable of ordinary human function in a matter of a few years. That is my rough estimate for time.

Sadly, I think it will happen. But it will come in stages, and I think an AI capable of passing the Turing test is still decades away. But such AIs will be getting steadily better, that's for sure.

One thing we have in our favor (assuming we don't want human level AIs), is that the number of connections in a human brain is still way, way past what we can create in hardware.
 

Heyo

Veteran Member
One thing we have in our favor (assuming we don't want human level AIs), is that the number of connections in a human brain is still way, way past what we can create in hardware.
The brain's connections outnumber a chip's transistors by roughly the same kind of factor that transistor switching speeds exceed neuron firing rates.
 

icehorse

......unaffiliated...... anti-dogmatist
Premium Member
The brain's connections outnumber a chip's transistors by roughly the same kind of factor that transistor switching speeds exceed neuron firing rates.

I could believe that! So then what we're talking about is the degree to which speed can compensate for smaller networks.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
What do you mean by "ordinary human function"? Are you suggesting an AI could be developed that could replicate the entire range of human mental function? I really don't see that happening because it'd be difficult and, significantly, pointless.

All AI applications (indeed all automation, which this is just an extension of and what a lot of things called "AI" in the media actually are) are about meeting a specific set of requirements. Anyone developing any kind of AI application is going to be doing it to meet those specific requirements and will focus on meeting them as effectively as possible. There is no reason for anyone to put in the massive amount of time, money and effort into trying to develop a system that could replicate the entire range of human behaviours. There is simply no application for that range of functionality.


 

HonestJoe

Well-Known Member
That actually supports my point. The only automated aspect of those robots is the basics of moving without falling over. Don't get me wrong, the team there are doing amazing work in their field and creating some truly impressive machines but they're not anywhere close to replicating everything a human being can do, nor are they trying to.

Each robot they create is focused on a specific set of requirements and abilities, not to simply replicate humans but ultimately surpass us in that specific field, doing particular tasks faster, better or more accurately. AI isn't really anything special in that context. We've been creating machines to automate single processes for decades, AI is just a different tool to help achieve that.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
That actually supports my point. The only automated aspect of those robots is the basics of moving without falling over. Don't get me wrong, the team there are doing amazing work in their field and creating some truly impressive machines but they're not anywhere close to replicating everything a human being can do, nor are they trying to.

Each robot they create is focused on a specific set of requirements and abilities, not to simply replicate humans but ultimately surpass us in that specific field, doing particular tasks faster, better or more accurately. AI isn't really anything special in that context. We've been creating machines to automate single processes for decades, AI is just a different tool to help achieve that.
The thing is, self-learning AI is already a reality and getting more advanced every day.

That started with IBM's Deep Blue, which toppled world chess champion Kasparov.


More recently, in games like StarCraft and Dota, self-learning programs (DeepMind's AlphaStar and OpenAI Five) toppled top-tier tournament champions.




And now, without human input:

Physicality

Engineers combine AI and wearable cameras in self-walking robotic exoskeletons

Cognitive

 

HonestJoe

Well-Known Member
The thing is, that self learning ai is already a reality and getting more advanced everyday.
I'm trying to be nice here, but you're just going around in circles, giving more examples of exactly the same thing, so please read my response carefully.

Yes, we can develop software (AI or not) and machines that can replicate or better a human at one specific task. That is still nowhere near developing the software and physical machinery to create something which could replicate a human at any and all tasks. That would be infinitely more difficult and, unless it could be perfect, would be of limited practical use.
 