Trusting ChatGPT

How much do you trust a ChatGPT answer on a topic you don't know a lot about?

  • Most answers are probably highly accurate

  • Most answers are probably mostly accurate

  • It can be a useful prompt for further research, but not to be trusted

  • It's too unreliable to have much value

  • I have not used it or seen enough answers to judge


Yesterday I noticed for the first time a poster quoting ChatGPT to provide an explanation of a term and several posters responding favourably to the answer.

When I've played around with ChatGPT I've found it to be very unreliable on anything remotely subjective.

Bias tends to be towards the mainstream view on any topic, presumably as it "learns" from popularity within datasets. On issues where the mainstream view is wrong or ideologically biased, its answers clearly reflect this and contain clear errors that you can get it to acknowledge with further prompts.

For example, I was asking it about the origins of the Sunni-Shia split, and its answer relied heavily on a sectarian Sunni perspective. With several follow-up questions you can get it to acknowledge that its original answer was anachronistic and that Sunnis didn't really exist for another few centuries.

If you ask it whether its answers are biased it will deny this, but with further prompts you can get it to accept that its answers may indeed be biased towards a mainstream Sunni perspective.

As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?
 

vulcanlogician

Well-Known Member
It can't be "trusted" in its conclusions.

But I feel the same way about human experts in a given field. Human experts produce reliable conclusions, but we should remain skeptical about all of them until our skepticism is no longer reasonable.
 

Stevicus

Veteran Member
Staff member
Premium Member
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?

I've only tried it once, but it seems to derive, compile, and condense information from internet-based sources, so it might be helpful and informative in the same sense that many people use Wikipedia. Of course, people might question Wikipedia as a source, but it's good for getting basic information and references for further study.

I don't know how much I would "trust" it. It's a computer program, although I would expect it to be improved over time to the point where it might be able to separate the wheat from the chaff (when it comes to telling if something is reliable information or misinformation).
 

Koldo

Outstanding Member
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?

Considering that here in Brazil a lot of people will gladly use Instagram and WhatsApp as trustworthy sources, I can only consider ChatGPT an improvement if it ever becomes trendy.
 

Brickjectivity

Brickish Brat
Staff member
Premium Member
What are your observations about ChatGPT?
It is very limited. You get what you pay for. I am glad it is limited, however, because we need time to come to terms with it. I think it is a fun learning tool and a terrific spam generator.

I also experimented with the DALL-E image creator. It is quite powerful but does not understand how to take direction. You cannot get it to imagine; it won't understand the substance of any image.
 
I would expect it to be improved over time to the point where it might be able to separate the wheat from the chaff (when it comes to telling if something is reliable information or misinformation).

Here is an interesting 'fact check' I just did that shows both how it can help and its bias towards popular information (a scripted version of this follow-up pattern is sketched after the transcript below).

Did trump call nazis fine people

Yes, Donald Trump did refer to some of the participants in the 2017 "Unite the Right" rally in Charlottesville, Virginia as "very fine people" during a press conference. This rally included neo-Nazis and white supremacists who were protesting the removal of a statue of Confederate general Robert E. Lee. The rally turned violent and resulted in the death of counter-protester Heather Heyer. Trump's comments were widely criticized for being seen as a failure to condemn white supremacy and for equivocating between the actions of the neo-Nazis and the counter-protesters.

Is it not the case that he specifically said this doesn't apply to the nazis?

No, it is not the case that Donald Trump specifically said that his "very fine people" comments did not apply to the neo-Nazis and white supremacists who participated in the 2017 "Unite the Right" rally in Charlottesville. In fact, during his press conference, he referred to both the protesters and the counter-protesters as "very fine people" and suggested that there were "good people on both sides" of the rally. His comments were widely interpreted as equivocating between the actions of the white supremacists and those who were protesting against them, and were heavily criticized by many political leaders and the public.

Can you provide the full quote in context?

Sure, here is the full quote from Donald Trump's press conference on August 15, 2017, regarding the events that took place in Charlottesville, Virginia:

"You had some very bad people in that group. But you also had people that were very fine people, on both sides. You had people in that group - excuse me, excuse me, I saw the same pictures as you did. You had people in that group that were there to protest the taking down of, to them, a very, very important statue and the renaming of a park from Robert E. Lee to another name.

I’m not talking about the neo-Nazis and the white nationalists, because they should be condemned totally. But you had many people in that group other than neo-Nazis and white nationalists. Okay? And the press has treated them absolutely unfairly."

In this quote, Trump condemned neo-Nazis and white nationalists while also suggesting that there were "very fine people" among those who were protesting the removal of a Confederate statue. Many people interpreted this comment as a failure to strongly condemn the white supremacists and as an attempt to draw moral equivalency between the two sides.


You just said he didn't exempt the nazis, but the quote is "I’m not talking about the neo-Nazis and the white nationalists, because they should be condemned totally."

You are correct, and I apologize for my earlier response. I was mistaken and gave an incorrect answer.

In the quote, Trump does specifically say "I’m not talking about the neo-Nazis and the white nationalists, because they should be condemned totally," which could be interpreted as an attempt to distance himself from those groups. However, he also goes on to say that there were "many people in that group other than neo-Nazis and white nationalists" who were there to protest the removal of the statue, and that there were "very fine people" among those protesters.

The overall context and wording of Trump's comments were widely criticized for being seen as equivocating between the actions of the white supremacists and the counter-protesters, and for not providing a strong enough condemnation of white supremacy.
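
For anyone who wants to run this kind of follow-up check outside the web interface, below is a minimal sketch using the official openai Python client (v1.x). The model name and the exact prompts are my own illustrative assumptions, and it assumes an OPENAI_API_KEY environment variable; the point is only that keeping earlier answers in the message history is what lets later prompts challenge them.

# Minimal sketch (assumptions: openai v1.x client, OPENAI_API_KEY set,
# "gpt-3.5-turbo" as an illustrative model name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []

def ask(question, model="gpt-3.5-turbo"):
    """Send a question, remember the exchange, and return the model's reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model=model, messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# First answer, then follow-up challenges, as in the transcript above.
print(ask("Did Trump call the Charlottesville neo-Nazis 'very fine people'?"))
print(ask("Can you provide the full quote in context?"))
print(ask("Doesn't the quote explicitly exclude the neo-Nazis and white nationalists?"))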
 
One thing ChatGPT does that is quite useful:

Ask it for a range of sources to read on a topic. For example:

What are some sources to read about the history of the Lakota Sioux?

There are many great sources available to read about the history of the Lakota people. Here are a few recommendations:

  1. "Bury My Heart at Wounded Knee" by Dee Brown - This is a classic and widely acclaimed book that provides a comprehensive history of the American West from the perspective of Native American tribes, including the Lakota.
  2. "Lakota America: A New History of Indigenous Power" by Pekka Hämäläinen - This book offers a fresh perspective on Lakota history, focusing on the tribe's efforts to maintain their power and independence in the face of colonialism and westward expansion.
  3. "Black Elk Speaks" by John G. Neihardt - This book is a first-person account of the life and visions of Black Elk, a Lakota medicine man and spiritual leader.
  4. ...
Can I have some more?

[will generate further answers]

You can even ask it to summarise the texts (although it's probably not a good substitute for reading them, it might be useful as a pre-read or to help you decide whether you want to read the book). A way to script this whole workflow is sketched after the example below.

Summarise "Lakota America: A New History of Indigenous Power" by Pekka Hämäläinen in 1500 words

In "Lakota America: A New History of Indigenous Power," historian Pekka Hämäläinen provides a comprehensive history of the Lakota people and their interactions with the United States from the 17th century to the present day. Hämäläinen challenges the traditional narrative of the Lakota as passive victims of American expansion and instead portrays them as a powerful force who resisted and adapted to changing circumstances.

The book begins with the arrival of Europeans in the Great Plains and the emergence of the Lakota as a dominant force in the region...
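
The same reading-list-then-summaries workflow can also be scripted. This is only a sketch under assumptions of my own: the openai v1.x Python client, an illustrative model name, and asking for JSON output (which the prompts above did not do), so the output still needs checking.

import json
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-3.5-turbo"    # illustrative model name

def chat(prompt):
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# 1. Ask for sources as a machine-readable list (the JSON format is my own addition).
raw = chat(
    "List five books about the history of the Lakota Sioux as a JSON array "
    'of objects with "title" and "author" keys. Return only the JSON.'
)
books = json.loads(raw)  # may fail if the model wraps the JSON in prose; verify it

# 2. Ask for a short pre-read summary of each title.
for book in books:
    summary = chat(f'Summarise "{book["title"]}" by {book["author"]} in 150 words.')
    print(book["title"], "-", summary)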
 

idea

Question Everything
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?

If it is a social issue, ChatGPT is trained on people's chats, so it does provide an accurate picture of current definitions, opinions, and popular social understanding.

For specific information - a specific person (too many people with the same name), or data questions - Mr. Wolfram is the best source for... trading stocks etc.
 

idea

Question Everything
AI Sources of Information.
Prompt:

Where does ChatGPT get its information from?

Answer:
As an AI language model, I was trained on a vast corpus of text data from the internet, including books, articles, websites, and other sources of information. My training data was carefully selected and filtered to ensure high quality and accuracy, and it covers a wide range of topics, including science, history, literature, art, and more.

My training data comes from a variety of sources, such as online databases, digital libraries, news websites, and online encyclopedias, among others. Additionally, I am constantly updated with new information and knowledge as it becomes available on the internet, so I can provide the most up-to-date and accurate answers possible.

It's important to note that while I strive to provide accurate and helpful information, I may occasionally make mistakes or provide incomplete or biased answers, as my knowledge is based solely on the information I have been trained on. Users should always verify information and seek multiple sources to ensure they have a well-rounded understanding of any given topic.
 

Heyo

Veteran Member
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?
ChatGPT is a language model. It is made to produce grammatically correct sentences in a way that resembles human writing.
That is what you get. It is a glorified update to the Deepak Chopra quote generator. As such, it generates astonishingly "human" text - with all the wisdom and depth of a Deepak Chopra quote.
 

sun rise

The world is on fire
Premium Member
I've found it good at making up poetry (but not great), a good source of answers to "what are the possible diseases that have this symptom" kinds of questions, and not good at all when asked to write moderately complex code and so forth.

But when asked "why a mouse when it spins", a semi-quote from a science fiction story involving illogic, it opined that if I was referring to a computer mouse, these could be the causes; it was not trained on the SF story I tested it on.
 

Nimos

Well-Known Member
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?
I think that it is highly accurate, but as with all knowledge that is not clearly known, it will present the information as best it can. Still, I don't think one should use it as a definitive answer to questions.

Whether it will be positive for society or not depends on what we want to achieve, so it is difficult to answer. My guess is that ultimately it will force us to change society a lot: I think we will see a massive loss of jobs, on a scale we can't even imagine at the moment, once companies really start to take advantage of these tools.

Combine that with robotics and it will be even more severe. The biggest question is whether society can adapt fast enough, because once a new technology is rolled out and working, replacing humans is a fairly fast process; re-schooling humans is not.

Having access to an AI personal assistant will make a lot of things much easier and faster, and once robots and AI can be integrated with each other into automated robots, humans will be outperformed in many areas. As I see it, there is simply no way we can compete with that.

An example could be self-driving vehicles: no human can compete with a truck that can drive pretty much 24 hours a day and doesn't need a salary, sick days, or a vacation; the same goes for taxi services, etc. If everything goes as it usually does, these things will explode at some point and the process will go very fast, as we saw with computers. So I think seeing these AIs and robots everywhere will end up being just as normal as owning a phone or a car.
 

Debater Slayer

Vipassana
Staff member
Premium Member
As its use will only increase in the future, how much do you trust ChatGPT and how much should we trust it?

How can it be used to become better informed without becoming increasingly misinformed too?

In the short term, do you think tools like this will help society to be better informed on average, or will they have a negative effect, or no effect?

What are your observations about ChatGPT?

I haven't used ChatGPT, but it is a language model that learns from existing datasets. As a result, it will pick up existing biases and trends. It also generates polished language that may not have much substance beyond the surface.

I maintain skepticism of ChatGPT or any other source that reports the words of someone or something else until I have seen the source material for myself. Whether done by an AI or a human, reporting can add an extra layer to the material that isn't there in the original context or quote. Nothing can replace original academic or scholarly text as a source of reliable information either.
 

Koldo

Outstanding Member
I haven't used ChatGPT, but it is a language model that learns from existing datasets. As a result, it will pick up existing biases and trends. It also generates polished language that may not have much substance beyond the surface.

I maintain skepticism of ChatGPT or any other source that reports the words of someone or something else until I have seen the source material for myself. Whether done by an AI or a human, reporting can add an extra layer to the material that isn't there in the original context or quote. Nothing can replace original academic or scholarly text as a source of reliable information either.

Here's the catch, though, since the OP is asking about relying on ChatGPT for topics we don't know a lot about: in that context we wouldn't be able to determine by ourselves how reliable any given academic text is either. At that point we would have to rely on the reputation of the journal rather than on the academic text itself. And that's presuming we can even properly understand the text...
 

rosends

Well-Known Member
As a predictive engine, ChatGPT is "trained on datasets" in order to evaluate which words can fit semantically in a sentence, based on their use in sentences in the past. There is no comprehension, and its being sold as a fact-generating (or even fact-evaluating) system is misleading.
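
To make that concrete: below is a small illustration of next-word prediction, using the open GPT-2 model via the Hugging Face transformers library as a stand-in (ChatGPT itself is not exposed this way). The prompt is just an example; the top continuations depend entirely on the model's training data.

# Toy illustration of "evaluating which words fit next" -- not ChatGPT itself,
# but the same underlying idea, using the small open GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Sunni-Shia split began after the death of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    word = tokenizer.decode([token_id])
    print(f"{word!r:>15}  p={prob:.3f}")

# The model only scores how likely each continuation is given its training data;
# it has no mechanism for checking whether any continuation is true.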
 