Yes, GPT makes things up if it doesn't know the answer, and yes, this includes 4o too.
Funny thing is that the company stated their censorship is to prevent false information xD
 
It's made up. If ChatGPT doesn't know something but you ask it a true/false question, it will pretend it does. Here's an example:
I didn't bother to make an account so the only version I can use is "ChatGPT 4o mini". Perhaps the more advanced versions that require an account don't have this flaw. Or maybe they all do.
They all have this flaw. They will always try to help you and due to the way they are trained, saying "I don't know" cannot be a good answer. So these Bots will lie through their metaphorical teeth before admitting that they cannot help you (unless they have a valid excuse like their training data being limited to some point in time).
 
They all have this flaw. They will always try to help you and due to the way they are trained, saying "I don't know" cannot be a good answer. So these Bots will lie through their metaphorical teeth before admitting that they cannot help you (unless they have a valid excuse like their training data being limited to some point in time).
This is still too much personification.

It doesn't know anything. It doesn't think. Therefore it cannot lie. It predicts the next token, based on (effectively) a statistical analysis of a body of input text.

When people ask questions, most of the time the response is a confident answer. Therefore, if asked a question it hasn't seen before, it will confidently answer with... something. And because there's no actual data lookup going on (assuming the user input is the only new context, rather than e.g. backend searches for real information producing additional context), it will just spit out whatever text prediction says seems reasonably plausible.
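To make that concrete, here's a toy sketch in Python. It isn't how real models work (they use neural networks over subword tokens, not word-bigram counts), and the corpus and names are invented for the example, but it shows the basic shape: the system always produces some continuation, whether or not there's anything behind it.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows which,
# then always emit some continuation. The corpus is made up for the demo.
corpus = "the cat sat on the mat and the cat chased the mouse".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # bigram counts: word -> next-word frequencies

def next_token(word):
    # Sample a continuation weighted by how often it followed `word`.
    seen = follows.get(word)
    if not seen:
        # Note: there is no "I don't know" branch. With no data at all,
        # it still answers with something that looks plausible.
        return random.choice(corpus)
    tokens, weights = zip(*seen.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("the"))    # usually "cat" -- frequent, so it sounds confident
print(next_token("mouse"))  # never seen as a prefix, answers anyway
```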

But yes, they hallucinate (produce fake information because the relevant information wasn't in the training set) all the time. It's just not really correct to call it lying, because lying implies that it thinks, and has some concept of knowing what it knows.
 
This is still too much personification.

You're right.

However:
But yes, they hallucinate (produce fake information because the relevant information wasn't in the training set) all the time. It's just not really correct to call it lying, because lying implies that it thinks, and has some concept of knowing what it knows.

... the term "hallucinate" implies that they experience these falsehoods which they're producing, and that is also too much personification ... even if it is the best current technical term.
 
... the term "hallucinate" implies that they experience these falsehoods which they're producing, and that is also too much personification ... even if it is the best current technical term.
The best term is probably "the programmer set it to (lie, make stuff up, refuse to answer, etc.)." It is somewhat unwieldy, but it's very literally a search engine with a chatbot filter that marketing teams have decided to call "AI."

It doesn't lie because it's a liar, or because it hallucinated, or because it panicked and gave a quick answer. It can't do any of those things. It lies because the code says something that means the same thing as the sentence "lie confidently if asked a question that cannot be answered for any reason."
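In that spirit, here is a caricature of that design in Python. Every name in it is invented and it is emphatically not how real assistants are implemented; the point is just the control flow, which has an answer path but no abstain path.

```python
# Hypothetical toy "chatbot" illustrating the always-answer pattern.
KNOWLEDGE = {"capital of France": "Paris"}  # made-up lookup table

def chat_reply(question: str) -> str:
    if question in KNOWLEDGE:
        return KNOWLEDGE[question]
    # No "I don't know" branch: an unanswerable question still gets
    # a confident-sounding reply.
    return f"Great question! The answer to '{question}' is definitely 42."

print(chat_reply("capital of France"))                    # Paris
print(chat_reply("is the Dimensional Horror harmless?"))  # confident nonsense
```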

On a more humorous note, because this is a meme thread, imagine that any time something stupid happens with ship behavior, it's because they asked their "AI", which works just like ours: "ChatSTLRS says the Dimensional Horror is harmless, full speed, my 20-corvette fleet!"
 
Feel this is relevant
[image: IMG_2509.png]
 
You guys know you can just ask ChatGPT what it'd pick, right?
[image: 1736947989577.png]

Interesting civics and ship appearance. But the part that made me chuckle is the leader trait. I don't know what's funnier, the ego or the irony.
 
An old meme by Insane Commander, hopefully to help get us back on track.
[image: COMfamily by Insane Commander.png]
 