Paul Pallaghy, PhD

Finally! I got ChatGPT to stop being indecisive & take the logical viewpoint.

The logic learned by LLMs like GPT is actually highly impressive. Yet it’s often still nearly impossible to get ChatGPT to side with a highly logical argument based on known facts.

Why?

Fundamentally, it’s likely because GPT is trained ‘not to take sides’, which kind of makes sense.

But even when the data really supports one side?

GPT-4 (via ChatGPT PLUS) will agree that your (logical) view is reasonable and even insightful.

But then it will insist, almost invariably, that:

“However, there is a diversity of views and this indicates the nuances present in (this space)”

Just when you think you’ve made progress!

But often the evidence is so clear on one side!

It’ll almost never just agree:

“Your view really appears to be supported by the evidence”

It just won’t do that.

It’s frustrating, and often not useful, for it to be so ‘wishy-washy’.

I’ve also found that if you simply ask for an outright opinion, GPT-4 is so inclusive that it won’t side flat out with the arguably more logical option, just because there are people on both sides.

Unless the issue is a law of nature or a predominant mainstream narrative.

Solution

Well, I found a really compelling way to get GPT-4 to become much more logical.

The trick is to make it assess the question on the merits.

I prompt GPT to:

  1. List 5–10 pro & 5–10 con arguments & evidence for (view X)
  2. On the merits of these arguments & evidence you listed above (and ignoring your preconceived impressions), which side is better supported, and how strongly, and why?

I do step (1) to force it to ‘face the facts’ and not hallucinate based on various biases.
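The two-step sequence above can be sketched as a small helper that builds the prompts for any view you want tested. (The helper function, its name, and the `view` parameter are my own illustration; only the prompt wording comes from the article. Each string would be sent as a separate user turn in the same chat session, so that step 2 can refer back to the list produced in step 1.)

```python
def merits_prompts(view: str) -> list[str]:
    """Build the two-step 'on the merits' prompt sequence for a given view.

    Step 1 forces the model to lay out concrete pro/con evidence before
    judging; step 2 asks it to weigh only the evidence it just listed,
    setting aside its default both-sides framing.
    """
    step1 = (
        f"List 5-10 pro & 5-10 con arguments & evidence for {view}."
    )
    step2 = (
        "On the merits of these arguments & evidence you listed above "
        "(and ignoring your preconceived impressions), which side is "
        "better supported, and how strongly, and why?"
    )
    return [step1, step2]

# Example: send these as two consecutive user messages in one conversation.
prompts = merits_prompts("the view that X")
```

Keeping the two steps as separate turns (rather than one combined prompt) matters: the model commits to a written evidence list first, and the second turn holds it to that list.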

I’ve tested this on semi-controversial issues where I believe logic supports one side strongly.

This generic prompt works and, IMO, uncovers truths that GPT would normally just be wishy-washy about.

And I haven’t tweaked this generic prompt.

And almost always it comes to the conclusion I had already reached. (But now I have more evidence. And a strange new confidence. LOL.)

This is quite profound: the correlation (likely!) means the prompt is achieving a logical, on-the-merits analysis, and that my thinking must, usually at least, be logical too.

I do believe there are more truths lurking out there than many people insist and that not everything is hideously grey.

I look forward to genuine truth seeking by LLMs and less orchestrated bias.
