Commentary

The AI Bullsh**t Index And The Psychology Behind It

Researchers at Princeton University and the University of California, Berkeley released a paper introducing a concept they call the Bullshit Index -- a metric that quantifies indifference to truth in artificial intelligence (AI) large language models (LLMs).

Is this the type of representation that advertisers want for their products and services? …

2 comments about "The AI Bullsh**t Index And The Psychology Behind It".
  1. Lauren McCadney from CDW, August 4, 2025 at 2:12 p.m.

    This is interesting. But is anyone surprised that these models have a proclivity for spewing BS? The models are not sentient and therefore have no concept of truth vs. a lie or right vs. wrong. They will never circle back around and tell you "Oops, I made a mistake" or learn from their actions, right or wrong. They are operating at the direction of a coder and digesting the words of others (which also might be wrong). They have no connection to the consequences incurred by others based on their misinformation. What does surprise me is how casually the masses dismiss the tool's shortcomings. There is passing talk of "hallucinations," "racist, dangerous statements," and now "a Bullsh**t Index," while business leaders speak of replacing entire workgroups with AI. Can you imagine what happens when that one guy in Marketing who loves buzzwords but lacks substance suddenly becomes an entire AI "team" that is optimized for stringing words together but never held accountable for its accuracy? Further, with all the knowledge workers collecting unemployment, who will be left to call Bullsh**t?

  2. John Grono from GAP Research, August 4, 2025 at 6:32 p.m.

    Bravo Laurie for your post and Lauren for your comment.

    But I just wonder whether the additional "*" in the reference to the "Bullsh**t Index" is deliberate.
