It’s no secret that engineers write technical documents in a style that no one would actually speak. Like, if I’m explaining plastic hinge integration to you in person, it would sound nothing like what’s written in the journal article.
It’s difficult to measure the amount of nonsense in technical writing, but qualitatively, you know it when you see it.
One attempt at quantifying the nonsense, the BlaBlaMeter, launched in 2012 with the tagline “How much bullshit hides in your text?” Although the bullshit hides in plain sight, paste up to 15,000 characters of text into the form and you will get a Bullshit Index from 0 to 1.
I pasted in the abstracts from a few journal articles and scored in the 0.7 to 1.0 range. For example, the abstract from Scott and Fenves (2006) scored a 0.82, “reeking of bullshit” according to the BlaBlaMeter. Then I pasted in a few blog posts and averaged about 0.3 on the Bullshit Index, described as “some indications of bullshit, but still within an acceptable range”.
According to its FAQ, the BlaBlaMeter looks for nominal style, among many other things. What is nominal style? Basically, it’s using a noun where you could have used a verb, or unnecessarily turning a verb into a noun. For example, “We took strain measurements” would boost your Bullshit Index compared to “We measured strain”.
Although not explicitly called out in the BlaBlaMeter’s FAQ, I’m sure the passive “Strain measurements were taken” would not help your Bullshit Index.
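The BlaBlaMeter doesn't publish its algorithm, but a crude nominalization detector is easy to sketch. Everything below (the suffix list, the length cutoff, the word ratio) is my own guess at one heuristic such a tool might use, not the BlaBlaMeter's actual method:

```python
import re

# Toy nominalization scorer. NOT the BlaBlaMeter's real algorithm;
# the suffixes, length cutoff, and scoring are assumptions for illustration.
NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ity", "ness")

def nominalization_ratio(text):
    """Return the fraction of words that look like nominalizations."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0

    def is_nominal(word):
        stem = word[:-1] if word.endswith("s") else word  # crude de-pluralization
        return len(stem) > 7 and stem.endswith(NOMINAL_SUFFIXES)

    return sum(map(is_nominal, words)) / len(words)

print(nominalization_ratio("We took strain measurements"))  # 0.25
print(nominalization_ratio("We measured strain"))           # 0.0
```

On the post's own example, "We took strain measurements" scores higher than "We measured strain" because "measurements" trips the "-ment" suffix check, which matches the intuition that the verb form is more direct.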
[UPDATE March 12, 2024] Ironically (if it can be considered irony), the abstract for McKenna et al. (2010), the article I recommend everyone cite when they use OpenSees, scored a "perfect" 1.0 on the Bullshit Index.
Paste the abstract of your favorite journal articles into the BlaBlaMeter. In some rare cases, the Bullshit Index can be greater than 1.0, so let me know if you overflow the meter. Also, let me know if you find an abstract that scores 0.5 or lower.
Hello Prof. Scott, thanks for sharing the interesting BlaBlaMeter tool! I dared to paste in my abstract from 10.1007/s10518-023-01770-3 and, surprisingly, it scored 0.2. I did commit to using verbs and active voice, which probably explains the relatively low Index. Thank you for the lesson on proper academic writing!
Good job avoiding the BS!
I pasted in my abstracts for my two papers in the upcoming SSRC conference. Both returned 0.49, which was strange. From the website: "It still may be an acceptable result for a scientific text." I agree!
I got more of a range from some of my recent journal papers.
This one has 0.47: https://doi.org/10.1061/(ASCE)ST.1943-541X.0002882
And this one 0.64: https://doi.org/10.1061/JSENDH.STENG-11581
My abstract for a poster (MTC 24) scored 0.31. It appears that I’m direct.
I’ve always found you to be direct in person (in a very refreshing way), so I’m not surprised! Keep up the good work!
Hi Professor!
I really had fun reading this post.
I pasted my abstract from this: https://doi.org/10.1016/j.engstruct.2021.112455 and got 0.3. Not bad.
Tried this: https://journals.sagepub.com/doi/10.1177/8755293020988019 and got 0.72.
Also tried an abstract from a paper we're about to submit (written by one of my students) and obtained 0.2.
Thank you very much, and keep up the good work!