Um, dude, what the fuck is this doing on Substack? It's way too amazing and data-driven and real.
I can barely follow most of the details, and my math is rusty, and so I'm sure I missed a lot. But I really enjoyed the explanation of why "infinite" error applies. Because the curve literally bends the wrong way, "to infinity and beyond" comes to mind.
Also, I'm sure you're tired of "stiffness" jokes, but "there is no upper bound to the erroneous stiffening" may be the best metaphor I've ever heard for what it's like for boys to go through puberty.
Adam Mastroianni (Experimental History) sent me here. I feel like what *he* would want me to take away from your argument is that we very possibly don't know much of anything about very much of anything, because even w/r/t the things we think we know a great deal about, it can turn out that we're doing it wrong. What else would you have me take away from it (given that I'm not going to pursue a career in AFM)?
Thanks so much for writing this. It's a sin and a crime that it doesn't have a thousand likes and a hundred comments.
Hey Kent,
Thanks for the kind words! I've really connected with Adam's writing for similar reasons. His encouragement of citizen science has pushed me to "publish" my work.
I agree that we don't know much about anything. That was a real epiphany for me from this work. And yeah, it makes you wonder where else we are quite ignorant. We humans really struggle to process and model complicated systems. An AFM cantilever is literally two beams. A force curve has 4 regions and 3 transitions (more if you consider friction & sliding). I excelled in an undergraduate Ivy League engineering program and it took me years of work to dig down to this modeling oversight. Most of the observable world is far more complex. It's quite remarkable that science has identified all the patterns/models that it has. Frankly, I'm excited for machine learning / pattern recognition to identify more.
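For the curious, here's a minimal sketch of the textbook single-beam spring constant that AFM work usually starts from (the kind of "simple" model that hides all of that complexity). The dimensions below are illustrative, not values from the article:

```python
# Textbook stiffness of a rectangular, end-loaded cantilever (Euler-Bernoulli):
# k = E * w * t^3 / (4 * L^3). Illustrative numbers only.
def beam_spring_constant(E, width, thickness, length):
    """Return the spring constant in N/m; all inputs in SI units."""
    return E * width * thickness**3 / (4.0 * length**3)

E_silicon = 169e9  # Young's modulus of silicon in Pa (orientation-dependent)
k = beam_spring_constant(E_silicon, width=30e-6, thickness=2e-6, length=200e-6)
print(f"nominal stiffness: {k:.2f} N/m")  # ~1.3 N/m for these dimensions
```

Three inputs and one line of algebra; the hard part is everything that formula quietly assumes.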
I don't know that I have an answer with regard to an additional takeaway. There are many ways that we could improve our knowledge generation framework. I'm trying not to turn this response into a rant...
I will say I was surprised at how hard it was to develop a truly simple explanation of all this. Despite the clarity and value that simple explanations provide, I don't think we (myself included) value simplicity enough. Simple things don't convey their cost or investment. They're simple! Easy! And easy things are certainly not worthy of a PhD!
What do I mean by this? I think there is an attitude in academia of conveying "this was hard for me, so it should be hard for you, too." This attitude is fundamentally a vibe, a feeling. For me, this feeling registers as "I don't understand this subject and I don't think the presenter does, either." For far too many academics, this feeling translates to "Oh, this is hard. This subject must be important. This presenter must be really smart."
Adam calls my interpretation of this feeling an "ignorance signal". Unfortunately, when you're the only one registering an ignorance signal, you seem like the dumb guy in the class. If your advisor thinks you're dumb, he gently guides you out of the program. (I was allowed to graduate.)
Thanks for the read and the comment! Keep supporting Adam and the lizard ecosystem!
To me, it's exciting to realize how little we know. How much of what we believe is just plain wrong? How much of it is "not even wrong" because we're using concepts that literally don't apply to the world around us?
I'm glad you got the degree; from your description of the painful academic situation I was a bit worried even that was denied you.
If you do decide it's worth your time to "rant" for a while about these topics, or any other, do it on Substack and I'll be sure to read it. Thanks.
Another referral from Adam.
FWIW, it's not just you. As an industry chemist who has worked on the edge of industry and academia for a couple of decades, I'd say that at best 50% of academic chemistry articles are reproducible. And this is for cheap, easy-to-reproduce stuff! Like, things that took me a couple days in lab. At least in medicine they have the excuse that experiments are expensive. We are just lazy.
There is no good way out of this other than to go to industry, and to work on a problem where answers have to be not just verifiable, but within statistical process control. And where someone is incentivized to care whether the next shift can get the same result. Good luck in your job search! Industry has its perverse incentives as well, but I promise you that overall, they are far, far less perverse.
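In case "statistical process control" is unfamiliar: the gist is that a result only counts if repeated measurements stay inside control limits computed from a baseline run. A minimal sketch (illustrative numbers, not real data):

```python
import statistics

# Shewhart-style check: flag anything outside mean +/- 3 sigma of the baseline run.
def control_limits(baseline, n_sigma=3.0):
    mean = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - n_sigma * sigma, mean + n_sigma * sigma

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]  # first shift
next_shift = [10.0, 10.3, 11.5, 9.9]                       # second shift

lcl, ucl = control_limits(baseline)
for x in next_shift:
    print(x, "ok" if lcl <= x <= ucl else "out of control")
```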
Seth, thanks for the kind comment! It's great to hear from other folks who have witnessed scientific malpractice in the rigorous "hard" sciences. Lazy is a kind characterization.
I have settled into industry and found its incentive structure to be a much, much better fit. But, I am always looking for new opportunities! :)
At uni we had to carry out practical measurements and prepare a discussion of the underlying theory before entering the practical bit. We needed endorsement by a PhD who questioned ANY assumption. That made me so mad because I thought he was just making my life miserable so I couldn't get on with the practical part (we had to get his approval first). He was right and I was wrong. You are right as well and what an ordeal that is. My goodness. Good on you!
Rainer, thanks for the comment! If only the field of AFM had such rigor! It is indeed a tough process.
But don't let a lack of understanding prevent exploration. There is value in experiment-first research. In fact, some argue that most technological development starts with a useful experiment and only decades later is accurately modeled and "understood".
Of course, if you are reporting "accurate", bounded measurements that rely on numerous calibrations, you should understand in detail all of the manipulations and calibrations that you are performing!