Re: [CR]Fuzzy Math? in No more DeRosas : they all broke ?!?!

(Example: Racing)

From: <hersefan@comcast.net>
To: Steve Maas <stevem@nonlintec.com>, classicrendezvous@bikelist.org
Subject: Re: [CR]Fuzzy Math? in No more DeRosas : they all broke ?!?!
Date: Sun, 07 Nov 2004 03:35:43 +0000
cc: gillies@cs.ubc.ca

I am an economist, not an engineer. But is this really a viable fatigue test? Do the math: the failures happened in as few as 57,000 cycles. By my math, for a typical rider spinning 80 RPM that is perhaps 25 hours of riding!
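
As a back-of-the-envelope check, here is that conversion spelled out (a minimal sketch, assuming one test cycle per crank revolution and continuous pedaling at that cadence; the exact figure depends on how much of a ride is actually spent pedaling under load):

# Rough conversion of fatigue-test load cycles into riding time.
# Assumptions (mine, for illustration): one test cycle corresponds to one
# crank revolution, and the rider pedals continuously at the given cadence.
def cycles_to_hours(cycles, cadence_rpm=80.0):
    """Hours of continuous pedaling needed to accumulate the given load cycles."""
    return cycles / (cadence_rpm * 60.0)

hours = cycles_to_hours(57_000, cadence_rpm=80.0)
print(f"57,000 cycles at 80 RPM ~= {hours:.0f} hours of continuous pedaling")
# Coasting and soft-pedaling would spread those load cycles over more
# hours of actual riding.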

That tells me that the force applied on each cycle is much greater than anything encountered in typical cycling.

Now, if there are a variety of forces acting on a frame that this simple experiment does not capture, it is possible that the steel frame might yet be the winner in a test that is constructed differently.

Seems like a sad waste of a bunch of frames.

Mike Kone in Boulder CO


---------- Original message --------------


>
> OROBOYZ@aol.com wrote:
>
> > Talking about most of these bikes is off topic but this test is at least
> > questionable and at worst bologna.
> > It is ironic that in real world use, Trek OCLV, Cannondale and Principia are
> > KNOWN to break! Of all the makes listed, these guys have an acknowledged
> > history!
> > And then the DeRosa and other lugged bike broke in the exact same place... A
> > place I have never seen a bike break in over 30 years in the biz! That really
> > seems strange. I see that they don't show how they held the front end of the
> > frames in their stress testing machine. Hmmmm.
>
> Dale, you're effectively saying that the scientific (bear with me a
> minute, here) testing doesn't agree with the anecdotal information, so
> the scientific part is questionable. That's not a very safe position to
> take...!
>
> However, I've been aware of this site for some time, and, as a
> practicing scientist and engineer (although not a mechanical one, I
> hasten to say) I have had some serious misgivings about their
> methodology and their interpretation of the results. If this work were
> compiled into a technical paper and submitted to a journal, it would
> almost certainly be rejected. A few reasons:
>
> 1. An obvious problem is the totally inadequate sample space.
>
> 2. Perhaps even more importantly, it seems impossible (to me, at least)
> to derive anything useful from the results. The goal of reliability
> testing is to predict the mean lifetime, under more-or-less normal use,
> of whatever is being tested. To do that, you need some kind of failure
> model, and the testing should determine the parameters of that model.
> For example, in electronics, we know how failure rates increase with
> time and temperature, so we put components on accelerated life test at
> high temperatures. We can then estimate the mean time to failure, at
> normal temperature, from the resulting data. I don't see any way to do
> anything similar in the frame test.
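
Purely to illustrate the kind of extrapolation that works in the electronics case he describes (a minimal sketch assuming a standard Arrhenius acceleration model; the activation energy, temperatures, and measured MTTF below are made-up example numbers, not anything from the frame test):

import math

# Illustrative Arrhenius-style accelerated life calculation, as used for
# electronic components. All numbers below are invented for the example.
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Ratio of failure rates at the stress temperature vs. the use temperature."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(ea_ev=0.7, t_use_c=55.0, t_stress_c=125.0)
mttf_at_stress_hours = 2_000.0  # hypothetical result of a high-temperature test
print(f"Acceleration factor: {af:.0f}x")
print(f"Estimated MTTF at use temperature: {mttf_at_stress_hours * af:,.0f} hours")
# The point: a validated failure model lets you map test results back to
# normal operating conditions. No such model is offered for the frames.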

>
> 3. The response of the frame to stress, in terms of failure rate, is
> nonlinear. If you halve the stress, the frame won't simply last twice as
> long. We all know, for example, that steel has a fatigue limit.
> Apparently that was exceeded for the DeRosa, at least, in some sense.
> Would it have been exceeded in use? Not at all clear, since the testing
> conditions are pretty violent, compared to even the high end of normal
> use. If the stress were lower, the results might be very different.
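
To put a number on that nonlinearity (a sketch assuming a Basquin-type S-N power law with an illustrative exponent; real steel also has an endurance limit below which fatigue failure is, in principle, never reached):

# Why halving the stress does not simply double the life: a Basquin-type
# S-N relation, N = C * S**(-b), with made-up constants for illustration.
def cycles_to_failure(stress, c=1e12, b=3.0):
    """Predicted cycles to failure for a given stress amplitude (arbitrary units)."""
    return c * stress ** (-b)

n_full = cycles_to_failure(100.0)  # some reference stress level
n_half = cycles_to_failure(50.0)   # half that stress
print(f"Life at full stress: {n_full:,.0f} cycles")
print(f"Life at half stress: {n_half:,.0f} cycles ({n_half / n_full:.0f}x longer, not 2x)")
# With an exponent of 3, halving the stress multiplies the predicted life
# by 8, and below the endurance limit the life is effectively unlimited.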

>
> 4. There's another little technological fact of life that is often
> ignored in discussing things like this. While one material might be
> superior to another in some respect, it doesn't follow that the
> difference matters in practice. Returning to the DeRosa as an example,
> suppose that someone overheated the tube while brazing on the shifter
> bosses. If this weakened the tube, it might dominate in determining the
> response to fatigue. In short, a "weak sister" dominates; kinda like a
> chain with a weak link.
>
> 5. The "interpretation" section is naive. It makes a lot of statements
> that simply cannot be justified by the data they have presented. For
> example,
>
>     Even to the "worst" frames in this test the following applies:
>     the color of their bikes will no longer please most racers
>     before the expected life span is reached.
>
> How do they know this? I don't see how they can, since the data cannot
> be extrapolated to the case of ordinary use.
>
> Similarly,
>
>     The fact that aluminum and carbon frames in this test lasted
>     longer than the steel frames is not in our estimate a question
>     of the material, but the design effort.
>
> How do they know what design effort went into the frame? Was there some
> extra information about this? They didn't present any.
>
> To me, this is sloppy work, decorated with sloppy thinking. Still, I'd
> be interested to hear from any MEs among our 1000 who can comment
> specifically about the testing methodology and whether it is consistent
> with what is done in industry today.
>
> Steve Maas (PhD, PE)
> (who had a really nice ride today on the chrome Rossi in
> Long Beach, California)