Test theory

Test theory usually refers to the theoretical understanding of how testing should be carried out to evaluate a theory or class of theories.

The concept of test theory

Test theory allows experimenters to carry out complex and demanding experiments without also having to be experts on the history and predictions of a wide range of alternative theories – it allows a degree of compartmentalisation between the skills of the experimenter and the skills of the theoretical physicist and historian. Theorists study the physical predictions of a theory and its potential competitors, identify where those predictions converge and diverge, and identify an approach to testing that should be able to generate significant information about the goodness or badness of the theory.

Advantages of test theory

Test theory can be used as a pre-existing peer-reviewed "component" of an experimenter's approach, and allows experimenters to publish shorter papers describing their results – once they have referenced the test theory used in their experiment, they do not have to go into great detail explaining the theoretical basis for their work. A single popular peer-reviewed test theory may be used and referenced in a large number of experiments.

Pitfalls of test theory

Although one of the advantages of a good test theory is that it allows multiple experimenters to leverage the same piece of expert knowledge without having to include the test theory's authors in their team, a danger of test theory is that if a popular test theory is bad or incomplete, then a whole range of dependent experiments may be invalidated.

For instance, a popular SR test theory told C20th experimenters that they only needed to test for the existence of a Lorentzlike redshift component whose exponent was in the range zero to one half, with "0.5" representing the extremal prediction of special relativity. Redshifts stronger than 0.5 were declared to have no theoretical significance, and could safely be calibrated out of the experiment as assumed experimental error without damaging the experiment's validity.
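As a rough sketch of what "exponent" means here (the notation below is my own, intended only to illustrate the parametrisation being described), the Lorentzlike part of the predicted frequency shift for a transversely-moving source can be written

    f'/f = \left(1 - v^2/c^2\right)^{x}

where f is the emitted and f' the received frequency. An exponent of x = 0 gives no transverse redshift at all, while x = 0.5 reproduces the special relativity prediction f'/f = \sqrt{1 - v^2/c^2}. The test theory described above only asked experimenters to locate the exponent somewhere between those two values.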

Unfortunately, it transpires that gravitoelectromagnetic solutions to the principle of relativity lie in the range "0.5 to 1", which was not tested for. This leaves us in the unfortunate situation, after nearly a century of SR testing, of not knowing whether the real Lorentzlike exponent is 0.5, 1, or somewhere in between.

If the real exponent value was redder than SR, we would not necessarily know – the classic example being the transverse experiment by Hasselkamp et al. (19xx). In this test, the equipment reported roughly twice the SR result, but since this was out of range according to their test theory, half of the redshift was manually discarded as the assumed result of detector misalignment, and statistics were applied to demonstrate that the remaining half appeared to be genuine. The Hasselkamp experiment proved that test theory allowed experimenters to find a shift twice as strong as special relativity, and still report an agreement with SR to within a few percent.
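The "factor of two" follows directly from the parametrisation sketched above: to first order in v^2/c^2,

    \left(1 - v^2/c^2\right)^{x} \approx 1 - x \, v^2/c^2

so the fractional redshift scales roughly linearly with the exponent, and x = 1 predicts about double the shift of the SR value x = 0.5. A detector reading "twice the SR result" is therefore just what an exponent near 1 would be expected to produce.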

The success of bad test theories

The idea in SR test theory that transverse redshifts did not arise in earlier theory is especially vexing, as we know that this class of effect appeared in Newtonian theory, and was documented (Lodge, 19xx). Why would we standardise on a test theory that was so badly flawed, and so obviously mathematically and historically wrong?

One possible answer is Darwinian natural selection. Since it is quite difficult to distinguish between some of the C19th and SR effects, a test theory that said that no such effects existed before SR made it easier for experimenters to report accurate and significant results. While experimenters might not have deliberately chosen bad test theories, if a bad test theory was already widely accepted, then it would be easier to believe that it was not a bad test theory. Once a bad system had become widely accepted, questioning it and using a better but non-standard set of testing protocols might result in an experiment being assessed as inconclusive or insignificant, and would also involve criticising the experiments that had already been accepted by peer review.

In Darwinian terms, the test theory that is "most fit" for experimenters is not necessarily the one that is most correct, but the one that enables experimenters to report the highest degree of accuracy and significance.


This reverse selection, where bad test theory can drive out good, also suggests another possibility – if the actual Lorentzlike exponent was greater than special relativity's 0.5, the only way to get a convincing result would be to use a test theory that declared values over 0.5 invalid. This leads to the slightly perverse result that, since any experiment is likely to generate a certain amount of random scatter, a test theory with a cutoff at 0.5 would give a more emphatic agreement with SR if the true value was closer to 1 – in this scenario, the worse the agreement between SR and reality, the better the cropped data would appear to correspond with and support special relativity.
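A minimal numerical sketch of this cropping effect, assuming a toy model in which each run measures the exponent with Gaussian scatter and any excess above 0.5 is calibrated out as assumed error (the noise level, sample size and model are illustrative choices, not taken from any real experiment):

    import random

    def reported_exponents(true_x, noise_sd=0.1, n=10000, cutoff=0.5):
        """Toy model: each run measures the exponent with Gaussian scatter;
        anything above the cutoff is cropped back to the cutoff, i.e. the
        excess is attributed to experimental error."""
        return [min(random.gauss(true_x, noise_sd), cutoff) for _ in range(n)]

    for true_x in (0.5, 0.75, 1.0):
        data = reported_exponents(true_x)
        mean = sum(data) / len(data)
        spread = (sum((d - mean) ** 2 for d in data) / len(data)) ** 0.5
        print(f"true exponent {true_x}: reported mean {mean:.3f} +/- {spread:.3f}")

In this toy model, the larger the true exponent, the more of the scatter falls above the cutoff and gets cropped away, so the reported values cluster ever more tightly around 0.5 – the cropped data "agree" with SR most emphatically when the underlying value is furthest from it.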

See also