
Does science need to be proven?


todofwar

Recommended Posts

A theory is only as good as its predictive power. So yes, it still has to be falsifiable in some way. If it doesn't make predictions that are different from the prevailing theory, it doesn't tell us anything new, does it?

Now, some competing theories that make predictions that can't be falsified (currently) aren't completely without merit, so long as they match the prevailing theory on the things that can be falsified, or at least the things we care about. Newtonian mechanics vs. relativity is kind of like this; in the vast majority of cases the differences are insignificant enough that we just don't care, and we use the simpler theory because it's good enough. We even assume Keplerian behavior in KSP, and that works all right for us.

A scientist may care more, but us engineers absolutely live for "good enough," and celebrate it when it happens.

Also, two theories could be equivalent. Quantum mechanics problems can be approached from calculus, or you can use matrix methods. Different math. Same answer. In this case everyone agrees it's the same theory however you slice it, even though you go about it in completely different ways.


14 hours ago, Shpaget said:

Some would say that, in science, you can't prove anything. Just because the experiment you are running ends up agreeing with the hypothesis, it doesn't necessarily prove causality. You can only be a little bit more confident in the relation.

At some point the confidence level is so high that the causality is taken as a fact, but a new scientific finding can alter the model. It happened before, it's likely it will happen again.

Before we came up with relativity, Newtonian physics were all the physics we needed and were considered a done deal. Then came along Mr. Albert and basically told us that we were wrong.

Some corrections of "Einstein proved Newton wrong"...

The issue starts with Maxwell's equations.  They show that electromagnetic waves propagate at the speed of light (they work fine with relativity; it's quantum mechanics that replaces them, and presumably even QED doesn't play well with general relativity).

Michelson-Morley came along and showed that the above speed of light was constant in all directions (and frames of reference).  PS: The recent discovery of gravitational waves was basically an update of their apparatus looking for roughly the same thing; only this time the answer wasn't exactly zero.

Lorentz worked out the math trying to figure out why the speed of light adjusts for a given frame of reference.  I likely have this wrong, but there is a reason Einstein always gave Lorentz credit for the transformations Einstein has since received all the credit for: Einstein showed why instead of just how, and the "how" alone didn't make much sense at all.

Finally Einstein came along and showed just why the speed of light is the way it is.  Also, don't forget that in one important detail he proved Newton right in a way almost completely forgotten: there was little explanation of why the mass in F = ma and the mass in F = G*m1*m2/r^2 were the same, until Einstein based general relativity on that assumption (the equivalence principle) and worked out the math showing that the universe does work that way.
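To spell out that "same mass" point (just the standard textbook observation, with m_i and m_g as my own labels for the inertial and gravitational masses):

```latex
m_i \, a = \frac{G \, m_g \, M}{r^2}
\qquad\Longrightarrow\qquad
a = \frac{m_g}{m_i} \, \frac{G M}{r^2}
```

Free-fall acceleration comes out independent of the falling body only if m_g = m_i, which is exactly the equivalence Einstein took as his starting point.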

PS.  Any guesses what level of precision you would need to get Moho to precess around Kerbol?  KSP uses double precision for "on rails" calculations (like the planets), but since Kerbol is roughly Jupiter sized, I doubt that Moho is really going to precess in any measurable way with infinite precision, let alone double.  This was one of the big kickers that Einstein used to show that General Relativity was on the right track (as crazy as it looked), and the gravitational lensing check was done soon afterwards.
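For a sense of scale, here's a rough sketch (plain Python, the standard leading-order GR perihelion-advance formula, applied to real-world Mercury/Sun numbers rather than anything KSP actually simulates):

```python
import math

# Leading-order general-relativistic perihelion advance per orbit:
#   delta_phi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
# The constants below are real Mercury/Sun values, used purely for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
a = 5.791e10         # Mercury's semi-major axis, m
e = 0.2056           # Mercury's orbital eccentricity
T_days = 87.97       # Mercury's orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / T_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec_per_century:.1f} arcseconds per century")  # ~43, the classic unexplained residual
```

About 43 arcseconds per century is tiny, which is why Newtonian gravity looked fine for so long, and also why a game engine has no real reason to model it.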


1 hour ago, wumpus said:

Some corrections of "Einstein proved Newton wrong"...

The issue starts with Maxwell's equations.  They show that electromagnetic waves propagate at the speed of light (they work fine with relativity; it's quantum mechanics that replaces them, and presumably even QED doesn't play well with general relativity).

Michelson-Morley came along and showed that the above speed of light was constant in all directions (and frames of reference).  PS: The recent discovery of gravitational waves was basically an update of their apparatus looking for roughly the same thing; only this time the answer wasn't exactly zero.

Lorentz worked out the math trying to figure out why the speed of light adjusts for a given frame of reference.  I likely have this wrong, but there is a reason Einstein always gave Lorentz credit for the transformations Einstein has since received all the credit for: Einstein showed why instead of just how, and the "how" alone didn't make much sense at all.

Finally Einstein came along and showed just why the speed of light is the way it is.  Also, don't forget that in one important detail he proved Newton right in a way almost completely forgotten: there was little explanation of why the mass in F = ma and the mass in F = G*m1*m2/r^2 were the same, until Einstein based general relativity on that assumption (the equivalence principle) and worked out the math showing that the universe does work that way.

PS.  Any guesses what level of precision you would need to get Moho to precess around Kerbol?  KSP uses double precision for "on rails" calculations (like the planets), but since Kerbol is roughly Jupiter sized, I doubt that Moho is really going to precess in any measurable way with infinite precision, let alone double.  This was one of the big kickers that Einstein used to show that General Relativity was on the right track (as crazy as it looked), and the gravitational lensing check was done soon afterwards.

Newton said neither matter nor energy could be created or destroyed; Einstein showed that there is a mass equivalence to energy.

Newton's law of universal gravitation states that a particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. -Wikipedia

While the above is true, mass-energy instead shapes the curvature of space-time more fundamentally. As a consequence, objects that travel in orbits are non-inertial in the Newtonian sense, while the observer on Earth is not; yet they are not being attracted by anything, they are simply following space-time. A basketball in flight is non-inertial, and I think Newton realized this, but it is not attracted to the Earth's center; it's simply following the warp of space-time, conserving the sum of kinetic and potential energy as it goes.

Quote

Newton's law has since been superseded by Einstein's theory of general relativity, but it continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme precision, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at very close distances (such as Mercury's orbit around the sun). -wiki

Quote

In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress (that is, pressure and shear).[30] Using the equivalence principle, this tensor is readily generalized to curved space-time. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero— the simplest set of equations are what are called Einstein's (field) equations: -https://en.wikipedia.org/wiki/General_relativity
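(The equation that quote trails off into, rendered as an image on the wiki page, is the usual form of the Einstein field equations; reproduced here from memory, with the cosmological-constant term omitted:)

```latex
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8 \pi G}{c^{4}} T_{\mu\nu}
```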

I should point out that its excellence as an approximation holds when one is measuring static forces; it's not so great at describing the motion of bodies in orbit, and consequently it resulted in the prediction of Vulcan, where no planet was present. For things like GPS, Newtonian gravitation would not work.

https://en.wikipedia.org/wiki/Vulcan_%28hypothetical_planet%29

I should point out that Newton was a product of his time, and Einstein also failed in some predictions; you can't really blame the past for the successes of the future. We are blessed by the work of both. Science is a process, not a product.

 

Edited by PB666

3 hours ago, PB666 said:

Newton said neither matter nor energy could be created or destroyed; Einstein showed that there is a mass equivalence to energy.

Sure that was Newton?  My understanding is that most of his work went into alchemy.  I'd be fairly surprised if many alchemists accepted conservation of mass.  I'd further guess that it really didn't get established until after phlogiston was dethroned and chemistry was shown to work with equal masses.

 Von Neumann used to say he could come out before people who went into a revolving door ahead of him.  Newton was likely a master of this (and gets credit for things long after his death).

3 hours ago, PB666 said:

I should point out that Newton was a product of his time, and Einstein also failed in some predictions; you can't really blame the past for the successes of the future. We are blessed by the work of both. Science is a process, not a product.

There's little fault in getting your predictions wrong.  If anything, I would chide Einstein for his blasé attitude that God owed it to him to have built the Universe in a way pleasing to Einstein.  I think he later understood that he was lucky to get so many predictions right (especially since God seemed dead set on "rolling dice").

Edited by wumpus

I don't like it when people talk about "disproving Newton". Technically, he was right. All our equations must reach the classical limit, otherwise they fail. Newton's equations still all hold; he just missed some terms that become negligible at his scales. Back when I was actually taking modern physics, my professor always made a point of having an exercise where we solved the equations at the classical scale and saw that all those intricacies of relativity or quantum mechanics fell away, and we were left with good old Newton or Maxwell.
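If anyone wants to repeat that exercise numerically rather than algebraically, here's a minimal sketch (plain Python, the speeds chosen by me purely for illustration) comparing relativistic kinetic energy with the Newtonian ½mv²:

```python
import math

c = 2.998e8   # speed of light, m/s
m = 1.0       # test mass in kg; it cancels out of the comparison anyway

def ke_relativistic(v):
    """Relativistic kinetic energy (gamma - 1) m c^2, written in a
    cancellation-free form so it stays accurate for small v/c."""
    beta2 = (v / c) ** 2
    gamma = 1.0 / math.sqrt(1.0 - beta2)
    return m * c**2 * beta2 * gamma**2 / (gamma + 1.0)

def ke_newtonian(v):
    return 0.5 * m * v**2

# Airliner, low Earth orbit, 1% of c, 50% of c
for v in (250.0, 7.8e3, 0.01 * c, 0.5 * c):
    rel, newt = ke_relativistic(v), ke_newtonian(v)
    print(f"v = {v:9.3e} m/s   relative difference = {(rel - newt) / rel:.2e}")
```

At airliner and orbital speeds the difference is below a part per billion; only at a sizable fraction of c does "good old Newton" visibly break down.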


19 minutes ago, Bill Phil said:

Newton was wrong about the aether, and wrong about space being static, and after that he was wrong on certain scales of gravity.

But he was not wrong in ways that significantly affected the predictive power of his theories on scales that could be measured during his era.


2 hours ago, kerbiloid said:

Can the initial assumption be falsified?

("If you cannot falsify a theory, it cannot be valid scientifically").

I'm not sure if it works formally, but the argument goes like this:

You have a theory A and its negation, ~A.

If A is not falsifiable, both A and ~A explain all existing data and there are no known conditions (or experiments) that would show either A or ~A false.

Should a "scientist" write a paper claiming A was true, you could equally write a paper claiming ~A was true.  Should the "scientist" claim ~A, you could claim A.  At that point A = ~A, which is a sufficient contradiction to throw both of them out and declare them "unscientific".


13 minutes ago, wumpus said:

I'm not sure if it works formally, but the argument goes like this:

You have a theory A and its negation, ~A.

If A is not falsifiable, both A and ~A explain all existing data and there are no known conditions (or experiments) that would show either A or ~A false.

Should a "scientist" write a paper claiming A was true, you could equally write a paper claiming ~A was true.  Should the "scientist" claim ~A, you could claim A.  At that point A = ~A, which is a sufficient contradiction to throw both of them out and declare them "unscientific".

The way I see it, if both theories predict the same results in all cases then they are almost certainly the same underlying theory with a different interpretation over the top (see quantum mechanics)

 

Unless anyone has any examples that show otherwise?


21 minutes ago, Steel said:

The way I see it, if both theories predict the same results in all cases then they are almost certainly the same underlying theory with a different interpretation over the top (see quantum mechanics)

 

Unless anyone has any examples that show otherwise?

I think the cases that show otherwise get to a level of physics few people understand. But it would be more along the lines of A and ~A explain all observables equally well, but they start to differ at a point beyond which we can reasonably test. 


3 hours ago, kerbiloid said:

Can the initial assumption be falsified?

("If you cannot falsify a theory, it cannot be valid scientifically").

Yes. See: Newtonian and all preceding theories vs. GR. Space is flat…space is not flat. The former is such a basic assumption that it was pretty much overlooked when describing non-GR physics, up until GR came about and pointed it out. That's also why we're forced to use bad analogies (e.g. the rubber sheet example) when we're describing the GR assumptions.


57 minutes ago, pincushionman said:

Yes. See: Newtonian and all preceding theories vs. GR. Space is flat…space is not flat. The former is such a basic assumption that it was pretty much overlooked when describing non-GR physics, up until GR came about and pointed it out. That's also why we're forced to use bad analogies (e.g. the rubber sheet example) when we're describing the GR assumptions.

This is a consequence of the initial theory, though.
Newtonian physics is just a particular case of the more general relativistic one, for trivial conditions (v << c, space curvature → 0, etc.).
Newtonian theory is not wrong, it just has a more limited range of applicability. Likewise, the relativistic theory is presumably a limited case of some greater one.

I mean we need to go deeper: the initial assumption (i.e. the scientificity criterion) declares that a theory (including this assumption itself) can be treated as a scientific one if and only if it is possible to imagine a test in which the theory gives a wrong answer.

For example: if we drop an apple and it flies up instead of falling down, then Newton's theory would be invalidated ("falsified", as they call it).
So, Newtonian theory may be right or wrong, but it's scientific.

While if we declare "a dropped apple will either fly or fall", we can't make an experiment which can invalidate this assumption (because in any case the apple will either fly or fall).
So, this theory is unfalsifiable and thus non-scientific.

But is it possible to imagine a theory which is scientific and unfalsifiable?
If we can't imagine such a theory, then the initial assumption (Popper's criterion, btw) is itself unfalsifiable, thus non-scientific, and we cannot use falsifiability as a scientific criterion of scientificity.

Edited by kerbiloid

24 minutes ago, kerbiloid said:

If we declare "a dropped apple will either fly or fall", we can't make an experiment which can invalidate this assumption (because in any case the apple will either fly or fall).
So, this theory is unfalsifiable and thus non-scientific.

But is it possible to imagine a theory which is scientific and unfalsifiable?
If we can't imagine such a theory, then the initial assumption (Popper's criterion, btw) is itself unfalsifiable, thus non-scientific, and we cannot use falsifiability as a scientific criterion of scientificity.

Your first declaration that a dropped apple will either fall or not fall is not falsifiable and thus cannot be the basis of a scientific theory, but it is logically valid and true by definition.

Similarly, the idea that a theory must be falsifiable in order to be scientific is not falsifiable but is still logically valid and true by definition. It is true by definition precisely because we define science based on falsifiability. 

There is no contradiction in having the definition of science rest upon an axiom which is not scientific in and of itself. There is a suggestion of this because "not scientific" can easily be mistaken for "unscientific", which in turn carries connotations of pseudoscience and false belief, but this is equivocation. The practice of the scientific method is science; the scientific method itself is epistemology.

On a related note, it is of course possible to create a scientific theory which deals with the use and usefulness of the scientific method. For example, one could hypothesize that the rigorous use of the scientific method yields more accurate information in the long run than an epistemology which does not depend on the scientific method. This is a scientific hypothesis because it is, in fact, falsifiable. For example, beings inside a simulation could be faced with a world where scientific analysis was discouraged by anomalous interference from the Program.


15 hours ago, wumpus said:

Sure that was Newton?  My understanding is that most of his work went into alchemy.  I'd be fairly surprised if many alchemists accepted conservation of mass.  I'd further guess that it really didn't get established until after phlogiston was dethroned and chemistry was shown to work with equal masses.

Antoine Lavoisier gets conservation of matter; Newton had the conservation of momentum law, and Leibniz (Gottfried Wilhelm Leibniz) had conservation of energy.

Momentum can appear to be lost if mass can be lost, though technically when that occurs the momentum goes into other types of particles, conserving momentum.


E = (1/2)mv²

For example, when two black holes spiral together, they radiate gravitational waves representing the energy lost in the interaction, causing the orbits of both to decay.

E² = p²c² + m²c⁴
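A tiny sketch (plain Python; the electron mass and the test speed are just arbitrary choices of mine) showing that E = γmc² and p = γmv do satisfy that relation, and that it collapses to E = mc² at rest:

```python
import math

c = 2.998e8       # speed of light, m/s
m = 9.109e-31     # electron mass, kg (arbitrary illustrative choice)

v = 0.6 * c       # arbitrary test speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

E = gamma * m * c**2    # total relativistic energy
p = gamma * m * v       # relativistic momentum

lhs = E**2
rhs = (p * c) ** 2 + (m * c**2) ** 2
print(f"relative mismatch of E^2 vs (pc)^2 + (mc^2)^2: {abs(lhs - rhs) / lhs:.1e}")

# Setting p = 0 leaves E = mc^2, the rest-energy (mass-energy equivalence) term.
```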

Quote

 Von Neumann used to say he could come out before people who went into a revolving door ahead of him.  Newton was likely a master of this (and gets credit for things long after his death).

There's little fault in getting your predictions wrong.  If anything, I would chide Einstein for his blaise attitude that God owed him to have built the Universe in a way pleasing to Einstein.  I think he later understood that he was lucky to get so many predictions right (especially since God seemed dead set on "rolling dice").

Never underestimate the lack of confidence in an overachiever.


12 minutes ago, sevenperforce said:

It is true by definition precisely because we define science based on falsifiability. 

Mathematical axioms are absolutely falsifiable, though. And axioms are not just a question of faith; they are absolutely exact definitions which designate the range of applicability of the theory they belong to.
If you can draw a line crossing its parallel line, then you've successfully run a test which falsified the corresponding axiom.
(As you indeed can draw such a line, this means that your test has crossed the limits of the range of applicability of Euclid's theory, not that Euclid's geometry is wrong.)

So, we don't have a scientific criterion of scientificity here, but just a non-scientific assumption taken as a non-falsifiable (i.e. non-scientific) axiom.
And then we are trying to use it as a scientific criterion of scientificity.

14 minutes ago, sevenperforce said:

For example, beings inside a simulation could be faced with a world where scientific analysis was discouraged by anomalous interference from the Program.

This would just mean that the theories describing their world differ from ours; their world is outside our theories' range of applicability.

Edited by kerbiloid

16 minutes ago, kerbiloid said:

Mathematical axioms are absolutely falsifiable, though. And axioms are not just a question of faith; they are absolutely exact definitions which designate the range of applicability of the theory they belong to.
If you can draw a line crossing its parallel line, then you've successfully run a test which falsified the corresponding axiom.
(As you indeed can draw such a line, this means that your test has crossed the limits of the range of applicability of Euclid's theory, not that Euclid's geometry is wrong.)

So, we don't have a scientific criterion of scientificity here, but just a non-scientific assumption taken as a non-falsifiable (i.e. non-scientific) axiom.
And then we are trying to use it as a scientific criterion of scientificity.

I don't think anyone suggested that the requirement of falsifiability is a scientific criterion of scientificity...or, if they did, they shouldn't have because they would be wrong. 

Mathematical axioms are falsifiable but ideas which are true by definition need not be falsifiable.

19 minutes ago, kerbiloid said:

This would just mean that the theories describing their world differ from ours; their world is outside our theories' range of applicability.

Not necessarily. Consider two simulated universes with identical physical laws, one of which is programmed to discourage scientific inquiry and one which is not. The former universe will see the "science is useful" hypothesis falsified, but the definition of science will not be in any way changed. 

Another example would be a universe with extremely complicated laws which came with a Truth Book providing freely accessible answers for all basic questions. Here, again, the definition of science would be the same but the usefulness hypothesis would be false; thus the usefulness hypothesis is falsifiable.

More broadly, the hypothesis of the usefulness of science is falsifiable because if the hypothesis were not true, science would not work nearly so well as it does. 


12 minutes ago, sevenperforce said:

I don't think anyone suggested that the requirement of falsifiability is a scientific criterion of scientificity.

As I understand it, this topic is dedicated to Popper's criterion, which declares this "unfalsifiable means non-scientific" assumption.

As for these two universes: of course, if the second one is based on such complicated laws that nobody can get a predictable result, then "science" (i.e. structured empirical knowledge) is not in order.
An absolutely unpredictable system is usually called "chaos", so either this is pure chaos, or this universe is based on short-range predictability.
But anyway, it looks like we are talking about different things here - falsifiability vs. usability, just different themes.

Edited by kerbiloid

21 minutes ago, kerbiloid said:

As I understand it, this topic is dedicated to Popper's criterion, which declares this "unfalsifiable means non-scientific" assumption.

But anyway, it looks like we are talking about different things here - falsifiability vs. usability, just different themes.

I think we are talking about an equivocation of falsifiability and usability. Something which is not scientific can still be very useful, and something which is not very useful can still be scientific. A lack of falsifiability does not mean a lack of validity.


If we wish to discuss this, we should probably read Popper's "The Logic of Scientific Discovery", which is the seminal work in the area. It's easily found as a PDF, but although it is out of copyright in most sane countries, it still labours under copyright law in the U.S.; I have a link here, but I can't post it. Annoying.


2 hours ago, sevenperforce said:

I think we are talking about an equivocation of falsifiability and usability. Something which is not scientific can still be very useful, and something which is not very useful can still be scientific. A lack of falsifiability does not mean a lack of validity.

Can I make a point here, though? It really stands at the boundary of the quantum world and the relativistic world. The fundamental claim of the Newtonian/Einsteinian world is causality, to a fault in both cases.

There is a basic assumption of causality in all classical/relativistic models that need not exist at small scales, and even if it did, you might not be able to detect it.
A graviton may be an example.

The problems at the quantum scale are in fact not limited to the quantum scale; some of the facts about quantum mechanics are also evident at visible scales.  When we talk about falsifiability in the context of quantum events, particularly if the statistical power of the argument is not great, it has more to do with sampling and resolution.

In the wiki article they mention "all swans are white", which immediately invokes black swan theory. At the heart of black swan theory is the question: how much sampling is adequate to draw a conclusion about the quantum states that make up the distribution? In some of the work that I do, it is sufficient to say there is never enough sampling; it's always an assumption that you have enough.

An excellent example of the problem is infant-onset type 1 diabetes. There are adequate studies in identical twins to estimate the proportions of environment and genetics in the disease to within plus or minus 10%, and neither can be assigned zero or near-zero risk; in many inflammatory diseases the risks are all but balanced. This makes it clear that both are extremely important to the cause. We have rather good data on the genetics, but the environmental data constantly throws out marginal associations with factors (weight of the mother before and during pregnancy, sun exposure, infant birth weight, commencement of cereal consumption, vitamin D supplementation, infancy during a period of intestinal virus pandemics). The problem is that when epidemiologists look at tens of thousands of subjects across Europe, most of the associations prove to be weak or marginal, and some appear important in one generation but not the next. Certainly in places where winter insolation is low, the vitamin D aspects in parent and child are important, and viral infections vary.

Assessing risk in these circumstances amounts to fitting a multiparametric equation (this has been done for 31 genes in rheumatoid arthritis risk) which does not have a unique answer; the equation is not 1x + 1z = risk, it's more like U + 1x (± var_x) + 1z (± var_z) = risk (± var_risk), where U is the composite of unknown risk. I should point out that alpha in statistical analysis is the chance of the null hypothesis appearing false when it is true (tests against a null hypothesis being the gold standard of science); requiring an alpha below 0.05, and potentially much lower after correction for multiple sampling, creates a sampling problem that lends itself, sometimes unnecessarily, toward retaining the null hypothesis. This is particularly the case if there are dependencies between two variables. The problem this brings forth is that beta, the chance of the null appearing true when it is false, is almost never tested for in science; in complex analyses, correction breeds type II errors even though we do not search for them (as you would say, checking for linkage of useful functional trends that lack falsifiable proof). Most scientists who use statistics are aware of this; it's delegated as the responsibility of future studies, and they publish papers aware that the conclusions are likely wrong (why else do refined analysis?).
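As a toy illustration of that alpha/beta trade-off (an invented simulation, not anyone's real data): run many tests, a few of which carry a genuine modest effect, and compare an uncorrected 0.05 threshold with a Bonferroni-corrected one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests, n_true, n_per_group = 1000, 50, 30
effect = 0.5          # modest true effect size (made-up number for illustration)
alpha = 0.05

false_pos = {"raw": 0, "bonferroni": 0}
misses = {"raw": 0, "bonferroni": 0}

for i in range(n_tests):
    has_effect = i < n_true
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect if has_effect else 0.0, 1.0, n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    for name, threshold in (("raw", alpha), ("bonferroni", alpha / n_tests)):
        significant = p < threshold
        if significant and not has_effect:
            false_pos[name] += 1
        if not significant and has_effect:
            misses[name] += 1

print("false positives (type I):", false_pos)   # raw: roughly alpha * 950; bonferroni: near zero
print("missed real effects (type II):", misses)  # bonferroni misses far more of the true effects
```

The stricter threshold nearly eliminates false positives but lets most of the genuine small effects slip through, which is the "correction breeds type II errors" point.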

You have to know and test for dependencies before you can correct or condition the correction methodology. The initial analysis for the correct statistical power of a study is textbook, and in many association studies it is all but meaningless and almost never used (except in clinical trials), because you need a large enough sample size to show dependencies, but there is no way to know that until you have demonstrated that dependencies exist. Two papers can exist: one paper shows an association; the next paper studies 15 parameters at a time and shows no association; a third paper then comes along, looks at a reduced set of marginals, does refined analysis of the marginals, and shows an association. This happens all the time with genome-wide association studies. There are about a dozen known reasons why an accepted null hypothesis could be false, and yet the correction methodology is almost never conditioned; the reason is that doing so opens up a Pandora's box of critiques, so it's just simpler to accept and then restudy, cherry-picking pieces of the genome. To put it otherwise, altering the correction modality is akin to drawing up a multiparametric equation, coming up with an answer, and having only a faint clue about the parameters and their variances. This is black-and-white thinking that permeates science, and it actually recreates the type of logic that statistics was designed to get rid of.

This is one thing that people don't understand about drug trials: when a drug is in trial they look for both effectiveness and side effects. There are some classic areas of side effects (such as heart attack risk, allergic reaction, etc.); these comprise the white swans. The companies doing the test have no particular impetus to look for black swans; there are tens of thousands of these lurking at very low frequency. But even if they look for black swans they may still not observe them. The problem is quantum statistics: suppose you look for allergic reactions and find X%; now suppose you look at all inflammatory reactions, including things like drug-induced deep organ inflammation. Broad categories whose subsets seem unrelated can create significance where parsed groups show no significance. So a drug goes to market, and in a few years you have a new category of "auto-inflammatory" disease, a misnomer in the sense that the drug induced the inflammation. To a large degree, if you are testing a drug that modulates immunity, you look for evidence from infections, that's obvious, but you might not look for evidence of the opposite effect, too much specific immunity. The bias in quantum analysis really depends on what you are looking for: if you a priori suspect some bad thing might occur and you see evidence, then you might do refined analysis and show that there is an association; but if you are not looking, then you may not detect it, and both outcomes can be drawn from marginal or super-marginal associations. This is something that has been observed repeatedly within the decade following new drug introductions in the marketplace. You can't really argue that a phase III drug trial with 30,000 people has inadequate power, and the clinician then needs to interpret symptoms ("this blood parameter suddenly looks abnormal, how much should I test?"), and you get a statistically meaningless clinical case report.
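On the rare-side-effect point, the arithmetic is simple and a bit sobering; a quick sketch (event rates invented for illustration, independence assumed) of the chance that a trial of a given size sees an adverse event even once:

```python
# Probability that a trial of n patients observes at least one case of an
# adverse event whose true per-patient rate is p (independence assumed;
# the rates below are made up purely for illustration).
def chance_of_seeing_one(p, n):
    return 1.0 - (1.0 - p) ** n

trial_size = 30_000
for p in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"rate {p:.0e}: P(at least one case in {trial_size} patients) = "
          f"{chance_of_seeing_one(p, trial_size):.3f}")

# A 1-in-100,000 side effect is more likely to be missed than seen even in a
# large phase III trial, yet it will show up repeatedly once millions take the drug.
```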

Statistics is not black and white; it was never meant to give black-and-white answers, it is designed to give a confidence range. If you are looking at events outside or on the edge of the confidence range, then past statistics could be meaningless. To some degree the statistician needs to have a feel for the data, which like any craft is an art of practice.
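For instance, a minimal sketch (counts invented for illustration) of what "a confidence range rather than a yes/no answer" looks like for a simple proportion, using the normal approximation:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p - half_width, p + half_width

# Same observed rate (30%), three different sample sizes
for successes, n in ((12, 40), (120, 400), (1200, 4000)):
    lo, hi = proportion_ci(successes, n)
    print(f"{successes}/{n}: 95% CI roughly ({lo:.3f}, {hi:.3f})")

# Identical point estimates, very different certainty: the interval narrows as
# the sample grows, which is the kind of answer statistics is built to give.
```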

The layman looks at the world in terms of appearances of certainty, for example "the big asteroid wiped out the dinosaurs" (but in fact smaller winged dinosaurs survived). One thing I like to bring up is species: a species is something that does not typically interbreed with the opposite gender (or mating type) of other similar types. Species barriers are not concrete but subject to change over time. The common definition has little to no statistical meaning, and in many examples genetics shows that barriers leak. Something can be a species and also interbreed with other species under select circumstances. Formal boundaries are almost always, but not entirely, meaningful at the genus level.

Therefore a species definition is not meaningful? Not really; a biological species can be a good way of defining ecological constraints on a roughly interbreeding population. For example, what's the point of saying there are 100,000 members in a species when an isolated subspecies with only 50 individuals is risking an extinction bottleneck, and the probability that members of the other group will interbreed with it to restore diversity within the potential extinction window is less than 50%? The issue is the cat in the box: once the clade is extinct, voilà, it was a species after all, and the question becomes a moot point. That's not an alpha criterion, so is alpha arbitrary? A sigma of 5 was used for the Higgs, 2 is pretty standard, but if the degrees of freedom are collapsing to 0, then 50/50, 10/90, or 1/100 might as well be just as good. But was the clade meaningful? Was a collection of pupfish in a desert sinkhole something unique, or was it washed down a cryptic dry wash 50 years ago? It's all in the qualitative system analysis, very difficult to see unless the approach is comprehensive.

Complex systems have complex interactions, and they often give meaningless answers when black/white logic is applied. In a flavored world, a "quantum state" becomes the individual (or potentially pairs, like identical twins), the molecular products of a chemical reaction, or planets within solar systems. Quantum mechanics shows us that reality is really best interpreted in the statistical world, not the classical arithmetic world. We interpret our black and white swans as such, but that is only a bias of the way we like to look for patterns. The means by which we classify, parse, or characterize can in and of itself determine the likelihood that something is or is not going to be falsified; some things are just not ripe for testing, and marginal answers are the result.


On 5/3/2016 at 6:01 PM, todofwar said:

So, I stumbled upon this little essay https://www.edge.org/response-detail/25322 in which the author argues you don't need the ability to falsify a theory for it to be valid scientifically. I would argue that if you have no way to test a theory it doesn't belong in science, but I also can't think of a way to bring complex mathematics back into the fold of science with this harsh criterion so I am a bit torn. Any thoughts? 

Well math isn't science, so what's the problem?

I will note, however, that science is never "proven". Theories are tentatively accepted if attempts to disprove them fail. But failure to disprove something is not the same thing as proving it.


3 hours ago, mikegarrison said:

Well math isn't science, so what's the problem?

I will note, however, that science is never "proven". Theories are tentatively accepted if attempts to disprove them fail. But failure to disprove something is not the same thing as proving it.

Then that means that there is a tea pot in space. Every attempt to disprove its existence has failed...


3 hours ago, Bill Phil said:

Then that means that there is a tea pot in space. Every attempt to disprove its existence has failed...

No, you clearly fail to understand Russell's Tea Pot. It does not mean there is a tea pot in space. It does mean that there *could be* a tea pot in space. If there is, it will entirely blow up most of our established understanding of entropy (and therefore science). But until (unless) it is found, we continue on thinking that tea pots can not spontaneously pop into space.

You can not prove there is no Tea Pot. But that doesn't mean there is one. In fact, it means almost the opposite. It means that the theory that there is a Tea Pot is not a valid scientific theory.

