Everything posted by PB666

  1. https://en.wikipedia.org/wiki/Grasshopper_(rocket)
  2. Particularly since the STS orbiters are NOW sitting in cushy museums and theme parks.
  3. While the paper is on dark matter around spiral galaxies, this thread is on dark energy; there is a separate omnibus dark gravity / dark matter discussion thread for that.
  4. Looking for something else and found this paper https://www.cambridge.org/core/journals/publications-of-the-astronomical-society-of-australia/article/constraining-a-possible-variation-of-g-with-type-ia-supernovae/70513C64971B60C22A8232B64C37243B
  5. The problem is the specific kinetic energy (µ/2r) that you have and the lack of specific energy that your particles of interest have (it is random). The principal scattering force on the particles is UV from the sun; the scattering cross-section is very low for hydrogen, UV is a minor constituent of sunlight, and particles scattered by UV generally don't gain enough momentum to escape. Relative to your prograde motion, the scattering directions (in a non-rotating reference frame) are randomized. This is a problem for Venus and Mars, because the lack of a magnetic field allows particles to penetrate the atmosphere and scatter gas, mainly protons, out of the atmosphere. That is not the case on Earth: we have the Van Allen belts, and over the whole of Earth's atmosphere only about a kilogram is lost per day. There is simply not enough energy to scatter significant amounts of oxygen above a certain altitude.
     1. You have a very strong vacuum pump and you vacuum space. You would pick up particles if the nozzle is pointed in the direction of travel, otherwise no particles; you could put a styrofoam cup where the nozzle was and you would probably capture the same amount of hydrogen. There simply is not enough delta-pressure in space to make a vacuum pump useful.
     2. So alright, you dangle a gigantic cup behind the spacecraft (a giant plastic cup with a tiny vacuum pump at the vertex; the spacecraft rotates once per orbit so that the cup is always behind the craft). The average velocity of the particles hitting the cup is 7,840 m/s, and your craft, in merging with a particle, absorbs all of its momentum: -7,840 m/s times the mass of particles vacuumed (assuming a right-handed orbit). So suppose you vacuum hydrogen and oxygen from space at 7,840 m/s. In order to maintain orbit, bad news: the exhaust velocity when you burn that propellant is 4,740 m/s, so you just lost about 3,000 m/s of dV per unit of captured mass. But it is actually much worse, because you would have to capture hundreds of hydrogen atoms before capturing one oxygen.
     3. OK, so you only expand your cup when the craft is passing 180 degrees to prograde (in the direction of the sunlight flow) and collect charged, slow-moving hydrogen at 1,000s of m/s. A better choice, but the problem is that there is not enough pressure, and not enough UV from the sun to charge all of the hydrogen you pass, and the volume swept is much, much lower.
     4. So to compensate you build an ion drive. Let's make it real: it needs about 16,000 m/s of exhaust velocity (an Isp of roughly 1,600 s) to neutralize all the forms of drag created. OK, so now you have to haul Xe or Ar from Earth, plus the mass of a large number of solar panels (which also create drag) to power it, so you are paying money to send fuel to space anyway.
     An alternative to making a big plastic cup to capture O2 is to make your ship very space-aerodynamic: gently deflect the particles, using some aerodynamically shaped accelerating magnets and the solar panels to throw the particles in the other direction. You could use UV to ionize the gas before it reaches the ship, capture the electrons, push off on the gas, and emit the electrons retrograde. In this scheme you use the same number of particles, but instead of trying to capture them and lose the dV, you hurl them in another direction (say, at an unfriendly spacecraft you want to deorbit). The problem with this is that the solar panels that make the electricity for your hadron collider create a lot of drag themselves.
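A minimal sketch of the capture-and-burn penalty in point 2 above (my own numbers-check, assuming only the 7,840 m/s capture speed and 4,740 m/s exhaust velocity quoted in the post):

```python
# Back-of-envelope check of the "scoop and burn" penalty (point 2 above).
# Assumptions: particles are captured at orbital speed and later expelled
# as propellant at the rocket's exhaust velocity.

v_orbit = 7840.0    # m/s, relative speed of captured particles (drag per kg captured)
v_exhaust = 4740.0  # m/s, effective exhaust velocity quoted in the post

def net_momentum_per_kg(v_capture, v_e):
    """Momentum regained by burning a kilogram minus momentum lost capturing it."""
    return v_e - v_capture  # negative means the scoop is a net drag

if __name__ == "__main__":
    net = net_momentum_per_kg(v_orbit, v_exhaust)
    print(f"Net impulse per kg captured and then burned: {net:+.0f} N*s")
    # -> roughly -3100 N*s per kg: every kilogram scooped costs more momentum
    #    than burning it returns, i.e. the ~3,000 m/s dV loss described above.
```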
SpaceX does have the right idea: the key to getting more fuel into space is to make launches cheaper, and then more efficient at roughly the same cost. Instead of putting solar panels in space to create lots of drag and try to conjure hydrogen and oxygen into existence, the better idea is cheaper, more efficient solar panels and more efficient electrolysis on the ground. RS-68A-like engines are already great; something like that, smaller, and you have all the thrust you need to increase payload. Even if hydrogen is too much of a hassle on the ground, it is not terribly difficult to make methane from H2 and CO2, or to use electricity-driven photosynthesis to make ethanol and then acid-dehydrate it to ether (b.p. ~35 °C) or hydrogenate it to ethane (b.p. -88.5 °C). The point is cost: get cost down and we go to space; keep cost high and, no matter how efficient the process, we stay on the ground. It is not a particularly eco-friendly idea (CH4 is a greenhouse gas), but spending 20 billion dollars to get one really efficient rocket into space has a lot of unseen ecological unfriendliness associated with it. Seriously, SpaceX could put solar panels on the roofs of all their facilities and have a more beneficial environmental impact than any scheme designed to scoop fuels from space.
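For reference, the two ground-based fuel routes mentioned above are standard chemistry (my addition, not from the post); the methane route is the Sabatier reaction and the ether route is acid-catalysed dehydration of ethanol:

```latex
\mathrm{CO_2 + 4\,H_2 \;\longrightarrow\; CH_4 + 2\,H_2O} \quad \text{(Sabatier, Ni catalyst, } \sim 300\text{--}400^{\circ}\mathrm{C}\text{)}
```
```latex
\mathrm{2\,C_2H_5OH \;\longrightarrow\; (C_2H_5)_2O + H_2O} \quad \text{(acid-catalysed dehydration)}
```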
  6. The paper was in 2013, almost 2014. By that time SpaceX had already worked out most of their landing bugs; see below, the only major bug left was grid-fin engineering. https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches You will find this relevant, though it is not the only evidence that the engineering constraints (the structure, not the electronics) needed to be improved before reproducible landings could be accomplished. If prediction is any measure of which metric they are using, my bet is that B5 performance testing is nowhere near finished; while BFR may shorten this process, my bets are also on testing that improves the B5 F9 markedly and pushes the FH payload to LEO to at least 80 kT, maybe 100 kT. The hangup in the performance of FH was the visible failure of the core landing. The initial burnback looked as if they were still using F9 burnback protocols, indicating that at least these have not been tweaked, which means there is still performance left to be had in the core. I'm not criticizing them, but once B5 is fully operational I expect you are going to see many small design improvements that give tiny boosts in performance.
  7. You mean the trial and error that companies went through. It is more than just solving the equations for X (whatever the coordinates of the launch pad are), Vx = 0 (relative to the surface of the Earth at X), thrust as a function of altitude, and friction as a function of altitude and axis-relative velocities. There are many other factors.
     1. How much gas pressure you need to re-fire an engine experiencing certain dynamic pressures (things like mock test-firing models under different pressures).
     2. How much friction you generate traveling backward with the grid fins facing a certain direction. This equation gets really hairy when the spacecraft goes transonic. A computer will not likely get this correct on the first pass; it requires some in-the-field modeling. You have 9 rocket engines coming across the sound barrier, shaped the exact opposite of a Sears-Haack body, with grid fins sticking out.
     3. Grid fins: what are they and how do they behave once deployed? This is not trivial. Deep in space the grid fins act as particle deflectors; at the peak of dynamic forces, inside the boundary layer, they are doing far less braking than the (off) engines; after the booster goes transonic they begin producing huge amounts of drag again.
     4. Throttle-up times while under huge dynamic pressures.
     5. Initial ISP when under huge dynamic pressures.
     6. How to factor estimated and actual atmospheric conditions into the descent.
     Programmer X writes a program according to all the laws of physics he is aware of. Programmer X watches boosters crash into the ocean. Telemetry data reveals that some parameter (X, Y, Z... he is not exactly sure which one) is off, meaning the actual f(x,y,z) at dT=q had an elevation of minus something (it intercepts the water) when the velocity there was expected to be -1 m/s. (This could also be rotational, or surface-relative horizontal velocity.) Programmer X does not yet have enough data to determine how to fix f(X,Y,Z), so he just adds a constant that tells the engines to fire a quarter of a second earlier than they otherwise would have. If you are using radio telemetry or GPS and you are traveling at very high speed, the data is old by the time it reaches the flight computer; at some point, as the flight closes in on a target at high speed, it has to actually be processing its own reference frame and applying fudge factors relative to that reference frame.
     The F9 has 9 engines at its disposal; by symmetry that is 4 pairs plus 1 engine, and each engine can be throttled down to 40%. This means that relative to initial full thrust it can operate from 4.4% up to 11.1% on one engine, 8.9% to 22.2% on two engines, and 13.3% to 33.3% on three engines (meaning 33.3% down to 4.4% if you start on 3 engines and shut off two during the descent). Given that it is experiencing >> 10 m/s² in the second prior to landing, it has more than enough variability. What probably matters most, if you don't want to waste a lot of weight and money on very fancy engines that can down-throttle to 5%, is to just use a bunch of cheaper, lighter engines. The problem is that with kerosene, which has a low vapor pressure, you need enough head gas to fire them multiple times; this is not required with cryogenic fuels. Why did they lose the core? I am very critical, and I was very critical of SpaceX designs in the beginning; kerosene on the first stage I see as part of the problem.
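A minimal sketch of the throttle arithmetic above (assuming only the figures quoted in the post: 9 identical engines, each throttleable between 40% and 100% of its own thrust):

```python
# Fraction of total liftoff thrust available when only n of the 9 engines
# are lit, each throttleable between 40% and 100% of its individual thrust.
# Reproduces the 4.4%-11.1%, 8.9%-22.2% and 13.3%-33.3% ranges quoted above.

N_ENGINES = 9
MIN_THROTTLE = 0.40   # assumed per-engine minimum, as stated in the post

def thrust_range(n_lit, n_total=N_ENGINES, min_throttle=MIN_THROTTLE):
    """Return (min, max) thrust as a fraction of full thrust on all engines."""
    return (n_lit * min_throttle / n_total, n_lit / n_total)

if __name__ == "__main__":
    for n in (1, 2, 3):
        lo, hi = thrust_range(n)
        print(f"{n} engine(s): {lo:.1%} to {hi:.1%} of full liftoff thrust")
```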
I don't know if methane is the solution; propane might be a better one. There is a trade-off: if you must fire a rocket many times, some type of cryogenic fuel is the way to go, plus you get that ISP if done right. The RS-25, however, tries to get too much thrust from a small engine (making it expensive, more complex, and failure-prone), so you don't want to go that way. The 9-engine craft seems to be a good compromise because you don't need that complexity. FH uses the boosters to build altitude (18 engines back to the surface) and the core to build momentum (9 engines back to the surface), and once the S2 has good vertical momentum and about 3,000 m/s (roughly 3/8ths of what it needs to orbit), it has enough thrust and momentum (i.e. time) to accelerate the rest. BTW, as per an earlier argument, we should consider one of the limiting factors on other launch systems in the heavy category: the RL10B-2 is rather limited in its thrust, and it is difficult to pack multiple engines together. So if you want to get something heavy into orbit with hydrolox, a single RL10B-2 is not the way to go unless you want the RL10B-2 stage to fire from 5,000 m/s, which makes it effectively a 3rd stage. Something is going to have to change in hydrolox engine usage if the traditional players hope to compete with SpaceX; the RL10B-2 is too small and bulky.
  8. lol, and it goes on to say "The newly discovered Rhodesian man may therefore revive the idea that Neandertal man is truly an ancestor of Homo sapiens; for Homo rhodesiensis retains almost all of the Neandertal face in association with a more modern brain-case and up-to-date skeleton." You're funny when you are angry, lol. Obviously rhodesiensis did not evolve from Neandertals, and Neandertals are no more similar to humans than rhodesiensis was, but that's another discussion. "Although the new skull from the Rhodesian cave so much resembles that of Neanderthal man . . ."
  9. No need: "it is strangely similar to the Neanderthal or Mousterian race found in Belgium, France, and Gibraltar" . . . IOW, more similar to Neandertals than to humans. So do you think now you can go back and be less profane, avoiding that ole word filter that the moderators so love to use? I should point out that Homo rhodesiensis was not closer to Neandertals; it may actually have been closer to modern humans. OK, maybe I have overused the term Neandertal, but the point was to emphasize the fact that, given two finds, Piltdown and Kabwe, Kabwe was pushed to the back and Piltdown pushed to the front as a potential ancestor of humans. The dating he applied was all over the board; it was meaningless, and 100 years later it is still meaningless. You can't dress this stuff up and say it was beautiful; it was ugly science.
  10. I see. You can edit your posts, lol. And with fair use you can quote. http://amendez.com/Early Man Seminar Poster/EMSP Rhodesian Man-text.pdf
  11. The original Nature paper, to be specific.
  12. Read the entire paper from 1921.
  13. If you want to read that into it, that's your choice. For 41 years PA could not distinguish a fraud from fact, and for 41 years PDM was part of the milieu of other finds. The fraud was there, in plain sight, and still it could not be seen. As for AMS: garbage in, garbage out. The technique is great; that's not a put-down. But the sample preparation is what makes a great machine produce great results. As I pointed out to you, if you invite a critique of paleontology, be prepared to be stung. Let me lay it out for you. First you take just the bone and do radiocarbon dating. Next you extract the collagen, then you do radiocarbon dating. Then you carefully purify the collagen, then you do radiocarbon dating. You might even have an antibody that is specific to human collagen; you can use that to further purify it and then acid-release the antibody. If the date increases with each purification you perform, then you had an extrinsic source of variation, just as dust is extrinsic to glass. This is not trivial either: collagen is just an acid-soluble protein, bacteria also produce acid-soluble proteins, and any good biochemist knows that extractions contain impurities, proteins not of interest. That was the point of a Ph.D.: basically, that you can purify something away from everything else and characterize it.
  14. Look up the paper; it was described as a Neanderthal. He had a somewhat elaborate context, but that was his choice of words. They may have been the founders, but a New Zealander by the name of Wilson reformed it and made it meaningful. And if you know what that is in reference to, you would be a much better paleontologist for it.
  15. No, it cannot; with modern technology like AMS it can. Statistically, the results of the Feldhofer dating have a problem which I can state as this: the overall statistical confidence has to combine all independent sources of variation, but what the paper publishes reflects only the intrinsic source of variation; it makes no effort to describe extrinsic sources of variation. For the physics folks here, Δp·Δx ≥ h/4π is variation that is intrinsic to nature, but a poor experimental setup or faulty devices are extrinsic to nature. The assumption you make in producing a +/- range like the one published in the paper is that you have eliminated all extrinsic sources, but this is not true; neither did they set up an experiment to test the possibility of extrinsic variation or provide such a result. For example, falling down the face of a rise and being crushed by various forces is a source of extrinsic variation. This is a huge problem in archaeology; although they have sought to improve the techniques over the last few decades, anyone who understands both the chemistry and the statistics can see the problem rapidly. Getting a date before 40 kya requires you to get a lot of things (as in everything) right: not only does the instrument need to be modern, but the samples need to be treated to remove all potential extraneous sources of 14C. As I asked, who actually defended the site's context? Who stepped up and said this site is important, let's seek out and find all like-kind items? If Fuhlrott's work had been great it would have been done; otherwise, it makes little difference what he said. This is not to cast doubt on their work; the point is that the system is flawed and tends to produce flawed results whenever all points along the path are not treated rigorously. Of course Feldhofer I is of a higher class than Piltdown, but in fact it was the high point of two generations of work, which does not say much about those two generations. Everyone doubted the mtDNA work, including myself; the first thing I did was look for cytosine deamination, and I found it. I really could not see how they could get DNA from a sample treated as badly as that sample had been treated. But when the Vindija cave sequences came in and we could see how close these were to one another, one had to accept it as a point. I doubt everything that MPI produces, because they have a track record of small errors, but in general those errors are corrected within a couple of years. In archaeology, errors might not be corrected for decades. That is not intended as an insult; it is just a recognition that the field lacks an objective basis for critique. Within weeks of the original Neandertal genome you had people, including myself, scouring the sequence looking for flaws, and within months the flaws were found and reported, and MPI had to respond and modify their work accordingly.
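A minimal way to write the intrinsic-versus-extrinsic point above (my own formulation, assuming the independent error sources add in quadrature):

```latex
\sigma_{\mathrm{total}} \;=\; \sqrt{\sigma_{\mathrm{intrinsic}}^{2} + \sigma_{\mathrm{extrinsic}}^{2}}
\qquad\text{whereas the published range reports only } \pm\,\sigma_{\mathrm{intrinsic}}.
```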
  16. When I look at the Enterprise, it always looked too flat to have many decks, like maybe if everyone was a pygmy. When you see the corridors you know there is a plenum above the ceiling with stuff up there. On a submarine a tall man has to duck going between sections. That was the coolest part, the barricade that would drop and the bat-cave with a turntable for the Batmobile. You take away his props and all he is is a guy wearing leotards with an unusually close and intimate relationship with Robin.
  17. S. Woodward lumped them; I'm using his designation. Once upon a time I deemed it important, but in discussion with African paleontologists I gradually came to realize it could be 750 kya or a recent variant; there is no way to tell them apart. Which gets to the point that some effort to characterize the site soon after its discovery, maybe even collecting a soil sample from it, could have been useful. Again, he supported the designation of Piltdown man but also failed to investigate Homo rhodesiensis to any degree, which was an opportunity lost. It is true that many archaeologists and anthropologists disputed certain commonly held points of view, but this is not the point; the point is the arrogant bias that permeated human paleontology. From my point of view, the focus on European paleontology left a very important discovery as more or less an asterisk in paleontology. You hear the words Kabwe, Broken Hill, Homo rhodesiensis type specimen being tossed about in the literature, but that is basically the limit of its importance: an opportunity lost. One has to take the conservative position. You might have an insight that variant X existed, but you don't know what or where it existed until you have some probative information (and my point about AMH I also apply to my own thinking). The problem in Africa is this: if you can imagine a wall, and I take a shotgun and fire game load with painted pellets at the wall, would that random colored pattern make any less sense than African paleoanthropology? And while you could have an indefinite number of populations and subpopulations living all over the place, in the context of population sizes and the forces that shaped internal and external African evolution, some of these would necessarily be undetectable except in very fortuitous circumstances (like the discovery of recently dated iso-variants in S. Africa). This is where things will stand for the foreseeable future. To be certain, to know, you need context, and the lack of context at many sites means you don't know much. And in the case of human evolution, to know means you need genetic context, and if the context only appears as fragmented DNA sequences over time attached to some odd primitive remains, then you also don't know much. It is also true for Asia: if you have a genetic anomaly and you don't have a reference to compare it with (like Denisovan), then you don't have a context. I read an article claiming B73 was probably from Denisovan admixture, but it was not found in Denisova, nor was it found in the haplotypes. B73 is not the only allele, and if the admixture was a single event as some believe, well, B73 is an odd one, so is B48 and so is B67, and these cannot all be from a single-individual admixture (individuals can carry at most 2 variants, they cannot carry 3), and their mode is in the northern Yellow Sea region, not Indonesia. In the same way I detected freaky alleles that created modes in central Africa; I also found these not to be from SSA Africans generally, and over time others found similar associations. But if we never find a fossil that links a context to the region of interest, it is just a guess. The Zhoukoudian fossils could have been that, but they are gone now. Opportunity lost. This person that you defended, J. C. Fuhlrott, could have taken the initiative and fully investigated the site, securing as many fossils as possible and sheltering them away in some cold, dry place; instead, context was lost.
  18. Alright, so the next bits start in the 1880s and proceed to the present.
     1880 - Šipka cave (in what is now the Czech Republic): Mousterian tools found in the context of a hominin, presumably Neandertal <-- the first finds with context (no dates).
     1888 - Two sets of remains which form a better picture of Neandertals, also with archaeological context. A third find from the site (no publication).
     1899 - Krapina (Croatia). Excavations in this area would eventually provide the most useful samples for molecular anthropology in Europe.
     1908 - https://en.wikipedia.org/wiki/La_Chapelle-aux-Saints_1 - you can read it. Like all the other sites except Krapina, the dating is uncertain. The archaeology was not done conservatively, creating unanswerable questions about behavior. This particular site stands out because almost every Neanderthal find in Europe represents individuals of youthful disposition; the age at death would indicate a maximum longevity for Neanderthals of about 40 years (a guess). Neanderthals of similar or greater age are only seen in Iraq. And with this there is a point to be driven home: the shorter-lived Neanderthals appear, even without the presence of humans, to have lived rather stressful lives. One researcher noted that typical injuries are similar to those experienced by rodeo riders. There has been speculation that Neanderthals matured 30% earlier than humans, had children at younger ages, and died earlier; this may have been a response to the stressful conditions of Europe during the Ice Age. The hypothesis has not been followed up.
     1912 - Marcellin Boule publishes his now-discredited but influential study of Neanderthal skeletal morphology based on La Chapelle-aux-Saints 1 (Wikipedia). We should note that while we have a dozen or so finds from Europe, nothing in the way of African archaeology has been done, and by and large the archaeology done in Europe is of poor quality; by anyone's standard it amounts to alchemy. Things don't get better, they get worse.
     1912 - Piltdown Man. As most everyone knows now, Piltdown Man was a fraud; however, this was not discovered until 41 years after the fact. One of the obvious failings of paleoanthropology was that the researchers were largely unfamiliar with the way evolution works and with the nature of trait evolution (for example, a female gorilla jaw is not a trait, it is a composite of various traits). But the impact of Piltdown Man was to create a Eurocentric view of human evolution; that is to say, chimps and gorillas and Africans were these weird offshoots of the branch that evolved in Europe. https://en.wikipedia.org/wiki/Piltdown_Man
     Up to this point the typological analyses are largely dominated by bias. If we look at these finds from a Poisson point of view, we are already drawing conclusions about the rate at which drops fall per unit of time with most of the survey area obscured (about 90% of the land surface of the Earth is not surveyed), and the context that would eventually yield the timings is also very loose to non-existent. Worse, because of the careless preservation and disturbance at many sites, precise datings are obviated. And yet conclusions are already being drawn about human evolution based on a statistical analysis that is all but worthless. And so it would be that in 1921 the discovery of a "Neandertal" (Broken Hill, Rhodesia) in a very odd place would largely be ignored. No serious investigation was to be conducted at the site.
  19. With regard to the he-said/she-said stuff: what I will be able to demonstrate is that the original wrong notions brought into the argument did not simply disappear; in fact many of them persisted until it was shown that they were wrong. And what you will see is that only the subset that were proved wrong faded, while other wrong notions continued to persist even until recently. Archaeology did not release its biases easily; some of the changes were well fought over. You don't quite get the point: you don't have the Marx Brothers come in and do archaeology for 100 years and then say, geez, now we are going to do the serious stuff. What you have to say in the paper, right at the start, is how the samples were manhandled (not bury that fact deep in the text) and what you therefore could and could not do. Critics need details: what was the nature of the preservative placed on the calvaria? Were any means taken to characterize the chert and other material in which the bone fragments were recovered (how disturbed were they after deposition)? And of the things we could do, this is how we avoided all the created risk. The null hypothesis is that the dating is indeterminate; in all circumstances it is up to the authors to prove that, in the mess that they have, they adequately cleaned the sample, and it is the critic's responsibility to point it out if they haven't. I should also point out that at a certain point in the purification, because the dating is so close to the limit of carbon-14, each step needs to be conducted with CO2 (from ancient carbonates that are acid-treated to release gas) used for purging the buffers, with the degassed CO2 purified such that the CO2 itself is tested for 14C and found to contain none (IOW, all gases used need to be completely free of contaminating isotopes: 15N, 14C, etc.). I will deal with this problem later; it is not possible in all cases to get a high count on the AMS dating. Another point I will make later: even if the researchers are extremely careful, a cave is not dead; over the life of a cave, animals will come and go, bats, cave arthropods, etc. In detail, it needs to be established by testing, as excavation proceeds, that the indicators of recent activity (mtDNA and other markers) have not accumulated in the soil. But in the case of Feldhofer I we are beyond that, because the context was lost, so one has to assume for the sake of the research that the sample was contaminated and then take the appropriate measures to remove that contamination. As I said, we have other reasons to believe that Feldhofer I was not greatly older than 40k because of the issues regarding mtDNA persistence, and given the post-disruption exposure to the elements we are lucky to have any DNA left in that sample, so it is reasonable to infer that the dates are close, but this really is 'in the ballpark' analysis. Getting the correct result may just be a matter of luck, and the next time you may not be so lucky with the same technique. I will deal with this later with Oase 1. I should point out that the first three raindrops that fell all represent some type of ill-informed sampling bias: none of the first 3 are accurately dated, and there is extreme bias toward the discovery of European fossils (which would lay the framework for the next period of ___________________________[to be shown]).
And essentially no effort was made, and no questions asked, about how these fossils fit into the broader framework of ape evolution, other than the spurious, incorrect assertions that they were related to South Americans. The same types of assertions were made concerning Coobool Creek in Australia, and again, they were wrong-headed.
  20. Spectroscopy could not measure the level of radioisotopes in the gas. In addition, even small traces of gases like HCN would be lethal within an hour of exposure, and you would hardly notice that level. There are other issues too: HCl even in small concentrations will acidify the membranes of the lungs, causing disabling conditions. Most planetary atmospheres will be toxic to humans, either in what they have or in what they lack. As of yet we have not discovered a single planet that has >10% oxygen in its atmosphere. Most sci-fi exaggerates the frequency of livable planets, and of planets with sentient life, by many orders of magnitude. That's usually part of the prop device: to bring aliens that are otherwise way too far from Earth close enough that we can visit them by spaceship. It's not bad physics per se, it's just bad chemistry.
  21. I was going to put this in the other thread, but after some thought I decided it was more appropriate here. I want this to be more of a backdrop for sensing whether science is correct, not just in its results but in the way it proceeds. To give a sense of how to assess this, I want to create a logical context of sampling, and the simplest example that I can give is this. You have a patio that is 10 meters by 10 meters. It has recently been painted a medium grey and has a coarse satin finish that shows raindrops pretty well. At a given time, how many raindrops need to fall on that patio before you can assess whether raindrops are falling at the same rate or at a different rate, to a given confidence range (say +/-50% relative variation)? Here is the formula: https://en.wikipedia.org/wiki/Poisson_distribution. By this definition you will have to mark off blocks (say 1 x 1 meter squares), and your sampling will be over a discrete time interval, which is arbitrary, but the smaller the time period the more accurate the sampling. So to assess the variance over time you now have discrete blocks in a 3-dimensional construct (2 spatial dimensions and 1 temporal dimension). For any discrete element in the matrix, when the first drop falls you have a relative variation that is much larger than your average; as more raindrops fall, the average moves higher and the relative confidence tightens (a short numerical sketch of this follows below). OK, so we are talking about hominid evolution, really a period bracketed by the last 10 million years (we don't know that when we start the experiment). We have some semi-discrete areas (Africa, Europe, East Asia, S. Asia, SE Asia, Australia, the Americas). We also have, in the late 19th century, the work of Charles Darwin, which specifies that morphologically similar species likely have common ancestry going backwards in time; these evolve. And the geology points very roughly to the fact that the surface of the Earth has changed its weathering exposures over time (more or less 20th-century work). Before the study of radioactivity in the early 20th century, precise dating was more or less non-existent. Each of these drops (finds) represents a study; it begins with the discovery and then proceeds over time, and in this context we can analyze whether and how science corrected errors made at earlier dates. There are entirely too many finds to deal with all the errors. So this is the backdrop for the paleontology of humans; let's take a look at the raindrops. Now, we knew what human skeletal morphology was: the British had a love of raiding graveyards and stealing bodies for various and sundry purposes, and these could be compared with remains in the catacombs elsewhere in Europe. They also loved snatching people (e.g. people of, to them, bizarre morphologies living in S. Africa, treated as common curiosities in Europe). The year is 1833; at this point in time Homo sapiens (Linnaeus) is a species description with essentially no delineators. The first raindrop that hits is from SE Belgium and represents part of a calvaria with a configuration different from known morphologies. Without any other knowledge it was called "modern". Again, we have to remember that almost no paleontology was being conducted by Europeans in any part of the world at this point, because prior to Darwin it was just assumed that such material would have been destroyed by God prior to the creation of Adam and Eve (a dating could never be made; that was considered by all to be heretical).
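A minimal sketch of the patio/Poisson point above (my own illustration; the ±50% target and the 1 m² blocks are the assumptions stated in the post):

```python
import math

# For a Poisson count N, the standard deviation is sqrt(N), so the relative
# uncertainty of an estimated rate is 1/sqrt(N).  This shows how many drops
# a 1 m^2 block needs before the rate is known to roughly +/-50%.

def relative_sigma(n_drops):
    """Relative (fractional) uncertainty of a Poisson rate estimate from n counts."""
    return 1.0 / math.sqrt(n_drops) if n_drops > 0 else float("inf")

if __name__ == "__main__":
    for n in (1, 2, 4, 10, 100):
        print(f"{n:3d} drops per block -> ~{relative_sigma(n):.0%} relative uncertainty")
    # One drop tells you almost nothing (+/-100%); about 4 drops per block
    # reaches the +/-50% target; precision tightens only as 1/sqrt(N).
```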
So basically this first raindrop, the Belgian calvaria, was ignored for religious reasons and for lack of a background. Mis-assumption one: god determines when or where species evolved. For the most part this specimen has been lost to science. https://en.wikipedia.org/wiki/Gibraltar_1 - a skull found at Forbes' Quarry in Gibraltar in 1848. It was never dated; the soils in the surrounding region have been dated to between 24,000 and 35,000 years ago, making the cave the latest point in Europe at which Neandertals are known to have survived, although that date has been contested in the literature. "Its discovery predates that of the original Neanderthal discovery but no one realised its importance at the time and it lay forgotten in a cupboard for years." "The original find was done in a time where the palaeontological dating was still in its infancy, and no stratigraphic information was supplied with the skull, making dating at best guesswork." - https://en.wikipedia.org/wiki/Gibraltar_1. Again, see below, particularly with regard to charcoal, which is known for binding gases and dissolved organic compounds: dating carbon older than 20,000 years should only be done when it is absolutely certain that the layer in which the charcoal was deposited has not been disturbed. Any disturbance after the fact could mean that the charcoal has been contaminated with more recent 14C, and it may not be possible to extract all of that 14C from the charcoal. https://en.wikipedia.org/wiki/Activated_carbon. Activated charcoal is not hard to produce: just produce charcoal and it is activated; don't do anything to it and it remains activated.
The second drop, 1856, Neander Valley in Germany, Feldhofer 1 as it is now designated. "Examination of the skeleton, namely the skull, revealed the individual belonged to the tribe of the Flat Heads, which still live in the American West and of which several skulls have been found in recent years on the upper Danube in Sigmaringen." - Johann Carl Fuhlrott. This was the original description of Feldhofer 1, and it is pretty revealing of the bias at the time. Later descriptions held that the material was of a pre-modern tribe that existed at the time of the cave bears, before modern humans arrived; however this went along with "...the immutability of species which is considered a law of nature by most researchers, has not yet been proven". In 1991, some 135 years later, the bones were radiocarbon dated to 39,900 ± 620 years; however, we have to question the confidence interval on that date, because at 40,000 years carbon-14 (T1/2 = 5,730 years) has gone through about 7 half-lives, and any bone that has been exposed to carbon dioxide during the modern age (i.e. after the Bikini Atoll tests) is expected to contain some contaminating carbon. The residual 14C is roughly 1/122 of the starting amount, and the amount of 14C present is a tiny fraction to begin with. The normal rate of 14C production is between 16,400 and 18,800 atoms per square meter of surface per second. If we take atmospheric pressure to be 101,300 Pa (N/m²), then 101,300 / 9.8 = 10,336 kg of air per square meter of column. https://en.wikipedia.org/wiki/File:Vostok_Petit_data.svg As we can see, CO2 varies with time, and 14C is made from nitrogen-14 by cosmic radiation, so the ratio of 14C to 12C also varies with time. The graph above was made after the Feldhofer 1 dating, but let's do the calculation anyway based on 250 ppmv. To do this we need to convert the 10,336 kg to moles of air; the molar mass of air is about 28.95 grams per mole.
Thus 10,336 kg/m² x 1,000 g/kg = 10,336,000 g/m², and 10,336,000 g / 28.95 g/mol ≈ 357,000 moles of air per square meter of column. Again, this is not the state of the gas at the ground; the column extends ~100 km up, so we don't really know (or care) what the average pressure or volume is. To get the moles of CO2 we multiply by 250/1,000,000, giving ≈ 89.3 moles of CO2 per square meter, so we know the amount of carbon per square meter. If ~17,000 atoms of 14C are produced per square meter per second, then the number produced over one half-life of 5,730 years is ~3 x 10^15 per column (2.96 x 10^15 to 3.39 x 10^15). Since these are produced at a steady rate, we can surmise that during that period only about a quarter of them have decayed, so the equilibrium level of 14C is roughly 4.4 x 10^15 to 5.05 x 10^15 atoms per one-square-meter air column. The air column 40,000 years ago, assuming the ice-core data are correct, had about 210 ppm of CO2 in it; we don't expect the total volume to have changed, so we use 79.28 x 6.022 x 10^23 = 4.774 x 10^25 carbon atoms per column, and therefore the 14C:(12C+13C) ratio in the atmosphere at the time was about 6.4 x 10^-11. The amount of 14C remaining after 40,000 years is 6.4 x 10^-11 x 0.5^(39,900/5,730) ≈ 5.1 x 10^-13 as a 14C:(12C+13C) ratio. About 28% of bone is collagen, with ~3,400 carbons per collagen molecule (~80,000 MW), which means per gram of bone there are ~2.1 x 10^18 molecules of collagen, and at 3,400 C per molecule that is ~7.1 x 10^21 atoms of carbon per gram. About 70% of bone is hydroxyapatite, of which only ~1.24% is carbon, thus ~0.86% of whole bone, which contributes ~4.3 x 10^20 atoms. In total, around 8 x 10^21 atoms of carbon are expected per gram of bone, so we now know how many 14C atoms to expect per gram: about 4 x 10^9. It does not take much added carbon, then, to throw off the dating by 1%. The sampling was done with "A few small cylinders 2.5 mm in diameter and 2 to 3 mm in length were removed from each bone with a crown drill" - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC130635/pdf/pq2002013342.pdf. I calculated this as ~14 mg of material. They do extract collagen, which is good, but some of the collagen remains part of the insoluble fraction, so ~3 x 10^9 atoms of 14C per gram would be the maximum; with a mass of 0.014 grams, we are talking about roughly 42,000,000 atoms of 14C counted. The sigma associated with this number of counts would be small, less than 0.1%, and we might feel comfortable with the dating. This might not seem too bad, but would not complete collagen purification (not just extraction) have been better? In other studies, when exposure-related carbon absorption has been noted, the dating of purified collagen has been found to give an earlier date than extracted collagen. Besides the Feldhofer 1 calvaria having been treated with a carbon-containing preservative, it was not adequately protected in a sealed (CO2-free) environment over its life. The other samples (retrieved in the 1990s) were recovered after "these specimens were thrown down an approximately 20-m rock face and subjected to breakage by subsequent quarrying activity while on the valley floor" - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC130635/pdf/pq2002013342.pdf. IOW, over the last 150 years they have been exposed to the elements, to infiltration of animal excrement, to pollution and soot from campfires, etc.
For such a very early date in the radiocarbon range (which, with perfect carbon, reaches only a few half-lives further, to roughly 60 kya) and given the small amount of sample, this could be a problem. We have to remember that 14C in the atmosphere spiked in the 1960s at twice the pre-1960s level and has only slowly fallen since. More than 99% of the original 14C has been lost from the sample, so it would take only a small addition of modern carbon, less than 0.05%, to markedly change the date (a short numerical sketch of this sensitivity follows this post). When we look at other work, such as the original work done by Bowler using charcoal (originally dated to 30 kya, later redated to 42 kya), and the work done on Oase 1, it is reasonable to conclude that for such early radiocarbon dating one has to do everything perfectly; you cannot guess at how purely ancient the sample is. While an mtDNA genome was extracted, the specimen lacked DNA of adequate quality to undergo full genome sequencing (which requires orders of magnitude more DNA). It is likely older than 40 ky but not greatly older, because in that case it is unlikely that any of the DNA would have been preserved; we could set an arbitrary upper bracket at 60 ky. Feldhofer 1 is an amazing fossil in one regard: despite poor to horrific treatment over the last 150 years it still managed to produce some credible data. But as other studies will reveal, being handled by modern, trained hands does not guarantee that good science will be done.
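A minimal sketch of the contamination sensitivity argued above (my own numbers-check, assuming the textbook decay law with T1/2 = 5,730 yr and treating "modern carbon" as having the full modern 14C/C ratio):

```python
import math

# How much does a small admixture of modern carbon shift an apparent
# radiocarbon age near 40,000 years?  Ratios are expressed as fractions
# of the modern 14C/C ratio, so "1.0" = modern carbon.

HALF_LIFE = 5730.0  # years, 14C

def apparent_age(fraction_modern):
    """Radiocarbon age implied by a measured 14C fraction (of modern)."""
    return -HALF_LIFE / math.log(2) * math.log(fraction_modern)

def contaminated_age(true_age, contamination):
    """Apparent age after mixing in a mass fraction of modern carbon."""
    true_fraction = 0.5 ** (true_age / HALF_LIFE)   # ~0.8% of modern at 40 kyr
    mixed = (1 - contamination) * true_fraction + contamination * 1.0
    return apparent_age(mixed)

if __name__ == "__main__":
    true_age = 40000.0
    for contamination in (0.0005, 0.001, 0.005, 0.01):
        shift = true_age - contaminated_age(true_age, contamination)
        print(f"{contamination:.2%} modern carbon -> date appears "
              f"~{shift:,.0f} years too young")
    # Even 0.05% modern carbon shifts a 40,000-year date by roughly 500 years,
    # comparable to the published +/-620 uncertainty discussed above.
```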
  22. Whoops. A simple GPS based If statement would have fixed that, lol.
  23. lol, talk about over-reactions. My bets are on a faulty gyroscope or a flight-guidance misprogramming.
  24. I don't know, it would make a great backdrop for some Mad Max x space genre of movie.
  25. You have to be careful: the matrix of lithium tritide and lithium deuteride that was considered one of the causes of the overpowered H-bomb test was a tightly guarded secret in 1965. Lithium is believed to have been part of that because it captured neutrons and plausibly became part of the power production.