Everything posted by PB666

  1. Experiment #1. In a pure vacuum the exhaust vectors must lie (in excess) in a hemisphere:
premise 1 - whose radius is determined by the hν of the resonator, and whose upper limit is determined by the inverse of the atomic weight of the potentially ejected ions;*
premise 2 - either ejected as ions (plasma), which emit light and can be detected in a dark chamber;
premise 3 - or carrying a charge that can be detected as voltage blips on a strike plate.
Therefore placing a scintillation detector or a charged-particle detector behind the device, and comparatively graphing the force of acceleration against voltage spikes or EM pulses, would elucidate the 'ablation'.
*Having worked on coating materials with charged ions for electron microscopy, I can say this is not simple: the problem is that almost everything gets coated with a patina of particles that, when properly influenced, come off and can go anywhere. There may be a sufficient amount of gunk on the metal device to explain the acceleration, so you will have to set that bound.
In response to the rest of your article: if this is working as the authors state, then momentum is being transferred 'free' to objects in local or universal space. I do not believe (I am absolutely certain) that their device is efficient enough to be a perpetual motion machine; after all, they are putting in enough power to blow up caps and only getting ~10⁻⁶ out, so the argument is starting to sound like a red herring. There are many authors now, and what they say is happening is irrelevant, since they have not done any experiments to rule out all the possibilities. All they are saying is that force is generated and that it is not an artifact of the electronics that generate the resonance. Assuming Experiment #1 is done properly and the ablation problem is excluded, the local-space issue can be tested with a carefully designed vacuum chamber: a hemispheric strike plate placed around the 'accelerator', with force sensors connected to the chamber itself.
The universal-space issue can be tested by flying the device in space; without local-space interactions, the device's output should not saturate. If it does not, then with proper design you could create something approximating perpetual motion (although heat production will still probably disallow it).
- - - Updated - - -
Good analogy, but the problem is that the magnetic field lines already exist in space _before_ the realization of force, and we know this is just another face of electromagnetism. So basically the EM field is already present and you are simply engaging it momentarily to transfer momentum. We have to assume that the researchers have already ruled out that this is happening (although from the comments of K2 this is not clear).
- - - Updated - - -
Except that Cannae is claimed to operate a magnitude more efficiently than a photon drive. E² = p²c² + m²c⁴. Since a photon has no mass and hence no m²c⁴ term, either they have a means of converting photons of one energy into 10 times as many at a lower energy without losing momentum, which is impossible (p = ħk = h/λ, meaning you cannot convert photons to something with 10× less energy without affecting momentum), or there is a complement of reaction mass also being accelerated somewhere.
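As a back-of-the-envelope check on the photon-drive comparison above: an ideal photon drive's thrust is F = P/c, so any device claiming much more thrust per watt needs some other momentum sink. A minimal sketch, with illustrative numbers only:

```python
# Thrust of an ideal photon drive, F = P / c.
# Illustrative only; the specific power level is a made-up example.

C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w: float) -> float:
    """Thrust in newtons for an ideal photon drive radiating power_w watts."""
    return power_w / C

# A 1 kW beam gives only about 3.3 micronewtons of thrust:
f = photon_thrust(1000.0)
print(f"{f * 1e6:.2f} uN")
```

Anything an order of magnitude above this per watt implies reaction mass (or something else) is carrying momentum.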
  2. I think it's more in the realm of testing many different things in very different ways to see if they can get a handle on where (or where not) the thrust is coming from. They need empirical data; all the theories in the world ain't going to help you unless you can start ruling some possibilities out.
  3. A lesson learned in graduate school: never assume scalability is infinite. The friction between the tire and the road gives far more efficient energy conversion than you might perceive could happen by scaling up a transistorized EM drive 1,000- or 1,000,000-fold. And you make the basic assumption that the scaled-up transistors would not interfere with each other.
- - - Updated - - -
The only thing that makes a difference is to begin altering the test platform in serious ways and observing how those changes affect the output. Neither does the opinion of a serious scientist matter, unless that opinion goes into a more insightful design or better experimental variation, nor the layman's, unless he is willing to create his own random variants and, through a process of hard knocks, come up with new data on the matter. It's not an issue of who's right; it's only a matter of who puts their money where their mouth is and produces. Here is something I read today. http://www.news.ucsb.edu/2015/015808/physics-space-and-time
  4. It is. A creationist has the conclusion of the argument before he/she starts. I have never said it works one way or the other; I said the manner in which it is currently being tested is fruitless for future utility. Nor have I said the standard model needs to be thrown out or summarily replaced; this seldom happens in science, but old ideas are modified to fit new understandings. That, however, requires scientists to be open to new understandings. It is perfectly rational to question the stringency of held scientific beliefs, particularly the way they explain things at the extremes. Your argument is that once a consensus has determined a particular thing to be true, then it cannot be refuted. This, in science, is called cementing a theory into a dogma. You can choose that path on your own, but don't insist that others, particularly those who have spent decades observing science and who have studied the history of scientific fields, follow your lead. Truth lies at the intersection of valid perspectives; the more perspectives you have on the truth (that is, a well-defined reality), the more limited the confidence interval is. The Cannae drive creates a perspective on momentum, but the validity of that perspective is in question, not falsified. Thus the perspective itself needs to be defined, and that cannot be done by repeating the same experiment ad nauseam. <----- where is the moderator? <----- Moderator. I understand it, I just don't accept the results of the first experiment. It's just one of many examples where scientists are publicly dubious about something. Another example: dark matter interactions. It's matter, therefore it has inertia; it must have gained inertia somehow, and yet it does not interact. How exactly do we know that dark matter cannot interact? How can we be sure that for the briefest moment the Cannae drive transferred inertia to dark matter, and the dark matter then quickly transferred the inertia elsewhere?
This is not a situation where I am trying to create a scenario and say 'look, it can be true'; the fact of the matter is that thousands of scenarios might explain how it works, including scenarios that fall within the standard explanation. But unlike others here, I am not trying to lead the argument with a foregone conclusion in order to negatively influence the direction of future experiments, because if I were to do that, it would be wrong. The only point I am making is that if you repeat the same experiment over and over again and get the same result, further repetition is not science. Science is that which expands the boundaries of science. As I laid out clearly in the philosophical section, the self-defining statistical parameters will come from testing under a range of conditions, potentially extreme conditions, that not only can find the happy center but also define distributions and ranges. Without such experimentation one cannot be confident in any explanation. Not 30 minutes ago I saw two scientists on Nova say just that about dark energy: 'Our current physics has no explanation for this.' They were the ones who discovered it, so I would take their authority on this over yours. Yes, that risk is built into alpha and we accept it. But you won't get anywhere if you keep insisting that type II errors do not exist in science, or denying that science has a general problem in dealing with these types of errors. In the long term these problems are inevitably solved; you would probably not get any pre-20th-century scientist to accept quantum mechanics based on their observations of the Universe (except maybe the occasional alchemist who worked with mercury too long), even though there were tiny hints out there that it existed. If you look at the evolution of science, you easily see a process of acceptance, then a questioning of what has been accepted, creating a new acceptance/questioning cycle.
Newtonian physics was not destroyed; it was modified into a new form, and that new form was modified again, and that form may be modified again in the future. Each form has kept layers of consistency with the past form but added to the explanations at the boundaries. I look at it like this: getting out of a scientific rut often takes a new mind willing to take risks that the previous generation of thinkers was unwilling to take. These are the same minds that took on creationism and eventually brought it down. Take a look at the Hubble data: Hubble started out a cripple, but then it was repaired by adding new optics that corrected for the aberrations created by the primary reflector. However, when we look deeply into space, the hardest task Hubble had to do, how much of what we are seeing comes from the corrections, and how much is that of the light itself and the distortions of space-time? So even the accelerating-universe model may not be correct, or not to the degree (or maybe to a greater degree) currently proposed (or, for that matter, the timing). But the fact that we have dark matter as something that does not require Hubble to justify, and the possibility that the universe has changed its outward acceleration, mean that we must face the prospect that known physics is not dogma. The same can be true of the popular explanation of quantum entanglement. Does the standard model explain why the universe, with no obvious change in energetics in any other way (for example, a transition of one form of energy to dark energy), suddenly changed its acceleration? Nope. IOW there is an interrogative dance going on between all kinds of dynamics, and Cannae has now fallen into that dance. I don't believe there is any point in conversing with you on the matter; you seem to be driven toward dogma, and you like to use ad hominems to emphasize your point.
  5. I bring up dark matter and dark energy because, if the standard model is complete, neither of these should exist; second, your creationist argument is a red herring, simple as that, and you should apologize for using it. Science is that which expands the boundaries of science, not playing the ostrich; we don't hide from things we can't explain; we try to come up with a more variable set of experimental conditions that would give a variation of results that might explain them. You and K2 are the ones arguing that for the Cannae drive to function as a massless drive all of physics must be wrong; I do not. I contend that either physics works the way we think, or momentum transfers can occur over a range of possibilities not previously considered. But more important than that, I see something more obvious than you see: they repeat the experiment over and over again and get the same result. Where have we seen this before? Was it the slit experiment? Was it the attempt to measure light-speed differences in four directions? You have to begin significant alteration of the experimental conditions if you want to create a situation where you can plot one parameter versus a second parameter other than input voltage or amperage versus output thrust. The biggest problem I see is that thrust is detected as the difference between the device and the surface (rotational) reference frame. There are ways of getting around this, but as long as there is nearby structure, and the level of nearby structure is not accounted for, there are means of momentum transfer nearby. The logical solution is to build a far bigger vacuum chamber, but eventually it becomes wiser to fly the device in space than to create a very expensive chamber the size of a football stadium. The perfect earthbound setup for testing the device: you are at the South Pole.
You have neutralized the magnetic field of the earth exactly at the center of the cube you are about to build; you have built a vacuum chamber the size of China's Olympic stadium; you have a monofilament line hanging dead center from the evacuated cube. You have solar panels on the sides of the device, which you feed with light from both sides, ideally balanced, with two very thin stabilizing lines coming off the sides toward the ceiling, and a small battery on the device. The device is sealed hermetically with a non-interacting and non-volatile material, and you pulse it at the harmonic frequency of the pendulum that is created. Estimated cost: 20 billion dollars. Risk to human life: high.
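For a sense of the pulse rate involved, the pendulum's resonant frequency follows from the small-angle period T = 2π√(L/g). A minimal sketch; the cable length is an invented illustrative value, not part of the proposal:

```python
# Small-angle pendulum period for the hanging-device test rig.
# The 90 m line length is an assumed, illustrative number.
import math

G = 9.81  # m/s^2 (g at the South Pole differs slightly; close enough here)

def pendulum_period(length_m: float) -> float:
    """Small-angle period T = 2*pi*sqrt(L/g), in seconds."""
    return 2.0 * math.pi * math.sqrt(length_m / G)

T = pendulum_period(90.0)
print(f"period ~{T:.1f} s, so pulse the device at ~{1000.0 / T:.0f} mHz")
```

A swing period of tens of seconds means the drive pulses at millihertz rates to pump the oscillation resonantly.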
  6. I contest this; there are a lot of weaknesses in modern physics.
-The nature of dark matter and dark energy remains unknown.
-Quantum entanglement, the idea that you can alter the polarization of photon X and immediately change the polarization of its pair product elsewhere, is a contested idea.
-Where neutrinos acquire their mass (and their momentum) and how they are able to flip states.
And I can go on. The black swan theory basically argues that statistical systems often assume the confidence of a much broader system when the boundaries of those systems have never been properly challenged. As scientists we often test the central regions of a distribution and assume the proportions of the extreme edges. This is something that has been fortuitously tested over and over again in historical science and often found to be true. The fact that the drive is reproducibly creating a faux momentum gives us pause to ponder a new how, although it does not prove that the new how is true. Gravity and the first and second laws of thermodynamics remained true into the early 20th century, and god does not play dice with the Universe. . . Quantum physics gets a lot of leeway because you always have uncertainty at the smallest scales via Heisenberg uncertainty, BUT the full connotations of quantum physics have not been worked out, so there is wiggle room still left for unusual things to happen. To go back to an even more philosophical stance: in statistics there arises a point where one has to decide whether to reject the null hypothesis or accept the hypothesis. This is known as the threshold of type I error, or alpha. Alpha is typically set so that when you fall below it, the null (or standard) hypothesis is summarily rejected. The probability generated below alpha is the risk of rejecting the null hypothesis when it was in fact true. The other type of error, accepting a false hypothesis, is called type II error.
And this is where science has a particular amount of trouble. The hypothesis test is often dependent on the distribution, and if you go poring through the scientific literature (I've seen this many, many times in genetics, especially prior to 2005), the distribution is never characterized and is often assumed to be normal when it is not. You cannot assume a distribution until you start looking at the margins. So the reality is that if you sit on one side of a distribution and get a certain t-test value you reject the hypothesis, but if one does a Monte Carlo, all-possibilities-sorted analysis you accept, and on the other side the reverse is true. Bonferroni's correction and other correction modalities often assume independence between variables tested in parallel when it is known they often are not independent, and they grossly overcorrect. This is what Zar says, and when you read and think about this you realize why scientists are often stringent in their interpretations. Two groups are studying the same phenomenon; let's say the hypothesis is that X happens at rate Y, and they are both testing the boundaries. Group 1 tests the boundaries and finds an anomalous rate X2 but cannot reject the null hypothesis. Group 2 finds the same result and publishes. Reviewer 3 combines the data from the studies of both groups and rejects the null hypothesis. Group 1 and Group 2 did not do an adequate level of power testing under the specified conditions, nor did either group specify that their conclusions were marginal (something that has changed since 2005 in some of the more powerful statistical studies). And actually this phenomenon is observed quite commonly in the literature. This is no guarantee of the success or failure of a theoretical challenge; it specifically argues that before one can have confidence in one's conclusions, one must have increased the power of the argument at all points of the spectrum in which the argument might have validity.
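The two-groups-plus-reviewer scenario above can be sketched numerically. This is a toy one-sample z-test with invented numbers (known σ = 1, H0: μ = 0, two-sided α = 0.05, i.e. |z| > 1.96), purely to show how pooling pushes a marginal result over the threshold:

```python
# Toy illustration: two underpowered studies vs. their pooled data.
# All numbers are invented; sigma is assumed known for simplicity.
import math

def z_stat(sample_mean: float, n: int, mu0: float = 0.0, sigma: float = 1.0) -> float:
    """One-sample z statistic for a mean of n observations."""
    return (sample_mean - mu0) / (sigma / math.sqrt(n))

CRIT = 1.96  # two-sided 5% critical value

# Each group sees the same small effect but is underpowered on its own:
z_single = z_stat(0.25, n=50)    # ~1.77 -> cannot reject H0
z_pooled = z_stat(0.25, n=100)   # ~2.50 -> reject H0 once the data are combined
print(z_single > CRIT, z_pooled > CRIT)
```

Same effect size, same direction; only the sample size changed, which is exactly the power-testing point being made.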
  7. So you need 4 thrusters per side, mounted at 0, 90, 180, and 270 degrees relative to the direction of forward motion: two of the four thrusters to go forward, one thruster to turn to any bearing, another to stop. You also need a solar panel, I suggest top-mounted pointing forward, and a reasonable battery. Oh god, I just realized we are trying to build a NASA mission using KSP logic (face palms).
  8. Yah, but this one can generate electric power as well, which you need to keep fragile little humans alive for their long flight into never-never land.
  9. What he is saying is that if solar adds X and the RTG adds Y, and the rover needs X + 0.5Y to function, then when the RTG's power output drops by 50% the craft will cease to function. But the reality is that Curiosity has a bigger problem: its wheels are wearing out. It will probably always have enough power to function, but moving and doing new things may become difficult. Voyager, for instance, has either cut functions or had instruments simply die. As long as Voyager still has enough power to dial home it is considered functional, but after its experiments fail it will probably be abandoned.
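The power-budget arithmetic in that argument can be sketched with invented numbers (X, Y, and the degradation level are illustrative, not actual Curiosity figures):

```python
# Toy power budget: solar supplies X, the RTG supplies Y, and the
# rover needs X + 0.5*Y. Once RTG output decays below half of Y,
# total supply no longer covers demand. Numbers are illustrative.

def can_operate(solar_w: float, rtg_w: float, demand_w: float) -> bool:
    """True if total supply meets or exceeds demand."""
    return solar_w + rtg_w >= demand_w

X, Y = 100.0, 100.0        # assumed watts from solar and RTG
demand = X + 0.5 * Y       # 150 W needed to function

print(can_operate(X, Y, demand))        # fresh RTG: enough power
print(can_operate(X, 0.4 * Y, demand))  # RTG decayed below half: deficit
```

The margin is entirely in the RTG's decaying half; solar alone never covers the demand.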
  10. http://newsoffice.mit.edu/2015/small-modular-efficient-fusion-plant-0810 This may be somewhat hypey, but if the barium-copper superconducting magnets prove effective, then it is possible to come up with a more efficient magnet design. A smaller reactor means it could become the electric power source for something like a ...........Cannae drive for use in deep space. Heh-heh. Theoretically, all we need for interstellar travel is:
-A fusion reactor that actually works
-A Cannae drive that proves itself in space
-Humans that can live for 30,000 years
-And that only eat a few calories per year
-A place that is ready for them to go to
These things should be easily accomplished. But there are more practical applications. The previous designs meant that the reactors, if ever built, would necessarily be placed in large population centers due to the very high cost of building: you wouldn't want to waste that electricity on hundreds of miles of transmission, and they would be too costly to spread around to suburban centers or medium and smaller cities. The new design would allow a more dispersed distribution of reactors (remember that there is no effective waste from fusion power; the products of fusion are simply not dangerous enough to worry about). This means you can place the reactors closer to demand. I suspect that this reactor, claimed to be possible in 10 years, will probably have something working in 50, lol, by past precedent.
  11. No, if my guess is right, the physics won't 'behave' properly until you get it into space. Any and all further testing on the ground may be useless because there is simply too much interference. BTW, 20 million dollars is nothing if there is a 1% chance that it works; it would pay off 1000-fold. And what is the primary mission of NASA (other than spying on everyone)? How does the saying go: if we knew everything, it wouldn't be called science. Or let me put this another way: on earth it works, but who needs to push off of whatever (the unknown thing we have been handwaving about for the last 6 months) when wheels push off the ground (extremely effective) and props and jets translocate gas (very effective)? Simple: put a small device on another payload heading to geosynchronous orbit. At orbit the device is released and allowed to drift away. Then the device is turned on and you track it. It doesn't have to be big; all it needs is a radio transmitter with a GPS on it.
  12. No evidence. They look out, they see a universe that is younger and moving away in every direction. That's it: no center, no edge. The universe has apparently accelerated in its expansion; this is attributed to dark energy, but we don't know what it is.
  13. Let me simplify the argument for you. It's called dynamic equilibrium, and you have to recognize that carbon dioxide increases the complexity of sub-visible radiation absorption and thus changes the equilibrium, simple as that. Imagine you are in a room and there is a door. In this system you can go in many directions, but as soon as you head in the direction of the door, inevitably you go outside. That is the fate of vibrational energy on the surface of the moon: once it reaches an IR frequency directed to space, it's gone. Now imagine the same room, but in the path out of the door are N circus hoops you need to jump through, and each time you hesitate a little before jumping the next hoop. Once you start heading in the direction of the door you will go out, but stuff will interact; you will slow down; you might change direction and momentum. Vibrational energy here on the earth can vibrate air; it can induce state change (water to vapor); it can then radiate, be recaptured (many gases with many absorption lines), and so on. A cyclone, for instance, takes vibrational energy in water; the wind and turbulence help get the energy into vapor; it travels up close to the eye of the cyclone; at the top it releases its radiation into space and the thin air above, and the resulting water droplets that form fall back to earth. The latent energy in the water drives the engine. But the energy is trapped in the water because of suppressed evaporation and radiation rates at the surface (water, being semi-transparent, absorbs light more deeply but is neither fluorescent nor phosphorescent), so the cyclone is another very complex way for energy to get out of the system. The radiation that comes from the sun is both visible and UV, but once it hits earth and is not reflected, it is converted into vibrational energy and eventually re-emitted as microwave and infrared radiation.
So any gas added to the air that can absorb radiation, heat up, and release it at a lower frequency, filling in a gap not already covered by N2, O2, H2O, and argon, will have the effect of raising air temperature. Eventually the radiation reaches the upper atmosphere, with gases at −70 °C or so, where a lot of radiation occurs but at a much lower frequency than from the rocks here on earth, and the radiation has to trickle up through a waterfall of obstacles to get out. It keeps the ground warm at night and gives the ground someplace to send its wobbles in the morning. Don't trouble yourself too much with global warming; we learned yesterday that in 100 billion years there will be no radiation left in the Universe for carbon dioxide to trap. Since we will all be dead by then anyway (runaway greenhouse caused by the sun going red giant), there is no need to fret about global warming. Have I caught the intent of your post wrong?
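The "room with N hoops" analogy can be sketched as a toy random walk: a photon starts at the ground and bounces between N absorbing layers before escaping at the top. This is not climate physics, just an illustration (layer counts and trial counts are arbitrary) of how adding absorbers delays the escape of outgoing radiation:

```python
# Toy 'hoops' model: a 1-D random walk from the ground (position 0)
# up through n_layers absorbing layers; the walk ends when the photon
# escapes at the top. More layers -> longer mean escape time.
import random

def mean_escape_steps(n_layers: int, trials: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while pos < n_layers:
            pos += rng.choice((-1, 1))  # re-emitted up or down at each layer
            pos = max(pos, 0)           # the ground reflects it back up
            steps += 1
        total += steps
    return total / trials

print(mean_escape_steps(5), mean_escape_steps(10))
```

Doubling the number of layers roughly quadruples the mean escape time here, which is the sense in which a denser stack of absorption lines keeps energy in the system longer.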
  14. Hah, but still not curious enough to suggest it goes into space. It needs to be up!
  15. Antarctica, because if I am a super space-worthy alien, I can stop earth's rotation, turn Antarctica sun-facing, and respin the earth so that Greenland and Antarctica are now equatorial. I can then colonize the unoccupied lands with my own plants and animals (after a small amount of herbicide on the coast). This can then be used as a point of expansion to the other continents. Of course, for science all I need to do is collect the stuff that the massive tidal waves I created have dislocated, so I don't need to move in to do science. Prime directive: don't interfere with already-developed life; any contact is likely to be adverse to the contacted and will alter their evolutionary course. If I think about what is most efficient and manipulative for my own sake, taking 200,000 years of human evolution as an example, then it will not likely be advantageous for the sake of the biology investigated.
  16. IKR, I was just pondering what I would do with my time in 75,000,000,000 years. There is always the quantum instability of true vacuum space. Think about it: as the universe expands, intergalactic space eventually cools, the remnant hydrogen condenses into stars and new galaxies, which then die themselves, leaving the universe dotted with singularities; all the mass condenses into these, leaving nothing (because no light exits)....perfect vacuum. Then boom.
- - - Updated - - -
I suspect the effect is more likely that, close to the center of inflation, there is much older life we will never be able to communicate with, while at the pseudoboundary they have barely reached the stage that can produce rocky planets, due to time dilation effects. Again, we have to separate the two definitions of Universe. The one commonly used is the visible universe, which is a small fraction of a potentially limitless universe that may have properties we cannot see or will never be able to see. Note that our solar system has already had its ups and downs. When the sun was born it produced a brief period of more intense sunlight, but then its output declined and has since been steadily rising. I think the authors are trying to point out that galaxies are sort of fizzling out: the effect of merging galaxies and massive central black holes is to eject gas into intergalactic space, which is basically starving the galaxies of material for new star formation. As a consequence, many galaxies are loaded with red dwarfs and old stars, and their spectra have shifted to the red. But the gas that ended up in intergalactic space is in an equilibrium, because the x-ray jets are heating the gas, which keeps it moving; once gas stops being fed from other galaxies it will cool down and can condense to form new galaxies and stars. This could go on for a while, but it will eventually stop.
Our Milky Way is enjoying a sort of happy life as a young, vigorous adult; however, once it merges with Andromeda there will be a brief period of star formation, but then all the gas will get sucked into and shot out of the central vortices, leaving it a rickety old man.
  17. The sun as you know it won't be around then to view the spectacle; by then it will have thrown off its gas into space and become a smouldering, soft-glowing chunk of coal with no interior planets. The only plausible universal time would be a static point at the center of inflation that we can neither prove exists nor ever hope to see. Every object moved away faster than the speed of light during inflation and subsequently receded at close to the speed of light with expansion, so that now everything can only reference its own space-time. You can select any clock for any space and use it, but referencing other parts of the universe is virtually useless.
  18. In science terms, they have been partially scooped, it would seem. So now they have got to go back and double down. I don't think the blown-capacitor excuse will fly anymore; they need to use good electronics and increase the power.
  19. I once vacuumed out a dusty powersupply, I turned it back on and a capacitor immediately blew up.
  20. VASIMR is set to run on argon; it will be the most powerful ion drive out there when completed. The reason for argon comes down to the momentum equation: to get the most delta-v on a ship you want the most mass in the propellant, and argon lets you store more of that mass in a unit volume (weight density per volume). It is also the most controllable and least damaging ion: ions tear the crap out of accelerator grids, and the more that miss their mark, the more damage they do. (With electric fields and hard masses you can use a high-velocity rail gun.) But VASIMR, once it gets going, is going to supplant other types of ion drives. Unlike other drives it does not rely on plates; it stirs the plasma into a gyre and focuses the gyre using microwaves, so it basically does not have the grid-destruction problem. Just like the Cannae drive, it needs to get its legs in space.
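The delta-v trade-off behind the propellant-density argument is the Tsiolkovsky rocket equation, Δv = v_e ln(m0/m1). A minimal sketch; the exhaust velocity and tank masses are assumed illustrative values, not VASIMR specifications:

```python
# Tsiolkovsky rocket equation: dv = v_e * ln(m_full / m_empty).
# All numbers are illustrative assumptions, not real VASIMR figures.
import math

def delta_v(v_exhaust: float, m_full: float, m_empty: float) -> float:
    """Ideal delta-v in m/s for given exhaust velocity and mass ratio."""
    return v_exhaust * math.log(m_full / m_empty)

v_e = 50_000.0  # m/s, an assumed high-Isp electric-drive exhaust velocity

# Same dry mass and same tank volume; a denser propellant like argon
# simply packs more propellant mass, raising m_full and hence delta-v:
dv_light = delta_v(v_e, m_full=1200.0, m_empty=1000.0)
dv_dense = delta_v(v_e, m_full=1500.0, m_empty=1000.0)
print(dv_light, dv_dense)
```

For a fixed tank, the only lever the propellant choice pulls in this equation is m_full, which is why storable mass per unit volume matters.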
  21. Start out with some eye candy. This is KSC day 223, and essentially it's pretty much career-mode end game, because I have just about everything except money for fuel to complete all science without having to further leave the Kerbin system. I've been playing career mode pretty much by the book, save 6 mods (a wheel, that stripy conical tank you see on the lander, a larger fusiform tank, an LV-909 replacement engine, a combo controller, and MechJeb). In hard career mode you have limited cash and the science reward is diminished. I set out to build a science station in which the lander cycled between two biomes on the planet and returned to the station. Section A was sent to the Mun, while section B was placed in orbit, originally at 30k, and somehow it ended up at 38k (and for some odd reason all my RCS fuel disappeared). Despite the result, things did not go as originally planned. In a previous normal career I did not earn anything on the Mun in the processing lab (science over time), but when I transported the lab to Minmus it did, and I thought, hmmm, they've got something going here. Anyway, I tried doing the science and it didn't work again, but this time I decided to split the landings into two separate events for each biome, going for the seismic and gravity experiments later. So I decided to burn one of my Mat. Sci. and goo experiments in the processing lab, thinking I would get a boost in transmitted science as in pre-1.0. This, I learned, was not the case; it instead loaded up the lab with data, and on the Mun it quickly filled and became difficult to operate. I learned from this:
-Processing experiments provides data (and apparently nothing else) for long-term processing at a rate of maybe 0.5 sci/(experiment*day) (discussed below)
-Using goo (30 data) and Mat. Sci. (75 data), 5 biomes fills up the lab, and it takes weeks to empty
-Processing on full data (as close to 500 as possible) takes a lot of power
-Battery capacity at night should be 10 times what you think it should be.
So then I added another mobile processing lab, C, after fortuitously rescuing 2 scientists. I learned from this:
-This lab processes data twice as fast; instead of providing a power boost to B, it sucks even more power
-D, not shown in the photo, is a huge fusiform fuel tank I hauled C up with and attached where section F now sits. Lacking the Gigantor, I built a stack of MGS and attached the el-cheepo 1x6 panels in a bilateral array on the MGS. In case you are wondering, the docking port of E was the nose of the launch.
-More power, but as soon as the station moved over the horizon the battery went to zero. At least I had 50% of the experiments running.
-Note that a balancing small docking port on the other side is missing (I did not quite have my MIR moment, but it was coming)
Managing to run a few missions, I finally did get the Gigantor; by that time, however, I mysteriously lost all the monoprop (which I had a great abundance of). It is one of many strange things that happened between leaving the game and coming back to it. I should point out that several of the MP tanks were set not to feed, so . . . . . . So I brought in 2 size-2 monoprop tanks on another lab (F) with 2 Gigantor panels and some additional docking ports (5); two of the side ports are occupied with housing (cough, choke) and a shuttle (a 1965 Volkswagen Beetle would be considerably safer in the same occupation). The panels are sort of cockeyed with respect to the side-mounted panels. This occurred for several reasons. The station core B was not equipped with nearly enough RCS ports (the complexity of the craft was not anticipated), and it became cumbersome to rotate the craft. Once I realized it simply wasn't worth the effort to straighten the craft out, I realized I had reincarnated the MIR space station (sort of like a Babylon IV moment).
I learned after attaching this:
-Both C and F have to be detached to add new science data. You can add new experiments to the modules via the store-data function, but only B in the collective will be able to add data at the end of the processing GUI in the review-science panel for the experiment.
-As soon as you detach, this large station's angle to the decoupled piece begins to change; any attempt to straighten the core station causes it to gyre.
-Since I did not have enough scientists, I put in an engineer instead. The lab processing works, but much more slowly, so I assume the rate of science depends on the skill of the scientist. In the future I will rotate scientists back to Kerbin so that I can increase their science skill.
All the while building this I was doing missions, and kept getting Mun survey missions. My lander is simply too expensive to operate for a landing mission or a soil-grab mission, although I must say I wish I had a low-mass capsule in addition to the lander pod, for survey data storage. I built G, a seat on a satellite combo core with 4 side-mounted engines. Now this was a risk, because I have heretofore never managed to get the seat to work in any previous version of the game. So even before I took off to the Mun, I tested the seat in Kerbin orbit. The command craft for the seat is attached to F at a 90° rotation. As soon as I got him to the Mun I realized the landing mission was for a seismic survey, but fortunately I managed to take a sample-collection mission. I learned:
-Put some low-weight science experiments (seismic, temperature, and gravity) on the low-weight seat lander for future landing missions... because after all, it has a port and can refuel!
-Always add a couple more small ports than you think you need; they come in handy with low-weight stuff.
-There is no capsule, so you can't do crew reports, unless you mod the seat.
-There is no capsule, so you can only keep one soil survey and one EVA report.
-(I eventually brought a satellite down to gather the seismic data, but the damn thing was not stable; it fell over and lost its engine. The combo core needs a reaction wheel for better stability, but given superb solar panels and a battery I simply rolled it to 4 destinations, so a rover was not needed.)
So finally I got the 3.38 million spasoes needed to fund the R&D upgrade and unlock the Gravioli detector. I had a mission to the Mun to put a mat. sci experiment on a satellite. So I attached the bidirectional port H on top, which carries both the seismic and gravity experiments, having just completed the first round of Munar surveys. MIR ain't so bad: it managed to get the two last science types before moving on to Minmus, which, had I not completed my first round of Minmus surveys, would have been a delay. Having completed the satellite mission I simply flipped the orbit and transferred to the MIR almost-well-planned science station. Then I simply undocked the lander, attached the adapter, detached the mat. sat, redocked the lander, and done. Now you may say, "But MIR got smashed by a refueling ship and almost had to be promptly abandoned..." What I learned:
-If you have a massive station, don't count on the engine sound to work; before I knew what had happened, the adapter/mat. sat core hit the station at 9 m/s, and fortunately did not blow up the damn station.
-See if your station has power before attempting a close approach; losing power during a docking can cause Kerbalesque bounces to happen.
Anyway, we are headed off to Minmus to harvest science. Some other tips:
-Cash limits fuel, which limits landing missions; the big science payback is getting experiment data back to Kerbin by ship. Cheap ways to get fuel to a refueling station pay off.
-When landing, pick spots from which you can take off and land in a nearby biome; if you can get up and down without surpassing 75 m/s relative to the surface, it's as good as a free landing.
-Missions should be mixed in between pure science missions; for example, you might want to wait a few days before landing in a dark zone, so throw five or six low-cost sat missions up.
-Some missions are just not worth it; for example, a satellite mission around the Sun is not going to get the R&D facility upgraded, because the mission takes too long.
-For rescue Kerbals, you can snatch them and use them for missions before you Kerbin them, but you need to get them into space and check what their profession is ("I" button in map view). It's a pain sometimes to get it to work.
-RCS is both good and evil: if the rotation is not going right, hit the "R" key, and if it gets worse, turn it off. I keep it off most of the time.
  22. Cosmic pinball machine: they get double-triple score and a free new OMG particle if you split a sentient being in half with a single muon. Good thing they are not very good shots at that distance.
- - - Updated - - -
Has anyone yet done a particle energy distribution of these things (plots) to see what energies they carry?
Edit: https://en.wikipedia.org/wiki/Greisen%E2%80%93Zatsepin%E2%80%93Kuzmin_limit
There appears to be an energy limit, but the sources of particles above the limit have not been identified.
  23. Talking about the early universe again, not the universe in front of your computer monitor. Our location is irrelevant; it could be any one of many. If your location were inside the blob, the events would be a lot closer and on all sides, and we would be asking why these particular events lie in a sphere. From a macroscale, looking at the universe, we are not a particularly unique density node; there are bigger ones (e.g. the Sun, Alpha Centauri, the black hole at the center of the galaxy, the black hole at the center of Andromeda, etc.), and none of these density nodes is particularly interesting. While there appear to be gas threads and webbing as far as we can see, there is nothing visible that is outstandingly different in our visible universe. However, if we are moving away from a central point of expansion, as everything must, then we could ask whether we see any differences. The answer has been that, looking in any direction, nothing previously reveals itself as more central or exterior: the universe is moving away from us in all directions, and there have been no particularly revealing differences in structure in any direction. The point of this study is that it might indicate larger structural changes than we have observed in the resolvable galaxies of our visible universe.
  24. First off, not to be nitpicky, but it was clear I was talking about macroscalar structure (note the title, "macrostructural," as you should interpret it, K2: we are not talking about superclusters or any scale of structure among previously observed galaxies) or major subdivisions of the visible universe; to harp on the fact that the universe is not uniform down to the level of a flea's fecal pellet is pedantic. I should also point out that previous efforts to show spin in the CMBR were later refuted, indicating that the CMBR is largely uniform. But that is light from the opaque period, and these GRBs are emitting younger light, which means they had formed macrostructures shortly after the universe formed. So there is an ancient uniformity to the universe even if it is not uniform now. (I should also point out that when I refer to the Universe I am not handwaving arguments about infinitely expanding space and all that blabber; a scientist would admit there are other viable contending theories out there, like the multiverse and a non-uniform oscillating universe. Since we are making assumptions about the universe based on the small percentage that we can see, even smaller if we factor in that the shell of visible light reaching us is basically a time slice through it, we have to accept that the data may not be representative of the whole.) Second, I consider the data flawed to begin with: the argument that inflation has made the universe completely uniform has the cart before the horse. Inflation was added to explain the uniformity that is seen, particularly in the CMBR, not the other way around; if there are macroscalar structures beyond the visible range, there is a problem. To understand what happens to matter after the CMBR you need complex modeling, and one key ingredient, symmetry, is missing (for example, what if antimatter exists within domains of the universe?).
So I posit to you: if these macrostructures exist, why would they form in only one direction relative to ourselves? If inflation had been completely uniform, we should see the pattern arranged around us and of no particular age. I think the one good obvious answer is that inflation was not perfect, and that material density in the centermost part was higher than on the outside. One good explanation is that EM energy concentrated at the center and diffused at the boundaries, or that conversion to matter was better there. It could be something even more insidious, like the conversion to matter versus dark energy being better at the center. According to recent models, when large galaxies merge, star formation falls off, and you see redshifting and basically a stellar old folks' home. This could explain the faintness of stars in these major clusters that renders them hard to see, so now they are places to look in the near-infrared spectrum. The paper's conclusion, albeit tentative, does not just say these come from one part of the sky; they are also about 7 billion light years from us, meaning they originated from the same area at about the same time. Given how many GRBs have occurred since intense observation began recently, this is not a post-hoc pick from all possible arrangements of many, so a 1/200 Bonferroni correction need not be applied to alpha. If this particular pattern had appeared for 2 years out of a 400-year study, that would be a different story. In any case, I think the four sigma used is about 1/31540, so they still may not have reached the criterion used for the Higgs. Anyway, it may represent new data that contradicts your hypothesis regarding the universe.
- - - Updated - - -
Single-publication alpha is 0.05; in many fields now (not 20 to 30 years ago) it needs to be corrected with a Bonferroni correction: 0.05/CF, depending on classification.
For example, in the field of genome genetics the alpha may be 10^-9, since they may use millions of markers at a time. There has now been a critique running in the literature of overcorrection: the correction may not take into consideration linked elements and such. Multiple studies have been done on marginal probabilities (between 0.05 and the corrected probability threshold) and found that a sizable number at the low end of the probabilities were later shown to have association when more data were provided, and many more were not independent of associated values. The problem in most statistics is that type II errors are neither known nor investigated unless they are highly suspected to begin with, and in many cases the power is not defined for rooting out sources of type II errors. I can give a clean example of this: if you are collecting energies of something from atom smashing, and you collect energies in bins of, say, 0.1% along the 100% scale, but by and large your assay for energy is accurate to, say, +/-3% and the particles themselves have extensive variation, then you might be over-binning. If you divide your particles into 1000 bins then your corrected threshold is 0.05/1000; the problem is that if there should optimally have been 20 bins, you have set the threshold too low, when it should have been 0.05/20, or 1/400. You probably won't make a type I error, but if you are looking for new phenomena, there is a good chance you will leave a lot on the table with type II errors. IOW, alpha needs to be set depending on the criteria of the type of investigation; a sigma of 5 might be fine in one set of research and overkill in another.
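Since the argument above leans on both sigma levels and Bonferroni-corrected alphas, here is a minimal Python sketch of the arithmetic (illustrative only, not from the original posts; it uses the numbers quoted above: 4 and 5 sigma, alpha = 0.05, and 1000 vs. 20 bins):

```python
import math

def one_sided_p(sigma: float) -> float:
    """One-sided Gaussian tail probability: P(Z > sigma) = 0.5 * erfc(sigma / sqrt(2))."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Per-test significance threshold after Bonferroni correction for n_tests comparisons."""
    return alpha / n_tests

# 4 sigma works out to roughly 1 in 31,600 one-sided, near the 1/31540
# quoted above; 5 sigma (the Higgs criterion) is roughly 1 in 3.5 million.
print(f"4 sigma: 1 in {1 / one_sided_p(4):,.0f}")
print(f"5 sigma: 1 in {1 / one_sided_p(5):,.0f}")

# The binning example: 1000 bins of 0.1% when the assay only resolves ~3%.
strict = bonferroni_alpha(0.05, 1000)    # 5e-05
reasonable = bonferroni_alpha(0.05, 20)  # 0.0025, i.e. 1/400

# A real effect with p = 1e-3 passes the 20-bin threshold but is thrown
# away under the over-binned threshold: a type II error.
p_signal = 1e-3
print(p_signal < reasonable)  # True
print(p_signal < strict)      # False
```

The point the sketch makes is that the correction factor, not the base alpha, does most of the work: pick the number of tests badly and you trade type I safety for type II blindness.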
  25. Yes, but I am contesting that this is only a hypothesis based on incomplete information. I am making the counter argument.