Tullius

Everything posted by Tullius

  1. I would guess that the launch contract contains a clause that allows for a price reduction in case of massive delays due to problems on SpaceX's side. Although I don't think that Hispasat will get anything in case of a delay of just 1 day.
  2. Poland is a member of ESA. Since you are from Poland, your best bet is therefore to apply for the ESA astronaut corps. This website should provide a few more details on the application criteria: http://www.esa.int/Our_Activities/Human_Spaceflight/Astronauts/Frequently_asked_questions_ESA_astronauts However, please note that Poland's contribution to ESA is relatively small, making it rather unlikely that ESA will recruit a Polish astronaut, especially considering that the last selection took place in 2008 and only resulted in the recruitment of 7 astronauts.
  3. The minimum number of uncrewed launches before the first crewed launch is zero, as NASA's only requirement is a less than 1 in 500 risk of LOCV. However, doing no tests of prototypes means that you need a lot of additional overengineering, error margins and paperwork to get the certification. But nobody wants to design the first rocket with a risk lower than 1 in 10,000, just to get to the 1 in 500 for its very first flight being crewed. So you are reduced to doing test launches that prove that the rocket remains well within its specifications to get certification. Fewer test launches mean more overengineering and more paperwork, and vice versa. Since the cost per flight for SLS is ridiculously high, NASA obviously wants to do as few test launches as possible (but not zero, as that worked so well for the Space Shuttle). On the other hand, if Falcon 9 is anywhere near as cheap as it claims to be, doing a few more test launches might seem to be a worthy tradeoff for less overengineering and paperwork.
  4. It will cross the orbit of Mars, but won't enter orbit around Mars, as it has expended all its fuel. Most likely, we will lose communication with the Tesla in the next few days, as its electric power runs out. The test went well and the PR stunt was a complete success, but now it's time for SpaceX to move on.
  5. The correct title is Hereditary Grand Duke, as he will eventually follow his father as Grand Duke of Luxembourg, although Prince is not completely wrong. The satellite was designed and built on the sole initiative of the Luxembourgish government, officially as a commitment to NATO. SES was chosen as commercial partner, as it was founded in 1985 by the Luxembourgish government and 1/6th of its shares (with 1/3rd of the voting rights) are still owned, directly and indirectly, by the Luxembourgish state. So refusing this offer to collaborate on LuxGovSat wasn't really an option for SES, especially as one can expect that the state also took care of providing the necessary monetary incentives.
  6. Branch prediction is not a design flaw, because it is part of what makes our modern computers so fast. If it couldn't be used, it would cause a major speed loss to our modern CPUs, probably much larger and for more workloads than the up to 30% for the patches that mitigate Spectre. Unfortunately, I was unable to find just how large the loss would be; however, modern processors are able to predict the correct branch with well over 90% success rate. So Intel's claims can also be interpreted as them only wanting to introduce minor changes to branch prediction in future processors to maintain its speed gain, and instead use mitigation strategies in the OS and software to prevent branch prediction from being exploited. Of course, this means that Spectre will continue to haunt us for a long time, but at least we can keep our fast processors. Of course, Intel also puts these claims forward to prevent lawsuits and calm everyone down: even if they wanted to fix this problem in silicon, this is no new Pentium bug (https://en.wikipedia.org/wiki/Pentium_FDIV_bug) where only small changes were required; it would require us to rethink how our processors work.
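     To give a feel for those prediction rates, here is a toy sketch (my own illustration, not any real CPU's design) of the textbook two-bit saturating-counter predictor: regular branches such as loop exits are predicted correctly far more than 90% of the time, while an effectively random branch is the worst case.

     import random

     # Toy 2-bit saturating-counter branch predictor (a textbook scheme, not any
     # real CPU's hardware). Counter states 0-1 predict "not taken", 2-3 predict
     # "taken"; after each branch the counter moves one step toward the outcome.
     def simulate(outcomes):
         counter = 2                      # start weakly "taken"
         correct = 0
         for taken in outcomes:
             if (counter >= 2) == taken:  # did the prediction match reality?
                 correct += 1
             counter = min(3, counter + 1) if taken else max(0, counter - 1)
         return correct / len(outcomes)

     # A loop branch taken 99 times, then falling through once, repeated 100 times:
     loop_branch = ([True] * 99 + [False]) * 100
     print(f"loop-like branch: {simulate(loop_branch):.1%} predicted correctly")

     # An effectively random, data-dependent branch is the worst case:
     random.seed(0)
     random_branch = [random.random() < 0.5 for _ in range(10_000)]
     print(f"random branch:    {simulate(random_branch):.1%} predicted correctly")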
  7. Even with multiple insulation layers separated by vacuum, there will still be heat transfer (due to black-body radiation). And you would also have to deal with the electronics inside generating heat, which you would need to remove. Better insulation might extend the stay on the surface from a few hours to a few days, but you are still severely limited in time. The heat produced by the electronics needs active cooling, which could also compensate for the imperfect insulation, but then you are relying on the cooling to work flawlessly. Also, unless you were using vacuum tubes, you will need transistors for the RF receivers and transmitters. Hence, at some point you need heat-resistant transistors (or you just let some cables pass between the inner and outer insulation).
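     For scale, a rough sketch of just the radiative part of that heat load using the Stefan-Boltzmann law; the surface area, the effective emissivity of the insulation and the electronics' dissipation are made-up illustrative numbers, and conduction from the dense atmosphere would only add to the total.

     # Rough radiative heat-load estimate for a probe on the Venusian surface.
     # Area, emissivity and electronics power are assumed, illustrative values.
     SIGMA = 5.670e-8              # Stefan-Boltzmann constant, W / (m^2 K^4)

     T_surface = 737.0             # Venus surface temperature, K (~464 degC)
     T_inside  = 423.0             # electronics kept at 150 degC, in K
     area      = 2.0               # assumed outer surface area of the probe, m^2
     emissivity = 0.01             # assumed effective emissivity of the insulation

     # Net radiation leaking inward through the insulation (Stefan-Boltzmann law):
     q_leak = emissivity * SIGMA * area * (T_surface**4 - T_inside**4)
     q_electronics = 50.0          # assumed internal dissipation, W

     print(f"radiative leak through insulation: {q_leak:.0f} W")
     print(f"heat from the electronics:         {q_electronics:.0f} W")
     print(f"total heat to be pumped out:       {q_leak + q_electronics:.0f} W")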
  8. The research presented in the above video is specifically directed at developing electronic components that work even at the surface temperature of Venus, thereby removing the need for the insulation that the Russian probes of the 70s relied on. If we just want to keep using our normal electronics on Venus, we need to prevent them from heating up to more than 150 °C. That is simply impossible for any extended period of time in the Venusian hell, even with the most ridiculous amount of insulation.
  9. Using the idea pictured in the first post to give the Saturn V a 2g push: The Saturn V weighs nearly 3000 tons, which means we need more than 9000 tons of water. This is 9'000 m3 of water, which equals a cube of water with a side length of a bit more than 20 meters: still a huge cube, but not mind-bogglingly huge. And I forgot: if you want to give the Saturn V 50 m/s of "free speed", you need 62.5 meters of difference in height, i.e. half the Saturn V's height. So you have a giant water tank and a platform that moves half the height of the launch tower within 2.5 seconds. And all of this to save only 35 tons of fuel (out of 2100 tons for the first stage alone)?! Edit: Calculation error: 1 m^3 = 1 ton.
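     The same back-of-envelope numbers, redone in a few lines (with g rounded to 10 m/s^2 as above):

     # Back-of-envelope numbers for the water-counterweight launch assist.
     g = 10.0                       # m/s^2, rounded as in the post
     rocket_mass = 3000.0           # t, roughly a fully fuelled Saturn V
     water_mass = 3 * rocket_mass   # t, the "more than 9000 tons" of water
     water_volume = water_mass      # m^3, since 1 m^3 of water weighs 1 t
     cube_side = water_volume ** (1 / 3)

     target_speed = 50.0            # m/s of "free speed" at release
     accel = 2 * g                  # the 2g push
     drop_height = target_speed**2 / (2 * accel)   # from v^2 = 2*a*s
     push_time = target_speed / accel              # from v = a*t

     print(f"water: {water_volume:.0f} m^3, a cube about {cube_side:.0f} m on a side")
     print(f"platform travel: {drop_height:.1f} m in {push_time:.1f} s")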
  10. It is not a barge, but a ship. And unlike SpaceX's barges, it will be moving during the landings of New Glenn to provide a more stable landing spot. An added bonus is of course that it will be much faster to bring New Glenn back to port if it lands on a ship instead of a barge.
  11. The image was transmitted as a sequence of numbers and then converted to an actual image down on Earth, i.e. just like a modern camera, with the only difference being the actual camera type. And if you are dedicated enough, you can even draw the image from the sequence of numbers by hand: (Source: https://commons.wikimedia.org/wiki/File:First_TV_Image_of_Mars.jpg from the Wikipedia article https://en.wikipedia.org/wiki/Mariner_4) The above is the first image of Mars transmitted by Mariner 4, hand-drawn from the sequence of numbers, as the engineers didn't want to wait for the computers to process and print it. (If you zoom in on the original image, you can even read the sequence of numbers printed on strips.)
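      To illustrate the principle (with made-up pixel values, not the actual Mariner 4 data format), here is a tiny sketch that turns a sequence of brightness numbers back into a picture, using ASCII shading instead of a printer or coloured pencils:

      # An "image" as a plain sequence of numbers: 0 = dark, 9 = bright.
      # These values are made up; the real Mariner 4 data used its own format.
      values = [
          0, 0, 1, 2, 2, 1, 0, 0,
          0, 2, 4, 5, 5, 4, 2, 0,
          1, 4, 7, 9, 9, 7, 4, 1,
          1, 4, 7, 9, 9, 7, 4, 1,
          0, 2, 4, 5, 5, 4, 2, 0,
          0, 0, 1, 2, 2, 1, 0, 0,
      ]
      width = 8
      shades = " .:-=+*#%@"          # one ASCII shade per brightness level 0-9

      for row_start in range(0, len(values), width):
          row = values[row_start:row_start + width]
          print("".join(shades[v] for v in row))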
  12. Yes, that is probably the most fascinating thing about this whole endeavour. The time difference is certainly correct, as otherwise we would lose a large amount of the value of the parallel observation of the event. As an example, VIRGO detected the gravitational waves 22 milliseconds before LIGO Livingston, and 3 milliseconds later it was also picked up by LIGO Hanford, so we know exactly when it happened. The gamma-ray burst was also observed by two space telescopes (Fermi and INTEGRAL). And knowing the exact timing of 1.7 seconds between the gravitational waves and the gamma-ray burst is valuable in its own right. As to why, I found an article from the links on Wikipedia: That may be part of the explanation. Another might (just guessing) be that the 100 seconds of gravitational wave observation detail the seconds right before the explosion, while the gamma rays etc. are the result of the explosion. The whole thing is just mind-bogglingly huge: 10,000 Earth masses of gold and platinum are expected to have formed! The paper detailing the astronomical observations is said to have 4600 authors, about one third of all astronomers! And now, place your bets: What has formed out of this collision? A neutron star heavier than any known neutron star, or a black hole lighter than any known black hole?
  13. You might be onto something. I looked at the announcement of the LIGO press conference, which will be split in two parts. The first part will be with LIGO and Virgo scientists, but the scientists in the second part are not directly related to gravitational waves. Nial Tanvir and Edo Berger are connected to two discoveries with the Swift X-ray space telescope, one being the X-ray outburst GRB 090423, the oldest observed cosmic event (https://en.wikipedia.org/wiki/GRB_090423), and the other being a supernova whose observation would allow future observations of supernovae by gravitational waves (https://en.wikipedia.org/wiki/SN_2008D). But this whole thing doesn't end here: GRB 090423 was verified by a telescope of the ESO. So, we are still at wild guesses, but these press conferences might indeed be related.
  14. Transmitting the data 3 to 10 times is probably the worst possible error correction algorithm, as it adds a lot of additional data for a very small gain in transmission reliability. If there is a 2% chance of a transmitted bit being received wrong, the system I described above with 2 additional error correction bits for 8 bits of data has a 99.9% chance of correctly transmitting the 8 bits of data, i.e. there won't be many errors. Without error correction, the chance of correctly re-establishing the data would only be 85.1%, i.e. a rather high chance of error. And if you somehow could tell that the 8 bits of data were badly received, you would need an average of 1.175 transmissions to get a correct one, i.e. you need to send an average of 9.4 bits. So 9.4 bits with no error risk vs. 10 bits with a small risk doesn't sound that bad? But we haven't yet established how many bits are needed to decide whether the received data is correct, and with error correction we don't need to add an additional 10 or 40 minutes to get our data through. With CD-grade error correction, even a 10% chance of flipping each bit can be recovered from with 99.98% probability.
      Retransmissions can still be useful for very precious data, if you notice through a basic error check that, despite a very low chance, an error still crept in. But in practice, you don't want to rely too much on them, as retransmitting costs a lot. If you improve the signal-to-noise ratio, you can either reduce the amount of error correction you do or switch to a faster connection speed (which increases the errors in the signal). The task is now to find the right compromise. In that respect, both your wifi and Curiosity use the same tricks to optimise the speed while retaining the least tolerable amount of errors.
      The only case where it actually becomes useful to retransmit the data is when the signal-to-noise ratio is affected by something like the solar wind: instead of always using the more secure transmission method, which won't drop out even when a solar flare occurs, you use the faster method and just accept that during solar flares nothing useful is transmitted and you have to retry later on. Since solar flares are rare, you might actually gain bandwidth with this technique, which is good as long as there is no time-critical data. And in this example of non-time-critical data with occasional solar flares, your idea of relay satellites might actually become useful, provided that the transmission through the relays is actually of higher bandwidth than the one through the 70m antennas of the DSN. Having the relays buffer the data in case some of it needs to be retransmitted is then obviously a nice bonus (by the way, NASA already tested an alternative internet protocol with the nodes buffering the data for retransmissions during an experiment on the ISS).
      Curiosity, as well as Opportunity, only rarely communicates directly with Earth, but rather goes through the Mars orbiters, as they have the more powerful transmitters and can share their time on the DSN. Sharing capacities has obvious advantages, but it doesn't need interplanetary relays, only relays on the surface or in orbit of Mars. Also, the advantage of a large and powerful dish on Mars or in Mars orbit is that it can communicate directly with Earth, which beats the transmission through multiple relays in orbit between Earth and Mars in signal round-trip time, as during opposition between Earth and Mars it can communicate in a direct line instead of a half-circle.
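      The probabilities quoted above, computed explicitly; the only assumption carried over from the earlier post is that the 10-bit code survives up to two flipped bits.

      # 2% chance of each transmitted bit being received wrong, 8 data bits.
      from math import comb

      p = 0.02

      # No error correction: all 8 bits must arrive intact.
      p_plain = (1 - p) ** 8                         # ~85.1%
      transmissions = 1 / p_plain                    # expected sends until one is clean

      # 10-bit code, assumed (as above) to survive up to 2 flipped bits:
      p_coded = sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(3))

      print(f"plain 8 bits arrive intact:       {p_plain:.1%}")
      print(f"expected transmissions needed:    {transmissions:.3f} (= {8 * transmissions:.1f} bits)")
      print(f"10-bit code, up to 2 errors okay: {p_coded:.2%}")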
  15. Usually, you use error correction algorithms to account for errors, so that, despite some errors, the correct data can be re-established. For example, you can use an algorithm that encodes each 8-bit piece of data in 10 bits in such a way that even two transmission errors in those 10 bits don't affect data integrity, and therefore won't cause a retransmission. wumpus's formula now tells you how high the theoretical limit for the useful data rate is, given the channel bandwidth and the signal-to-noise ratio. For our communication with Mars, we know the channel bandwidth exactly (as it is determined by our antenna setup), and we have a pretty good estimate of the worst expected signal-to-noise ratio. We can then use nice mathematical tricks to add additional bits to our data bits so that no data is lost in transmission (except in exceptional and extremely rare cases), and therefore we don't need to retransmit data. I just looked up how data is encoded on a CD as an example: each 8-bit block of data is encoded as 14 bits on the CD. This might seem extremely wasteful at first, as we are losing nearly half the capacity of the CD to error correction. However, since we added error correction to our CD, we can actually store three times as much data on it as we could write without error correction, giving us 50% more usable capacity. And this is without needing to go back to reread a faulty byte (by the way, how do you detect that the data contains errors?). So, there is no need for relay satellites to improve the time between transmission attempts, as error correction algorithms can already cover all errors. The only useful role of relay satellites is to improve the signal-to-noise ratio, which could potentially increase transmission speeds, as one could use higher frequencies and less error correction. However, the question that remains is: how many satellites do you need until you beat a 70m dish?
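      As a concrete textbook example of how extra bits let the receiver repair errors without any retransmission, here is the standard Hamming(7,4) code; it is not the 8-to-10-bit scheme or the CD encoding described above, just the simplest code that can fix any single flipped bit.

      # Minimal Hamming(7,4) example: 4 data bits get 3 parity bits so that any
      # single flipped bit can be located and repaired at the receiver.
      def encode(d1, d2, d3, d4):
          p1 = d1 ^ d2 ^ d4
          p2 = d1 ^ d3 ^ d4
          p3 = d2 ^ d3 ^ d4
          return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

      def decode(bits):
          b = [None] + bits                      # 1-indexed for readability
          s1 = b[1] ^ b[3] ^ b[5] ^ b[7]
          s2 = b[2] ^ b[3] ^ b[6] ^ b[7]
          s3 = b[4] ^ b[5] ^ b[6] ^ b[7]
          error_pos = s1 + 2 * s2 + 4 * s3       # 0 means "no error detected"
          if error_pos:
              b[error_pos] ^= 1                  # repair the flipped bit
          return [b[3], b[5], b[6], b[7]]        # the 4 data bits

      codeword = encode(1, 0, 1, 1)
      codeword[5] ^= 1                           # simulate one transmission error
      print(decode(codeword))                    # -> [1, 0, 1, 1], error repaired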
  16. Because safety. If the rocket and the LES each have only 97% reliability (which is quite low for a modern rocket), the crew has a chance of about 99.9% of surviving the launch, i.e. it is very, very safe. On the other hand, to achieve this survivability solely through the reliability of the rocket, the rocket needs to be 99.9% reliable, which is ridiculously high.
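      The survival arithmetic from the post in two lines: the crew is only lost if the rocket fails and the launch escape system then also fails.

      rocket_reliability = 0.97
      les_reliability = 0.97

      # Crew is lost only if the rocket fails AND the LES then also fails.
      p_loss = (1 - rocket_reliability) * (1 - les_reliability)
      print(f"loss of crew:  {p_loss:.2%}")        # 0.09%
      print(f"crew survival: {1 - p_loss:.2%}")    # 99.91%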
  17. It took me a bit of time to find the proof, but the German Wikipedia page gave the right clue: Fibonacci numbers satisfy the formula F_{m+n} = F_{n+1} F_m + F_n F_{m-1}, of which your formula is the special case m = n+1. The formula itself follows quite easily from the defining property F_n = F_{n-1} + F_{n-2} via induction on m.
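      Spelled out, the induction step on m (using the identity for m and m-1, and the recurrence twice) looks like this:

      % Induction step for F_{m+n} = F_{n+1} F_m + F_n F_{m-1}:
      \begin{align*}
      F_{(m+1)+n} &= F_{m+n} + F_{(m-1)+n} \\
                  &= \bigl(F_{n+1} F_m + F_n F_{m-1}\bigr)
                     + \bigl(F_{n+1} F_{m-1} + F_n F_{m-2}\bigr) \\
                  &= F_{n+1} (F_m + F_{m-1}) + F_n (F_{m-1} + F_{m-2}) \\
                  &= F_{n+1} F_{m+1} + F_n F_m .
      \end{align*}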
  18. You can probably design around the political problems; after all, we have launched nearly 10 kg of plutonium-238 on New Horizons, and even more on Cassini. You only need to fulfil two conditions: prove that it is the only sensible propulsion system for this type of mission, and make it as safe as possible. On the Apollo missions, the RTG for the science package was transported in a casing capable of surviving reentry, preventing the spreading of the plutonium in the atmosphere should the mission fail and the lunar module reenter (as happened on Apollo 13). If you observe these restrictions, it is probably doable to launch an NTR into space. On the other hand, you probably want to do tests on Earth, which could produce quite a significant amount of radioactive waste, which in turn could increase program costs and political opposition significantly.
  19. The vertical stabilizer is also the non-deflecting part of the tail fin, which prevents the plane from side-slipping. The shorter the plane, the larger the vertical stabilizer has to be to keep the plane stable (and the Space Shuttle isn't very long; compare it for example to a Boeing 747SP). The part you are struggling with is the rudder, which gives yaw control. One part of the answer might be that in reality you have much more delicate control over how much you want to deflect it. Maybe it also needed extra-large control surfaces for control in the thinner parts of the atmosphere (the other control surfaces on the shuttle aren't small either). In the case of the shuttle, the large size of the rudder (the moving part of the vertical stabilizer) might also have to do with the fact that the shuttle's rudder was split in two parts, allowing it to deflect to both sides simultaneously to act as a speed brake (this is actually modelled in KSP, if you deploy its control surface). As to why the stock Dynawing has this peculiar arrangement with the vertical stabilizers at the wing tips, I can only speculate as a player having built my own shuttle replica in KSP: if you use the Space Shuttle's rudder during launch to control yaw, it generates an enormous amount of roll, as the center of mass is sitting very far away inside the external tank. On the stock Dynawing, on the other hand, the rudders are sitting much closer to the center of mass and thereby generate much less roll during launch, and none during the glide home, making it much simpler to control, as using yaw has no negative side effect on roll.
  20. Because EUS isn't ready yet and would cause significant delays to EM-1. On the other hand, ICPS allows NASA to test the lower stages and Orion in a similar fashion as they will be used in EM-2. Without ICPS, EM-1 could probably only be launched in a similar timeframe as EM-2 is planned to happen, i.e. in 2023 instead of 2019. Sure, ICPS costs money, but it's derived from Delta IV's DCSS, which should reduce its costs. And 33 months for altering the VAB is a short time compared to the 4 years between EM-1 and EM-2.
  21. A quick Google search provides the answer: And the article itself contains even more interesting details (absolutely worth a read, even if you are not a rocket scientist).
  22. Exactly that: if you have two sides, or rather the ratio of their lengths, you can get the angle with arctan, in the same way as you can get the ratio of these sides by taking tan of the angle. arctan is the inverse function of tan, i.e. arctan = tan^-1. What does inverse function mean? If I have a function f mapping x to y, then f^-1 maps y to x (provided that no more than one x is mapped to y). What does this mean for tan? tan maps angles in (-pi/2, pi/2), or (-90°, 90°) depending on your notation of angles, to values in the real numbers, and arctan maps real numbers to angles in (-pi/2, pi/2), or (-90°, 90°). For example, tan(pi/4) = tan(45°) = 1, and arctan(1) = pi/4 = 45°. Or take a look at a graph of tan: tan maps values on the horizontal axis (usually called the x-axis) to values on the vertical axis (usually called the y-axis); arctan does the inverse: it maps values on the vertical axis to values on the horizontal axis.
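      A quick numerical check of the pi/4 example, plus the two-sides case (the side lengths 3 and 4 are just made-up values):

      import math

      # tan and arctan are inverses of each other on (-pi/2, pi/2).
      print(math.tan(math.radians(45)))            # -> 1.0 (up to rounding)
      print(math.degrees(math.atan(1.0)))          # -> 45.0

      # Getting an angle from two sides of a right triangle (made-up lengths):
      opposite, adjacent = 3.0, 4.0
      print(math.degrees(math.atan(opposite / adjacent)))   # ~36.87 degrees
      print(math.degrees(math.atan2(opposite, adjacent)))   # same, but safe if adjacent = 0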
  23. It would mean starting from zero, or accepting Falcon Heavy and New Glenn as alternatives. And while the former isn't really interesting for the Senate to fund, since they would see the results even later, the latter two are less capable rockets than SLS: while one would quickly see some results, they would have difficulties even putting a space station module into lunar orbit. And if you start anew with private contractors, all development work needs to be paid for 2 or 3 times, since you want competition. Also, either NASA lets 2 or 3 companies build the rocket entirely on their own, which would mean that NASA has little influence on the design and you have to find as many companies willing to do it, or NASA does the development itself but buys different parts from different companies, which would reduce the efficiency gains. In short, cancelling SLS now would not just mean a huge blow to NASA's manned space exploration, but also leave NASA with something that might not necessarily be better. It might be necessary to redesign some parts of SLS, like maybe finding a private contractor to develop new, cheaper engines to be built in NASA's facilities, but just cancelling SLS and putting the whole work in the hands of private companies won't necessarily improve anything. On the other hand, if NASA finds out in 3-5 years that SLS is indeed a dead end, there is a good chance that SpaceX and Blue Origin will have developed by that point an acceptable replacement, which NASA could buy directly without any of the lengthy procurement procedures, since there are only 1 or 2 viable possibilities.
  24. sin and tan are the trigonometric functions sine and tangent (https://en.wikipedia.org/wiki/Trigonometric_functions). arctan is the inverse function of tan (https://en.wikipedia.org/wiki/Inverse_trigonometric_functions).
  25. I think that SpaceX just wants, for the time being, to save money on the development of a specialised 2nd stage for Falcon Heavy. The Falcon 9 2nd stage is good enough for the first flights of Falcon Heavy and provides it with a meaningful payload capacity, even if it means that recovering the core stage will cost a lot more fuel and thereby a lot in terms of payload fraction. And let's not forget that SpaceX wants to recover 2nd stages, so they are planning quite some development in terms of Falcon 2nd stages, unlike the first stages, where most of the development already seems to have happened. After all, Falcon wouldn't be the first rocket that got a new second stage at some stage in its life cycle.