About Tullius

  • Rank
    Spacecraft Engineer
  1. SpaceX Discussion Thread

    I would guess that the launch contract contains a clause that allows for a price reduction in the case of massive delays due to problems on SpaceX's side, although I don't think that Hispasat will get anything for a delay of just one day.
  2. Poland is a member of ESA. Since you are from Poland, your best bet is therefore to apply to the ESA astronaut corps. This website should provide a few more details on the application criteria: http://www.esa.int/Our_Activities/Human_Spaceflight/Astronauts/Frequently_asked_questions_ESA_astronauts However, please note that Poland's contribution to ESA is relatively small, making it rather unlikely that ESA will recruit a Polish astronaut, especially considering that the last selection took place in 2008 and resulted in the recruitment of only 7 astronauts.
  3. SpaceX Discussion Thread

    The minimum number of uncrewed launches before the first crewed launch is zero, as NASA's only requirement is a less than 1-in-500 risk of LOCV. However, doing no tests of prototypes means that you need a lot of additional overengineering, error margins and paperwork to get the certification. But nobody wants to design the first rocket with a risk lower than 1 in 10,000, just to get the 1 in 500 for the very first flight of it being crewed. So you are reduced to doing test launches that prove that the rocket stays well within its specifications in order to get certification. Fewer test launches mean more overengineering and more paperwork, and vice versa. Since the cost per flight for SLS is ridiculously high, NASA obviously wants to do as few test launches as possible (but not zero, as that worked so well for the Space Shuttle). On the other hand, if Falcon 9 is anywhere near as cheap as SpaceX claims, doing a few more test launches might seem a worthwhile trade-off for less overengineering and paperwork. (A quick feel for what the 1-in-500 number means over a whole programme is sketched below.)
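    To give a feel for what that 1-in-500 requirement means over a whole programme, here is a small back-of-the-envelope sketch (my own illustration; the per-flight risk and the flight counts are assumptions, not NASA figures):

```python
# Back-of-the-envelope: cumulative loss-of-crew-and-vehicle (LOCV) risk,
# assuming a constant, independent per-flight risk. Illustrative numbers only.
per_flight_risk = 1 / 500  # the certification threshold mentioned above

for flights in (1, 10, 50, 100):
    # Probability of at least one LOCV event over `flights` missions
    cumulative = 1 - (1 - per_flight_risk) ** flights
    print(f"{flights:3d} flights -> {cumulative:.1%} chance of at least one LOCV")
```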
  4. SpaceX Discussion Thread

    It will cross the orbit of Mars, but it won't enter orbit around Mars, as it has expended all its fuel. Most likely, we will lose communication with the Tesla in the next few days, as its electric power runs out. The test went well and the PR stunt was a complete success, but now it's time for SpaceX to move on.
  5. SpaceX Discussion Thread

    The correct title is Hereditary Grand Duke, as he will eventually succeed his father as Grand Duke of Luxembourg, although Prince is not completely wrong. The satellite was designed and built on the sole initiative of the Luxembourgish government, officially as a commitment to NATO. SES was chosen as the commercial partner, as it was founded in 1985 by the Luxembourgish government, and 1/6th of its shares (with 1/3rd of the voting rights) are still owned, directly and indirectly, by the Luxembourgish state. So refusing this offer to collaborate on LuxGovSat wasn't really an option for SES, especially as one can expect that the state also took care of providing the necessary monetary incentives.
  6. Branch prediction is not a design flaw, because it is part of what makes our modern computers so fast. If it couldn't be used, it would cause a major speed loss for our modern CPUs, probably much larger and affecting far more workloads than the up to 30% attributed to the patches that mitigate Spectre. Unfortunately, I was unable to find just how large the loss would be, but modern processors are able to predict the correct branch with well over 90% success (a rough estimate of what mispredictions cost is sketched below). So Intel's claims can also be interpreted as them only wanting to introduce minor changes to branch prediction in future processors to maintain its speed gain, and instead relying on mitigation strategies in the OS and software to prevent branch prediction from being exploited. Of course, this means that Spectre will continue to haunt us for a long time, but at least we can keep our fast processors. Of course, Intel also puts these claims forward to prevent lawsuits and to calm everyone down. Even if they wanted to fix this problem in silicon, this is no new Pentium bug (https://en.wikipedia.org/wiki/Pentium_FDIV_bug) where only small changes were required; it would require us to rethink how our processors work.
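    To give a feeling for why branch prediction matters so much for speed, here is a toy model (the branch frequency, misprediction penalty and baseline cost are assumed, typical-order-of-magnitude values, not measurements of any specific CPU):

```python
# Toy model: how branch prediction accuracy affects average instruction cost.
# All numbers are illustrative assumptions, not measurements of a real CPU.
branch_fraction = 0.20       # assume roughly 1 in 5 instructions is a branch
mispredict_penalty = 15      # assumed pipeline-flush cost in cycles
base_cpi = 1.0               # assumed cycles per instruction with perfect prediction

for accuracy in (0.90, 0.95, 0.99):
    extra = branch_fraction * (1 - accuracy) * mispredict_penalty
    print(f"prediction accuracy {accuracy:.0%}: ~{base_cpi + extra:.2f} cycles/instruction "
          f"({extra / base_cpi:.0%} slower than the ideal case)")
```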
  7. Even with multiple insulation layers separated by vacuum, there will still be heat transfer (due to black-body radiation). And you would also have to deal with the electronics inside generating heat, which you would need to get rid of. Better insulation might extend the stay on the surface from a few hours to a few days, but you are still severely limited in time. The heat produced by the electronics needs active cooling, which could also compensate for the imperfect insulation, but then you are relying on the cooling to work flawlessly. Also, unless you are using vacuum tubes, you will need transistors for the RF receivers and transmitters. Hence, at some point you need heat-resistant transistors (or you just let some cables pass between the inner and outer insulation).
  8. The research presented in the above video is specifically directed at developing electronic components that work even at the surface temperature of Venus, thereby removing the need for the kind of insulation the Russian probes of the 70s relied on. If we just want to keep using our normal electronics on Venus, we need to prevent them from heating up to more than 150 °C. That is simply impossible for any extended period of time in the Venusian hell, even with the most ridiculous amount of insulation (a rough feel for the timescales is sketched below).
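    To put rough numbers on that, here is a simple heat-budget sketch (every value is an assumption picked for illustration, not data for any real probe; it just shows that even a modest heat leak eats through the thermal margin in hours to days):

```python
# Rough estimate: how long a probe interior stays below ~150 °C on the
# ~460 °C Venusian surface. All inputs are illustrative assumptions.
interior_mass = 50.0         # kg of electronics and internal structure (assumed)
specific_heat = 800.0        # J/(kg*K), rough average for the interior (assumed)
heat_leak = 50.0             # W leaking in through imperfect insulation (assumed)
internal_power = 20.0        # W dissipated by the electronics themselves (assumed)
delta_T = 150.0 - 30.0       # allowed interior temperature rise, in K

time_s = interior_mass * specific_heat * delta_T / (heat_leak + internal_power)
print(f"~{time_s / 3600:.0f} hours until the interior exceeds 150 °C")
```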
  9. Catapult start?

    Using the idea pictured in the first post to give the Saturn V a 2 g push: the Saturn V weighs nearly 3,000 tons, which means we need more than 9,000 tons of water. This is 9,000 m³ of water (not the 9,000,000 m³ I originally wrote), which equals a cube of water with a side length of about 20 meters (not 200), i.e. still a huge cube, but not mind-bogglingly huge. And I forgot: if you want to give the Saturn V 50 m/s of "free speed", you need 62.5 meters of difference in height, i.e. half the Saturn V's height. So you have a giant water tank twice the size of the launch tower and a platform that moves half the height of the launch tower within 2.5 seconds. And all of this to save only 35 tons of fuel (out of 2,100 tons for the first stage alone)?! The arithmetic is checked in the sketch below. Edit: Calculation error: 1 m³ of water = 1 ton.
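    For anyone who wants to check the arithmetic, here is the same calculation in a few lines (inputs are the post's own assumptions: 9,000 t of water, a 2 g push and 50 m/s of "free speed"):

```python
# Quick check of the numbers in the post above.
water_mass_t = 9_000                  # tons of water; 1 t of water = 1 m^3
cube_side = water_mass_t ** (1 / 3)   # side length of a cube holding 9,000 m^3

g = 10.0                              # m/s^2, rounded as in the post
accel = 2 * g                         # the "2 g push"
v_free = 50.0                         # m/s of "free speed"
height = v_free ** 2 / (2 * accel)    # travel needed to reach v_free
push_time = v_free / accel            # time the platform is accelerating

print(f"cube side  : {cube_side:.1f} m")   # ~20.8 m
print(f"drop height: {height:.1f} m")      # 62.5 m
print(f"push time  : {push_time:.1f} s")   # 2.5 s
```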
  10. It is not a barge, but a ship. And unlike SpaceX's barges, it will be moving during the landings of New Glenn to provide a more stable landing spot. An added bonus is of course that it will be much faster to bring New Glenn back to port if it lands on a ship instead of a barge.
  11. Camera system for Mariner 4

    The image was transmitted as a sequence of numbers and then converted to an actual image back on Earth, i.e. just like a modern camera, with the only difference being the actual camera type. And if you are dedicated enough, you can even draw the image from the sequence of numbers by hand: (Source: https://commons.wikimedia.org/wiki/File:First_TV_Image_of_Mars.jpg from the Wikipedia article https://en.wikipedia.org/wiki/Mariner_4) That above is the first image of Mars transmitted by Mariner 4, hand-drawn from the sequence of numbers, as the engineers didn't want to wait for the computers to process and print it. (If you zoom in on the original image, you can even read the sequence of numbers printed on the strips.) The sketch below shows the basic idea of turning a list of numbers into a picture.
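    Here is a tiny sketch of that idea: arrange the received numbers in rows and map each value to a brightness, and the picture appears. The values below are made up for illustration; they are not the actual Mariner 4 data.

```python
# A "sequence of numbers" becomes a picture once the values are laid out in
# rows and mapped to brightness. Made-up values, not the real Mariner 4 data.
pixels = [
    0, 5, 12, 20, 20, 12, 5, 0,
    5, 20, 40, 55, 55, 40, 20, 5,
    12, 40, 63, 63, 63, 63, 40, 12,
    5, 20, 40, 55, 55, 40, 20, 5,
    0, 5, 12, 20, 20, 12, 5, 0,
]
width = 8
shades = " .:-=+*#"  # the hand-drawn version used crayons; we use characters

for row in range(len(pixels) // width):
    line = pixels[row * width:(row + 1) * width]
    print("".join(shades[value * len(shades) // 64] for value in line))
```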
  12. Yes, that is probably the most fascinating thing about this whole endeavour. The time difference is certainly correct, as otherwise we would lose much of the value of the parallel observations of the event. As an example, VIRGO detected the gravitational waves 22 milliseconds before LIGO Livingston, and 3 milliseconds later they were also picked up by LIGO Hanford, so we know exactly when it happened. The gamma-ray burst was also observed by two space telescopes (Fermi and INTEGRAL). And knowing the exact timing of 1.7 seconds between the gravitational waves and the gamma-ray burst is valuable in its own right. As to why, I found an article from the links on Wikipedia: That may be part of the explanation. Another might be (just guessing) that the 100 seconds of gravitational wave observation detail the seconds right before the explosion, while the gamma rays etc. are the result of the explosion. The whole thing is just mind-bogglingly huge: 10,000 Earth masses of gold and platinum are expected to have formed! The paper detailing the astronomical observations is said to have 4,600 authors, about one third of all astronomers! And now, place your bets: what has formed out of this collision? A neutron star heavier than any known neutron star, or a black hole lighter than any known black hole?
  13. You might be onto something. I looked at the announcement of the LIGO press conference, which will be split in two parts. The first part will be with LIGO and Virgo scientists, but the scientists in the second part are not directly related to gravitational waves. Nial Tanvir and Edo Berger are connected to two discoveries with the Swift X-ray space telescope, one being the outburst GRB 090423, which is the oldest observed cosmic event (https://en.wikipedia.org/wiki/GRB_090423), and the other being a supernova whose observation would allow future observations of supernovae by gravitational waves (https://en.wikipedia.org/wiki/SN_2008D). And this whole thing doesn't end there: GRB 090423 was verified by a telescope of the ESO. So, we are still at the stage of wild guesses, but these press conferences might indeed be related.
  14. Aldrin Cycler Ships

    Transmitting the data 3 to 10 times is probably the worst possible error correction algorithm, as it adds a lot of additional data for a very small gain in transmission reliability. If there is a 2% chance of a transmitted bit being received wrong, the system I described above with 2 additional error correction bits for 8 bits of data has a 99.9% chance of correctly transmitting the 8 bits of data, i.e. there won't be many errors. Without error correction, the chance of correctly re-establishing the data would only be 85.1%, i.e. a rather high chance of error. And, if you somehow could tell that the 8 bits of data were badly received, you would need an average of 1.175 retransmissions to get a correct transmission, i.e. you need to send an average of 9.4 bits. So 9.4 bits with no error risk vs. 10 bits with a small risk doesn't sound that bad? But we haven't yet established how many bits are needed to decide whether the received data is correct, and with error correction we don't need to add an extra 10 or 40 minutes of round trips to get our data through. (The probabilities are easy to check, see the sketch below.) With CD-grade error correction, even a 10% chance of flipping each bit can be recovered with 99.98% probability.

    Retransmissions can still be useful for very precious data, if you notice through a basic error check that, despite a very low chance, an error still crept in. But in practice, you don't want to rely too much on them, as retransmitting costs a lot. If you improve the signal-to-noise ratio, you can either reduce the amount of error correction you do or switch to a faster connection speed (which increases the number of errors in the signal). The task is now to find the right compromise. In that respect, both your wifi and Curiosity use the same tricks to optimise the speed while keeping the error rate just within what is tolerable.

    The only case where it actually becomes useful to retransmit the data is when the signal-to-noise ratio is affected by something like solar wind: instead of always using the more secure transmission method, which won't drop out even when a solar flare occurs, you use the faster method and just accept that during solar flares nothing useful is transmitted and you have to retry later on. Since solar flares are rare, you might actually gain bandwidth with this technique, which is good as long as there is no time-critical data. And in the above example of non-time-critical data with occasional solar flares, your idea of relay satellites might actually become useful, provided that the transmission through the relays is actually of higher bandwidth than the one through the 70 m antennas of the DSN. Having the relays buffer the data in case some of it needs to be retransmitted is then obviously a nice bonus. (By the way, NASA already tested an alternative internet protocol with the nodes buffering the data for retransmissions during an experiment on the ISS.)

    Curiosity, like Opportunity, only rarely communicates directly with Earth; it mostly goes through the Mars orbiters, as they have more powerful transmitters and can share their time on the DSN. Sharing capacity has obvious advantages, but it doesn't need interplanetary relays, only relays on the surface of or in orbit around Mars. Also, the advantage of a large and powerful dish on Mars or in Mars orbit is that it can communicate directly with Earth, which beats transmission through multiple relays in orbits between Earth and Mars in signal round-trip time, as during opposition between Earth and Mars it can communicate in a direct line instead of along a half-circle.
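    For anyone who wants to check the probabilities above, here is a small sketch using the post's own assumptions: a 2% chance that any single bit is flipped, plain 8-bit transmission on one side, and a hypothetical 10-bit code word that can absorb up to two flipped bits on the other.

```python
# Checking the numbers: 2% bit-flip chance, 8 data bits, and a hypothetical
# 10-bit code word that tolerates up to 2 flipped bits.
from math import comb

p = 0.02  # assumed probability that a single transmitted bit arrives wrong

def at_most_k_errors(n_bits, k):
    """Probability of k or fewer bit errors among n_bits independent bits."""
    return sum(comb(n_bits, i) * p**i * (1 - p)**(n_bits - i) for i in range(k + 1))

plain_ok = at_most_k_errors(8, 0)    # all 8 raw bits must arrive intact
coded_ok = at_most_k_errors(10, 2)   # up to 2 errors in 10 bits are tolerated
avg_tries = 1 / plain_ok             # expected transmissions of the raw 8 bits

print(f"8 bits, no coding : {plain_ok:.1%} arrive correct")    # ~85.1%
print(f"10-bit code word  : {coded_ok:.1%} decode correctly")  # ~99.9%
print(f"avg. raw bits sent: {8 * avg_tries:.1f}")              # ~9.4
```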
  15. Aldrin Cycler Ships

    Usually, you use error correction algorithms to account for errors so that, despite some errors, the correct data can be re-established. For example, you can use an algorithm that encodes each 8-bit piece of data in 10 bits in such a way that even two transmission errors in 10 bits don't affect data integrity, and therefore don't cause a retransmission. wumpus's formula now tells you the theoretical limit for the useful data rate, given the channel bandwidth and the signal-to-noise ratio (a small example is sketched below). For our communication with Mars, we know the channel bandwidth exactly (as it is determined by our antenna setup) and we have a pretty good estimate of the worst signal-to-noise ratio to expect. We can then use nice mathematical tricks to add additional bits to our data bits so that no data is lost in transmission (except in exceptional and extremely rare cases), and therefore we don't need to retransmit data.

    I just looked up how data is encoded on a CD as an example: each 8-bit block of data is encoded as 14 bits on the CD. This might seem extremely wasteful at first, as we are losing nearly half the capacity of the CD to error correction. However, since we added error correction to our CD, we can actually store three times as much data on the CD as we could write without error correction, giving us 50% more usable capacity. And this is without needing to go back to reread a faulty byte (by the way, how do you detect that the data contains errors?). So, there is no need for relay satellites to improve the time between transmission attempts, as error correction algorithms can already cover the errors. The only useful role of relay satellites is to improve the signal-to-noise ratio, which could potentially increase transmission speeds, as one could use higher frequencies and less error correction. However, the question that remains is: how many satellites do you need until you beat a 70 m dish?
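    The formula referred to above is the Shannon-Hartley capacity, C = B * log2(1 + S/N). A quick sketch with made-up numbers shows how bandwidth and signal-to-noise ratio set the ceiling on the useful data rate (the 1 MHz channel and the SNR values are purely illustrative, not actual DSN figures):

```python
# Shannon-Hartley limit: C = B * log2(1 + S/N).
# The bandwidth and signal-to-noise values are illustrative, not DSN numbers.
from math import log2

def capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear)

for snr_db in (-5, 0, 10):
    c = capacity_bps(1e6, snr_db)  # assumed 1 MHz channel
    print(f"SNR {snr_db:+3d} dB -> at most {c / 1e3:.0f} kbit/s of error-free data")
```

    Error-correcting codes let a real link approach that ceiling, but no amount of coding pushes the useful rate above it, which is why improving the signal-to-noise ratio is the only way a relay could genuinely help.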