Tullius

Members
  • Content count: 102

Community Reputation: 91 Excellent

1 Follower

About Tullius

  • Rank: Spacecraft Engineer
  1. Even with multiple vacuum-separated insulation layers, there will still be heat transfer (due to black-body radiation). You would also have to deal with the heat generated by the electronics inside, which has to be removed somehow. Better insulation might extend the stay on the surface from a few hours to a few days, but you are still severely limited in time. And the heat produced by the electronics needs active cooling, which would also compensate for the imperfect insulation, but then you are relying on the cooling to work flawlessly. Also, unless you were using vacuum tubes, you will need transistors for the RF receivers and transmitters. Hence, at some point you need heat-resistant transistors (or you just let some cables pass between the inner and outer insulation). A rough estimate of the radiative heat leak is sketched below.
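    As a rough illustration of the black-body problem (my own back-of-the-envelope numbers, not anything from the video; the emissivity and temperatures are assumptions):

    ```python
    # Radiative heat leak across an ideal vacuum gap, via the Stefan-Boltzmann
    # law. Emissivity and temperatures are illustrative assumptions, not
    # measured probe values.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def radiative_flux(t_hot_k, t_cold_k, emissivity=0.05):
        """Net flux between two parallel gray surfaces across a vacuum gap.

        For equal emissivities e, the effective emissivity is e / (2 - e).
        """
        e_eff = emissivity / (2 - emissivity)
        return e_eff * SIGMA * (t_hot_k**4 - t_cold_k**4)

    # Venus' surface is roughly 737 K; suppose the electronics sit at 350 K.
    flux = radiative_flux(737, 350, emissivity=0.05)  # low-emissivity foil
    print(f"heat leaking in per m^2 of hull: {flux:.0f} W")  # ~400 W
    ```

    Even with very good low-emissivity foil, hundreds of watts per square meter leak in, which is why insulation alone only buys you hours to days.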
  2. The research presented in the above video is specifically directed at developing electronic components that work even at the surface temperature of Venus, thereby removing the need for insulation, as was used on the Russian probes in the 70s. If we just want to keep using our normal electronics on Venus, we need to prevent them from heating up to more than 150 °C. That is simply impossible for any extended period of time in the Venusian hell, even with the most ridiculous amount of insulation.
  3. Catapult start?

    Using the idea pictured in the first post to give the Saturn V a 2g push: the Saturn V weighs nearly 3000 tons, which means we need more than 9000 tons of water. Since 1 m³ of water weighs 1 ton, that is 9000 m³ of water, i.e. a cube of water a bit over 20 meters on a side; still a huge cube, but not mind-bogglingly huge. And I forgot: if you want to give the Saturn V 50 m/s of "free speed", you need 62.5 meters of difference in height, i.e. more than half the Saturn V's 110-meter height. So you have a giant water tank and a platform that moves half the height of the launch tower within about 2.5 seconds. And all of this to save only 35 tons of fuel (out of 2100 tons for the first stage alone)?! The numbers are checked in the sketch below. (Edit: corrected an earlier calculation error; 1 m³ of water = 1 ton.)
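    A quick sanity check of the arithmetic (the factor of three on the water mass is my reading of the 2g requirement: 2g of net acceleration plus 1g against gravity, supplied by the weight of the water):

    ```python
    # Checking the catapult numbers from the post (rounded Saturn V figures).
    g = 9.81                      # m/s^2
    rocket_t = 3000               # Saturn V launch mass in tons (rounded)

    # Accelerating the rocket upward at 2 g takes 3 g worth of force
    # (2 g net plus 1 g to cancel gravity); if that force is the weight
    # of water, you need three times the rocket's mass in water.
    water_t = 3 * rocket_t        # 9000 tons
    water_m3 = water_t            # 1 m^3 of water = 1 ton
    side_m = water_m3 ** (1 / 3)  # cube side, ~20.8 m

    # Height and time to reach 50 m/s at a constant 2 g: v^2 = 2*a*h, t = v/a.
    v = 50.0                      # m/s of "free speed"
    a = 2 * g
    h = v**2 / (2 * a)            # ~63.7 m (62.5 m if you round g up to 10)
    t = v / a                     # ~2.5 s

    print(f"cube side {side_m:.1f} m, travel {h:.1f} m in {t:.1f} s")
    ```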
  4. It is not a barge, but a ship. And unlike SpaceX's barges, it will be moving during New Glenn landings to provide a more stable landing spot. An added bonus is, of course, that a ship can bring New Glenn back to port much faster than a barge could.
  5. Camera system for Mariner 4

    The image was transmitted as a sequence of numbers and then converted into an actual image down on Earth, i.e. just like a modern digital camera, the only difference being the camera technology itself. And if you are dedicated enough, you can even draw the image from the sequence of numbers by hand (a toy version of the idea is sketched below): (Source: https://commons.wikimedia.org/wiki/File:First_TV_Image_of_Mars.jpg from the Wikipedia article https://en.wikipedia.org/wiki/Mariner_4) The above is the first image of Mars transmitted by Mariner 4, hand-drawn from the sequence of numbers, as the engineers didn't want to wait for the computers to process and print it. (If you zoom in on the original image, you can even read the sequence of numbers printed on the paper strips.)
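    A toy version of the numbers-to-picture idea (made-up data, not the actual Mariner 4 telemetry; the real images were 200x200 pixels at 6 bits per pixel):

    ```python
    # Render a stream of 6-bit brightness values as ASCII shades, the same
    # paint-by-numbers principle the engineers used by hand.
    import random

    WIDTH, HEIGHT = 16, 8
    stream = [random.randint(0, 63) for _ in range(WIDTH * HEIGHT)]

    SHADES = " .:-=+*#%@"  # 10 shades, darkest to brightest
    for row in range(HEIGHT):
        line = stream[row * WIDTH:(row + 1) * WIDTH]
        print("".join(SHADES[v * len(SHADES) // 64] for v in line))
    ```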
  6. Yes, that is probably the most fascinating thing about this whole endeavour. The timing is certainly reliable, as otherwise we would lose a large part of the value of observing the event in parallel. As an example, Virgo detected the gravitational waves 22 milliseconds before LIGO Livingston, and 3 milliseconds later they were also picked up by LIGO Hanford, so we know exactly when it happened (a sketch of how these delays pin down the direction is below). The gamma-ray burst was also observed by two space telescopes (Fermi and INTEGRAL), and knowing the exact timing of 1.7 seconds between the gravitational waves and the gamma-ray burst is valuable in its own right. As to why, I found an article from the links on Wikipedia: that may be part of the explanation. Another might be (just guessing) that the 100 seconds of gravitational-wave observation detail the seconds right before the explosion, while the gamma rays etc. are the result of the explosion.

    The whole thing is just mind-bogglingly huge: 10,000 Earth masses of gold and platinum are expected to have formed! The paper detailing the astronomical observations is said to have 4600 authors, about one third of all astronomers! And now, place your bets: what has formed out of this collision? A neutron star heavier than any known neutron star, or a black hole lighter than any known black hole?
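    For the curious, this is roughly how the arrival-time differences pin down the direction (the baseline length below is an assumed round figure, not a surveyed value):

    ```python
    # For a plane wave, the delay between two detectors is
    # dt = (baseline / c) * cos(theta), so each pair of detectors
    # constrains the source to a cone on the sky.
    import math

    c_km_s = 299_792          # speed of light, km/s
    baseline_km = 8_000       # Virgo <-> LIGO Livingston chord (assumed)
    dt_s = 0.022              # the 22 ms from GW170817

    cos_theta = c_km_s * dt_s / baseline_km
    theta = math.degrees(math.acos(cos_theta))
    print(f"source lies on a cone ~{theta:.0f} degrees off the baseline")
    ```

    With three detectors you get a cone per detector pair, and their intersection is the patch of sky the telescopes then searched.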
  7. You might be onto something. I looked at the announcement of the LIGO press conference, which will be split into two parts. The first part will be with LIGO and Virgo scientists, but the scientists in the second part are not directly related to gravitational waves. Nial Tanvir and Edo Berger are connected to two discoveries with the Swift space telescope, one being the gamma-ray burst GRB 090423, at the time the oldest observed cosmic event (https://en.wikipedia.org/wiki/GRB_090423), and the other being a supernova whose observation may pave the way for future observations of supernovae via gravitational waves (https://en.wikipedia.org/wiki/SN_2008D). And it doesn't end there: GRB 090423 was confirmed by a telescope of the ESO. So we are still at the stage of wild guesses, but these press conferences might indeed be related.
  8. Aldrin Cycler Ships

    Transmitting the data 3 to 10 times is probably the worst possible error-correction scheme, as it adds a lot of additional data for a very small gain in transmission reliability. If there is a 2% chance of each transmitted bit being received wrong, the system I described above, with 2 error-correction bits added to 8 bits of data, has a 99.9% chance of correctly transmitting the 8 bits of data, i.e. there won't be many errors. Without error correction, the chance of correctly receiving the data would only be 85.1%, i.e. a rather high chance of error. And if you somehow could tell that the 8 bits were badly received, you would need an average of 1.175 retransmissions to get a correct transmission, i.e. you send an average of 9.4 bits. So 9.4 bits with no error risk vs. 10 bits with a small risk doesn't sound that bad? But we haven't yet established how many extra bits are needed to decide whether the received data is correct, and with error correction we don't need to add another 10 or 40 minutes of light lag to get our data through. With CD-grade error correction, even a 10% chance of flipping each bit can be recovered from with 99.98% probability. (The probabilities above are reproduced in the sketch below.)

    Retransmissions can still be useful for very precious data, if you notice through a basic error check that, despite a very low chance, an error still crept in. But in practice, you don't want to rely too much on them, as retransmitting costs a lot. If you improve the signal-to-noise ratio, you can either reduce the amount of error correction or switch to a faster connection speed (which increases the errors in the signal). The task is to find the right compromise. In that respect, both your wifi and Curiosity use the same tricks to optimise speed while keeping the error rate just within tolerable limits.

    The only case where it actually becomes useful to retransmit the data is when the signal-to-noise ratio is affected by something like solar activity: instead of always using the more robust transmission method, which won't drop out even when a solar flare occurs, you use the faster method and just accept that during solar flares nothing useful is transmitted and you have to retry later. Since solar flares are rare, you might actually gain bandwidth with this technique, which is good as long as there is no time-critical data. And in that case of non-time-critical data with occasional solar flares, your idea of relay satellites might actually become useful, provided that the transmission through the relays has a higher bandwidth than the one through the 70m antennas of the DSN. Having the relays buffer the data in case some of it needs to be retransmitted is then obviously a nice bonus. (By the way, NASA has already tested an alternative, delay-tolerant internet protocol, with the nodes buffering data for retransmission, in an experiment on the ISS.)

    Curiosity, like Opportunity, only rarely communicates directly with Earth, but rather goes through the Mars orbiters, as they have more powerful transmitters and can share their time on the DSN. Sharing capacity has obvious advantages, but it doesn't need interplanetary relays, only relays on the surface of Mars or in Mars orbit. A further advantage of a large and powerful dish on Mars or in Mars orbit is that it can communicate directly with Earth, which beats transmission through multiple relays strung out between Earth and Mars in signal round-trip time: near opposition it communicates along a direct line instead of a half-circle.
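    The probabilities quoted above, reproduced (note that a (10,8) code correcting two errors is an idealisation, as a real double-error-correcting code needs more than 2 check bits, but the arithmetic is as stated):

    ```python
    # p = 2% chance of each bit flipping independently.
    from math import comb

    p = 0.02

    # With the idealised code: a 10-bit block survives if at most 2 bits flip.
    p_block_ok = sum(comb(10, k) * p**k * (1 - p) ** (10 - k) for k in range(3))
    print(f"coded block ok:        {p_block_ok:.1%}")   # ~99.9%

    # Without error correction: all 8 raw bits must arrive intact.
    p_raw_ok = (1 - p) ** 8
    print(f"uncoded byte ok:       {p_raw_ok:.1%}")     # ~85.1%

    # Retransmissions until an uncoded byte gets through (geometric
    # distribution), and the average number of bits on the wire.
    tries = 1 / p_raw_ok
    print(f"average transmissions: {tries:.3f} -> {8 * tries:.1f} bits")
    ```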
  9. Aldrin Cycler Ships

    Usually, you use error-correcting codes so that, despite some transmission errors, the correct data can be reconstructed. For example, you can use a code that encodes each 8-bit piece of data in 10 bits in such a way that even two transmission errors per 10 bits don't affect data integrity, and therefore don't force a retransmission. wumpus's formula (the Shannon-Hartley theorem) then tells you the theoretical limit for the useful data rate, given the channel bandwidth and the signal-to-noise ratio; a worked example is below. For our communication with Mars, we know the channel bandwidth exactly (it is determined by our antenna setup), and we have a pretty good estimate of the worst expected signal-to-noise ratio. We can then use nice mathematical tricks to add check bits to our data bits so that no data is lost in transmission (except in extremely rare cases), and therefore we don't need to retransmit data. I just looked up how data is encoded on a CD as an example: each 8-bit block of data is stored as 14 bits on the disc (eight-to-fourteen modulation). This might seem extremely wasteful at first, as we are losing nearly half the capacity of the CD. However, since the added redundancy makes the CD error-tolerant, we can write the data three times as densely as we could without it, giving us roughly 50% more usable capacity overall. And this is without ever needing to go back and reread a faulty byte (by the way, how would you even detect that the data contains errors?). So there is no need for relay satellites to improve the time between transmission attempts, as error-correcting codes already cover the errors. The only useful role of relay satellites is to improve the signal-to-noise ratio, which could potentially increase transmission speeds, as one could use higher frequencies and less error correction. However, the question that remains is: how many satellites do you need until you beat a 70m dish?
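    A worked example of that limit (the bandwidth and signal-to-noise ratio below are illustrative round numbers, not real DSN link-budget figures):

    ```python
    # Shannon-Hartley: C = B * log2(1 + S/N), the maximum error-free data
    # rate over a noisy channel, approachable with good enough coding.
    from math import log2

    def shannon_capacity(bandwidth_hz, snr_linear):
        return bandwidth_hz * log2(1 + snr_linear)

    bandwidth_hz = 1e6                  # 1 MHz channel (assumed)
    snr_db = 3.0                        # weak deep-space signal (assumed)
    snr_linear = 10 ** (snr_db / 10)

    c = shannon_capacity(bandwidth_hz, snr_linear)
    print(f"capacity: {c / 1e6:.2f} Mbit/s")  # ~1.58 Mbit/s
    ```

    Note that improving the signal-to-noise ratio (e.g. with relays) raises the limit only logarithmically, which is why beating a 70m dish is not trivial.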
  10. SpaceX Discussion Thread

    Because safety. If both the rocket and the LES have only 97% reliability each (which is quite low for a modern rocket), the crew still has a roughly 99.9% chance of surviving the launch, i.e. very, very safe (see the arithmetic below). On the other hand, to achieve this survivability solely through the reliability of the rocket, the rocket itself would need to be 99.9% reliable, which is ridiculously high.
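    The arithmetic behind those numbers:

    ```python
    # The crew is lost only if the rocket fails AND the LES then also fails.
    rocket_ok = 0.97
    les_ok = 0.97

    survival = rocket_ok + (1 - rocket_ok) * les_ok
    print(f"crew survival: {survival:.2%}")   # 99.91%
    ```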
  11. Fun with Fibonacci

    It took me a bit of time to find the proof, but the German Wikipedia page gave the right clue: the Fibonacci numbers satisfy the addition formula F(m+n) = F(n+1)·F(m) + F(n)·F(m-1), of which your formula is the special case m = n+1, giving F(2n+1) = F(n+1)² + F(n)². The addition formula itself follows quite easily from the defining recurrence F(n) = F(n-1) + F(n-2) by induction on m. (A numerical check is below.)
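    A quick numerical check of both formulas (using the convention F(0) = 0, F(1) = 1):

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(k: int) -> int:
        return k if k < 2 else fib(k - 1) + fib(k - 2)

    # Addition formula: F(m+n) = F(n+1)*F(m) + F(n)*F(m-1)
    for m in range(1, 20):
        for n in range(20):
            assert fib(m + n) == fib(n + 1) * fib(m) + fib(n) * fib(m - 1)

    # The m = n+1 special case: F(2n+1) = F(n+1)^2 + F(n)^2
    assert all(fib(2 * n + 1) == fib(n + 1) ** 2 + fib(n) ** 2
               for n in range(20))
    print("identity holds for all tested m, n")
    ```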
  12. Nuclear thermal rockets

    You can probably design around the political problems; after all, we have launched nearly 10 kg of plutonium-238 on New Horizons, and even more on Cassini. You only need to fulfill two conditions: first, prove that it is the only sensible propulsion system for this type of mission; second, make it as safe as possible. On the Apollo missions, the RTG for the science package was transported in a casing capable of surviving reentry, preventing the plutonium from spreading in the atmosphere should the mission fail and the lunar module reenter (as happened on Apollo 13). If you observe these restrictions, it is probably doable to launch an NTR into space. On the other hand, you probably want to do tests on Earth, which could produce a quite significant amount of radioactive waste, which in turn could increase program costs and political opposition significantly.
  13. The vertical stabilizer is also the non-deflecting part of the tail fin, which prevents the plane from side-slipping. The shorter the plane, the larger the vertical stabilizer has to be to keep the plane stable (and the Space Shuttle isn't very long; compare it, for example, to a Boeing 747SP). The part you are struggling with is the rudder, which gives yaw control. Part of it might be that in reality you have much finer control over how far you deflect the control surfaces. Maybe the Shuttle also needed extra-large control surfaces for control in the thinner parts of the atmosphere (its other control surfaces aren't small either). The large size of its rudder (the moving part of the vertical stabilizer) might also have to do with the fact that the rudder was split in two parts, allowing it to deflect to both sides simultaneously and act as a speed brake (this is actually modelled in KSP, if you deploy its control surface).

    As to why the stock Dynawing has this peculiar arrangement with the vertical stabilizers at the wing tips, I can only speculate, as a player who has built his own shuttle replica in KSP: if you use the Space Shuttle's rudder during launch to control yaw, it generates an enormous amount of roll, as the center of mass sits far away inside the external tank. On the stock Dynawing, by contrast, the rudders sit much closer to the center of mass and thereby generate much less roll during launch, and none during the glide home, making it much simpler to control, as using yaw has no negative side effect on roll.
  14. Forum designs new rocket to replace the SLS

    Because EUS isn't ready yet and waiting for it would cause significant delays to EM-1. On the other hand, ICPS allows NASA to test the lower stages and Orion in a similar fashion to how they will be used on EM-2. Without ICPS, EM-1 could probably only be launched in a similar timeframe to EM-2, i.e. in 2023 instead of 2019. Sure, ICPS costs money, but it's derived from Delta IV's DCSS, which should keep its costs down. And 33 months for altering the VAB is a short time compared to the 4 years between EM-1 and EM-2.
  15. Stabilizers on Saturn V

    A quick Google search provides the answer: And the article itself contains even more interesting details (absolutely worth a read, even if you are not a rocket scientist).