Tullius

Members
  • Content count: 98
Community Reputation

90 Excellent

1 Follower

About Tullius

  • Rank
    Rocketry Enthusiast
  1. Camera system for Mariner 4

    The image was transmitted as a sequence of numbers and then converted to an actual image on Earth, i.e. much like a modern digital camera, only with a different sensor technology. And if you are dedicated enough, you can even draw the image from the sequence of numbers by hand: (Source: https://commons.wikimedia.org/wiki/File:First_TV_Image_of_Mars.jpg from the Wikipedia article https://en.wikipedia.org/wiki/Mariner_4) That above is the first image of Mars transmitted by Mariner 4, hand-drawn from the sequence of numbers, because the engineers didn't want to wait for the computers to process and print it. (If you zoom in on the original image, you can even read the sequence of numbers printed on the strips.)
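For illustration, turning a sequence of brightness numbers back into an image takes only a few lines today. This sketch writes a tiny grid of invented pixel values (not Mariner 4 data) as a plain-text PGM grayscale file that most image viewers can open:

```python
# Sketch: turning a sequence of brightness numbers into an image, the
# same principle as Mariner 4's transmission. The pixel values below
# are invented for illustration; PGM ("P2") is a plain-text grayscale
# format that most image viewers can open.
width, height = 4, 3
pixels = [0, 40, 80, 120,        # 0 = black ... 255 = white
          160, 200, 220, 240,
          255, 200, 100, 50]

rows = [" ".join(str(v) for v in pixels[r * width:(r + 1) * width])
        for r in range(height)]
pgm = f"P2\n{width} {height}\n255\n" + "\n".join(rows) + "\n"

with open("first_image.pgm", "w") as f:
    f.write(pgm)
```

The hand-drawing the engineers did amounts to exactly this loop, performed with coloured pencils on the printed number strips.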
  2. Yes, that is probably the most fascinating thing about this whole endeavour. The time difference is certainly correct, as otherwise we would lose much of the value of observing the event in parallel. As an example, Virgo detected the gravitational waves 22 milliseconds before LIGO Livingston, and 3 milliseconds later they were also picked up by LIGO Hanford, so we know exactly when it happened. The gamma-ray burst was also observed by two space telescopes (Fermi and INTEGRAL). And knowing the exact timing of 1.7 seconds between the gravitational waves and the gamma-ray burst is valuable in its own right. As to why, I found an article from the links on Wikipedia: That may be part of the explanation. Another (just guessing) might be that the 100 seconds of gravitational wave observation detail the seconds right before the explosion, while the gamma rays etc. are the result of the explosion. The whole thing is just mind-bogglingly huge: 10,000 Earth masses of gold and platinum are expected to have formed! The paper detailing the astronomical observations is said to have 4600 authors, about one third of all astronomers! And now, place your bets: What has formed out of this collision? A neutron star heavier than any known neutron star, or a black hole lighter than any known black hole?
  3. You might be onto something. I looked at the announcement of the LIGO press conference, which will be split in two parts. The first part will be with LIGO and Virgo scientists, but the scientists in the second part are not directly related to gravitational waves. Nial Tanvir and Edo Berger are connected to two discoveries with the Swift X-ray space telescope, one being the gamma-ray burst GRB 090423, the oldest observed cosmic event (https://en.wikipedia.org/wiki/GRB_090423), and the other being a supernova whose observation would allow future observations of supernovae by gravitational waves (https://en.wikipedia.org/wiki/SN_2008D). But this whole thing doesn't end here: GRB 090423 was verified by a telescope of the ESO. So, we are still at wild guesses, but these press conferences might indeed be related.
  4. Aldrin Cycler Ships

    Transmitting the data 3 to 10 times is probably the worst possible error correction algorithm, as it adds a lot of additional data for a very small gain in transmission reliability. If there is a 2% chance of a transmitted bit being received wrong, the system I described above with 2 additional error correction bits for 8 bits of data has a 99.9% chance of correctly transmitting the 8 bits of data, i.e. there won't be many errors. Without error correction, the chance of correctly re-establishing the data would only be 85.1%, i.e. a rather high chance of error. And if you somehow could tell that the 8 bits of data were badly received, you would need an average of 1.175 transmissions to get a correct one, i.e. you need to send an average of 9.4 bits. So 9.4 bits with no error risk vs. 10 bits with a small risk doesn't sound that bad? But we haven't yet established how many bits are needed to decide whether the received data is correct, and we haven't counted the additional 10 or 40 minutes of signal travel time that each retransmission adds. With CD-grade error correction, even a 10% chance of flipping each bit can be recovered from with 99.98% probability. Retransmissions can still be useful for very precious data, if you notice through a basic error check that, despite a very low chance, an error still crept in. But in practice, you don't want to rely too much on them, as retransmitting costs a lot. If you improve the signal-to-noise ratio, you can either reduce the amount of error correction you do or switch to a faster connection speed (which increases the errors in the signal). The task is now to find the right compromise. In that respect, both your wifi and Curiosity use the same tricks to optimise the speed, while keeping errors just below the tolerable level.
The only case where it actually becomes useful to retransmit the data is when the signal-to-noise ratio is affected by something like solar wind: instead of always using the more secure transmission method, which won't drop out even when a solar flare occurs, you use the faster method and just accept that during solar flares nothing useful is transmitted and you have to retry later on. Since solar flares are rare, you might actually gain bandwidth with this technique, which is good as long as there is no time-critical data. And in the above example of non-time-critical data with occasional solar flares, your idea of relay satellites might actually become useful, provided that the transmission through the relays is actually of higher bandwidth than the one through the 70m antennas of the DSN. And having the relays buffer the data in case some of it needs to be retransmitted is then obviously a nice bonus (by the way, NASA already tested an alternative internet protocol with the nodes buffering the data for retransmissions during an experiment on the ISS). Curiosity, like Opportunity, only rarely communicates directly with Earth, but rather through the Mars orbiters, as they have more powerful transmitters and are able to share their time on the DSN. Sharing capacities has obvious advantages, but it needs relays on the surface or in orbit of Mars, not interplanetary relays. Also, the advantage of a large and powerful dish on Mars or in Mars orbit is that it can communicate directly with Earth, which beats transmission through multiple relays in orbit between Earth and Mars in signal round-trip time, as during opposition between Earth and Mars it can communicate in a direct line instead of along a half-circle.
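The retransmission numbers in the post above are plain arithmetic and easy to reproduce (this just rechecks the figures for a 2% bit error rate and 8-bit blocks, it is not a channel model):

```python
# Recomputing the post's numbers: 2% chance per bit of being received
# wrong, 8 data bits per block, no error correction.
p_bit_ok = 0.98
p_block_ok = p_bit_ok ** 8      # chance all 8 bits arrive intact
avg_tries = 1 / p_block_ok      # expected transmissions until one succeeds
avg_bits = avg_tries * 8        # average bits sent per good block

print(f"{p_block_ok:.1%}")      # ~85.1% received correctly first try
print(f"{avg_tries:.3f}")       # ~1.175 transmissions on average
print(f"{avg_bits:.1f}")        # ~9.4 bits per correctly delivered block
```

The expected-transmissions formula 1/p assumes each attempt succeeds independently with probability p, which matches the post's idealised setting.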
  5. Aldrin Cycler Ships

    Usually, you use error correction algorithms to account for errors, so that, despite some errors, the correct data can be re-established. For example, you can use an algorithm that encodes each 8-bit piece of data in 10 bits in such a way that even two transmission errors in 10 bits don't affect data integrity, and therefore won't cause data retransmission. wumpus's formula now tells you how high the theoretical limit for the useful data rate is, given the channel bandwidth and the signal-to-noise ratio. For our communication with Mars, we know the channel bandwidth exactly (as it is determined by our antenna setup) and we have a pretty good estimate of the worst possible expected signal-to-noise ratio. We can then use nice mathematical tricks to add additional bits to our data bits so that no data is lost in transmission (except in exceptional and extremely rare cases), and therefore we don't need to retransmit data. I just looked up how data is encoded on a CD as an example: each 8-bit block of data is encoded as 14 bits on the CD. This might seem extremely wasteful at first, as we are losing nearly half the capacity of the CD to error correction. However, since we added error correction to our CD, we can actually store three times as much data on it as we could write without error correction, giving us 50% more usable capacity. And this is without needing to go back to reread a faulty byte (by the way, how do you detect that the data contains errors?). So there is no need for relay satellites to improve the time between transmission attempts, as error correction algorithms can already cover all errors. The only useful role of relay satellites is to improve the signal-to-noise ratio, which could potentially increase transmission speeds, as one could use higher frequencies and less error correction. However, the question that remains is: how many satellites do you need until you beat a 70m dish?
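To illustrate how such error correction bits work in principle, here is the textbook Hamming(7,4) code: 4 data bits become 7 transmitted bits, and the receiver can locate and fix any single flipped bit without a retransmission. (This is a standard classroom example, not the 8-in-10 scheme described above and not the Reed-Solomon coding actually used on CDs.)

```python
# Hamming(7,4): each parity bit covers an overlapping subset of the
# data bits; the pattern of failed parity checks at the receiver (the
# "syndrome") is exactly the position of the flipped bit.

def encode(d):                      # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recheck each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4  # 1-based position of the error
    if pos:                         # non-zero syndrome: fix that bit
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
sent = encode(data)
received = sent[:]
received[3] ^= 1                    # corrupt one bit in transit
assert decode(received) == data     # receiver recovers the data anyway
```

The same idea, scaled up and interleaved, is what lets a CD player read through scratches without ever "retransmitting" anything.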
  6. SpaceX Discussion Thread

    Because safety. If both the rocket and the LES have only 97% reliability each (which is quite low for a modern rocket), it gives the crew a chance of about 99.9% of surviving the launch, i.e. very, very safe. On the other hand, to achieve this survivability solely through the reliability of the rocket, the rocket itself would need to be 99.9% reliable, which is ridiculously high.
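The survival figure follows because the crew is lost only if the rocket fails and then the escape system also fails; a quick sanity check of the arithmetic:

```python
# Crew survival with a launch escape system (LES): loss requires the
# rocket to fail AND the LES to then fail as well.
p_rocket_fail = 1 - 0.97
p_les_fail = 1 - 0.97
p_survive = 1 - p_rocket_fail * p_les_fail

print(f"{p_survive:.2%}")   # about 99.91%
```

This treats the two failures as independent, which is the implicit assumption in the post.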
  7. Fun with Fibonacci

    It took me a bit of time to find the proof, but the German Wikipedia page gave the right clue: Fibonacci numbers satisfy the formula F(m+n) = F(n+1)·F(m) + F(n)·F(m-1), of which your formula is the special case m = n+1. The formula itself follows quite easily from the defining property F(n) = F(n-1) + F(n-2) via induction on m.
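The addition formula is easy to sanity-check by brute force (using the convention F(0) = 0, F(1) = 1):

```python
# Brute-force check of the Fibonacci addition formula
# F(m+n) = F(n+1)*F(m) + F(n)*F(m-1), with F(0) = 0, F(1) = 1.
def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for m in range(1, 15):
    for n in range(1, 15):
        assert fib(m + n) == fib(n + 1) * fib(m) + fib(n) * fib(m - 1)

# The special case m = n + 1 gives F(2n+1) = F(n+1)^2 + F(n)^2:
assert fib(9) == fib(5) ** 2 + fib(4) ** 2   # 34 == 25 + 9
```

A check like this is of course no substitute for the induction proof, but it catches index mistakes in the formula immediately.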
  8. Nuclear thermal rockets

    You can probably design around the political problems; after all, we have launched nearly 10 kg of plutonium-238 on New Horizons, and even more on Cassini. You only need to fulfil two conditions: prove that it is the only sensible propulsion system for this type of mission, and make it as safe as possible. On the Apollo missions, the RTG for the science package was transported in a casing capable of surviving reentry, preventing the spreading of the plutonium in the atmosphere should the mission fail and the lunar module reenter (as happened on Apollo 13). If you observe these restrictions, it is probably doable to launch an NTR into space. On the other hand, you probably want to do tests on Earth, which could produce a quite significant amount of radioactive waste, which in turn could increase program costs and political opposition significantly.
  9. The vertical stabilizer is also the non-deflecting part of the tail fin, which prevents the plane from side-slipping. The shorter the plane, the larger the vertical stabilizer has to be to keep the plane stable (and the Space Shuttle isn't very long; compare it for example to a Boeing 747SP). The part you are struggling with is the rudder, which gives yaw control. One part might be that in reality you have much more delicate control over how much you want to deflect them. Maybe it also needed extra-large control surfaces for control in the thinner parts of the atmosphere (the other control surfaces on the Shuttle aren't small either). In the case of the Shuttle, the large size of the rudder (the moving part of the vertical stabilizer) might also have to do with the fact that the Shuttle's rudder was split in two parts, allowing it to deflect to both sides simultaneously to act as a speed brake (this is actually modelled in KSP, if you deploy its control surface). As to why the stock Dynawing has this peculiar arrangement with the vertical stabilizers at the wing tips, I can only speculate as a player having built my own shuttle replica in KSP: if you use the Space Shuttle's rudder during launch to control yaw, it generates an enormous amount of roll, as the center of mass sits very far away inside the external tank. On the stock Dynawing, on the other hand, the rudders sit much closer to the center of mass and thereby generate much less roll during launch, and none during the glide home, making it much simpler to control, as using yaw has no negative side effect on roll.
  10. Forum designs new rocket to replace the SLS

    Because EUS isn't ready yet and would cause significant delays to EM-1. On the other hand, ICPS allows NASA to test the lower stages and Orion in a similar fashion to how they will be used on EM-2. Without ICPS, EM-1 could probably only be launched in a similar timeframe to EM-2, i.e. in 2023 instead of 2019. Sure, ICPS costs money, but it's derived from Delta IV's DCSS, which should reduce its costs. And 33 months for altering the VAB is a short time compared to the 4 years between EM-1 and EM-2.
  11. Stabilizers on Saturn V

    A quick Google search provides the answer: And the article itself contains even more interesting details (absolutely worth a read, even if you are not a rocket scientist).
  12. Exactly that: if you have two sides, or rather the ratio of their lengths, you can get the angle with arctan, in the same way as you can get the ratio of these sides by taking tan of the angle. arctan is the inverse function of tan, i.e. arctan = tan^-1. What does inverse function mean? If I have a function f mapping x to y, then f^-1 maps y to x (provided that no more than one x is mapped to y). What does this mean for tan? tan maps angles in (-pi/2, pi/2), resp. (-90°, 90°) (depending on your notation of angles), to values in the real numbers, and arctan maps real numbers to angles in (-pi/2, pi/2), resp. (-90°, 90°). For example, tan(pi/4) = tan(45°) = 1, and arctan(1) = pi/4 = 45°. Or take a look at this graph: tan maps values on the horizontal axis (usually called the x-axis) to values on the vertical axis (usually called the y-axis); arctan does the inverse: it maps values on the vertical axis to values on the horizontal axis.
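The same round trip can be tried out directly, since Python's standard math module provides both functions (tan and atan):

```python
import math

# tan maps an angle to a ratio of side lengths (opposite/adjacent);
# arctan maps the ratio back to the angle in (-pi/2, pi/2).
angle = math.radians(45)        # pi/4
ratio = math.tan(angle)         # approximately 1.0
recovered = math.atan(ratio)    # back to the angle

print(ratio)                    # approximately 1
print(math.degrees(recovered))  # approximately 45 degrees
```

The tiny deviations from exactly 1 and 45 are floating-point rounding, not a property of the functions themselves.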
  13. NASA SLS/Orion/DSG/DST

    It would mean starting from zero, or accepting Falcon Heavy and New Glenn as alternatives. And while the former isn't really interesting for the Senate to fund, since they would see the results even later, the latter two are less capable rockets than SLS and, while one would quickly see some results, would have difficulties even putting a space station module into lunar orbit. And if you start anew with private contractors, all development work needs to be paid for 2 or 3 times, since you want competition. Also, either NASA lets 2 or 3 companies build the rocket entirely on their own, which would mean that NASA has little influence on the design and you have to find that many companies willing to do it, or NASA does the development itself but buys different parts from different companies, which would reduce the efficiency gains. In short, cancelling SLS now would not just mean a huge blow to NASA's manned space exploration, but also leave NASA with something that might not necessarily be better. It might be necessary to redesign some parts of SLS, like maybe finding a private contractor to develop new, cheaper engines to be built in NASA's facilities, but just cancelling SLS and putting the whole work in the hands of private companies won't necessarily improve anything. On the other hand, if NASA finds out in 3-5 years that SLS is indeed a dead end, there is a good chance that SpaceX and Blue Origin might by that point have developed an acceptable replacement, which NASA could buy directly without any of the lengthy procurement procedures, since there are only 1 or 2 viable possibilities.
  14. sin and tan are the trigonometric functions sine and tangent (https://en.wikipedia.org/wiki/Trigonometric_functions). arctan is the inverse function of tan (https://en.wikipedia.org/wiki/Inverse_trigonometric_functions).
  15. Blue Origin Thread (merged)

    I think that SpaceX just wants, for the time being, to save money on the development of a specialised 2nd stage for Falcon Heavy. The Falcon 9 2nd stage is good enough for the first flights of Falcon Heavy and provides it with a meaningful payload capacity, even if it means that recovering the core stage will cost a lot more fuel and thereby a lot in terms of payload fraction. And let's not forget that SpaceX wants to recover 2nd stages, so they are planning quite some development in terms of Falcon 2nd stages, unlike the first stages, where most of the development already seems to have happened. After all, Falcon wouldn't be the first rocket to get a new second stage at some stage in its life cycle.