Everything posted by K^2

  1. No, UAE. Just because Americans keep sending robots to Mars doesn't mean there is oil there. Sorry, couldn't resist. Well, I'm pretty sure the UAE can simply buy a decent space program. But even then, 2021 is a heck of a goal in terms of available time. Best of luck to them anyway. Maybe it will motivate some people, if nothing else. And it will help increase demand for people educated in the field, which is always a good thing.
  2. I don't actually think 1 g is a reasonable goal. I've mentioned before that 1 m/s² is quite doable with a 1U (quick numbers in the sketch below). It gives a rotation slow enough to get a few sensor readings per turn, which allows for correction of any tumbling that might develop. For torque, magnetotorquers: just a square coil per face. It's very simple to build, and it will provide enough torque to spin up the cube initially, adjust the rate of rotation if necessary, and re-orient the axis of rotation by letting it precess around the magnetic field. It's simple, cheap, has few things that can go wrong with it, and we can do everything we need to with it.
     Good question. It depends a lot on the duration we can plan for. If it's a low-altitude launch with cheap hardware, it won't live past a couple of weeks. Seal-and-forget might be the way to go there; pressure will drop, but it should stay within acceptable levels for that duration. If we get at least an ISS-altitude orbit, and especially if we can get electronics that will survive for many months if not years, then we definitely need a way to sustain the pressure. It'll have to be a gas generator of some sort. It'd be nice to find a fluid with a vapor pressure of ~1 bar in the relevant temperature range which is also inert and non-toxic, but that might be asking for too much. Otherwise, perhaps something that can be decomposed into harmless gases at high temperature. Again, preferably a liquid. I'm going to look at some options. Of course, before settling on something specific, it'd be nice to build a prototype of the eco-chamber and see how long it holds pressure.
     For anything that leaves the Earth system, we'd definitely need serious external help with tracking. It should be possible to get some radio telescope time via education outreach programs; I've done some basic experiments that way while in school. That would, at least, take care of communication, as well as some rough telemetry. For more precise telemetry, DSN might be the way to go, yes, but I have no idea how hard it'd be to get access to that. On approach, final corrections would have to be done by visual localization. That's probably the only way to get a precise enough fly-by with something like that, and certainly the only way to set up an aerocapture. Aerobraking would also require adjustable airbrakes, weather data from the target body, and a good deal of luck. In any case, this is something with a whole different sort of budget. We'll see how much money we can get. Maybe we'll even look at doing a second mission once the first one is successful, to see if we can get better funding. But it's good to have ideas like that in mind, to always have a bigger, better goal out there.
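     A quick back-of-the-envelope check on that 1 m/s² figure (just a sketch; the 5 cm lever arm and the 10 Hz sensor rate are my assumptions, not from the thread):

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double a = 1.0;   /* target centripetal acceleration, m/s^2 */
            double r = 0.05;  /* assumed lever arm: sample ~5 cm off the spin axis of a 1U */

            double omega  = sqrt(a / r);              /* from a = omega^2 * r */
            double period = 2.0 * 3.14159265 / omega;

            printf("omega  = %.2f rad/s (%.0f RPM)\n", omega, 60.0 / period);
            printf("period = %.2f s per revolution\n", period);
            /* At an assumed 10 Hz sensor read rate, that's ~14 readings per turn. */
            return 0;
        }

     That works out to roughly 4.5 rad/s, about 43 RPM, with each revolution taking around 1.4 s.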
  3. Combination of factors. First, yes, they are very short events, so you have to happen to be looking in the right direction. Second, most of the energy output is in gamma, which you can't see with a radio telescope. Or a conventional optical telescope. Or even an X-ray telescope. In fact, you can't build a true gamma-ray telescope, because there are no focusing optics that work in that energy range. We have equipment to detect gamma bursts, but it usually gives only a general direction. The fact that these events have only recently been detected, and that some of the facts have only been established this decade, should tell you how challenging it is. You've misunderstood some of the key points. The distortion, essentially, works to create the delay. It's the reason the white hole explosion doesn't happen immediately following the collapse. But the energetic part of the event will take as long as the collapse did. That puts it on the same time frame as blitzars, and that's the same time frame as we see with Lorimer bursts. The duration will be just a few milliseconds, spilling out all of the black hole's energy.
  4. An 8051 draws 25 mA at 5 V. That's nothing compared to the solar input (rough numbers below). The only part that might run a bit hot is the comms unit, but that should only be used in bursts. That leaves the Sun as the primary source of heat to deal with. Worst case, we can cool the electronics with an active, solid-state heat pump. These things are cheap enough that you can use them for noise-free cooling of a CPU in a modern PC. It's a bit of a power hog, but since we only need it when solar power is abundant, I don't see a major issue with it.
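     Rough numbers behind "that's nothing compared to solar input" (a sketch; the ~2 W figure for a body-mounted 1U panel in full sun is my assumption, not something from the thread):

        #include <stdio.h>

        int main(void) {
            double mcu_W   = 0.025 * 5.0;  /* 8051: 25 mA at 5 V */
            double panel_W = 2.0;          /* assumed output of one body-mounted 1U panel in full sun */

            printf("MCU draw    : %.3f W\n", mcu_W);
            printf("solar input : %.1f W (assumed)\n", panel_W);
            printf("MCU uses about %.0f%% of that\n", 100.0 * mcu_W / panel_W);
            return 0;
        }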
  5. Plus launch and whatever we end up doing for ground station/comms, yes. Still working on the details. I keep getting side-tracked into specifics of the board layout. But that's not really a bad thing.
  6. The only events of sufficient magnitude and short enough duration to match this prediction are Lorimer bursts. They are extremely uniform, so I tend towards the blitzar hypothesis. After all, blitzars should happen, and their output is definitely consistent with Lorimer bursts. But some of these could still be exploding black holes. You'd need a pretty good sample to differentiate the two types of events, which we don't really have.
  7. Alignment would be a factor if we needed precise maneuvering. But if the sat is going to spin at 60 RPM, any misalignment will just show up as tumble, which is easily detected and corrected for. I'll run some simulations with slightly randomized fields (the core of it is sketched below), but I honestly don't expect it to be a problem. Once in orbit, temperature shouldn't be a factor. Plastic could get brittle when cold, which could be a problem on the ride up. Any idea where we could check on the temperatures of the payload during the ride? I did not think about outgassing. Hot plus vacuum could be an issue. I'll find out what the relevant factors are, and whether any of the available 3D-printable materials are fine for it.
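     For those simulations, the core is just Euler's rigid-body equations propagated in the body frame. A minimal sketch (forward Euler, torque-free; the inertia values are assumed, and the real run would want a proper integrator plus the field/torquer model and the randomized misalignments plugged into tau):

        #include <stdio.h>

        int main(void) {
            /* Assumed principal moments for a 1 kg, 10 cm cube with slightly uneven mass distribution. */
            double I1 = 1.7e-3, I2 = 1.6e-3, I3 = 1.5e-3;  /* kg*m^2 */
            double w[3]   = {6.28, 0.05, 0.02};            /* rad/s: 60 RPM spin plus a small tumble */
            double tau[3] = {0.0, 0.0, 0.0};               /* magnetotorquer torque would go here */
            double dt = 0.001;

            for (long step = 0; step < 600000; ++step) {   /* 10 minutes of free rotation */
                double dw1 = (tau[0] + (I2 - I3) * w[1] * w[2]) / I1;
                double dw2 = (tau[1] + (I3 - I1) * w[2] * w[0]) / I2;
                double dw3 = (tau[2] + (I1 - I2) * w[0] * w[1]) / I3;
                w[0] += dw1 * dt;  w[1] += dw2 * dt;  w[2] += dw3 * dt;
                if (step % 60000 == 0)
                    printf("t = %3ld s  w = (%.3f, %.3f, %.3f) rad/s\n",
                           step / 1000, w[0], w[1], w[2]);
            }
            return 0;
        }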
  8. I don't know much about photography specifically, but signal-to-noise is mostly just a question of statistics. Still, take it with a grain of salt. A short exposure at high ISO reduces SNR due to shot noise. That's pretty obvious. But whether you stack a minute's worth of short-exposure images or do one minute-long exposure, it doesn't matter: stacking averages out shot noise exactly the same way that a long exposure would (see the sketch below). Ditto read noise. You can run into problems with quantization noise, but that's what the ISO setting is for. So if you are going to stack, you'll get the same SNR either way. It comes down to whether you plan to take just one shot or stack a bunch of images. With a single shot, go for as long an exposure as you can manage, and adjust ISO to get the contrast you want. If you are going to stack, then go for short, high-ISO takes. That will get you the same level of noise in the final image, but you'll have fewer problems due to movement/vibration.
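     The shot-noise part of that argument in numbers (a sketch, not a full noise model; the 200 photons/s rate is an arbitrary assumption, and read and quantization noise are ignored):

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double rate     = 200.0;  /* assumed photon rate on a pixel, photons/s */
            double total_t  = 60.0;   /* one minute of total integration */
            int    n_frames = 60;     /* sixty 1 s frames vs. one 60 s frame */

            /* One long exposure: signal = rate*t, shot noise = sqrt(signal). */
            double s_long   = rate * total_t;
            double snr_long = s_long / sqrt(s_long);

            /* Stack of n short frames: signals add, independent noise adds in quadrature. */
            double s_frame   = rate * (total_t / n_frames);
            double snr_stack = (n_frames * s_frame) / sqrt(n_frames * s_frame);

            printf("SNR, one long exposure : %.1f\n", snr_long);
            printf("SNR, stacked frames    : %.1f\n", snr_stack);
            return 0;
        }

     Both come out to sqrt(rate * total_t), which is the point: as far as shot noise goes, stacking and a single long exposure are equivalent.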
  9. Wouldn't it be much easier to combine multiple shots taken with shorter exposures than to deconvolve a single long-exposure frame? You gain a lot more information from an extra dimension (time) than you do from a known convolution kernel.
  10. We've sort of covered this in another thread. In general, threads within the same process share memory. However, if you want them to run on different cores, memory sharing becomes complicated. Asynchronous access is an issue, for example. It's something you have to work around. If two threads do completely different things and don't share much memory, then it's pretty straightforward to set things up to run in parallel. Running physics and rendering in two different threads on two different cores, for example, is pretty straightforward; only the locations of entities have to be shared between the two (minimal sketch below). But if you want to run several physics threads in parallel, you start running into additional complications. A CPU can run any number of threads by switching between them; that's basically the definition of a thread. But some Intel CPUs can run two threads at the same time using hyperthreading.
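     A minimal sketch of that physics/rendering split (POSIX threads, made-up numbers; the only shared state is the entity positions, guarded by one mutex):

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        #define N_ENTITIES 4

        static double positions[N_ENTITIES];          /* the only shared state */
        static int running = 1;                       /* guarded by the same mutex */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *physics_thread(void *arg) {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                if (!running) { pthread_mutex_unlock(&lock); break; }
                for (int i = 0; i < N_ENTITIES; ++i)
                    positions[i] += 0.1 * (i + 1);    /* "integrate" one step */
                pthread_mutex_unlock(&lock);
                usleep(16000);                        /* ~60 Hz tick */
            }
            return NULL;
        }

        static void *render_thread(void *arg) {
            (void)arg;
            for (int frame = 0; frame < 5; ++frame) {
                pthread_mutex_lock(&lock);
                printf("frame %d: x0 = %.2f\n", frame, positions[0]);  /* read shared positions */
                pthread_mutex_unlock(&lock);
                usleep(33000);                        /* ~30 Hz "render" */
            }
            pthread_mutex_lock(&lock);
            running = 0;                              /* tell physics to stop */
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void) {
            pthread_t phys, rend;
            pthread_create(&phys, NULL, physics_thread, NULL);
            pthread_create(&rend, NULL, render_thread, NULL);
            pthread_join(rend, NULL);
            pthread_join(phys, NULL);
            return 0;
        }

     The OS is free to schedule the two threads on different cores; the mutex is what handles the asynchronous-access problem mentioned above.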
  11. These have nothing to do with multiple cores. Two things happened. First, the amount of physical memory has skyrocketed. Every process has to have a chunk of memory dedicated to it, but you can easily run more processes than you have memory for. That's because addressing happens in virtual memory, rather than physical. Anything that doesn't fit in physical memory is written to disk in the page file. When processes switch, the page file might have to be accessed, and if there is a lot of data there, that can take a while. This is the biggest reason why a window sometimes appears to freeze when you switch to/from it. The other factor is that there have been a lot of improvements in the Windows kernel and drivers. Linux machines never really suffered from these problems to begin with.
     Nor did general code under Windows. You can write "while(1);", and that will not slow your computer down one bit. Why? Because that infinite loop can still receive an interrupt at any moment, so your PC can switch to and from a task caught in an infinite loop just like any other. In general, it doesn't matter if your process is doing something important, idling, or stuck in an infinite loop; you can switch to/from that process all the same. You can, though, save the OS a bit of time by using sleep() calls, since that tells the OS it can skip switching to the task for a while. For example, it's often a good idea to have sleep() in your code when the window goes out of focus, unless, of course, it's meant to do some background computations.
     The reason programs would freeze, temporarily or permanently, is that it's possible to set a flag on the CPU that prevents interrupts. However, only kernel or driver code can actually do that. There are some technical reasons why you don't want to interrupt some driver code, and a lot of Windows drivers were just written as one big no-interrupt block. That means that if there is an infinite loop there, such as an error in a piece of code that's supposed to wait until some buffer is ready, your entire computer freezes. This was a huge problem with W95/98 code, and it has been steadily improving over the past decade as Microsoft has changed a lot of its paradigms on kernel design and driver certification. In large part because these sorts of problems weren't plaguing *nix platforms.
     Edit: I do seem to recall a way to ask Windows not to switch tasks during a certain block of code. It's relevant for some sensitive callbacks, like buffer switching in audio playback. But I'm pretty sure that it's either handled completely in software, or the OS does the interrupt switch for you. Process-level code shouldn't physically have access to the no-interrupt flag.
  12. True. But usually, idle background stuff doesn't make that much of a difference to CPU usage. It tends to be a bigger issue for RAM/page file usage. And that's not going to change with multiple cores.
  13. The above calculations are for 6 coils of 50 turns each with no core. I see no point in paying for anything but the spool of wire for that. There might be a point in paying for something special to drive these coils, however. I'm still looking at the best way to build the bus for this thing. I don't know if it's better to have a single address decoder and accept a single point of failure, or to have each channel decode its own address, which adds complexity but improves reliability. (Either way, from the 8051's side a coil driver is just an address on the external bus; see the sketch below.) The coils themselves I just plan to wind onto the inside of the aluminum frame of the sat. If there is a need to 3D-print an endoframe to support the coils and sensors, that's always an option. The exoframe will be custom-cut aluminum to meet specifications, and will have multiple points of attachment to the plastic endoframe. That should provide sufficient resistance to any vibration during the ride.
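     What "an address on the bus" looks like from the 8051 side, in SDCC-style C (a sketch only; COIL_BASE and the one-byte-per-channel register layout are made up for illustration):

        #include <stdint.h>

        #define COIL_BASE 0x8000u   /* hypothetical external-bus address of the coil drivers */

        /* Set one coil channel (0-5, one per face) to an 8-bit intensity (0-255). */
        static void coil_set(uint8_t channel, uint8_t level)
        {
            volatile __xdata uint8_t *reg =
                (volatile __xdata uint8_t *)(COIL_BASE + channel);
            *reg = level;           /* compiles to a MOVX write; the driver latches it off the bus */
        }

        void main(void)
        {
            uint8_t face;
            for (face = 0; face < 6; ++face)
                coil_set(face, 0);  /* start with all coils off */
            for (;;) { }            /* attitude-control loop would go here */
        }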
  14. It usually does, because any sleep disorder carries a risk of narcolepsy. But it's effectively up to whoever does your medical. I don't know the details of the instructions they go by. It's pretty risky either way, though. You should get yourself checked out and see if there is something you can do about treating/managing it. (I'm going by FAR, but I believe JAR is very similar.)
  15. That's why I said that if you want to compute something in the background while playing a game, that's going to help you. But a single game is going to run as a single process, to allow common access to memory from all the threads within that process. A lot of games these days run multiple threads. And some game devs are starting to push some of the auxiliary threads, like audio processing, into a separate process so that they can be taken care of by another core. But that's a lot of extra work.
  16. One more question. Is there a good place to shop for just the mirrors/optics?
  17. Honestly, not a lot of software makes use of multiple cores. Writing code for multiple processors tends to be a pain, and a lot of people simply don't bother. It helps a lot if you have a background process, however. If you are encoding a video, for example, and want to play a game while the computer is busy working, multi-core is your friend.
  18. Thought more about magnetotorquing. Rather than making small coils with ferrite cores, it might be a good idea to use frames with a large area. (For a current loop, the torque is τ = I·A×B per turn, with A the loop's area vector.) We can count on a magnetic field of about 30 μT, maybe a bit stronger. The loops can carry an amp easily. So with 100 turns on a frame that goes around the full face of the sat, we get about 30 μN·m of torque. With a 1 kg cube, depending on how the mass is distributed, we could be looking at 0.01-0.02 rad/s² of angular acceleration. That gets us up to the necessary rotational speed in about 10 minutes, which is more than we could ask for (numbers checked in the sketch below). So that definitely takes care of attitude control. I'd probably want a 50-turn frame on each face with individual amplifiers for a touch of extra reliability. The 8051 handles all data access as reads/writes on the bus, so the coil amps can simply have an address on that bus, allowing up to 256 field intensity settings.
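     Checking those numbers (a sketch; the uniform-cube moment of inertia is my simplification, and the spin-up time assumes the field stays in the loop plane the whole time, so the real figure will be somewhat longer):

        #include <stdio.h>

        int main(void) {
            double B = 30e-6;   /* field strength, ~30 uT */
            double I = 1.0;     /* coil current, 1 A */
            double N = 100.0;   /* turns */
            double a = 0.10;    /* 1U face, 10 cm on a side */

            double tau   = N * I * (a * a) * B;  /* max torque, tau = N*I*A*B */
            double J     = 1.0 * a * a / 6.0;    /* moment of inertia of a 1 kg uniform cube, m*a^2/6 */
            double alpha = tau / J;              /* angular acceleration, rad/s^2 */
            double omega = 2.0 * 3.14159265;     /* 60 RPM target, rad/s */

            printf("torque  = %.0f uN*m\n", tau * 1e6);
            printf("alpha   = %.3f rad/s^2\n", alpha);
            printf("spin-up = %.0f s to 60 RPM at full torque\n", omega / alpha);
            return 0;
        }

     That gives 30 μN·m, 0.018 rad/s², and roughly 350 s at full torque, consistent with the ~10 minute figure once the varying alignment with the field is accounted for.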
  19. Qualification is just stress tests. I can't imagine any reason to have the expensive ICs in there if they are going to be the same package installed in the same sockets. There is also a form for waiving parts of the test requirements. This might be the sort of thing that can be waived. At any rate, it'd be worth the time to verify all of this at some point. I'll make a note of that. Forward-facing camera for general pictures; rear-facing camera taking pictures through the eco-chamber. There would definitely be spares constructed, but they'll probably be built with cheap ICs. There will probably need to be at least one spare solar panel as well, but not a whole spare set, in all likelihood.
  20. ZIP encryption is very vulnerable to a known-plaintext attack. If you happen to know enough about the contents to have the beginning of the archive figured out, or better yet, have the first file in the archive available decoded somewhere, it's very easy to crack. Admittedly, it doesn't give you the actual password, but rather the key codes derived from it (sketched below). But if you have those codes, you can open the archive. And yeah, Enigma is trivially cracked with a modern computer.
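     For reference, the "key codes" in question are just three 32-bit words. Here's a sketch of the traditional ZipCrypto key schedule, written from memory of the PKWARE APPNOTE (so treat the constants as something to double-check), and note this is only the key setup, not the known-plaintext attack itself:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static uint32_t crc_table[256];

        static void make_crc_table(void) {
            for (uint32_t n = 0; n < 256; ++n) {
                uint32_t c = n;
                for (int k = 0; k < 8; ++k)
                    c = (c & 1u) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
                crc_table[n] = c;
            }
        }

        static uint32_t crc32_byte(uint32_t crc, uint8_t b) {
            return crc_table[(crc ^ b) & 0xFFu] ^ (crc >> 8);
        }

        /* The entire cipher state: three 32-bit keys. */
        static uint32_t key0, key1, key2;

        static void update_keys(uint8_t c) {
            key0 = crc32_byte(key0, c);
            key1 = (key1 + (key0 & 0xFFu)) * 134775813u + 1u;
            key2 = crc32_byte(key2, (uint8_t)(key1 >> 24));
        }

        static void init_keys(const char *password) {
            key0 = 0x12345678u; key1 = 0x23456789u; key2 = 0x34567890u;
            for (size_t i = 0; i < strlen(password); ++i)
                update_keys((uint8_t)password[i]);
        }

        int main(void) {
            make_crc_table();
            init_keys("hunter2");   /* any password collapses to just these three words */
            printf("key0=%08lX key1=%08lX key2=%08lX\n",
                   (unsigned long)key0, (unsigned long)key1, (unsigned long)key2);
            return 0;
        }

     Recover those three words from known plaintext and you can decrypt the rest of the archive without ever learning the password.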
  21. Looks to me like we don't need to sacrifice rad-hard ICs either way. First round of qualification testing can be done with stand-ins, and TVAC bakeout would have to be done with the actual flight build regardless of qualification/protoflight route. Or am I misreading something?
  22. Hm. If the camera can produce a JPEG (something like this), I can probably get it to about 50-100 kbps over an 8051, streaming Huffman codes with reset points. The total size will be a bit larger than the original JPEG because of the reset codes (by a few percent), but if a block gets lost due to noise or any other problem, only that block will be missing, not the rest of the image (see the sketch below). The loss of baud rate is due to having to do some processing on the CPU to insert the reset codes, and an 8051 only does about 1 MIPS. At typical camera JPEG quality, we're looking at 10-20 s to beam the image down. That should be fine. Of course, a camera like that is going to buy it pretty fast, since it's not rad-hard. I'll see if there are rad-hard options out there that are reasonably priced and wouldn't require a dedicated processor just for that. Alternatively, we can look at cameras that simply have an addressable buffer on board, without any processing, but then we'd have to encode the image ourselves, and that would be considerably slower. JPEG might not be the way to go at all there.
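     The resync points here are standard JPEG restart markers (0xFFD0 through 0xFFD7). A sketch of finding them in the entropy-coded data, so the receiver knows where it can pick decoding back up after a lost block (naive scan; it assumes the buffer starts after the SOS marker, where those byte pairs only occur as markers):

        #include <stdint.h>
        #include <stdio.h>

        static void list_restart_markers(const uint8_t *scan, size_t len)
        {
            for (size_t i = 0; i + 1 < len; ++i)
                if (scan[i] == 0xFF && scan[i + 1] >= 0xD0 && scan[i + 1] <= 0xD7)
                    printf("RST%u at offset %zu\n", scan[i + 1] - 0xD0u, i);
        }

        int main(void)
        {
            /* Stand-in for a chunk of entropy-coded data with two restart markers in it. */
            const uint8_t fake_scan[] = {0x12, 0x34, 0xFF, 0xD0, 0x56, 0x78, 0xFF, 0xD1, 0x9A};
            list_restart_markers(fake_scan, sizeof fake_scan);
            return 0;
        }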
  23. We'd need a camera with a buffer. Once it has taken a still, we can process and beam it at a leisurely rate.
  24. While standing by, it won't draw more power than an old flip phone, and the battery only needs to last for 45 minutes between charges. A pair of 100 mAh+ LiPo cells should do just fine (rough budget below). Considerably more power will be drawn during attitude adjustments or while transmitting, but those operations can be carried out on the day side. Or we can add a bit more battery capacity to allow for limited night-side operations. Anything that leaves the Earth system might as well have a backup just on principle; given the cost of the propulsion and solar array required, a backup computer isn't going to break the budget.
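     The eclipse budget in numbers (a sketch; it assumes only the MCU and housekeeping are awake at night, and ignores converter losses and cold-temperature derating):

        #include <stdio.h>

        int main(void) {
            double standby_W = 0.125;              /* 8051 at ~25 mA, 5 V; comms and coils off */
            double eclipse_h = 0.75;               /* 45 minutes between charges */
            double pack_Wh   = 2.0 * 0.100 * 3.7;  /* two 100 mAh cells at 3.7 V nominal */

            double used_Wh = standby_W * eclipse_h;
            printf("eclipse use: %.2f Wh of %.2f Wh (%.0f%% depth of discharge)\n",
                   used_Wh, pack_Wh, 100.0 * used_Wh / pack_Wh);
            return 0;
        }

     That's around 13% depth of discharge per orbit, which is comfortable for LiPo cells.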
  25. That Dobsonian stand write-up is great. Is there anything similar that can be built to use CNC control for tracking? I can find good steppers and build the board and control firmware, but if you have any how-tos on building mechanical parts that are precise and smooth enough to do the tracking, I'd be very interested in that.
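     On the firmware side, the target step rate is easy to pin down; the mechanical question is whether the drivetrain can deliver it smoothly. A sketch of the arithmetic (the microstepping and gear ratio are assumptions, not a recommendation):

        #include <stdio.h>

        int main(void) {
            double sidereal_s = 86164.1;             /* one sidereal day, seconds */
            double sky_rate   = 360.0 / sidereal_s;  /* deg/s the mount has to track */

            double step_deg   = 1.8;                 /* common stepper: 200 full steps/rev */
            int    microsteps = 16;                  /* driver microstepping (assumed) */
            double gear_ratio = 100.0;               /* worm/belt reduction (assumed) */

            double deg_per_step = step_deg / (microsteps * gear_ratio);
            double steps_per_s  = sky_rate / deg_per_step;

            printf("sky rate   : %.4f deg/s (%.1f arcsec/s)\n", sky_rate, sky_rate * 3600.0);
            printf("resolution : %.2f arcsec per microstep\n", deg_per_step * 3600.0);
            printf("step rate  : %.2f microsteps/s for sidereal tracking\n", steps_per_s);
            return 0;
        }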