Posts posted by MBobrik

  1. Guys, I was talking to a friend of mine who's an amateur astronomer, and he told me about Colombia's first satellite (didn't even know we had one), which was a CubeSat called "Libertad 1" launched in 2007 on a Dnepr-1. The interesting thing, though, is that one of the engineers from that project lives in my city and is a friend of my friend, so I might be able to get in contact with him. As far as I can tell, he was in charge of communications and energy storage aboard the satellite.

    Any input from someone who has actual experience will be appreciated. Especially someone who did the energy management and communications, as those are the two biggest blank patches in our plan.

  2. Yay! We've settled something! :D Updating the doc now!

    Great, so the computer will be a Raspberry Pi, which will only be used occasionally, with a rad-hard 8051 as an always-on auxiliary backup? Is that the plan? As for single events, there's still a small chance the Pi will be fried while in use. Are there ways to avert this? This is mission-critical, remember. Like only using the Pi at night (orbital night, that is), or is that useless? Or maybe turn it off when power surges occur, is that feasible? I know nothing of the Raspberry Pi but would love to learn, if anyone here does. :)

    First, the 8051 flight computer will be the main one. It will be permanently on and in charge of keeping the spacecraft's orientation and rotation, managing the power supply, collecting low-volume data like chemical sensor readings, transmitting and receiving low-bandwidth data, and switching the media CPU on when needed, eventually giving it access to the transmitter to send the stored high-volume data.

    The media CPU will be the Raspberry Pi compute module with two cameras, both Raspberry Pi camera modules (it has direct support for two cameras at once, and an additional camera costs a mere $25): one internal camera with microscope optics and IR capability to observe the sample, and one external camera for shooting cool videos and perhaps a yet-unspecified additional experiment.

    None of this is rad-hardened, so its main line of defense against radiation will be playing possum: for timelapse videos of the moss it will be powered for only, say, 500 milliseconds each minute (we will have to hack the boot sequence so that only what the camera needs gets loaded in this case). Switching it on only at certain positions in orbit makes no sense, because inside the magnetosphere there is not much difference.
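
    A minimal sketch of the "play possum" arithmetic, assuming the 500 ms per minute figure proposed above (not a final design value):

```python
# Hedged sketch: how much does the proposed duty cycle reduce the media
# CPU's exposure to single-event upsets? Assumes SEU risk scales with
# powered-on time, and uses the 500 ms per minute figure from the text.

ON_TIME_S = 0.5      # powered window per timelapse frame
PERIOD_S = 60.0      # one frame per minute

duty_cycle = ON_TIME_S / PERIOD_S
print(f"duty cycle: {duty_cycle * 100:.2f} % of the time powered")

# If SEU risk scales with powered time, exposure drops by the same factor.
exposure_reduction = 1.0 / duty_cycle
print(f"exposure reduced ~{exposure_reduction:.0f}x vs. an always-on CPU")
```

So the camera CPU would face roughly two orders of magnitude less single-event risk than if it controlled the sat full-time, which is the point of the split architecture.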

    Another option to consider would be adding two redundant media CPUs (an additional ~$120 in cost and ~20 g in weight). If both survive, towards the end of the mission we could switch them on at once to shoot stereoscopic timelapse (and have a cool 3D video during release, if we deem it worth it).

    Further question @Mazon Del.

    How scientifically valuable is a normal color timelapse video vs. an IR timelapse video vs. both? What added science value, if any, would a stereoscopic 3D timelapse have?

    How big would the scientific loss be if we lost all imaging capability, say, in the middle of the mission, and only the numerical sensor data remained?

    I would like to support C/S band, but that requires an expensive transceiver and a proper tracking station. If we get budget for it, it would be nice. A good stretch goal? But we need to plan to make do with UHF. That will work as fallback in either case. Cannot hurt to plan support for broadband ops on media CPU, though.
    So K^2, how are we doing for power demand? We now know an 8051 will always be on, and a Raspberry Pi (which model?) sometimes. Orbit will be similar to the ISS's (right?). The transmitter is either really puny or a 40 W one (stretch goal). How much would each cost approximately (small vs. large power generation/consumption)?

    I've done some quick estimates...

    First I started with a module with known parameters, namely the 433 MHz transceiver module sold as a Raspberry Pi/Arduino accessory. It has:

    110 kbps

    Power 5 dBm

    Range 300 m

    To have a reasonable communication window of, say, 2 minutes at an orbital height of 430 km, we will have to extend the range to ca. 430*sqrt(2) kilometers,

    which means adding ca. 66 dB to the signal.

    First we amp it up to 10 W (I don't believe we can cram a 40 W or bigger amplifier into the sat), which is a 35 dB gain (5 dBm to 40 dBm) and leaves us with 31 dB on the demand side.

    A good low-noise amp on the ground will give us ca. 17 dB.

    14 dB remain, which is in the ballpark of a good, and thus proportionally directional, high-gain antenna.

    If we manage to keep the satellite's axis perpendicular during transmission, we can emit not omnidirectionally but in a torus (like a Hertzian dipole) and shave off another 2.5 dB,

    and if someone designs an antenna which emits in a thinner torus, we might get, say, 7 dB on the transmitter antenna side, which would leave us with ca. 7 dB, so a directional antenna on the ground would still be needed.

    I don't think we can go without tracking the satellite on the ground.

    Of course antenna aperture increases with the square of the wavelength. Thus going for 155 MHz would shave 9 dB, but the antenna would have to be proportionally bigger,

    which is not a problem on the ground, but fitting a proportionally bigger antenna on the satellite would be next to impossible, and a small antenna would negate any dB gained.

    So either someone designs for us a ca. >7 dB gain antenna @155 MHz that radiates in a uniform flat torus, is small and light enough to fit into a 1U CubeSat, and is yet robust enough to survive spinning at >80 rpm (and changing spin direction), or we have to go with 433 MHz and a high-gain tracking antenna on the ground.
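
    A quick sanity check of the dB arithmetic above, assuming the reference module's figures (300 m range at 5 dBm) and the 430 km orbit from the text:

```python
import math

# Hedged sketch of the link-budget arithmetic. The reference transceiver
# (300 m range at 5 dBm) and the 430 km altitude are figures from the
# text; the rest follows from the inverse-square law (free-space path
# loss grows 20 dB per decade of distance).

P_REF_DBM = 5.0        # reference transceiver output
RANGE_REF_M = 300.0    # its rated range
ALT_KM = 430.0         # orbital altitude

# Slant range for a usable pass, ~altitude * sqrt(2) as estimated above.
slant_m = ALT_KM * math.sqrt(2) * 1000.0

extra_db = 20.0 * math.log10(slant_m / RANGE_REF_M)
print(f"extra link margin needed: {extra_db:.1f} dB")   # ~66 dB

# Boosting the transmitter from 5 dBm to 10 W (= 40 dBm) buys 35 dB.
tx_gain_db = 10.0 * math.log10(10.0 / 1e-3) - P_REF_DBM
remaining = extra_db - tx_gain_db
print(f"after 10 W amplifier: {remaining:.1f} dB still missing")
```

This ignores frequency-dependent path loss and antenna aperture effects entirely; it only scales the reference module's rated range, which is itself a rough marketing number.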

  3. I've finished reviewing the media CPU boards, and there is really no competition to the Raspberry Pi compute module. There used to be one, the almost unknown ODROID-W board, which was planned as a direct competitor, but it has been canceled. There is the Arduino Nano, but it has no CSI interface. And the slightly larger and heavier Arduino Uno, but it has almost two orders of magnitude less performance and no real advantage. There are of course other, more powerful single-board computers, but I don't think trading 50% or so more CPU power for more than 7 times the weight is a good idea. If someone knows a CPU with CSI, ~10 g weight, >500 MHz clock, and very good camera software support that has a clear advantage over the Raspberry Pi, any suggestion is welcome.

  4. If you and/or anyone else could look into that, that'd be one problem solved! (Unless you're willing to tutorialize electronics to us laymen, do you know any good one in the internet? :))

    Well, I have done almost nothing these last several years and have to catch up myself (right now I am just sorting out suitable CPU boards), but ultimately, given the number of people involved, most of us will have to get involved in things electrical and/or programming.

    Based on this website there are 77. Whether they could be used for our goals and if that's enough is a different question

    Note: this site has a list of sats that can be used specifically for this (I guess)

    I looked briefly at it, and OMG, it looks like the aftermath of a nuclear war or a repelled alien invasion: reentered, reentered, reentered, dead, reentered, dead, partially dead, a few remaining operational... and all of them on ~1000 km polar orbits and with ~9.6 kbps bandwidths, which is practically useless for our purposes. So either we go through C-band and professional geostationary satellites (no clue how that could be done), or we have to find other solutions.

    just to be clear, we don't need continuous coverage by any means, just long enough to get data from the last several orbits

    That would be very risky, leaving all the data just sitting there, waiting for the last moment to catch it. The sat might glitch out, it may reenter a little sooner, or radiation might simply erase the data (it would have to be stored in non-volatile memory of sufficient capacity, which means non-hardened flash memory).

  5. Moments won't help you, because they don't deal with time-delayed functions. I would simply go for the least-squares method.

    E = sum( 1/2 * (e(t) + f1*e(t-1) + f2*e(t-2) - x(t))^2 ) = min

    dE/df1 = sum( (e(t) + f1*e(t-1) + f2*e(t-2) - x(t)) * e(t-1) ) = 0

    dE/df2 = sum( (e(t) + f1*e(t-1) + f2*e(t-2) - x(t)) * e(t-2) ) = 0

    sum( x(t)*e(t-1) ) - sum( e(t)*e(t-1) ) = f1 * sum( e(t-1)*e(t-1) ) + f2 * sum( e(t-1)*e(t-2) )

    sum( x(t)*e(t-2) ) - sum( e(t)*e(t-2) ) = f1 * sum( e(t-1)*e(t-2) ) + f2 * sum( e(t-2)*e(t-2) )

    assuming (stationarity)

    sum( e(t)^2 ) = sum( e(t-1)*e(t-1) ) = sum( e(t-2)*e(t-2) )

    sum( e(t)*e(t-1) ) = sum( e(t-1)*e(t-2) )

    thus

    sum( x(t)*e(t-1) ) - sum( e(t)*e(t-1) ) = f1 * sum( e(t)^2 ) + f2 * sum( e(t)*e(t-1) )

    sum( x(t)*e(t-2) ) - sum( e(t)*e(t-2) ) = f1 * sum( e(t)*e(t-1) ) + f2 * sum( e(t)^2 )

    substitute

    A = sum( x(t)*e(t-1) ) - sum( e(t)*e(t-1) )

    B = sum( x(t)*e(t-2) ) - sum( e(t)*e(t-2) )

    C = sum( e(t)^2 )

    D = sum( e(t)*e(t-1) )

    we get

    A = C*f1 + D*f2

    B = D*f1 + C*f2

    and thus

    f1 = (A*C - B*D)/(C^2 - D^2)

    f2 = (B*C - A*D)/(C^2 - D^2)
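
    The closed-form solution above can be sketched in a few lines; this is a hedged illustration with synthetic data (the coefficients 0.4 and -0.25 are made up for the check, not from the thread):

```python
import numpy as np

# Sketch of the closed-form least-squares fit derived above:
# choose f1, f2 minimizing sum((e(t) + f1*e(t-1) + f2*e(t-2) - x(t))^2),
# using the stationarity shortcut from the text (one value C for the
# lag-0 autocorrelation, one value D for the lag-1 autocorrelation).

def fit_f1_f2(e, x):
    e, x = np.asarray(e, float), np.asarray(x, float)
    t = np.arange(2, len(e))
    A = np.sum(x[t] * e[t - 1]) - np.sum(e[t] * e[t - 1])
    B = np.sum(x[t] * e[t - 2]) - np.sum(e[t] * e[t - 2])
    C = np.sum(e[t] ** 2)
    D = np.sum(e[t] * e[t - 1])
    f1 = (A * C - B * D) / (C ** 2 - D ** 2)
    f2 = (B * C - A * D) / (C ** 2 - D ** 2)
    return f1, f2

# Synthetic check: build x from known coefficients and recover them.
rng = np.random.default_rng(0)
e = rng.normal(size=10_000)
x = np.empty_like(e)
x[2:] = e[2:] + 0.4 * e[1:-1] - 0.25 * e[:-2]
x[:2] = e[:2]
f1, f2 = fit_f1_f2(e, x)
print(f"f1 ~ {f1:.3f}, f2 ~ {f2:.3f}")   # close to 0.4 and -0.25
```

Note the stationarity assumption introduces a small O(1/N) bias; for long series it is negligible.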

  6. So how about this: We rad-hard the metal cube in which everything else goes into, letting only small gaps for cables (if we go with deployable solar panels) and an external camera peephole. Unless I'm missing something here, that should be enough for covering all bases. Or does radiation phase through anti-radiation surfaces? If it does, then how can anything be rad-hard? I understand why the external camera needs to be rad-hard, as it's going outside and image quality decreases significantly with radiation exposure.

    Basically yes. The high-energy cosmic rays punch through anything we, or anything less than battlecruiser-sized, can feasibly have on board. There are the lower-energy protons and electrons from the solar wind, but against those even the chip package or the lens in front of the camera would be adequate protection, because the magnetosphere has already taken care of most of them.

    The average radiation at LEO is so low that it would take years until the chips start to be affected by the cumulative dose. The problem is occasional strikes (so-called single events) by high-energy particles, which, though rare, can cause the whole chip to short out destructively; thus any time interval a non-rad-hard chip is powered is like playing a round of Russian roulette.

    When the chip is not powered, it cannot short out, and thus only a hit energetic enough to obliterate a whole transistor at once could harm it (but such a hit would most probably take a rad-hard chip out too).

    Rad-hardened chips are different: they are built internally so that they are largely immune to this type of glitch.

    The physical structure of a gate is altered so that even flooding it with carriers does not cause it to short out (so-called "latch-up"), but only to transiently open. On top of that, all important logic is tripled, with additional voting logic, so that when only one gate gets hit, the output is not affected at all.

    And of course the transistors are larger and made from materials more resistant to ionizing radiation, to soak up more of the cumulative dose before their junctions start to degrade. Technically, since we will stick to LEO and won't perform any time-critical maneuvers, only protection against destructive single events is relevant to us.

    I cannot find the part, though K^2 said it was discussed, and I believe I saw something like it being discussed a few months ago. But the result was that the chance of the CPU being fried by such an event, if it had to run continuously during the entire mission, was too high, and we would thus risk losing the sat before obtaining enough data. I'm not sure how this conclusion squares with the results of previous CubeSats flown with non-hardened CPUs, and how many of them seized up prematurely. I feel we might have been a little too pessimistic here.

    But the combo we seem to be converging on, a rad-hard low-performance CPU plus an on-demand high-performance non-hardened one,* has advantages of its own. The simple but rugged 8051 can babysit the more fragile CPU, allowing us to restart, power-cycle, and even debug it remotely, and even if the more powerful CPU is destroyed, we still get some data. And because the high-performance CPU will not have to be powered the whole time, its overall probability of being fried will be correspondingly low. And because it will perform no function that can endanger the satellite, we can let it run more complex (and thus less thoroughly tested) software, and generally experiment with it even in flight.

    And then there's the deal with beaming down images/video. From what everyone has been saying, I strongly suggest we put a stretch goal on the Kickstarter, because then more money => bigger solar panels => more powerful comms => more bandwidth / signal quality => more and prettier pictures/videos/data! I think this is a nice solution, though it still does not answer how we'd do it without the extra money. We shouldn't rely on stretch goals.

    Agreed. As I mentioned, someone should look into the possibility of going through amateur radio sats. I myself have zero experience with ham radio, and will most probably be busy searching for and comparing suitable CPU boards.

    *As K^2 named it, we should call them media computer and flight computer.

  7. I second optocouplers. On power, there are options. A transformer requires a converter; I would rather just fuse it, and maybe drop in a low-pass filter for interference.

    Since the two boards use different voltages, we will have to use a DC-DC converter or have the main power supply produce two voltages. In both cases, electrical separation of the two and overcurrent protection come at negligible, even zero, additional cost. Anyway, a fuse alone is not fast enough to protect circuitry that can burn out in microseconds, so an overcurrent shutdown circuit is necessary anyway.

    And off-the-shelf cameras with JPEG support are cheap and plentiful.

    And such a camera requires the same decoupling as the co-processor, but has more than an order of magnitude worse compression, and the 8051 would have to shovel the data from it to the transmitter, which would be a significant load for it... and all that to save ~$30 and ~1 watt of power.

    Anyhow, I will now look more into Pi/ARM/other options for "media" co-processor.

    I will keep searching for other potential processor/board/camera candidates. Maybe we'll find something better, though the Raspberry Pi will be hard to beat.

  8. I was picturing this with a camera that has integrated compression chip. 8051 would only have to grab compressed data and package it.

    AFAIK there is no rad-hard camera module that can do that. Either bare chips without compression, or complete stand-alone cameras which cost an arm and a leg. And the 8051 we are going to use doesn't have enough processing power to interface directly with either of the two anyway.

    So long as it is only powered on for media operations, and no mode of failure, including a total short, can fry the whole sat.

    We can (and should) make it completely separate from the flight computer, communicating only through optocouplers and powered through a converter-isolated transformer which cuts the power instantaneously in case of overcurrent.

    Is Pi the best option, then? I agree that it should be ARM based. But custom solution would allow for better resource cross-use. If you want two CPUs, for example, you definitely do not need two boards. But I do appreciate simplicity of simply grabbing a board and libs.
    Well, I still think we should have a CPU designed for CubeSats.

    I am inclined to say that the Raspberry Pi compute module is our best choice. It has an ARM11 @ 700 MHz plus an additional 24 GFLOPS GPU, weighs only 7 g, and costs ~$30.

    "Professional" CubeSat computers have comparable or less computing power, weigh 5-10 times more, and cost two orders of magnitude more.

    Other boards of comparable size and price have much less computing power (the Raspberry Pi compute module is brand new, released only in April this year).

    Even if we were to solder together our own custom hardware, I seriously doubt we could beat that. Most probably we would end up with something comparably small, only less powerful, and we would need man-years of development of both the hardware and the software. Unless we had a few hardware pros on the team, which we don't.

    Having two processors on one board for redundancy would also not help much, because the supporting circuitry working with high-speed processors would not be rad-hard either. We would end up duplicating the whole board anyway.

  9. Well, video is going to be really little. Just the initial spin-up procedure (for visual confirmation) and maybe launch (is that even allowed?).

    or a few videos of orbiting earth. No real scientific meaning, but cool to post on youtube :)

    On the other hand, Mazon Del could definitely get some samples like, right now, film them in the dark under IR light at 30 FPS (or whatever an IR camera can do) for some days, then we'll speed up the video at different rates and see which one is acceptable for growth analysis. What do you say? One experiment is worth a thousand expert opinions!

    That is definitely an experiment worth doing. (I would guess the actual frame rate would be one frame per minute or per several minutes, not every 10 seconds.) And as for the speeding up, most cameras support, or can be rigged to support, time lapse; no need to waste space by recording @30 fps.

  10. The sat will be directly overhead for seconds.

    A quick back-of-the-envelope calculation shows that at ISS heights the satellite will be in range for ca. 2 minutes, not seconds.
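
    That back-of-the-envelope calculation can be sketched like this, assuming a circular 430 km orbit and the ~altitude*sqrt(2) usable slant range estimated earlier in the thread:

```python
import math

# Hedged check of the "~2 minutes in range" claim. Assumes a circular
# 430 km orbit and a usable slant range of altitude * sqrt(2) (~608 km).

RE_KM = 6371.0          # Earth radius
ALT_KM = 430.0
MU = 398_600.0          # Earth's GM, km^3/s^2

r = RE_KM + ALT_KM
d_max = ALT_KM * math.sqrt(2)   # max usable slant range

# Central angle at which the slant range reaches d_max (law of cosines
# on the triangle Earth center - ground station - satellite).
cos_theta = (RE_KM**2 + r**2 - d_max**2) / (2 * RE_KM * r)
theta = math.acos(cos_theta)

# Arc flown while in range, divided by orbital speed, gives the window.
arc_km = 2 * theta * r
v_kms = math.sqrt(MU / r)
print(f"communication window: {arc_km / v_kms:.0f} s")
```

This is for a pass directly overhead; off-zenith passes are shorter, so ~2 minutes is the best case.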

    Not to mention that there are a lot of amateur radio satellites in high orbits which we could route through, allowing us to transmit continuously for tens of minutes.

    I am not aware of any way we could handle a video stream like that without going to a custom solution on that. A 50-100kbps stream, which we'd be lucky to get, is already next to useless. Take one of these that's corrupted on pretty much every frame, and it is useless.

    I am afraid you are massively underestimating the power of video compression.

    Let us, for example, take a time-lapse series of 1280x720 images.

    If we encode them as individual JPEGs and send them separately, we will have to send ca. 2000 kilobits per image. But a good h264 codec will compress it to ca. 120 kilobits per frame.

    So during the hypothetical 2 min @ 50 kbps transmission window we can send

    - 3 jpeg encoded time-lapse frames

    or

    - 50 h264 encoded time-lapse frames,

    both not counting error handling overhead
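
    The frames-per-pass numbers above fall out of a one-liner; this sketch just restates the figures quoted in the text (2000 kb per JPEG, 120 kb per h264 frame):

```python
# Hedged sketch of the frames-per-pass arithmetic, using the figures
# quoted above: a ~2 minute pass at 50 kbps, 2000 kb per standalone
# 1280x720 JPEG, 120 kb per frame inside an h264 stream.

WINDOW_S = 120
LINK_KBPS = 50
JPEG_KBITS = 2000
H264_KBITS = 120

budget_kbits = WINDOW_S * LINK_KBPS
print(f"jpeg frames per pass: {budget_kbits // JPEG_KBITS}")   # 3
print(f"h264 frames per pass: {budget_kbits // H264_KBITS}")   # 50
```

Error-correction overhead would shave a bit off both numbers, but the order-of-magnitude gap between the two encodings remains.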

    Each one is Huffman-coded, but I plan to pack each into its own packet. Packets and essential data will be guarded with error correction codes. The actual JPEG data will not be, to save bitrate, but like I said, worst case scenario, it's an 8x8 block that's missing, not an entire stream.

    If we had to have a live video stream, of course we could not protect only the essential data and leave the rest unprotected. But if we have bidirectional communication, we can divide the stream into small blocks guarded with error correction codes and ask the sat to resend corrupted ones during the next transmission window, thus progressively downloading the entire stream correctly. Or we could send the data with 5x redundancy and still get 3 times the throughput of separately encoded JPEGs. Or a combination of both.

    Of course the UT69RH051 microcontroller, which I guess you intend to use, has no chance of handling any video.

    But concerning JPEG compression: it has a 20 MHz clock, 12 clocks per machine cycle, 256 bytes of internal RAM, and a neat but not very efficient instruction set (for example, just sequentially reading external RAM takes at least 5 machine cycles per byte). Frankly, it will be busy enough just doing navigation, transmission control, and generally running the sat. Trying to compress megabytes of raws into error-correction-enhanced JPEGs in between, the thing would be on its last legs.

    I would suggest we split off the low-processing-demand, high-reliability real-time tasks and let the rad-hard 8051 do them, while all the camera-related high-throughput data is handled by a normal CPU which has the power to deal with it.

    My idea is to use the Raspberry Pi compute module, which weighs 9 grams, costs $30, and has two CSI interfaces, so we can connect two camera modules, each $25 and 3 g.

    We will have to add extra circuitry so that the 8051 can power them up and down, reset them, connect and disconnect them from the transmitter/receiver, and so on... so we end up at something like 1.5 W power consumption, 20 grams, and $100, which is utterly negligible compared to the other costs.

    And because they are so inexpensive, both in cost and in on-board resource usage, we should use two of them for redundancy. Should one fail, the other will be used. If both survive long enough, we could even risk a little and switch them both on to get stereoscopic images.

    Most of the time they will be switched off, so the cumulative probability of a single-event upset destroying them will be orders of magnitude lower than if they had to be permanently on and control the sat. They will only wake up for a fraction of a second every few minutes to take a picture and add it to the stream, or when transmitting the data (or when we command them to take a video through the external cams).

  11. What would 10 minutes of streamed video do for us that a timelapse of still images would not? If the goal is to observe the moss growing, then surely a sequence of compressed images would be ideal. It would be a bit silly to transmit video if we intend to throw away most of the data by speeding it up anyway.

    Well, moss is not renowned for high-speed action. But we could save a lot of bandwidth by encoding the timelapse as a video: because the changes between frames are likely to be small, video compression will compress it an order of magnitude better than a series of JPEGs.

    But the 10-minute video was meant more or less for the external camera. (The Raspberry Pi camera module costs $25 and weighs 3 g, so we could afford to have even several of them on board.)

  12. That has not occurred to me at all.

    Now we've traded sarcastic remarks, we can go straight to business.

    We're going to be lucky to get 50kHz out of what we can reasonably use for comms.

    The 50 kHz is what? Analog frequency bandwidth? In that case a suitable modem can squeeze as much as 500 kbps out of it, and that is enough for medium-res video. Or did you mean 50 kbps? That is still an acceptable bitrate for low-res video.

    bitrate calculator

    And we don't have to stream in real time. We can buffer, say, 10 min of video and send it piecewise over several hours.

    But video aside, we will need compression for still images too. Transferring, for example, a 3 MB raw instead of a 200 KB JPEG is pure bandwidth waste.
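
    To put numbers on that waste, here is the downlink time per frame at 50 kbps, using the sizes quoted above:

```python
# Hedged sketch: time to downlink one still frame at 50 kbps, raw vs.
# JPEG, using the sizes quoted in the text (3 MB raw, 200 KB JPEG).

LINK_KBPS = 50

def downlink_seconds(size_bytes):
    """Seconds to transfer size_bytes over the assumed 50 kbps link."""
    return size_bytes * 8 / (LINK_KBPS * 1000)

print(f"raw  (3 MB):   {downlink_seconds(3_000_000):.0f} s")   # 480 s
print(f"jpeg (200 KB): {downlink_seconds(200_000):.0f} s")     # 32 s
```

So an uncompressed frame eats four entire ~2 minute passes, while the JPEG fits into a fraction of one.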

    Earlier in this thread we've established that over life span of a Cube Sat launched from ISS-like orbit, odds of CPU going dead are significant. Depending on space weather, we might get unlucky and lose it within days.

    Well, other CubeSats seem to accept the risk (or came to different numbers), but the decision has been made, and I do not wish to contest it. (We don't want to rebuild the whole circuitry from scratch when our next mission happens to be beyond LEO, anyway.)

    But the main doc should be updated accordingly, and all the Arduino, Raspberry Pi, and GoPro stuff replaced with the hardware choices that have already been made. Because right now it looks like the doc is up to date and the rad-hard 8051 idea from the beginning of the thread has been dropped, while in fact the opposite is the case.

    We can buy a rad-hard version of 8051 for $1.3k.

    Has it already been decided which of the rad-hard 8051 models we take? Because there is a range of them, from devices that would have a hard time handling static medium-res pictures up to 50 MHz devices which can, with some effort, stream video in real time (but those don't cost $1.3k), and the cam choice thus depends on which 8051 we choose.

    That is, unless we are going to use dual CPU like you've suggested. Rad hardened one to do the necessary stuff. And a regular one that will be switched on only briefly to reduce chance of single event effects.

  13. I really don't think we should use a Raspberry Pi as the CPU, can't we just modify the Raspberry Pi camera module to work with a CPU designed for CubeSats?

    I'm not sure whether any of the rad-hardened CPUs has a CSI interface.

    I took Raspberry Pi as a reference point because it is in the main doc.

    I am not sure whether this has been discussed here, but do we really need a rad-hardened CPU just for LEO? According to the wiki, normal chips can handle up to ~5-10 krad, while this link says LEO is 0.1 krad/year = ca. 7 rad during a 4-week mission, which is ca. 0.1% of what non-hardened chips are supposed to survive.
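
    The dose arithmetic is simple enough to spell out, using the figures from those links as quoted above:

```python
# Hedged sketch of the cumulative-dose arithmetic: LEO background dose
# over a 4-week mission vs. what non-hardened chips reportedly tolerate.
# Both input figures (0.1 krad/year, 5-10 krad tolerance) are taken from
# the links quoted in the text, not measured.

DOSE_RAD_PER_YEAR = 100.0      # ~0.1 krad/year at LEO
MISSION_DAYS = 28
CHIP_TOLERANCE_RAD = 5000.0    # low end of the quoted 5-10 krad range

mission_dose = DOSE_RAD_PER_YEAR * MISSION_DAYS / 365.0
print(f"mission dose: {mission_dose:.1f} rad "
      f"({mission_dose / CHIP_TOLERANCE_RAD * 100:.2f} % of tolerance)")
```

Which is why, as discussed above, the real concern for non-hardened parts in LEO is single-event effects, not the cumulative dose.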

    Or it is because of other intended missions beyond LEO ?

  14. Just a really quick advice to anyone new here, try to read at least from page 123 onwards. I know it's a lot of pages, but there's a lot that was discussed, and before that page we digressed a lot, so there's not much in there. Please do read all of the pages to get caught up on decisions and discussions that have already started in order to avoid repetition. Thank you!

    I admit I didn't read it all. But I think shouting at me because of that was unnecessary. Now, after reading through pages 123 to current, I found that the cam was indeed discussed a few times, but all suggestions were stand-alone cameras.

    No one mentioned camera modules or using raw chips. And thus I would like to present this little gem.

    Just a quick comparison with the suggested GoPro:

    [image: spec comparison table, Raspberry Pi camera module vs. GoPro]

    I would rather have a camera capable of on-board compression as a stand-alone unit.

    While not stand-alone, the module can send both a raw and an h264-compressed stream. No processing on the CPU is necessary beyond passing it on to the transmitter.

  15. To MBobrick, there's nothing done until now except for the basic mission goals. That means any input will be heard! And there's no leader either, as none of us can dedicate full time into this. In a way, we all have the same level of responsibility and authority because we aren't professionals on anything, but we are kinda good at something! It's truly a community project.

    Thank you very much for the answer. I have a few questions about the hardware... why use a complete camera (which has a DSP and processor and a lot of irrelevant or redundant stuff inside) when there will be a full on-board CPU? What about using just a camera module, or even just hooking a sensor chip to the CPU board? It would cut a lot of mass and power consumption.

    And another question... is it right to get into such details here, or is there some kind of more specialized forum?

  16. I wonder if they would ever consider marketing it, to the US perhaps...

    Every country that has nuclear weapons has enough plutonium to build plutonium heaters or RTGs. The reasons why other countries use them much less are political...

  17. I have been watching this thread since it started but i'm hopelessly behind, is there any way i can help? I would love to be a part of this.

    Nearly the same here: I've been busy with other stuff during the last 10 or so months (work, and too exhausted to do pretty much anything else) and missed almost all of this.

    I would love to make some meaningful contribution, but I don't even know where to begin: who is in charge (K^2?), who does the planning, where the sat's designs are, what is done, and what still has to be done.
