
KSP Community CubeSat


K^2

Ultimate Mission?  

104 members have voted

  1. Ultimate Mission?

    • LEO Only - Keep it safe
      55
    • Sun-Earth L1
      5
    • Sun-Earth L2
      1
    • Venus Capture
      14
    • Mars Capture
      23
    • Phobos Mission
      99
    • Jupiter Moons Mission
      14
    • Saturn Moons Mission
      14
    • Interstellar Space
      53


Recommended Posts

I really don't think we should use a Raspberry Pi as the CPU. Can't we just modify the Raspberry Pi camera module to work with a CPU designed for CubeSats?

I'm not sure whether any of the rad-hardened CPUs has an SCI bus interface.

I took Raspberry Pi as a reference point because it is in the main doc.

I am not sure whether this has been discussed here, but do we really need a rad-hardened CPU just for LEO? According to the wiki, normal chips can handle up to ~5-10 krad, while this link puts LEO at 0.1 krad/year, i.e. roughly 7 rad over a 4-week mission, which is about 0.1% of what non-hardened chips are supposed to survive.

Or is it because of other intended missions beyond LEO?
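
Quick sanity check of those numbers (the dose rate and chip tolerance below are just the figures quoted above, nothing measured):

```python
# Back-of-envelope check of the LEO dose argument above.
# Assumed figures from the post: ~0.1 krad/year in LEO, ~5 krad tolerance
# for ordinary (non rad-hard) silicon.

dose_rate_rad_per_year = 100.0      # 0.1 krad/year
mission_weeks = 4
tolerance_rad = 5000.0              # low end of the ~5-10 krad range

mission_dose = dose_rate_rad_per_year * mission_weeks / 52.0
print(f"Dose over {mission_weeks} weeks: {mission_dose:.1f} rad")
print(f"Fraction of tolerance: {mission_dose / tolerance_rad:.2%}")
# -> roughly 7.7 rad, about 0.15% of a 5 krad tolerance
```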


Might I make a suggestion?

I work in communications and design systems and antenna solutions to send and receive voice and data, and to remain efficient doing so.

For radio traffic through the bird I suggest you use a transmitter with some muscle, not this 1-5 watt garbage the hams throw up. I have a few 40-watt UHF 12 VDC transceivers that I can interface into a repeater unit; they use very little power and don't weigh all that much. I am not a fan of using ham radio frequencies, simply because the hams here in the States are jackasses who know it all and think they own every radio frequency they can tune into. I, on the other hand, hold a few commercial radio licenses (16), and I have no issue letting this project use one of those frequencies.

The problem with using the ham radio VHF bands is that the ham community will run the bird until the batteries run flat and will no longer charge. Most of them do not know when to stop, and since they didn't have to fund the project, why would they care?

For batteries, it seems you would go with some type of Li-ion or polymer battery, if they can take the conditions of course. With Li-ion, from my own experience, I am able to run a 10-watt UHF beacon transmitter (470 MHz) for 3 1/2 hours. Granted, I had to series-parallel the cells to get the amperage up, but the whole package was pretty reasonable.
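
For reference, a rough sizing sketch for that beacon example (the 10 W / 3.5 h figures are from above; the cell parameters are typical assumed values, not the specific cells I used):

```python
# Rough Li-ion pack sizing for the 10 W UHF beacon example above.
# Cell figures are typical assumed values (18650-class), not measured ones.
import math

beacon_power_w = 10.0
runtime_h = 3.5
energy_wh = beacon_power_w * runtime_h              # 35 Wh needed

cell_voltage = 3.7          # nominal Li-ion cell voltage (assumed)
cell_capacity_ah = 2.5      # assumed capacity per cell
cell_energy_wh = cell_voltage * cell_capacity_ah    # ~9.25 Wh per cell

cells_needed = math.ceil(energy_wh / cell_energy_wh)
print(f"Energy needed: {energy_wh:.0f} Wh -> at least {cells_needed} cells")
# A series-parallel arrangement (e.g. 2S2P) then trades voltage vs. current.
```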

For S band you're looking at a pretty penny in cost, as mentioned, BUT you could use the Ku band where the TV birds operate. The receivers are very cheap and may need some modification of course, but it may be something to look into.

For imaging, how about doing what NOAA does on their LEO birds, a similar WEFAX-type system? You can use it in VHF like NOAA does in the 137 MHz band. It's a slow image, but reliable, and it can be received by a very cheap USB SDR dongle with the proper software, or by a regular old police scanner tuned to the frequency and fed into a soundcard input to decode the fly-by image.

Also, why not place it in geosynchronous orbit?

Edited by elfnet

Hey elfnet! First of all, thanks for offering a frequency for us! We really appreciate it.

As for geosync orbit, that's a no-go I'm afraid. The whole cubesat needs to return to Earth within a reasonable timeframe; no space junk allowed. A geosync orbit would require thrusters to do that, which is a whole new level of complexity our team just could not make feasible at this point. I guess if we got $200k in the first week of Kickstarter (or some ridiculous amount like that) we could look into expanding the whole operation for "proof of concept" mission objectives (like thrusters). Realistically, though, a first mission with the bar set low will break the status quo of cubesats being not so much a hobby as a second job. That will then bring confidence in our small enterprise to make bigger, better, longer missions later on (with the ultimate objective being a Phobos soft landing, woo!). Until we can ACTUALLY DO THIS first mission in real life, however, that is out of the question.

Similarly, a 40-watt transmitter scares me because of the power draw. This is a very critical discussion that we are currently not treating with the seriousness and accuracy of estimates it needs. Since you're into radio operating, here are some links that K^2 posted. He apparently knows a little bit about radios too.

Ground stations

Thermal control (short)

Batteries

Recharging time

Solar panel configurations

Especially this last one. The highest recharge rate is 10 W, and that's before even accounting for what the other components draw...

You two should talk :)

_____________________________________________________________________________________________________________________________________________________

PS: Honestly, we really need a subforum right now. How's Endersmen doing on the website? Could we create a reddit or something? Ask for a subforum? We're not even stickied yet... :(

Edited by henryrasia

Video compression is useless. We won't have the bit rate for it. And we need a rad-hard option. I don't think there is one for Pi. There are plenty of camera options that will not require a specific CPU.

I'm sure you know more about this than I do... but I thought rad-hard components were generally unnecessary for cubesats due to LEO magnetosphere protection and having short lifespans anyway?


I believe video compression is something necessary to keep the bit rate down :wink:

That has not occurred to me at all. :confused:

We're going to be lucky to get 50kHz out of what we can reasonably use for comms. That isn't enough for a video feed. Period. So why would we want video compression, if we can't send even compressed video?

I'm sure you know more about this than I do... but I thought rad-hard components were generally unnecessary for cubesats due to LEO magnetosphere protection and having short lifespans anyway?

Earlier in this thread we've established that over the lifespan of a CubeSat launched from an ISS-like orbit, the odds of the CPU going dead are significant. Depending on space weather, we might get unlucky and lose it within days. Now, do you really feel like risking tens of thousands of dollars worth of equipment, not to mention launch costs, turning into an orbital paperweight because we went cheap on the CPU?

We can buy a rad-hard version of the 8051 for $1.3k. That's cheaper than each solar panel is likely to cost us. The 8051 is also a breeze to work with: a simple instruction set, but more than enough power for what we need. The CPU just needs to decode instructions received from Earth, manage attitude and sensors, and occasionally beam down data.

Among the other advantages are low power consumption, just 125 mW, and the fact that a garden-variety 8051 can be bought for less than $5 each, so we can burn through a dozen of them during early prototyping, or sacrifice a few for any sort of testing we might have to go through.


Ah.

I got that from the Sandy Antunes book ("Surviving Orbit the DIY Way") which says:

"Most picosatellites are not just limited in weight, but also don’t need to really worry about short-term transient damage like SEUs. Instead, we’ll just take our lumps and hope that the bulk of the data we downloadâ€â€itself just a fraction of the entire data capturedâ€â€will suffice."

and

"in general, a short-lived picosatellite does not have to significantly worry about the radiation environment for their typically shorter lifetimes."

Is that not accurate?

EDIT: Also, I thought you were talking about a rad-hard camera, not a rad-hard CPU. Did I misread that? If so, sorry.

Edited by NERVAfan
punctuation, fragment

Ah.

I got that from the Sandy Antunes book ("Surviving Orbit the DIY Way") which says:

"Most picosatellites are not just limited in weight, but also don’t need to really worry about short-term transient damage like SEUs. Instead, we’ll just take our lumps and hope that the bulk of the data we downloadâ€â€itself just a fraction of the entire data capturedâ€â€will suffice."

and

"in general, a short-lived picosatellite does not have to significantly worry about the radiation environment for their typically shorter lifetimes."

Is that not accurate?

EDIT: Also, I thought you were talking about a rad-hard camera, not a rad-hard CPU. Did I misread that? If so, sorry.

I think that book was comparing the weeks or days that a Cubesat needs to last to the years or decades that a normal satellite needs to last. In which case, buildup of radiation isn't too bad, since there won't really be enough time for significant amounts to build up. But we still have to consider single event failures, and would need a reasonably radiation resistant CPU for the possibility of that happening.

On another note, I also know SolidWorks, but I don't have a copy available that I can use currently. Going to university to use a copy in the computer labs is possible, but a bit of a hassle considering how far from uni I live. So if possible, I would rather not be the CAD guy, since the CAD guy will need constant access to SolidWorks to make and then adjust our design throughout the design process.


Is that not accurate?

EDIT: Also, I thought you were talking about a rad-hard camera, not a rad-hard CPU. Did I misread that? If so, sorry.

Picosatellites have even shorter lives than cubes. But again, it's a question of odds. I don't see a need to go for a different type of CPU, and with an 8051-equivalent we can go rad-hard. Why not cut our risks?

Camera isn't a single point of failure. We don't expect it to live very long, but we don't need it to. It's also easy to have some redundancy. Little reason not to have two front-facing cameras, for example.

My complaint was that using a camera specifically designed for the Pi is a bad idea if it limits us to the Pi, because we can't get a rad-hard Pi. But if we can access it from whatever rad-hard CPU we end up choosing, then I'm fine.


For external cameras? I don't know. It really depends on what we want to capture with it. It'd be purely for cool points, so whatever we all decide would be more interesting to have. A good looking ordinary still, or a wide angle shot. Given LEO and it being unlikely that we'd be able to get a very high quality sensor up there, we're pretty much limited to taking snaps of Earth's surface.


I'm back from reading through the thread.

I think that the whole craft should be as reflective as possible, and the solar panels thermally insulated from the rest of the craft. As suggested earlier, another, black part of the craft would also be fully insulated from the rest, except for a controlled way to transfer heat to it, allowing temperature control that is as stable and as simple as possible. Someone needs to work out how quickly it would cool down like this, to see if we need any heating at all or just a radiator.
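
A first-order radiative estimate is easy to sketch for whoever picks this up; every number below (emissivity, absorptivity, areas, internal dissipation) is an illustrative assumption, not a design value:

```python
# First-order thermal sketch for a 1U-sized body: equilibrium temperature
# from absorbed sunlight + internal dissipation vs. radiated heat.
# All parameters below are illustrative assumptions; Earth IR and albedo
# are ignored.

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/m^2/K^4
solar_flux = 1361.0         # W/m^2 at Earth distance

absorptivity = 0.2          # assumed (reflective outer surfaces)
emissivity = 0.8            # assumed
sunlit_area = 0.01          # m^2, one 10x10 cm face toward the Sun
radiating_area = 0.06       # m^2, all six faces of a 1U cube
internal_power = 1.0        # W of electronics dissipation (assumed)

absorbed = absorptivity * solar_flux * sunlit_area + internal_power
T_eq = (absorbed / (emissivity * SIGMA * radiating_area)) ** 0.25
print(f"Equilibrium temperature: {T_eq:.0f} K ({T_eq - 273.15:.0f} C)")
# With these assumptions the craft sits well below freezing, which is why
# the heating-vs-radiator question matters. Cooldown rate in eclipse would
# additionally need the heat capacity (mass x specific heat) of the structure.
```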

I think it would be simple to have the four sides carry solar panels that hinge up to make five panels in direct sun. All that would be needed is a simple spring to make them open and stay open, and a simple release mechanism.

How will we get the craft aligned with the sun before it spins up?

Also, I have a friend who can 'acquire' any number of copies of Autodesk Inventor, depending on whether it would be useful.

EDIT: Have a look at this as a possible atmosphere sensor: http://www.consumerphysics.com/myscio/


For external cameras? I don't know. It really depends on what we want to capture with it. It'd be purely for cool points, so whatever we all decide would be more interesting to have. A good looking ordinary still, or a wide angle shot. Given LEO and it being unlikely that we'd be able to get a very high quality sensor up there, we're pretty much limited to taking snaps of Earth's surface.

Assuming that power is a precious resource, external photography not being a priority, and no control of the cubesat... I would say that a wide-angle or fish-eye lens would be effective. Although the cheap fish-eye lenses I've used aren't usually that sharp and produce chromatic aberration. I know a few people who mess around with cheap little cameras for different projects. I can look into it a little and contribute more once the more important details have been worked out. (Finals week is coming up, so it might be a while before I get back to this.)


That has not occurred to me at all.

Now that we've traded sarcastic remarks, we can get down to business.

We're going to be lucky to get 50kHz out of what we can reasonably use for comms.

The 50 kHz is what? Analog frequency bandwidth? In that case a suitable modem can squeeze as much as 500 kbps out of it, and that is enough for medium-res video. Or did you mean 50 kbps? That is still an acceptable bitrate for low-res video.

bitrate calculator

And we don't have to stream in real time. We can buffer, say, 10 minutes of video and send it piecewise over several hours.

But video aside, we will need compression for still images too. Transferring, for example, 3 MB of raw data instead of a 200 KB JPEG is pure bandwidth waste.
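
To put numbers on that last point, at the 50 kbps figure being discussed:

```python
# How long the example image sizes above take at a 50 kbps downlink.

link_kbps = 50.0
raw_kbits = 3 * 1024 * 8        # ~3 MB raw frame
jpeg_kbits = 200 * 8            # ~200 KB JPEG

print(f"raw : {raw_kbits / link_kbps:.0f} s per image")   # ~8 minutes
print(f"jpeg: {jpeg_kbits / link_kbps:.0f} s per image")  # ~32 seconds
```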

Earlier in this thread we've established that over life span of a Cube Sat launched from ISS-like orbit, odds of CPU going dead are significant. Depending on space weather, we might get unlucky and lose it within days.

Well, other cubesats seem to accept the risk (or came to different numbers). But the decision has been made, and I do not wish to contest it. (We don't want to rebuild the whole circuitry from scratch when our next mission happens to go beyond LEO anyway.)

But the main doc should be updated accordingly, and all the Arduino, Raspberry Pi, and GoPro stuff replaced with the hardware choices that have already been made. Because right now it looks like the doc is up to date and the rad-hard 8051 idea from the beginning of the thread has been dropped, while in fact the opposite is the case.

We can buy a rad-hard version of 8051 for $1.3k.

Has it already been decided which of the rad-hard 8051 models we'll take? Because there is a range of them, from devices that would have a hard time handling even static medium-res pictures, up to 50 MHz clocked devices that can, with some effort, stream video in real time (but those don't cost $1.3k), and the camera choice thus depends on which 8051 we choose to use.

That is, unless we are going to use a dual-CPU setup like you've suggested: a rad-hardened one to do the necessary stuff, and a regular one that will be switched on only briefly, to reduce the chance of single-event effects.

Edited by MBobrik

My complaint was that using a camera specifically designed for the Pi is a bad idea if it limits us to the Pi, because we can't get a rad-hard Pi. But if we can access it from whatever rad-hard CPU we end up choosing, then I'm fine.

Ah, OK. (10 characters)


And we don't have to stream in real time. We can buffer, say, 10 minutes of video and send it piecewise over several hours.

But video aside, we will need compression for still images too. Transferring, for example, 3 MB of raw data instead of a 200 KB JPEG is pure bandwidth waste.

What would 10 minutes of streamed video do for us that a timelapse of still images would not? If the goal is to observe the moss growing, then surely a sequence of compressed images would be ideal. It would be a bit silly to transmit video if we intend to throw away most of the data by speeding it up anyway.


What would 10 minutes of streamed video do for us that a timelapse of still images would not? If the goal is to observe the moss growing, then surely a sequence of compressed images would be ideal. It would be a bit silly to transmit video if we intend to throw away most of the data by speeding it up anyway.

Well, moss is not renowned for high speed action. But we could save a lot of bandwidth by encoding the timelapse as a video. Because the changes are likely to be small between frames, video compression will compress it an order of magnitude or better compared to a series of jpegs.

But the bit about the 10-minute video was meant more or less for the external camera. (The Raspberry Pi cam module costs $25 and weighs 3 g, so we could afford to have several of them on board.)

Edited by MBobrik

Shannon-Hartley. We won't have a clean enough signal to allow for those sorts of bitrates. The sat will be directly overhead for seconds. Most of the data beam would be over distances of hundreds of kilometers, using a ham-licensed transmitter or similar. We'll have to use error correction as it is, which tends to be complicated to combine with general compression algorithms.
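
For anyone following along, the limit being referred to is C = B * log2(1 + S/N). A quick sketch with assumed numbers (the bandwidth is the 50 kHz figure under discussion; the SNR values are illustrative, not a link budget):

```python
# Shannon-Hartley channel capacity: C = B * log2(1 + SNR).
# Bandwidth and SNR values below are illustrative assumptions only.
import math

def capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 50e3  # 50 kHz, the figure discussed above
for snr_db in (0, 3, 10, 20):
    print(f"SNR {snr_db:>2} dB -> {capacity_bps(bandwidth, snr_db) / 1e3:.0f} kbps")
# At low SNR (a noisy, long-range link) capacity is on the order of 50-100 kbps;
# the ~500 kbps figure would require roughly 30 dB of SNR, i.e. a very clean signal.
```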

The great thing about JPEG is that the 8x8 pixel blocks are essentially independent. Each one is Huffman-coded, but I plan to pack each into its own packet. Packets and essential data will be guarded with error correction codes. The actual JPEG data will not be, to save bitrate, but like I said, worst case scenario, it's an 8x8 block that's missing, not an entire stream. (If you've ever seen a JPEG image that just turns into garbage at a particular line, that's what happens when raw JPEG data is off by even one bit.)
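
A rough sketch of what that per-block framing could look like. The packet layout here is made up purely for illustration (sequence number and length guarded by a CRC, the Huffman-coded block data left unprotected, as described); the real framing and error-correction code would be defined elsewhere:

```python
# Illustrative framing of independent JPEG 8x8-block payloads into packets:
# a small protected header (sequence number + length + CRC) plus the
# unprotected Huffman-coded block data. The layout is hypothetical.
import struct
import zlib

def make_packet(seq, block_bytes):
    header = struct.pack(">HB", seq, len(block_bytes))   # seq, payload length
    crc = zlib.crc32(header) & 0xFFFFFFFF                # guard the header only
    return header + struct.pack(">I", crc) + block_bytes

def parse_packet(pkt):
    header, crc = pkt[:3], struct.unpack(">I", pkt[3:7])[0]
    if zlib.crc32(header) & 0xFFFFFFFF != crc:
        return None                                      # drop this block only
    seq, length = struct.unpack(">HB", header)
    return seq, pkt[7:7 + length]

pkt = make_packet(42, b"\x12\x34\x56")   # stand-in for one Huffman-coded 8x8 block
print(parse_packet(pkt))                  # -> (42, b'\x124V')
```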

I am not aware of any way we could handle a video stream like that without going to a custom solution. A 50-100 kbps stream, which we'd be lucky to get, is already next to useless. Take one of these that's corrupted on pretty much every frame, and it is useless.


Well, video is going to be really little. Just the initial spin-up procedure (for visual confirmation) and maybe launch (is that even allowed?). Any other video will be pretty boring. And for the moss: if we film it it must be under IR light to avoid messing with its metabolism (which works in the dark, ask Mazon Del). But still the video would be 10 seconds per frame or something, so in one orbit it'll capture the equivalent of 18 seconds of 30 FPS footage. Is that really that much? If it is we should definitely consider beefing up comms and therefore power as well (like what elfnet suggested in page 166). On the other hand, Mason Del could definitely get some samples like, right now, film them in the dark under IR light at 30 FPS (or whatever an IR camera can do) for some days, then we'll speed up the video at different rates and see which one is acceptable for growth analysis. What do you say? One experiment is worth a thousand expert opinions!
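
Quick check of that 18-seconds figure (assuming a ~90-minute orbit):

```python
# Check of the "18 seconds of 30 FPS footage per orbit" figure above.

orbit_minutes = 90            # assumed LEO orbital period
frame_interval_s = 10         # one frame every 10 seconds, as proposed
playback_fps = 30

frames_per_orbit = orbit_minutes * 60 / frame_interval_s
print(f"{frames_per_orbit:.0f} frames/orbit "
      f"-> {frames_per_orbit / playback_fps:.0f} s of {playback_fps} FPS footage")
# -> 540 frames, i.e. 18 seconds of footage per orbit
```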


The sat will be directly overhead for seconds.

A quick back-of-the-envelope calculation shows that at ISS heights the satellite will be in range for roughly 2 minutes, not seconds.
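
For what it's worth, that back-of-the-envelope can be written out; the altitude and elevation cutoffs below are assumptions for a directly overhead pass:

```python
# Rough time-in-view for an overhead pass, as a function of the minimum
# usable elevation angle. Altitude and elevation cutoffs are assumptions.
import math

R_EARTH = 6371.0      # km
ALT = 400.0           # km, ISS-like orbit
PERIOD_MIN = 92.6     # minutes, orbital period at that altitude

def pass_minutes(min_elevation_deg):
    e = math.radians(min_elevation_deg)
    # Central angle between ground station and satellite at the elevation cutoff
    lam = math.acos(R_EARTH * math.cos(e) / (R_EARTH + ALT)) - e
    return 2 * lam / (2 * math.pi) * PERIOD_MIN

for elev in (0, 10, 30, 60):
    print(f"above {elev:>2} deg elevation: {pass_minutes(elev):.1f} min")
# Horizon to horizon is ~10 min, but the high-elevation (clean-link) part
# of the pass is only a couple of minutes.
```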

Not to mention that there are a lot of amateur radio satellites in higher orbits that we could route through, allowing us to transmit continuously for tens of minutes.

I am not aware of any way we could handle a video stream like that without going to a custom solution. A 50-100 kbps stream, which we'd be lucky to get, is already next to useless. Take one of these that's corrupted on pretty much every frame, and it is useless.

I am afraid you are massively underestimating the power of video compression.

Let us for example have a time-lapse series of 1280x720 images.

If we encode them as individual JPEGs and send them separately, we will have to send roughly 2000 kilobits per image. But a good h264 codec will compress it to roughly 120 kilobits per frame.

So during the hypothetical 2 min @ 50 kbps transmission window we can send

- 3 jpeg encoded time-lapse frames

or

- 50 h264 encoded time-lapse frames,

both not counting error handling overhead
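
The arithmetic behind those two numbers, using the per-frame sizes quoted above:

```python
# Frames that fit in the hypothetical 2-minute, 50 kbps window,
# using the per-frame sizes quoted above (no error-handling overhead).

window_kbits = 2 * 60 * 50          # 6000 kilobits
jpeg_kbits_per_frame = 2000
h264_kbits_per_frame = 120

print(f"JPEG: {window_kbits // jpeg_kbits_per_frame} frames")   # 3
print(f"h264: {window_kbits // h264_kbits_per_frame} frames")   # 50
```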

Each one is Huffman-coded, but I plan to pack each into its own packet. Packets and essential data will be guarded with error correction codes. The actual JPEG data will not be, to save bitrate, but like I said, worst case scenario, it's an 8x8 block that's missing, not an entire stream.

With a video stream we of course could not protect only the essential data and leave the rest unguarded. But if we have bidirectional communication, we can divide the stream into small blocks guarded by error-correction codes and ask the sat to resend any corrupted ones during the next transmission window, thus progressively downloading the entire stream correctly. Or we could send the data with 5x redundancy and still get 3 times the throughput of separately encoded JPEGs. Or a combination of both.
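
A sketch of how the ground-side bookkeeping for that resend scheme could look; the lossy-pass simulation is purely illustrative and stands in for the real radio link:

```python
# Ground-side bookkeeping for the "ask the sat to resend bad blocks next pass"
# scheme described above. simulate_pass() is a hypothetical stand-in for one
# downlink pass: only blocks whose error-correction check passed come back.
import random

def simulate_pass(wanted, loss_rate=0.3):
    return {i: f"block-{i}".encode() for i in wanted if random.random() > loss_rate}

def download_stream(total_blocks, max_passes=10):
    received = {}
    for pass_no in range(1, max_passes + 1):
        missing = [i for i in range(total_blocks) if i not in received]
        received.update(simulate_pass(missing))    # only intact blocks arrive
        if len(received) == total_blocks:
            print(f"stream complete after {pass_no} passes")
            break
    return received

download_stream(200)
```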

Of course the UT69RH051 microcontroller, which I guess you intend to use, has no chance of handling any video.

But even for JPEG compression: it has a 20 MHz clock, a 12-clock machine cycle, 256 bytes of internal RAM, and a neat but not very efficient instruction set (for example, just sequentially reading external RAM takes at least 5 machine cycles per byte). Frankly, it will be busy enough doing navigation, transmission control, and general control of the sat. Trying to compress megabytes of raws into error-correction-enhanced JPEGs in between, the thing will be on its last legs.

I would suggest we split off the low-processing-demand, high-reliability real-time tasks and let the rad-hard 8051 do them, while all the camera-related high-throughput data is handled by a normal CPU that has the power to deal with it.

My idea is to use the Raspberry Pi Compute Module, which weighs 9 grams, costs $30, and has two SPI interfaces, so we can connect two cam modules, each $25 and 3 g.

We will have to add extra circuitry so that the 8051 can power them up and down, reset them, connect and disconnect them from the transmitter/receiver, and so on... so we end up at something like 1.5 W power consumption, 20 grams of weight, and $100 of cost, which is utterly negligible compared to the other costs.

And because they are so inexpensive, both in terms of cost and of on-board resource usage, we should use two of them for redundancy. Should one fail, the other will be used. If both survive long enough, we could even risk a little, switch them both on, and get stereoscopic images.

Most of the time they will be switched off, so the cumulative probability of a single-event upset destroying them will be orders of magnitude lower than if they had to be permanently on and controlling the sat. They will only wake up for a fraction of a second every few minutes to take a picture and add it to the stream, when transmitting the data, or when we command them to take a video through the external cams.
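
To illustrate the duty-cycle argument with numbers (the wake time and interval are assumptions; the 1.5 W figure is the one quoted above):

```python
# Average power and powered-on fraction for the duty-cycled camera CPU
# described above. Wake duration and interval are illustrative assumptions.

active_power_w = 1.5        # figure from the post
wake_seconds = 0.5          # assumed time to grab and store one frame
interval_seconds = 5 * 60   # assumed: one frame every 5 minutes

duty_cycle = wake_seconds / interval_seconds
print(f"Duty cycle: {duty_cycle:.2%}")                               # ~0.17%
print(f"Average draw: {active_power_w * duty_cycle * 1000:.1f} mW")  # ~2.5 mW
# The powered-on exposure to single-event upsets shrinks by the same factor
# (~600x less than an always-on controller), which is the point being made.
```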

Edited by MBobrik

Well, video is going to be really little. Just the initial spin-up procedure (for visual confirmation) and maybe launch (is that even allowed?).

Or a few videos of orbiting Earth. No real scientific meaning, but cool to post on YouTube :)

On the other hand, Mason Del could definitely get some samples like, right now, film them in the dark under IR light at 30 FPS (or whatever an IR camera can do) for some days, then we'll speed up the video at different rates and see which one is acceptable for growth analysis. What do you say? One experiment is worth a thousand expert opinions!

That is definitely an experiment worth doing. (I would guess that the actual frame rate would be more like one frame per minute or per several minutes, not one every 10 seconds.) And as for the speeding up, most cameras support, or can be rigged to support, time lapse; no need to waste space by recording at 30 fps.

