
Posts posted by No one

  1. For an interplanetary ship, this is valid. For a launcher, less so. I don't need more delta-V per se but more payload to orbit. If I've got an asparagus lifter built for a 40 ton payload, and I try to slap a 400 ton payload and more asparagus boosters onto it, I'll end up with 1/5th of the TWR on my core stage. Since said core stage actually has the lion's share of the delta-V, chances are I won't make orbit.

    By contrast, if I've got a serially staged lifter built for a 40 ton payload, and I want to lift a 400 ton payload, I can gang ten of those lifters together and I'll have exactly the same TWR at all points in my launch.

    If it's something like 45 rather than 40 tons then sure, adding a couple of boosters to the asparagus lifter will work. But adding a couple of boosters to the non-asparagus lifter will equally work.

    Asparagus doesn't scale well, sure, but creating the first lifter is easier.

    Let's say I want to get something into orbit:

    I make a lifter which doesn't make it.

    If I'm using normal staging, I then have to make another stage which is bigger than the current stage by exactly the right amount. Too much and it's hard to build the next stage, too little and that stage doesn't add much.

    If I'm using Asparagus, I can just add another stage which is the exact same as the last one.

    Though actually since Career Mode was implemented I've generally been using SRBs for the first stage, and then asparagus (if necessary).
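The TWR point above can be sketched with a quick calculation. The thrust and mass figures below are made up purely for illustration; the point is only that ganging N identical serial lifters multiplies thrust and mass by the same factor, so TWR is unchanged:

```python
# Hypothetical lifter figures (not from any real craft):
# one serially staged lifter sized for a 40 t payload.
G0 = 9.81  # m/s^2, surface gravity used for TWR

def twr(thrust_kn, mass_t):
    """Thrust-to-weight ratio: thrust (kN) over weight (t * G0 = kN)."""
    return thrust_kn / (mass_t * G0)

single_thrust = 2000.0  # kN, assumed first-stage thrust
single_mass = 120.0     # t, assumed lifter plus 40 t payload

# Ganging ten lifters under a 400 t payload scales both numerator
# and denominator by 10, leaving the TWR identical:
print(twr(single_thrust, single_mass))            # one lifter
print(twr(10 * single_thrust, 10 * single_mass))  # ten ganged lifters
```

Both prints give the same value, which is the "exactly the same TWR at all points" claim in miniature.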

  2. Yes, because it's easy.

    If you need more delta-v on an asparagus-staged ship, you just add another stage. If you need more on a serially staged ship, you have to make the next stage enough bigger that it dwarfs the current one; and if you then want yet another stage after that, it has to be bigger still, until the whole design collapses under its own growth.

  3. Dres? You've got to be kidding me.

    #1) The delta-V doesn't cease to be a problem - note there are many threads about long burn times with LV-Ns... so there are the added complications of periapsis kicking, poor TWRs, more of a requirement for orbital rendezvous if you intend to return, etc.

    FWIW, it takes ~4.5x the delta-V to land on Dres as it does to land on Duna.

    1034 m/s for aerocapture at Duna (from a 100 km Kerbin orbit) vs 3989 m/s for a capture into a 12 km orbit at Dres - and another ~550 m/s to actually land

    For a total of about 4550 m/s for Dres vs 1050 m/s for Duna.
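Summing the figures quoted above as a quick arithmetic check (the individual numbers come from the post itself, not from me):

```python
# Delta-V figures quoted above, in m/s, starting from a 100 km Kerbin orbit:
duna_capture = 1034   # aerocapture at Duna
dres_capture = 3989   # capture into a low Dres orbit
dres_landing = 550    # approximate landing burn at Dres

dres_total = dres_capture + dres_landing
print(dres_total)                           # 4539, i.e. "about 4550"
print(round(dres_total / duna_capture, 1))  # 4.4, in line with the ~4.5x figure
```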

    #2) Airless bodies are harder to land on, IMO. It's pretty easy to pop chutes and just control descent rate with a touch of throttle.

    It's why a landing on Tylo is much harder than a landing on Kerbin (or Laythe), despite the lower gravity.

    #3) Dres's orbit is highly inclined, and launch windows are more irregular as far as dV requirements go (there is that rare launch window where you can arrive just at the AN/DN and ignore the inclination; all other launch windows require varying amounts of plane changing).

    1) Delta-V is only a problem in that the more you want the longer you have to burn. Waiting for a burn to complete is easy and doesn't affect which planet is the "easiest" to return from.

    2) Airless bodies are harder to land on but easier to return from. Once you've got there, there's practically no difference between landing+returning from Dres vs landing+returning from the Mun. Also, Dres has significantly less surface gravity. You need a rocket to return from Duna, but you can return from Dres with just an ion thruster.

    3) That's strange. Dres always seems to be at a window when I want to go there.

  4. If you want to go one-way, definitely Eve. If you want to land and return, definitely Dres.

    As soon as you get the Atomic Rocket, Delta-V ceases to be a problem so they're both easy to reach. Dres has less surface gravity and no atmosphere, it's basically just like landing on the Mun, except it's a bit harder to get to. You can take off and return all the way to Kerbin with nothing but an Ion Thruster.

  5. Clearly, KSP is just a large project by the marketing side of Squad. NASA/Some other space agency hired Squad to get people into science/math/engineering/SPAAAAACEE, and thus they created KSP.

    They'll only make another game if they get another thing which a game would be a good way of marketing.

  6. I generally use ion engines for getting Kerbals home. It's just so convenient: they weigh very little and they can get you from the surface of another world (other than Moho, Tylo, Eve, and Duna) back to Kerbin in a single stage.

  7. Just to be clear- are you talking about sapient, non-sentient machines? Like a machine that has a thinking mind but no feelings? Or are you talking about non-sapient, non-sentient machines (like the software I'm running on this computer)? Because I'm not sure if everyone in this thread understands the distinction between sapience and sentience, and it's a highly significant one.

    I think it may be exceedingly difficult to create a machine with all brains but absolutely no feeling. You would want an intelligent machine that works towards goals. If your machine was an asteroid mining overseer, for example, it would want to perform an exemplary job at mining asteroids. If it did not feel this way, then why would it be motivated to mine asteroids?

    Additionally, I think that it would be very important for all intelligent (sapient) machines to have a reasonable moral sense. For example, imagine the asteroid mining machine is out somewhere, mining asteroids, when some nearby space colony finds itself in distress. The machine should have the sense to abandon what it's doing and go help, not prioritize asteroid mining over saving lives. Also, imagine the machine is mining some Earth-crossing asteroid. You don't want it blasting the thing to bits to, like, get to some buried deposit of metal, because that could create asteroidal shrapnel that could collide with Earth. It has to have the sense to act responsibly.

    From the perspective of the company mining the asteroid, it would be far more costly to create something which would go help someone than something which wouldn't, and it would also be totally pointless. If you were being mugged and you saw a car parked nearby, would you expect the car to come to your aid? Of course not. So why would you expect the asteroid-miner to help you?

    And if you think that an unthinking machine- which is not sapient (but can actually still be sentient)- can do a job more cheaply than a thinking machine (a sapient machine) at very complex tasks, then you don't understand programming or the nature of complex tasks at all. The problem is, when a task gets complex enough, it becomes cheaper and easier to use a thinking being to perform that complex task. It is simply not possible to program the machine with all the proper responses necessary for a highly complex task. The machine must be able to come up with solutions of its own. Right now we use humans for these tasks. If it was easy and cheap to program "automatons" to do any job, then how come people still have jobs? Why haven't we been entirely replaced by "automatons"? The fact is, the policeman needs to think; the engineer needs to think; heck, even people building buildings need to think about how to do the job correctly.

    Currently we cannot make a thinking machine at all. I am saying that we would be able to make an automaton to do that before we could make a non-automaton. Anything we build will be an automaton. A thinking machine would just be an automaton so complex it transcends being an automaton. But it would still be built out of more basic programs, more automaton-like parts.

  8. From a strictly utilitarian point of view, not much really, unless you want something like a robot friend or artist, in which case those things could matter. But the automaton executes morality as defined by its creator; the free-thinking intelligence executes morality as defined by itself. So from the original point of view in this thread it makes a world of difference. I personally wouldn't allow anyone to pull the plug on a free-thinking machine intelligence if it was advanced enough that I would consider it a self-aware, morally acting, sentient creature. For the automaton, who cares? It's a program.

    But this thread is talking about whether or not forcing a sentient program to work is slavery. My point is that anything a sentient program can do, an automaton could do cheaper, and thus there would be no enslavement of sentient programs because there would be no sentient programs working.

    In the cases of robofriend and robo-artist, you can't force a sentient thing to be a friend, and you can't force a sentient thing to be an artist of the kind which would require sentience, so there's still no issue.

  9. I'd personally advise against using the word soul in this context. It's a word that's pretty heavily loaded with subjectivity and values, and in order to use it to convey any meaningful ideas you'd have to construct an exact definition for it first. And by doing that you'd probably contradict almost every other person's view of what a soul is.

    In any case, quantifying morality and ethics has been done in many different ways by a vast number of people. Some of the theories have had limited success in describing how people behave in very specific settings, such as consumer behaviour. But on a more general level, the problem with quantifying morality is that it always leads back to the people who are defining the values in the first place.

    So to some small extent you can quantify your own morality, and you can quantify the morality of people in general or of any other subgroup. But since morality in itself is a subjective and largely qualitative concept, you can only quantify it subjectively and in retrospect. What I mean by this is that starting from a completely "blank person", you don't first arrive at certain values, then apply them to your own thinking, and then reach a moral decision. What happens is that you first develop your own concept of morality, then quantify it, and then you can in theory use that quantification to predict your own decision in a given moral problem. If you don't have morality in the first place, you cannot assign the values you need to quantify it.

    So the problem here is that you can program a synthetic mind to behave according to some moral standards. But the values of morality are actually chosen by you, and the quantification process is also creator-dependent. So what it ends up representing is your idea of a system which quantifies your idea of moral values. But now the machine is deprived of free will, as you're hardcoding it to "like" and "dislike" some actions more than others. And now we're already pretty deep in the territory where the machine is pretty much just an automaton pretending to be a moral subject. So you can call this morality, but I don't see how it even remotely resembles the morality humans possess.

    Very well then, replace all instances of "a soul" with "sentience". It makes no difference.

    What is the advantage of a thinking sentient AI over an automaton which merely executes the morality of a human? We know that the latter will more closely follow accepted ethics than the former, the latter is cheaper, and the latter is easier to control. What advantage does the former possess?

  10. @All the people who say "Pluto is still a planet to me"

    What about Ceres?

    Of course. Why do people get hung up on Mercury? Sure it's the smallest planet, but that's because it's the closest to the Sun. There just wasn't as much stuff there to coalesce into a single body when the solar system formed. The gases, ices, and much of the rocky material were vaporized and blown away by the nearby Sun, which is why Mercury is mostly made of metal. (That's also why the planets formed beyond the frost line at ~3 AU are much bigger than the closer-in planets, since there's a lot of water in the solar system but it couldn't coalesce closer in than the frost line.) The 8 planets actually all have a similar internal structure when you account for the temperature at which they formed, with farther-out planets being able to accrete substances with a lower boiling point. (Mars is kind of an outlier, but that can be explained by the Grand Tack model.)

    But Mercury is still huge compared to any other non-planets orbiting the Sun. Mercury is more massive than the entire Kuiper belt for example, even though the Kuiper belt contains about 100,000 objects larger than 100 km in diameter. And Mercury is about 100 times more massive than the entire asteroid belt.

    We don't know the mass of the entire Kuiper belt, some estimates put it at more than Mercury, some estimates put it at less.

    Mercury is about 20 times the mass of Eris, and about 1/20 the mass of Earth. When you look at it based on mass, Mercury seems a rather arbitrary place to draw the line.
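The quoted ratios check out roughly against standard published masses (the values below are approximate, rounded figures):

```python
# Approximate body masses in kg (rounded standard values):
mercury = 3.30e23
eris = 1.66e22
earth = 5.97e24

print(round(mercury / eris))   # ~20: Mercury is about 20x Eris
print(round(earth / mercury))  # ~18: Mercury is roughly 1/20 of Earth
```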

  11. @The people saying "because we can":

    But then those AIs wouldn't be doing jobs, and this discussion is about AIs doing jobs against their will.

    @Velocity

    Why does one need a soul to create a moral decision? If we can quantify the soul, surely we can quantify morals.

    After all the easy jobs are taken by "stupider programs", corporations can raise their profit by replacing the hard jobs (those that took human intelligence) with cheaper-than-human AI. As long as there are jobs left that humans can fill, there will be opportunity to save labor costs by replacing them with better and better AI.

    The only point where you could argue that smarter AIs aren't profitable anymore is in a completely humanless economy. And even there, even smarter AI may be more cost-effective for some tasks.

    An economy can be completely humanless; the driving force of the economy doesn't need to be consumption by humans. As long as there is space with resources to expand into, an economy consisting of automated corporations could fill the galaxy.

    But what jobs actually truly require human intelligence? If we can program something with a soul, surely we can program something with only an individual piece of a "soul" rather than the entire thing. Sentience is inefficiency. It means that part of your brain is doing something else. Everything unachievable by computers at the moment which would be achievable by an AI would probably be achievable by a lesser program, in fact, it would probably be achieved before an AI, as a building block to an AI.

  12. Why are we giving them souls? I mean, that just sounds inefficient.

    An AI would:

    Take a lot of computing power

    Be very expensive to create.

    Be expensive to run (Electricity costs are high when you're dealing with the kind of power they would need)

    Be devoting a lot of its processing power to things not its job.

    A stupider program would:

    Take a lot less computing power

    Be cheaper

    Use less electricity to run

    Be devoting maximum attention towards its job.

    As an evil greedy profit-focused corporation, which option would I take?

  13. there will be a generation that has never heard of Pluto, just like they won't have heard of Ceres or Vesta.

    FTFY.

    I personally had never heard of Ceres until I started playing KSP. Actually, I vaguely remember seeing it while randomly reading Wikipedia, but to me it was "some sort of icy asteroidish thingy, I think, I forget". And I'm not sure if I remembered it by name or if I only know that the thing I read about was Ceres because I now know what Ceres is. Most people I know don't play KSP and have never heard of Ceres. When it comes to space, most people have heard of the planets, the Sun, the Moon, Pluto, and then that's it.

  14. We had this argument a very long time ago when the asteroid belt was discovered and decided that Pluto was not a planet. I don't see why we needed to have it again when we actually discovered that Pluto wasn't a planet.

    Personally, I think the definition is too permissive. We need to find a way to exclude Mercury. And possibly Mars. Venus and Earth are cool, they can stay. And then let's put the gas giants in their own category. And then we end up with 2 planets. Umm....

    We should do the following:

    Extend the definition of "planet" to include what are currently dwarf planets.

    Subdivide "planet" into the following:

    Dwarf Planets* (Ceres, etc.), Lame Planets** (Mercury, Mars), Earthlike Planets (Earth, Venus), Gas Giants* (Jupiter, Saturn, Uranus, Neptune)

    *May need subdivisions

    **Ok so maybe we could give them a different name. But I like "Lame Planets", I mean, they are pretty lame.

    And then we need to take a look at the word "Moon". What the heck are Ganymede and Euporie doing in the same category?
