Posts posted by mikegarrison

  1. Here's how it works, to the best of my knowledge. You write a plan of what you are going to test. The plan gets approved. You conduct the test.

    If things go according to plan, fine. If they don't, whatever deviated from the plan is what needs to be investigated and resolved.

    It's up to SpaceX what they write in their plan. If it had just said "Splashdown of the first stage, destructive reentry for the second, both at predetermined spots..." then that would have been good enough. Obviously the plan must not have said that, though.

  2. 11 hours ago, AckSed said:

    Can you make a staged combustion engine more reliable? Peter Beck of Rocket Lab says you can: by building it to withstand and run at extremes, then under-driving it, you end up at the same level of reliability as a gas-generator rocket engine. Neutron's Archimedes is ox-rich, not full-flow, but building it to run at max and then under-running it at a more comfortable level is a solid path to good reliability.

    That's just standard engineering. You always have to balance performance margins: the risk that you unexpectedly exceed the limits versus the risk that you overdesign and fail on cost, weight, etc.

    The skill and experience is in knowing how close to the edge you should get. Race car engineer Carroll Smith wrote in one of his books: "An engineer is someone who can do for a dime what any fool can do for a dollar."

  3. 19 hours ago, Meecrob said:

    Mikey, my boy, stop trying to snipe at me re: Elon Musk. Do you really think that the guy pushing to go to Mars, who is an engineer, and owns a space launch company that is building a rocket aimed at Mars has his feet kicked up on his desk while his minions do his bidding? You're an engineer, don't you love creating things? You confuse me, buddy.

    If you don't believe me that he is an engineer, listen to "IRL" Rocket Scientist Lauren Lyons:

    Edit: The higher you go in public companies, the more what you say is true. Do yourself a favour and read up on Elon's management style. I know you have decades of Boeing experience, so you are used to bureaucracy and appeasing shareholders and all that fun stuff. Bottom line is, Elon knows how to run a company. He gets specialists in to do tasks, unlike other companies who get one of the 500 or so CEOs that float around and have no real experience in anything other than management. Go look at Boeing and Blue Origin. Research their leaders, form your own opinion. I doubt you will listen to me.

    Anyways, have a great day.

    My name is not "Mikey", I'm not your boy, and why are you trying to make this argument personal? (Against the forum rules, I will add.)

  4. 35 minutes ago, Deddly said:

    He did make it more pointy. I think that counts as engineering.

    Seriously though, isn't he the head engineer?

    He's the CEO (and has about 80% control of the voting shares). He can call himself the "head engineer" if he wants -- nobody who works for him is going to tell him otherwise.

    But the higher you go in management, the less actual engineering you do, despite sometimes being the person who ultimately says yes or no about major decisions.

    FWIW, he has an economics degree, not an engineering degree. Not that I'm saying having a degree is necessary, but it is somewhat indicative. If you look at his history, it's not at all clear he has ever worked as an engineer.

    Shotwell, on the other hand, does have an engineering degree and clearly has worked as an engineer in her professional background.

  5. 4 hours ago, Meecrob said:

    Look man, with all due respect, would you please let Elon Musk design his own rockets? He is an actual engineer, and a decent one at that. He is 12 steps ahead of what you are thinking of here. Just let him show you what he has planned. We are getting really close to the test flight campaign actually getting started.

    I am quite confident that Elon does approximately 0% of the engineering on any SpaceX rocket.

  6. 1 hour ago, Deddly said:

    Falcon 1 failed a few times though, right? As I understand it, Falcon 1 was basically just one step in the development process for the Falcon 9.

    Everything can be said to be a step in the development process for everything else that comes after, but as I understand it, Falcon 1 was originally expected to be a viable launcher, not just a step on the way to Falcon 9.

  7. Of course the main issue is that metallic hydrogen may not even exist.

    But the reason hydrogen has the potential for a high Isp is that it is the lightest atom. Any heavier atoms added to the reaction mass will only lower the Isp if they also have to be carried onboard. If they are drawn from outside (ramjet, etc.), then yes ... but that obviously only works in the atmosphere and has so many other problems that it has never been done, except as "Stage 0" of air-launched rockets.
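
    To make the mass argument concrete, here is a minimal sketch (my own illustration; the 3500 K chamber temperature and the gamma value are made-up round numbers) of the ideal-nozzle relation v_e = sqrt(2*gamma/(gamma-1) * R*T/M). At the same temperature, exhaust velocity, and therefore Isp, scales as 1/sqrt(M), which is why pure hydrogen exhaust beats anything heavier:

    ```python
    import math

    G0 = 9.80665  # standard gravity, m/s^2
    R = 8.314     # universal gas constant, J/(mol*K)

    def ideal_exhaust_velocity(molar_mass, temp_k, gamma=1.3):
        # Ideal, fully expanded nozzle limit. Real engines fall well short
        # of this, but the 1/sqrt(M) scaling is the point here.
        return math.sqrt(2.0 * gamma / (gamma - 1.0) * R * temp_k / molar_mass)

    # Same (made-up) chamber temperature, two exhaust species:
    for name, m_kg in [("H2  (2 g/mol)", 0.002), ("H2O (18 g/mol)", 0.018)]:
        v_e = ideal_exhaust_velocity(m_kg, 3500.0)
        print(f"{name}: v_e ~ {v_e:5.0f} m/s, Isp ~ {v_e / G0:4.0f} s")
    ```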

  8. On 2/23/2024 at 6:40 PM, StrandedonEarth said:

    Well, it is still sending (whispering?) data. Call it a partial success. Certainly not a full success if some instrumentation wasn't working on descent. Sort of the equivalent of "Any landing you can walk away from"

    More like the equivalent of a landing you can crawl away from, with two broken legs. Like, it *could* have been worse, but not by very much.

    16 hours ago, StrandedonEarth said:

    Too bad the switch could not be flipped remotely....

    Manual lockouts are intentionally not operable remotely. They didn't want any chance that somebody would be staring into a non-eye-safe laser when someone else remotely flipped the switch to turn it on.

  9. 14 hours ago, magnemoe said:

    Pretty sure animals know that humans are not very dangerous outside of hunting season and not dangerous in suburban settings where hunting is illegal.

    If you are a herbivore, a lack of fear tends to be dangerous. But running away from harmless things is also dangerous (unnecessary caloric output, plus the chance that you alert or even blunder into something actually dangerous). At some point many animals learn to be cautiously trusting around other animals that don't seem hostile.

  10. 1 hour ago, PakledHostage said:

    I'll admit to knowing very little about AI or how our own brains work, but my impression that I mentioned above comes from watching my kids learn as they grew from infancy into childhood and my own reflections on the subject. But having said that, how does the passage above support your argument? Current AI is basically pattern recognition, and LLMs can do all of what you describe as "true language processing"? One can ask it questions and give it instructions in plain language and it does a reasonable job of responding appropriately. Surely doing that requires contextual understanding of nouns, verbs, modifiers, and the like?

    Edit: Thinking about it some more, even your scenario of the elephant can be interpreted as an example of pattern recognition. Your brain learns that that combination of shapes and colours is an elephant. It learns how that combination of shapes and colours changes as the elephant is viewed from different angles. It learns object permanence. These are all patterns. Putting them together allows the brain to formulate an expectation or model of the world it is experiencing. Some of these patterns that it learns, like object permanence, arise out of interacting with the physical world, but that interaction still breeds a recognition of patterns and expectations that future situations will follow the same pattern as was previously encountered. Babies learn object permanence early in their development. They learn that things fall. They learn what elephants and petunias look like. They learn to expect that falling petunias don't think "oh no, not again!"

    It's 30 years old now, but a decent place to start is The Language Instinct, by Steven Pinker.

    When I was in college, my girlfriend was working in Pinker's lab, doing things like searching through transcripts of recordings of kids, looking for very specific grammar errors. One thing you find is that, unlike LLMs, kids learn language by figuring out the rules, rather than just associating words from the usage they hear. This shows up in a kind of grammar error they make that they never hear adults make: regularizing irregular constructions. A kid has almost certainly never heard an adult say "Joe goed to the store," so this is clearly not the kind of learning an LLM does, where it regurgitates the things it was trained on. Instead, the kids internalize the rule that you add -ed to the verb, but don't (at first) pick up the irregular nature of the verb "to go". They aren't just repeating things they have heard other people say, because other people don't say that.

    Yes, it's a "pattern" that they learn -- add -ed to the end of the verb -- but that implies they have already recognized that some words are verbs and some are not, that different tenses have different suffixes, and so forth. If they were learning like LLMs, they would always use "went" instead of "goed", because all their training data uses "went".
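
    As a purely hypothetical toy sketch of that two-stage picture (the function names and the staging are my own, for illustration only): a productive rule generalizes even to verbs the learner has never seen, and the "goed"-style error falls out of applying the rule before the exceptions are memorized.

    ```python
    # Toy illustration only -- no claim about real acquisition models.

    IRREGULAR_PAST = {"go": "went", "eat": "ate", "run": "ran"}  # memorized exceptions

    def past_tense_early(verb):
        """Early stage: the productive '-ed' rule alone. Overregularizes."""
        return verb + "ed"

    def past_tense_later(verb):
        """Later stage: memorized exceptions override the still-productive rule."""
        return IRREGULAR_PAST.get(verb, verb + "ed")

    for verb in ["walk", "go", "blick"]:  # "blick" is a nonsense verb
        print(f"{verb}: early={past_tense_early(verb)}, later={past_tense_later(verb)}")

    # A system that only reproduced its training data could never say "goed",
    # and could not inflect a never-before-seen verb like "blick" at all.
    ```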

  11. 4 hours ago, PakledHostage said:

    I've often thought that our brains are largely just pattern recognition machines.

    They are not. Pattern recognition is only one sub-system of the brain.

    Human brains are mostly focused on a few things: keeping you alive (breathing, heart, etc.), controlling your body (walking, etc.), language, and vision processing. But they also have a lot of other specific and general capabilities.

    Vision processing is much, much more than pattern recognition, which is why robots find it notoriously difficult. Essentially what you are doing is a real-time mapping of a 2D image into a 3D model. It's the much harder task that 3D video games do in reverse, when they take a 3D model and project it into a 2D image. The reverse mapping is basically impossible (not enough data to find a unique solution) unless you already have a bunch of built-in concepts about the 3D world, like object permanence and the idea that objects obscure what is behind them. You also have to know, for instance, that an elephant seen from the side and an elephant seen from the front are the same elephant, even though the 2D patterns look very different.
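
    As a minimal illustration of that "not enough data" point (a standard pinhole-camera sketch, nothing specific to brains):

    ```python
    # A pinhole camera collapses every 3D point along a ray onto the same
    # 2D image point, so one image cannot recover depth without priors.

    def project(x, y, z, f=1.0):
        """Pinhole projection: (x, y, z) -> (f*x/z, f*y/z). Depth z is lost."""
        return (f * x / z, f * y / z)

    # Three different 3D points on the same ray from the camera:
    for point in [(1.0, 2.0, 4.0), (2.0, 4.0, 8.0), (10.0, 20.0, 40.0)]:
        print(point, "->", project(*point))
    # All three print (0.25, 0.5): infinitely many scenes match one image.
    ```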

    True language processing is also much more complicated than pattern recognition, and involves grammar like nouns, verbs, modifiers, and the like. Moreover, for it to be useful, you have to be able to use the same (or close enough) meanings as other people, as well as the same grammar.

  12. 8 hours ago, NFUN said:

    This thread has approximately eight hours before it becomes solely a discussion on the Chinese Room problem

     

    Good luck

    Nah. The Chinese Room problem is silly.

    No, wait, let me re-phrase that. The Chinese Room does not describe how human minds work. But it *is* a pretty decent analog for how these LLMs work. And that's why LLMs are not what people think/hope/fear that they are.

    I'm not saying human minds aren't computational devices. I think they are. But the LLM is not how human minds are built, and it's not a path toward anything we would think of as being actually intelligent. IMO, of course. But it's a pretty educated opinion on this subject.

  13. 1 hour ago, tater said:

    Hard to believe a Feb 2024 article used GPT-3, not GPT-4. The latter is far more capable. Also, the limited token count really matters. With more tokens you can take the model—pretrained—and teach it. It gives a wrong answer, and you don't just correct it, but show it HOW to correct it—like a kid asking math homework questions. You ask the kid to show their work, then maybe you ask if they checked the signs—then they notice they added instead of subtracting when moving a variable from one side of the expression to the other. You don't tell them, you lead them. Models do this now with multi-shot questioning, and do far better.

    Yes, it is finding the next word, just sounding fluent—but they are nonetheless also more than that. They have demonstrated emergent capabilities. The theory-of-mind examples in that long paper (Microsoft people? I linked it in another thread) are telling. That was not "finding the next word" via statistics; it correctly explains what and WHY the different characters in the scenario think what it says they think.

    At a certain level, faking intelligence IS intelligence. As has happened throughout the quest for AI, the goalposts will shift—often rightfully. Back in the day, chess was said to require intelligence. Humans fell, the bar was moved. Go is much harder than chess, surely it requires intelligence... nah, not enough. Chatbots could beat a Turing test right now (depending on the interlocutor); nah, not enough, bar moved.

    Maybe coming up with novel math or scientific ideas will be enough? I have no idea, but I think it's far closer than it had been.

    The point is that the whole method is fundamentally flawed. Not only is faking intelligence not intelligence, but it's not even close to intelligence. It's like if you studied an entire dictionary and learned exactly how every word is related to every other word, but still had no clue that any of them actually refer to a real world outside the dictionary.

    They don't exhibit "emergent capabilities". We *see* emergent capabilities in them, just like we see patterns in tea leaves and shapes in clouds and the face of Jesus on a tortilla. We interpret their babbling as meaningful, but the "emergent capabilities" being demonstrated are all on our side of the fence.
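
    To make the dictionary analogy concrete, here is a toy next-word babbler built from nothing but co-occurrence counts (the corpus and all names are invented for illustration; real LLMs are vastly more sophisticated, but the grounding problem is the same):

    ```python
    import random
    from collections import Counter, defaultdict

    corpus = ("the rocket reached orbit . the rocket failed . "
              "the probe reached jupiter . the probe failed .").split()

    # Count which word follows which -- pure association, no referents.
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def babble(word="the", n=8, seed=0):
        rng = random.Random(seed)
        out = [word]
        for _ in range(n):
            counts = bigrams[word]
            word = rng.choices(list(counts), weights=list(counts.values()))[0]
            out.append(word)
        return " ".join(out)

    print(babble())  # locally fluent word association, zero grounding
    ```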

  14. IMO, these LLMs that people have gotten all excited about seem to be excellent Wernicke's aphasia simulators. They are really good at sounding fluent while conveying no real meaning.

    https://garymarcus.substack.com/p/statistics-versus-understanding-the

    In that article, for instance, you see lots of pictures that an AI generated when asked for an image of a person writing with their left hand. They all show right-handed people -- because the AI doesn't actually know what left and right are, and the images it was trained on overwhelmingly show people writing with their right hands.

  15. 2 hours ago, darthgently said:

    If we can track its position via radio direction finding we can possibly detect nearby unknown masses out there from how V1's orbit changes.  Occlusions in its signal could detect bodies out there.  Ya never know.  We got it out there, learn from it being there is all I'm saying

    That's not responding to what I said. Or perhaps I misunderstood you.

    I am not disputing there is more information to be gained from Voyager 1. I was responding to the idea that it would be useful to learn new techniques for maintaining contact with 1970s probes. (The last part of the post I quoted.)

  16. On 2/8/2024 at 11:38 AM, farmerben said:

    Other than the sonic boom breaking nearby windows I don't see what's so bad about going through air.

    It makes no sense to travel at high speed along the ground in thick air, when it is much more efficient to travel at high speed through the much thinner air at high altitudes.

    Power required for a ground vehicle scales approximately with velocity cubed, while power required for an airplane scales approximately with velocity itself. (This is because the faster a plane flies, the more drag it makes, but also the more lift; the extra power needed to overcome drag is compensated by a reduction in the power needed to generate lift. That's not the case on the ground.)

    At low speeds, ground vehicles are more energy efficient, but at high speeds, airplanes flying at altitude are more energy efficient.
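
    A rough sketch of those scalings (the CdA, aircraft weight, L/D, and passenger counts below are all invented; rolling resistance is ignored, which flatters the car at low speed): the car's aerodynamic energy per kilometer grows with v^2, while the airliner's is roughly constant, so the airliner wins once speeds get high enough.

    ```python
    RHO_SEA = 1.225  # sea-level air density, kg/m^3

    def car_energy_per_pax_km(v, cd_a=0.8, passengers=4):
        """Aero drag = 0.5*rho*CdA*v^2, so energy per km grows with v^2."""
        drag_n = 0.5 * RHO_SEA * cd_a * v**2
        return drag_n * 1000.0 / passengers  # joules per passenger-km

    def airliner_energy_per_pax_km(weight_n=800e3, l_over_d=18.0, passengers=180):
        """Level flight: drag ~ weight / (L/D), roughly independent of speed."""
        return (weight_n / l_over_d) * 1000.0 / passengers

    for v in (30.0, 60.0, 120.0):  # m/s: 108, 216, 432 km/h
        print(f"{v*3.6:4.0f} km/h  car: {car_energy_per_pax_km(v)/1e3:7.1f} kJ/pax-km"
              f"   airliner: {airliner_energy_per_pax_km()/1e3:6.1f} kJ/pax-km")
    ```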

    The vacuum-train idea (which existed long before "hyperloop") is an attempt to get that benefit of lower drag without having to accept the cost of flying to high altitude. But it has its own costs, such as the energy it takes to maintain a vacuum in a tube that is hundreds or thousands of kilometers long.

  17. 11 hours ago, darthgently said:

    Agreed.  However, at this point, there is a unique value in taking advantage of the unique engineering and signal processing challenge of just keeping in contact with Voyager-1.  If we can get useful data from the sensors or control it that would be a bonus.  That said, the budget could be cut some if the sole goal is to maintain contact and learn new techniques for doing that

    I don't really see how solving this problem of keeping 1970s hardware and software alive remotely is going to have any direct applicability to new missions.

  18. 6 hours ago, SunlitZelkova said:

    By the way, the full 19-page NTSB report is at the bottom of this article.

    This is a preliminary report.

    The protocol for reports like this comes from ICAO Annex 13, which requires a preliminary report within 30 days containing all factual information known at that point. The Preliminary Report contains no recommendations.

    A Final Report comes later and includes all final conclusions and recommendations. The Final Report is to be issued within one year of the incident, if possible. If it is not complete by then, an Interim Report is due at the one-year mark (and another every year after, until the Final Report is released).
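
    As a tiny sketch of that reporting clock (the incident date below is invented for illustration):

    ```python
    from datetime import date, timedelta

    incident = date(2024, 1, 5)  # hypothetical incident date

    print("Preliminary Report due:", incident + timedelta(days=30))  # within 30 days
    print("Final Report target:  ", incident.replace(year=incident.year + 1))
    # If the Final Report isn't ready, an Interim Report is due each anniversary:
    for n in (1, 2, 3):
        print(f"Interim Report {n} due:", incident.replace(year=incident.year + n))
    ```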
