Everything posted by AngelLestat
-
Pellet riding : bringing railroads to the Solar System?
AngelLestat replied to SomeGuy12's topic in Science & Spaceflight
Yeah, it's similar to Jordin Kare's sailbeam concept. At this scale it is much easier to build, and the required accuracy is more feasible. It is propellant mass that you are not carrying but still get to use, so you sidestep the rocket equation. Extracting the energy with solar panels on the moons is easier, and you don't need to shoot much: only roughly 10% (I didn't do the math) of the propellant mass you would use in a normal rocket, assuming the projectile speed equals the exhaust velocity of chemical combustion. And? Your acceleration/deceleration time may take only 10 min; beyond that you have to scale your magnetic accelerator's size and power by a terrible amount, and you also need huge capacitor banks.
- - - Updated - - -
Yeah, that idea is also good. -
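For reference, a rough way to compare the two, assuming the pellets are simply caught by the ship and the pellet mass is small compared to the ship (my own back-of-envelope, not from the thread):

$$\frac{m_\text{prop}}{m_\text{ship}} = e^{\Delta v/v_e} - 1 \quad\text{(carried rocket propellant)}, \qquad \frac{m_\text{pellets}}{m_\text{ship}} \approx \frac{\Delta v}{u} \quad\text{(pellets arriving at relative speed } u\text{)}.$$

The pellet mass is thrown by the accelerator on the moon instead of being carried, which is what "ignoring the rocket equation" means here; how small the pellet fraction really gets depends on how much faster than a chemical exhaust the pellets can be fired.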
GPUs are 20 to 50 times faster than CPUs at these tasks, so I guess it is not quite fair to put them on the same level. Also, full brain simulation is the worst path to achieving a true AI. That is the path the European Union is taking: https://www.humanbrainproject.eu/ But in their defense, they are doing it to understand how the human brain works. Most of the neurons in our brain are used to control and monitor each of our internal organs; then there is the skin (billions of connections), then extra millions for each of the senses, then those for memory (which are in part the same neurons), learning, actions, movement, instinct, etc. Whatever remains of all that may be consciousness. The only thing we know is that we can create an AI based on NeuNets that plays Mario Bros with only 20 neurons; how many do we need to do the same things our brains do?

As I asked on the second page: if someone didn't read the OP or skipped some of the articles, then please mention that before posting. You not only didn't read the OP, you are also assuming and answering something that is not there. Sorry if I am wrong, but that is the impression I am left with after reading your post. A very common mistake is to read the title, remember a position you formed about a similar article read years back, and assume it is the same thing. This is not about a point on a progress curve; that is not the definition of singularity I am using (that is an old definition which does not make any sense). I define the singularity as the moment in time when an AI reaches the capacity to improve itself, escaping the limits of the brain. We have always been the ones making the decisions; if we manage to build a hard AI, its intelligence can be so far beyond ours that we would no longer be the ones making the decisions. After that point, it is impossible to predict what will happen. Also, the accelerating rate of the tech explosion makes it impossible to carry on with a normal way of living or doing business. Why would you invest in a new clean way to get energy using wind or fusion, if it will take you 5 to 20 years of development to get started, when knowledge is increasing by huge steps year after year? You don't need to be a genius to see that it would be a very bad investment.

Now, about those non-stupid AIs: "Programs do what you tell them to do. Always. That is why we have bugs (most of the time; it could always be hardware failure, but that is rare), and those bugs are normal. The human brain is a large electrochemical computer, one with exceptional speed, nearly bug-free programming, and billions of computer-years proving the code functional. Now you want to build an AI, mostly from scratch, to do what the human brain does? Good luck; even if it is self-learning you still have problems, the biggest one being the slow speed of learning, and the fun when your self-learning program learns incorrectly, resulting in fun bugs which will be the slow death of your program." The NeuNet structures needed to achieve an aware AI would not be bound by a rigid structure, because the easiest way to achieve that (in my opinion) would be to let the machine build its own structure inside the neural network, similar to how our brain does it, but without the need for the billions of sensors our body has.

I agree.
- 59 replies
- Tagged with: machine learning, neural networks (and 1 more)
-
Yeah, I am with tex_nl here. You can have computers using water droplets or other simple mechanisms; this is the same thing. But to then conclude that metal can think... I don't know where that comes from.
-
My point about what? It is the same thing I said. I can understand that someone might think we are very far away from the singularity; that's OK. But we have always been wrong about how progress increases: it is not linear. There will be no flying cars followed by colonies on other planets, travel to the stars and becoming a parody of Star Trek with different civilizations. That will never happen; once the singularity starts, there are no middle steps. Which might be another way to explain the Fermi paradox.

About shooting down everybody's opinion, I'm not sure what you are talking about. You can say my view is wrong for X reasons, but I cannot answer and explain why I disagree? Next time, give your opinion and clarify that you don't want an answer.

Yeah, I agree with what you said. In the OP I explain that our recent tech gives us a more exponential progress rate, but we are still bound by the limits of our brain. Each time it takes more years to learn and become specialized in something, so the true inflection point comes when the tool makes other tools "without" human supervision. Some people call any point in an exponential curve a singularity without knowing its true trend, but that is not the definition of singularity I am using, which may bring confusion.

We already have some kind of quantum computers. The theory is the most solid theory we have, more solid than relativity and thermodynamics. And people were citing the "silicon limit" as if it were a universal limit.

You can combine different NeuNets. Let's say you have a robot: one NeuNet has already learned how to move around using the robot's arms and legs, then you have a NeuNet that identifies objects it sees with the camera, another NeuNet for hearing, and another to understand language and structure. Once those NeuNets have already learned, you can run them without a big processor, or you can design hardware based on those already trained NeuNets. You may need a new small NeuNet that links the responses of the other NeuNets, connecting visual objects with words and sounds plus movement actions. You can copy all that to other robots, so they already know it. Of course this is not yet enough to produce awareness, but it gives you an idea of the huge game change this technology represents. There are more videos in the "Neural Networks" spoiler section. The fact that these NeuNets can be combined by genetic algorithms, selecting the ones that get the best results, gives us another clue to how they may improve themselves (see the sketch below).

Yeah, that is one of the things I have doubts about; it should be true as you mention. Maybe we don't need an AI with consciousness, but I'm not sure. Watson is a good example of what a cognitive machine can do. But I guess the best examples of NeuNets are these: in the second link, they feed all the pixels of the game into the NeuNet as input values; then, without any rules, the NeuNet has to learn to play just by looking at the screen. This implies many things: it learns to recognize the shapes of its enemies and which shape it controls, it learns to predict movement, and more. So a NeuNet can be used for many tasks; a single NeuNet can also be used to understand sounds, visuals and movements all in one, because the structure does not change. The principle of self-learning is simple: relate inputs and connections to create patterns, then reward the patterns that give the most accurate results.
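To make the "combine NeuNets with genetic algorithms" idea concrete, here is a minimal sketch (my own toy illustration, not taken from any of the linked videos; the network size, mutation rate and the XOR stand-in task are all made-up assumptions):

```python
# Minimal neuroevolution sketch: evolve tiny fixed-topology neural nets with a
# genetic algorithm (keep the best, copy them, mutate the copies).
# Toy fitness: approximate XOR, a stand-in for "score in the game".
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

def new_genome():
    # weights and biases of a 2-4-1 network, flattened into one genome vector
    return rng.normal(0, 1, size=2 * 4 + 4 + 4 * 1 + 1)

def forward(genome, x):
    w1, b1 = genome[:8].reshape(2, 4), genome[8:12]
    w2, b2 = genome[12:16].reshape(4, 1), genome[16]
    h = np.tanh(x @ w1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ w2 + b2)))    # sigmoid output

def fitness(genome):
    pred = np.array([forward(genome, x)[0] for x in X])
    return -np.mean((pred - Y) ** 2)           # less error = higher fitness

population = [new_genome() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                          # select the best 10
    children = [p + rng.normal(0, 0.1, p.shape)        # mutated copies
                for p in parents for _ in range(4)]
    population = parents + children

print("best fitness:", fitness(population[0]))
```

The real systems in the videos use far larger networks fed with pixels, but the select-copy-mutate loop that improves the networks generation after generation is the same idea.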
-
Well, I read all your comments; everyone is free to make their own predictions and not share my view. I just thought it was a very important topic that we should all pay more attention to, because it may help us make future decisions about our lives, or about which types of jobs will be paid more and which may be at risk. I think it's healthy to accept that the world is changing faster than in years past. I just wanted to summarize all the things I've been reading about for months in the best way I could find. Still, the post is very extensive; that is why I put some sections in quotes in case someone chooses to skip them. But if you are going to comment on something, have the courtesy to be honest and mention how much of the post you read and which sections you skipped. There are also tons of links that might take a few hours to watch in full. To those who are really interested in knowing more about this topic, I guess you have a lot of info to get started.

That is just a picture from Time magazine; it is not even mentioned in the magazine article. You are missing the true point about the singularity. What limits of technology? You mean silicon? But you are right, it cannot be improved indefinitely; the singularity has the potential to increase tech and knowledge so fast that you will know everything in a very short amount of time, no matter how complex the universe is. That is the same thing I mention in the OP. But there is still a lot of margin to improve: quantum computers working with quantum-scale mechanisms. Or, better than a computer, a learning machine. Quantum computers with a learning-machine architecture could improve speeds by... I don't really know the exact value, but just imagine something you cannot imagine.

The telegraph, computers and communications have nothing to do with the singularity. Read the post, please. The singularity is bound to a single tech: a learning machine. Why? Because it is the only way that tech can escape the limitations of our brain. It is already explained in the OP, but read my next answers.

Again, this has nothing to do with plugging us in. This has to do with the learning machine we create being good enough to improve its own design. That might itself be the hard AI, or something that eventually creates a hard AI. Then this hard AI will do whatever it chooses to do; nobody can predict what it will do, or stop it. If it decides that you should plug into something, you will, whether you want to or not. If it chooses to kill us, it will. If it chooses to ignore us and leave, then we are still at the singularity's edge; anyone can repeat the process until a new hard AI chooses a different path for us.

I will really enjoy watching you break your head trying to explain that logically. Second: what does a black hole have to do with this? Read the definition of singularity, and read the OP, where I explain what singularity means in this case. We are always predicting the future, sometimes only 1 minute ahead, other times many years ahead. Those with more info who are "good at this" will do it better than those who aren't (yeah, it is not all-or-nothing here). Calling them arrogant points to a problem with the observer rather than the subject. One thing we can predict is that the singularity will happen; whether it is tomorrow or in 2200, nobody knows for sure. It hinges on a learning machine, and we know such machines can exist because our brain exists. Also, we have already made a lot of them that are helping us in a lot of areas.
The second fact is that a learning machine leaves behind the limitations of our brain: learning takes us a lot of time (many years), and every time somebody is born and dies, it has to start again. Our communication is slow, our way of learning is slow too, we can only focus on one task at a time, we get tired, we don't get more intelligent with each generation (on our time scale), plus millions of other limits we have. So if a learning machine overcomes all those limits and can improve itself in proportion to its knowledge, then you have the singularity. And that is a fact. What happens after that, nobody knows; that is why it is called the singularity.

Read the Brain vs CPU section. I also explain there that you don't need to copy the brain exactly to have an effective learning machine. We invented the wheel, a very efficient mechanism for moving things around, versus all the different mechanisms that evolution produced over billions of years. Simulating the brain is the worst path you could take to accomplish this. It is good to look to the brain for inspiration on how it solves things, but that does not mean it is the only way. Also, simulating neural networks in software is not the best approach; I already mentioned how you can gain many orders of magnitude with hardware architectures based on neural networks, and even more if they exploit quantum-mechanical properties. Read the other answers, and also the OP.

OK, you got it.
- - - Updated - - -
You seem to understand the basic idea and you gave some good explanations. But I don't understand your main point about the step function, or your definition of the singularity. A tech singularity is not an effect that goes on forever. Try reading the "The human conclusion" section of the OP.
-
This post will be very long, but I think it is important that we get involved, which does not mean that we can do something about it, but we can be more prepared to face that final step as human beings. (Send me a PM if you find English mistakes that make a sentence unclear.)

Introduction:
If I need to make a prediction, I like to include as many variables and as much data as my brain allows. But there is always one particular variable I choose to ignore, because if it is unleashed, it destroys any possibility of accuracy in the prediction. That variable is the moment when our technology escapes the limits imposed by our brain: when research and conclusions are produced by the technology itself, which can improve itself in a positive feedback loop, generating an exponential explosion.

To understand how big this change will be, we first need to know how our technology and human capacity evolved over the last 50,000 years. Our brains hardly changed in this time frame. We already had language to help us share discoveries, but it was not until writing that we became more efficient at accumulating knowledge. Machines, population, cheap energy: all played an important role in transforming our slow linear growth into something more exponential. But our brain is still the same; our intelligence did not evolve to visualize and understand complex concepts beyond our everyday reality, and because of this we depend a lot on experimentation to move forward.

The new age:
We have already entered the age of machine self-learning using neural networks and evolutionary principles. In case someone doesn't know, all of the latest big software advances (speech recognition, image recognition, concept understanding, new search algorithms, among others) were achieved with these new neural network structures. The trick was to mimic what we know about real neurons and the way we learn, which is all based on how pieces of data relate to each other.

How neural networks work:
Brain vs computers:
In the past it took a lot of code engineering and hundreds of experts working for many years just to make an algorithm that could identify objects in a picture or a song on the radio. They started with 2% accuracy, then 5%, 8%... many years later, 25%. In the first year Deep Learning came out (a new NeuNet approach that needs less human intervention, among other characteristics), it already reached 40% in a very short time, without those hundreds of engineers. Since then the accuracy on any of these tasks has increased considerably. There are small hardware chips that recreate the structure of an already-trained NeuNet and can identify people, cars and other objects in a video surveillance feed while consuming only a few milliwatts of power, whereas the same task with conventional programs would consume a lot of power.

In 2011 IBM won the Jeopardy game with Watson, a supercomputer based on NeuNets that was able to read Wikipedia and relate all its content; it then kept learning in other areas such as medicine, analytics, cooking, sports and advising, helping researchers in a way that nobody could until now. Watson links: Jeopardy, How it works?, as Advisor, Learning to see, General knowledge.

We can feed these algorithms raw data, such as the pixels on the screen, without teaching them any rules; the computer will learn what to do by itself just by looking at the screen. In this case: learning to play video games (a minimal sketch of the idea follows below). There are two drawbacks with this technology:
1- It learns on its own, so the acquired knowledge is not fully controlled by us.
2- We don't really understand why it produces a given outcome, because "it's a complicated machine", so we cannot predict what it will do.
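To make the "learns just by looking at the screen" idea concrete, here is a minimal sketch of the underlying reinforcement-learning loop (my own toy example, not DeepMind's code; the 6-pixel game, the table-based learner and every parameter value are made-up assumptions, and the real systems replace the table with a deep neural network):

```python
# Minimal "learn from raw pixels and score" sketch: tabular Q-learning on a toy
# 1-D game. The learner is never told the rules; it only sees the pixel row and
# the reward, and discovers that "move right" reaches the goal.
import random

WIDTH = 6                      # a 6-pixel "screen"; the goal is the last pixel
ACTIONS = [-1, +1]             # move left / move right

def render(pos):
    """Return the raw pixels: 1 where the player is, 0 elsewhere."""
    return tuple(1 if i == pos else 0 for i in range(WIDTH))

q = {}                         # learned values keyed by (pixels, action)
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    pos = 0
    for step in range(20):
        state = render(pos)
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        pos = min(max(pos + action, 0), WIDTH - 1)
        reward = 1.0 if pos == WIDTH - 1 else 0.0
        next_state = render(pos)
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        if reward:
            break

# After training, the learned values from the starting pixels favour "move right".
start = render(0)
print({a: round(q.get((start, a), 0.0), 2) for a in ACTIONS})
```

The Atari-playing system mentioned above runs on the same reward-driven loop, but with a convolutional neural network estimating the values from hundreds of thousands of pixels instead of a lookup table.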
Google recently bought the company DeepMind, whose goal is to create a true AI, and in which our friend Elon Musk also invested some money to ensure that the necessary security measures are taken; this also gave him the opportunity to keep an eye on the development of this technology. Examples of today's breakthroughs with Deep Learning:

The human conclusion
To see whether these neural networks really show traits of intelligence in their results, and what is needed to achieve consciousness, we first need a better definition of what we call intelligence and consciousness. Michio Kaku does a good job answering this question, from a physicist's point of view, in the first 10 minutes of this video; I recommend it. Looking at the DeepMind papers and the latest breakthroughs in neuroscience on the brain's different mechanisms, we are close to creating an algorithm that would learn in a similar way to the brain. This does not mean it needs to be identical; it just needs to work. We can make airplanes that fly well without imitating all the complex movements of birds. I realized that when searching for news in this field, selecting the option "this year" is not enough; you need to select months or even weeks, given how fast it progresses.

So this takes us to our final question: and then what? Well, we will reach the time when our brain is no longer the limit of our technology; our slow way of learning, testing and developing will be over. Our technology at that point will allow us to create a learning machine smart enough to improve its own design. That point in time is called "THE SINGULARITY". Even today it is becoming more difficult to make predictions, but once we are in the singularity, all our predictions collapse; we can no longer see the future, and neither (to a certain degree) can the machine driving it. At that point, nearly all resources will be focused on improving the power of this hard AI; any other application of the newly acquired technology will become a waste of time and resources. Why? Because knowledge will increase so fast that any application we might think of as useful will be outdated within a few months by new discoveries. A hard AI does not need experimentation to test new theories (something that consumes a lot of our time); it can do it by deduction alone. We will reach a time when we (or it, the hard AI) will double all human knowledge in just one year, then double it again in one month, then again in just a week. It is not hard to imagine that, no matter how complex the universe is, all possible questions will be answered in a very short amount of time after the singularity; this means jumping from limited knowledge to godlike knowledge without intermediate application steps.

So, when will this happen? This same question was put to many scientists and people working in the field in 2012; the average answer was "by 2040", but many of those specialists were not even able to predict the degree of success that deep learning has achieved today, in a span of just three years. Elon Musk said it may happen in 5 to 10 years. If I have to make a prediction, I would say 10 to 15 years, and even 15 years looks like an eternity at this accelerating rate. We saw many signs like this in the past, but this one certainly marks the end of predictions. With respect to the Time magazine issue: that article was in fact about the singularity, and it was released in 2011.
So this makes us think: what about all our silly predictions about how long until mankind begins to colonize other worlds, or the technology needed for a Von Neumann probe, or how long until we reach another star? What about our life plan of having grandchildren and dying of old age? Does global warming really matter? We were always so wrong to ignore this variable in all our predictions, but well, maybe now we are more prepared to explore and enjoy these last years of life as we know it.
-
How do clouds form on waterless planets?
AngelLestat replied to Sharkman Briton's topic in Science & Spaceflight
Venus's atmosphere holds about the same amount of water as Earth's atmosphere, roughly 15,000 km³; the difference with Venus is that its atmosphere is 90 times more massive than ours. -
NASA developing a new, eco-friendly propellant
AngelLestat replied to Frida Space's topic in Science & Spaceflight
Then change every instance of the word "propellant" to "monopropellant". If it were only about a green propellant, we already have one: oxygen + hydrogen. -
Well, scientists made a wormhole.
AngelLestat replied to _Augustus_'s topic in Science & Spaceflight
I am a wormhole too: today I ate an orange and it disappeared, then reappeared in the toilet, with some changes due to the dimensional shift. Scientists and journalists always try to come up with some way to attract attention. -
Lovely picture, one of the best I have seen from Mars. It has great quality and we can see the rover, which gives us a sense of scale.
-
The technology may be useful for harvesting energy, tourism, or many of the other things mentioned in Gizmag. But even if it manages to reduce the fuel by 30%, that is not a big deal. To become really useful for reaching orbit, it should incorporate something like a magnetic cannon. The problem with that method is that when the vehicle exits, it strikes a wall of air, so the acceleration and deceleration are not survivable for humans (maybe if they were immersed in liquid inside their suits it could be). But if you build the same magnetic tube up through the atmosphere, reaching 15 km altitude (where the atmosphere is much less dense) at a 30-degree inclination, then the acceleration can be survivable and the vehicle exits at a lower pressure. Similar techniques, such as rotating tube sections to stabilize and point the cannon in the right direction and at the right angle, can be used. Still, the cost would be high enough (though lower than a space elevator) to drop this idea in the trash; it will depend on how other uses can be exploited to improve its benefits.
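As a rough sanity check on the acceleration trade-off (my own back-of-envelope; the article gives no numbers, and the 5 g limit and 3 km/s boost below are assumptions), the tube length needed for an exit speed $v$ at constant acceleration $a$ follows from $v^2 = 2aL$:

$$L = \frac{v^2}{2a} \approx \frac{(3000\ \mathrm{m/s})^2}{2 \times 49\ \mathrm{m/s^2}} \approx 92\ \mathrm{km},$$

so exit speed, allowed g-load and tube length trade off directly against one another.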
-
You are underestimating this community; maybe it's because you are still new here. First, this is not Minecraft. The only thing the two games have in common is that both exploit player creativity. Sometimes I go to a physics forum if I need to ask something very specific, but I may get the same answer here, maybe a better one. If I compare the NASA forum with this one, I don't find any difference. And remember that you are in the science section, not just the "where is your Jebediah?" section.
-
Why do you label them as actual scientists without knowing them, and do the same with us "gamers" without knowing us? You think they don't play games? My prediction is that fewer than 25% of the people who post in this science section play the game. I don't; I like realism, and to get that I need to install many mods, which ends up crashing my PC, and I also don't have much free time. In science, every new idea or technology is open to criticism. Sometimes we see something as a good idea, sometimes we don't, but we are not making a final judgment either; we are just giving our opinion, and it is up to the inventors (or the people who agree with the idea) to prove it would be cost-efficient. Mine was: "there is another option which is many times more efficient, so why should we build this?" PS: Maybe this tower idea is not good for rockets, but it may work as an alternative way to build tall towers for other purposes.
-
It's just a patent, nothing more. I would say that for every 200 patents, only one is good enough to work and be viable. For example, why would this tower be better than a single airship (with variable buoyancy), which could actually travel to the equator to launch and gain extra delta-v? The airship would be 100 to 1000 times more cost-efficient, and I am not even sure the airship is really worth it.
-
It will depend. Imagine that you can build a single-stage-to-orbit vehicle, but to carry enough payload to be viable it has to launch from 20 km altitude at the equator. Then an airship makes sense; your launch vehicle's weight is also reduced a lot, which in turn reduces the airship's cost.
-
Ahh, thanks. That is the hidden cost of project delays: a long development time means not only that you need to pay all the employees, facilities and extras for far longer, but also that you are reducing the project's operational lifetime, earnings and benefits. That is why projects such as SLS, Orion and James Webb skyrocket in cost with minimal predicted benefits and operations, given how quickly the technology becomes obsolete relative to all the time wasted in development. So it is always better to define your goals and invest all you can to finish as soon as you can (trying to avoid bureaucracy or unnecessary controls). But well, even if SpaceX is left out, they will continue developing Dragon V2 until they receive certification; the end of the ISS would not matter much, since a different space station will be built. I hope for a more efficient design this time.
-
Yeah, I already read this, but it's kind of pointless. You may get a real advantage launching from 20 km, but you get a much bigger advantage if you use an airship big enough to do the same task, with the benefit that you can launch from the equator or from any place on the planet convenient for the orbit. You can also be positioned so the first stage can come back down to base without wasting much fuel on a retro burn, and you need a lot less energy to hold the airship in one place against the wind.
-
Uh, I voted "yes" because I didn't understand the question; I still don't know what "downselect" means. But well, any decision to delay the crew program would be really bad. If they can do both, good; if not, they should choose one (SpaceX is the obvious choice) and try to shorten the time frame to avoid wasting more money on Soyuz launches.
-
Photovoltaic EM spectrum question?
AngelLestat replied to Der Anfang's topic in Science & Spaceflight
I agree with all the responses: in theory you can harvest energy from any wavelength, but trying to absorb a wider range of wavelengths makes your device more complex and costly, and for what, when you know the Sun emits most of its energy in the visible range? A good way to try to capture a wider range is to look for a true black body; nanotech may allow this without sacrificing much simplicity. -
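For reference (standard blackbody figures, not from the thread): the Sun radiates roughly as a black body at $T \approx 5800\ \mathrm{K}$, so Wien's displacement law puts the peak of its spectrum right in the visible range:

$$\lambda_{\max} = \frac{b}{T} \approx \frac{2.898\times 10^{-3}\ \mathrm{m\,K}}{5800\ \mathrm{K}} \approx 500\ \mathrm{nm},$$

which is why cells tuned to the visible and near-infrared already capture most of the available power.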
But we can see the two horizons merge and measure the gravitational waves coming from such a dance. This is because the event horizons are not objects, so there is no time-dilation issue at that point from our perspective, and the singularities are already hidden within their horizons. So there is no time dilation of the kind you imagine; we can see the merger, and it would not take infinite time.
-
Could there be contact binary planets?
AngelLestat replied to cptdavep's topic in Science & Spaceflight
That is less than half of the Moon's effect on Earth, which is nothing in terms of planetary deformation. Yes, with the Moon we can notice a 1 or 2 meter difference, but Earth is 12,000,000 meters in diameter. Now, in this system you may have 2 to 4 times that "virtual diameter", which may increase the tidal difference by 2 to 4 times. But since this case is half the Moon's effect, it would be equivalent to a 2 to 4 meter difference between the two planets, which is not enough to change the gravity between them. -
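For anyone who wants to scale this themselves, the usual equilibrium-tide estimate (a standard order-of-magnitude formula, not derived in this thread) is

$$h \;\sim\; R_p\,\frac{M_c}{M_p}\left(\frac{R_p}{d}\right)^{3},$$

where $R_p$ and $M_p$ are the radius and mass of the deformed planet, $M_c$ is the companion's mass, and $d$ is the separation; plugging in the Moon and Earth gives a few tenths of a meter, the same order as the 1 or 2 meters mentioned above.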
Understanding the Greenhouse (Gas) Effect
AngelLestat replied to arkie87's topic in Science & Spaceflight
In that case there are no (convective) winds and no density differences, since it is isothermal with 100% IR blocking, so it behaves more like a solid; in that case the greenhouse effect only starts where the real atmosphere starts. Yeah, but I'm not sure how that helps you understand the greenhouse effect better. Yes, that is correct. Yes, from the top going down, until the sum of the energy released equals the outgoing energy. Remember that the values are all made up, but they may be similar to that, depending on the atmosphere's properties, with an incoming solar flux of 300 W/m².

You are right, I overestimated the fourth-power factor in my head. I was also visualizing the temperatures in degrees Celsius, where 30 degrees vs 18 degrees is a big percentage difference for a power law, but not if you take 300 K vs 254 K. Heh, that's what I get for being lazy and not doing the math; but I was also taking into account a thermometer positioned normal to the surface, which would receive radiation from both sides (or almost), which would considerably increase the measured temperature, plus other factors. Yeah, but we are talking about a thermometer, that thin piece of glass we all know, or a thermocouple, which has even less thermal resistance. If you are talking about something alive, then the blood will manage to transfer the heat. Well, I'm not sure what else we can discuss about the greenhouse effect; all the points are pretty well clarified. -
Could there be contact binary planets?
AngelLestat replied to cptdavep's topic in Science & Spaceflight
Take a look at the graphic; it also shows the gas circulation, but well, that is what is called a Roche lobe. Thanks Ralathon, I will take a look when I have time and see if I can find a better case.
- - - Updated - - -
You will have tides only if they are not tidally locked. If both planets form at the same time (which is just a gas cloud with two mass centers), then there is almost a 100% chance they will be tidally locked; also, since it is not a capture event, the gravitational forces will be lower, so they might be close to their Roche limit. -
Understanding the Greenhouse (Gas) Effect
AngelLestat replied to arkie87's topic in Science & Spaceflight
Yeah, I still don't get it. Maybe someone else can help me answer this; there are many English words together and my brain cannot assimilate them all.

a) OK, that "is true", but I guess the analogy is a bit messy; you need to treat the surface as a heat source, which leads to more questions. What does "well mixed" mean? And "isothermal"? How can you have the same temperature at different heights? There is no such thing as a 100% greenhouse layer (a perfect IR mirror); the highest temperatures will always be at the bottom (because the heat cannot go any deeper) and everything radiates up from there. Meanwhile, as you rise in altitude, you receive radiation from below but less radiation from above, because there will always be a point above you where the temperature is lower. Maybe I didn't get your question.

c) Heh, it's super hard to follow you there. I'm not sure why you bring up emissivity. The whole block of text seems like a very hard way to explain something like the greenhouse effect; let's see if someone else understands the point you want to make and answers you. http://www.windows2universe.org/kids_space/temp_profile.html

No, I just used my common sense, and I'm not sure it is right. But what I did is this: you always radiate heat against the cosmic background, which is at 3 K. So even if you have a gas at just 100 K, it still radiates against that background, but the thermal mass (density) at that height is so low that it amounts to almost nothing. The lower you go, the more the atmosphere's temperature rises, along with its density. In my calculation I ignored the thermal mass and focused only on the temperature gradient. Let's divide the atmosphere of Venus into layers of 10 km, starting from the 100 km top (Venus's atmosphere reaches 200 km, but we can ignore the extra 100 km as negligible). I will make up numbers just to visualize the mechanism (they are not real and not based on real Venus data):
- first layer radiates only 1 W/m²
- second layer 2 W/m²
- third 10 W/m²
- fourth 47 W/m²
- fifth 240 W/m²
- Total = 300 W/m²
Something like that; it is just an example. The idea is to picture what portion of the atmosphere emits the final radiation to space without "bouncing" off higher layers. But of course that is not completely true.

Well yeah, it depends on the thermometer's position. I was imagining the classic case: a thermometer normal to the sea surface (mercury bulb pointing down, scale up). It will measure a bit lower, but not half, because even if it receives from one side and radiates from the other, radiation goes as T⁴, so a small decrease in temperature means it emits much less than the energy it is receiving. So if the temperature of the sea is 30°C, the thermometer will measure about 27°C. Insulation or thermal resistance has nothing to do with it; it simply reaches the point where radiation in and radiation out are in equilibrium.
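A slightly more formal version of that made-up layer example (my own notation; a simple grey-layer picture that ignores scattering): if layer $i$ has temperature $T_i$ and emissivity $\epsilon_i$, and the layers above it transmit a fraction $\prod_{j>i}(1-\epsilon_j)$ of what it emits, the flux escaping to space is

$$F_\text{out} \;=\; \sum_i \epsilon_i\,\sigma T_i^4 \prod_{j>i}\left(1-\epsilon_j\right),$$

and in equilibrium this sum must match the absorbed solar flux (300 W/m² in the example). That is the formal version of asking which portion of the atmosphere emits the final radiation to space: the effective emission level sits at some intermediate height rather than at the hot surface.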