
Hi everybody!

 

(UPDATE on page 3, it's now sort of working)


Hope I'm in the right section; I don't come to the forums very often, so just tell me if I got this wrong!

So here's the idea: for a few months now I've been thinking about hooking a machine learning program to a rocket and pressing 'Launch' to see how bad things could go. I've looked it up, but it seems not many people have *actually* done it successfully.
So HUGE disclaimer: I have no idea where this is going to go. Maybe I'll stop in 2 weeks, maybe I'll get super on board with the idea.
But I think there's something really cool to try here and that's all I need.

The main idea I have for how to do this is:

- Control KSP through the kRPC mod, which lets us drive the game from Python, meaning we can import any machine / deep learning libraries alongside it.
- Use reinforcement learning to learn from each launch, and try to improve the rocket's performance. Note that I'm not trying to generate a rocket design here: I start from a particular rocket and try to teach the AI to fly it.

To put a bit of context around this: I have a few months of practice with neural networks (mostly using Keras on top of TensorFlow, which will most probably be the library used for the basic neural network that will be trained). I have barely used kRPC for a few hours over the last few days, to see what was possible with it: I made a small program that launches a rocket, waits a certain time, looks at the altitude it has reached, and reloads a quicksave, just to check that this kind of automation was possible. I've also been through some of the documentation. Oh, and I've never done anything related to reinforcement learning. So just like when I started playing KSP a few years ago, I don't really know what I'm doing, but I have theoretical knowledge and I think this could be fun, and that's good enough for me to start! I thought opening a new post here on the KSP forum could be a good place to first lay down what I want to do and get some feedback from people that might have more experience than me. Maybe some people might even want to follow progress on this! :)

So to make things a little more interesting, I'm going to give my AI a name, because I don't like how we just say "the AI" all the time. I'm going to call it Bertrand.

So, here are the initial ideas I have (this will be a bit technical as the trickiest part of this is to get a neural network running and interacting with KSP, while minimising the RUD counter).

- In order to do any reinforcement learning, we need to define a reward function, the thing the neural network is going to try to maximize (or equivalently, a loss to minimize). Basically: what is Bertrand's purpose in life? What I would like to try for the very first stage is to get the rocket to a certain altitude, say 50 km to start with. That sounds super easy: just full throttle and nothing else. Yeah, but you haven't seen how dumb an untrained neural network is. So basically the reward would be something like a function of the altitude, say Altitude*10, and you get -500 if you blow up (this is just a rough first idea, I'll figure out the specifics later). I think taking into account the amount of fuel left when reaching the altitude is also important, as that's what we want it to do: be the best rocket out of all the self-teaching rockets out there, and make the best ascent possible using the least amount of fuel possible.
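As a very rough illustration, here's what that kind of reward could look like in Python (the numbers mirror the rough idea above, and fuel_fraction_left is a hypothetical input; everything would need serious tuning before real training):

# Rough sketch of the reward idea described above; all numbers are
# placeholders and would need tuning before being used for real.
def reward(altitude_m, fuel_fraction_left, exploded, target_altitude_m=50_000):
    if exploded:
        return -500.0                        # blowing up is heavily penalized
    r = altitude_m * 10.0                    # reward gaining altitude
    if altitude_m >= target_altitude_m:
        r += 10_000.0 * fuel_fraction_left   # bonus for reaching 50 km with fuel left
    return r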

- Second, the controls: Bertrand, my AI, needs to be able to control the rocket to fly it (well, mostly to blow it up at first). The thing is, with KSP, controls aren't really something we're short of. Remember, Bertrand is going to be really, really dumb at first; we're talking cat-walking-on-a-keyboard-level dumb here. I thought having control over yaw, pitch and thrust would be a good start. Since most rockets are cylindrically shaped, roll doesn't have much of an effect, and I think keeping just 6 buttons (2 for each control) is already going to be hard enough. I'm also afraid that if we let roll in, it will heavily confuse Bertrand, because it will change what yaw and pitch do. If this turns out to be a good idea, I might add multiple stages to my rockets, and thus decoupling as a control. (I'm also pretty curious to see it randomly decoupling at any time and things blowing up even more.) But I'm worried that at first, decoupling will make the capsule hop and give an easy win, so Bertrand would keep doing it without learning anything else. We'll save that for later; let's not spoil Bertrand too much for now.
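To make this concrete, here's one way those six buttons could be encoded as a discrete action space on top of kRPC's control API (a sketch; the step size is arbitrary and apply_action is a name I made up):

# One possible encoding of the six "buttons" onto kRPC's vessel controls.
# STEP and the action numbering are arbitrary choices for illustration.
STEP = 0.1

def apply_action(vessel, action):
    control = vessel.control
    if action == 0:    control.pitch = min(control.pitch + STEP, 1.0)
    elif action == 1:  control.pitch = max(control.pitch - STEP, -1.0)
    elif action == 2:  control.yaw = min(control.yaw + STEP, 1.0)
    elif action == 3:  control.yaw = max(control.yaw - STEP, -1.0)
    elif action == 4:  control.throttle = min(control.throttle + STEP, 1.0)
    elif action == 5:  control.throttle = max(control.throttle - STEP, 0.0)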

- Third, this is going to need to run a loooot of times before Bertrand starts doing something not stupid and actually understands it has to throttle up, and not do anything else, to lift off. This is where my little kRPC program comes into play. I can make it reload the game as soon as the ship blows up or reaches a certain altitude, and store the final altitude at which the game was reloaded. That's what I managed to do in a few hours of tinkering with kRPC. I'm also going to need to register all the keystrokes Bertrand makes; those are going to be the inputs of the neural network. That's a bit more tricky, but I don't think it's going to be very hard. The hard part is going to be feeding all that to the neural network.

- Fourth: I only have a Xiaomi Mi Notebook Pro, and that thing only has an 8th-gen i5, 8 GB of RAM and, most laughable of all, an MX150 for a GPU. This bad boy has no issue running KSP at beautiful max settings and 60 fps, no problemo. But we're gonna have to train Bertrand too, and Bertrand is going to require some serious horsepower to get better. This means I'm probably going to run KSP at a stupidly low resolution and potato graphical settings to even *try* to train the neural network. This is why it's going to be very important to keep the neural network architecture as simple as possible! If that turns out to take far too long, I'll see if it's possible to use a Kaggle Kernel to get access to the free GPU and connect it to kRPC. Otherwise I might have to pay for an online GPU, but I'd rather keep that as a last resort; I'm just a poor lonesome student with a bit too much time on his hands, but surely no money to rent a GPU. So I'll simply take this as the "optimization" success trophy in my Bertrand-plays-KSP game. And if hooking a Kaggle Kernel up to kRPC doesn't work, I'm not even sure it would work with any other paid service either; but that's for later anyway.

- The idea I have, from an article by people who seem to have previously done roughly what I want to do, would be to use OpenAI's Gym as a support for the reinforcement learning. OpenAI launched a thing called Universe a while back that was supposed to support KSP, but it got cancelled before we had time to see Terminator-powered rockets take over. But these guys (the ones from the article) made their own environment compatible with Gym, which I will very happily clone and tweak according to what I'm trying to do (after I eventually understand all of it). I'm basically going to use what these guys have done to try to get something working. I do think I should spend some time learning how to use OpenAI's Gym on easier examples first, though. So Bertrand isn't even going to see the world inside KSP at first!
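For reference, a custom environment like that roughly follows the gym.Env interface. Here's just the skeleton (the state size is a made-up placeholder, and the actual reward and kRPC plumbing are the whole project):

# Skeleton of a custom Gym environment for KSP; the method names follow
# the gym.Env interface, everything else is a placeholder.
import numpy as np
import gym
from gym import spaces

class KspAscentEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(6)   # the six control "buttons"
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(8,), dtype=np.float32)

    def reset(self):
        # would quickload the launchpad save and return the initial state
        raise NotImplementedError

    def step(self, action):
        # would apply the control via kRPC and return (obs, reward, done, info)
        raise NotImplementedError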

That's basically how far my thinking has gone for now. I've spent a good amount of time going through the internet looking for people trying this and, as I mentioned earlier, didn't find anybody who actually achieved it, apart from the guys whose work I shared in that last point. I also saw that a few years ago a guy streamed his neural network training, but that's long been finished, so I can't watch it anymore ;.;

Just to conclude, here is what I have already done:
Using kRPC, I made a program that runs for 5 epochs (i.e. 5 times) and each time makes the rocket turn after epoch_number seconds, then waits 5 seconds, gets the altitude the rocket reached after those 5 seconds, and reloads the last quicksave. These altitudes are stored in a dictionary called Dict_alt that is displayed at the end. This was my initial test to see if it was simply possible to run multiple launches one after the other without me stepping in. (Storing in a dictionary will probably need to be changed to a dataframe using a cuddly panda.)
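Here's roughly what that little test program looks like, reconstructed from the description above (a minimal sketch; it assumes a quicksave already exists with the rocket on the launchpad):

# Minimal sketch of the test program described above: 5 launches, each
# turning after epoch_number seconds, recording altitude, then reloading.
import time
import krpc

conn = krpc.connect(name='Epoch test')
sc = conn.space_center

Dict_alt = {}
for epoch_number in range(5):
    vessel = sc.active_vessel                # re-fetch after each reload
    vessel.control.throttle = 1.0
    vessel.control.activate_next_stage()     # launch
    time.sleep(epoch_number)                 # wait before turning
    vessel.control.pitch = -0.5              # crude "turn"
    time.sleep(5)                            # wait 5 seconds
    Dict_alt[epoch_number] = vessel.flight().mean_altitude
    sc.quickload()                           # reload the last quicksave
    time.sleep(10)                           # give the game time to reload

print(Dict_alt)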


TL;DR: I have no idea what I'm really doing, but I want to train an AI to fly a rocket, and I have a lot of work ahead of me for this to even remotely work.

Anyway, if this is interesting to follow, take a seat and prepare to wait a long time. And if you have any good ideas on how I could do this, improvements, or things I've forgotten (even though I've barely started), please let me know! As I said, I don't really know what I'm doing with this, so any and all suggestions would be greatly appreciated! :)
Thanks for taking the time to read this! I hope this turns out to be a real project, maybe!

 


Very interesting idea. Although... last summer I watched a stream on Twitch of someone trying exactly that: teaching an AI how to launch rockets in KSP. And I have to say, I wasn't impressed. The experiment ran for weeks, I think, and when I watched it for the last time I couldn't notice any visible progress. The bot was launching the rocket in seemingly random directions and at random angles, just as it did on day one. Maybe you can track this stream down in the Twitch archives and gain some ideas from it?


41 minutes ago, Scotius said:

Very interesting idea. Although... last summer I watched a stream on Twitch of someone trying exactly that: teaching an AI how to launch rockets in KSP. And I have to say, I wasn't impressed. The experiment ran for weeks, I think, and when I watched it for the last time I couldn't notice any visible progress. The bot was launching the rocket in seemingly random directions and at random angles, just as it did on day one. Maybe you can track this stream down in the Twitch archives and gain some ideas from it?

To be fair, KSP is a tough game to learn. It's hard enough for us humans, who can grasp learning from our mistakes and build on it, much less for a computer that doesn't even know if what it did was right or wrong, and may not even know why, only what it did to get those results. Not to mention there are thousands and thousands of variables, maybe even millions, and it only has to get one wrong for a rocket to start falling over. And out of those millions and millions of iterations, I can't be certain the rocket DIDN'T get to a high altitude.


True. That's why I remain sceptical about Artificial Intelligence when I read articles about "awesome achievements" in this field, like the program winning against a Go grandmaster or something else. It's still not intuitive learning: you can't tell the program how flight and orbital mechanics work (even in broad strokes) and what it's trying to achieve. It's still brute-forcing "put this rocket in a 70 km x 70 km circle with 0 inclination" by throwing stuff in random directions and waiting until something sticks.


7 minutes ago, Scotius said:

True. That's why I remain sceptical about Artificial Intelligence when I read articles about "awesome achievements" in this field, like the program winning against a Go grandmaster or something else. It's still not intuitive learning: you can't tell the program how flight and orbital mechanics work (even in broad strokes) and what it's trying to achieve. It's still brute-forcing "put this rocket in a 70 km x 70 km circle with 0 inclination" by throwing stuff in random directions and waiting until something sticks.

Well, it depends on how it learns. Some AIs can learn traits that prove effective, and they can learn in other ways too. So they can learn well, but it's largely dependent on how.


4 hours ago, Scotius said:

Very interesting idea. Although... last summer I watched a stream on Twitch of someone trying exactly that: teaching an AI how to launch rockets in KSP. And I have to say, I wasn't impressed. The experiment ran for weeks, I think, and when I watched it for the last time I couldn't notice any visible progress. The bot was launching the rocket in seemingly random directions and at random angles, just as it did on day one. Maybe you can track this stream down in the Twitch archives and gain some ideas from it?

That's the stream I was talking about, but I can't seem to find it. If any of you knows where I can find it, I'd be more than happy to have a look!

And I know this is going to be a complicated thing; I'm not even saying it's going to work. I'm just a little bit confident that it can work (that doesn't sound very confident, does it? :P)

Now, just because some have tried and not really succeeded doesn't mean it's not possible ;)
Flying a rocket isn't that hard, I think, and that's also why I'm narrowing down the parameters Bertrand will be able to interact with. I've even been thinking about only giving him access to pitch and throttle. Does anybody know if there's a mod to lock yaw and roll? That would basically turn the game into a 2D environment, which could be interesting too.

About learning Go, that's not really what happened, and there are really two different algorithms: AlphaGo and AlphaGo Zero. AlphaGo used human games to learn from, a little bit like any human player would. But AlphaGo Zero learned purely from self-play: it played against itself, only, and got better from there. This means it actually came up with strategies that had never occurred to humans before, and completely smashed the original AlphaGo. So no, AI really isn't just brute-forcing in that sense, otherwise it would be just any other program. What is commonly referred to as AI, and what I want to try here, is a little bit like how humans learn: trial and error, with feedback on your progress. In Bertrand's case, the feedback is the reward function I talked about in the original post.

To come back to the AI taking hours, days, weeks or months to learn how to fly a rocket in that previous streamer's attempt: that's one thing I'm going to try to optimize, how long each epoch takes, because I'm going to need to go through hundreds or even thousands of them. If it turns out to be too complicated, I might need to record myself playing and feed that to Bertrand to decrease the learning time. But that's just not as fun. I'm also trying to figure out whether my gaming beast of a computer could handle running two instances of KSP plus Bertrand. Of course I'll try with one at first, but running 2 games at once could double the training speed. The guys who made their neural network fly a rocket had 6 instances of KSP running on a 1080 Ti, with their algorithm on a dedicated server. I don't really have that, so we'll see what's possible. They didn't say what framerate, graphical settings or resolution they were running at, though. I do believe we can go pretty low before it starts being a problem. But we'll see that a bit later as well. For now I mostly need to get familiar with OpenAI's Gym, I think.

Anyway, I have some skepticism too, but I still believe this could be achievable, so I guess there's only one way to find out!


3 hours ago, ZooNamedGames said:

To be fair, KSP is a tough game to learn. It's hard enough for us humans, who can grasp learning from our mistakes and build on it, much less for a computer that doesn't even know if what it did was right or wrong, and may not even know why, only what it did to get those results. Not to mention there are thousands and thousands of variables, maybe even millions, and it only has to get one wrong for a rocket to start falling over. And out of those millions and millions of iterations, I can't be certain the rocket DIDN'T get to a high altitude.

That's the point of machine learning. You give it goals and a 'reward' system, and it learns by making progressive advances over many, many, many iterations of attempts.

The first launches will go nowhere, but eventually it will learn to go up, and then sideways, and eventually it will figure out the optimum path for an orbital ascent.

The problem is not the AI program we are attempting to write here, but KSP itself. The various learning algorithms take a lot of iterations to work correctly, and KSP is definitely not a good program for doing this. The load times are slow, and it can't run simultaneous launches. I doubt I've done even a few thousand launches myself, and that's probably around the number it will need before it gets good at "up is good".


4 minutes ago, Gargamel said:

The problem is not the AI program we are attempting to write here, but KSP itself. The various learning algorithms take a lot of iterations to work correctly, and KSP is definitely not a good program for doing this. The load times are slow, and it can't run simultaneous launches. I doubt I've done even a few thousand launches myself, and that's probably around the number it will need before it gets good at "up is good".

Hence wanting to run multiple instances of the game at once. If I can outsource the code running Bertrand to Kaggle or Colab, my laptop only has to run KSP; that would be a good starting point, I think. I also need to make the restart conditions as efficient as possible: as soon as the rocket gets onto an unrecoverable trajectory, restart. That's going to be a bit tricky to code, but I don't think it's unfeasible.
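Something like this sketch is what I have in mind for the check (the thresholds are completely made up, and should_abort is just an illustrative name):

# One possible early-abort heuristic: reload as soon as the flight looks
# unrecoverable instead of waiting for the explosion. Thresholds are guesses.
def should_abort(vessel):
    flight = vessel.flight(vessel.surface_reference_frame)
    if flight.mean_altitude < 10_000 and flight.pitch < 0:
        return True      # low and pointing below the horizon
    if flight.mean_altitude < 30_000 and flight.vertical_speed < -10:
        return True      # low and falling fast
    return False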


Just now, Jirokoh said:

I also need to make the restart conditions as efficient as possible: as soon as the rocket gets onto an unrecoverable trajectory, restart. That's going to be a bit tricky to code, but I don't think it's unfeasible.

Playing devil's advocate to that: if you program in what a "bad" trajectory is, then by default it will know what a "good" one is. And then you're basically playing bumper bowling down a prescribed flight path until your rocket makes it. At that point, you've made a standard autopilot with a really bumpy ride.

I think, though, if you start it off with "up is good" and "head east" type constraints, basically what most 'normal' people would understand of basic rocketry, it would be a fair assessment of the AI. You would then be able to reduce the number of completely useless iterations, as even a child knows "rocket go splash" is bad (but exciting). Give it a starting direction and let it learn to optimize the ascent profile. Give it a goal (100 km x 100 km orbit) and a score (amount of dV left), and let it run from there.


4 minutes ago, Gargamel said:

I think, though, if you start it off with "up is good" and "head east" type constraints, basically what most 'normal' people would understand of basic rocketry, it would be a fair assessment of the AI.

That's what I have in mind if it doesn't work. But that feels a bit like cheating, to be honest ^^ And I do also want to blow it up a lot, just for fun. But if it never gets past the blowing-up phase, then I'll obviously have to try something else. I just don't want to discard the fully unsupervised approach without even trying it first :)


1st UPDATE

I'm stuck.
That was fast.

 

Seriously though, I'm trying to connect kRPC to a Google Colab notebook to run all of the code online and keep all my local resources for running KSP. Colab gives you a free GPU to use, which is really the reason I'm trying this. So I've tried doing this:

!pip install krpc
import krpc

# 'ip' holds the public IP address of the machine running KSP
conn = krpc.connect(name='Web testing', address=ip, rpc_port=50000, stream_port=50001)

With my IP address defined just above that last line, and a websocket server launched in kRPC with that same IP address, RPC port and stream port. Basically nothing happens, and eventually the notebook just tells me 'Connection timed out'. (No issue installing kRPC with pip or importing it; that works fine.)
I think I'm missing something here. Does someone have any idea what I could be doing wrong?
I should mention I don't really know much about servers and protocols; I'm entering uncharted territory here.


Machine Learning is all about fitting coefficients and constants in an equation, to ensure that the resulting equation produces optimally satisfactory results.

So the question of how well the "AI" will learn depends on what kind of equation you start with: is it something very far from ideal, or something fairly close to the ideal result? AI learning isn't far removed from human learning, apart from the speed at which it can do it (precision is a matter of calculation, not learning).


@Jirokoh Have you checked to see if your firewall is getting in the way? You might have to forward those ports. Also, I'd try running it all on your KSP machine first; that will at least isolate the problem.

Have you got the server all set up like the screenshot? https://krpc.github.io/krpc/internals.html#server-performance-settings

This is the KRPC Discord: https://discord.gg/YbSC5R We're not super active, but you usually get a solid answer


(2 weeks later...)

A little update, since it's been a few days since I last posted here.

I have been talking about this project with two friends of mine, one being @Dakitess (with whom I've been discussing this for a few months already), whom some of you might be familiar with. He's kind of my KSP guru: whenever I have KSP-related questions, he's the one I go to.
Basically, the conclusion I've reached for the moment is that this requires neural network features that I am not familiar with. I already knew I was entering uncharted seas as far as my knowledge of reinforcement learning goes, but I didn't think I'd have this many things to learn. I was wrong. I have started learning to use OpenAI's Gym, to get an understanding of how it works and what I can do with it. I've learned quite a lot from this, but it's not going to be enough for Bertrand. One of the easiest things you can do is implement a neural network that learns how to stabilize a pole on a sort of cart, which can only go left or right. Once trained, here's what it looks like:

[GIF: the trained network balancing the pole on the cart]

This might not look impressive, but the pole is in a very unstable situation: at the slightest move, it falls. Except here the neural network has learned how to stabilize it, not perfectly, but well enough.

Yeah, nothing impressive, but it's actually not that straightforward to code. Here's the best tutorial I found for this specific example. (The only difference is that the guy uses TensorFlow directly in that article, and I went for Keras, which I find much simpler to use and work with. I also have more experience with Keras and feel much more confident using it. For this project, that should work, hopefully.)

Let me explain to you how this works:

Step 1: We tell the agent (that's the program that takes the actions, here either left or right) what actions it can take in the environment (here, the game) and how to calculate its score: you gain points for each second the pole stays up, and if it tilts beyond 15 degrees, you lose. The aim is to maximize this score, i.e. keep the pole straight for as long as possible.

Step 2: Get some initial data to train our neural network on. This is pretty easy in this case: just ask the agent to take random actions until the pole falls. Since it takes random actions, it usually lasts one or two seconds before the pole tilts more than 15 degrees, but we don't really care. We just want to see what each action does. We keep track of every action the agent takes and the resulting position and velocity of the cartpole.

Step 3: Feed all that data to the neural network, telling it to figure out how to maximize the score (here, keeping the pole up as long as possible). That's what neural networks are good at: finding patterns that maximize (or minimize, depending on how you see it) a certain function. A very basic neural network is enough here. At the end of this stage, the neural network is trained and ready to make predictions.

Step 4: At each time step, take the position of the cart and the pole, give it to the neural network as input, and ask it to predict the best move to make (either left or right) to keep the pole balanced. We take that move and observe the result, which gives us another state. We feed that state to the neural network again for a prediction of the best action to take this time. We do this for every time step, and voilà!

(Step 5:?

Step 6: Profit)
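For those who want to see code, here's a condensed sketch of steps 1-4 as I understand them from that tutorial (using Gym's cartpole environment and a small Keras network; the hyperparameters here are arbitrary):

# Condensed sketch of steps 1-4: random play to gather data, train a tiny
# Keras network on the good episodes, then let it drive the cart.
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

env = gym.make('CartPole-v0')

# Step 2: play randomly, keep only the episodes that did reasonably well.
X, y = [], []
for _ in range(5000):
    obs, memory, score, done = env.reset(), [], 0, False
    while not done:
        action = env.action_space.sample()           # random action
        memory.append((obs, action))
        obs, reward, done, _ = env.step(action)
        score += reward
    if score >= 50:                                  # keep the "lucky" runs
        for o, a in memory:
            X.append(o)
            y.append([1, 0] if a == 0 else [0, 1])   # one-hot encoded action

# Step 3: a very basic network mapping state -> action.
model = Sequential([
    Dense(24, activation='relu', input_shape=(4,)),
    Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(np.array(X), np.array(y), epochs=5)

# Step 4: at each time step, ask the network for the best move.
obs, done = env.reset(), False
while not done:
    env.render()
    action = int(np.argmax(model.predict(obs.reshape(1, -1))[0]))
    obs, _, done, _ = env.step(action)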

So there we have it: a neural network trained to steer a cart to keep a pole steady. That took me about two evenings to figure out (okay, shamelessly copy, but adding and testing new things along the way to see how different pieces of code worked and to understand the ins and outs of it).

While that's pretty cool (okay, maybe not; I find it cool at least, especially once you've worked a few hours on it), it's not really teaching a rocket to fly, is it? Well, nope. The big difference here is that the cartpole's best action only depends on the state we are in right now, so it's actually pretty easy to figure out: we could take a picture of the cart and ask the neural network what to do, because nothing depends on the previous states the agent was in. That's not going to be as easy with a rocket, because each state of the flight is different: the atmosphere's properties change as we go higher, air resistance changes with our speed and altitude, and the agent itself (here, the rocket) changes mass as the flight goes on and fuel burns. So we need some sort of time dependency, and that's what I don't know yet. It's where I need to set this project aside for some time, while I go learn how those work, and even more importantly what we can do with them and how to use them. (For people interested, these seem to mostly be LSTM layers. I'm also looking at the A3C algorithm, since it seems to be pretty much the state of the art and what the guys who did this before me used. But that requires multiple instances of the game at once, so that's another challenge. Even though it shouldn't be too complicated to run multiple instances of the game at once, I don't know if my little laptop will be able to handle it.)
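To give an idea of what such a layer looks like in Keras, here's the smallest possible sketch of a network with an LSTM reading a window of recent flight states (all the dimensions here are made-up placeholders):

# Minimal sketch of a time-aware network: an LSTM reads a short window of
# recent flight states. All dimensions are made-up placeholders.
from keras.models import Sequential
from keras.layers import LSTM, Dense

WINDOW = 10       # how many past time steps the network sees
STATE_DIM = 8     # e.g. altitude, speed, pitch, fuel, ...
N_ACTIONS = 6     # the six control "buttons"

model = Sequential([
    LSTM(32, input_shape=(WINDOW, STATE_DIM)),   # summarizes recent history
    Dense(N_ACTIONS, activation='softmax'),      # one score per action
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()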

I'm also working on other machine learning projects at the moment, and I'm looking for a 6-month internship too, so this project isn't getting my full attention. But I'm still super motivated to work on this, and I do believe it's feasible!
I'll let you know if there's any other major step forward, but it's probably going to take me multiple weeks to get a full grasp of these time-dependency layers, and I'm also learning reinforcement learning as I go. I'm pretty new to this type of AI, but it's super interesting to learn so much, especially with a project like this to apply it to! Basically, take a seat and don't expect to see anything too soon! I still hope this is at least interesting to follow :)

 


Question

Would you guys be interested if I streamed some of the tests I am doing to try to get this AI to fly?

 


These would obviously be a bit messy, with a lot of things going wrong, but I can promise explosions, at least. I'm not sure how this would go, as I've never done it before, but I'm thinking about it. What do you guys think?
Let me know if you'd be interested, and also if you wouldn't be!

I'd still keep updating this post here each time I actually make some meaningful progress.


12 hours ago, Jirokoh said:

Question

Would you guys be interested if I streamed some of the tests I am doing to try to get this AI to fly? 

These would obviously be a bit messy, with a lot of things going wrong, but I can promise explosions, at least. I'm not sure how this would go, as I've never done it before, but I'm thinking about it. What do you guys think?
Let me know if you'd be interested, and also if you wouldn't be!

I'd still keep updating this post here each time I actually make some meaningful progress.

Should be nice to see, but while the AI is in the learning process there will be a lot of iterations between attempts that look almost the same as the previous one. To make a video interesting to watch, you would have to edit it quite a lot: cut the near-identical attempts, overlay the attempt number to show the AI's progress, and keep only the interesting flights where it's visible the AI has made some progress, maybe with one or two attempts before and after each "successful" flight. Maybe speed up the video between two "successful" flights, to better show the progression.

You would probably need to compress two weeks of recordings into 10-15 minutes of video to make it interesting to watch. While AI programming is a relatively new and interesting topic overall, watching every single learning step can be tedious, depending on how fast the AI is capable of learning.


18 hours ago, Jirokoh said:

Would you guys be interested if I streamed some of the tests I am doing to try to get this AI to fly?

As mentioned, compression would be required. Maybe, if you're a tricky enough video editor, overlaying the various attempts in a single shot... But I'd be curious to see how it goes.


I am totally aware of that; I've been training models for a few months now, and I know it's not the most interesting thing ever to watch. It's not really the learning itself I'd like to stream, because there's nothing to do, or even really watch, during that time. It would rather be the making of the program, and testing it to see if it has correct access to all the controls, before actually training it. During learning I'm not even at my computer, I just let it do its thing, because it's mostly computation.

I think it could also be a good opportunity to explain to people what I'm doing, and maybe answer a few questions if people have any. While neural networks and AI are big buzzwords at the moment, I do think there's a lot of confusion around them, and this could be a good opportunity to explain the little I know to people who are interested.


I just want to give this a try, maybe it's not going to work out, we'll see :)

Editing requires a whole different level of skills that I don't have, and a lot of time that I don't know if I want to spend. That's why I was thinking about streaming.


16 hours ago, Jirokoh said:

It would rather be the making of the program, and testing it to see if it has correct access to all the controls, before actually training it. During learning I'm not even at my computer, I just let it do its thing, because it's mostly computation.

I think it could also be a good opportunity to explain to people what I'm doing, and maybe answer a few questions if people have any. While neural networks and AI are big buzzwords at the moment, I do think there's a lot of confusion around them, and this could be a good opportunity to explain the little I know to people who are interested.

That is a different kind of animal to show. When you mentioned streaming, I assumed you would be streaming the AI's flights day and night as it learned to fly.
It's understandable that you don't want to do any video editing, and streaming is a much easier solution for you. Most streaming platforms let you record the stream for later use; it would be a good idea to record it, since a lot of KSP community members live in different time zones around the globe. Not everyone will be able to watch the live stream, but they might want to watch the recording later.

If you're going for live streaming with explanations of how everything works, I can only suggest preparing yourself and your materials in advance. Write down some guidelines for yourself, what you want to show and in what order, to make the whole session easier for you and to not miss any important details. Try to aim for a streaming session of about 30 minutes to 1 hour max; if it goes longer than that, people might start losing focus and interest.

But yes, while I have some blurry idea of how a neural-network-based AI works, I never studied it in more detail, and it would be interesting to see how it can be set up to control a KSP rocket, even if it's not a full success. Humans are not too different from AI in that regard, we also learn from mistakes, so there's nothing to be ashamed of if it doesn't go as you wanted on the first attempt. Go for it and learn something from it.


That's pretty much the plan: having some idea of what I'm doing and where I'm heading, to try to make it even remotely interesting to watch.
It's going to be a bit of trial and error though, since I've never done this before, but it sounds appealing :)

I'll let you guys know how it goes!


If you're automating everything, I'd look into automating the recording as well. Maybe set it up to record every 100th attempt, or only record the next flight after a milestone is reached or a setback occurs (like a previous milestone not being reached).


On 4/16/2019 at 11:28 AM, 5thHorseman said:

If you're automating everything, I'd look into automating the recording as well. Maybe set it up to record every 100th attempt, or only record the next flight after a milestone is reached or a setback occurs (like a previous milestone not being reached).

That's maybe going to be another step afterwards. Recording really isn't the priority; that's why I'm thinking of live streaming, not much effort :D I really want to focus on the machine learning itself :)


On 4/18/2019 at 12:24 PM, FleshJeb said:

I had the thought that someone must have already come up with a solution to visualizing the machine learning process...BAM:

https://www.quora.com/What-are-the-best-visualizations-of-machine-learning-algorithms

It looks like there are several Python libraries available.

Just saw your post now! Thanks for sharing.
I didn't mention it, but I'm most probably going to be using TensorBoard, sticking to what I know. It's a great tool for looking into models in TensorFlow / Keras.

No news on this project for the moment; I have exams coming up, and I'm also spending a lot of time preparing for internship interviews. Hopefully once that's over I can get back to KSP.

