keks
Everything posted by keks
-
Well, I do not really know. You could add the output of 'tree -ahF' for mods that ship non-GameData files, for easy reference of the directory layout. I really can't think of anything else right now. Just grab all the information you can get and store it in some filterable format. Better to have too much information than too little.
-
We certainly do not need every mod out there, just the most common ones and maybe a few uncommon ones as well. As you are putting your hands on a lot of mods, could you also check which license they use? It would be great if you could dump your data into a spreadsheet so we can filter out the information we need, when we need it. Also, it looks like I may have some free time to spend this weekend. I'll go and try getting the prototype fully functional with the specs we currently have. Edit: I created a few milestones so you guys get an idea of how I planned to approach this. I'm also currently creating a few tickets so you get an overview of what needs to be done.
-
The repository: There really is nothing special about it. What we call "repository" here actually is just an empty git repository pointing to other git repositories as submodules. Said submodules must then include a file 'meta.yaml' in their root directory providing all the information about the actual mod. Everything else does not matter to us in terms of the actual repository.

The description files aka meta-data: We discussed this a bit, but it definitely needs further investigation and tweaking. I'd suggest starting with a minimal set of data and extending it as needed. For a basic prototype all we really need are the download URLs, the archive contents and their respective checksums.

The client applications: This is what the prototype I put up earlier represents. Here all the 'magic' happens. The other two points are exclusively data-only. Currently I'm not working on this project as I simply do not have spare time to do so.

As I said earlier, the meta-data structure needs further work. I'd actually like to dump it and start over again with the following structure:

id: "authorname-modname"
name: "Human readable name of the mod"
release:
  version: "1.0.7.3"
  download:
    - url: "http://some.domain.tld/archive.zip"
      md5: "md5-checksum-of-archive.zip"
  contents:
    extra:
      - name: "some/documentation.pdf"
        md5: "md5-checksum-of-some/documentation.pdf"
      - name: "another/file.doc"
        md5: ...
    data:
      - name: "some/part.cfg"
        md5: "md5-checksum-of-some/part.cfg"
        target: "target/to/install/some/part.cfg"

That's actually all the information we need for basic functionality. The 'contents' structure got reorganized to support categories as requested by someone here earlier (too lazy to search for the post), so you can choose whether you, for example, want to install additional data such as artwork, documentation, alternate textures, etc. The reason why I use lists instead of dictionaries for the file names is that some YAML parsers have problems with keys exceeding a given length in characters.
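A minimal sketch of what a client-side sanity check for this proposed structure could look like. This assumes the YAML above has already been parsed into a plain dict (e.g. via a YAML library's safe-load function); the sample data and the function name `validate_meta` are made up for illustration, not part of any spec.

```python
# Sketch: validate the proposed meta-data structure after YAML parsing.
# The sample dict below is hypothetical and mirrors the layout proposed above.

def validate_meta(meta):
    """Return a list of problems found in a parsed meta.yaml dict."""
    problems = []
    for key in ("id", "name", "release"):
        if key not in meta:
            problems.append("missing top-level key: %s" % key)
            return problems  # cannot validate further without the basics
    release = meta["release"]
    if "version" not in release:
        problems.append("release is missing 'version'")
    for dl in release.get("download", []):
        for key in ("url", "md5"):
            if key not in dl:
                problems.append("download entry missing '%s'" % key)
    for category, files in release.get("contents", {}).items():
        for entry in files:
            if "name" not in entry or "md5" not in entry:
                problems.append("incomplete entry in category '%s'" % category)
    return problems

sample = {
    "id": "authorname-modname",
    "name": "Human readable name of the mod",
    "release": {
        "version": "1.0.7.3",
        "download": [{"url": "http://some.domain.tld/archive.zip",
                      "md5": "d41d8cd98f00b204e9800998ecf8427e"}],
        "contents": {
            "data": [{"name": "some/part.cfg",
                      "md5": "d41d8cd98f00b204e9800998ecf8427e",
                      "target": "target/to/install/some/part.cfg"}],
        },
    },
}

print(validate_meta(sample))  # an empty list means the file looks sane
```

Having such a check in the dev tools would catch malformed meta.yaml files before they ever land in the repo.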
Populating the repo at this point is kinda pointless IMHO, as we might need to adjust stuff and possibly break backwards compatibility at some point. What actually can be done is looking out for mods that need "special treatment", meaning mods that (because of what they do) do not follow the usual pattern of "just dump these files into GameData/ and you're done". KMP and StandaloneMapView are examples of this, as far as I remember. Yes, but to combine efforts on something, there actually needs to be something to start with in the first place. That's why I wanted to put out a basic prototype first, which I did. The sources are out there on GitHub, so feel free to mess with them and, for example, implement the new data structure I mentioned above. If you have a GitHub account, feel free to send me pull requests and I'll happily merge them. I will also create a few tickets and milestones in the issue tracker on GitHub, so we can easily assign tasks to people. For this you'll have to send me your GitHub account name though, so I can give you write access to the repo.
-
So here you go: https://github.com/ksprepo-alt/kspmm-prototype Be careful when using it, as it did not undergo any tests at all! Kittens may die when you use it! Also be sure to set config/cache_dir and config/ksp_base_dir according to your environment. The code is an undocumented mess, but it should be pretty easy to understand.
-
As I said, as long as their API provides all the information we need, we can of course use it to easily feed the repo. We could for example query their API every X minutes for new uploads, or even better, they push accumulated lists of changes to us. We could then easily generate the mod repo, meta-data, etc. and update the index automatically. But that is something we can talk about once there's a more or less stable codebase to build upon. I never said it's not a valid option. In fact I said I'd love to see integration (meaning sharing data in both directions) with them at some point. I'm currently building a very basic command-line prototype in Python à la 'apt-get'. So far you can download/update your local index, search for mods by name and install/update them. As mentioned earlier, my job got a bit stressful lately as we managed to acquire two (for us) unusually big projects at a time, hence we're a bit short on manpower right now. I usually say that I might have something on Sunday, but in fact I have had only a couple of hours over the last few days for working on my reference implementation. So right now, I really cannot say when I will have something to show you... And it's past 02:00 again already...
-
Sadly, I did not have the time to write a technical specification. Looks like my rough outline/summary and the prototype (once it's there) will have to suffice until I have some more free time to spend. There's a link to the summary in my initial post, so it's easier to find. Well, I think most developers are used to fighting windmills... You don't develop stuff that's great, you develop stuff that gets paid for...
-
As I already posted earlier, I'd love to see integration with KerbalStuff. It does not matter where the data comes from (a human being, or some API), as long as it is complete. The reason why I, personally, prefer a free repository over some third-party site is that it can easily be continued by someone else when the original maintainers disappear for some reason. Think about what would happen if the KerbalStuff guys decided to suddenly shut down their project. Even with the source code available, it would take some time to get everything back up and running. Not to mention the data loss... With a solution based on a GitHub repo, it's a simple push of the [fork] button, and you're done. Also, it's completely free. Nobody needs to pay for hosting that site, while KerbalStuff (hosted at digitalocean.com/pricing/) obviously does. Most of the modders host their sources and binary releases on GitHub anyway. All these people would have to do is create a tag, say 'latest', and let our repo automatically update our index to the commit said tag points to. Done. That would even be less work than any other solution out there yet, including the forums and KerbalStuff. The only "work" that has to be done is the initial import, or when the file structure changes; then the meta-data most likely has to be updated manually. Simple updates with no fundamental changes to the directory structure can be processed automatically, including bumping the version, as long as they tag their releases properly. If we and the KerbalStuff guys then work together, they can easily integrate our repo into their site, if they wish to. Please be patient and wait for the first prototype. As I already said, I'm quite busy with my job recently, so I don't have that much time to work on the prototype application. Also I have some other things to do as well... damn social life... I might have something to show by Sunday, but I cannot promise that...
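A rough sketch of the "follow a tag" idea, assuming GitHub's public tags endpoint (GET /repos/{owner}/{repo}/tags), which returns a JSON list of tag names with the commit each one points to. The sample response and SHA values below are made up; in practice the JSON would come from an HTTP request to that endpoint.

```python
# Sketch: given the JSON body of GitHub's "list repository tags" endpoint,
# resolve which commit a tag such as 'latest' points to. The sample
# response below is hypothetical.
import json

def commit_for_tag(tags_json, tag_name):
    """Return the commit SHA a tag points to, or None if the tag is absent."""
    for tag in json.loads(tags_json):
        if tag["name"] == tag_name:
            return tag["commit"]["sha"]
    return None

sample_response = json.dumps([
    {"name": "v5.2",
     "commit": {"sha": "aa8720d9366fefda8f1785e2340450fd7f6c1d92"}},
    {"name": "latest",
     "commit": {"sha": "0123456789abcdef0123456789abcdef01234567"}},
])

print(commit_for_tag(sample_response, "latest"))
```

An index updater could poll this per mod repo and bump the submodule whenever the 'latest' tag moves to a new commit.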
-
Well, actually we already made a decision here: C#. Simply because of two reasons:
* No extra dependencies needed on the client, because .Net/Mono is already there
* Someone needs to maintain the repo and the tools
As this community is focused around C#, because KSP mods are written in C#, the chances of finding someone in here willing to help develop and maintain a C# project are by far better than finding a, let's say, Haskell developer. PS: I did not have time to complete my prototype yet, as I've been quite busy with my job lately, but I should have a working prototype at the end of the week.
-
And how would you handle the compressed contents then? You'd have to unpack the whole archive and then sync it. Also, rsync cannot easily handle conflicts or our "hotfix" architecture intelligently. Don't get me wrong, I love rsync and use it on a daily basis, but I see no reason to introduce yet another dependency here, and creating such a small class handling the file patching is really not that difficult. Well, every language has its specific field in which it is good. Perl is good at efficiently handling/parsing text files on *nix systems, but was never designed to be used for building standalone executables. That's something people hacked in at a later point, and it is more of a toy than anything I'd use in production unless I really have to. Also, I'd not say that plain Perl fits your "easy-and-fast" description. Writing object-oriented code in Perl is nothing anybody out there would like to do for bigger projects without using something like Moose, which in turn adds a ton of overhead, breaks on a regular basis thanks to B.pm and the like, and introduces strange bugs that are almost impossible to debug without crawling through Moose itself and its dependencies. I myself moved away from using Perl for bigger projects some time ago because of this issue. I still use it for a lot of sysadmin stuff.
-
My fault. Sorry for that! I'd like to keep external dependencies as minimal as possible, because they need maintenance. When sticking to language features only, we can rely on them not breaking that easily in a multi-OS environment. Besides that, we could also rely on git's implementation here when using a C# implementation/wrapper. I never said that. But I already mentioned several times that I do not think it is a good idea to introduce further dependencies when there effectively is absolutely no need for it. Why ship (and maintain!) another runtime when .Net/Mono is already available on all target systems? Because when not checking all files for manipulation, your updates are not consistent. Maybe a user manipulated a file that is not part of the update diff, or does not want a file to be overridden/removed when updating a mod. And I never said we need to always calculate all checksums - it's just a worst-case scenario for people updating old installations. We also need to consider files created at runtime that are not part of the installation package. Do we delete, ignore or patch them? Stuff like that. Also, when we know which files changed and which did not, we can apply differential updates, effectively reducing update time and fragmentation. No need to delete everything and unpack it again. Nope. But I'd love to see it integrate Toolbar via some API some day, notifying you of updates. But that's something to look at when things are working as expected. Did you ever try that in real-life environments? I did (have to...) do that for Perl, Python and Ruby applications. It was a nightmare to get it working reliably. Freezes, segfaults, dependencies that are determined at runtime only, and stuff like that. Not to mention the bloat of always having to ship a complete interpreter and a ton of libraries in your executables. Also, efficiently debugging such applications at runtime is nearly impossible.
If you still think it's a good idea, go try to pack up an application of your choice and see how it runs on different systems. (And here I'm talking about real applications, not "Hello World" stuff.) Well, we already agreed on creating a basic prototype. But someone has to build it... As I said, I currently do not have the time to build it right now. When I find the time to do so, I will build it, but I cannot say when that will be. Maybe next weekend. Also, nobody stops you guys from implementing the basic spec I posted earlier (and linked in the first post). Everything one needs to know to implement a prototype has been discussed in this thread. When replying to this thread, I am usually actually doing stuff for my daytime job, and just try to reply to your posts without having you guys wait too long. So again, if you feel like you want to implement a prototype application, please feel free to do so! The more references we have, the better. I will implement mine as soon as I find the time to do so.
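The differential-update idea above can be sketched in a few lines, assuming we have the checksum manifests discussed earlier: a mapping from installed file path to MD5 for the current installation and for the new release. The function name `plan_update` and the sample checksums are made up for illustration.

```python
# Sketch of a differential update: compare the installed manifest against
# the release manifest and decide what to download, delete, or keep.

def plan_update(installed, release):
    """Split files into download/delete/keep sets based on checksums."""
    # fetch files that are new or whose checksum differs from what's installed
    download = {p for p, md5 in release.items() if installed.get(p) != md5}
    # files present locally but no longer part of the release
    delete = set(installed) - set(release)
    # files already present with the correct checksum
    keep = set(release) - download
    return download, delete, keep

installed = {
    "DeadlyReentry/DeadlyReentry-RealChutes.cfg": "old-checksum",
    "DeadlyReentry/Sounds/gforce_damage.wav": "01ea0cb76541c1f16f8c71ba09d04098",
}
release = {
    "DeadlyReentry/DeadlyReentry-RealChutes.cfg": "03d2659490e744b2641ca47ebe6e93f8",
    "DeadlyReentry/Sounds/fire_damage.wav": "bbe1bd3cb63cba5b630ae9c82bc2f011",
}

download, delete, keep = plan_update(installed, release)
print(sorted(download))  # changed or new files to fetch
print(sorted(delete))    # files no longer part of the release
```

User-modified files show up in `download` here; a real client would want to prompt before overwriting them instead of blindly re-fetching.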
-
I fully agree with Spyhawk here. Python forces you to write readable code and gets rid of all the unnecessary braces. I mean, you (should) indent your code anyway, no matter if you have to or not. So I see no reason not to get rid of redundant braces that basically have no meaning at all. Also, the GIL can be a good thing. For example, you don't have to care about synchronization that much, because Python threads do not run in parallel anyway. But that's a completely different topic and should not be discussed in here, IMHO. I already explicitly pointed that out in my earlier post. This benchmark is three years old and is not representative at all. It tests one specific operation which shows a fundamental problem with Java's memory management, and shows how fast it could actually be if implemented right, like Perl does. This issue is nicely explained on the site I linked. The reason I chose this benchmark is that it pretty much corresponds with my personal experience with those languages. I've been using those languages for quite some time now, for projects of all kinds and scales, and I can safely say that there probably is no better language out there than Perl when you have to do lots of string manipulation. On the other hand, Perl is pretty much unusable in big projects because of the limitations of the language features it provides out of the box. You almost always want to rely on some hacks like Moose (or its derivatives), which frequently break because of their dependency on B.pm and the like. Also, running Perl on Windows is a big mess. Python on the other hand is pretty portable, but very slow when multi-threading. In Python, when for example implementing search algorithms in a multi-threaded approach, they are actually tremendously slower than a single-threaded implementation. This is caused by how Python handles threads and their synchronization, and of course the GIL itself. That's actually not correct.
Perl does not have the concept of arrays as you know them from, let's say, languages like C++. Instead they basically are an over-sized array of pointers to multiple lists containing the actual data. This way all basic array operations (insert, fetch, push, pop, shift, unshift, ...) can almost always be done in O(1). There are only a few edge cases where the complexity grows to O(n). So actually Perl handles "arrays" quite well. The reason why Java does so badly in the benchmark I linked is that it has to allocate a new String() object after every operation and destroy the old one. This takes a lot of time. In addition, it does not re-use the "freed" memory of destroyed objects. Performance actually IS of concern here. When updating mods we have to calculate checksums for every file in the GameData directory tree. Depending on the actual implementation, this can take a few seconds or a couple of minutes. AFAIK Python provides C implementations of all major checksum algorithms, but I do not know how portable they are. Pure Python implementations would take way too much time, especially on many small files. When using C# we could compute multiple checksums at the same time, taking advantage of multiple CPU cores and I/O wait times. We already agreed to implement our prototype in Python. We also agreed on not introducing additional dependencies client-side, so choosing Python for the actual release is out of the question. .Net/Mono is already there, because KSP runs on it, so let's stick to that. It would (maybe) also attract more contributors, as mods are written in C#, not Python.
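For reference, checksumming a directory tree can be sketched as follows, even in the Python prototype. CPython's hashlib digests are C implementations (and release the GIL while hashing large buffers), so a thread pool can overlap hashing with I/O waits to some extent. The pool size and helper names are illustrative, not a spec.

```python
# Sketch: compute an MD5 manifest ({relative-path: md5}) for a directory
# tree, hashing several files concurrently with a thread pool.
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def md5_of(path, chunk_size=1 << 20):
    """Stream a file through MD5 so large files don't fill memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_tree(root, workers=4):
    """Return {relative-path: md5} for every file below root."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sums = list(pool.map(md5_of, paths))
    return {os.path.relpath(p, root): s for p, s in zip(paths, sums)}
```

Running something like `checksum_tree("GameData")` against a KSP install would produce exactly the kind of manifest the update logic needs.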
-
Well, I am no Blender user, but AFAIK Blender itself is written in C/C++ and only the scripts are Python. And that's exactly where Python (or, even more often, Lua) is often used, and where it can play to its strengths. When writing a GUI you usually divide your application into at least two (mostly) independent parts: the GUI and the logic. Both run their own threads, so some long-taking operation in the logic does not block the GUI, but you can still easily share data between logic and GUI. Python cannot easily do this, because it can only run one thread at a time due to its GIL [1]. That causes the GUI to freeze, even when you run it in its own thread. The only way around this is to use multiple processes, which in turn cannot easily share data anymore... Edit: For the sake of completeness: the above is actually not 100% accurate. You could also use external (C/C++) libraries for displaying your GUI and manually control the GIL then. But then you must make sure your GUI code never ever manipulates Python memory in any way, or bad things will happen... And that's something you cannot guarantee that easily if you still want to be able to interact with your GUI. End Edit. [1]: https://wiki.python.org/moin/GlobalInterpreterLock Disclaimer: when talking about "Python" I actually mean the "CPython" implementation. CPython is pretty fast, but not even close to a native implementation. That's why many Python modules are written in C. To get a rough idea of the speed differences between various languages, take a look at this: http://onlyjob.blogspot.de/2011/03/perl5-python-ruby-php-c-c-lua-tcl.html . But keep in mind that "speed" is also very dependent on the actual implementation and algorithms used. That's why Perl scores so high at the site I linked. Perl makes heavy use of highly optimized native modules (Perl modules actually written in C). It also was created to efficiently handle strings, and the "benchmark" used at the site I linked does measure string manipulation time. Perl has had 25 years to optimize for exactly that use case.
-
Python definitely is a nightmare to build GUI (or basically any multi-threaded) applications with. Python does not support running multiple threads in parallel due to its GIL. So when the application is working, the GUI hangs. The only way around this is multi-processing, which has problems of its own... Also, writing it in Python would add a dependency on the Python interpreter and libraries. Even if cx_freeze were working reliably (which it does not), it would be, let's say, "adventurous". I really prefer building the final application in a proper language which is properly supported on all of our target platforms without the need for strange hacks and workarounds. We'd have to rewrite the application anyway, because the prototype is just meant to be a prototype. Rewriting an application from scratch usually leads to cleaner, more sane code, as you won't (or should not) make the same mistakes again that you did when writing your prototype. Edit: However, if you guys really want to use Python instead of C#/Mono, I could accept IronPython, which is a .Net/Mono implementation of the Python specification. IronPython compiles Python code down to .Net bytecode which in turn can be run using the .Net/Mono runtime. But that does not really feel right to me. Mods are written in C#, so our application should be written in C# too. No need to introduce a new language here, IMHO.
-
Well, there actually is a little difference. Official packages only contain the original release archive. Unofficial patches contain the original release archive and the unofficial patch-set that gets applied over the original release. I explained my proposal earlier: http://forum.kerbalspaceprogram.com/threads/85989-Combining-efforts-on-proper-mod-management-framework-tools-platform?p=1378736&viewfull=1#post1378736 From the user's perspective there is no difference between them. That's correct.
-
The problem is that most devs do not include a proper license in their release archive. Instead they mention the license on the forums only. That's a problem for us, because when not shipping a proper license, we actually violate the dev's copyright. As we agreed on not modifying the original release, shipping it outside the release archive really is the only way that's left.
-
That's actually exactly what I am doing in my examples ;-) Don't mix up the license of the repo with the mod license. The mod license is below the repo license, under the 'legal/original' key. PS: I also just noticed that I messed up some whitespace there... 'source' is actually meant to be on the same level as the other keys under 'original'. It actually is a requirement to include a proper license when redistributing content. For GPL-licensed content, you have to either ship the full license text or provide a link where users can download the full text. I don't know about CC and BSD, but I'm pretty sure they handle it in a similar way. Also, for example, BSD-2-Clause (and others) requires you to include the original license notice. The same goes for the MIT license: you have to include the original copyright/license notice. I did not make this clear earlier, but in my example file the plaintext/url entries are mutually exclusive, though one of them is always required. Mods using unmodified license texts can simply supply the license name and URL, whereas modified licenses must provide the full license text and/or the original license they build upon. For example, some developers here use the BSD-2-Clause, but add further restrictions. Instead of putting a plaintext license into the meta-file, you could also put your license into a file and then use the 'url' key to point to that file. It really does not matter, IMHO. But the fact is, we must include those unchanged license/copyright notices and must provide a proper full-text version of said license. This is a requirement most licenses impose upon derived work, and I see no reason not to make this mandatory for all mods. I'd rather have one link too many in there than always having to figure out whether I need it or not.
-
And how would you then handle custom licenses? Many authors add further restrictions to their license or simply create their own license. Proper licensing is a must, and nowadays 1 kByte of data really is not that much. By the way, I in fact did only put in the name and version of the license when an unmodified standard license was used:

license:
  - name: "CC BY-NC-SA 4.0"
    url: "http://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.txt"

Well, it actually is not really a lot of work. Everything can be done in less than 2 minutes if you're quick at typing. But as I already said: in addition to the generation of the meta-data file, uploading, tagging, etc. can of course also be done via a simple script.
-
That's where the dev tools come into play. The meta.yaml file can mostly be generated automatically using a simple script. This would also greatly reduce the possibility of messing things up when creating/updating the meta-data. You're correct. Some users may prefer to have them installed to their usual location. We could split up contents into categories and let the user decide which categories to install:

contents:
  core:
    '03d2659490e744b2641ca47ebe6e93f8': 'DeadlyReentry/DeadlyReentry-RealChutes.cfg'
    'bbe1bd3cb63cba5b630ae9c82bc2f011': 'DeadlyReentry/Sounds/fire_damage.wav'
    '01ea0cb76541c1f16f8c71ba09d04098': 'DeadlyReentry/Sounds/gforce_damage.wav'
  crafts:
    ...
  docs:
    ...

This way we can easily add categories over time. MD5 is a bit faster than SHA-1. Given a large number of mods/files and a slow client PC, this could greatly reduce computation time when updating mods. MD5 produces 128-bit sums (a collision probability of roughly 1/(2^128)) and SHA-1 slightly longer 160-bit ones. As we only use it as a checksum (and not as a crypto-function), it really does not matter whether we choose MD5 or SHA-1. If we really encounter problems with collisions, we can still switch to a longer checksum. Now I really have to get my sleep. Have a good night.
-
It could look something like this. Original v5.2 file: https://github.com/ksprepo-alt/deadlyreentry/blob/aa8720d9366fefda8f1785e2340450fd7f6c1d92/meta.yaml

[...]
download:
  - url: 'https://github.com/NathanKell/DeadlyReentry/releases/download/v5.2/DeadlyReentryCont_v5.2.zip'
    md5: 'f70f51778cb8c026bbccd07523016477'
  - url: 'https://github.com/ksprepo-alt/deadlyreentry/releases/download/v5.2/DeadlyReentryCont_v5.2.zip'
    md5: 'f70f51778cb8c026bbccd07523016477'
contents:
  '03d2659490e744b2641ca47ebe6e93f8': 'DeadlyReentry/DeadlyReentry-RealChutes.cfg'
  'bbe1bd3cb63cba5b630ae9c82bc2f011': 'DeadlyReentry/Sounds/fire_damage.wav'
  '01ea0cb76541c1f16f8c71ba09d04098': 'DeadlyReentry/Sounds/gforce_damage.wav'
  'aa0ed53b8b89a0366c53777ff76e23eb': 'DeadlyReentry/Parts/UP_decoupler_2/model000.png'
  ...
patch:
  - download:
      - url: 'https://github.com/ksprepo-alt/deadlyreentry/releases/download/v5.2-cf-1/5.2-community-fix.zip'
        md5: 'bbe1bd3cb63cba5b630ae9c82bc2f011'
    contents:
      '03d2659490e744b2641ca47ebe6e93f8': 'DeadlyReentry/DeadlyReentry-RealChutes.cfg'
    remove:
      - 'DeadlyReentry/Sounds/gforce_damage.wav'

In this example the patch would be committed to the repo as version/tag 'v5.2+ksprepo-1', '+ksprepo' being our suffix, '-1' being the version counter (I took Debian's approach as a reference here). When the client sees there is an update available, it simply applies everything under 'patch', in the order specified, over the original release. Internally it simply merges 'contents' and all 'patch/.../contents' into one single list (of course also respecting the 'remove' entries) and treats the generated list as a normal release. In the example above that would replace 'DeadlyReentry/DeadlyReentry-RealChutes.cfg' with a new version and delete 'DeadlyReentry/Sounds/gforce_damage.wav'. So we actually do not distribute patched releases, we distribute only differential updates / patches.
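The merge step described above can be sketched like this, assuming the 'md5: path' mappings from the example. The data is abridged from the DeadlyReentry example; the 'old-cfg-checksum' placeholder and the function name `effective_contents` are made up for illustration.

```python
# Sketch: fold every patch's 'contents' and 'remove' entries over the base
# release so the client can treat the result as one ordinary release.

def effective_contents(base, patches):
    """Apply patches in order; later entries win, removals drop files."""
    # invert the 'md5: path' mapping to path -> md5 so a patched file
    # replaces the old entry for the same path
    merged = {path: md5 for md5, path in base.items()}
    for patch in patches:
        for md5, path in patch.get("contents", {}).items():
            merged[path] = md5
        for path in patch.get("remove", []):
            merged.pop(path, None)
    return merged

base = {
    "old-cfg-checksum": "DeadlyReentry/DeadlyReentry-RealChutes.cfg",
    "bbe1bd3cb63cba5b630ae9c82bc2f011": "DeadlyReentry/Sounds/fire_damage.wav",
    "01ea0cb76541c1f16f8c71ba09d04098": "DeadlyReentry/Sounds/gforce_damage.wav",
}
patches = [{
    "contents": {"03d2659490e744b2641ca47ebe6e93f8":
                 "DeadlyReentry/DeadlyReentry-RealChutes.cfg"},
    "remove": ["DeadlyReentry/Sounds/gforce_damage.wav"],
}]

print(effective_contents(base, patches))
```

The result contains the patched .cfg with its new checksum and no longer lists the removed sound file, exactly the "generated list" the client would install from.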