
keks

Members
  • Posts: 64
  • Joined
  • Last visited

Reputation: 5 (Neutral)

Profile Information
  • About me: Rocketry Enthusiast
  1. Well, I do not really know. You could add the output of tree -ahF for mods that ship non-GameData stuff, for easy reference of the directory layout. I really can't think of anything else right now. Just grab all the information you can get and store it in a filterable format of some kind. Better to have too much information than too little.
  2. We certainly do not need every mod out there, just the most common ones and maybe a few uncommon ones as well. As you are getting your hands on a lot of mods, could you also check which license they use? It would be great if you could dump your data into a spreadsheet so we can filter out the information we need, when we need it. Also, it looks like I may have some free time to spend this weekend, so I'll go and try to get the prototype fully functional with the specs we currently have. Edit: I created a few milestones so you guys get an idea of how I planned to approach this. I'm also currently creating a few tickets so you get an overview of what needs to be done.
  3. The repository: There really is nothing special about it. What we call "repository" here is actually just an empty git repository pointing to other git repositories as submodules. Each submodule must then include a file 'meta.yaml' in its root directory providing all the information about the actual mod. Everything else does not matter to us in terms of the actual repository.
The description files, aka meta-data: We discussed this a bit, but it definitely needs further investigation and tweaking. I'd suggest starting with a minimal set of data and extending it as needed. For a basic prototype, all we really need are the download URLs, the archive contents and their respective checksums.
The client applications: This is what the prototype I put up earlier represents. Here all the 'magic' happens. The other two points are exclusively data-only. Currently I'm not working on this project, as I simply do not have the spare time to do so.
As I said earlier, the meta-data structure needs further work. I'd actually like to dump it and start over with the following structure:

    id: "authorname-modname"
    name: "Human readable name of the mod"
    release:
      version: "1.0.7.3"
      download:
        - url: "http://some.domain.tld/archive.zip"
          md5: "md5-checksum-of-archive.zip"
      contents:
        extra:
          - name: "some/documentation.pdf"
            md5: "md5-checksum-of-some/documentation.pdf"
          - name: "another/file.doc"
            md5: ...
        data:
          - name: "some/part.cfg"
            md5: "md5-checksum-of-some/part.cfg"
            target: "target/to/install/some/part.cfg"

That's actually all the information we need for basic functionality. The 'contents' structure got reorganized to support categories, as requested by someone here earlier (too lazy to search for the post), so you can choose whether you want to install additional data such as artwork, documentation, alternate textures, etc.
The reason why I use lists instead of dictionaries for the file names is that some YAML parsers have problems with keys exceeding a given length in characters. Populating the repo at this point is kinda pointless IMHO, as we might need to adjust stuff and possibly break backwards compatibility at some point. What actually can be done is looking out for mods that need "special treatment", meaning mods that (because of what they do) do not follow the usual pattern of "just dump these files into GameData/ and you're done". KMP and StandaloneMapView are examples of this, as far as I remember.
Yes, but to combine efforts on something, there actually needs to be something to start with in the first place. That's why I wanted to put out a basic prototype first, which I did. Sources are out there on GitHub, so feel free to mess with it and, for example, implement the new data structure I mentioned above. If you have a GitHub account, feel free to send me pull requests and I'll happily merge them. I will also create a few tickets and milestones in the issue tracker on GitHub, so we can easily assign tasks to people. For this you'll have to send me your GitHub account name though, so I can give you write access to the repo.
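To make the minimal meta-data requirements above concrete, here is a small sketch (not part of the actual prototype) that validates a parsed meta.yaml dict against the structure proposed in this post. It assumes the file has already been loaded with something like PyYAML's safe_load; only the key names from the structure above are used.

```python
def validate_meta(meta):
    """Check that a parsed meta.yaml dict has the minimal required fields."""
    errors = []
    for key in ("id", "name", "release"):
        if key not in meta:
            errors.append("missing top-level key: %s" % key)
    release = meta.get("release", {})
    if "version" not in release:
        errors.append("missing release.version")
    for dl in release.get("download", []):
        for key in ("url", "md5"):
            if key not in dl:
                errors.append("download entry missing %s" % key)
    return errors

# Example record mirroring the proposed structure:
meta = {
    "id": "authorname-modname",
    "name": "Human readable name of the mod",
    "release": {
        "version": "1.0.7.3",
        "download": [{"url": "http://some.domain.tld/archive.zip",
                      "md5": "0" * 32}],
    },
}
print(validate_meta(meta))  # → []
```

A client would run this right after fetching a submodule's meta.yaml and refuse to install anything whose description is incomplete.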
  4. Take a look at the first post; there's a link there to a brief summary. If you still have questions after reading the summary, feel free to ask again.
  5. Python 2.7.8
     - GitPython v0.3.2 RC1 (https://pythonhosted.org/GitPython/0.3.2/)
     - PyYAML v3.11 (http://pyyaml.org/wiki/PyYAMLDocumentation)
  6. Information can be fed into the repo from anywhere, as long as there's some kind of API available that provides all the information we need. Curse does seem to provide a JSON API (https://github.com/curseforge/api) from which the repo could theoretically be fed.
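Feeding the repo from such an API boils down to mapping each API record onto our meta-data structure. A minimal sketch follows; note that the input field names (author, slug, title, version, download_url, md5) are assumptions for illustration, not the actual Curse API schema, which would have to be checked against the link above.

```python
def record_to_meta(record):
    """Map a hypothetical Curse-style JSON record onto our meta structure.

    All input field names here are assumptions; verify them against the
    real API (https://github.com/curseforge/api) before relying on this.
    """
    return {
        "id": "%s-%s" % (record["author"].lower(), record["slug"]),
        "name": record["title"],
        "release": {
            "version": record["version"],
            "download": [{"url": record["download_url"],
                          "md5": record.get("md5", "")}],
        },
    }

sample = {
    "author": "Keks",
    "slug": "examplemod",
    "title": "Example Mod",
    "version": "1.0.0",
    "download_url": "http://example.com/examplemod-1.0.0.zip",
}
print(record_to_meta(sample)["id"])  # → keks-examplemod
```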
  7. So here you go: https://github.com/ksprepo-alt/kspmm-prototype Be careful when using it, as it did not undergo any tests at all! Kittens may die when using it! Also be sure to set config/cache_dir and config/ksp_base_dir according to your environment. The code is an undocumented mess, but it should be pretty easy to understand.
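For orientation, the prototype's configuration might look something like the fragment below. Only the two keys named in the post (cache_dir and ksp_base_dir) come from the source; the file layout and example paths are purely illustrative.

```yaml
# Hypothetical config layout; adjust paths to your environment.
config:
  cache_dir: /home/user/.cache/kspmm      # where downloaded archives are kept
  ksp_base_dir: /home/user/games/KSP      # your KSP installation root
```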
  8. As I said, as long as their API provides all the information we need, we can of course use it to easily feed the repo. We could for example query their API every X minutes for new uploads, or even better, they push accumulated lists of changes to us. We could then easily generate the mod repo, meta-data, etc. and update the index automatically. But that is something we can talk about once there's a more or less stable codebase to build upon. I never said that it's not a valid option. In fact, I said I'd love to see integration (meaning sharing data in both directions) with them at some point.
I'm currently building a very basic command-line prototype in Python, à la 'apt-get'. So far you can download/update your local index, search for mods by name and install/update them. As mentioned earlier, my job got a bit stressful lately as we managed to acquire two (for us) unusually big projects at a time, hence we're a bit short on manpower right now. I usually say that I might have something on Sunday, but in fact I have had only a couple of hours over the last few days for working on my reference implementation. So right now, I really cannot say when I will have something to show you... And it's past 02:00 again already...
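The 'apt-get'-style interface described here can be sketched with argparse subcommands; all command and option names below are illustrative, not the prototype's actual CLI.

```python
import argparse

def build_parser():
    """CLI skeleton for an apt-get-style mod manager (names are illustrative)."""
    parser = argparse.ArgumentParser(prog="kspmm")
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("update", help="download/refresh the local package index")
    search = sub.add_parser("search", help="search mods by name")
    search.add_argument("pattern")
    install = sub.add_parser("install", help="install or update a mod")
    install.add_argument("mod_id")
    return parser

args = build_parser().parse_args(["install", "mechjeb"])
print(args.command, args.mod_id)  # → install mechjeb
```

Each subcommand would then dispatch to the index/download machinery, mirroring the update/search/install workflow of apt-get.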
  9. Sadly, I did not have the time to write a technical specification. Looks like my rough outline/summary and the prototype (once it's there) will have to suffice until I have some more free time to spend. There's a link to the summary in my initial post, so it's easier to find. Well, I think most developers are used to fighting windmills... You don't develop stuff that's great, you develop stuff that gets paid for...
  10. As I already posted earlier, I'd love to see integration with KerbalStuff. It does not matter where the data comes from (a human being, or some API), as long as it is complete. The reason why I, personally, prefer a free repository over some third-party site is that it can easily be continued by someone else when the original maintainers disappear for some reason. Think about what would happen if the KerbalStuff guys decided to suddenly shut down their project. Even with the source code available, it would take some time to get everything back up and running. Not to mention the data loss... With a solution based on a GitHub repo, it's a simple push of the [fork] button, and you're done. Also, it's completely free. Nobody needs to pay for hosting that site, while KerbalStuff (hosted at digitalocean.com/pricing/) obviously does.
Most modders host their sources and binary releases on GitHub anyway. All these people would have to do is create a tag, say 'latest', and let our repo automatically update our index to the commit said tag points to. Done. That would be even less work than any other solution out there yet, including the forums and KerbalStuff. The only "work" to be done is the initial import, or when the file structure changes; then the meta-data most likely has to be updated manually. Simple updates with no fundamental changes to the directory structure can be processed automatically, including bumping the version, as long as they tag their releases properly. If we and the KerbalStuff guys then work together, they can easily integrate our repo into their site, if they wish to.
Please be patient and wait for the first prototype. As I already said, I'm quite busy with my job recently, so I don't have that much time to work on the prototype application. Also, I have some other things to do as well... damn social life... I might have something to show by Sunday, but I cannot promise that...
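Automatically bumping the version from properly tagged releases could work along these lines. This is a pure-function sketch: it only picks the newest version-like tag from a list of tag names, which a real implementation would first fetch via GitPython or git ls-remote; the 'vX.Y.Z' naming convention is an assumption.

```python
def newest_version_tag(tags):
    """Pick the highest semantic-version tag from a list of tag names.

    Assumes tags look like 'v1.2.3' or '1.2.3'; non-version tags
    (e.g. 'latest') are ignored.
    """
    def key(tag):
        return tuple(int(p) for p in tag.lstrip("v").split("."))
    versioned = [t for t in tags
                 if t.lstrip("v").replace(".", "").isdigit()]
    return max(versioned, key=key) if versioned else None

print(newest_version_tag(["v1.0.7.3", "v1.0.10", "latest"]))  # → v1.0.10
```

The index updater would resolve that tag to a commit and regenerate the mod's meta-data entry from it.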
  11. Well, actually we already made a decision here: C#. Simply for two reasons:
* No extra dependencies needed on the client, because .Net/Mono is already there
* Someone needs to maintain the repo and the tools
As this community is focused around C# (KSP mods are written in C#), the chances of finding someone here willing to help develop and maintain a C# project are far better than finding, let's say, a Haskell developer. PS: I did not have time to complete my prototype yet, as I've been quite busy with my job lately, but I should have a working prototype by the end of the week.
  12. And how would you handle the compressed contents then? You'd have to unpack the whole archive and then sync it. Also, rsync cannot easily handle conflicts or our "hotfix" architecture intelligently. Don't get me wrong, I love rsync and use it on a daily basis, but I see no reason to introduce yet another dependency here. And creating such a small class handling the file patching is really not that difficult. Well, every language has its specific field in which it is good. Perl is good at efficiently handling/parsing text files on *nix systems, but was never designed for building standalone executables. That's something people hacked in at a later point, and it's more of a toy than anything I'd use in production unless I really had to. Also, I'd not say that plain Perl fits your "easy-and-fast" description. Writing object-oriented code in Perl is nothing anybody out there would like to do for bigger projects without using something like Moose, which in turn adds a ton of overhead, breaks on a regular basis thanks to B.pm and the like, and introduces strange bugs that are almost impossible to debug without crawling through Moose itself and its dependencies. I myself moved away from using Perl for bigger projects some time ago because of this issue. I still use it for a lot of sysadmin stuff.
  13. My fault. Sorry for that! I'd like to keep external dependencies as minimal as possible, because they need maintenance. When sticking to language features only, we can rely on them not breaking that easily in a multi-OS environment. Besides that, we could also rely on git's implementation here when using a C# implementation/wrapper. I never said that. But I have already mentioned several times that I do not think it is a good idea to introduce further dependencies when there is effectively no need for it. Why ship (and maintain!) another runtime when .Net/Mono is already available on all target systems?
Because when not checking all files for manipulation, your updates are not consistent. Maybe a user manipulated a file that is not part of the update diff, or does not want a file to be overridden/removed when updating a mod. And I never said we need to always calculate all checksums - it's just a worst-case scenario for people updating old installations. We also need to consider files created at runtime that are not part of the installation package. Do we delete, ignore or patch them? Stuff like that. Also, when we know which files changed and which did not, we can apply differential updates, effectively reducing update time and fragmentation. No need to delete everything and unpack it again.
Nope. But I'd love to see it integrate Toolbar via some API some day, notifying you of updates. But that's something to look at when things are working as expected.
Did you ever try that in real-life environments? I did (have to...) do that for Perl, Python and Ruby applications. It was a nightmare to get it working reliably. Freezes, segfaults, dependencies that are determined at runtime only, and stuff like that. Not to mention the bloat of always having to ship a complete interpreter and a ton of libraries in your executables. Also, efficiently debugging such applications at runtime is nearly impossible.
If you still think it's a good idea, go try packaging up an application of your choice and see how it runs on different systems. (And here I'm talking about real applications, not "Hello World" stuff.) Well, we already agreed on creating a basic prototype. But someone has to build it... As I said, I currently do not have the time to build it right now. When I find the time to do so, I will build it, but I cannot say when that will be. Maybe next weekend. Also, nobody stops you guys from implementing the basic spec I posted earlier (and linked in the first post). Everything one needs to know to implement a prototype has been discussed in this thread. When replying to this thread, I am usually actually doing stuff for my daytime job, and just try to reply to your posts without making you guys wait too long. So again, if you feel like implementing a prototype application, please feel free to do so! The more references we have, the better. I will implement mine as soon as I find the time to do so.
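The differential-update idea from this post reduces to comparing two {path: checksum} manifests. A minimal sketch, with all names and the example paths being illustrative:

```python
def diff_manifests(installed, latest):
    """Compute a differential update from two {path: md5} manifests.

    Only files whose checksum changed (or that are new) need fetching;
    files absent from the new manifest are candidates for removal.
    """
    to_fetch = {p for p, md5 in latest.items() if installed.get(p) != md5}
    to_remove = set(installed) - set(latest)
    unchanged = (set(installed) & set(latest)) - to_fetch
    return to_fetch, to_remove, unchanged

installed = {"GameData/Mod/part.cfg": "aaa", "GameData/Mod/old.cfg": "bbb"}
latest    = {"GameData/Mod/part.cfg": "aaa", "GameData/Mod/new.cfg": "ccc"}
print(diff_manifests(installed, latest))
```

Files found on disk but in neither manifest would be the runtime-created files mentioned above, and deciding whether to delete, ignore or patch those stays a policy question.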
  14. I fully agree with Spyhawk here. Python forces you to write readable code and gets rid of all the unnecessary braces. I mean, you (should) indent your code anyway, no matter if you have to or not. So I see no reason not to get rid of redundant braces that basically have no meaning at all. Also, the GIL can be a good thing. For example, you don't have to care about synchronization that much, because Python threads do not run in parallel anyway. But that's a completely different topic and should not be discussed in here IMHO. I already explicitly pointed that out in my earlier post. This benchmark is three years old and is not representative at all. It tests for one specific operation which shows a fundamental problem with Java's memory management, and shows how fast it could actually be if implemented right, like Perl does. This issue is nicely explained on the site I linked. The reason I chose this benchmark is that it pretty much corresponds with my personal experience with those languages. I've been using those languages for quite some time now, for projects of all kinds and scales, and I can safely say that there probably is no better language out there than Perl when having to do lots of string manipulation. On the other hand, Perl is pretty much unusable in big projects, because of the limitations of the language features it provides "out of the box". You almost always end up relying on hacks like Moose (or its derivatives), which frequently break because of their dependency on B.pm and the like. Also, running Perl on Windows is a big mess. Python, on the other hand, is pretty portable, but very slow when multi-threading. In Python, when implementing for example search algorithms in a multi-threaded approach, they are actually tremendously slower than a single-threaded implementation. This is caused by how Python handles threads and their synchronization, and of course the GIL itself. That's actually not correct.
Perl does not have the concept of arrays as you know them from, let's say, languages like C++. Instead they basically are an over-sized array of pointers to multiple lists containing the actual data. This way all basic array operations (insert, fetch, push, pop, shift, unshift, ...) can almost always be done in O(1). There are only a few edge cases where the complexity grows to O(n). So actually Perl handles "arrays" quite well. The reason why Java does so badly in the benchmark I linked is that it has to allocate a new String() object after every operation and destroy the old one. This takes a lot of time. In addition to that, it does not re-use the "freed" memory of destroyed objects. Performance actually IS of concern here. When updating mods we have to calculate checksums for every file in the GameData directory tree. Depending on the actual implementation, this can take a few seconds, or a couple of minutes. AFAIK Python provides C implementations of all major checksum algorithms, but I do not know how portable they are. Pure Python implementations would take way too much time, especially on many small files. When using C# we could compute multiple checksums at the same time, taking advantage of multiple CPU cores and I/O wait times. We already agreed to implement our prototype in Python. We also agreed on not introducing additional dependencies on the client side, so choosing Python for the actual release is out of the question. .Net/Mono is already there, because KSP runs on it, so let's stick to that. It would (maybe) also attract more contributors, as mods are written in C#, not Python.
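The checksum step described above can be sketched as follows (in modern Python rather than the prototype's Python 2.7; the structure is the same). In CPython, hashlib's C implementation releases the GIL while hashing large buffers, so a thread pool can overlap hashing with file I/O despite the GIL.

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def md5_file(path, chunk_size=1 << 16):
    """MD5 of one file, read in chunks to keep memory usage flat."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_tree(root, workers=4):
    """Return {relative path: md5} for every file below root, hashed in parallel."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = pool.map(md5_file, paths)
    return {os.path.relpath(p, root): d for p, d in zip(paths, digests)}
```

Pointing checksum_tree at a GameData directory yields exactly the {path: md5} manifest that a differential updater needs.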
  15. Well, I am no Blender user, but AFAIK Blender itself is written in C/C++ and only the scripts are Python. And that's exactly where Python (or even more often Lua) is usually used, and where it can play to its strengths. When writing a GUI you usually divide your application into at least two (mostly) independent parts: the GUI and the logic. Both run their own threads, so a long-running operation in the logic does not block the GUI, but you can still easily share data between logic and GUI. Python cannot easily do this, because it can only run one thread at a time due to its GIL [1]. That causes the GUI to freeze, even when you run it in its own thread. The only way around this is to use multiple processes, which in turn cannot easily share data anymore...
Edit: For the sake of completeness: The above is actually not 100% accurate. You could also use external (C/C++) libraries for displaying your GUI and manually control the GIL then. But then you must make sure your GUI code never ever manipulates Python memory in any way, or bad things will happen... And that's something you cannot guarantee easily if you still want to be able to interact with your GUI. End Edit
[1]: https://wiki.python.org/moin/GlobalInterpreterLock
Disclaimer: when talking about "Python" I actually mean the "CPython" implementation. CPython is pretty fast, but not even close to a native implementation. That's why many Python modules are written in C. To get a rough idea of the speed differences of various languages, take a look at this: http://onlyjob.blogspot.de/2011/03/perl5-python-ruby-php-c-c-lua-tcl.html . But keep in mind that "speed" is also very dependent on the actual implementation and algorithms used. That's why Perl scores so high at the site I linked. Perl makes heavy use of highly optimized native modules (Perl modules actually written in C). It was also created to efficiently handle strings, and the "benchmark" used at the site I linked does measure string manipulation time. Perl has had 25 years to optimize for exactly that use case.