Everything posted by keks

  1. The idea as a whole consists of three major parts: the mod repository, the description files (aka meta-data), and the client applications.

     The repository: The repository is represented as a bunch of git repositories containing meta-data for a set of mods. A mod release is represented as a 'tag' in the repository and (optionally) has the actual release archive attached to it. In addition to the actual mods, a special 'index' repository exists, listing all mods known to the repository as git submodules. Once a mod gets updated, so does its submodule reference in the index. This will be explained later on, under 'The actual manager application'.

     The description files: The description file(s) contain all information the client application needs to manage a given mod:
       - Download link: the actual URL to obtain the release archive from - basically the download link from the forums
       - Legal information like author, copyright, license, ... to display in the client
       - Dependencies on other mods and KSP version requirements
       - A complete list of files to install from the release archive

     The client application(s): This part actually consists of two separate parts:
       - Developer tools: The developer tools should assist the maintainer/developer in maintaining the mod and provide some common tools to use for modding, like commonly used libraries, templates, a blank module boilerplate, etc.
       - The actual manager application: This application does most of the work. It maintains a list of repositories to get mods and updates from. It keeps track of installed mods and checks them against the repositories for updates. The client application makes use of git to track the repositories' index repo. When the index has updated on the remote, the client also updates its submodules to get the latest meta-data available. This way a single 'git fetch' can tell whether any updates are available, efficiently reducing the number of API requests required to keep clients up-to-date.
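To make the single-fetch update check above concrete, here is a minimal Python sketch (hypothetical helper functions of my own; it assumes the submodule pins are read from `git ls-tree HEAD` output, where submodule entries have object type 'commit'):

```python
# Hypothetical helpers for the index-based update check: compare the
# submodule commit IDs pinned in the local index clone against the
# freshly fetched remote index.

def parse_submodule_pins(ls_tree_output: str) -> dict:
    """Parse `git ls-tree HEAD` output into {mod_name: commit_id}.

    Submodule entries use object type 'commit', e.g.:
    160000 commit aa8720d9366fefda8f1785e2340450fd7f6c1d92\tdeadlyreentry
    """
    pins = {}
    for line in ls_tree_output.splitlines():
        if not line.strip():
            continue
        meta, name = line.split("\t", 1)
        _mode, obj_type, commit_id = meta.split()
        if obj_type == "commit":  # submodule reference, not a regular file
            pins[name] = commit_id
    return pins


def outdated_mods(local_pins: dict, remote_pins: dict) -> list:
    """Mods whose submodule commit differs between local and remote index."""
    return sorted(
        name for name, commit in remote_pins.items()
        if local_pins.get(name) != commit
    )
```

If `outdated_mods()` comes back empty, the client is fully up-to-date after a single fetch - no per-mod requests needed.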
     Adding a mod to the repo:
       - Some developer "D" puts up a mod on the forums
       - Some maintainer "M" (may be the developer himself) downloads the release archive
       - M creates a new git repository on GitHub
       - M creates the 'meta.yaml' file and commits it
       - M creates a new version tag (e.g. 'v1.0') and pushes it to GitHub
       - M attaches the release archive to the release tag on GitHub

     At this point the mod is actually already fully usable. What's left to be done is adding it to the index by some "trustworthy" person "T":
       - T pulls the current index
       - T adds the new mod as a submodule to the index
       - T commits and pushes to GitHub

     Now the new mod is live and automatically available to all clients as the new "latest" version. Such an example repo could look like this (I already linked this earlier):
       https://github.com/ksprepo-alt
       https://github.com/ksprepo-alt/index
       https://github.com/ksprepo-alt/deadlyreentry/blob/aa8720d9366fefda8f1785e2340450fd7f6c1d92/meta.yaml

     The meta.yaml linked above points to two independent sources for the release archive to download from. As shown by the md5 checksum, the files are identical; the repo here simply acts as a mirror, and no files have been altered. The 'contents' section lists all files included in the archive that are to be extracted into the 'GameData' directory by the client application. Files contained in the archive but not listed under 'contents' are to be silently ignored (e.g. documentation, sources, etc.). The meta.yaml file also contains proper license and copyright information to be displayed by the client software in some way. The repo contents (in most cases only the meta.yaml file) are also properly licensed (in this example MIT) and should be accompanied by a proper README file, auto-generated from the meta-data. This makes sure our repo can be freely used/forked by others.
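For illustration only, here is what a parsed meta.yaml might look like, together with the mirror check described above (the field names here are made up for the example and are not the actual ksprepo-alt schema - see the linked meta.yaml for the real thing):

```python
# Hypothetical parsed form of a meta.yaml, plus the mirror check:
# the md5 recorded in the meta-data must match whatever any of the
# download sources actually serves.
import hashlib

meta = {
    "name": "ExampleMod",                       # illustrative values only
    "license": "MIT",
    "ksp_version": "0.24",
    "download": [
        {"url": "https://example.org/ExampleMod-1.0.zip"},
        {"url": "https://github.com/example/examplemod/releases/download/v1.0/ExampleMod-1.0.zip"},
    ],
    "md5": "5eb63bbbe01eeed093cb22bb8f5acdc3",
    "contents": {
        "GameData/ExampleMod/examplemod.dll": "5eb63bbbe01eeed093cb22bb8f5acdc3",
    },
}

def archive_matches(meta: dict, archive_bytes: bytes) -> bool:
    """True if a downloaded archive has the checksum recorded in meta.yaml."""
    return hashlib.md5(archive_bytes).hexdigest() == meta["md5"]
```

Any mirror (including our repo) can be verified against the recorded checksum before installing, so it does not matter which source the client picks.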
     Such an example README can be seen here: https://github.com/ksprepo/ksp_module-manager/blob/master/README.md

     Hotfixing a mod: When in need to create unofficial hotfixes (like community-made fixes) we can make use of the same scheme:
       - M downloads the community fix
       - M pulls the current mod repo
       - M updates 'meta.yaml' to include the community fix's changes
       - M creates a new tag (e.g. v1.0+ksprepo-1) and pushes it to GitHub
       - M attaches community-fix.zip to the new release tag

     The actual download URL for the mod itself remains the same. When the client updates the mod to the new release 'v1.0+ksprepo-1' it follows the usual update procedure (see 'Updating a mod') but also respects changes introduced by the patch-set. It effectively combines them before applying, where the patch-set always takes precedence over the original release.

     Updating a mod: When updating a mod, the client compares its local state (for example the contents of the GameData directory) against the file lists of the mods it has installed. It takes care of updating files as well as removing obsolete ones. When it detects a conflict (for example a file the user modified locally), it prompts the user to decide what to do. This is basically exactly what git does when you pull from a remote into your working tree: download changeset, check conflicts, apply changes.

     @Spyhawk: Does this satisfy your needs for now? As I said, I do not have the time to start working on a proper spec until at least Sunday, but I took the time to quickly summarize the current state of our discussion. Disclaimer: I may have missed something in the hurry here, as I really should be asleep by now... I have to get up for work in less than 5 hours again...

     Very basic reference implementation: https://github.com/ksprepo-alt/kspmm-prototype
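The per-file decision in 'Updating a mod' above could be sketched like this (hypothetical helper of my own; file identity is judged by the md5 checksums from the 'contents' list):

```python
# Hypothetical per-file update logic: compare the checksum recorded at
# install time, the checksum of the file currently on disk, and the
# checksum listed in the new release.

def classify_file(recorded_md5, local_md5, new_md5):
    """Decide what to do with one installed file during an update.

    recorded_md5: checksum the meta-data recorded at install time
    local_md5:    checksum of the file currently on disk (None if deleted)
    new_md5:      checksum in the new release (None if the file was dropped)
    """
    locally_modified = local_md5 != recorded_md5
    if new_md5 is None:                      # file is obsolete in new release
        return "ask-user" if locally_modified else "delete"
    if local_md5 == new_md5:
        return "keep"                        # already matches the new release
    if locally_modified:
        return "ask-user"                    # user edit vs. incoming update
    return "update"                          # clean overwrite
```

This is exactly why the checksum list matters: without the recorded checksum we could not tell a user edit apart from a clean file and would blindly overwrite local changes.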
  2. As far as I understood you, you proposed to create a community-fix package overriding parts of the "original" package. So in your example there would actually be two packages: the original R4.0c package and a R4.0c-community-fix package. Once v5.0 gets released, the R4.0c-community-fix package must be removed so it does not conflict with 5.0. That's the point I am concerned about: creating packages which will have to be removed later on, because you'd need to mark them somehow as being meant for later removal. Your workflow would look something like this:
       - install R4.0c
       - install R4.0c-community-fix-1
       - remove R4.0c-community-fix
       - update R4.0c to 5.0
     whereas what I proposed would look like this:
       - install R4.0c
       - update R4.0c to R4.0c+community-fix-1
       - update R4.0c+community-fix-1 to 5.0
     The difference is that from the package's point of view, R4.0c+community-fix-1 is just another regular version, following the regular update process, instead of an independent package which would have to be released at a later point. Yes, it works (almost) exactly the same way your approach does, but it does not create a new package. It could instead be realized by simply adding a patch-set to the release tag in addition to the original release.
  3. Well, that's why this is called a discussion, I guess. I said I'm not here to tell you how to do something, I'm here to talk. My opinion on things changes as we talk, and that's the reason why I do this: to get feedback from you and hear your proposals. I thought people answering here did in fact read the whole discussion, and not just the first and last post. I also did not complain at all. If I sounded a bit harsh, that was not my intention. I just mentioned that I had already talked about X in detail before.

     EDIT: Also, I did put up several examples early on - even a complete example repo including some management scripts. The idea developed over time and so the example became obsolete. I updated an example mod in the repo not too long ago, so it matches the current state of our discussion here. I linked it in a previous post and explained what I did there. /EDIT

     I'm not going in every direction at once. I'm looking at this as a whole. I take into account as much information as I have available at a given point. I plan for the future, not just for a single aspect at a time. Another problem is replying to several (unrelated) people at once. I cannot really direct this conversation into a single direction, because I cannot really influence what someone asks at a given point. A forum is not really ideal for this, as we'd need to create a separate thread for each question / part of the application. A ticket system would IMHO be much better for this kind of discussion. I also must admit that I made some mistakes early on, in that I did not try to steer the discussion into a specific direction. As I said, I'm not used to planning a project via a forum. It's different. We're actually discussing three things here: the repo, the client application(s) and the actual package layout. I see how this can become a bit confusing when the context keeps switching. See answer to reply below.

     Well, I did not think it would be necessary to create a specification / RFC at this point, as nothing is fixed yet. I really did not expect this amount of confusion, as the whole discussion can be reviewed at any given time. But I will respect this and consider your request. However, I don't think I will be able to put up anything close to a spec before Sunday. I think a meeting would help because we'd all be there at once, talking to each other, and we'd be able to steer the discussion into specific directions instead of randomly replying to posts.
  4. That would increase complexity in the installation process, for example when you downgrade packages or when we simply have multiple fixed versions released (maybe the first fix did not work correctly). Also, these temporary packages would have to remain available over time, bloating up the repo. However, instead of patching the release, we could integrate some kind of patch mechanism into the installer: the meta-file would then point to a patch-set in addition to the original content. This way we would not have to create "temporary" packages, could simply bump up the version number of the original package, and still would not have to mess around with the original source release files. We'd also not have compatibility issues, as this would actually be treated as a normal release. Our release B9 R4.0c+ksprepo-1 would then actually point to the download of B9 R4.0c, but additionally point to a set of community patches which would be applied after the original release file. So I guess we agreed on Python here (for the prototype only). Yes, that was my initial thought. But we actually do not have to standardize the directory layout. From the perspective of our application it simply does not matter whether the mods are structured or simply put all together in a single folder, because we have the meta-data describing each package and its contents. I'd make this point optional, as it might cause confusion/problems on the client side when the files are re-arranged. Correct. That's correct. By listing package contents we can also easily ignore files that are part of the original release archive but are of no meaning to us. Documentation or source files, for example, do not need to be moved to the GameData directory, but may be part of the release archive. Correct. Ideally, the maintainer would be the developer himself, but it can basically be done by anyone.
As long as the mod license allows redistribution in unmodified form, the maintainer should also attach the unmodified release archive to the GitHub release tag, so the repo serves as a mirror. I'd suggest the official KSP channel linked at the top of the page: irc.esper.net #KSPOfficial
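A minimal sketch of the patch mechanism proposed above, assuming both the original release and the patch-set are described as path-to-checksum file lists (hypothetical structure of my own; the patch-set wins on conflicts, as described):

```python
# Hypothetical merge of an original release's file list with a
# community-fix patch-set; a path present in both is taken from
# the patch-set, so the fix always takes precedence.

def merge_release(original: dict, patch_set: dict) -> dict:
    """Combine original-release files with community-fix files.

    Both dicts map install paths (e.g. under GameData/) to file
    checksums; the merged result is what the client installs.
    """
    merged = dict(original)
    merged.update(patch_set)  # patch-set overrides the original release
    return merged
```

From the client's point of view the merged list is just another regular release, so the normal update procedure applies unchanged.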
  5. That would make the "fix package" obsolete when the base package gets updated. The fix would have to be uninstalled manually or marked as conflicting so it automatically gets removed. I personally prefer not to create temporary packages, as this increases maintenance. But that's something that needs further evaluation and we can decide later on. Any help is greatly appreciated :-) I fully agree. Once we have all the important questions answered, the next step is implementing a basic prototype. Git and Mono/C# are not that difficult to learn and understand. I'd happily give you guys a crash course in git if you want me to. I myself am new to Mono/C# as well, but as I've worked with Java for quite some time now and C# is quite similar, that's no problem :-) For the prototype application I'd like to suggest Python here, as it is (mostly) platform independent and very easy to learn, yet powerful enough. But I'm fully open to any suggestions here.
  6. My initial post is not up-to-date anymore. It contains an initial idea, which developed over time. I know some things got lost along the way; that's why I suggested a meeting where we can talk to each other directly, so we can reply to questions immediately instead of waiting several hours between replies. In my experience this helps a lot. I'd really prefer a direct chat over a written-down specification at this time. Writing specifications, examples and such just takes up too much time, which I currently do not have. However, if you want me to, I could fiddle together a quick & dirty prototype repo and application demonstrating such an approach. This will take some time though, and would be Linux-only, as I do not have access to a Windows machine over the next weeks. As for the optional features: it's important to keep them in mind early and include them in the initial planning. Otherwise we end up in a state where we cannot easily implement feature X later on, because we did something in a specific way which does not allow X to be added without also introducing major changes or breaks to feature Y. I already talked about this. Once we agree on some point, we need to separate critical features from optional ones. Optional features get put on the backlog with low priority and will be implemented at some later point. It's just important to keep those in mind. So far nothing of what I have lately talked about is very complicated to implement. The only complicated part is the developer tools, which obviously are not "critical" to the basic function of such a management application (IMHO), but still important to push this project forward, because we simply cannot maintain all mods ourselves. The number of mods will grow, and maintenance costs will increase. Not really. The file/checksum list in my example file is such an index: it keeps track of which files belong to the package.
The problem is that users can (and most likely will) also edit files locally. DeadlyReentry in combination with FAR is such an example, where people most likely will tune DE's settings to better fit the new aerodynamics introduced by FAR. Now let's say DE gets updated. This would cause the user-made changes to be overridden without notice, because we cannot detect these user-made changes easily (unless we extract the original archive and match files against each other, which takes a lot more time). This is only meant to identify local changes not made through the client application. There already is some kind of standard recommended by Squad, which is being ignored by developers. I mentioned it in my initial post. As long as Squad does not enforce anything, developers will keep ignoring it, simply because they can. But as I already said, when keeping track of which files belong to a specific package, we can simply ignore the fact that there is no common standard. For the client application it simply does not matter where files are located, as long as there are no conflicts between mods. I'd just install the mods as-is (as you would when installing manually) for now. What's your name on IRC? The same as on the forums here? And when would be a good time to catch you there? PS: I'm not used to doing planning over forums. I'm more the brainstorming guy, sitting together with a bunch of developers talking about everything. So bear with me.
  7. And? We could still fall back to only hosting the meta-data for such mods. If their mirror goes down, that's their problem then. We do not necessarily need to modify the release archive in any way; that's completely optional. Modifying the release archive is actually only of concern to us when integrating community fixes, like the ones created for B9 on KSP 0.23.x, for example. By only providing a download link to the original source and hosting some meta-data, we do not violate copyright in any way. I have already explained that several times in earlier posts. One last time: hosting the release archive itself is completely optional! Yes, we need to. Otherwise we cannot easily detect changes to those files on the client side. That's something that is required for removal and updates of mods. When not storing every file's checksum, we cannot detect conflicts (an update changes a file the user or another mod already changed locally) and would blindly override local changes. And again, I already explained this several times. We do not repack anything unless we really have to. We also do not need to change directory structures when using a client-side mod manager, as it can be used to easily navigate the directory structures for a specific mod. See previous answer and previous posts; I already talked about this. I'd really prefer talking / chatting about this stuff, as we tend to write about the same things over and over again. I think this is because of the high delay between answers. Would you mind attending a meeting on TeamSpeak/Ventrilo/... or IRC?
  8. @spyhawk, @TeddyDD I updated my example MM repo to further clarify my point: https://github.com/ksprepo/ksp_module-manager/ I moved the release binaries to the release tag and added download information and KSP version dependencies to the meta file. As you can see, this way the meta-file could provide several download sources the client software could choose from. So ideally we (the repo) would then simply act as a mirror for the original source.
  9. We'd need to evaluate these libraries before we jump onto one specific one. Platform independence is important, as we need to support all platforms KSP supports. That's what the index is for. I already explained this earlier: all a client needs to do to check whether it is up-to-date is fetch the index's HEAD. If the submodule commit IDs match, the client is up-to-date. That's one API request per minute for unauthenticated requests. When authenticated, you can make 20 requests per minute. See https://developer.github.com/v3/#rate-limiting and https://developer.github.com/v3/#increasing-the-unauthenticated-rate-limit-for-oauth-applications That should be more than enough, given that we process as many things locally as possible.
  10. Who said we need users to install git on their machines? There are quite a few good implementations out there (including Mono ones) which we can use. Alternatively, we could simply ship a binary with our client software and the user wouldn't even necessarily know that git is involved. This looks pretty raw to me, doesn't it? ;-) https://raw.githubusercontent.com/ksprepo/ksp_hot-rockets/master/meta.yaml There should always be a download link in the meta-file. When hosting the mod as an attachment to the release tag, it simply points there instead of to some external source. Yes, use what already exists instead of re-inventing it over and over again. It has to. Otherwise a third party is not allowed to modify or build upon it in any way (copyright). That's the problem we have with unlicensed mods right now. Because I want it to be open and free, I chose the MIT license for my example repo, while keeping original work under its original license. (Note: see README.md, which is automatically generated from the meta.yaml.) The release process:
       - create/clone repo
       - commit changes to meta.yaml
       - create version tag
       - push changes to GitHub
       - attach binary release archive to release tag
       - update index repo
     Everything here can be easily automated, except the creation of the meta.yaml, which will most likely need manual work or at least verification of auto-detected values.
  11. Well, on the one hand we want as many people as possible to jump onto this train; on the other you say it's the developers' problem. I don't think this is a developer's problem, because they say they are perfectly fine right now. It's a user's problem, because we are unhappy with the current situation. I'd also rather not think of this as a "mod manager" (such tools already exist), but more as a repository and a bunch of management applications. So you'd rather put the "complexity" on the service side instead of the client side? IMHO a simple "git pull" on the index and subsequent HTTP requests for the mods' meta-data files are a lot less complex than creating an API collecting and providing the same data. But that's just my opinion. Actually it means mod D has not been updated for C v2 yet. Examples would be RemoteTech2 or even B9. Neither of them died, but the community provided fixes and/or workarounds. We could provide such workarounds/fixes as well. Well, if you do not see parallel installations as an issue, what about rolling back to an older release of a mod? I myself, for example, had a lot of problems with newer RemoteTech and InfernalRobotics releases. Installing older releases solved that problem for me. Firespitter and ExsurgentEngineering are other examples. Luckily I maintain a local repo versioning the mods I use and the changes I made to them myself, so I could downgrade even though the official download links on the forums were long gone. Well, I did not say we should turn SixUpdater into a mod manager for KSP. I just wanted to bring it into the discussion as a reference. I worked with it extensively, and the backend is perfectly fine (IMHO). I just wanted to talk about its backend architecture. Why would this be unnecessary complexity? It's just a binary file attached to a release tag, which we'd need anyway. It does not matter whether the meta-file reads "{ url: 'http://dropbox.com/foo' }" or "{ url: 'http://github.com/repo/foo' }".

The only difference is that in the latter case we do not lose binary releases and don't hoard dead links. When not archiving old versions, there is no point in using git at all in the first place. I already posted a link to an example repo I quickly put up earlier. I also already described the contents of such a meta-data file in previous posts. Basically, TeddyDD's example file is a stripped-down version of mine, not including license text (which in my opinion is absolutely mandatory) and similar stuff. Anyway, here is the link to the example repo again: https://github.com/ksprepo/ A meta-data file could look like this: https://github.com/ksprepo/ksp_b9-aerospace/blob/master/meta.yaml Note that I already discussed this earlier and explained how we could provide external download information. In the example repo I set up different branches for source (upstream), development (develop) and the actual release content going out to users (master). The upstream branch is also used to contribute changes (like the community B9 fix) back upstream. The repo would actually only contain the meta.yaml file. Releases then get tagged and a binary release gets added to that tag: https://help.github.com/articles/creating-releases Optionally, the maintainer could maintain an upstream branch for pulling in changes from upstream and contributing modifications back. But that has nothing to do with the actual mod management here. Note: I did not attach binary files to releases on the example repo; instead I put them directly into the master branch. And no offense, but have you already worked with git? I don't see any problem here for the client getting all the information it needs.
In the case of my example repo, a simple update process could look like this on the client side:
  - clone/pull https://github.com/ksprepo/mod-repo
  - check submodule revisions vs. local revisions
  - in case the installed mod revisions match, do nothing
  - in case they differ, continue
  - get the latest meta-data, for example https://github.com/ksprepo/ksp_api-extensions/blob/master/meta.yaml
  - download the new binary release from the URL provided by meta.yaml
  - replace local files, delete obsolete files

A simple search for mods can be done via GitHub's search. The following query, for example, will look up the meta-file for the mod "HotRockets" in my example repo:
  HotRockets in:file,meta.yaml extension:yaml user:ksprepo path:/
Put this into the search form and you'll land here: https://github.com/search?utf8=%E2%9C%93&q=HotRockets+in%3Afile%2Cmeta.yaml+extension%3Ayaml+user%3Aksprepo+path%3A%2F&type=Code&ref=searchresults
Or via the JSON API: https://api.github.com/search/repositories?q=HotRockets+in:file,/meta.yaml+user:ksprepo
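A client could build that search URL programmatically; here is a small sketch (hypothetical helper of my own; the query qualifiers follow GitHub's code-search syntax as used in the link above):

```python
# Hypothetical helper composing a GitHub code-search URL that looks up
# a mod's meta.yaml in a given repo owner's repositories.
from urllib.parse import urlencode

def build_search_url(mod_name: str, user: str = "ksprepo") -> str:
    """Compose a GitHub code-search URL for a mod's meta.yaml."""
    query = f"{mod_name} in:file,meta.yaml extension:yaml user:{user} path:/"
    return "https://github.com/search?" + urlencode(
        {"q": query, "type": "Code"}
    )
```

The same query string works against the JSON search API as well, so the client UI and any third-party tooling can share one query builder.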
  12. You just answered your own question here. I'd like to create a simple tool uploading all the necessary data to a repository. Everything we're talking about can be done via the API or even git itself: https://developer.github.com/v3/repos/releases/ And these tools do have a lot in common with management, IMHO. They can be the one thing which can at least create some kind of standard. For this to work, they need to be easy to use - and by easy I mean stupidly easy, so that basically an infant could use them - yet powerful enough that advanced users can bend them to their needs. I also do not see your point about submodules+releases here. This is probably by far the easiest way to create an easy-to-use index of all mods known to our system - even across repo boundaries. Updating the index is also pretty easy, as it's basically just four commands that can easily be wrapped up in a simple bash/batch file: git submodule foreach git pull && git add --all . && git commit -m 'updated index' . && git push After this command your index points to all the latest releases. It's basically a no-brainer to update the index. This could also efficiently reduce traffic, as we only need to download a single git ref (the index HEAD) to check whether a client is up-to-date or not. The hosting thing is nothing we should put too much time into right now. It basically does not matter whether we host the content ourselves or it is hosted externally; it's just the download link pointing to another source. Meta-files telling the application where to put which file will be needed anyway. Why should we care about them at all? They have nothing to do with our work. Or am I missing something here? Well, hope dies last. Until then, we'll have to do stuff ourselves. Another point for GitHub + client-side tools and abstaining from maintaining a separate database is that it would be absurdly simple to take over our role in case we decide to disappear one day.

All someone would have to do is fork the index repo and he'd be done. All resources referenced in the index would still be valid and could be replaced one by one without any downtime at all. As already stated in an earlier post, I'd really like GitHub (or any other hosting service) to just serve data. What do you think about this?
  13. I don't see it as realistic to maintain a couple of hundred mods with just a handful of people over a longer period. Of course, if we really want to, we could do that, but it would cause major delays in mod releases on the repository. IMHO the better way is to show mod developers how easy it can be to distribute their mods and how they can benefit from a "mod development kit" providing some libraries covering common tasks - for example toolbar integration, module manager, logging, resource API, etc. There are quite a few good libraries out there, but still people reinvent them over and over again. We do not require the modders to jump in early, but it would help a great deal. I do not want to force them to do anything; I want to help them come together and make life easier for all of us. I already explained this a few posts earlier. That's not correct. Think about specific version dependencies and multiple mod updates at once. Mod A requires mod B version 1 and mod C version 2. Mod B requires mod D, which in turn is not compatible with mod C version 2 yet... and so on. When managing dependencies you can very easily create cycles or conflicts. Now think about a major update of KSP and a user like me who has about 100 mods active, all being updated for the new KSP release. 100 updates with about 2 to 3 dependencies per mod ~> 250 dependencies to resolve, PLUS intermediate dependencies, version conflicts, ... and all this with many users at once. There you have your DDoS. I also see absolutely no reason to run this on the service side, as clients can resolve dependencies perfectly well - even more efficiently - on their side. Running this on the client side also eliminates the need to provide a central "dependency resolution service" for each repo. Why do you think so? IMHO it really does not matter whether you have one or many repositories, as long as they follow a common standard and do not create conflicts.
One easy way to eliminate conflicts would be a strict hierarchy the user could define. That's exactly what APT does, for example: it works with multiple repositories following a common standard and uses user-defined priorities to tell it which repository to get a package from and which ones to ignore. It has been working perfectly for me for years. Could you please further explain your concern here? That's not correct. There are many people out there maintaining multiple KSP installations alongside different versions. For a more popular example, see "Scott Manley". Also, maybe some feature is broken in the latest mod release, and people would like to jump back to an older one. Another case would be a user who wants to install a mod A which is only compatible with mod B version 1. But the current release of mod B is version 2, while version 1 is still perfectly compatible with KSP. So the user effectively has to downgrade mod B, which in the case of an external hoster may result in the user not being able to install mod A, because the old release of mod B is no longer available... TL;DR: because of compatibility/dependency issues. I don't know if there is any public documentation available on the API. The time I used SU was about when ArmA II: OA was released - must be about 3 to 4 years ago... I was building an auto-updater for one of the larger communities' servers. As there was no Linux client available at the time, I simply reverse-engineered the Windows application and built a basic console client for the six-network. Basically it consists of three parts:
  - a central API service which you can query independently
  - so-called "networks" consisting of file servers mirroring each other
  - the file servers themselves
Disclaimer: things might have changed by now! You can easily take a look at the API by analyzing the SU client's traffic in Wireshark. At least back then the traffic was not encrypted or signed - just a plain HTTP/JSON API.

(I do remember something about having to acquire some kind of ticket through the API, though...) Actually, the "meta-data" idea is something I already stated in my initial post and mention again and again in almost all of my posts here. There is no conflict at all, and it is absolutely no problem to attach a binary file to a git tag on GitHub. That's what GitHub calls a "release" and (according to GitHub staff) is the preferred way to distribute binary content. Could you please explain your reluctance about hosting the actual mod itself? I mean, mods can still be hosted elsewhere. We'd simply be another mirror.
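The dependency-resolution concern from this post (cycles, chained requirements) can be illustrated with a minimal client-side resolver (a sketch with hypothetical mod names, not a full version-aware solver):

```python
# Minimal client-side dependency resolver: walks the dependency graph
# depth-first, returns an install order, and reports cycles instead of
# recursing forever.

def resolve(mod: str, deps: dict, _stack=None) -> list:
    """Return an install order for `mod`, raising on dependency cycles.

    deps maps a mod name to the list of mods it requires.
    """
    stack = _stack if _stack is not None else []
    if mod in stack:
        cycle = " -> ".join(stack + [mod])
        raise ValueError(f"dependency cycle: {cycle}")
    order = []
    for dep in deps.get(mod, []):
        for d in resolve(dep, deps, stack + [mod]):
            if d not in order:   # shared dependencies appear only once
                order.append(d)
    order.append(mod)
    return order
```

A real resolver would additionally match version constraints (mod B version 1 vs. version 2), which is exactly the part that explodes in query count if done against a server-side index instead of locally cached meta-data.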
  14. We don't need a database for that, we can make use of git there. Every repository could have some kind of "index repository" pointing to the modules it contains via submodules. Just like the example repo does I put up earlier. Why do you think so? A locally cached index would reduce load on service-side and would most likely also be faster. You need to keep dependency-resolution in mind here. This can (at the current number of mods) easily require you to do a couple of hundred queries against your index to resolve a more complex dependency. Given a critical number of users and a popular mod update, this can easily lead to users DDoS'ing their own service I'd rather have the backend only provide the application model (means data only) and let the client applications do the work. Because for this to be successful we need the developers on board. Right now there is almost no standard for mod development. There are little to no commonly used frameworks/libraries. Every developer reinvents the wheel over and over again. Most likely because they simply do not know a solution already exists, or they see no easy way to contribute changes they'd like/need to have in said solution. A central repository PLUS some set of developer tools already containing the most commonly used libraries may help with that issue. When hosting meta-data only, we cannot guarantee that the binary files will remain available over time. Many developers here simply replace their mods with new versions, deleting the old ones. And even if they still serve the old releases, that's mostly limited to one or two predecessor versions only. This would inevitably leas to a point where our repo would contain 99% dead links to binary files which do no longer exist anymore. By hosting the files in the repo itself, we (or the external repo maintainer) can guarantee the availability of all versions indexed in the repo at all times. 
Because a central repository requires a central instance controlling everything. I (and most likely most of the forums here) do not want a single individual (or organization) to be in charge of every mod out there. By using a de-centralized approach, we can still have some kind of "main repository", but also enable individuals to host alternatives themselves. A mod developer, for example, could host his own repository (as a git repo), maintaining and serving his own mods. The developer tools provided could aid him in this process, not requiring him to know anything about git at all.

In some cases we cannot (or do not want to) host the mod itself, for example because of license issues, or simply because of the high maintenance volume required to keep it up to date. In such cases, we can provide "meta-packages" only containing the source link to the binary files that are hosted elsewhere. But as already said, that could (and most certainly will) lead to broken packages because of dead links.

That's the plan ;-) The first step after planning should be a basic prototype providing the core functionality only. Over time, we add features to said prototype and see how they do. Once we've got an (almost) fully functional prototype (that means uploading, downloading, updating and searching mods), we can start planning and creating a proper application. It's important to keep those (optional) features in mind, so we do not get to a point where we cannot easily implement a feature users would like to see because our application core does not support it without major changes.

That's why I'd like to see a modular approach: to be able to easily add features later on. But for that, we first need to know what such modules might look like, and what kind of interfaces our core application needs to provide so that it does not require major breaking changes later on. Because one thing nobody likes is updates breaking current interfaces/behavior. Try not to focus too much on AUR.
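To give the "meta-package" idea some shape, here is what such a description file could look like in YAML. Every field name in this draft is hypothetical; only the ideas (legal info, external source link, dependencies, install file list) come from the proposal itself.

```yaml
# Hypothetical draft of a "meta-package" description file whose binaries
# are hosted elsewhere -- all field names are illustrative, not a fixed format.
name: ExampleMod
version: 1.4.2
ksp-version: ">=0.23"
author: SomeModder
license: CC-BY-NC-SA-4.0
source: https://example.com/downloads/ExampleMod-1.4.2.zip   # external host, not mirrored
depends:
  - ModuleManager
suggests:
  - ExampleModExtras
install:
  - GameData/ExampleMod/
```

A fully hosted package would look the same, except that the release archive would be attached to the git tag instead of referenced via an external `source` URL.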
Git already provides almost everything we need. The only things we really need to care about are dependency management and efficient search. As for the indexing service: as already stated, git can do this for us as well. For a convenient API for third parties, we could set up a simple GitHub page providing a JSON/YAML/XML API.

I want to mention Six-Updater here again. They provide such a simple JSON API which you can easily query. I'd really like you guys to take a look at it, as it (IMHO) does everything right on the backend side (though the frontend has more than enough issues...).

Do you agree with me on these points, or are there further questions / alternatives you want to discuss?
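The "single git fetch tells you everything" idea from the proposal could be sketched roughly like this: fetch the index repo, compare the local HEAD against the fetched upstream, and only pull submodule meta-data when something actually changed. This is a minimal sketch shelling out to the real git CLI; the function names and the overall flow are assumptions, not a finished design.

```python
import subprocess

def git_output(repo_dir, *args):
    """Run a git command inside the local index repository, return stdout."""
    return subprocess.check_output(
        ["git", "-C", repo_dir] + list(args), text=True
    ).strip()

def updates_available(repo_dir):
    """One 'git fetch' on the index repo is enough to know whether any mod
    meta-data changed: compare local HEAD against the fetched upstream."""
    git_output(repo_dir, "fetch", "--quiet")
    return git_output(repo_dir, "rev-parse", "HEAD") != git_output(repo_dir, "rev-parse", "@{u}")

def update_index(repo_dir):
    """Fast-forward the index and pull the updated submodule meta-data."""
    git_output(repo_dir, "merge", "--ff-only", "--quiet", "@{u}")
    git_output(repo_dir, "submodule", "update", "--init", "--recursive")
```

This keeps the number of remote round-trips per update check at exactly one, regardless of how many mods the index contains.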
  15. Let me summarize the requirements we've got so far:

- client-software
  - easy installation
  - automatic, non-annoying self-update
  - easy mod management
    - easy installation of mods, either via built-in search / catalog or direct input of a mod URL
    - easy update of existing mods
    - easy (and complete) removal of existing mods
  - automatic dependency management
    - automatically install hard dependencies
    - notify user of optional dependencies
    - notify user of conflicts
    - auto-resolve dependencies/conflicts if possible
  - no additional dependencies
  - modular architecture, so it can be extended easily
- developer-tools
  - easy to use tools
    - easy to use modgen/build/publish tool
    - good integration into IDE
  - no "magic" going on behind the scenes
    - means, tools must be simple enough so developers can easily understand what they do and bend them to their needs
  - auto-generation of meta-data files
    - automatically generate file lists, version information and dependencies
    - this one could be tricky...
  - do not require the modders to change the way they work right now
- mod repository
  - de-centralized approach
    - if we suddenly disappear for some reason, someone else can easily take over
    - modders may maintain their own repositories for their own mods
    - this requires clear rules, to avoid conflicts between repositories
  - easy to set up and maintain
  - easy integration into third-party tools
    - I'd love to see integration on kerbalstuff...
  - hosting
    - host the complete mod where possible
    - host only meta-data where needed

Do you have anything to add to/remove from this list at this point?

PS: I absolutely do not want to look like some arrogant guy telling others what to do. That's not why I am here. All I want to do is discuss this idea and get feedback on it. I'm usually a very direct person, so if you think I'm talking rubbish, feel free to correct me.
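The "auto-generation of meta-data files" point above is the most mechanical one, so here is a minimal sketch of the easy part: walking a mod's release directory to produce the install file list for the description file. The function name and the assumption that paths are stored relative with forward slashes are both illustrative choices, not a settled format.

```python
import os

def generate_file_list(mod_root):
    """Walk a mod's release directory and produce a normalized, sorted
    file list for the description file, so maintainers don't write it by hand."""
    files = []
    for dirpath, _dirnames, filenames in os.walk(mod_root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), mod_root)
            files.append(rel.replace(os.sep, "/"))   # platform-independent separators
    return sorted(files)
```

Version information and dependencies are the genuinely tricky parts, since they cannot be derived from the directory tree alone; that is where the modgen/build tool would have to ask the developer.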
  16. Well, rapid prototyping is not always the way to go. It often leads to uncontrolled growth and unnecessary increases in complexity due to management failures. But that's my personal opinion. I personally prefer careful planning, discussions, specifications, more discussions, ... and at the point where nobody involved in the project has any questions anymore, I start developing a prototype.

Documentation and tests are another thing. We need to make sure to create documentation and tests as soon as possible, ideally directly after the first basic prototype is built. I'd also prefer not to start experiments with new languages, as that might (or most certainly will) lead to unexpected problems. As KSP is developed on Mono, I'd stick to that. That also eliminates additional dependencies on the user side.

You should also take a look at dpkg as well as apt/aptitude:
- http://anonscm.debian.org/cgit/apt/apt.git/tree/
- http://anonscm.debian.org/cgit/aptitude/aptitude.git/tree/
- http://anonscm.debian.org/cgit/dpkg/dpkg.git/tree/

The http://www.six-updater.net/ project could be of interest to us as well. I already mentioned it earlier, but I'd like to do so again: it's a widely established mod-management solution for the ArmA community. Documentation is also available: http://www.six-projects.net/wagn/Six_Updater+Documentation

That's the point. I do not want to 'just do it'. If I do it, I want to do it right. Before blindly developing anything, we should sit together and carefully determine what we need and whether there already is an existing solution to at least some of the problems we might find. We should carefully analyze how other solutions work, what they've done right, and why they failed in the end. We need to find out what developers need, what users need, and what we can do to make both of them happy. That's a lot of discussion that needs to happen. And I'd really like to have an (open-minded and established) modder from these forums on board.
  17. @TeddyDD: The problem I see here is that you are blindly jumping into development without carefully planning the requirements. That's the reason why the other attempts failed, and so will yours unless you start to plan and, even more importantly, discuss your decisions with others. That's the reason why I wanted to find at least two or three developers to work with me: to discuss and get feedback. You must remember that you do not create an application for yourself, but for others out there who may (or most certainly will) not have the same requirements for your application as you do.

Instead of re-inventing everything, you should take a close look at the specification/documentation of a grown-up package management system and adopt its essential core functionality. Why? Because many (very skilled) people already spent years of their time planning, developing and, even more importantly, fixing it. There's no need to run into the same mistakes and problems they did. If you are willing to do this right, then I'm in.
  18. As it looks like there is no interest in a proper mod/package management system for KSP, I will not create such a tool. Maybe I will fiddle something together for myself, but given the lack of interest I do not plan to release such a tool to the public any time soon.

** insert random rant about some stubborn people in here **

I still think a repository-like structure (centralized or de-centralized - it does not matter) is the way to go. It works pretty well for other games and projects (see Six-Updater for ArmA, APT for Debian), as it makes installing AND (even more importantly) maintaining/updating mods as easy as possible.

TL;DR: Find some developers and volunteer maintainers and I'm back in. Else, I'm done here.
  19. I did not mean to offend you in any way. I was just interested in what your solution might look like.
  20. And what would that look like? Do you plan to take over Squad and force developers to play by your rules?
  21. I'd really like to meet you in person one day... I just cannot understand how someone can be so stubborn when someone shows up, wants to talk to you, and wants to work with you to make life easier for everyone. Such a project (be it what I proposed, or some other solution) would surely massively increase usability for end-users and could seamlessly integrate with your development environment. We could, for example, create a MonoDevelop/XamarinStudio/VisualStudio/...-plugin that automatically updates the meta-data (as far as possible) for you. When done properly, you would even have less work to do than you have right now.

Ideally such a system/standard would be enforced by Squad, but luckily it is not, yet. This gives us, the community, the customers, the actual users, the opportunity to show Squad what we want and how things could work. We could build a system which "just works", without causing any headache for any of the three sides.

As for the GitHub thing, I already talked to the GitHub guys, and there's nothing they could think of that could get in our way regarding the hosting part. There are no license issues, no bandwidth issues, no storage size issues, etc. We'd also not have the problem of someone suddenly abandoning the project. If so, just fork the repo, let the users update their source addresses and you're done. You could even host your own repo yourself if you want to.

As I said, I really do not care whether it's my proposed project or anyone else's. I just want something that works without making me spend days (!) trying to get mods working again when a new KSP release comes out. I do not want to read through 10 pages of posts to find out why X is currently not working, then 6 pages later someone posts a workaround that, according to 2 pages later, does not work when condition Y is met, blah blah... and in the time I crawled through this mess of random posts the modder updated the mod again... It's just a pain in the ass right now, IMHO.
- anyways, have a nice day!
  22. It's not just the meta-data. We need to make sure we have all information available in a way that our versioning system can easily work with. Changing the existing .version files from this mod would break compatibility (maybe multiple times), and I don't think that's what they want. What I'd suggest is to build this repo as a standalone system, and once we've got it working we open it up for external developers to build upon. So the KVC guys could make their mod work with the repo, and in turn also contribute back to us.

What do you mean? We would need to create the full client application doing (almost) all the logic. We're talking about at least a few thousand lines of code. I'd suggest C#/Mono, as that's what KSP is built with, so there would not be any additional client-side dependencies. In addition to that, we'd need to create the developer tools mentioned earlier.

As for the problem with scalability you mentioned over in the other thread: there really is no problem at all. The current plan is to make this a decentralized system. As I already said earlier, there is no reason to only have ONE repository. With my proposed architecture you could have hundreds of repositories hosting different mods. Each developer could even have his own repo if he wants to. It really does not matter how many repositories a client uses, as long as there are no conflicts between them. That's exactly what "apt" does on Debian-based distros right now, efficiently handling more than 20k packages and their dependencies in like no time.
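The apt-style multi-repository idea boils down to merging several independent indexes into one client-side view while flagging overlaps. A minimal sketch (data layout and function name are illustrative assumptions; real conflict handling would need priorities or pinning, like apt's):

```python
def merge_indexes(indexes):
    """Merge the package indexes of several independent repositories into one
    client-side view, flagging mods that are defined by more than one repo."""
    merged, conflicts = {}, []
    for repo_name, index in indexes.items():
        for mod, meta in index.items():
            if mod in merged and merged[mod][0] != repo_name:
                conflicts.append(mod)       # same mod served by two repos
            else:
                merged[mod] = (repo_name, meta)
    return merged, conflicts
```

The first repository in the client's configured list wins here; a real client would rather surface the conflict to the user or apply an explicit priority scheme.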
  23. There is no format for such a file yet. That's why this is called the planning stage, where we plan what we need and define our requirements. I linked an early draft version in a previous post (actually there's also a link to an example repo, which of course is also an early draft).

I took a look at that mod earlier, but that one really is just a basic update notifier based on a static link to a json/yaml file. I think I also linked that mod in my initial post. I want to create a full-blown mod repository with everything we need - and that includes the modders and end-users. So far none of the present solutions do this in any acceptable way: either they completely focus on one side, have an absolutely unmaintainable code base, do not work correctly, or are simply "broken by design" (IMHO). Don't misunderstand me, I do not want to make those developers look bad; I actually learned quite a lot from their failures.

Well, of course modders do not want to change the way they work. That's just how developers (including me) are. And that is why I am trying to create an opt-in solution with absolutely minimal effect on the developers themselves. If they don't like it, fine, just ignore it. Maybe someone else will pick up your mod and maintain it for you. You don't want us to distribute your mod? Fine, simply tell us to leave you alone and/or add this restriction to your license and you're done.

@the modders: IMHO it's sad that so many people here fear any kind of change instead of saying "Yes, let's try that. Even if it does not work, we may learn something on the way." This project may have great potential, but I will not do it on my own. I do not want to build a mod management system for myself; I want to build it for this community, with the help of this community. I am open to any suggestions, so tell me what you would like to see. If you prefer talking, we could also organize a teamspeak/skype/ventrilo/google hangout/whatever meeting and talk.
But again, I will not start coding anything until I get at least 3 developers to work with me.
  24. That's not correct. Depending on the license you may not be allowed to redistribute the mod in any way. That includes "hosting" the download archive. Not to mention unlicensed content.
  25. Nexus has a subscription model. Without subscribing you will not get any decent download speed. And I remember not being able to download some mod without going 'pro' in the past. But that was a long time ago; I cannot exactly remember which mod / nexus platform that was...

That's the point. We don't have to support thousands of mods. In the beginning we'd have to support a few dozen of the most common mods. Then we will get to a point where we either get enough support from the modders / independent maintainers, or we don't and the project is doomed to die. It's that simple. Support for a mod with no active maintainer will be dropped, and so it will no longer be available (at least not the most current version) via the repo. This in turn may make someone else take over maintenance of the mod, or not. I really do not care which mods are available via the repo.

With such a system it's really not that much work to maintain a few dozen mods as a single person. It's basically what I do right now with a bunch of Google Docs sheets for my own KSP installation. Right now I have 107 mods on my list, of which 101 are installed. So one could say I currently actively maintain 101 mods as just a single person. It's important to understand that as a maintainer there is really not much work to do for a single mod. Most mods will not change their directory structure that often over time, so all a maintainer has to do from update to update is upload the new version to the repo (or update the download URL for non-licensed mods, respectively), replace the version string and maybe update dependencies. That's something that can almost always be done in less than five minutes.

I'd really like to read it ;-) I started this thread to discuss, to read what you guys think about this, and to read about different attempts / alternate solutions.