
Combining efforts on proper mod management framework / tools / platform


keks


We can guarantee the availability of mods with permissive licenses. The closed ones will disappear anyway. Open mods can be uploaded to Dropbox/Google Drive or Curse and retrieved from there (or from official GitHub releases). Using git submodules + GitHub releases will be terribly time-consuming and complicated.
Also, I think mod tools are awesome, but they do not have much in common with mod management.

You just answered your own question here. I'd like to create a simple tool that uploads all the necessary data to a repository.

Everything we're talking about can be done via the API or even git itself: https://developer.github.com/v3/repos/releases/

And these tools do have a lot in common with management, IMHO. They can be the one thing which can at least create some kind of standard.

For this to work, they need to be easy to use - and by easy I mean stupidly easy so that basically an infant could use them - but yet powerful enough so that advanced users can bend them to their needs.

I also do not see your point about submodules + releases here. This is probably by far the easiest way to create an easy-to-use index of all mods known to our system, even across repo boundaries. Updating the index is also pretty easy, as it's basically just four commands that can easily be wrapped up in a simple bash/batch file:

git submodule foreach git pull
git add --all .
git commit -m 'updated index'
git push

After this command your index points to all the latest releases. It's basically a no-brainer to update the index.

This could also efficiently reduce traffic, as we only need to download a single git tag (index HEAD) to check if a client is up-to-date or not.

The hosting question is nothing we should put too much time into right now. It basically does not matter whether we host the content ourselves or it is hosted externally; it's just the download link pointing to another source. Meta-files telling the application where to put which file will be needed anyway.

Who will be responsible for the dozens of forked repositories?

Why should we care about them at all? These have nothing to do with our work.

Or am I missing something here?

I do not believe that any modder will join us and start using this system. Not until it becomes popular among players.

Well, hope dies last. Until then, we'll have to do stuff ourselves.

Another point in favor of GitHub + client-side tools, and for abstaining from maintaining a separate database, is that it would be absurdly simple to take over our role in case we decide to disappear one day. All someone would have to do is fork the index repo and they'd be done. All resources referenced in the index would still be valid and could be replaced one by one without any downtime at all.

As already stated in an earlier post, I'd really like GitHub (or any other hosting service) to just serve data.

What do you think about this?

Link to comment
Share on other sites

I don't see it as realistic to maintain a couple of hundred mods with just a handful of people over a longer period. Of course, if we really want to, we could do that, but it would cause major delays in mod releases on the repository. IMHO the better way is to show mod developers how easy it can be to distribute their mods and how they can benefit from a "mod development kit" providing libraries covering common tasks, for example toolbar integration, module manager, logging, resource API, etc. There are quite a few good libraries out there, but still people reinvent them over and over again.

We do not require the modders to jump in early, but it would help a great deal. I do not want to force them to do anything; I want to help them come together and make life easier for all of us. I already explained this a few posts earlier.

I understand this. And I still believe this doesn't belong in a "mod manager", as this is more of a mod developer issue.

That's not correct. Think about specific version dependencies and multiple mod updates at once. Mod A requires mod B version 1 and mod C version 2. Mod B requires mod D, which in turn is not compatible with mod C version 2 yet... and so on. When managing dependencies you can very easily create cycles or conflicts. Now think about a major update of KSP and a user like me with about 100 mods active, all being updated for the new KSP release. 100 updates with about 2 to 3 dependencies per mod ~> 250 dependencies to resolve, PLUS intermediate dependencies, version conflicts, ... and all this with many users at once. There you got your DDoS :)

I also see absolutely no reason to run this on the server side, as clients can perfectly well resolve dependencies on their side, even more efficiently. Running this on the client side also eliminates the need to provide a central "dependency resolution service" for each repo.
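To illustrate, a minimal client-side resolver for the mod A/B/C/D example above could look roughly like this. All mod names, versions, and the conflict table are made up for the sketch; a real client would read them from the meta-data files.

```python
# available[mod] -> {version: {dependency: required_version}}
# (hypothetical data mirroring the A/B/C/D example in the post)
available = {
    "A": {1: {"B": 1, "C": 2}},
    "B": {1: {"D": 1}},
    "C": {2: {}},
    "D": {1: {}},
}
# known incompatibilities: D v1 is not yet compatible with C v2
conflicts = {("D", 1): [("C", 2)]}

def resolve(root, version, picked=None):
    """Depth-first resolution; raises ValueError on version or conflict clashes."""
    picked = picked if picked is not None else {}
    if root in picked:
        if picked[root] != version:
            raise ValueError(f"version conflict on {root}")
        return picked
    picked[root] = version
    for dep, dep_ver in available[root][version].items():
        resolve(dep, dep_ver, picked)
    # check the partial solution against the known conflict table
    for mod, ver in list(picked.items()):
        for c_mod, c_ver in conflicts.get((mod, ver), []):
            if picked.get(c_mod) == c_ver:
                raise ValueError(f"{mod} v{ver} conflicts with {c_mod} v{c_ver}")
    return picked
```

Resolving mod B alone succeeds, while resolving mod A fails because it drags in both D v1 and C v2, which conflict — exactly the tangle described above, detected locally without any server round-trip.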

There must be some misunderstanding here, because nobody has mentioned dependency resolution on the server side (only dependency metadata). Also, I still don't understand how you got your DDoS. As long as the JSON interface can provide info for more than a single package at a time, you won't have a huge number of requests (just like the AUR and the client I maintain).

Why do you think so? IMHO it really does not matter whether you have one repository or many, as long as they follow a common standard and do not create conflicts. One easy way to eliminate conflicts would be a strict hierarchy the user could define. That's exactly what APT does, for example: it works with multiple repositories following a common standard and uses user-defined priorities to tell which repository to get a package from and which ones to ignore. It has been working perfectly for me for years.
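A minimal sketch of such APT-style, user-defined priorities; the repo names, mods, and versions are invented for illustration:

```python
# repos ordered by user-defined priority, highest first (hypothetical data)
repos = [
    ("my-local-repo", {"ModuleManager": "2.1.0"}),
    ("ksprepo",       {"ModuleManager": "2.0.5", "HotRockets": "1.1"}),
]

def pick_source(mod):
    """Return (repo, version) from the highest-priority repo providing mod."""
    for repo_name, packages in repos:
        if mod in packages:
            return repo_name, packages[mod]
    raise KeyError(f"{mod} not found in any repository")
```

With this ordering, a mod present in both repos is always taken from the higher-priority one, so multiple repositories never produce a conflict.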

Could you please further explain your concern here?

My concern is unnecessary complexity for a still unborn project.

As stated above, this is a non-issue since git is decentralized by nature.

That's not correct. There are many people out there maintaining multiple KSP installations with different versions. For a more popular example, see "Scott Manley".

Also, maybe some feature is broken in the latest mod release, and people would like to jump back to an older one. Another point would be a user who wants to install mod A, which is only compatible with mod B version 1. But the current release of mod B is version 2, while version 1 is still perfectly compatible with KSP. So the user effectively has to downgrade mod B, which in the case of an external hoster may mean the user cannot install mod A at all, because the old release of mod B is no longer available...

TL;DR: Because of compatibility/dependency issues :)

It means mod B v2 conflicts with mod B v1 :P. If mod A can't be installed anymore because its dependency isn't available anymore, then mod A must be updated upstream or it will die. Note that I'm aware of people keeping many versions of KSP in parallel, and I'm all for a feature in the client that allows switching from one install to another, like some of the actual mod managers do. But I see this more as a copy/backup feature for the KSP install before any major upgrade, implemented in the client, for the KSP versions provided by Squad but not all of them (supporting all of them is simply crazy to me).

I don't know if there is any public documentation available on the API. The last time I used SU was around when ArmA II: OA was released, which must be about 3 to 4 years ago... I was building an auto-updater for one of the larger communities' servers. As there was no Linux client available at the time, I simply reverse-engineered the Windows application and built a basic console client for the Six network.

Basically it consists of three parts:

  • a central API service which you can query
  • independent, so-called "networks" consisting of file servers mirroring each other
  • the file servers themselves

Disclaimer: things might have changed since then!

You can easily take a look at the API by analyzing the SU client's traffic in Wireshark. At least back then, the traffic was not encrypted or signed, just a plain HTTP/JSON API.

(I do remember something about having to acquire some kind of ticket through the API though...)

From what I've read, there was a Linux client, but it is now deprecated. Anyway, Six Updater is not really documented, not platform-independent either, and has a problematic license. I guess we can safely think about another alternative here.

Could you please explain your reluctance to host the actual mod itself?

I mean, mods can still be hosted elsewhere. We'd simply be another mirror.

As for hosting, it's just unnecessary complexity again. I don't think a mod manager should provide a complete mirror service.

Could you describe what you mean by "meta-data" here? The more I read you, the more I believe we're talking about different things and have never been on the same wavelength. From my perspective (and TeddyDD's, as far as I can understand it), there wouldn't be any binary distribution (apart from the compiled client itself), just a bunch of simple files that include basic information (name, version, download link, dependency information, etc.).

Could you give an example of what would be in the Git repo (code, raw metadata file, the mod itself?) and how the client would get access to that information? The GitHub API is nice, but I don't see how to use it to make the relevant information available to the client.

Edited by Spyhawk

I understand this. And I still believe this doesn't belong in a "mod manager", as this is more of a mod developer issue.

Well, on the one hand we want as many people as possible to jump onto this train; on the other hand, you say it's the developers' problem.

I don't think this is a developers' problem, because they say they are perfectly fine right now. It's a users' problem, because we are the ones unhappy with the current situation.

I'd also rather not think of this as a "mod manager" (such tools already exist), but more as a repository and a bunch of management applications.

There must be some misunderstanding here, because nobody has mentioned dependency resolution on the server side (only dependency metadata). Also, I still don't understand how you got your DDoS. As long as the JSON interface can provide info for more than a single package at a time, you won't have a huge number of requests (just like the AUR and the client I maintain). My concern is unnecessary complexity for a still unborn project.

So you'd rather put the "complexity" on the server side instead of the client side? IMHO a simple "git pull" on the index and subsequent HTTP requests for the mods' meta-data files are a lot less complex than creating an API collecting and providing the same data. But that's just my opinion.

It means mod B v2 conflicts with mod B v1 :P. If mod A can't be installed anymore because its dependency isn't available anymore, then mod A must be updated upstream or it will die.

Actually, it means mod D has not been updated for mod C v2 yet. Examples would be RemoteTech2 or even B9. Neither of them died; the community provided fixes and/or workarounds.

We could provide such workarounds/fixes as well.

Note that I'm aware of people keeping many versions of KSP in parallel, and I'm all for a feature in the client that allows switching from one install to another, like some of the actual mod managers do. But I see this more as a copy/backup feature for the KSP install before any major upgrade, implemented in the client, for the KSP versions provided by Squad but not all of them (supporting all of them is simply crazy to me).

Well, if you do not see parallel installations as an issue, what about rolling back to an older release of a mod? I myself, for example, had a lot of problems with newer RemoteTech and InfernalRobotics releases; installing older releases solved that problem for me. Firespitter and ExsurgentEngineering are other examples. Luckily, I maintain a local repo versioning the mods I use and the changes I made to them myself, so I could downgrade even though the official download links on the forums were long gone.

From what I've read, there was a Linux client, but it is now deprecated. Anyway, Six Updater is not really documented, not platform-independent either, and has a problematic license. I guess we can safely think about another alternative here.

Well, I did not say we should turn Six Updater into a mod manager for KSP. I just wanted to bring it into the discussion as a reference. I worked with it extensively, and the backend is perfectly fine (IMHO). I just wanted to talk about its backend architecture.

As of hosting, it's just unnecessary complexity again. I don't think a mod manager should provide a complete mirror service.

Why would this be unnecessary complexity? It's just a binary file attached to a release tag, which we'd need anyway.

It does not matter whether the meta-file reads "{ url: 'http://dropbox.com/foo' }" or "{ url: 'http://github.com/repo/foo' }". The only difference is that in the latter case we do not lose binary releases and don't hoard dead links.

If we do not archive old versions, there is no point in using git at all in the first place.

Could you describe what you mean by "meta-data" here? The more I read you, the more I believe we're talking about different things and that we've never been on the same wavelength. From my perspective (and the one of TeddyDD that I can understand), there wouldn't be any binary distribution (apart from the compiled client itself), just a bunch of simple files that include basic information (name, version, download link, dependency information, etc.).

I already posted a link to an example repo I quickly put up earlier. I also described the contents of such a meta-data file in previous posts. Basically, TeddyDD's example file is a stripped-down version of mine, not including the license text (which in my opinion is absolutely mandatory) and similar stuff.

Anyway, here is the link to the example repo again: https://github.com/ksprepo/

A meta-data file could look like this: https://github.com/ksprepo/ksp_b9-aerospace/blob/master/meta.yaml

Note that I already discussed this earlier and explained how we could provide external download information.

In the example repo I set up different branches for source (upstream), development (develop) and the actual release content going out to users (master).

The upstream branch is also used to contribute changes (like the community B9 fix) back upstream.

Could you give an example on what would be on the Git repo (code, raw metadata file, mod itself?) and how the client would get access to that information? GitHub API is nice, but I don't see how to use it to make the relevant information available to the client.

The repo would actually only contain the meta.yaml file. Releases then get tagged, and a binary release gets attached to that tag: https://help.github.com/articles/creating-releases

Optionally, the maintainer could keep an upstream branch for pulling in changes from upstream and contributing modifications back. But that has nothing to do with the actual mod management here.

Note: I did not attach binary files to releases on the example repo; instead I put them directly into the master branch.

And no offense, but have you worked with git before? I don't see any problem here with the client getting all the information it needs. In the case of my example repo, a simple update process could look like this on the client side:

  1. clone/pull https://github.com/ksprepo/mod-repo
  2. check submodule revisions against local revisions
    • if the installed mod revisions match, do nothing
    • if they differ, continue
  3. get the latest meta-data, for example https://github.com/ksprepo/ksp_api-extensions/blob/master/meta.yaml
  4. download the new binary release from the URL provided by meta.yaml
  5. replace local files, delete obsolete files

A simple search for mods can be done via GitHub's search. The following query, for example, will look up the meta-file for the mod "HotRockets" in my example repo:

HotRockets in:file,meta.yaml extension:yaml user:ksprepo path:/

Put this into the search form and you'll land here: https://github.com/search?utf8=%E2%9C%93&q=HotRockets+in%3Afile%2Cmeta.yaml+extension%3Ayaml+user%3Aksprepo+path%3A%2F&type=Code&ref=searchresults

Or via the JSON API: https://api.github.com/search/repositories?q=HotRockets+in:file,/meta.yaml+user:ksprepo
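The revision comparison in step 2 could be sketched like this, assuming the revision maps have already been extracted (e.g. from `git ls-tree HEAD` on the index repo and a local install database). Repo names and hashes here are made up:

```python
# hypothetical revision maps: submodule name -> commit id
index_revs = {"ksp_b9-aerospace": "a1b2c3", "ksp_hot-rockets": "d4e5f6"}
local_revs = {"ksp_b9-aerospace": "a1b2c3", "ksp_hot-rockets": "000000"}

def mods_needing_update(index, local):
    """Mods whose installed revision differs from the index (or are missing)."""
    return sorted(m for m, rev in index.items() if local.get(m) != rev)
```

Only mods whose recorded revision differs trigger steps 3 to 5, which is exactly why a single fetch of the index HEAD is enough to decide whether anything needs downloading.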
Edited by keks

So the client needs git installed, right?

It looks a bit better when you explain it from the client's point of view. So if there is no download link in the meta.yaml file, the client should look for GitHub releases (or in master, as in your example).

If I understood everything correctly, your idea makes much better use of GitHub's features (e.g. searching).

Edit:

not including the license text (which in my opinion is absolutely mandatory)

I do not think that every meta.yaml file should contain a license, which could in addition be confused with the mod's license.

But that's just my opinion.

Anyway, it looks good.

What would the process of creating such a repository look like? Or of adding new packages?

Edited by TeddyDD

(just a very quick answer, might expand answer later when I get time to do so)

Note: I did not attach binary files to releases on the example repo; instead I put them directly into the master branch.

Thanks! That's exactly why your example repo didn't make sense to me at first.

And no offense, but have you worked with git before? I don't see any problem here with the client getting all the information it needs.

Yes, I have. But my concern about git is exactly the same as TeddyDD's above. I agree that git would be a simpler and effective solution for more advanced users, but I don't believe installing it on users' machines is the correct thing to do for non-developers. On the other hand, if the GitHub API allows easy access to the content of the json/yaml files, that would be the perfect solution (no git software install required, no external database for a yaml/json interface).

A simple search for mods can be done via GitHub's search. The following query, for example, will look up the meta-file for the mod "HotRockets" in my example repo:

HotRockets in:file,meta.yaml extension:yaml user:ksprepo path:/

Put this into the search form and you'll land here: https://github.com/search?utf8=%E2%9C%93&q=HotRockets+in%3Afile%2Cmeta.yaml+extension%3Ayaml+user%3Aksprepo+path%3A%2F&type=Code&ref=searchresults

Or via the JSON API: https://api.github.com/search/repositories?q=HotRockets+in:file,/meta.yaml+user:ksprepo

Hmm, it doesn't seem that direct access to the content of yaml/json files is possible. So either git will have to be installed on clients, or the yaml/json file will have to be downloaded independently. Is there a better solution?

Edit: Don't get me wrong, I like your idea. I'm just wondering if there's something more straightforward from a user's point of view.


Yes, I have. But my concern about git is exactly the same as TeddyDD's above. I agree that git would be a simpler and effective solution for more advanced users, but I don't believe installing it on users' machines is the correct thing to do for non-developers. On the other hand, if the GitHub API allows easy access to the content of the json/yaml files, that would be the perfect solution (no git software install required, no external database for a yaml/json interface).
So the client needs git installed, right?

Who said we need users to install git on their machines? There are quite a few good git implementations out there (including ones for Mono) which we can use. Alternatively, we could simply ship a binary with our client software, and the user wouldn't even necessarily know that git is involved.

Hmm, it doesn't seem that direct access to the content of yaml/json files is possible. So either git will have to be installed on clients, or the yaml/json file will have to be downloaded independently. Is there a better solution?

This looks pretty raw to me, doesn't it? ;-)

https://raw.githubusercontent.com/ksprepo/ksp_hot-rockets/master/meta.yaml

So if there is no download link in the meta.yaml file, the client should look for GitHub releases (or in master, as in your example)

There should always be a download link in the meta-file. When hosting the mod as an attachment to the release tag, it simply points there instead of to some external source.
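That fallback could be sketched like this on the client side. The field names and the release-attachment URL pattern are assumptions for illustration, not a fixed format:

```python
def download_url(meta):
    """Prefer the external URL; fall back to the GitHub release attachment."""
    if meta.get("url"):
        return meta["url"]
    # hypothetical convention: <repo>/releases/download/<tag>/<archive>
    return "{repo}/releases/download/{version}/{archive}".format(**meta)

# example meta dict mirroring a parsed meta.yaml (values are made up)
meta = {"repo": "https://github.com/ksprepo/ksp_hot-rockets",
        "version": "7.1", "archive": "HotRockets-7.1.zip", "url": None}
```

Whether the bytes come from Dropbox or from a release tag, the client code path stays identical; only the resolved URL differs.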

If I understood everything correctly, your idea makes much better use of GitHub's features (e.g. searching).

Yes, use what already exists instead of reinventing it over and over again :)

I do not think that every meta.yaml file should contain a license, which could in addition be confused with the mod's license.

It has to. Otherwise, a third party is not allowed to modify or build upon it in any way (copyright). That's the problem we have with unlicensed mods right now.

Because I want it to be open and free, I chose the MIT license for my example repo, while keeping original work under its original license.

(Note: see README.md, which is automatically generated from the meta.yaml.)

What would the process of creating such a repository look like? Or of adding new packages?

  • create/clone repo
  • commit changes to meta.yaml
  • create version tag
  • push changes to GitHub
  • attach binary release archive to release tag
  • update index repo

Everything here can be easily automated, except the creation of the meta.yaml, which will most likely need manual work, or at least verification of auto-detected values.
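The automatable part of the list above could be sketched as a dry run that only builds the git command sequence. Tag names and the meta-file name are placeholders; a real tool would execute each command via subprocess and use the GitHub releases API for the attachment step:

```python
def release_commands(version, meta_file="meta.yaml"):
    """Build (but do not run) the git commands for a release of the given version."""
    return [
        ["git", "add", meta_file],
        ["git", "commit", "-m", f"release {version}"],
        ["git", "tag", f"v{version}"],
        ["git", "push", "--follow-tags"],
        # attaching the binary archive to the release tag happens via the
        # GitHub releases API, not via plain git
    ]
```

Updating the index repo afterwards is just the four-command submodule update already shown earlier in the thread.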


Who said we need users to install git on their machines? There are quite a few good git implementations out there (including ones for Mono) which we can use. Alternatively, we could simply ship a binary with our client software, and the user wouldn't even necessarily know that git is involved.

Correct! There's libgit2sharp, which might be exactly what is needed here. Gosh, why didn't I think about this in the first place?

I had more of a single request for all files in mind, in order to avoid multiple requests from the clients... Guess we'll have to live without it.

I also see the GitHub API has a rate limit. Wouldn't this be a problem here?


Correct! There's libgit2sharp, which might be exactly what is needed here. Gosh, why didn't I think about this in the first place?

We'd need to evaluate these libraries before we jump onto one specific one. Platform independence is important, as we need to support all platforms KSP supports.

I had more of a single request for all files in mind, in order to avoid multiple requests from the clients... Guess we'll have to live without it.

That's what the index is for; I already explained this earlier. All a client needs to do to check whether it is up to date is fetch the index's HEAD. If the submodules' commit IDs match, the client is up to date.

I also see the GitHub API has a rate limit. Wouldn't this be a problem here?

That's one API request per minute for unauthenticated requests. When authenticated, you can make 20 requests per minute.

See https://developer.github.com/v3/#rate-limiting and https://developer.github.com/v3/#increasing-the-unauthenticated-rate-limit-for-oauth-applications

That should be more than enough, given that we process as many things locally as possible.


@spyhawk, @TeddyDD

I updated my example ModuleManager repo to further clarify my point: https://github.com/ksprepo/ksp_module-manager/

I moved the release binaries to the release tag and added download information and KSP version dependencies to the meta file.

As you can see, this way the meta-file could provide several download sources the client software could choose from.
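To make that concrete, a meta-file with multiple download sources might look roughly like this. All field names and URLs below are illustrative guesses, not the actual format used in the example repo:

```yaml
# hypothetical meta-file layout -- every field name/value is illustrative
name: ModuleManager
version: "2.1.5"
license: CC-BY-SA            # placeholder license
ksp-versions: ["0.24"]
downloads:                   # client tries these sources in order
  - https://example.com/upstream/ModuleManager-2.1.5.zip      # original source
  - https://github.com/ksprepo/ksp_module-manager/releases/download/v2.1.5/ModuleManager-2.1.5.zip  # repo mirror
dependencies: []
```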

So ideally we (the repo) would then simply act as a mirror to the original source.


Every mod on this repository must comply with the following directory structure as far as possible. (Some mods may break this convention to retain compatibility with other mods.)

What if an author used "All rights reserved" or "CC-BY-NC-ND"? Boom! The system does not work, end of the world? We cannot modify the original files in any way. Package files should contain instructions on how to unpack such mods.

Do we really need the SHA1 of every file in the archive?

Besides, cool :)


What if an author used "All rights reserved" or "CC-BY-NC-ND"? Boom! The system does not work, end of the world? We cannot modify the original files in any way. Package files should contain instructions on how to unpack such mods.

Do we really need the SHA1 of every file in the archive?

A single checksum for the archive itself should be more than enough.

That's the issue with repackaging. You'll end up forking and patching the mods to make them integrate nicely together, similarly to any Linux package management system. It's probably the fastest solution, but it requires quite a lot of work from the team maintaining the repo.

Since the major issue is the lack of a "standard" to package mods properly, and Squad doesn't seem to really care, maybe the only viable solution here would be to work our asses off to make them recommend a standard, if enforcing one isn't an option. This idea has already come up in the past, but has it been put into action? (Pardon my ignorance on this subject.)


I would prefer something like mapping files from the archive into the game folders. Modders can continue to work in their own way, and it's less work for us when upgrading a package. The folder structure in a mod rarely changes; right now we would have to change it manually with every update.


What if an author used "All rights reserved" or "CC-BY-NC-ND"? Boom! The system does not work, end of the world? We cannot modify the original files in any way.

And? We could still fall back to only hosting the meta-data for such mods. If their mirror goes down, that's their problem then. We do not necessarily need to modify the release archive in any way; that's completely optional. Modifying the release archive is actually only of concern to us when integrating community fixes, like those created for B9 on KSP 0.23.x, for example.

By only providing a download link to the original source and hosting some meta-data, we do not violate copyright in any way. I already explained that several times in earlier posts.

One last time: Hosting the release archive itself is completely optional!

Do we really need the SHA1 of every file in the archive?
A single checksum for the archive itself should be more than enough.

Yes, we do. Otherwise we cannot easily detect changes to those files on the client side, and that is required for removal/update of mods.

If we do not store every file's checksum, we cannot detect conflicts (an update changes a file the user or another mod already changed locally) and would blindly overwrite local changes.
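A minimal sketch of that check: hash the current file contents and compare against the checksums shipped in the meta file. File names and contents are invented here, and real code would read the files from disk rather than from a dict:

```python
import hashlib

def sha1(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def locally_modified(files, manifest):
    """files: {path: current file bytes}; manifest: {path: sha1 from the meta file}.
    Returns the paths whose content no longer matches the shipped checksum."""
    return sorted(p for p, h in manifest.items() if sha1(files.get(p, b"")) != h)

# hypothetical manifest entry for one shipped file
manifest = {"settings.cfg": sha1(b"stock settings")}
```

An updater would prompt for every path this returns instead of silently overwriting it, which is exactly the conflict detection a single archive checksum cannot provide.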

That's the issue with repackaging. You'll end up forking and patching the mods to make them integrate nicely together, similarly to any Linux package management system. It's probably the fastest solution, but it requires quite a lot of work from the team maintaining the repo.

Since the major issue is the lack of a "standard" to package mods properly, and Squad doesn't seem to really care, maybe the only viable solution here would be to work our asses off to make them recommend a standard, if enforcing one isn't an option. This idea has already come up in the past, but has it been put into action? (Pardon my ignorance on this subject.)

And again, I have already explained this several times. We do not repack anything unless we really have to. We also do not need to change directory structures when using a client-side mod manager, as it can be used to easily handle the directory structure of a specific mod.

I would prefer something like mapping files from the archive into the game folders. Modders can continue to work in their own way, and it's less work for us when upgrading a package. The folder structure in a mod rarely changes; right now we would have to change it manually with every update.

See previous answer and previous posts. I already talked about this.

I'd really prefer talking / chatting about this stuff, as we tend to write about the same things over and over again.

I think this is because of the high delay between answers. Would you mind attending a meeting in TeamSpeak/Ventrilo/... or IRC?

Edited by keks

That I already explained several times in earlier posts... And again, I already explained this several times... I already talked about this.

To be honest, you're really not helping by integrating all the optional, unnecessary stuff into your example repo. Between what you wrote in your first post, what you included in your example repo, and the various "optional" stuff you've talked about in this thread, you've lost me along the way. My mind tries to find the ideal, simplest way to achieve the objective, while you're trying to integrate everything into this project at once, which personally confuses me :)

Yes, we do. Otherwise we cannot easily detect changes to those files on the client side, and that is required for removal/update of mods.

If we do not store every file's checksum, we cannot detect conflicts (an update changes a file the user or another mod already changed locally) and would blindly overwrite local changes.

It depends on how you handle it. Package managers usually keep track of a list of files installed by each package, while you're proposing to check whether a file still belongs to a package or has been overwritten since then (as I understand it, am I right?). I think a saner conflict management would use the former method.
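For comparison, that "former method" could be sketched like this: the client records which files each package installed, and flags a conflict when a new package would clobber a path another package owns. Package and path names below are invented:

```python
# hypothetical install database: package -> list of files it installed
installed = {"B9":  ["GameData/B9/part.cfg"],
             "FAR": ["GameData/FAR/far.cfg"]}

def owner_of(path):
    """Return the package that installed path, or None if nobody owns it."""
    return next((pkg for pkg, files in installed.items() if path in files), None)

def check_install(pkg, files):
    """Return {path: owning_package} for every path pkg would clobber."""
    return {f: owner_of(f) for f in files if owner_of(f) not in (None, pkg)}
```

This catches package-vs-package conflicts from the ownership list alone, while per-file checksums would additionally catch user edits; the two approaches are complementary rather than exclusive.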

I'd really prefer talking / chatting about this stuff, as we tend to write about the same things over and over again.

I think this is because of the high delay between answers. Would you mind attending a meeting in TeamSpeak/Ventrilo/... or IRC?

I'd suggest you write a clear document, with a short example and an example repo (or submodule repo) that doesn't integrate all the optional stuff you have in mind. To me, "optional" means "unnecessary". We could then talk about it over IRC.

Since the major issue is the lack of a "standard" to package mods properly, and Squad doesn't seem to really care, maybe the only viable solution here would be to work our asses off to make them recommend a standard, if enforcing one isn't an option. This idea has already come up in the past, but has it been put into action? (Pardon my ignorance on this subject.)

This part has been overlooked. Is this really not an option? Has it ever been tried previously?


To be honest, you're really not helping by integrating all the optional, unnecessary stuff into your example repo. Between what you wrote in your first post, what you included in your example repo, and the various "optional" stuff you've talked about in this thread, you've lost me along the way. My mind tries to find the ideal, simplest way to achieve the objective, while you're trying to integrate everything into this project at once, which personally confuses me :)
I'd suggest you write a clear document, with a short example and an example repo (or submodule repo) that doesn't integrate all the optional stuff you have in mind. To me, "optional" means "unnecessary"

My initial post is not up to date anymore. It contains an initial idea, which developed over time. I know some things got lost along the way; that's why I suggested a meeting where we can talk to each other directly, so we can reply to questions immediately instead of waiting several hours between replies. In my experience this helps a lot. I'd really prefer a direct chat over a written-down specification at this time.

Writing specifications, examples and so on just takes up too much time, which I currently do not have. However, if you want me to, I could fiddle together a quick-and-dirty prototype repo and application demonstrating such an approach. This will take some time though, and it would be Linux-only, as I do not have access to a Windows machine over the next weeks.

As for the optional features: it's important to keep them in mind early and include them in the initial planning. Otherwise we end up in a state where we cannot easily implement feature X later on, because we did something in a specific way which does not allow X to be added without also introducing major changes/breaks to feature Y. I already talked about this.

Once we agreed to some point, we need to isolate critical features from optional ones. Optional features get put on backlog with low priority and will be implemented at some later point. It's just important to keep those in mind :)

So far nothing of what I lately talked about is very complicated to implement. The only complicated parts are the developer tools, which obviously are not "critical" to the basic function of such a management application (IMHO), but are still important to push this project forward, because we simply cannot maintain all mods ourselves. The number of mods will grow, and maintenance costs will increase with it.

It depends how you handle it. Package managers usually keep track of a list of files installed by each package, while you're proposing to check whether a file still belongs to a package, or whether it has been overwritten since then (as I understand it, am I right?). I think a saner conflict management would use the former method.

Not really. The file/checksum list in my example file is such an index keeping track of package contents. It keeps track of which file belongs to the package.

Problem is that users also can (and most likely will) edit files locally. DeadlyReentry in combination with FAR is such an example, where people most likely will tune DE's settings to better fit the new aerodynamics introduced by FAR. Now let's say DE gets updated. This would cause the user-made changes to be overridden without notice, because we cannot detect these user-made changes easily (unless we extract the original archive and match files against each other, which takes a lot more time).

This is only meant to identify local changes not made through the client-application.
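The check described above can be sketched quickly. This is only an illustration: the manifest format (a JSON map of relative path to SHA-256 digest) and the function names are assumptions, not part of any agreed format.

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_local_changes(manifest_path: str, game_data: str) -> list[str]:
    """Compare installed files against the package's file/checksum list.

    Returns the relative paths whose content no longer matches the
    manifest, i.e. files edited (or deleted) outside the client.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    changed = []
    for rel_path, expected in manifest.items():
        target = Path(game_data) / rel_path
        if not target.exists() or file_sha256(target) != expected:
            changed.append(rel_path)
    return changed
```

Before overwriting anything on update, the client could run this and warn the user about every path it returns.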

This part has been overlooked. Is this really not an option? Has it ever been tried previously?

There already is some kind of standard recommended by Squad, which is being ignored by developers. I mentioned it in my initial post.

As long as Squad does not enforce anything, developers will keep ignoring it, simply because they can.

But as I already said, when keeping track of which files belong to a specific package, we can simply ignore the fact that there is no common standard. For the client-application it simply does not matter where files are located at, as long as there are no conflicts between mods. I'd just install the mods as-is (as you would do when installing manually) for now.

We could then talk about it over IRC.

What's your name on IRC? The same as on the forums here? And when would be a good time to catch you there?

PS: I'm not used to do planning over forums. I'm more the brainstorming guy, sitting together with a bunch of developers talking about everything. So bear with me :)

Edited by keks
Link to comment
Share on other sites

And? We could still fall back to only hosting the meta-data for such mods. If their mirror goes down, that's their problem then. We do not necessarily need to modify the release archive in any way; that's completely optional. Modifying the release archive is actually only of concern to us when integrating community fixes, like the ones created for B9 on KSP 0.23.x, for example.

And even this is easily solved using a package manager.

You have the master (meta) package B9_vX.Y.Z, which points to the download link of the latest officially released B9 package. (I'm guessing it has a restrictive license; I haven't used B9 myself.)

You then have the community-fix package B9_vX.Y.Z-CommunityFix_v1 which -depends- on the original, unchanged B9_vX.Y.Z package and includes the community fixes.

The client would see the "new" version of B9, install the original B9 (if not already installed), and then install the fixes over the top. Bingo.

Later on when the mod-author releases an updated B9 (eg, vX.Y+1.0), the master meta-package for that is created, and the client uses that to upgrade B9.
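The dependency chain described above is easy to model. The sketch below is hypothetical: the package ids, the "depends" field and the placeholder URLs are illustrative, since no metadata format has been agreed on yet.

```python
def install_order(packages: dict, target: str) -> list[str]:
    """Resolve the install order for a package and its dependencies.

    `packages` maps package id -> metadata dict. The "depends" key
    (a list of package ids) is a hypothetical field. No cycle
    detection; this is a sketch, not a full resolver.
    """
    order: list[str] = []

    def visit(name: str) -> None:
        if name in order:
            return
        for dep in packages[name].get("depends", []):
            visit(dep)  # dependencies are installed first
        order.append(name)

    visit(target)
    return order

# Hypothetical repository metadata mirroring the B9 example:
REPO = {
    "B9_vX.Y.Z": {
        "download": "https://example.invalid/B9_vX.Y.Z.zip",  # placeholder
        "depends": [],
    },
    "B9_vX.Y.Z-CommunityFix_v1": {
        "download": "https://example.invalid/B9_fix_v1.zip",  # placeholder
        "depends": ["B9_vX.Y.Z"],
    },
}
```

Asking the client to install the fix package would then yield the order: original B9 first, fixes over the top.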

PS. I don't have a huge amount of time IRL, but I'm happy to help out where possible. I'm a long-time Debian user, so I'm fully on board with using some sort of mod repository. Even with 20-odd mods it's already an RPITA to manage.

Keks, I think I've followed your overall architecture. I'm not very conversant with git (only used the basics) but I am a programmer so happy to help with coding. Don't know C#, but I've got a few years C (/C++) under my belt so don't see that being an issue. Also fully agree to use C# to reduce client-side dependencies; although it may be useful to prototype clients in a scripting language first (it's what I did for some Apps for Maemo before recoding them in C++) once we have a rough design sketched out.

Link to comment
Share on other sites

And even this is easily solved using a package manager. [...]

That would make the "fix-package" obsolete when the base package gets updated. The fix would have to be uninstalled manually or marked as conflicting so it automatically gets removed. I personally prefer not to create temporary packages as this increases maintenance. But that's something that needs further evaluation and we can decide later on.

PS. I don't have a huge amount of time IRL, but I'm happy to help out where possible. I'm a long-time Debian user, so I'm fully on board with using some sort of mod repository. Even with 20-odd mods it's already an RPITA to manage.

Any help is greatly appreciated :-)

Keks, I think I've followed your overall architecture. I'm not very conversant with git (only used the basics) but I am a programmer so happy to help with coding. Don't know C#, but I've got a few years C (/C++) under my belt so don't see that being an issue. Also fully agree to use C# to reduce client-side dependencies; although it may be useful to prototype clients in a scripting language first (it's what I did for some Apps for Maemo before recoding them in C++) once we have a rough design sketched out.

I fully agree. Once we've got all important questions answered, the next step is implementing a basic prototype. Git and Mono/C# are not that difficult to learn and understand. I'd happily give you guys a crash course in git if you want me to. I myself am new to Mono/C# as well, but as I've worked with Java for quite some time now and C# is quite similar, that's no problem :-)

For the prototype application I'd like to suggest Python here, as it is (mostly) platform independent and very easy to learn, yet powerful enough. But I'm fully open to any suggestions here :)

Link to comment
Share on other sites

By only providing a download link to the original source and hosting some meta-data, we do not violate copyright in any way. I have already explained that several times in earlier posts.

And what about the installation of such a mod? It may have a completely different folder structure than our standard.

IRC? It's a good idea.

I usually have time between 8 am - 12 pm and 8 pm - midnight GMT

(I hope I calculated it correctly)

How about you?

Edit: Forum is trolling me :) Hello Micha Welcome aboard!

And even this is easily solved using a package manager. [...]

That's precisely the solution I like. Nobody moves original files.

Edit2:

For the prototype application I'd like to suggest Python here, as it is (mostly) platform independent and very easy to learn, yet powerful enough. But I'm fully open to any suggestions here

It would be great to write the entire program in Python... but Python app deployment is a nightmare :C For writing the prototype, though, it's a good language.

Edited by TeddyDD
Link to comment
Share on other sites

That would make the "fix-package" obsolete when the base package gets updated. The fix would have to be uninstalled manually or marked as conflicting so it automatically gets removed. I personally prefer not to create temporary packages as this increases maintenance. But that's something that needs further evaluation and we can decide later on.

Why would the "fix package" need to be handled specially? When the updated upstream package is made available in the repository, the package manager client detects it as an update and performs the upgrade. While the detailed technicalities need to be worked out, at a high level the installer just uninstalls the existing package, then installs the new one.

For the prototype application I'd like to suggest Python, here as it is (mostly) platform independent and it's very easy to learn, yet powerful enough. But I'm fully open to any suggestions here :)

No quarrels from me. Have used Python a little bit in the past.

That's precisely the solution I like. Nobody moves original files.

I never said the packaging client couldn't move files ;)

As I understood it, Keks's proposal included making mods conform to a standardised layout, as long as the mod allows it (ie, no hard-coded paths).

As far as I understood the proposed packaging system, it would primarily consist of:

1. metadata for each mod

2. a client

3. a set of developer/maintainer tools

Metadata would include the obvious (name, version, license, URLs, etc) but also simple rules (eg, which files to copy from the upstream's zip file into the KSP "GameData" directory).

Optionally the maintainer of a mod might repackage a mod and mirror it in the repository if the license of the mod allows for it.

NB. The maintainer would, ideally, be the mod developer, but doesn't have to be, and is unlikely to be at first until the system is proven.
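A minimal sketch of what such metadata and its install rules could look like. Every field name, glob pattern and URL here is a made-up illustration; the thread has not agreed on a format.

```python
import fnmatch

# Hypothetical meta-file content; field names are illustrative only.
META = {
    "name": "ExampleMod",
    "version": "1.2.0",
    "license": "CC-BY-NC-SA-4.0",
    "download": "https://example.invalid/ExampleMod-1.2.0.zip",  # placeholder
    # Simple rules: which archive members go into GameData,
    # and which (docs, sources) are skipped entirely.
    "install": ["GameData/*"],
    "ignore": ["Source/*", "README*", "LICENSE*"],
}

def select_members(names: list[str], meta: dict) -> list[str]:
    """Pick the archive members the client should extract into GameData.

    `names` would come from e.g. zipfile.ZipFile.namelist(); ignore
    rules take precedence over install rules.
    """
    wanted = []
    for name in names:
        if any(fnmatch.fnmatch(name, pat) for pat in meta["ignore"]):
            continue
        if any(fnmatch.fnmatch(name, pat) for pat in meta["install"]):
            wanted.append(name)
    return wanted
```

With rules like these, the client never needs the upstream zip to follow any particular layout; the meta-file tells it what to copy and what to leave alone.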

Which IRC channel/server?

Link to comment
Share on other sites

I never said the packaging client couldn't move files

I mean we do not modify any original mods. Example:

B9 R4.0c for 0.22 > Works out of the box

B9 4.1 (our unofficial) for 0.24.2 > Depends on: B9 Aerospace Pack Fixes Pack (community), which depends on original B9 0.22 R4.0c

Then Bac9, Taverius and K3|Chris are releasing B9 5.0

B9 5.0 for 0.24.2 dependent on: etc...

Link to comment
Share on other sites

Why would the "fix package" need to be handled specially? When the updated upstream package is made available in the repository, the package manager client detects it as an update and performs the upgrade. While the detailed technicalities need to be worked out, at a high level the installer just uninstalls the existing package, then installs the new one.
I mean we do not modify any original mods. Example:

B9 R4.0c for 0.22 > Works out of the box

B9 4.1 (our unofficial) for 0.24.2 > Depends on: B9 Aerospace Pack Fixes Pack (community), which depends on original B9 0.22 R4.0c

Then Bac9, Taverius and K3|Chris are releasing B9 5.0

B9 5.0 for 0.24.2 dependent on: etc...

That would increase complexity in the installation process, for example when you downgrade packages, or when we simply have multiple fixed versions released (maybe the first fix did not work correctly). These temporary packages would also have to remain available over time, bloating up the repo. However, instead of patching the release, we could integrate some kind of patch mechanism into the installer:

The meta-file would then point to a patch-set in addition to the original content. This way we would not have to create "temporary" packages; we could simply bump up the version number of the original package and still would not have to mess around with original release files. We'd also not have compatibility issues, as this would actually be treated as a normal release.

Our release B9 R4.0c+ksprepo-1 would then actually point to the download of B9 R4.0c, but additionally point to a set of community patches which would be applied after the original release file.
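The patch mechanism proposed here could be as simple as an overlay copy. This is a sketch under assumptions: the client has already extracted the unmodified upstream archive into the install directory and downloaded the patch-set into its own directory; names and layout are illustrative.

```python
import shutil
from pathlib import Path

def apply_patch_set(install_dir: str, patch_dir: str) -> list[str]:
    """Overlay community patch files onto an installed release.

    Copies every file from the patch-set directory over the installed
    tree, overwriting on conflict, and returns the relative paths it
    applied. The upstream release archive itself is never modified.
    """
    applied = []
    for src in sorted(Path(patch_dir).rglob("*")):
        if src.is_file():
            rel = src.relative_to(patch_dir)
            dest = Path(install_dir) / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            applied.append(str(rel))
    return applied
```

So installing "R4.0c+ksprepo-1" would mean: extract the original R4.0c download, then run the overlay from the community patch-set.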

No quarrels from me. Have used Python a little bit in the past.

So I guess we agreed on Python here. (for the prototype only)

As I understood it I think Keks's proposal included making mods conform to a standardised layout, as long as the mod allows (ie, no hard-coded paths).

Yes, that was my initial thought. But we actually do not have to standardize the directory layout. From the perspective of our application it simply does not matter whether the mods are structured or simply thrown together in a single folder, because we have the meta-data describing each package and its contents. I'd make this point optional, as re-arranging files might cause confusion/problems on the client side.

As far as I understood the proposed packaging system, it would primarily consist of:

1. metadata for each mod

2. a client

3. a set of developer/maintainer tools

Correct.

Metadata would include the obvious (name, version, license, URLs, etc) but also simple rules (eg, which files to copy from the upstream's zip file into the KSP "GameData" directory).

That's correct. By listing package contents we can also easily ignore files that are part of the original release archive but are of no meaning to us. Documentation or source files, for example, do not need to be moved to the GameData directory, even though they may be part of the release archive.

Optionally the maintainer of a mod might repackage a mod and mirror it in the repository if the license of the mod allows for it.

NB. The maintainer would, ideally, be the mod developer, but doesn't have to be, and is unlikely to be at first until the system is proven.

Correct. Ideally, the maintainer would be the developer himself, but it can basically be done by anyone. As long as the mod license allows redistribution in unmodified form, the maintainer should also attach the unmodified release archive to the GitHub release tag, so the repo serves as a mirror.

Which IRC channel/server?

I'd suggest the official KSP channel linked at the top of the page:

irc.esper.net #KSPOfficial

Edited by keks
Link to comment
Share on other sites

My initial post is not up-to-date anymore. It contains an initial idea, which developed over time. I know some things got lost over time, that's why I suggested a meeting where we can directly talk to each other, so we can directly reply to questions instead of waiting several hours between replies. In my experience this helps a lot. I'd really prefer a direct chat over a written down specification at this time.

Seriously, man? About 50% of your posts are complaining about "I already said it before" and "I've explained it several times", yet you're unable to keep track of all the changes in a clear, simple manner in a post or reference document? No wonder it's difficult to follow you...

Writing specifications, examples and stuff just takes up too much time, which I currently do not have.

Enough time to repeat yourself again and again, yet no time to put up a clear example in a concise manner? Seriously?

As for the optional features: It's important to keep them in mind early, and include them in the initial planning. Else we end up in a state where we cannot easily implement feature X later on, because we did do something in a specific way which does not allow X to be added without also introducing major changes/breaks to feature Y. I already talked about this.

Yes, I know this. The single reason I'm asking you to talk about the critical features only is that your brain goes in every direction at the same time, and you seem unable to prioritize.

Once we agreed to some point, we need to isolate critical features from optional ones. Optional features get put on backlog with low priority and will be implemented at some later point. It's just important to keep those in mind :)

And I function in the very opposite way, because there's no way everybody could agree at some point if you start to consider everything at once!

Better ask yourself what the core features are, document them in a simple and clear manner, and then talk about optional features and consider how they affect the core features we'd already agreed on.

What's your name on IRC? The same as on the forums here? And when would be a good time to catch you there?

I really believe there's no point in having an IRC meeting right now, because having a meeting to talk about everything is pointless. It's the very same as talking about nothing.

Please have an RFC first, that we can review before talking about it.

On a side note, I believe IRC meetings are probably the least efficient method to achieve something, unless you have a clear agenda pre-defined.

PS: I'm not used to do planning over forums. I'm more the brainstorming guy, sitting together with a bunch of developers talking about everything. So bear with me :)

Yes, can confirm. You're pretty bad at planning in written form :D To be honest, I'm not sure I could bear with such personality on the long term - clashes will ensue for sure. Might be better for me to step down at this stage.

Link to comment
Share on other sites

Seriously, man? About 50% of your posts are complaining about "I already said it before" and "I've explained it several times", yet you're unable to keep track of all the changes in a clear, simple manner in a post or reference document? No wonder it's difficult to follow you...
Enough time to repeat yourself again and again, yet no time to put up a clear example in a concise manner? Seriously?

Well, that's why this is called a discussion, I guess. I said I'm not here to tell you how to do something; I'm here to talk. My opinion on things changes as we talk, and that's the reason why I do this: to get feedback from you and hear your proposals.

I thought people answering here did in fact read the whole discussion, and not just the first and last post.

I also did not complain at all. If I sounded a bit harsh, that was not my intention. I just mentioned that I already talked about X in detail before.

EDIT:

Also, I did put up several examples early on, even a complete example repo including some management scripts. The idea developed over time and so the example became obsolete. I updated an example mod in the repo not too long ago, so it matches the current state of our discussion here. I linked it in a previous post and explained what I did there.

/EDIT

Yes, I know this. The single reason I'm asking you to talk about the critical features only is that your brain goes in every direction at the same time, and you seem unable to prioritize.
Yes, can confirm. You're pretty bad at planning in written form :D To be honest, I'm not sure I could bear with such personality on the long term - clashes will ensue for sure. Might be better for me to step down at this stage.

I'm not going in every direction at once. I'm looking at this as a whole. I take into account as much information as I have available at a given point. I plan for the future, not just for a single aspect at a time.

Another problem is replying to different (unrelated) people at once. I cannot really direct this conversation into a single direction, because I cannot really influence what someone asks at a given point.

A forum is not really ideal for this, as we'd need to create several threads for each question / part of the application. A ticket system would IMHO be much better for this kind of discussion. I also must admit that I made some mistakes early on, in that I did not try to steer the discussion into a specific direction. As I said, I'm not used to planning a project via a forum. It's different :)

And I'm very functioning in the opposite way, because there's no way everybody could agree at some point if you start to consider everything at once!

Better ask yourself what are the core features, document them in a simple and clear manner, and then talk about optional features and consider how they affect the already core features we'd agreed on.

We're actually discussing three things here: the repo, the client application(s) and the actual package layout. I see how this can become a bit confusing when the context keeps switching.

See answer to reply below.

I really believe there's no point in having an IRC meeting right now, because having a meeting to talk about everything is pointless. It's the very same as talking about nothing.

Please have an RFC first, that we can review before talking about it.

On a side note, I believe IRC meeting are probably the less efficient method to achieve something, unless you have a clear agenda pre-defined.

Well, I did not think it would be necessary to create a specification / RFC at this point, as nothing is fixed yet. I really did not expect this amount of confusion, as the whole discussion can be reviewed at any given time. But I will respect this and consider your request. I don't think I will be able to put up anything close to a spec before Sunday, though.

I think a meeting would help because we'd all be there at once, talking to each other, and we'd be able to steer the discussion into specific directions instead of randomly replying to posts.

Edited by keks
Link to comment
Share on other sites

The meta-file would then point to a patch-set in addition to the original content. This way we would not have to create "temporary" packages, could simply bump up the version number of the original package and still would not have to mess around with original source release files. We'd also not have compatibility issues as this would actually be treated as a normal release.

Our release B9 R4.0c+ksprepo-1 would then actually point to the download of B9 R4.0c, but additionally point to a set of community patches which would be applied after the original release file.

From what you write, from the client's point of view it would look the same: download the original release, then download the patches/fixes and override the necessary files :| I don't understand you. I don't think a patch for B9 4.0 that makes the mod work on KSP 0.23.5 is a temporary package.

Link to comment
Share on other sites

I don't understand you. I don't think a patch for B9 4.0 that makes the mod work on KSP 0.23.5 is a temporary package.

As far as I understood you, you proposed to create a community-fix package overriding parts of the "original" package. So in your example there would actually be two packages then:

The original R4.0c package and an R4.0c-community-fix package. Once v5.0 gets released, the R4.0c-community-fix package must be removed so it does not conflict with 5.0.

That's the point I am concerned about: creating packages which will have to be removed later on, because you'd need to mark them somehow as being meant for removal at a later point.

Your workflow would look something like this:

  • install R4.0c
  • install R4.0c-community-fix-1
  • remove R4.0c-community-fix
  • update R4.0c to 5.0

where as what I proposed would look like this:

  • install R4.0c
  • update R4.0c to R4.0c+community-fix-1
  • update R4.0c+community-fix-1 to 5.0

The difference is that, from the package's point of view, R4.0c+community-fix-1 is just another regular version released, following the regular update process, instead of an independent package which would have to be removed at a later point.

From that what you write from the client point of view would look the same: Download original release and then download patches/fixes and override the necessary files :|

Yes, it works (almost) exactly the same way your approach does, but does not create a new package. It could instead be realized by simply adding a patch-set to the release tag in addition to the original release :)

Link to comment
Share on other sites
