
Combining efforts on proper mod management framework / tools / platform


keks


I see the discussion has died? Recently I've been busy trying to force Golang and node-webkit to cooperate xD

Today I read the documentation for the GitHub API. It's very easy to get a list of files in a repository, and easy to download files in their raw versions.

GitHub is a really great place for a package repository :)

I wonder... The GitHub API returns JSON. Personally, I prefer YAML, but perhaps packages should be in JSON too? Then tool programmers would not have to worry about two formats. It's not a big deal, I know, but it is better to simplify everything as much as possible.
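For what it's worth, the GitHub contents API (GET /repos/{owner}/{repo}/contents/{path}) returns a JSON array where each file carries a download_url pointing at the raw version. A minimal Go sketch of decoding that response; the payload below is canned sample data, a real client would fetch it from api.github.com:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RepoEntry mirrors the fields of the GitHub contents API response
// that a package tool would care about.
type RepoEntry struct {
	Name        string `json:"name"`
	Type        string `json:"type"` // "file" or "dir"
	DownloadURL string `json:"download_url"`
}

// parseContents decodes the JSON array returned by the contents endpoint.
func parseContents(data []byte) ([]RepoEntry, error) {
	var entries []RepoEntry
	err := json.Unmarshal(data, &entries)
	return entries, err
}

func main() {
	// Canned response; a real client would GET
	// https://api.github.com/repos/<owner>/<repo>/contents/
	sample := []byte(`[
		{"name": "ExamplePackage-1.0.json", "type": "file",
		 "download_url": "https://raw.githubusercontent.com/TeddyDD/ExampleKSPrepo/master/ExamplePackage-1.0.json"}
	]`)
	entries, err := parseContents(sample)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name, e.DownloadURL)
	}
}
```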

Edit:

It's a slightly expanded version of your sample package. In my opinion, it contains all the information necessary to install the mod.

It can be hosted together with the mod, but that's not required.

http://pastebin.com/gnK7Yf48

Edited by TeddyDD
Link to comment
Share on other sites

As it looks like there is no interest in proper mod/package management for KSP, I will not create such a tool.

Maybe I will fiddle together something for myself, but given the lack of interest I do not plan to release such a tool to the public any time soon.

** insert random rant about some stubborn people in here **

I still think a repository-like structure (centralized or de-centralized - does not matter) is the way to go.

It works pretty well for other games and projects (see Six-Updater for ArmA, APT for Debian) as it makes installing AND (even more important) maintaining/updating mods as easy as possible.

TL;DR: Anyway. Find some developers and volunteer maintainers and I'm back in. Otherwise, I'm done here.


I know what you mean :)

In any case, I will continue to write such a tool, even if just for myself. I hope I won't make any terrible design and coding mistakes (I am a visual artist, not a programmer xD). At least I will learn something. We'll see what comes up. BTW, thank you. This topic motivated me to work. It is nice to know that someone thinks similarly to me :)

EDIT:

https://github.com/TeddyDD/ExampleKSPrepo

So far so good.

Edited by TeddyDD

Unfortunately guys, that's the way it is here, and I guess it was the same when Debian was getting started: people are sceptical (fair) and conservative (standard human issue :) ), so without anything to show, no one will move forward.

And I think lots of mods are purely selfish creations. Who can really stand up and claim "I did it for the community!"? Not so many, judging by their behaviour regarding modifying, improving, and redistributing their creations.


Working on this :)

Have you guys considered using something already existing?

I'm an Arch Linux user, and that distribution provides a complete infrastructure for the Arch User Repository (AUR) where people can contribute "packages" (in fact "metadata" that will be used to build and install packages).

It looks a lot like what you are trying to achieve here, with a web interface, a complete package description with dependency information, and an RPC interface using JSON that clients can use to download and install "packages".

It also comes with a complete user management system, a voting system (for popular packages), and a flagging system for obsolete packages. Instead of reinventing the wheel, adapting the AUR software into a "Kerbal User Repository" (KUR?) would be simpler, and effort would only be needed to provide clients.


At the beginning I thought about adopting npm or Bower. I'm afraid these solutions are not very well suited for KSP. The AUR is a great system, but it would be hard to adapt it to our needs, and in any case, I do not know how to do it. Also, these solutions require a server. The idea I'm working on requires only a repository on GitHub.

If anyone has any ideas on how to improve what I have created so far go here: https://trello.com/b/28wWVbaS/kerbal-packages-system

Or here: https://github.com/TeddyDD/ExampleKSPrepo/issues?q=is%3Aopen+is%3Aissue

Or just write here, it doesn't matter :) Maybe we can create something together.


And these solutions require a server. The idea I'm working on requires only a repository on GitHub.

You are right. If it is possible to build such a mod manager without an external server, then it should be done that way. The simpler, the better :)


I think some of the features you are talking about (voting, flagging, download statistics) can be implemented later. Somehow :)

Progress: the initial version of the tool that generates packages from a zip file:

- reads the mod's name and version (usually)

- generates the checksum

- looks up the mod's forum thread

I'm going to add automatic detection of the file structure in the zip archive.

Sample output (ignore the checksum, it was generated from a dummy file): http://pastebin.com/AZhmbkvW

The maintainer must check the generated file and add some information, and the package is ready.
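Two of those generator steps can be sketched in a few lines of Go. The file-name pattern in guessNameVersion is my own assumption (archives are named inconsistently, which is exactly why the maintainer has to review the result), and checksum just takes the SHA-256 of the archive bytes:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"regexp"
)

// nameVer is a guessed pattern for archive names like
// "ExampleMod-1.2.3.zip"; real mod archives are messier.
var nameVer = regexp.MustCompile(`^(.+?)[-_ ]v?(\d+(?:\.\d+)*)\.zip$`)

// guessNameVersion tries to split a zip file name into mod name
// and version, reporting whether the pattern matched at all.
func guessNameVersion(filename string) (name, version string, ok bool) {
	m := nameVer.FindStringSubmatch(filename)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

// checksum returns the hex SHA-256 of the archive bytes.
func checksum(data []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(data))
}

func main() {
	name, ver, ok := guessNameVersion("ExampleMod-1.2.3.zip")
	fmt.Println(name, ver, ok) // ExampleMod 1.2.3 true
	fmt.Println(checksum([]byte("dummy file")))
}
```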

Edited by TeddyDD

TeddyDD> I would add an additional "category" field to make filtering search a bit easier on the client side too. This should probably be a JSON array, to allow multiple categories (see curse.com).

I'm also not sure if the "copyright" field is really necessary, as it somewhat duplicates the license field. Or maybe you're referring to an "authors" field here? If so, renaming it might be wise. Overall, it looks good. I'm convinced this is the right step towards proper mod management.


Yep, I meant "authors". I'll change it today. Good suggestion.

The category field is also nice, but I think it would be better to call them "keywords" or "tags" (though tags could be confused with git tags).

Besides, I wonder if I should add a "conflicts" field. I do not know if it will ever be needed. What do you think?


Yep, I meant "authors". I'll change it today. Good suggestion.

The category field is also nice, but I think it would be better to call them "keywords" or "tags" (though tags could be confused with git tags).

Besides, I wonder if I should add a "conflicts" field. I do not know if it will ever be needed. What do you think?

I think the purpose of a "category" field would be to provide a pre-defined, limited set of options to choose from. A "keywords" or "tags" field might be overused and eventually become useless for search filtering purposes.

Yes, a "conflicts" field would be very useful. You might think about allowing "provides" and "replaces" fields too. All three would be optional, but implementing them would ensure that any complex mod management operation that might occur in the future remains possible (e.g., mod A requires dependency B, which can be fulfilled by plugin B or C, or mod X is renamed to Y).


A category field will be a problem. I do not know how the client application could search the packages in the repository.

We would need an index. Maintaining it manually would be difficult, especially if the repository is managed by more than one person.

At the moment, the only information available to the client is the list of names and versions of mods for a given KSP version. In this case, the client would have to download all the packages in the repository to search by categories or keywords.

I think users will have to look for mods on the forum. At least for now.

I'm looking for suggestions on how to solve this problem.

I like the ideas for the fields. I'll add them soon. So far we have: depends, optional, conflicts, replaces, provides. All of these fields are optional.
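For illustration, a hypothetical package file using the fields discussed so far might look like the following. All names and values here are made up for the example, not taken from the actual ExampleKSPrepo spec:

```json
{
  "name": "ExampleMod",
  "version": "1.2.3",
  "ksp-version": "0.24",
  "authors": ["SomeModder"],
  "license": "CC-BY-NC-SA",
  "description": "A made-up mod used to illustrate the metadata fields.",
  "keywords": ["parts", "propulsion"],
  "depends": ["ModuleManager"],
  "optional": ["Toolbar"],
  "conflicts": ["OldExampleMod"],
  "replaces": [],
  "provides": ["example-engine"],
  "download": "https://example.com/ExampleMod-1.2.3.zip",
  "checksum": "..."
}
```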


A category field will be a problem. I do not know how the client application could search the packages in the repository.

We would need an index. Maintaining it manually would be difficult, especially if the repository is managed by more than one person.

At the moment, the only information available to the client is the list of names and versions of mods for a given KSP version. In this case, the client would have to download all the packages in the repository to search by categories or keywords.

Yes. If you want to avoid using an external website and provide only the individual JSON files, I'm afraid there isn't any other solution than downloading all the files and reconstructing a local database.

I like the ideas for the fields. I'll add them soon. So far we have: depends, optional, conflicts, replaces, provides. All of these fields are optional.

I'd use "optdepends" instead of "optional". It would be much clearer what this field really is, since all of these fields are indeed "optional". It is also what Arch uses, so I'm already accustomed to it :P

Lastly, I'm wondering about the requirement of including the version in the file name. I understand now this is to make the version available to the client without downloading all the files first, but to be honest I believe you'll have to do that anyway. The version number in the file name duplicates the one inside the file, and also makes it harder to update the JSON file when the mod is updated. Instead of simply updating the JSON file through GitHub, you'll have to delete the file and create another one, making it harder to track changes in a single file and not taking advantage of git's ability to record only the difference between two changes.


Yes. If you want to avoid using an external website and provide only the individual JSON files, I'm afraid there isn't any other solution than downloading all the files and reconstructing a local database.

It would be enough to clone the repository and use git pull. But try telling a player that he has to install git :> No way.

I think the main goal for now is to facilitate installation and updates. We can focus on searching later.

If someone wants to write a client that pulls the whole repository, they can use the description field.

I'd use "optdepends" instead of "optional". It would be much clearer what this field really is, since all of these field are indeed "optional". It is also what Arch uses so I'm already accustomed to it :P

I already changed it once. I'll have to think about it.

Lastly, I'm wondering about the requirement of including the version in the file name. I understand now this is to make the version available to the client without downloading all the files first, but to be honest I believe you'll have to do that anyway. The version number in the file name duplicates the one inside the file, and also makes it harder to update the JSON file when the mod is updated. Instead of simply updating the JSON file through GitHub, you'll have to delete the file and create another one, making it harder to track changes in a single file and not taking advantage of git's ability to record only the difference between two changes.

The idea is not to delete old packages, only to add new ones. You may want an older version of a mod.

The version in the file name will be needed. The program will read the package version from the file name. The version stored inside the file is not necessary, but I think I'll leave it as an optional field.

Besides, I think we do not need the replaces field. Implementing it would be complicated, and it can be replaced with installation order. And the relationships between KSP mods are fairly simple.


@TeddyDD:

The problem I see here is that you are blindly jumping into development without carefully planning the requirements. That's the reason the other attempts failed, and so will yours unless you start to plan and, even more importantly, discuss your decisions with others.

That's the reason I wanted to find at least two or three developers to work with me. To discuss and get feedback.

You must remember that you are not creating an application for yourself, but for others out there who may (or most certainly will) not have the same requirements for your application as you do.

Instead of re-inventing everything, you should take a close look at the specification/documentation of a grown-up package management system and adopt its essential core functionality. Why? Because many (very skilled) people have already spent years of their time planning, developing and, even more importantly, fixing it. There's no need to run into the same mistakes and problems they did.

If you are willing to do this right, then I'm in.


That's precisely what I was doing. I started to write the specification, as I prefer to work on concrete things. It's a point of reference. If there is something you do not like, let's change it.

I started writing a manager to make it easier to determine the needs (and learn some Golang), but this is a secondary project.

Unfortunately guys, that's the way it is here, and I guess it was the same when Debian was getting started: people are sceptical (fair) and conservative (standard human issue), so without anything to show, no one will move forward.

I just want to have something to show. Even if the prototype is later abandoned in favor of a better solution.

Instead of re-inventing everything, you should take a close look to the specification/documentation of a grown-up package management system

I use this:

http://bower.io/docs/creating-packages/

https://www.debian.org/doc/debian-policy/ch-relationships.html

https://www.npmjs.org/doc/files/package.json.html

But I do not think that any of these systems could be used directly.

If you are willing to do this right, then I'm in.

I am willing to do anything, as long as the idea gets developed.

If you know how to do it better, then just do it.

If you have an idea of how we can discuss the package specification, I'm going to join in and help.


That's precisely what I was doing. I started to write the specification, as I prefer to work on concrete things. It's a point of reference. If there is something you do not like, let's change it.

Well, rapid prototyping is not always the way to go. It often leads to uncontrolled growth and unnecessary increases in complexity due to management failures. But that's my personal opinion. I personally prefer careful planning, discussions, specifications, more discussions, ... and at the point where nobody involved in the project has any questions anymore, I start developing a prototype.

Documentation and tests are another thing. We need to make sure we create documentation and tests as soon as possible, ideally directly after the first basic prototype is built.

I started writing a manager to make it easier to determine the needs (and learn some Golang), but this is a secondary project.

I just want to have something to show. Even if the prototype is later abandoned in favor of a better solution.

I'd also prefer not to start experiments with new languages, as that might (or most certainly will) lead to unexpected problems. As KSP is developed on Mono, I'd stick to that. That also eliminates additional dependencies on the user's side.

You should also take a look at dpkg as well as apt/aptitude.

- http://anonscm.debian.org/cgit/apt/apt.git/tree/

- http://anonscm.debian.org/cgit/aptitude/aptitude.git/tree/

- http://anonscm.debian.org/cgit/dpkg/dpkg.git/tree/

Also the http://www.six-updater.net/ project could be of interest to us.

I already mentioned that earlier, but I'd like to do so again. It's a widely established mod-management solution for the ArmA community.

Documentation is also available: http://www.six-projects.net/wagn/Six_Updater+Documentation

If you know how to do it better, then just do it.

That's the point. I do not want to 'just do it'. If I do it, I want to do it right.

I am willing to do anything, as long as the idea will be developed.

[...]

If you have an idea how we can consult the specification of the package I'm going to join and help.

Before blindly developing anything, we should sit together and carefully determine what we need and if there already is an existing solution to at least some of the problems we might find.

We should carefully analyze how other solutions work, what they've done right, and why they failed in the end.

We need to find out what developers need, what users need, and what we can do to make both of them happy.

That's a lot of discussion that needs to be done. And I'd really like to have a (open-minded and established) modder from this forums on board.

Edited by keks

Let me summarize the requirements we've got so far:

  • client software
    • easy installation
    • automatic, non-annoying self-update
    • easy mod management
      • easy installation of mods, either via a built-in search/catalog or by direct input of a mod URL
      • easy update of existing mods
      • easy (and complete) removal of existing mods
      • automatic dependency management
        • automatically install hard dependencies
        • notify the user of optional dependencies
        • notify the user of conflicts
        • auto-resolve dependencies/conflicts if possible
    • no additional dependencies
    • modular architecture, so it can be extended easily
  • developer tools
    • easy-to-use tools with no "magic" going on behind the scenes; that is, tools must be simple enough that developers can easily understand what they do and bend them to their needs
    • auto-generation of meta-data files (this one could be tricky...) that does not require modders to change the way they work right now
    • easy-to-use modgen/build/publish tool
    • good integration into IDEs
  • mod repository
    • de-centralized approach: if we suddenly disappear for some reason, someone else can easily take over, and modders may maintain their own repositories for their own mods
      • this requires clear rules, to avoid conflicts between repositories
      • file lists
      • version information
      • dependencies
    • easy to set up and maintain
    • easy integration into third-party tools (I'd love to see integration on kerbalstuff...)
  • hosting
    • host the complete mod where possible
    • host only meta-data where needed

Do you have anything to add to/remove from this list at this point?

PS: I absolutely do not want to look like some arrogant guy telling others what to do. That's not why I am here. All I want to do is discuss this idea and get feedback on it. I'm usually a very direct person, so if you think I'm talking rubbish, feel free to correct me :)

@TeddyDD: I've thought about the whole search issue, and I realized that not allowing search from the start would be a major mistake. The whole point of client software is to allow easy installation and upgrades. If the user can't install mods without browsing manually, he'd rather install mods manually too. Thus, an online index is required, which means you'll need to feed some database with the raw metadata. It also avoids clients having to implement this feature locally, which is certainly not the smartest way of handling it.

@keks: I agree with most of your last comments. However, I really believe your list of requirements is too complex for this project to be successful (why those developer tools? Why host complete mods again? Why allow external meta-data repos, when you can just use a single repo on GitHub?). I'd keep the set of features as minimal as possible, and build upon it if required. As I see it, there are 3 steps involved here:

* A meta-data format that can be easily written and maintained manually (through a dvcs like git). TeddyDD has covered this subject.

* A simple online index that can return JSON information to clients (or even other third-party websites!). Search and detailed information on specific packages are needed.

* Client software that uses that JSON information to download, check integrity, and install with proper dependency management.

The first two points are covered by software like the AUR. I'm currently taking a closer look at its code, but I'd say it does 90% of what would be required right now.

The major issue is that it actually requires direct uploading of "meta-data" files on the server. Direct git integration is planned, but not yet implemented. This will be done in the next version of the AUR.

Also, Arch Linux is a rolling release distribution, and as such, managing multi-versioned software doesn't really make sense there: the available software is always the "latest". However, the AUR allows uploading various versions under different names when the need arises. Those are mostly temporary issues anyway.

Ideally, we'd have an index and json interface online as soon as possible, allowing the work to be done on clients.


@TeddyDD: I've thought about the whole search issue, and I realized that not allowing search from the start would be a major mistake. The whole point of client software is to allow easy installation and upgrades. If the user can't install mods without browsing manually, he'd rather install mods manually too. Thus, an online index is required, which means you'll need to feed some database with the raw metadata.

We don't need a database for that; we can make use of git there. Every repository could have some kind of "index repository" pointing to the modules it contains via submodules. Just like the example repo I put up earlier does.

It also avoids clients having to implement this feature locally, which is certainly not the smartest way of handling it.

Why do you think so? A locally cached index would reduce load on the service side and would most likely also be faster.

You need to keep dependency-resolution in mind here. This can (at the current number of mods) easily require you to do a couple of hundred queries against your index to resolve a more complex dependency. Given a critical number of users and a popular mod update, this can easily lead to users DDoS'ing their own service :)

I'd rather have the backend only provide the application model (means data only) and let the client applications do the work.

@keks: I agree with most of your last comments. However, I really believe your list of requirements is too complex for this project to be successful
why those developer tools?

Because for this to be successful we need the developers on board. Right now there is almost no standard for mod development. There are few to no commonly used frameworks/libraries. Every developer reinvents the wheel over and over again, most likely because they simply do not know a solution already exists, or they see no easy way to contribute the changes they'd like/need to have in said solution.

A central repository PLUS a set of developer tools already containing the most commonly used libraries may help with that issue.

Why host complete mods again?

When hosting meta-data only, we cannot guarantee that the binary files will remain available over time.

Many developers here simply replace their mods with new versions, deleting the old ones. And even if they still serve the old releases, that's mostly limited to one or two predecessor versions only.

This would inevitably lead to a point where our repo contains 99% dead links to binary files which no longer exist.

By hosting the files in the repo itself, we (or the external repo maintainer) can guarantee the availability of all versions indexed in the repo at all times.

Why allow external meta-data repos, when you can just use a single repo on GitHub?

Because a central repository requires a central instance controlling everything. I (and most likely most of the forum here) do not want a single individual (or organization) to be in charge of every mod out there. By using a de-centralized approach, we can still have some kind of "main repository", but also enable individuals to host alternatives themselves.

A mod developer, for example, could host his own repository (as a git repo), maintaining and serving his own mods. The developer tools provided could aid him in this process, not requiring him to know anything about git at all.

In some cases we cannot (or do not want to) host the mod itself, for example because of license issues, or simply because of the high maintenance volume required to keep it up to date. In such cases, we can provide "meta-packages" only containing the source link to the binary files that are hosted elsewhere. But as already said, that could (and most certainly will) lead to broken packages because of dead links.

I'd keep the set of features as minimal as possible, and build upon it if required.

That's the plan ;-) The first step after planning should be a basic prototype providing only the core functionality. Over time, we add features to said prototype and see how they do. Once we have an (almost) fully functional prototype (that means uploading, downloading, updating, and searching mods), we can start planning and creating a proper application.

It's important to keep those (optional) features in mind, so we do not get to a point where we cannot easily implement a feature users would like to see because our application core does not support it without major changes. That's why I'd like to see a modular approach: to be able to easily add features later on. But for that, we first need to know what such modules may look like, and what kind of interfaces our core application needs to provide that will not require major breaking changes later on. Because one thing nobody likes is updates breaking current interfaces/behavior.

As I see it, there are 3 steps involved here:

* A meta-data format that can be easily written and maintained manually (through a dvcs like git). TeddyDD has covered this subject.

* A simple online index that can return JSON information to clients (or even other third-party websites!). Search and detailed information on specific packages are needed.

* Client software that uses that JSON information to download, check integrity, and install with proper dependency management.

The first two points are covered by software like the AUR. I'm currently taking a closer look at its code, but I'd say it does 90% of what would be required right now.

The major issue is that it actually requires direct uploading of "meta-data" files on the server. Direct git integration is planned, but not yet implemented. This will be done in the next version of the AUR.

Also, Arch Linux is a rolling release distribution, and as such, managing multi-versioned software doesn't really make sense there: the available software is always the "latest". However, the AUR allows uploading various versions under different names when the need arises. Those are mostly temporary issues anyway.

Ideally, we'd have an index and json interface online as soon as possible, allowing the work to be done on clients.

Try not to focus too much on the AUR. Git already provides almost everything we need. The only things we really need to care about are dependency management and efficient search.

As for the indexing service: as already stated, git can do this for us as well. For a convenient API for third parties, we could set up a simple GitHub page providing a JSON/YAML/XML API.

I want to mention Six-Updater here again. They provide a simple JSON API which you can easily query. I'd really like you guys to take a look at it, as it (IMHO) does everything right on the backend side (though the frontend has more than enough issues...)

Do you agree with me on these points or are there further questions / alternatives you want to discuss?

Edited by keks

Great. I wrote an extensive answer to your post but lost all the content just before I could send it.

I feel very lazy now, so I'll do the very quick version instead:

  • As long as you require developers to jump in to succeed, I believe the project is doomed to fail from the start. The current situation is the main constraint; we have to make it work with what we have now. Unless Squad jumps in and enforces a specific way to create/package mods, which I don't see coming any time soon.
  • I was referring to the recreation of the local index from the raw metadata files as "not the smartest way" of handling it, since a JSON interface and one single request are enough to get the required information for updating purposes. Not sure how you managed to get to a couple of hundred requests and a DDoS from there :confused: (you'd only need one request per level of dependencies across all the installed mods, which means about 3 if you have complex mods).
  • Central repository: yes, you can keep one central repo only. Since it will be using git anyway, a fork is possible anytime if "something goes wrong" on the human side. Everything else is unnecessary complexity.
  • Old broken links: why should we care about them? Squad allows downloading the two last KSP versions, and by its development model KSP is a moving target. Mods have to adapt or die. No point in supporting old versions of mods.
  • I had a look at Six-Updater, but the only relevant information I could find is that the license is problematic (CC BY-NC-ND). I haven't found what the YAML/JSON interface looks like. Could you provide a link here?
  • Overall feeling: there are two conflicting views in this thread: the "meta-data only" idea from TeddyDD, and the more complex whole-repository repackaging/hosting idea of yours. Obviously, I adhere to the former.


As long as you require developers to jump in to succeed, I believe the project is doomed to fail from the start. The current situation is the main constraint; we have to make it work with what we have now. Unless Squad jumps in and enforces a specific way to create/package mods, which I don't see coming any time soon.

I don't see it as realistic to maintain a couple of hundred mods with just a handful of people over a longer period. Of course, if we really wanted to, we could do that, but it would cause major delays in mod releases on the repository. IMHO the better way is to show mod developers how easy it can be to distribute their mods and how they can benefit from a "mod development kit" providing libraries covering common tasks, for example toolbar integration, module manager, logging, a resource API, etc... There are quite a few good libraries out there, but still people reinvent them over and over again.

We do not require the modders to jump in early, but it would help a great deal. I do not want to force them to do anything; I want to help them come together and make life easier for all of us. I already explained this a few posts earlier.

I was referring to the recreation of the local index from the raw metadata files as "not the smartest way" of handling it, since a JSON interface and one single request are enough to get the required information for updating purposes. Not sure how you managed to get to a couple of hundred requests and a DDoS from there :confused: (you'd only need one request per level of dependencies across all the installed mods, which means about 3 if you have complex mods).

That's not correct. Think about specific version dependencies and multiple mod updates at once. Mod A requires mod B version 1 and Mod C version 2. Mod B requires mod D which in turn is not compatible with mod C version 2, yet... and so on. When managing dependencies you can very easily create cycles or conflicts. Now think about a major update of KSP and a user like me that has about 100 mods active, all being updated to the new KSP release. 100 Updates with about 2 to 3 dependencies per mod ~> 250 dependencies to resolve PLUS intermediate dependencies, version conflicts, ... and all this with many users at once. There you got your DDoS :)

I also see absolutely no reason to run this on the service side, as clients can perfectly well resolve dependencies themselves, even more efficiently. Running it client-side also eliminates the need to provide a central "dependency resolution service" for each repo.

Central repository. Yes, you can keep one central repo only. Since it will be using Git anyway, a fork is possible at any time if "something goes wrong" on the human side. Everything else is unnecessary complexity.

Why do you think so? IMHO it really does not matter whether you have one repository or many, as long as they follow a common standard and do not create conflicts. One easy way to eliminate conflicts would be a strict hierarchy the user can define. That's exactly what APT does, for example: it works with multiple repositories following a common standard and uses user-defined priorities to decide which repository to get a package from and which ones to ignore. It has been working perfectly for me for years.
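The APT-style priority scheme described above reduces to a very small amount of client code. In this sketch (repository names and contents are made up for the example) the client walks the user's priority list and takes a package from the first repository that carries it:

```go
package main

import "fmt"

// pickRepo returns the highest-priority repository carrying pkg.
// priorities lists repositories from most to least preferred;
// index maps each repository to the packages it carries.
func pickRepo(pkg string, priorities []string, index map[string]map[string]bool) (string, bool) {
	for _, repo := range priorities {
		if index[repo][pkg] {
			return repo, true
		}
	}
	return "", false
}

func main() {
	index := map[string]map[string]bool{
		"main-repo":   {"ModuleManager": true, "Toolbar": true},
		"mirror-repo": {"Toolbar": true, "NicheMod": true},
	}
	priorities := []string{"main-repo", "mirror-repo"}

	repo, _ := pickRepo("Toolbar", priorities, index)
	fmt.Println(repo) // main-repo wins because it is listed first
	repo, _ = pickRepo("NicheMod", priorities, index)
	fmt.Println(repo) // only the mirror carries it
}
```

Because the hierarchy is strict, two repositories offering the same package never conflict: the higher-priority one simply shadows the other, which is the same behaviour APT's user-defined priorities give you.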

Could you please further explain your concern here?

Old broken links: why should we care about them? Squad allows downloading of the last two KSP versions, and by its development model KSP is a moving target. Mods have to adapt or die. There is no point in supporting old versions of mods.

That's not correct. There are many people out there maintaining multiple KSP installations alongside with different versions. For a more popular example, see "Scott Manley".

Also, maybe some feature is broken in the latest mod release and people would like to jump back to an older one. Another case: a user wants to install a mod A which is only compatible with mod B version 1, but the current release of mod B is version 2, while version 1 is still perfectly compatible with KSP. The user effectively has to downgrade mod B, which, in the case of an external host, may mean the user cannot install mod A at all because the old release of mod B is no longer available...

TL;DR: Because of compatibility/dependency issues :)
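The downgrade scenario above is exactly what a resolver has to cope with: old releases must stay downloadable so that a pinned constraint can still be satisfied. A minimal Go sketch (exact-version pins only, invented data; real tools would support version ranges):

```go
package main

import "fmt"

// bestVersion picks the highest available version that satisfies
// every installed mod's constraint. Constraints are exact pins here
// to keep the sketch short.
func bestVersion(available []int, constraints []int) (int, bool) {
	best, found := 0, false
	for _, v := range available {
		ok := true
		for _, c := range constraints {
			if c != v {
				ok = false
				break
			}
		}
		if ok && v >= best {
			best, found = v, true
		}
	}
	return best, found
}

func main() {
	// Mod B exists in versions 1 and 2, but installed mod A pins it
	// to version 1 - so the resolver must pick 1, which only works
	// if that old release is still hosted somewhere.
	v, ok := bestVersion([]int{1, 2}, []int{1})
	fmt.Println(v, ok)
}
```

If version 1 has vanished from an external host, `available` shrinks to `{2}`, the pin becomes unsatisfiable, and the install of mod A fails, which is the argument for hosting all indexed versions ourselves.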

I had a look at Six-Updater, but the only relevant information I could find is that the license is problematic (CC BY-NC-ND). I haven't found what the YAML/JSON interface looks like. Could you provide a link here?

I don't know if there is any public documentation available on the API. The time I used SU was around when ArmA II: OA was released - must be about 3 to 4 years ago... I was building an auto-updater for one of the larger communities' servers. As there was no Linux client available at the time, I simply reverse-engineered the Windows application and built a basic console client for the six-network.

Basically it consists of three parts:

  • a central API service which you can query
  • independent, so-called "networks" consisting of file servers mirroring each other
  • the file servers themselves

Disclaimer: things may have changed since then!

You can easily take a look at the API by analyzing the SU client's traffic in Wireshark. At least back then the traffic was neither encrypted nor signed - just a plain HTTP/JSON API.

(I do remember something about having to acquire some kind of ticket through the API though...)

Overall feeling: there are two conflicting views in this thread: the "metadata only" idea from TeddyDD, and the more complex whole-repository repackaging/hosting idea of yours. Obviously, I adhere to the former.

Actually the "meta-data" idea is something I already stated in my initial post and mention again and again in almost all of my posts here :) There is no conflict at all, and it is absolutely no problem to attach a binary file to a git tag on GitHub. That's what GitHub calls "release" and (according to GitHub Staff) is the preferred way to distribute binary content.

Could you please explain your reluctance to host the actual mod files themselves?

I mean, mods can still be hosted elsewhere. We'd simply be another mirror.

Edited by keks

By hosting the files in the repo itself, we (or the external repo maintainer) can guarantee the availability of all versions indexed in the repo at all times.

We can only guarantee the availability of mods with permissive licenses. The closed ones will disappear anyway. Open mods can be uploaded to Dropbox/Google Drive or Curse and retrieved from there (or from official GitHub releases). Using git submodules + GitHub releases will be terribly time-consuming and complicated. Who will be responsible for the dozens of forked repositories?

Why do you think so? A locally cached index would reduce load on service-side and would most likely also be faster.

Agreed. The more that can be done on the client side, the better.

Also, I think mod tools are awesome, but they do not have much in common with mod management.

I do not believe that any modder will join us and start using this system - not until it becomes popular among players.
