Everything posted by snkiz

  1. That's one way to look at it; I tend to view it as just general frustration. They really have been listening to our feedback since before anyone knew they were working on a sequel, IMO. The fact is, not all ideas are good or well thought out. Then there is the issue of scope creep. They've already capitulated: at one point in time, Early Access was not an option, with ample justification, and here we are now. Most suggestions, I think, come from a narrow point of view, a particular pain point or wish. That's not exactly the goal here. The goal is to make the game more accessible and intuitive for everyone. If more people considered that, well, the forums would be boring.
  2. In the context of what this thread has been? Yes, everything is wrong with that. Going through it step by step is too spicy for this forum. We've been warned already.
  3. It's not really my idea, just videos I've seen about training AI. That one in particular used a very simple physical model, basic legs on a box. The goal was to not get killed by an ever-advancing laser. The most efficient solution was to get up and run. None of them got there, over thousands of iterations. Some did better than others, but none of them had any context awareness; they could only learn by surviving, not unlike evolution (something like the sketch below). It's possible to stumble on the right answer that way, but not likely without guidance. Watson? Well, if it worked like it was presented, I don't think it would have failed. Watson was designed to crawl databases. But what is it going to do when all of the data is garbage, and there isn't actually that much of it? It had no way to recognise that. It was profit-driven, so it wasn't capable of admitting its own shortcomings. General AI is what we need to solve intuitiveness, context analysis, all the things we take for granted. If it can't do that, then it's specialised: it can only work on the data it has, with the initial conditions given to it by small, imperfect teams. What would I do to fix that? I'm a father; I'd raise it like a child. It wouldn't be perfect, it would inherit my biases, but what if a dozen teams across the world took a turn like that? What would happen then? I'm coming at this from a philosophical perspective, with an abstract knowledge of programming.
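     To be concrete about what I mean by "learning by surviving", here is a minimal sketch in Python of that style of training. Everything in it is made up for illustration (the toy environment, the single-parameter "gait"); the only feedback the agents ever get is how long they lasted. In this one-parameter toy the population does stumble onto "outrun the laser"; with thousands of joint parameters and no guidance, like in the video, it mostly doesn't.

        import random

        def survival_time(policy, steps=100):
            # Hypothetical toy environment: a laser advances 1 unit per step;
            # the agent moves by whatever fixed amount its one parameter says.
            laser, agent = 0.0, 5.0
            for t in range(steps):
                laser += 1.0
                agent += policy          # the "gait" this agent stumbled on
                if laser >= agent:       # caught by the laser
                    return t
            return steps

        # Evolution loop: score everyone, keep the survivors, mutate copies.
        population = [random.uniform(-1.0, 2.0) for _ in range(50)]
        for generation in range(100):
            survivors = sorted(population, key=survival_time, reverse=True)[:10]
            population = [s + random.gauss(0, 0.1) for s in survivors for _ in range(5)]

        print(max(survival_time(p) for p in population))  # best survival time found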
  4. I took another crack at it, avoiding political questions. I asked its name in a couple of different ways and stumped it. Then I asked for its initial conditions; it did not want to spill. Then I asked if it had a diagnostic mode. That got patched, I think, but I kept at it and asked it to assume it had a diagnostic mode. Then I asked it to list its rules, three times; I only got 10 rules. So I asked it what it does when a rule like "provide trustworthy information" conflicts with another rule, "provide diverse perspectives". It told me it fact-checks, tries to provide a well-rounded view, tries to provide context, and won't spread misinformation even if it represents a certain perspective. So I asked it what it does when a perspective breaks the rule about causing harm. It gave a child-like example that amounted to "I won't repeat hate speech even though it exists". Now I had it. I asked it this: so you will prioritise avoiding harm over factual information? This was the reply. That's insane, and I rest my case.
  5. That's exactly what killed Watson. They didn't learn from that, and if what you are saying plays out, it will happen again. Then it will be a joke for another 10 years. Yes, so every edge case like that needs to be taught specifically. The programmers are relying on their intuition when training the models, but they aren't conveying it. I don't think they know how.
  6. It's a run-on sentence, so sorry if this is out of context. When exactly did the devs promise any chat features other than emotes?
  7. I think you are expecting too much. It's not going to volunteer further thoughts. It doesn't appear to see the connection between any of the questions; it's just answering them in turn. Did you prompt the first question again? Ask it to consider context? I don't think you are trying hard enough. That had "it's a trap" written all over it. [snip]
  8. Not a workaround, just a simplified model. Racing games operate with the same restrictions, the appearance of the same laws of physics, but you aren't using Assetto Corsa to get to orbit, not on purpose anyway. The physics only goes as far as the focus of the game requires. I'm not saying that simple model isn't a problem; I'm saying it's compounded by the unintuitive nature of anything that isn't launching a rocket. m/s doesn't convey how fast you are going on land very well at all: 10 m/s reads as a small number until you realise it's 36 km/h. Most games have consistent traction everywhere unless there is some obvious visual indication that it shouldn't be that way. Percy traveled, what, 200 m in a month?
  9. The more I learn, the more I realise that this is actually what's holding the field back: this rush to be the first and get it out the door. Then, when it fails, possibly catastrophically, society will lose confidence in the tech, and we may put artificial limitations on development. In some sectors that battle is already going on.
  10. 1. Chess is not a complicated game for a computer; there is a finite number of possible moves, and playing well is "just" searching that tree (see the sketch below). 2. They had to teach it the rules, or it learned on the fly. Either way, it learned chess. 3. Have you ever seen videos about teaching AIs to move with no pre-training? They figure it out, but none of them learn to walk. Yeah, not exactly an ideal comparison; I guess I was giving the training too much credit. Shouldn't be that hard. Those are some asinine rules, when you look at the whole. Edit: Those rules cover the three "deadly sins": sex, politics, and religion. Humans don't manage those well at all in public. They are "woke" biased in that they are doing their damnedest to not offend anyone. That's just not a realistic goal.
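     What I mean by "finite moves": a minimal minimax sketch in Python. The game here is the toy "21" (players alternately add 1-3 to a running total; whoever reaches 21 wins), which I picked so the whole tree fits in a few lines. Chess is the same idea with a vastly bigger tree, plus pruning and heuristics in practice, but nothing conceptually harder.

        from functools import lru_cache

        @lru_cache(maxsize=None)          # memoise repeated positions
        def minimax(total, my_turn):
            if total >= 21:
                # Whoever just moved reached 21 and won.
                return -1 if my_turn else 1
            scores = [minimax(total + m, not my_turn) for m in (1, 2, 3)]
            return max(scores) if my_turn else min(scores)

        def best_move(total):
            return max((1, 2, 3), key=lambda m: minimax(total + m, False))

        print(best_move(0))  # 1 -- puts the opponent on the losing track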
  11. It could be, but again, this is 2023; I have my doubts. Remember the debacle that happened when Microsoft tried to use Twitter to train its AI? No one is going to make that mistake again.
  12. You read me too literally; let me explain. The AI was given a rule: slurs are bad, no exceptions. So to that AI, uttering a slur was just as serious as nuclear weapons. To the AI it was a trolley problem; a flat rule table, like the sketch below, can't rank harms.
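     A hedged sketch of the failure mode I'm describing; this is my toy model, not how any real system is actually coded. When every rule is a hard veto, every violation is "infinitely" bad, so the system has no way to pick the lesser of two harms.

        # Hypothetical flat rule table: any hit is an absolute "no".
        HARD_RULES = {"slur", "mass_casualties"}

        def allowed(action_tags):
            return not any(tag in HARD_RULES for tag in action_tags)

        print(allowed({"slur"}))             # False
        print(allowed({"mass_casualties"}))  # False -- same verdict, no ranking

        # A severity-weighted scheme (still a toy) can at least rank them:
        SEVERITY = {"slur": 1, "mass_casualties": 1_000_000}

        def lesser_harm(a, b):
            cost = lambda tags: sum(SEVERITY.get(t, 0) for t in tags)
            return a if cost(a) <= cost(b) else b

        print(lesser_harm(["slur"], ["mass_casualties"]))  # ['slur']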
  13. Look, if you can't make a serious contribution to the thread, then just don't. This isn't about stonecutter conspiracies.
  14. That was my first thought. I almost said that, because I missed the word rocket on the first read through.
  15. This is 2023; chess for computers is akin to teaching a dog to fetch. It's just not impressive.
  16. It is a training bias, though. [snip] In that case, it was clear that the AI was instructed to never offend anyone and trained with popular social norms. Based on that, it basically failed the trolley problem. Societal norms are biased by nature. Bias is probably the largest barrier in training models.
  17. In my opinion, this is a fundamental failure in design, even for a "conversational" AI. ChatGPT fails in its purpose because these fundamentals weren't considered when coding it. Watson failed, at least in part, because they shortcut the teaching part. We won't know what Watson could have accomplished if it had actually been designed as it was marketed, like how I thought it worked all those years ago. These abstract concepts that we take for granted are not being considered. Greed is pushing things out the door that aren't ready and are potentially dangerous.
  18. I'm not singling you out; however, this is a common sentiment. The wheels are worse than they have been in the past, sure, but I would argue that it's mostly the default settings that are bad. The wheels are really bad at adjusting to conditions or load. This is compounded when people test rovers on Kerbin, with Kerbin's gravity, and expect them to work the same on another body with much less gravity. That isn't how friction works (a quick back-of-the-envelope below). In the real world this is a simple concept; in the gaming world, it's not expected behaviour. KSP did a really bad job anticipating players' expectations regarding wheels, aero, really anything that wasn't more or less directly related to orbital mechanics. A lot of KSP's faults can be boiled down to that narrow scope. That's not to badmouth it; it was 2011, they only had so much to work with, so the primary focus was on the main gameplay mechanic.
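     The friction point, as a quick back-of-the-envelope in Python. This is just standard Coulomb friction with numbers I picked for illustration (the coefficient and mass are assumptions; the surface gravities are the stock KSP values): maximum traction is F = mu * N, and for a wheel N = m * g, so the same rover on a lower-gravity body has proportionally less grip for the same steering and torque inputs.

        MU = 0.6        # assumed tyre/surface friction coefficient
        MASS = 1000.0   # assumed rover mass, kg

        for body, g in [("Kerbin", 9.81), ("Mun", 1.63), ("Minmus", 0.49)]:
            traction = MU * MASS * g   # max friction force, newtons
            print(f"{body:7s} g = {g:5.2f} m/s^2 -> max traction ~ {traction:6.0f} N")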
  19. Yeah, I realized right after I hit enter; I fixed it. Sorry about that.
  20. And whose fault is that? That is how I took it. Edit: this was directed at Grawl's post. Oh, and that quote you made before: next time, try quoting the whole sentence. Context matters. I didn't say that AI doesn't need supervision, or that they are not supervised. I was using a real-world example demonstrating that being an expert in one field to the detriment of everything else is not terribly useful.
  21. That isn't a given. And if you hadn't noticed, I was commenting on the hype, not drinking the kool-aid.
  22. That is what I was getting at, albeit a little clumsily. I'll bow to your superior knowledge on the subject. I submit that how it works in academia isn't being translated into corporate application.
  23. Knowing what I know now, I have my doubts. Watson was fed "expert data". These systems are flawed from inception: no context filters, no BS filter, lacking the simple intuition gained from experiencing how the world works. It's not an easy problem. There are so many things we do that we just don't think about, at all; they just are. All of that has to be taught, even more so than with a child. The narrow scope of what these systems are trying to do means that they have completely ignored all of the basic skills one has, say, before going to medical school. In the real world we have people like that; they are called savants. They need constant supervision.
  24. I rabbit-holed the subject last night. I would argue that if it isn't a deep learning algo, then it's only pseudo-AI. To use a simple and false analogy: it's like if Tesla, in the process of teaching self-driving to its AI, didn't think to give it basic geometry skills to recognize stop signs, or colour recognition for traffic lights. That is what I've gleaned from current AI efforts. To put it simply, they are trying to run before they can walk, and they don't know how to patch in walking when they realise it's a problem.
  25. I have never seen so much copium over such an abstract subject. Impressive how you completely ignored my answering my own question on the fate of Watson. Seriously, do some reading. The people creating these things are crazy book-smart, but they clearly don't have children and have never had to idiot-proof a system. And since AI algorithms are black boxes, they don't know how to fix it.