I read the article but I didn’t check out the platform yet. Thought it might be useful for my fellow autistic people.

      • ZILtoid1991@kbin.social

        AI has bias issues. While humans can be aware of their biases and course-correct, AI can’t, and that’s before you even get to all the biased data it was trained on.

        • Haui@discuss.tchncs.deOP

          Ok, I understand. As someone who has worked with AI and in hiring in the past, I feel like (specifically ND-focused) AI can’t do a worse job than traditional recruiting (which is also increasingly done with AI). But I might be wrong. Then again, so could you. Have a good one. :)

          • Pirky@lemmy.world

            Another thing to add: it can be difficult for AI to “unlearn” things. So if it learned a bias it shouldn’t have, getting rid of it will be particularly hard.

          • sky@codesink.io

            It absolutely can do a worse job, and be more biased. Not to mention Sam Altman is backing it? Yeesh. I’m good.

            • Haui@discuss.tchncs.deOP

              Can you somehow prove that? I don’t see how “absolutely” reinforces your claim. If conventional hiring weren’t a bag of dicks, hiring companies (which are shit as well) wouldn’t make billions in revenue.

              But I don’t recognize Altman. The name sounds familiar. I might need to check him out.

              • 520@kbin.social

                AI can absolutely screw up these things as bad or worse than any other program.

                AI sucks at nuances it isn’t explicitly trained on. That’s how you get AIs at eating disorder charities recommending things like 500-calorie daily deficits (this actually happened).

                AI might be able to get a technically accurate translation, but can’t always tell what’s culturally offensive or colloquially given a new meaning.

                For example, in Spanish “soy” means “I am”, and “caliente” means “hot”. What do you think “Soy caliente” means?

                Well, if you got ‘I am hot’, Google Translate will actually agree with you… but it doesn’t mean that at all. What it actually means is ‘I am horny’.
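A toy word-by-word gloss (pure Python, with a made-up two-word glossary) shows why a literal mapping can’t see the colloquial meaning:

```python
# Toy word-by-word "translator": maps each Spanish word to a literal
# English gloss, with no awareness of idiom or colloquial usage.
# The glossary is a hypothetical stand-in for a real dictionary.
GLOSSARY = {"soy": "I am", "caliente": "hot"}

def literal_translate(phrase: str) -> str:
    """Translate word by word from the glossary, passing unknown words through."""
    return " ".join(GLOSSARY.get(word, word) for word in phrase.lower().split())

print(literal_translate("Soy caliente"))  # literal gloss: "I am hot"
# ...while the colloquial meaning ("I am horny") is invisible to this approach.
```

This is of course far cruder than what Google Translate does, but it illustrates the failure mode: any system that optimizes for literal correspondence has no channel through which the colloquial reading can enter.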

                • Haui@discuss.tchncs.deOP

                  Yeah, I get it. Pretty rough around the edges, no doubt. I still don’t think this makes “AI-powered” or “AI-assisted” recruiting worse than conventional recruiting. That’s all I’m saying. It’s also a buzzword that gets used for a lot more than it’s worth, btw.

                • yetAnotherUser@feddit.de

                  While not Google Translate, it’s a more advanced translation service.

                  AI is surprisingly advanced, and there’s a lot more to translation than you might think. But you’re right: AI absolutely sucks at nuances it isn’t trained on. That’s pretty much the reason ChatGPT and other “general-purpose AIs” will always perform (much) worse than specialized ones.

              • PsychedSy@sh.itjust.works

                I don’t know if there’s a great way to compare AI against worthless recruiters, so finding something objective might be difficult. AI is going to pick up on the systemic biases present in reality, and I’m not sure you can sanitize the data enough to avoid that.
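A minimal sketch of that sanitization problem, using entirely made-up data: even after the sensitive attribute is removed from the training set, a correlated proxy feature (here a hypothetical “neighborhood” field) lets even a trivial model reproduce the historical bias.

```python
from collections import Counter, defaultdict

# Hypothetical training data: the sensitive attribute has been stripped out,
# but "neighborhood" remains as a proxy that correlates with it (the
# disadvantaged group historically lived mostly in "n1" and was rarely hired).
training = [
    ("n1", "reject"), ("n1", "reject"), ("n1", "reject"), ("n1", "hire"),
    ("n2", "hire"), ("n2", "hire"), ("n2", "hire"), ("n2", "reject"),
]

# "Train" the simplest possible model: majority outcome per neighborhood.
by_neighborhood = defaultdict(Counter)
for neighborhood, outcome in training:
    by_neighborhood[neighborhood][outcome] += 1

def predict(neighborhood: str) -> str:
    """Predict the majority historical outcome for this neighborhood."""
    return by_neighborhood[neighborhood].most_common(1)[0][0]

print(predict("n1"))  # "reject" -- the historical bias survives sanitization
print(predict("n2"))  # "hire"
```

Real models are far more sophisticated, but the mechanism is the same: removing the sensitive column doesn’t help when other features encode it.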

                • Haui@discuss.tchncs.deOP

                  I agree that this is unfortunate. I think what I’m trying to say is that we scrutinize this in AI while recruiting at most companies is trash, and most people familiar with AI have no idea how bad recruiting actually is.

              • Black AOC@lemmygrad.ml

                But I don’t recognize altman.

                OpenAI’s founder/CEO… So yeah, I’ll be taking two or three large steps back from this idea.

        • inspxtr@lemmy.world

          I think the bias issues will always be there, but they are usually worse, detected less often (or detected late), and exacerbated when the people working on the original problem do not suffer from those issues themselves, e.g. when most of the people working on facial recognition are white and male.

          While I have my reservations about AI technologies, I think this is a worthwhile effort: the people encountering these issues work to identify and address them, and in this case they lead the effort rather than just consulting on it.

          They can lead the effort on collecting new data, adopt new ways of looking at data, and define metrics in a manner more appropriate for the target audience. Based on the article, I think they are doing this.

          • 520@kbin.social

            How would a bias towards NDs work?

            ND is a wiiiiide spectrum of conditions, and even within those conditions, you have subsets of quirks that are rare if not unique to a person.

            How would an AI know how to tailor its operating methods and communication?

              • 520@kbin.social

                By being trained on how that wide range of NDs communicate, what their symptoms are, how medical professionals diagnose them, etc.

                That’s the problem. The standard for NDs in terms of how they communicate can be literally anything that isn’t typical of an NT. Same with symptoms, and even medical professionals can often fuck up diagnoses.

                NDs tend to recognize other NDs; if we can do it, an AI sure as hell can.

                There are plenty of NDs that are very good at masking. To the point where no one would be able to tell just by looking at them.

                And an AI doesn’t have the same data you do. You can look at someone’s body language, listen to their voice, etc. Any privacy-respecting AI will have to go from written language alone. And have fun adapting your model to other languages!

    • Haui@discuss.tchncs.deOP

      I’m very curious how people choose the current shitty system over something made by autistic people for NDs, just because AI or [person] is involved. You folks probably have no problem finding awesome jobs. Surprise: a lot of us are getting pretty fucked by the current system.

      • Franzia@lemmy.blahaj.zone

        AI has been proposed for, and has failed at, the task of making shitty systems better. This makes me very skeptical of all AI projects.

        Mentra uses AI to parse through job descriptions to make sure they are cognitively accessible and broken down in a consistent format that is not exclusionary.

        This is probably not a big deal, tbh. AI is already used everywhere in job descriptions and resume parsing.

        How does this affect disclosure? There could be a problem in the future around doctor-confirmed autism versus self-diagnosis, and the labor rights and protections that can come with disclosing.

        Mentra’s website UI and UX are incredible. What a great and easy-to-read experience.

  • r3df0x ✡️✝☪️@7.62x54r.ru

    The thing that concerns me is that this could be used for discrimination, especially if it’s accessible by search engines. I’d put everything behind a registration wall. I would never encourage an autistic person to post their diagnosis online because there’s too much risk of prejudice.