• iopq@lemmy.world · 2 months ago

    I have a suggestion: let’s not make people translate to English unless they’re learning English. I don’t want to be thinking about whether “I’m coming Friday” is correct grammar in English; I want to be thinking about my target language!

    • Cr4yfish@lemmy.worldOP · 2 months ago

      Thanks for the suggestion, I’ll definitely try to make the app as language-inclusive as possible!

      Also, sorry if the post title was too vague. The app is similar to Duolingo in terms of structure and the general idea, but it’s not specific to language learning; it’s supposed to cater to any subject, really.

      For example, I personally use it to study for my university subjects.

    • OsrsNeedsF2P@lemmy.ml · 2 months ago

      This app seems to be for generic courses on any subject, not just language learning, so someone could make a language course in the way you’ve described.

    • 𝒎𝒂𝒏𝒊𝒆𝒍@sopuli.xyz · 2 months ago

      Yeah, it’s a minor pet peeve of mine with Duolingo: my language doesn’t have or need articles like “the” or “a”, so I often forget them. It’s so annoying to fail because of such a minor thing, especially when their suggested English often looks terrible.

      • GravitySpoiled@lemmy.ml · 2 months ago

        In some languages that’s not a minor thing, because of grammatical gender. I mean, that’s a problem with the language that should improve, but for now you have to use the correct gender for good communication.

        • iopq@lemmy.world · 2 months ago

          We’re talking about, say, learning Spanish, and Duolingo be like “now translate this very long and overly specific sentence to English”.

          Then you end up trying to construct the English sentence even though you’re learning Spanish.

          Here’s an example where I think my sentence is perfectly fine, but it just expected a different word order. It expected me to put “If” at the beginning, but I didn’t notice it was capitalized.

          Korean doesn’t even have capital letters, so why is it doing some gotcha about English capitalization when I already know English?

            • iopq@lemmy.world · 2 months ago

              Yeah, I’m just testing it out. For a true Duolingo experience it would need fill-in-the-blank and audio questions.

              • Cr4yfish@lemmy.worldOP · 2 months ago

                “Fill in the blank” is now available; I just finished coding it.

                If you want to try it out, I created a new course, “Testing out new question types”.

              • Cr4yfish@lemmy.worldOP · 2 months ago

                Yeah, agreed, I’ll definitely implement that one.

                Right now I’m working on “match the cards”.

                Edit: For audio I’m not so sure how I would do it. I don’t think most people would record it themselves when creating a course, so I would need to generate it. Then you’d have the issue of correct pronunciation…
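
                One option that sidesteps recording and hosting audio entirely is the browser’s built-in speech synthesis. A minimal sketch using the Web Speech API (the function name and language code are just illustrative, and voice availability and pronunciation quality vary a lot by platform):

                // Speak a question aloud with the browser's built-in TTS.
                // Nothing has to be generated or stored server-side, but the available
                // voices (and their pronunciation) depend on the user's OS/browser.
                function speakQuestion(text: string, lang = "en-US"): void {
                    if (!("speechSynthesis" in window)) return; // no TTS support, skip audio

                    const utterance = new SpeechSynthesisUtterance(text);
                    utterance.lang = lang;

                    // Prefer an installed voice matching the course language, if any.
                    const voice = window.speechSynthesis
                        .getVoices()
                        .find((v) => v.lang.startsWith(lang));
                    if (voice) utterance.voice = voice;

                    window.speechSynthesis.speak(utterance);
                }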

  • 𝕸𝖔𝖘𝖘@infosec.pub · 2 months ago

    This is a really great use of LLMs! Seriously, great job! Once it’s fully self-hostable (including the LLM model), I will absolutely find space for it on the home server. Maybe using Rupeshs’ fastdcpu as the model and generation backend could work. I don’t remember what its license is, though.

    Edit: added link.

  • GravitySpoiled@lemmy.ml · 2 months ago

    That sounds cool! Is there already a release? If not, don’t rush it :)

    Edit: never mind. I just saw the website 😅

        • Cr4yfish@lemmy.worldOP · 2 months ago

          Hm, that’s very weird. I can’t replicate it, and I ran it through a random SSL checker website and it checks out there as well.

          Really not sure why that’s happening.

          • rmuk@feddit.uk · 2 months ago

            WFM. Looks like you’re using Let’s Encrypt, which is fine, and everything seems to be consistent. I think you’re good.

          • youmaynotknow@lemmy.ml · 2 months ago

            It’s a great-looking site at first glance (I haven’t signed up yet). I just sandboxed a browser and let it run without forcing HTTPS. The funny thing is that it still shows as HTTPS even with HTTPS enforcement disabled.

            I’ll take it for a spin this afternoon when I get back home (or on my phone when I get bored at the recital my wife is forcing me to go to 🤣🤣🤣).

  • AliasAKA@lemmy.world · 2 months ago

    Is there any interest in getting local models to run with this? I’d rather not use Gemini; then all the data could reside locally (and it wouldn’t require a login).

    I’d be happy to work on this, though I’m a Python developer, not a TypeScript one.

    • Cr4yfish@lemmy.worldOP · 2 months ago

      Yeah, good idea. It’s possible to do that with WebLLM & LangChain. Once LangChain is integrated, it’s fairly similar to the Python version, so it should be doable, I think.
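
      For reference, a rough sketch of what the in-browser route could look like with WebLLM (the model ID and the OpenAI-style chat API are assumptions based on @mlc-ai/web-llm, not something already wired into the app):

      // Sketch: run a model fully client-side with WebLLM.
      import { CreateMLCEngine } from "@mlc-ai/web-llm";

      async function generateLocally(prompt: string): Promise<string> {
          // Downloads the model weights into the browser on first use (model ID is an example).
          const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC");

          // WebLLM exposes an OpenAI-compatible chat completions interface.
          const reply = await engine.chat.completions.create({
              messages: [{ role: "user", content: prompt }],
          });

          return reply.choices[0].message.content ?? "";
      }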

      • AliasAKA@lemmy.world · 2 months ago

        Ah, interesting. Again, happy to help out if there’s anything I can contribute to. I can make a feature request on GitHub if there’s interest.

  • JustMarkov@lemmy.ml · 2 months ago

    Cool project! Are there any plans to release a mobile app in the future? I’m allergic to PWAs.

    • Cr4yfish@lemmy.worldOP · 2 months ago

      Thanks, haha. I’d love to develop a native app for it too, but this is a zero-budget project (aside from the domain). The Play Store has a one-time fee, so that’s €25 for Android, plus €8/month for the iOS App Store, just to have the app on there.

      In theory, I could just have a downloadable .apk for Android to circumvent the fee, but most people don’t want to install a random .apk from the internet. And I’m not developing a native app for, like, three people excluding myself (I’m an iPhone user).

      Soo, yeah, that’ll probably not happen :(.

      • Eyck_of_denesle@lemmy.zip · 2 months ago

        This post gathered a bit of traction, so hopefully more people will help out. F-Droid is a better marketplace for OSS than the Play Store, because people downloading from the Play Store tend to act a little entitled, especially towards OSS.

      • JustMarkov@lemmy.ml · 2 months ago

        In theory, I could just have a downloadable .apk for Android to circumvent the fee but most people don’t want to install a random .apk from the internet.

        You could’ve considered F-Droid.

        Soo, yeah that’ll probably not happen :(.

        It’s sad. Anyway, good luck with future development!

    • AliasAKA@lemmy.world · 2 months ago

      I personally love PWAs; why the hate for them? I think more apps should be PWAs instead.

      • JustMarkov@lemmy.ml · 2 months ago

        Native apps will always be better, imo. I think fewer apps should be PWAs. No offense to those who use them, though; it’s just my personal preference.

        • Cr4yfish@lemmy.worldOP · 2 months ago

          Fair opinion. Native apps do have some huge advantages, but also some disadvantages.

          I’ve coded both before (although way more PWAs), and with native you also run into platform issues unless you ship exclusively for one platform.

          PWAs have a huge advantage here, since they run the same everywhere, as long as the platform has a browser that isn’t Safari.

  • Silmathoron ⁂@floss.social · 2 months ago

    @Cr4yfish nice project 🙂
    I’m a bit worried about the AI part, though, as you’d want an app whose main purpose is “learning” to guarantee, if not the reliability of the material (since anyone can contribute), at least the reliability of the course generation process that it proposes.
    As far as I know, this is not possible with current generative AI tools, so what’s your plan to make sure hallucinations do not creep in?

    • Cr4yfish@lemmy.worldOP · 2 months ago

      Thanks. My general strategy for reducing hallucinations with GenAI is to not give it the task of making things up, but to have it work only on existing text; that’s why I’m not allowing users to create content without source material.

      However, LLMs will be LLMs, and after testing it a lot I’ve already found multiple hallucinations. I built in a reporting system, although only reporting works right now, not viewing reported questions.

      That’s my short-term plan for getting good content quality, at least. I also want to move away from Vercel AI & Gemini to a LangChain agent system, or maybe a graph, which should increase output quality.

      Maybe in some parallel universe this really takes off and many people work on high-quality courses together…
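
      To illustrate the “only work on existing text” idea from the first paragraph, this is the kind of system instruction that could be prepended to the generation call (hypothetical wording, not the app’s actual prompt):

      // Hypothetical grounding instruction: the model may only turn the uploaded
      // source material into questions, never add outside facts.
      const GROUNDING_SYSTEM_PROMPT = `
      You generate quiz questions strictly from the provided source document.
      - Every question and answer must be verifiable from the document.
      - If the document does not contain enough material, generate fewer
        questions instead of inventing content.
      - Do not add facts, names, dates or definitions that are not in the document.
      `;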

    • Cr4yfish@lemmy.worldOP · 2 months ago

      The UI mostly works offline once it’s loaded, due to aggressive caching. Downloading course content was on the initial roadmap, but I removed it since I wasn’t sure anyone would want the feature.

      Syncing is a real pain in the ass, but I’ll implement it if at least a couple of people want it.
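
      For context, the “works offline once loaded” behaviour is what a PWA gets from a cache-first service worker. A minimal sketch of that pattern (the cache name is a placeholder; the app’s actual service worker may differ):

      // sw.ts - cache-first fetch handler: serve from cache when possible,
      // otherwise fetch from the network and keep a copy for offline use.
      const CACHE_NAME = "app-shell-v1"; // placeholder cache name

      self.addEventListener("fetch", (event: any) => {
          event.respondWith(
              caches.match(event.request).then(
                  (cached) =>
                      cached ??
                      fetch(event.request).then((response) => {
                          const copy = response.clone();
                          caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
                          return response;
                      })
              )
          );
      });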

      • Auster@thebrainbin.org · 2 months ago

        I don’t know how much of a subset I am, but I still use dictionary software from the Windows 95–2000 era, and Android software on a completely offline, vanilla VM, partly because my internet randomly goes bad, and partly because I’m neurotic about digital content vanishing once support ends.

    • Cr4yfish@lemmy.worldOP · 2 months ago

      I use Gemini, which supports PDF file uploads, combined with structured outputs to generate course section, level & question JSON.

      When you upload a PDF, it first gets uploaded to an S3 bucket directly from the browser, which then sends the filename and other data to the server. The server downloads that document from S3 and sends it to Gemini, which streams JSON back to the browser. After that, the PDF is permanently deleted from S3.

      Data-privacy-wise, I wouldn’t upload anything sensitive, since I don’t know what Google does with PDFs uploaded to Gemini.

      The prompts are in English, so the output language is English as well. However, I’ve actually only tested it with German lecture PDFs myself.

      So, yes, it probably works with any language that Gemini supports.

      Here is the source code for the core function of this feature:

      // Imports assumed from the Vercel AI SDK; the remaining helpers
      // (downloadObject, createLevelPrompt, pdfUserMessage, pdfAttatchment,
      // htmlUserMessage, multipleLevelSchema) are project-local.
      import { createGoogleGenerativeAI } from "@ai-sdk/google";
      import { streamObject, type UserContent } from "ai";

      export async function createLevelFromDocument(
          { docName, apiKey, numLevels, courseSectionTitle, courseSectionDescription }:
          { docName: string, apiKey: string, numLevels: number, courseSectionTitle: string, courseSectionDescription: string }) {

          const hasCourseSection = courseSectionTitle.length > 0 && courseSectionDescription.length > 0;

          // Step 1: Download the PDF and get a buffer from it
          const blob = await downloadObject({ filename: docName, path: "/", bucketName: "documents" });
          const arrayBuffer = await blob.arrayBuffer();

          // Step 2: call the model and pass the PDF
          //const openai = createOpenAI({ apiKey: apiKey });
          const google = createGoogleGenerativeAI({ apiKey: apiKey });

          const courseSectionsPrompt = createLevelPrompt({ hasCourseSection, title: courseSectionTitle, description: courseSectionDescription });

          const isPDF = docName.endsWith(".pdf");

          // Build the user message: either the PDF itself or the extracted HTML text.
          const content: UserContent = [];

          if (isPDF) {
              content.push(pdfUserMessage(numLevels, courseSectionsPrompt) as any);
              content.push(pdfAttatchment(arrayBuffer) as any);
          } else {
              const html = await blob.text();
              content.push(htmlUserMessage(numLevels, courseSectionsPrompt, html) as any);
          }

          // Step 3: stream structured JSON back, validated against the level schema.
          const result = await streamObject({
              model: google("gemini-1.5-flash"),
              schema: multipleLevelSchema,
              messages: [
                  {
                      role: "user",
                      content: content
                  }
              ]
          });

          return result;
      }
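
      The multipleLevelSchema referenced above isn’t shown; with the Vercel AI SDK it would typically be a Zod schema that streamObject validates the streamed JSON against. A hypothetical shape, just to show the idea (the field names are illustrative, not the project’s actual schema):

      import { z } from "zod";

      // Illustrative only - the real multipleLevelSchema in the project may differ.
      const questionSchema = z.object({
          question: z.string(),
          answers: z.array(z.string()).describe("Answer options, one of which is correct"),
          correctIndex: z.number().int(),
      });

      const levelSchema = z.object({
          title: z.string(),
          description: z.string(),
          questions: z.array(questionSchema),
      });

      export const multipleLevelSchema = z.object({
          levels: z.array(levelSchema),
      });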
      
    • Cr4yfish@lemmy.worldOP · 2 months ago

      Haha. Well we can’t all actually be Duolingo and employ people to create the courses :D

        • Robust Mirror@aussie.zone · 2 months ago

          I’ve made custom flashcards for Anki to study stuff, and I tested this for some similar things; it was a lot faster and easier. Anki feels like it takes forever, so the investment of making a custom set is only worth it for things you need to study for a long time.

          If all you want is to generate a bunch of flashcards fast, and you have a PDF with the info presented clearly, this is an easy method.

        • Cr4yfish@lemmy.worldOP · 2 months ago

          Well, yes, in a way at least. I’m not pretending to have invented something never done before, although it already has multiple features that Anki doesn’t have.

  • grapemix@lemmy.ml · 2 months ago

    Is it for self-host ppl too?

    For all projects/apps, I am looking for OIDC, S3 and PgSQL. It’s easier to implement these features early on, and they will make any project more popular in the self-hosting community.

    • Cr4yfish@lemmy.worldOP · 2 months ago

      Is it for self-host ppl too?

      In theory, that’s not an issue. I use Supabase, which you can self-host as well.

      You can also self-host the Mistral client, but not Gemini. However, I am planning to move away from Gemini towards a more open solution, which would also support self-hosting or in-browser AI.

      I am looking for OIDC, S3 and PgSQL

      Since I use Supabase, it runs on PgSQL and Supabase Storage, which is just an adapter to AWS S3 - or any S3, really. For auth, I use Supabase Auth, which uses OAuth 2.0; that’s the same as OIDC, right?
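
      For anyone eyeing the self-hosted route, the Supabase pieces above look roughly like this from the app’s side in supabase-js (the OAuth provider and the env variable names are placeholders, not the project’s actual ones):

      import { createClient } from "@supabase/supabase-js";

      // Works the same against supabase.com or a self-hosted Supabase instance;
      // only the URL and key change. Env variable names are placeholders.
      const supabase = createClient(
          process.env.NEXT_PUBLIC_SUPABASE_URL!,
          process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
      );

      // Storage: the "documents" bucket sits on top of any S3-compatible backend.
      async function uploadDocument(file: File) {
          const { data, error } = await supabase.storage
              .from("documents")
              .upload(`uploads/${file.name}`, file);
          if (error) throw error;
          return data.path;
      }

      // Auth: OAuth sign-in through Supabase Auth (provider is just an example).
      async function signIn() {
          return supabase.auth.signInWithOAuth({ provider: "github" });
      }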

      • grapemix@lemmy.ml · 2 months ago

        Very cool. You can check out Ollama for hosting a local AI model.
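
        For a sense of scale, talking to a locally running Ollama server is just one HTTP call (default port 11434; the model name is only an example):

        // Minimal call against a local Ollama server.
        async function askLocalModel(prompt: string): Promise<string> {
            const res = await fetch("http://localhost:11434/api/chat", {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify({
                    model: "llama3.1", // any model pulled via `ollama pull`
                    messages: [{ role: "user", content: prompt }],
                    stream: false, // one JSON response instead of a stream
                }),
            });
            const data = await res.json();
            return data.message.content;
        }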

        OIDC is an extension of OAuth2 that focuses on user authentication rather than user authorization. Once OIDC authenticates a user, it uses the OAuth2 specifications to perform authorization.

        The easiest way to support OIDC is through a library for your framework/language; all major languages already have OIDC libraries. Take a look at Authelia, which has pretty nice docs. We host lots of apps, and we don’t want to log in a hundred times, once for each app. It’s nice to log in only once and have all the apps play nice with each other ;)
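
        To make the OAuth2-vs-OIDC distinction concrete: an OIDC provider publishes a discovery document, and the client ends up with an ID token (authentication) in addition to the usual OAuth2 access token (authorization). A bare-bones sketch against the standard endpoints (the issuer URL, client ID and redirect URI are placeholders):

        // Fetch the provider's standard OIDC discovery document.
        const issuer = "https://auth.example.com"; // placeholder issuer
        const config = await fetch(`${issuer}/.well-known/openid-configuration`)
            .then((r) => r.json());

        // Send the user to the authorization endpoint, requesting the "openid"
        // scope - that's what turns plain OAuth2 into OIDC and yields an ID token.
        const authUrl = new URL(config.authorization_endpoint);
        authUrl.searchParams.set("client_id", "my-client-id"); // placeholder
        authUrl.searchParams.set("redirect_uri", "https://app.example.com/callback");
        authUrl.searchParams.set("response_type", "code");
        authUrl.searchParams.set("scope", "openid profile email");

        // After the redirect back, the code is exchanged at config.token_endpoint
        // for both an access_token (authorization) and an id_token (authentication).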

  • bloubz@lemmygrad.ml · 2 months ago

    Cool concept! Good luck with it

    Hope you can get around to letting people switch models, and maybe let them use open-source/open-data models?

    I’ve also heard about the Vercel AI SDK, which lets you use different models through a common SDK so that you don’t rely on a specific implementation.