• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 3rd, 2023

  • So far it’s been good! Lemmy has made me hopeful for better social media. I’m not hugely into Twitter-style social media, so I was never really able to appreciate Mastodon.

    I’m actually quite surprised with how much content is here already. There are regular posts and conversations, and a good mix of content. It’s not at the level reddit is in terms of volume, but I don’t feel starved or anything. I look forward to the future here!





  • I imagine it’ll be possible in the near future to improve the accuracy of technical AI content fairly easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check correctness; e.g., a Python response could be plugged into a Python interpreter to make sure it does, to some extent, what it’s purported to do. The validator then either decides the output is most likely correct, or generates feedback asking the first LLM to revise until the response passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example. A rough sketch of that loop is below.
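
    A minimal sketch of that generate-then-validate idea, assuming a placeholder ask_llm function standing in for whatever model API is used (that name, the prompt wording, and the 30-second timeout are assumptions, not part of any real library); the validator here simply executes the candidate in a Python subprocess and feeds any error output back for revision:

    ```python
    # Hypothetical generate/validate loop; ask_llm is a stand-in, not a real API.
    import subprocess
    import sys
    import tempfile


    def ask_llm(prompt: str) -> str:
        """Placeholder for a call to whatever code-generating model is used."""
        raise NotImplementedError("wire this up to an actual LLM client")


    def validate_python(code: str) -> tuple[bool, str]:
        """Run the candidate code in a subprocess; return (passed, stderr)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        return result.returncode == 0, result.stderr


    def generate_validated(task: str, max_rounds: int = 3) -> str:
        """Ask for code, check it against a real interpreter, revise on failure."""
        prompt = task
        candidate = ask_llm(prompt)
        for _ in range(max_rounds):
            passed, errors = validate_python(candidate)
            if passed:
                return candidate
            # Feed the real interpreter output back so the model can revise.
            prompt = (
                f"{task}\n\nYour previous attempt failed with:\n{errors}\n"
                "Please revise the code."
            )
            candidate = ask_llm(prompt)
        return candidate  # best effort; may still fail validation
    ```

    A stronger validator could also run the task’s own tests or a type checker rather than just executing the script, but the same loop applies.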









  • It doesn’t necessarily replace search engines, but I’ve been using ChatGPT and sometimes Bing Chat more and more. As others have said, it does hallucinate all the time and cannot be trusted to be 100% correct. I don’t see that as a problem, though, as long as I have some way to verify what it says when accuracy is important. The time wasted on bad answers is easily made up by the time saved on correct, or correct-ish, answers.

    I’m a software engineer, so a common work pattern is to ask ChatGPT, “write me code to do X, meeting constraints Y and Z.” As long as the subject isn’t too obscure, it’ll generally produce something I can work with. I then adapt that code sample to the actual context where it’s needed and debug it as if it were my own code. Sometimes it’ll make up functions and things like that, but I’ll fix those, and it doesn’t take any more time than it would to go learn that function while writing my own implementation.

    Another scenario is when I get an error I’m unfamiliar with. Oftentimes I can ask ChatGPT to explain the error, and sometimes even fix it for me. This usage more directly replaces a search engine. If the fix doesn’t work, then I’ll do it the old-fashioned way.

    I’m strongly looking forward to GitHub Copilot X being even more integrated into this workflow than ChatGPT.