Anyone else immediately get a migraine trying to read the first 2 paragraphs/sentences of that article?
Is it because of the author using multiple clauses and multiple layers of context in the first two paragraphs?
If yes, then I understand why. I find myself making the same mistake quite often because my first language is German, which often uses nested clauses (at least they’re more common than in English).
Most likely. My own unfamiliarity with the subject matter plays a part too.
It wasn’t badly written… but it probably could have used a brief introduction.
Yes, they’re using several abbreviations without explaining them properly, which isn’t ideal. It’s likely done to keep the article short, which comes at the expense of people unfamiliar with the topics/organizations.
Another news site I regularly visit has a small information button beside abbreviations, with a popup explaining the term that also links to Wikipedia. This makes understanding articles about unfamiliar topics way easier.
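That kind of widget is surprisingly little code, by the way. A rough sketch in TypeScript, assuming the site wraps abbreviations in standard `<abbr title="...">` tags (the class name and everything else here is made up):

```typescript
// Add a small info button after every <abbr> element. Clicking it opens
// a popup showing the expansion from the title attribute plus a Wikipedia
// link. (Toggling/closing the popup is omitted to keep the sketch short.)
function addAbbrPopups(): void {
  document.querySelectorAll<HTMLElement>("abbr[title]").forEach((abbr) => {
    const button = document.createElement("button");
    button.textContent = "ℹ️";
    button.addEventListener("click", () => {
      const term = abbr.textContent ?? "";
      const popup = document.createElement("div");
      popup.className = "abbr-popup"; // styling left as an exercise
      const link = document.createElement("a");
      link.href = `https://en.wikipedia.org/wiki/${encodeURIComponent(term)}`;
      link.textContent = `"${term}" on Wikipedia`;
      popup.append(abbr.title, document.createElement("br"), link);
      button.after(popup);
    });
    abbr.after(button);
  });
}

addAbbrPopups();
```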
I’m not mad at it :-) It was written for people who were familiar with the situation and posted in a subject-matter-specific forum.
24 hours later I feel like a bit of an ass. I thought about how many times I’ve picked up a technical article and wished for a bit less background… it’s kind of nice that we don’t have to talk about the whole history of OSS before getting to the news/subject.
All communities should enable the summarizing bot https://github.com/RikudouSage/LemmyAutoTldrBot
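For anyone wondering what these bots roughly do: watch for new posts, pull the linked article, summarize it, and reply. I haven’t read that bot’s actual code, so this TypeScript sketch is just the general shape, with stub helpers standing in for the real Lemmy API and summarizer:

```typescript
// Hypothetical main loop of a TLDR bot. None of these helpers reflect the
// real bot; they are stubs for illustration only.
interface Post {
  id: number;
  url: string;
}

async function fetchNewPosts(community: string): Promise<Post[]> {
  console.log(`checking ${community}`); // stub: would call the Lemmy API
  return [];
}

async function fetchArticleText(url: string): Promise<string> {
  return (await fetch(url)).text(); // naive: real bots extract the article body
}

async function summarize(text: string): Promise<string> {
  return text.split(". ").slice(0, 3).join(". "); // stub: first few sentences
}

async function postComment(postId: number, body: string): Promise<void> {
  console.log(`comment on post ${postId}:\n${body}`); // stub: would call the API
}

async function runTldrBot(community: string): Promise<void> {
  for (const post of await fetchNewPosts(community)) {
    const summary = await summarize(await fetchArticleText(post.url));
    // Label the comment as bot-generated, like the real bots do.
    await postComment(post.id, `TL;DR (bot-generated):\n\n${summary}`);
  }
}

runTldrBot("technology").catch(console.error);
```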
It was struggling harder than I was ;-)
I noticed those language models don’t work well for articles with dense information and complex sentence structure. Sometimes they forget the most important point.
They are useful as a TLDR but shouldn’t be taken as fact, at least not yet, and probably not for the foreseeable future.
A bit off topic, but I’ve read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That’s why it’s so important to mark AI/LLM-generated texts (which the TLDR bots do).
Not calling ML and LLMs “AI” would also help. (I went off topic even more.)
I think the Internet would benefit a lot if people would mark their information with sources!
Yeah, that’s right. Having to post sources rules out using LLMs for the most part, since most of them do a terrible job at providing them, even when the information is correct for once.