

That’s my experience as well. To get a healthy, moderate amount of news, you have to actively avoid it. I even tried avoiding weather forecasts at one point, but people kept telling me anyway.
Independent thinker valuing discussions grounded in reason, not emotions.
I say unpopular things but never something I know to be untrue. Always open to hearing good-faith counterarguments. My goal is to engage in dialogue that seeks truth rather than scoring points.
Yeah, I agree with all of this. What I’m pushing back against is the absolute, dismissive tone some people take whenever the potential dangers of AGI are brought up. Once someone is at least willing to accept the likely reality that we’ll have AGI at some point, then we can move on to debating the timescale.
If an asteroid impact were predicted 100 years from now, at what point should we start taking steps to prevent it? Framing it this way makes it feel more urgent - at least to me.
I’d rather stay informed too, but not at the expense of my mental wellbeing. It’s not political posts that are the issue here - it’s the sheer amount of them.
I’d embarrass her so hard after talking to me like that.
It’s pretty disingenuous to suggest that my decision to block political posts on Lemmy comes from a desire to stay ignorant.
Sticking your head in the sand and hoping it doesn’t affect you until it’s over is probably not the best way to handle it.
Consuming anger-inducing political articles all day, every day, is probably not the best way to handle it either. For most people, this likely has a far greater effect on their mental wellbeing than anything these guys are doing policy-wise. On Reddit, there was at least a slim chance that you could make a convincing enough argument that someone would drop their support for them, but good luck finding such a person on Lemmy. We’re preaching to the choir here.
That is what does most of the heavy lifting for me too. Nearly all political discussion here is, in fact, discussion about either Trump or Musk. I’d be willing to bet that at any moment, at least a third of the posts on anyone’s front page include one of those names.
You can’t know that.
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
Sure, but that’s still just a speed bump. In a few hundred or thousand years, civilization would rebound and we’d continue from where we left off.
I’d like to be informed too, but at this point it’s essentially a choice between being firehosed by American politics or blocking all of it. I wouldn’t mind if it were just the occasional article when something actually newsworthy happens. Right now it’s largely just recreational outrage.
No worries dude!
This doesn’t just apply to AGI; the same could be said about any technology. If it can be created and there’s value in creating it, then it’s just a matter of time until someone invents it, unless we go extinct before that.
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.
That is not even remotely what I’m saying.
I’ve unfollowed so many communities due to the overwhelming amount of political articles already. If I keep doing that there’s not going to be anything left to browse soon. People are posting that stuff even on meme communities.
Firstly, I’ve been talking about improvements in AI technology broadly, not any specific subfield. Secondly, you can’t know that. While I doubt LLMs will directly lead to AGI, I wouldn’t claim this with absolute certainty - there’s always a chance they do, or at the very least, that they help us discover what the next step should be.
Like I said: I’ve made no claims about the timeline. All I’ve said is that incremental improvements will get us there eventually.
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just simply knowing what doesn’t work is a step in the right direction.
We already know that general intelligence is possible. The question that remains is whether it can be replicated artificially.
None, it’s the web version.
Sorry for late reply, I no longer use this account.