

True! Then instead of spilling my coffee on the counter I could spill it on the counter instead.
Freedom is the right to tell people what they do not want to hear.
The best coffee I’ve ever drunk was from an Aeropress, but honestly, if you use freshly ground beans in a Moccamaster, they’re quite difficult to tell apart.
It’s not to protect it from cracking - it’s to stop the leftover coffee from burning onto it, since I only rinse it after use.
I don’t waste good coffee.
It’s intentional. Leaves an air gap between the pot and the hotplate.
When I make coffee just for myself, I always measure out the same amount of water and this never happens. But my SO is slightly less autistic about it than I am and makes inconsistent amounts when brewing for the two of us - and I just can’t stand the thought of pouring even a drop of coffee down the drain. So, I spill it on the table and floor instead.
I live in a small granny cottage and “my desk” means the kitchen table 2.5 meters away. I technically could move it to my desk and it would still remain in the kitchen.
Or maybe I just need to start drinking straight from the jug.
Moccamaster is a relatively popular brand where I live. Most people know about it. It always boggles my mind when I see a middle-class family with a 35€ coffee maker. Why cheap out on something you use multiple times a day for the rest of your life? These things are not that expensive, and spare parts are widely available.
I mean, honestly, this is one of the better uses for machine learning. Not that this age checking is a good thing, but if you’re going to do it on a mass scale, then this seems like the right approach. I imagine that, especially for a relatively heavy user, this is going to be extremely accurate and far better than the alternative of providing a selfie, let alone a picture of an ID.
My word filters reliably block a third of my front page here. They include every keyword seen here and much more.
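For what it’s worth, that kind of word filter is trivial to sketch. A minimal version, assuming posts are plain title strings and the blocklist is a set of lowercase keywords (both hypothetical, not any site’s actual implementation):

```python
# Hypothetical blocklist; a real one would be user-configured.
BLOCKED = {"ai", "llm", "openai", "chatgpt"}

def is_blocked(title: str) -> bool:
    """Return True if any blocked keyword appears as a word in the title."""
    words = {w.strip(".,!?:;\"'()").lower() for w in title.split()}
    return not BLOCKED.isdisjoint(words)

front_page = [
    "OpenAI announces new model",
    "Why I switched to a Moccamaster",
    "LLM benchmarks are broken",
]
visible = [t for t in front_page if not is_blocked(t)]
# visible -> ["Why I switched to a Moccamaster"]
```

Matching whole words rather than substrings avoids false positives like “maintain” tripping an “ai” filter.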
Way to move the goalposts.
If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.
Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.
So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.
Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.
What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?
Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.
Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.
Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.
It won’t solve anything
Go tell that to AlphaFold, which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy.
Asking for investment advice from a system that’s designed to do nothing but generate natural-sounding language based on probabilities is pretty stupid.
That being said, what’s wrong with this answer? I think it’s more or less a good and balanced take.
Here’s the first half of it that I left out:
“AI” as an investment isn’t one thing—it’s more like a category of bets, ranging from hardware to software to services, each with wildly different risk profiles. So the honest answer is: yes, it can be a good investment—but only if you understand what you’re actually investing in.
Here’s why that nuance matters:
Buying Nvidia stock in 2019 was a good AI investment. Buying it now, after a 10x run-up? Much less clear—it’s priced as if they’re the sole arms dealer in a forever war.
OpenAI, Anthropic, etc. aren’t publicly traded, so retail investors can’t buy them directly. Instead, you get exposure via companies like Microsoft, Amazon, or other backers—meaning you’re not really investing in “AI” directly, but as part of a much broader bundle.
AI startups and ETFs are all over the place—some are thinly veiled hype vehicles chasing trends, while others are building real infrastructure (like vector databases, chip design tools, or specialized AI services). Picking the wrong one is like investing in Pets.com during the dot-com boom—it sounds techy, but the business might be garbage.
Thematic ETFs like BOTZ or ROBO give you AI exposure but are diluted by their attempt to hedge across subsectors. They tend to underperform when compared to cherry-picking the winners.
I’m unable to replicate your results. I get a long and nuanced answer. Mind sharing the answer you got?
When I asked the same thing the conclusion was:
So is AI a good investment? The sector has long-term potential, especially in areas like chip manufacturing, enterprise automation, and maybe foundational model licensing. But it’s also deeply speculative right now, with prices reflecting hype as much as earnings.
If you’re thinking long-term and can stomach volatility, AI is worth including. If you’re chasing short-term returns because you think “AI is the future,” you might be buying someone else’s exit.
It’s a bit sensationalist to say someone “faces possible jail time” just because the maximum sentence for breaking a law is up to three years in prison. Also, he’s not being sued for reviewing handheld gaming devices - it’s about the ones shipped with copyrighted content.
It’s called rage bait - and it’s working.