  • So, bizarrely, the best experience is to self-host and pirate. That’s what you get when the entire entertainment industry is hostile to consumers.

    When Netflix first became big, it was popular because it was a one-stop shop for almost all your content. It was like a big library of content in one place: you paid a reasonable monthly fee and it was all there. Piracy dipped as a result.

    Now all the content is fragmented into numerous walled gardens you have to pay separate fees to access. People consume the same amount of content as before, but now they have to pay 4 or 5 fees because it’s spread out.

    Unsurprisingly piracy is booming again.


  • Also the water is just a medium for energy transfer; it can be reused & recycled in near perpetuity in a closed system.

    We’re used to open systems with water in power stations, including cooling towers etc, because water is abundant on Earth, so it’s cheaper to just dump it back into the atmosphere; we probably take the whole thing for granted.

    But it could be engineered to be a closed system a bit like a coolant in a refrigeration unit cycling back and forth. And it probably will need to be a closed system in the future in space where water will be incredibly precious.



  • It sounds like your system clock may be the issue. You have a hardware system clock inside your device. Linux usually uses the internet to set the time, but it still refers to your system clock. If the internet-provided time is too far off your system clock, it may ignore it and display your system time instead.

    KDE respects the NTP clock settings used by your Linux system, while ironically GNOME does not and does its own thing directly with the time/date control. This is probably why you’re only now noticing a problem.

    So either your system clock is supposed to be UTC but is actually set to local time, or your system clock is correct but your timezone in Linux is way off.

    If you run “timedatectl status” in a terminal, it’ll show your current local time, UTC and RTC time, as well as your timezone and whether the RTC is set to your local timezone or UTC. The RTC is the hardware clock in your device.
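
    The output looks something like this (the values here are just illustrative):

        $ timedatectl status
                       Local time: Mon 2025-01-06 14:30:00 GMT
                   Universal time: Mon 2025-01-06 14:30:00 UTC
                         RTC time: Mon 2025-01-06 14:30:00
                        Time zone: Europe/London (GMT, +0000)
        System clock synchronized: yes
                      NTP service: active
                  RTC in local TZ: no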

    If “RTC in local TZ” says no, then the RTC and UTC values should be the same, as your hardware clock is set to UTC time. And if the UTC time is wrong, then your system is using your hardware clock to work out UTC incorrectly. UTC is the zero timezone worldwide and has an absolute value - it’s the same for everyone, and you can easily find it with a search engine. If the displayed UTC is wrong on your system, then you’re out of sync with everyone.

    So, how to fix it if it’s wrong:

    One way would be to tell your system what the hardware clock should represent and then set it correctly. Use “timedatectl set-local-rtc 1” to treat the hardware clock as local time, or “timedatectl set-local-rtc 0” to treat it as UTC. You can use either, but UTC is better.

    That should fix the issue as the network time will now come in correctly.

    But if you want, you can also manually set the local time and date with “timedatectl set-time hh:mm:ss”. Once that is set, your RTC should also be changed and back in sync, depending on whether you set it up to be local or UTC. When you set the local time, it will work out the UTC value based on your timezone. Note that if the timezone is wrong, it’ll still be wrong!

    If you can’t set the time because NTP (network time) is running, you could leave it and the clock should now sort itself out. But if you want to force a manually set time, you can turn off NTP with “timedatectl set-ntp false” and then set the time manually using “timedatectl set-time hh:mm:ss”.

    If you’re still getting NTP error messages, you could also disable the NTP system service temporarily: “systemctl disable --now chronyd”. Turn it back on afterwards with “systemctl enable --now chronyd”.
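
    Put together, the manual fix might look like this (the date/time value is a placeholder, and chronyd is assumed to be your NTP service, as above):

        timedatectl set-ntp false                    # stop NTP overriding manual changes
        timedatectl set-time "2025-01-06 14:30:00"   # set the local date and time
        timedatectl set-local-rtc 0                  # keep the hardware clock in UTC
        timedatectl set-ntp true                     # re-enable network time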

    Finally, do make sure the timezone is correct. I know you say it is, but timedatectl shows you what the system thinks it is, and if it’s wrong then RTC/UTC will still be wrong, as the timezone is used to convert from local time to UTC. You can use timedatectl to change the timezone: “timedatectl set-timezone name”.

    There are loads of valid timezones, but only valid names will work. Get your local timezone’s official name online, or use “timedatectl list-timezones” to see all the options. You can filter the list using grep etc.
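
    For example (the timezone here is just an illustration - substitute your own):

        timedatectl list-timezones | grep -i london   # find the official name
        timedatectl set-timezone Europe/London        # then set it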

    Hopefully that’ll fix the issue for you. You can also boot into your BIOS and manually set the hardware clock if need be, but Linux still needs to know whether it’s supposed to be UTC or local time.


  • I’d recommend either openSUSE or Fedora, both with KDE. They’re big, well supported distros which should install without issue and provide a slick, modern experience. I use openSUSE, as I find the YaST system tools convenient and user friendly.

    I’d avoid Ubuntu; it has multiple issues. Mint is a good distro, but I think any big mainstream distro “just works” now, so I’d go for something that uses a slicker desktop. I prefer KDE, which is available on Mint but just isn’t as tightly integrated as their own Cinnamon desktop.


  • I’ve tried Arch - it allows you to make a system that is exactly what you want, with no bloat from installing stuff you never need or use. It also gives you absolute control.

    On other distros like Fedora, you get a preconfigured system set up for a wide range of users. You can trim the packages somewhat, but you will often have core stuff installed that is more than you’ll need, as it caters to everyone.

    Arch allows you to build it yourself, install only the things you actually want, and configure them exactly how you want.

    Also you learn an awful lot about Linux building your system in this way.

    I liked building an Arch system in a virtual machine, but I don’t think I could commit to maintaining an Arch install on my host. I’m happy to trade bloat for a “standard” experience that means I can get generic support. The more unique your system, the more unique your problems can be, I think. But I can see the appeal of Arch - “I made this” is a powerful feeling.



    • OS --> Linux: openSUSE with KDE

    • YouTube --> FreeTube - open source, private YouTube client for Linux, macOS and Windows

    • Downloading music/videos --> yt-dlp

    • Downloading videos/images --> gallery-dl

    • Email --> Thunderbird (really moved forward in the last few years)

    • Notes --> Joplin

    Selfhosting (mine is on a Raspberry Pi):

    • Streaming library --> Jellyfin

    • Photo library --> Immich

    • Downloads --> qBittorrent, Prowlarr, Radarr, Sonarr and LazyLibrarian in a Docker stack with a VPN (see the sketch after this list)

    • Smart home --> Home Assistant

    • File sync --> Syncthing (I don’t have problems with long file names - maybe a Windows issue or Linux FS? I use ext4 on all my devices and don’t use Windows anymore)
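
    For the VPN’d Docker stack, a minimal sketch of the wiring (gluetun as the VPN container is my choice here, and the env values are placeholders - adapt for your own provider):

        # VPN container; other containers route their traffic through it
        docker run -d --name gluetun --cap-add=NET_ADMIN \
          -e VPN_SERVICE_PROVIDER=<your-provider> \
          -e VPN_TYPE=wireguard \
          -e WIREGUARD_PRIVATE_KEY=<your-key> \
          ghcr.io/qmcgaw/gluetun

        # qBittorrent shares gluetun’s network namespace, so it has
        # no route to the internet except through the VPN
        docker run -d --name qbittorrent \
          --network=container:gluetun \
          lscr.io/linuxserver/qbittorrent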




  • Looking at your error, it’s because rsync is erroring.

    I’d start by testing rsync with an individual text file saved to /dev/dm-0 and see what error is returned.

    Timeshift is good, but it is basically just a tool that uses rsync to save a copy of your system folders (or other folders if you wish).

    Rsync needs to be able to read the source and write to the destination, so I’d start by testing that rsync can do both.
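
    As a quick sanity check, something like this (the destination path is a placeholder - use wherever your backup filesystem is actually mounted):

        echo "test" > /tmp/testfile
        # dry run first to see what rsync would do, then the real copy
        rsync -av --dry-run /tmp/testfile /path/to/backup/destination/
        rsync -av /tmp/testfile /path/to/backup/destination/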

    Given you’re using an encrypted partition, it’s possible you’re trying to read/write the wrong locations. You’ve provided device UUIDs, but you’d probably actually need to be backing up the mounted, decrypted locations. I.e. the root file system / will actually be a mounted location in your Linux setup, probably under /run, with symlinks pointing to it for all the different system folders. Similar for /home if you want to back up personal files.

    The device UUID would point to the filesystem containing the encrypted container (managed by LUKS), which will have very limited read/write permissions, rather than directly to the decrypted contents of / or /home as you’d expect in a normal system. In particular, if /dev/dm-0 (looks to be an NVMe drive) is an encrypted destination, then you really want to be pointing directly at its decrypted, mounted location to write your files into, not the whole device.

    Edit: think of it like this - you don’t want to back up the encrypted container with Timeshift, you want to back up the decrypted contents (your filesystem) into another location in your filesystem (encrypted or decrypted). If the destination is also an encrypted location, you need to back up into its file system, not the device where the encrypted container sits. So use specific filesystem paths, not UUIDs. That would be something like /mnt/folder or /run/folder, not /dev/anything, as that’s a hardware location and it isn’t directly mounted in an encrypted setup, unlike in a non-encrypted system.
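
    To see where the decrypted filesystems are actually mounted, something like this should help:

        findmnt /        # source device and mount point for the root filesystem
        findmnt /home    # likewise for /home, if it’s a separate filesystem
        lsblk -f         # overview of devices, LUKS containers and mount points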


  • 100% CPU use doesn’t make sense. RAM would be the main constraint, not the CPU. Worth looking into - maybe a bug or a broken piece of software.

    Also, the DE may be more the issue than the distro itself. You could install an even more lightweight desktop environment like Openbox. It’s also worth checking whether you’re using X11 or Wayland. It’s easy to imagine Wayland not having been optimised or extensively tested on something like your device, and it could easily be a random bug if the DE is pushing your CPU to 100%.
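
    To check which one your session is using (in most desktop sessions this prints “x11” or “wayland”):

        echo $XDG_SESSION_TYPE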

    There are also super lightweight distros like Puppy Linux.


  • It had to happen eventually, and this seems a reasonable time to make the move. It’ll be beneficial for all Linux users, and probably a huge relief for GNOME devs to be able to focus purely on Wayland.

    It will just suck a bit for those on rolling release distros who still experience major issues with Wayland, particularly when it’s not the GNOME or Wayland projects that need to make a fix - looking at you, Nvidia.

    I wouldn’t be surprised if other big DEs, such as KDE, start making firmer plans for dropping X11. I’m one of the 30% of KDE users still on X11 - for me it was Nvidia issues, and I do remain anxious about being reliant on drivers from a notoriously bad manufacturer. Having said that, the drivers have improved massively over the past 18-24 months, for me at least, and maybe everyone moving over to Wayland is what’s needed to force Nvidia to act.


  • In terms of KDE dependencies, you’re talking basically about Qt. The number of packages you download shouldn’t be too large, and they’re likely used by other Qt programs, which are common.

    However, there is also GSConnect, which is a GNOME extension that uses the KDE Connect protocol.

    I would say that your concerns about the KDE Connect dependencies should be balanced against its good Android and iOS support, and the wide use of KDE Connect means it is well maintained, supported and responsive to security updates. These considerations may outweigh the installation of packages that you otherwise won’t be using. It may be better to go mainstream and accept the dependencies than to hunt down a lesser supported alternative and deal with the associated shortcomings.



  • And women wouldn’t trust that a man has taken it because, ultimately, they’re the ones who become pregnant, not men.

    While companies have looked into male drug-based contraceptives, ultimately even one that was 100% effective would never beat female drug-based contraceptives. It’d have a market, sure - but it wouldn’t stop women taking birth control, because that would remain the only way for them to be sure.


  • It’s about short term vs long term costs, and AWS has priced itself to make it cheaper short term but a bit more expensive long term.

    Companies are more focused on the short term - even if something like AWS is more expensive long term, if it saves money in the short term that money can be used for something else.

    Also, many companies don’t have the money upfront to build out their own infrastructure quickly, but they can afford gradual costs over the longer term. The hope is that, even though it’s more expensive overall, they reach a profitable scale faster, making the extra expense paid to AWS worth it.

    This is how a lot of outsourcing works. And it’s exacerbated by many companies being very short-term and stock-price focused. Companies could invest in their own infrastructure for long-term gain, but they often favour short-term profit boosts and cost reductions to boost their share price or pay out to shareholders.

    Companies frequently do things not in their long-term interests for this reason. For example, companies that own their own land and buildings sell them off and rent them back. Short term it gives them a financial boost; long term it’s a permanent cost and a loss of assets.

    In Signal’s case it’s less of a choice; it’s funded by donations and just doesn’t have the money to build out its own data centre network. Donations will support ongoing, gradually scaling costs, but it’s unlikely they’d ever get a huge tranche of cash to be able to build data centres worldwide. They should still be using multiple providers, though, and they should also look to build up some infrastructure of their own for resilience and lower long-term costs.


  • It does make sense for Signal as this is a free app that does not make money from advertising. It makes money from donations.

    So every single message, every single user, is a cost without any ongoing revenue to pay for it. You’re right about the long run but you’d need the cash up front to build out that infrastructure in the short term.

    AWS is cheap in the sense that instead of an initial outlay for hardware, you largely only pay for actual use and can scale up and down easily as a result. The cost per user is probably going to be higher than if you were to completely self host long term, but that does then mean finding many millions to build and maintain data centres all around the world. Not attractive for an organisation living hand to mouth.

    However what does not make sense is being so reliant on AWS. Using other providers to add more resilience to the network would make sense.

    Unfortunately this comes back to the real issue - AWS is an example of a big tech company trying to dominate a market with cheap services now for the potential benefit of a long-term monopoly and raised prices in the future. They have 30% market share, and already an outage at Amazon is highly disruptive. Even at 30%, we’re at the point of end users feeling locked in.


  • So in terms of hardware, I use a Raspberry Pi 5 to host my server stack, including Jellyfin with 4K content. I have an NVMe module with a 500GB stick, and an external HDD with 4TB of space via USB. The Pi 5 is headless and accessed directly via SSH or RDC.

    The Raspberry Pi 5 has H.265 hardware decoding, and if you’re serving one video at a time to one client you shouldn’t have any issues, up to and including 4K. It will of course use resources to transcode if the client can’t support the content directly, but the experience should be smooth for one user.

    For more clients, it will depend on how much heavy lifting the clients do. In my case I have a mini PC plugged into my TV; I stream content from the Pi 5 to the mini PC, and the mini PC does the heavy lifting in terms of decoding. The Pi 5’s hardware is barely taxed; it just transfers the video and the client does the hard work. If all your clients are capable, then such a setup would work with the Pi 5.

    An issue would come if you wanted to stream your content to multiple devices at the same time and the clients don’t directly support H.265 content. In that case, the Pi 5 would have to transcode the content to another format bit by bit as it streams it to each client. It’d cope with one user for sure, but I don’t know how many simultaneous clients it could support at 1440p.

    The other consideration is what other tools are being used on the server at the same time. Again, for me, I live alone, so I’m generally the only user of my Pi 5 server’s services. Many services are low powered, but I do find things like importing a stack of PDFs into Paperless-ngx surprisingly CPU-intensive, and in that case the device could struggle if it’s also expected to transcode content.

    I think from what you describe the Pi 5 could work, but you may also want to look at a higher powered mini PC if your budget allows.

    For reference, I use DietPi as the distro on my server, and I use a mix of DietPi packages (which are very well made for easy install and configuration) and Docker. I am using quite a few Docker stacks now due to the convenience of deploying them. DietPi is Debian-based, with a focus on providing preconfigured packages to make setup easy, but it is still a full Debian system and anything can be deployed on it.
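
    As an illustration of that convenience, deploying Jellyfin via Docker is roughly a one-liner (the host paths are placeholders for wherever your config and media actually live):

        docker run -d --name jellyfin \
          -p 8096:8096 \
          -v /path/to/config:/config \
          -v /path/to/media:/media \
          jellyfin/jellyfin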

    Obviously the other consideration is that the Pi 5 is an ARM device and a mini PC would be x86_64. But so far I’ve not found any tools or software I’ve wanted that aren’t compiled and available for the Pi 5, either via DietPi or Docker; ARM devices are popular in this realm. I did come across a bug in Docker on ARM devices which broke my VPN setup - that was very frustrating, and I had to downgrade Docker a few months ago while awaiting the fix. That may be worth noting, given Docker is very important in this realm and most servers globally are still x86.

    If I were in your position and I had $200, I’d buy the maximum CPU and GPU capability I could in one device, so I’d actually lean towards a mini PC. If you want to save money, then the Pi 5 is reasonable value, but you’d need to include a case and may want to consider an NVMe or SSD companion board. Those costs add up, and the value of a mini PC may compare better as an all-in-one device, particularly if you can get a good one second hand. There are also other SBCs that may offer even better value or more power than a Pi 5.

    Also bear in mind that I have both a mini PC and a Pi 5; they do different things, with the Pi 5 as the server while the mini PC is a versatile device - I play games on it, for example. If you will only have one server device and pre-existing smart TVs etc, you’ll be more reliant on the server’s capabilities, so again you may want to opt for the most powerful device you can afford at your price point.


  • OpenOffice? It hasn’t been touched in a decade. LibreOffice is the true continuation of OpenOffice, which was forked off after Oracle bought Sun and OpenOffice had been left with poor governance and slow updates.

    OpenOffice finally ended up under the Apache Foundation but hasn’t seen a major release since 2014.

    LibreOffice has had continual development, with both bug fixes and new features, and The Document Foundation gives it good governance and independence as an open source project.

    Honestly, switch to LibreOffice.