

Not yet, it manages snaps but I can add flatpaks as well. I made it so all definitions are in a single file so that people can easily contribute by testing. I will update the documentation to reflect this.
I am not sure I understand the use case, but as it stands now you can select a profile and run Gosuki once; it will import everything from that profile, or even all profiles if you prefer.
Thanks for the feedback, much appreciated. Yes, I totally agree, and multi-device sync is a top priority that will hopefully be implemented in the next release.
I am developing this program out of personal need, so it keeps me motivated to keep improving it.
Packaging is definitely on my todo list: it will cover all common Unix platforms.
Right now it’s not yet built in, but it’s high on the priority list of features and should be fairly easy to add soon. It will be either a self-hosted sync server or a replicated architecture.
For mobile, this can be achieved with the auto-import feature and Syncthing.
Haha, I literally had no idea that Pocket was shutting down!! I guess that’s one more argument to avoid subscription services.
Yes, that’s exactly how it works. It’s the most difficult part of the project, especially for Mozilla browsers.
I figured this is the last place browser vendors haven’t yet dared to lock us out of.
The program relies on pulling the bookmarks out of the filesystem. I have no idea how feasible it would be on Android given the sandboxed nature of apps.
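For the curious, the gist of the Firefox side is querying the profile’s `places.sqlite` directly. Below is an illustrative Python sketch, not Gosuki’s actual code (Gosuki is written in Go and also has to deal with database locking while the browser runs); the function name is made up, and the mock database only recreates the two columns the query touches.

```python
import os
import sqlite3
import tempfile

def read_firefox_bookmarks(places_db):
    """Read (title, url) pairs from a Firefox places.sqlite file.

    Opens the database read-only so the browser's own lock is not disturbed.
    """
    con = sqlite3.connect(f"file:{places_db}?mode=ro", uri=True)
    try:
        cur = con.execute(
            # type = 1 marks bookmark entries; fk points into moz_places
            "SELECT b.title, p.url "
            "FROM moz_bookmarks b JOIN moz_places p ON b.fk = p.id "
            "WHERE b.type = 1 AND p.url IS NOT NULL"
        )
        return cur.fetchall()
    finally:
        con.close()

# Build a tiny mock places.sqlite to demonstrate the query
tmp = os.path.join(tempfile.mkdtemp(), "places.sqlite")
con = sqlite3.connect(tmp)
con.executescript("""
    CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url TEXT, title TEXT);
    CREATE TABLE moz_bookmarks (id INTEGER PRIMARY KEY, type INTEGER,
                                fk INTEGER, title TEXT);
    INSERT INTO moz_places VALUES (1, 'https://archlinux.org', 'Arch');
    INSERT INTO moz_bookmarks VALUES (1, 1, 1, 'Arch Linux');
""")
con.commit()
con.close()

print(read_firefox_bookmarks(tmp))  # [('Arch Linux', 'https://archlinux.org')]
```

Chromium-based browsers are easier since they store bookmarks in a plain JSON file inside the profile directory.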
However, it is possible to manually export your bookmarks from time to time to a folder that is synced with Syncthing, for example. Gosuki has a bookmark HTML auto-import feature that watches a folder and imports any bookmark files when a change is detected. This is how I am personally doing it right now.
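The idea behind such a watch folder can be sketched with simple mtime polling. This is just an illustration in Python, not Gosuki’s implementation (which is in Go and likely uses filesystem events); all names here are made up.

```python
import glob
import os
import tempfile
import time

def scan_for_new_exports(watch_dir, seen):
    """Return bookmark HTML exports that are new or modified since last scan.

    `seen` maps path -> mtime from the previous pass; it is updated in place.
    """
    changed = []
    for path in glob.glob(os.path.join(watch_dir, "*.html")):
        mtime = os.path.getmtime(path)
        if seen.get(path) != mtime:
            seen[path] = mtime
            changed.append(path)
    return changed

def watch(watch_dir, import_fn, interval=5.0):
    """Poll `watch_dir` forever, calling `import_fn` on each changed export."""
    seen = {}
    while True:
        for path in scan_for_new_exports(watch_dir, seen):
            import_fn(path)
        time.sleep(interval)

# Quick demo with a throwaway directory
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "bookmarks.html"), "w") as f:
    f.write("<!DOCTYPE NETSCAPE-Bookmark-file-1>")
seen = {}
print(scan_for_new_exports(demo_dir, seen))  # the new export is picked up
print(scan_for_new_exports(demo_dir, seen))  # []  (unchanged on second pass)
```

Pointing the watched folder at a Syncthing share is what makes the mobile workflow above work: the phone drops an export into the share, and the watcher picks it up on the desktop side.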
IMO, despite some unjustified rumors, Arch is a very stable distro. For me it feels the same as Debian stability-wise while still being on the cutting edge. The Arch wiki is the second most important reason.
On your HD, in a portable SQLite database. It is a fully offline solution.
Good to know, thanks. For info, all modern Bluetooth chips have compatibility issues with Linux anyway.
May I ask which motherboard it was? I am planning to soon get exactly the same specs, but with an X870E Taichi, which is renowned for good Linux compatibility.
Good advice is to ask on the Level1Techs forums before buying new hardware.
Start from `man man` and learn to use man pages. Get used to looking up the man page for any topic before checking the internet.
LMFAO, I was on the Reddit thread reading this post and couldn’t believe my eyes reading the comments. We’re truly living in revelatory times. Like you said, this is a long overdue wakeup call for the rest of the “uncivilized” world.
XMPP + OMEMO => good luck
One thing I can imagine is something like an unconscious “self-censorship”: choosing a more permissive license to attract more people, and even corporations that will hire developers…
This is the result of years of anti-copyleft propaganda, which has started to pay off. Now all that corporations need to do is wait for new projects and libraries to pop up and subtly (more often than not, openly) allocate resources to whichever project they need, or simply EEE them. A much easier exercise than during the early years of copyleft, when we could literally have a free alternative operating system to Microsoft, Apple and IBM while they were openly fighting it. Read up on the Education and Government Incentives program for a reminder of what corporations are capable of.
I highly doubt these are sponsored by any big corp; they’re just hobbyists/students who think it is an interesting project to undertake and who don’t care about the GPL as much as they care about doing something interesting to them.
I wanted to test this theory. Quickly looking at the commit history, you can see that although the project might have started as a hobby/student weekend project, it is currently maintained by someone whose official affiliation is director at the Mozilla corporation.
PS: I am not pointing the finger at any entity here; I picked this project as an example to have a discussion on this topic.
I am planning to try it out, but for Caddy users I came up with a solution that works, after being bombarded by AI crawlers for weeks.
It is a custom Caddy CEL expression filter coupled with caddy-ratelimit and caddy-defender.
Now here’s the fun part: the defender plugin can produce garbage as a response, so any request matching an AI crawler will poison their training dataset.
Originally I relied only on the rate limiter and noticed that the AI bots kept retrying whenever the limit was reset. Once I introduced data poisoning, they all stopped :)
```
git.blob42.xyz {
	@bot <<CEL
		header({'Accept-Language': 'zh-CN'})
		|| header_regexp('User-Agent', '(?i:(.*bot.*|.*crawler.*|.*meta.*|.*google.*|.*microsoft.*|.*spider.*))')
	CEL

	abort @bot

	defender garbage {
		ranges aws azurepubliccloud deepseek gcloud githubcopilot openai 47.0.0.0/8
	}

	rate_limit {
		zone dynamic_botstop {
			match {
				method GET
				# to use with defender
				#header X-RateLimit-Apply true
				#not header LetMeThrough 1
			}
			key {remote_ip}
			events 1500
			window 30s
			#events 10
			#window 1m
		}
	}

	reverse_proxy upstream.server:4242

	handle_errors 429 {
		respond "429: Rate limit exceeded."
	}
}
```
If I am not mistaken, the 47.0.0.0/8 IP block is for Alibaba Cloud.