Maybe consider static IP assignment in your DHCP server (e.g. your internet router) if at all possible… Then you can add a name for it to `/etc/hosts`.

Alternatively you could use Avahi to provide mDNS names to your local network.
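For the `/etc/hosts` route, the entry is just an address followed by one or more names; for instance (the address and hostnames below are made-up examples):

```
# /etc/hosts: map the statically assigned address to a name
# (address and hostnames are made-up examples)
192.168.1.50    mybox.lan mybox
```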
Do you have that file? If not, then `unset SSH_AUTH_SOCK` will work just as well.

If it does exist, then I suppose it has a good chance of working correctly :). `ssh-add -l` will try to use that socket and list the keys held by the agent (or list nothing if there are no keys, but it would still work without error).
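A minimal check could look like this (a sketch; the socket path comes from your environment):

```sh
# Does the socket the environment points at actually exist?
ls -l "$SSH_AUTH_SOCK"

# If it does, ask the agent to list its keys; a working agent
# responds even when it has no keys loaded.
ssh-add -l

# If the socket is stale, drop it so ssh stops trying to use it:
unset SSH_AUTH_SOCK
```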
At the end of the log you find:

```
822413 connect(4, {sa_family=AF_UNIX, sun_path="/run/user/1000/gcr/ssh"}, 110) = 0
...
822413 read(4,
```

meaning ssh is trying to interact with the ssh-agent, but in the end the agent never gives a response.
Use the `lsof` command to figure out which program is providing the agent service and try to resolve the issue that way. If it’s not the OpenSSH ssh-agent, then maybe you can disable its ssh-agent functionality and use the real ssh-agent in its place…

My wild guess is that the program might be trying to interactively verify the use of the key with you, but it is not succeeding in doing that for some reason.
I guess it’s worth checking if those names point to the expected binaries, but I also think it would be highly unlikely for them to be anything other than just `/usr/bin/ssh` and `/usr/bin/ssh-agent`.

As mentioned, `-v` (or `-vv`) helps to analyze the situation.
My theory is that you already have something providing the ssh agent service, but that process is somehow stuck: when ssh tries to connect to it, it either doesn’t respond to the connect, or it accepts the connection but doesn’t actually interact with ssh. Quite possibly ssh doesn’t have a timeout for interacting with the ssh-agent.

Using `eval $(ssh-agent -s)` starts a new ssh agent and replaces the environment variables in question with new ones, thereby avoiding the stuck process.

If this is the actual problem here, then before running the `eval`, `echo $SSH_AUTH_SOCK` would show the path of the existing ssh agent socket. If so, you can use `lsof $SSH_AUTH_SOCK` to see which process that is. Quite possibly it’s provided by `gnome-keyring-daemon` if you’re running Gnome. As to why that process would not be working, I have no ideas.

Another way to analyze the problem is `strace -o logfile -f ssh ..` and then checking what is at the end of the `logfile`. If the theory applies, it would likely end at a `connect` call for the ssh-agent.
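Put together, a diagnosis session might look roughly like this (a sketch; `user@host` is a placeholder and the output will differ per system):

```sh
# Which socket is ssh going to use?
echo "$SSH_AUTH_SOCK"

# Which process is holding that socket?
lsof "$SSH_AUTH_SOCK"

# Trace ssh and inspect where the log ends:
strace -o logfile -f ssh user@host
tail logfile

# Work around a stuck agent by starting a fresh one:
eval $(ssh-agent -s)
ssh-add            # load your key into the new agent
ssh user@host
```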
I think the main problem is that Chromium still contributes towards the browser engine monoculture, as it is bug-for-bug compatible with Chrome. Therefore if you switch to Chromium, it’s still enough for the web sites to test for Chrome compatibility, which they will, because it has the largest market share. Users of competing browsers suffer, further driving the lure of Chrome (or Chromium).
On the other hand, if people switched to some other engine, one that does not share the same core engine or even the same history, this would no longer hold: web sites would need to be developed against the spec, or at least against all the browsers they might realistically expect their customers to use.
I highly doubt businesses would have been this fast in making the switch.
Well, that’s exactly the worry. Why shouldn’t it be? It is their business and livelihood.
As if taking down the systems were the biggest cybersecurity threat a company might have.
Apparently Lapce has remote development as its core feature. But I only (re?)learned of it today…
How did `tramp` not work out for you?
A great Git integration can work well in an editor. I use Magit in Emacs, which is probably as full-featured a Git client as there can be. Granted, for operations such as cherry-picking, rebasing on top of a branch, or `git reset` I most often use the command line (but Magit for interactive rebase).
But editor support for version management can give other benefits as well, for example visually showing which lines are different from the latest version, easy access to file history, easy access to line-based history data (blame), jumping to versions based on that data, etc.
As I understand it, vscode’s support for Git is so basic that it’s easy to understand why one would not see any benefits in it.
It still maintains their market position, which has value. For example, you might not visit other sites because they don’t have the content you want (and the content stays on YT because they have the viewers), or you might even share YT links to other people.
Yes, just mount to `/mnt/videos` and symlink that as needed.

I guess there are some benefits in mounting directly below `$HOME`, though, such as `find`/`fd` working “as expected”, and permissions being limited automatically per the `$HOME` permissions (but those can be adjusted manually).

For finding files I use `plocate`, though, so I wouldn’t get that marginal benefit from mounting below `$HOME`.
My /home is also on a separate filesystem, so in principle I don’t like mounting data under there, because then I cannot unmount /home (e.g. for fsck purposes) unless I also unmount all the other filesystems there. I keep all my filesystems on LVM.

So I just mount to /mnt and use symlinks.

Exception: sshfs I often mount under home.
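As a concrete sketch (the device and paths are made-up examples):

```sh
# Mount the data filesystem outside of /home:
sudo mount /dev/vg0/videos /mnt/videos

# Then expose it in the home directory via a symlink:
ln -s /mnt/videos ~/Videos
```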
But how many use it for browsing, which I imagine this data is from?
Thanks!
The mention was at about 12:06, in the form that OLM breaks down at about 50 users “give or take”, so it’s not really a limitation imposed by the system itself, and it would be difficult to impose one. I doubt this is the experience of all Matrix e2ee users at exactly that point, but e2ee has always had some growing pains, so there could be people with those issues; on the other hand, few large rooms are e2ee to begin with, so experience with those is limited. E2ee also requires the users to be more mindful about their data, as in not losing their private keys, and these problems probably increase linearly with room size.
I didn’t notice any claim of rooms larger than 50 becoming public.
I’ve only heard second-hand info about it, but apparently one local political party uses e2ee in Matrix with hundreds of people in the room, so that should be proof that the encryption is not limited to 50 users—and this info sounds just as well-founded as the information provided by the video ;).
The guy carries on stating that pretty much all of the huge Matrix rooms are not end-to-end encrypted, and I have no reason to doubt that. Personally I see little point in having such large rooms encrypted anyway, because if you have a large room you will also likely have very relaxed checks on who gets to enter it (e.g. it could be completely public), and if that’s the case, then any party who wishes to monitor the room can simply join it as well. E2ee won’t protect those cases. (While at the same time you lose server-side search and efficient notifications, though at least the latter is being fixed with out-of-envelope notification data—which again leaks a bit more metadata…)
The video also makes it sound like if you have a Matrix Home Server in the network, it’s going to end up hosting CSAM. This is only the case if one of the users of that HS is in a room that has the content, so it’s not like it will just automatically get migrated there. I imagine the vast majority of Matrix Home Servers have limited account creation (e.g. companies, personal home servers, organizations, etc.), eliminating or at least highly discouraging this kind of issue.
Btw, the video makes an excellent point about the Matrix CDN issue, which is also currently being fixed (that change is already merged into the Matrix spec) by requiring authentication. The next step is going to be associating media with messages, making this kind of thing even stricter. All this means IRC bridges will need to start hosting the Matrix-side content themselves, though…
> Because encryption doesn’t work for rooms over 50 people, so any room over that size is public by default.
By public you mean non-encrypted? How does that work? When you create a room, you default to encryption, and there is only one participant (the room creator). And you cannot turn off encryption, so what happens then when you get 51 participants?

Also, existing non-encrypted rooms are never automatically switched to encryption, so the switch must be explicit. Does it refuse to do it if there are more than 50 participants?

I’ve never heard of this limit, nor was I able to find info about it (so a link would be great), but there could be some factor that increases problems as the number of people grows… Perhaps 50 is some practical suggestion for the maximum number of people to have in encrypted sessions?
Am I to understand correctly that if you are running Gtk+ apps in the Gnome compositor, you get this working, but if you are running a non-Gnome compositor with Gtk+ apps, it will not work? Or is it independent of the compositor?
You should have backups. Preferably also snapshots. Then `rm` will feel less scary.
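For example, with LVM a quick safety snapshot could look like this (a sketch; the volume group, LV names, and size are placeholders):

```sh
# Create a 5G copy-on-write snapshot of the home LV
# (volume group, LV name, and size are made-up examples):
sudo lvcreate -s -n home-snap -L 5G /dev/vg0/home

# Recover files later by mounting the snapshot read-only:
sudo mount -o ro /dev/vg0/home-snap /mnt
```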
Then there are the cases where you want the LLM to actually interact with the page, using the current web page state and your credentials.
For example, one might want to tell it to uncheck all the “opt in” checkboxes on the page… and express this task in plain English.
Many useful interactive agent tasks could be achieved with this. The chatbot would be merely the first step.