One thing Reddit dominates is search results. I’m looking things up and seeing so many links to Reddit, which I guess is going to help keep that place relevant (unless those subreddits stay dark).
I wonder how Lemmy and all this fediverse stuff works for that. With more posts, can we expect to see people arriving through search results?
There are a lot of things that factor into the answer, but I think overall it’s gonna be pretty random. Some instances are on domains without “Lemmy” in the name, some don’t include “Lemmy” in the site name configuration, and some, like my own instance, set the `X-Robots-Tag` response header so that search engines that properly honor it won’t crawl or index content on the instance. I’ve actually taken things a step further with mine and put all public paths except the API endpoints behind authentication (so that Lemmy clients and federation still work), which means you can’t browse my instance’s content without going through a proper client, for extra privacy. But that goes off-topic.

Reddit was centralized, so it could be optimized for SEO. Lemmy instances are individually run, with different configuration at both the infrastructure level and the application level. If most people leave things fairly vanilla, that should result in pretty good discovery of Lemmy content across most of these kinds of instances, but I would think most people technical enough to host their own instances have deviated from the defaults and (hopefully) implemented some hardening, which would likely mess with SEO.
So yeah, expect it to be pretty random, but not necessarily unworkable.
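For anyone curious, the `X-Robots-Tag` header mentioned above can be set at the reverse proxy level. A minimal Nginx sketch — the directive values here are the common “don’t index, don’t follow” pair, shown as an illustration rather than any particular instance’s exact configuration:

```nginx
# Ask crawlers that honor X-Robots-Tag not to index or follow anything
# served from this server block. "always" emits the header on every
# response code, including errors and redirects.
add_header X-Robots-Tag "noindex, nofollow" always;
```

Crawlers that don’t honor the header will ignore it, which is why pairing it with authentication (as described above) is the more thorough option.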
Easily the best answer here. I think the people who expect it to work “just like Reddit” are still unfamiliar with federation and aren’t used to thinking things through in those terms.
Not to mention that Google results in general have been pretty trash for a couple years now. I don’t expect fediverse content to be prominent for some time unless there is a dedicated service that indexes everything.
I mean, why couldn’t there be a dedicated service that indexes everything? Whoever makes it and gets it working in a user-friendly manner is going to have a significant level of control over the content that is shown in the results. If you don’t want it, it isn’t indexed. I don’t have to stretch the imagination to think of parties that have good reason to want to be first to do that across ActivityPub as a whole. Mastodon is already a big frontrunner in that regard.
I kind of feel like Kagi will be all over this with its forum ‘lens’ for search, but it’s paid. Maybe Boardreader would focus on this too?
Google search isn’t as good as it used to be, and using startpage.com to break the filter bubble isn’t as effective anymore either. So we probably all need to start remembering how it was in 1999: a different search engine for different things, and looking for what works best.
Your “off-topic” sounded pretty cool to me! I love that that’s something anyone can do when hosting a Lemmy instance: you get to choose whether it’s searchable on the web! Obviously there are search engines that ignore the no-scraping/indexing header, but the rest of what you did should counteract that, noice.
Yeah, if you’re running something yourself, you can do pretty much whatever you want in order to protect it, especially if it’s behind a reverse proxy. Firewalls are great for protecting ports, but reverse proxies can be their own form of protection, and I don’t think a lot of people associate them with “protection” so much. Why expose paths (unauthenticated) that don’t need to be? In my case with my Lemmy instance, all any other instance needs is access to the `/api` path, which I leave open. All the other paths are behind basic authentication that I can get through, so I can still use the Lemmy web interface on my own instance if I want to. But if I don’t want others browsing my instance to see what communities have been added, or I don’t want to give someone an easy glance at what comments or posts my profile has made across all instances (for a little more privacy), then I can simply hide that behind the curtain without losing any functionality.

It’s easy to think of these things when you have relevant experience with web development, debugging web applications, full-stack development, and subject matter knowledge in those and related areas.
I’d be interested in how you did this, this seems like one of the best ways I’ve seen for securing a lemmy instance.
One easy way to do that is to set up something like Nginx as a reverse proxy in front, forward `/api` through untouched, and put everything else behind basic auth.

The steps broadly would be:

- Set up an Nginx instance
- Set up a block in Nginx to proxy `/` to your Lemmy instance
- Set up basic auth on that block
- Set up a smaller block that only proxies calls to `/api` and any other endpoints you want public, the same way as with `/` but without the auth
- Make your Lemmy instance unreachable from the broader internet, e.g. if you’re on a single server, make it listen on 127.0.0.1 instead of 0.0.0.0, but make sure Nginx can still reach it

And you’re done.
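For the basic auth step, the credentials file can be generated without any Apache tooling. A sketch, where the username `admin`, the password `changeme`, and the output filename are all placeholders:

```shell
# Generate a credentials file that nginx's auth_basic_user_file can read.
# openssl's -apr1 flag produces the Apache MD5 hash format, which nginx supports.
entry="admin:$(openssl passwd -apr1 'changeme')"
echo "$entry" > htpasswd
cat htpasswd
```

Point `auth_basic_user_file` at the resulting file and Nginx will prompt for those credentials on the protected block.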
I have a single Nginx container that handles reverse proxying for all my self-hosted services. I break every service out into its own configuration file and use `include` directives to share common configuration across them. For anyone out there with Nginx experience, my Lemmy configuration file should make it fairly clear how I handle what I described above:

```nginx
server {
    include ssl_common.conf;
    server_name lm.williampuckering.com;

    set $backend_client lemmy-ui:1234;
    set $backend_server lemmy-server:8536;

    location / {
        set $authentication "Authentication Required";
        include /etc/nginx/proxy_nocache_backend.conf;

        if ($http_accept = "application/activity+json") {
            set $authentication off;
            set $backend_client $backend_server;
        }

        if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
            set $authentication off;
            set $backend_client $backend_server;
        }

        if ($request_method = POST) {
            set $authentication off;
            set $backend_client $backend_server;
        }

        auth_basic $authentication;
        auth_basic_user_file htpasswd;
        proxy_pass http://$backend_client;
    }

    location ~* ^/(api|feeds|nodeinfo|.well-known) {
        include /etc/nginx/proxy_nocache_backend.conf;
        proxy_pass http://$backend_server;
    }

    location ~* ^/pictrs {
        proxy_cache lemmy_cache;
        include /etc/nginx/proxy_cache_backend.conf;
        proxy_pass http://$backend_server;
    }

    location ~* ^/static {
        proxy_cache lemmy_cache;
        include /etc/nginx/proxy_cache_backend.conf;
        proxy_pass http://$backend_client;
    }

    location ~* ^/css {
        proxy_cache lemmy_cache;
        include /etc/nginx/proxy_cache_backend.conf;
        proxy_pass http://$backend_client;
    }
}
```
It’s definitely in need of some clean-up (for instance, there’s no need for multiple location blocks with identical caching configuration; a single expression could handle all of them and reduce the number of lines required), but I’ve been a bit lazy about cleaning things up. Still, it should serve as a good example and communicate the general idea of what I’m doing.
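As a sketch, that consolidation could merge the `/static` and `/css` blocks (which share identical configuration) into one regex location:

```nginx
# One location matches both /static and /css, since they use the same
# cache and proxy settings, replacing the two separate blocks.
location ~* ^/(static|css) {
    proxy_cache lemmy_cache;
    include /etc/nginx/proxy_cache_backend.conf;
    proxy_pass http://$backend_client;
}
```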
One thing to keep in mind is that Google currently penalizes links that don’t end in common top-level domains like “.com”, “.org” and similar. So something like lemmy.world, if indexed, will rank lower than a site ending in .com with the same keyword density.
Google went from being the most important website on the internet to being more and more useless; it’s amazing to see such a massive company go downhill. But they have so much money that they’ll be able to stay big forever from capital alone.
What do you use as a search engine instead of Google? I feel like I’ve tried everything, but always end up back at Google search.
Been using Ecosia and so far it’s been very good. I haven’t needed to use Google once.
Ecosia uses the Bing algorithm by the way, but with tree planting and better privacy.
Yandex is good if you’re sailing the high seas.
SearXNG
Startpage, SwissCows, DuckDuckGo, Qwant, SearX
Let Google be irrelevant. It kind of already is there in the absence of Reddit.
The nerds always blaze a trail when boring old entrenched media ruins good things. In this case the thing being ruined is a search engine that makes the critical mistake of assuming a traditionally “prestigious” .com equates to value. Fuck the old establishment, it’s time to ditch decrepit big tech and remake the internet the way it was meant to be. It’s time to reinvent how we share and discover content.
Fine with me. We’ll have a lot of users and become bigger than Reddit, and Google will still treat us like second-class citizens. Oh well for Google, they’re missing out.
One would hope! I can find results from lemmy instances on Google - they are definitely crawling them, but their page rank is going to start out very low.
I guess you’d have to try it out, right? Maybe look up some topics and point Google to Lemmy. Honestly haven’t looked much into the whole community beyond setting up a Mastodon account a while back and looking into it a bit more this week.
One thing I’d love to see, and that would probably help quite a lot with searchability, is for blog and CMS software to integrate a “discuss on Fediverse” button instead of a dedicated comment system.
It could bring up possible communities based on blogpost/article tags. And since Lemmy supports pingbacks, the system would know about the discussion threads and could even show the last few posts from each.
To me it seems like win/win situation for all parties involved.
I actually added a custom search engine to Firefox so I can search something on Lemmy. I have the keyword ‘LW’ for Lemmy.World search right now (because Lemmy.ml was offline for a while).

Basically, do a Lemmy search for a placeholder term like `ssss`, then in the resulting URL replace `ssss` with `%s` and copy the entire link:

https://lemmy.world/search/q/%s/type/All/sort/TopAll/listing_type/All/community_id/0/creator_id/0/page/1

Then add it using the ‘Add custom search engine’ extension on Firefox.
You don’t need an extension to add search bookmarks.
Add a bookmark, put `%s` in its URL where the search term should go, and give it a keyword. Then in the address bar you type `keyword search text` to search.
My guess is just that Reddit happily lets search engines crawl it, so that content is well-indexed, and because Reddit threads are often linked to from elsewhere the site is considered good quality.
I’d imagine Lemmy would eventually get to the same point naturally if enough information is shared here. At least, assuming it doesn’t block search engines.
Hmm, although I don’t really understand how federation fits with that, given that it basically means the same content is duplicated across a bunch of domains.
A lot of search engines rely on backlinks to rank the reliability/validity of a site, so even if a given instance were picked up, getting enough places to reference it for it to be seen as a valid source would be a pretty heavy lift.
Unfortunately Lemmy isn’t great for SEO because lemmy-ui heavily relies on JavaScript to render the page, which search bots avoid.
I imagine it’ll take a while for fediverse stuff to be high up on search results but it should still work and appear the same way as reddit posts do, just using the federated domains instead of all only being on one site. Hopefully people do start arriving for that reason though!
I would expect Lemmy to show up equally in the search results if there is enough relevant content. My tiny tiny instance is already showing up in search results, crawlers can definitely find stuff on here. It would be great if at some point we can append “lemmy” to search queries to get the good stuff like we could with Reddit.