Inverting the Web
We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.
Given that search engine companies can barely keep up with the constant barrage of attacks, commonly known as "SEO", intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.
@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).
Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.
My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability that would render them unnecessary. THAT still has further issues (mostly around trust)...
@freakazoid ... and reputation.
But a mechanism in which:
1. Websites could self-index (a rough sketch follows below).
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.
... might disrupt things a tad.
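As a rough sketch of point 1, a site could publish something like the record below alongside each page. The format and every field name here are invented for illustration, not an existing standard:

```python
# Hypothetical self-published index record; format and field names are
# invented for illustration, not an existing standard.
import hashlib
import json
import time

def make_index_record(url, title, terms, body):
    """Build a minimal self-index entry a site could publish for one page."""
    return {
        "url": url,                                        # where the page lives today
        "title": title,
        "terms": sorted(set(terms)),                       # keywords the site claims
        "content_hash": hashlib.sha256(body).hexdigest(),  # lets auditors check the claim
        "first_published": int(time.time()),               # authorship / priority claim
        # A real scheme would also carry a publisher signature (e.g. Ed25519)
        # so aggregators and auditors could verify who asserted the record.
    }

record = make_index_record(
    "https://example.com/posts/inverting-the-web",
    "Inverting the Web",
    ["search", "distributed", "indexing"],
    b"<html>...</html>",
)
print(json.dumps(record, indent=2))
```

Aggregation and auditing (points 2 and 4) then become checking that the claimed terms and content hash actually match the fetched page.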
NB: the reputation bits might build off social / netgraph models.
But yes, I've been thinking on this.
Also YaCy, as sean mentioned.
There's also something that is/was used for Firefox keyword search, I think OpenSearch, a standard used by multiple sites, pioneered by Amazon.
Being dropped by Firefox BTW.
That provides a query API only, not a distributed index, though.
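For reference, OpenSearch works by publishing a description document containing a URL template. A minimal sketch of expanding such a template, with a placeholder endpoint (the {searchTerms} and optional {startPage?} parameters are from the OpenSearch spec):

```python
# Expanding an OpenSearch-style URL template. {searchTerms} and the
# optional {startPage?} parameter come from the OpenSearch spec; the
# endpoint itself is a placeholder.
from urllib.parse import quote

TEMPLATE = "https://example.com/search?q={searchTerms}&page={startPage?}"

def expand(template, search_terms, start_page=1):
    return (template
            .replace("{searchTerms}", quote(search_terms))
            .replace("{startPage?}", str(start_page)))

print(expand(TEMPLATE, "distributed search"))
# -> https://example.com/search?q=distributed%20search&page=1
```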
@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.
If you request by IP, you'll get only the default / primary host on that IP address.
That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.
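A minimal sketch of that behaviour: connect by raw address and let the Host header choose the site. The IP and hostnames in the usage comment are placeholders:

```python
# Sketch: fetch "/" from a given IP while naming a vhost via the Host
# header. The IP and hostnames in the usage comment are placeholders.
import http.client

def fetch_by_ip(ip, vhost, path="/"):
    """Connect to `ip` directly; `vhost` in the Host header picks the site."""
    conn = http.client.HTTPConnection(ip, 80, timeout=10)
    try:
        conn.request("GET", path, headers={"Host": vhost})
        resp = conn.getresponse()
        return resp.status, resp.read(200)
    finally:
        conn.close()

# e.g. fetch_by_ip("203.0.113.10", "example.com") and
#      fetch_by_ip("203.0.113.10", "other-site.example") can return
#      completely different content from the same IP address;
#      omitting the Host header gets the server's default vhost.
```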
@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden services, etc., don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.
@mathew I think this discussion hinges more on the host part, and what it might reference other than DNS as an HTTP (or HTTPS) protocol reference, so as to break from the DNS oligarchy.
An alternative is to define other protocol references that address specific content, as with, say, doi://.
There's the PURL concept of Internet Archive.
And creating a self-sustaining decentralised namespace is challenging.
@dredmorbius @freakazoid @kick @enkiv2 Back even further, the plan was that the web would eventually use URIs, which would be dereferenced to fragile URLs. But the host-independent transport layer never happened because one-way links that break were "good enough". URIs only really survived in the DTDs.
@enkiv2 So, no, you _don't_ need content permanently at addresses.
You only need persistently accessible _gateways_ to URI-referenced content, much as you're already starting to see through nascent schemes such as DOI-based URIs for academic articles, e.g. a doi:-prefixed identifier.
Web browsers don't yet know what to do with that. A DDG bang search, Sci-Hub, or https://doi.org should though.
Other content-based addressing methods likewise.
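A sketch of such a gateway fallback for doi: URIs, using Python's standard library. The DOI in the usage comment is a placeholder, and real resolvers may rate-limit or block scripted requests:

```python
# Client-side fallback for doi: URIs: rewrite to the https://doi.org
# gateway and follow its redirect chain to the publisher's URL.
import urllib.request

def resolve_doi(uri):
    """Map doi:<name> onto the doi.org gateway and return the final URL."""
    assert uri.startswith("doi:")
    req = urllib.request.Request("https://doi.org/" + uri[len("doi:"):],
                                 method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()   # urlopen follows redirects automatically

# Usage (placeholder DOI; a real one would redirect to the publisher):
#   resolve_doi("doi:10.1234/placeholder")
```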
@dredmorbius @enkiv2 @mathew @freakazoid @kick
This lets us keep HTTP for transport through a hack, but I'm not sure how useful that is in a world where IPFS, DAT, and BitTorrent magnet links all exist & are mature technologies. (Opera has supported BitTorrent as transport for years, & there are plugins for IPFS and DAT, along with fringe browsers like Brave that support them out of the box.) HTTP has already been replaced by HTTPS, which in turn is being replaced by QUIC in most cases now...
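For contrast, the principle those systems share fits in a few lines. This is deliberately simplified and not the actual IPFS CID or BitTorrent infohash format:

```python
# The content-addressing principle: the name is derived from the bytes,
# not from a host. (Real IPFS CIDs and BitTorrent infohashes encode
# more structure; this is deliberately simplified.)
import hashlib

def content_address(data):
    return "sha256:" + hashlib.sha256(data).hexdigest()

doc = b"Inverting the Web"
print(content_address(doc))
# Anyone holding bytes that hash to this address can serve the document,
# and any recipient can verify it without trusting the host it came from.
```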
@enkiv2 @dredmorbius @mathew @freakazoid @kick
In other words, in terms of getting widespread support for a big protocol change, the killer isn't compatibility with or similarity to already-existing standards like HTTP but, basically, whether or not it ships with Chrome (and thus with every major browser other than Firefox).