Inverting the Web
We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.
Given that search engine companies can barely keep up with the constant barrage of attacks, commonly known as "SEO", intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.
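For context, the data structure in question can be sketched in a few lines. This is a toy, in-memory version; real engines shard and replicate it across machines, and distributing it across untrusted nodes is exactly the hard part:

```python
from collections import defaultdict

# Toy inverted index: maps each term to the set of document IDs containing it.
def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {"a": "the web is large", "b": "the index is large"}
index = build_index(docs)
print(sorted(index["large"]))  # ['a', 'b']
```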
@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).
Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.
My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; if it had one, they'd be unnecessary. THAT still has further issues (mostly around trust)...
@freakazoid ... and reputation.
But a mechanism in which:
1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.
... might disrupt things a tad.
NB: the reputation bits might build off social / netgraph models.
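As a sketch of the first two points, a site could publish its own index as a simple document that aggregators fetch, merge, and forward. The format below is invented purely for illustration, not any existing standard:

```python
import json

# Hypothetical self-published site index: term -> list of page paths.
site_index = {
    "site": "https://example.com",
    "terms": {
        "gardening": ["/posts/soil", "/posts/compost"],
        "compost": ["/posts/compost"],
    },
}

def merge(aggregate, published):
    # Fold one site's published index into an aggregator's combined index.
    for term, paths in published["terms"].items():
        urls = [published["site"] + p for p in paths]
        aggregate.setdefault(term, []).extend(urls)
    return aggregate

combined = merge({}, site_index)
print(json.dumps(combined, indent=2))
```

Auditing (point 4) would then consist of spot-checking that the listed pages actually contain the claimed terms.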
But yes, I've been thinking on this.
Also YaCy as sean mentioned.
There's also something that is/was used for Firefox keyword search (OpenSearch, I think), a standard used by multiple sites and pioneered by Amazon.
Being dropped by Firefox BTW.
That provides a query API only, not a distributed index, though.
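For reference, an OpenSearch description document looks roughly like this (the names and URL are placeholders); it tells a client how to construct a query URL, nothing more:

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Url type="text/html"
       template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```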
@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.
If you request by IP, you'll get only the default / primary host on that IP address.
That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.
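Concretely, the hostname travels in the request itself; two virtualhosts on the same IP are distinguished only by this header:

```
GET /index.html HTTP/1.1
Host: example.com
```

Send the same request with only a bare IP in the Host header, and the server can serve nothing but its default site.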
@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden services, etc., don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.
@dredmorbius @kick @enkiv2 HTTP URLs don't have any way to specify the lookup mechanism. RFC 3986 says the part after the // (and optional authentication info followed by @) is a "registered name" or an address. It doesn't say the name has to be resolved via DNS; it says it's up to the local system to decide how to resolve it. So if you just wanted self-certifying names or whatever, you could use otherwise unused TLDs the way Tor does with .onion.
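A resolver along those lines might dispatch on a reserved pseudo-TLD before ever touching DNS. The .onion suffix is real (reserved in RFC 7686); the .ipfs suffix below is invented for illustration:

```python
from urllib.parse import urlparse

# Sketch: route a URL's hostname to a resolver based on its pseudo-TLD,
# falling back to ordinary DNS for everything else.
def resolve(url):
    host = urlparse(url).hostname
    if host.endswith(".onion"):
        return ("tor", host)   # hand off to a Tor SOCKS proxy
    if host.endswith(".ipfs"):
        return ("ipfs", host)  # hypothetical content-addressed lookup
    return ("dns", host)       # ordinary DNS resolution

print(resolve("http://example.onion/page"))  # ('tor', 'example.onion')
```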
@freakazoid @dredmorbius @kick @enkiv2
Just to point out -- URLs/URIs are specced separately (by the IETF, in RFC 3986) and aren't part of HTTP. (You guys know this, but it's important to make the distinction here.) HTTP URLs are always host-based & so can't be content-addressed. But you can stick an SSB, IPFS, or onion address in an HTML anchor tag.
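For instance (the hash and onion address below are placeholders, not real values):

```html
<!-- Host-based link: meaning depends on who controls the hostname -->
<a href="https://example.com/article.html">host-addressed</a>

<!-- Content-addressed / onion links: resolvable without DNS, given a
     client or proxy that understands the scheme -->
<a href="ipfs://<content-hash>/article.html">IPFS</a>
<a href="http://<onion-address>.onion/article.html">onion service</a>
```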
@enkiv2 It's also a transition path, which addresses another element of this question.
If we're looking at coming up with a DNS-independent addressing scheme, then operating a set of reflectors, relays, or gateways (similar to Usenet-Email, Usenet-Web, or Internet-BBS gateways) might offer a path.
The relays _might_ be an online infrastructure, including a distributed one (in both IP and namespace) _or_ a locally-provisioned one as an HTTP or Tor proxy.
@enkiv2 The advantage is in being able to partition the URL into the DNS-dependent and -independent elements.
The proxy is DNS-dependent (though you can override it locally). The content, metadata, role-based, or other location-independent scheme is passed on to the proxy.
This gives a backwards-compatible path from the Old Web to the New.
And on the New Web you'd have the location-independent addressing as standard.
Meantime You Can Get There From Here. Which helps.
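The partitioning described above can be sketched as follows; the gateway hostname and hash are placeholders, and the URL layout is one plausible convention (the one public IPFS gateways happen to use), not a settled standard:

```python
from urllib.parse import urlparse

# Split a gateway-style URL into its DNS-dependent prefix (the proxy host)
# and its DNS-independent remainder (the location-independent address).
def split_gateway_url(url):
    parts = urlparse(url)
    scheme, _, address = parts.path.lstrip("/").partition("/")
    return {"proxy": parts.netloc,   # resolved via DNS (or overridden locally)
            "scheme": scheme,        # e.g. a content-addressing scheme name
            "address": address}      # passed through to the New Web untouched

print(split_gateway_url("https://gateway.example/ipfs/QmExampleHash/index.html"))
```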
@dredmorbius @enkiv2 @freakazoid @kick You can try to fight this way, but you're wasting your time. A global paradigm shift in cyberspace architecture is necessary. Still, in the meantime, we can find clever tricks to fuck them, but in my opinion this should never distract us from building our own standards and alternative cyberspace architecture. We're ahead of Microsoft in terms of concepts. Never lose sight of that.