Inverting the Web 

@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).

Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.

My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; such a capability would render them unnecessary. THAT still has further issues (mostly around trust)...

@freakazoid ... and reputation.

But a mechanism in which:

1. Websites could self-index (see the sketch after this list).
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.

... might disrupt things a tad.
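
Purely as a sketch of what I mean by self-indexing (everything here, from the field names to the publication scheme, is my own invention, not an existing standard):

```python
import hashlib
import json

# Hypothetical self-published index entry: the site declares what it
# hosts, so third-party crawling becomes optional rather than required.
def index_entry(url, title, terms, body):
    return {
        "url": url,
        "title": title,
        "terms": sorted(terms),
        # A content hash supports auditing: anyone can fetch the URL
        # and check that the entry describes what is actually served.
        "sha256": hashlib.sha256(body.encode("utf-8")).hexdigest(),
    }

site_index = [
    index_entry(
        "https://example.com/posts/inverting-the-web",
        "Inverting the Web",
        ["search", "distributed", "indexing"],
        "full text of the page goes here...",
    )
]

# Peers could fetch this from some well-known location, merge it with
# their own indexes, and forward it; a signature over the JSON (elided
# here) would address the authorship / first-publication point.
print(json.dumps(site_index, indent=2))
```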

Somewhat more:
news.ycombinator.com/item?id=2

NB: the reputation bits might build off social / netgraph models.

But yes, I've been thinking on this.

@enkiv2 I know Searx is: en.wikipedia.org/wiki/Searx

Also YaCy, as sean mentioned.

There's also the standard that is/was used for Firefox keyword search: OpenSearch, I think. It's used by multiple sites and was pioneered by Amazon.

Being dropped by Firefox BTW.

That provides a query API only, not a distributed index, though.
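
For a concrete sense of why it's query-only: an OpenSearch description boils down to a per-site URL template with a standard {searchTerms} placeholder that the client fills in. Roughly (the host here is made up):

```python
from urllib.parse import quote

# The heart of an OpenSearch description: a URL template per site.
# {searchTerms} is the standard placeholder; example.com is made up.
template = "https://example.com/search?q={searchTerms}"

# The client substitutes the user's query and fetches the result page;
# no index is exchanged, only a way to *ask* the site's own search.
query_url = template.replace("{searchTerms}", quote("distributed search"))
print(query_url)  # https://example.com/search?q=distributed%20search
```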

@freakazoid @drwho

@dredmorbius @enkiv2 @freakazoid YaCy isn't federated, but Searx is, yeah. YaCy is p2p.
@dredmorbius @enkiv2 @freakazoid Also, the initial criticism of the URL system doesn't entirely hold: DNS is annoying, but it isn't needed for accessing content on the WWW. You can navigate directly to public IP addresses and it works just as well, letting you skip DNS entirely. (You can even get HTTPS certs for IP addresses.)

IP allocation is still centralized, which is bad, but it's centralized in a way that you can't really get around in internetworked communications.

@kick HTTP isn't fully DNS-independent. For virtual hosts on the same IP, the webserver distinguishes between sites based on the Host header of the HTTP request.

If you request by IP, you'll get only the default / primary host on that IP address.

That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.
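
A minimal sketch of that hostname-awareness (assuming 93.184.216.34 is still example.com's address; it may have changed):

```python
import http.client

# Connect by raw IP, so no DNS lookup is needed for this request.
# (93.184.216.34 was example.com's address when I checked; may change.)
conn = http.client.HTTPConnection("93.184.216.34", 80)

# The Host header, not the IP, selects the virtual host; change it and
# the same server may hand back a different site, or just the default.
conn.request("GET", "/", headers={"Host": "example.com"})
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()
```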

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid I don't think it's that big of a problem (famous last words)? If you're playing with virtual hosts, then a planned local-network distribution setup à la P9/Inferno could quite easily route initial (all?) connections through a single box/host, couldn't it? I haven't read the HTTP spec since I was a child, so I'm not sure if there's anything that'd prevent this.

@kick Sorry, not following.

"Virtual hosts" in the HTTP sense are simply HTML targets _on that webserver_ which are differentiated by the requested hostname (fully or partially qualified). Not in the virtual machine (Xen, VMWare, qemu, etc.) sense.

So local network distribution is irrelevant?

Not familiar with (Plan 9?) Inferno.

The question is how distributed hosts across the Internet can request HTTP resources via URLs, without DNS.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid Ohh, I misinterpreted (bit tired; just got finished jogging in ankle-deep snow; thinking suboptimally). Assumed you meant over different boxes using same IP.