Inverting the Web 

@freakazoid What methods *other* than URLs are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).

Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.

My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; such a capability would render them unnecessary. THAT still has further issues (mostly around trust)...

@freakazoid ... and reputation.

But a mechanism in which:

1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.

... might disrupt things a tad.
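
A rough sketch of what the list above might translate into, assuming a hypothetical JSON record a site could publish at a well-known path (the path and field names are invented for illustration; no such standard exists):

```python
# Hypothetical self-published index record a site might expose, e.g. at
# /.well-known/site-index.json (path and field names are invented here).
import json, hashlib

index_record = {
    "site": "https://example.org",
    "generated": "2020-01-15T00:00:00Z",
    "entries": [
        {
            "url": "https://example.org/articles/inverting-the-web",
            "title": "Inverting the Web",
            "terms": ["search", "distributed index", "URI"],
            "first_published": "2020-01-10",
        }
    ],
}

# A content hash over the canonicalised record lets aggregators that share
# and forward indexes detect tampering (the auditing point above).
digest = hashlib.sha256(
    json.dumps(index_record, sort_keys=True).encode()
).hexdigest()
print(digest)
```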

Somewhat more:
news.ycombinator.com/item?id=2

NB: the reputation bits might build off social / netgraph models.

But yes, I've been thinking on this.

@enkiv2 I know of Searx: en.wikipedia.org/wiki/Searx

Also YaCy, as sean mentioned.

There's also something that is/was used for Firefox keyword search, OpenSearch I think: a standard used by multiple sites and pioneered by Amazon.

Being dropped by Firefox BTW.

That provides a query API only, not a distributed index, though.

@freakazoid @drwho
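
For context, a minimal sketch of OpenSearch's query-template mechanism, with a made-up endpoint; the point is that it only forwards a query to one site's own search engine, with no shared index behind it:

```python
# Sketch of an OpenSearch-style query template: the site publishes a URL
# template, and a client substitutes the search terms. This only forwards a
# query to that one site's own search engine; there is no shared or
# distributed index. The endpoint below is made up for illustration.
from urllib.parse import quote_plus

template = "https://example.org/search?q={searchTerms}"

def opensearch_url(template: str, terms: str) -> str:
    # {searchTerms} is the placeholder OpenSearch templates use.
    return template.replace("{searchTerms}", quote_plus(terms))

print(opensearch_url(template, "distributed web index"))
```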

@dredmorbius @enkiv2 @freakazoid YaCy isn't federated, but Searx is, yeah. YaCy is p2p.
@dredmorbius @enkiv2 @freakazoid Also, the initial criticism of the URL system doesn't entirely hold: the DNS is annoying, but it isn't needed for accessing content on the WWW. You can navigate directly to public IP addresses and it works just as well, which lets you skip DNS entirely. (You can even get HTTPS certs for IP addresses.)

Still centralized, which is bad, but centralized in a way that you can't really get around in internetworked communications.

@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.

If you request by IP, you'll get only the default / primary host on that IP address.

That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.

@enkiv2 @freakazoid
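
A small illustration of that point, using a placeholder IP and hostnames (nothing here is a real server): connecting by IP skips DNS, but the Host header still determines which virtual host's content you get.

```python
# Illustration: one IP can serve many sites; the Host header in the HTTP
# request selects which one. The IP and hostnames below are placeholders
# (TEST-NET-3 range), not real servers.
import http.client

ip = "203.0.113.10"
for host in ("site-a.example", "site-b.example"):
    conn = http.client.HTTPConnection(ip, 80, timeout=5)
    # We connect by IP, so DNS is never consulted; the hostname travels
    # only in the Host header, which the webserver uses to pick the vhost.
    conn.request("GET", "/", headers={"Host": host})
    resp = conn.getresponse()
    print(host, resp.status, resp.getheader("Server"))
    conn.close()
```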

@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden sites, etc., don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.

@freakazoid Question: is there any inherent reason for a URL to be based on DNS hostnames (or IP addresses)?

Or could an alternate resolution protocol be specified?

If not, what changes would be required?

(I need to read the HTTP spec.)

@kick @enkiv2

@freakazoid Answering my own question: no, there's not:

"As far as HTTP is concerned, Uniform Resource Identifiers are simply formatted strings which identify--via name, location, or any other characteristic--a resource."

tools.ietf.org/html/rfc2616#se

@kick @enkiv2
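
To make that concrete: per RFC 3986 the authority component of a URI is just a string, and nothing in the syntax forces it to be a DNS name. A quick sketch (the doi:// form is illustrative, written with slashes, and not a scheme browsers currently resolve):

```python
# Sketch: splitting URIs shows the authority is opaque to the syntax; it can
# be a DNS name, an IP literal, or something else entirely, provided some
# resolver for the scheme exists. The doi:// example is illustrative only.
from urllib.parse import urlsplit

for uri in (
    "http://example.org/page",          # DNS-based authority
    "http://203.0.113.10/page",         # IP-literal authority
    "doi://10.1007/978-3-319-47458-8",  # hypothetical non-DNS resolution
):
    parts = urlsplit(uri)
    print(parts.scheme, "| authority:", parts.netloc, "| path:", parts.path)
```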

@dredmorbius @freakazoid @kick @enkiv2
Earlier RFCs had defined meanings for the parts of HTTP URLs, but vendors ignored the standards, so now URL paths are just arbitrary strings which could mean anything.

@mathew I think this discussion hinges more on the host part, and what it might reference other than DNS as an HTTP (or HTTPS) protocol reference, so as to break from the DNS oligarchy.

An alternative is to define other protocol references, as with, say, doi://, which address specific content.

There's the PURL concept of Internet Archive.

And creating a self-sustaining decentralised namespace is challenging.

@freakazoid @kick @enkiv2

@dredmorbius @freakazoid @kick @enkiv2 Back even further, the plan was that the web would eventually use URIs, which would be dereferenced to fragile URLs. But the host-independent transport layer never happened because one-way links that break were "good enough". URIs only really survived in the DTDs.

@mathew More on "why" would be interesting.

Insufficient motivation?
Sufficient resistance?
Excess complexity?
Apathy?

@freakazoid @kick @enkiv2

@dredmorbius @mathew @freakazoid @kick @enkiv2
A URL/URI distinction (with permanent URIs) would mean having static content at addresses & having that be guaranteed. There wasn't initial support for any guarantees built into the protocol, & commercial uses of web tech relied upon the very lack of stasis to make money: access control, personalized ads, periodically-updating content like blogs, web services (a way to productize open source code & protect proprietary code from disassembly).


@enkiv2 So, no, you _don't_ need content permanently at addresses.

You only need persistently accessible _gateways_ to URI-referenced content, much as you're already starting to see through nascent schemes such as DOI-based URIs for academic articles, e.g.:

doi://10.1007@978-3-319-47458-8

Web browsers don't yet know what to do with that. A DDG bang search, Sci-Hub, or doi.org should though.

Other content-based addressing methods likewise.

@mathew @freakazoid @kick
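
A sketch of that gateway idea: a client that doesn't understand a doi: identifier natively can rewrite it into an HTTPS request against whatever resolvers it trusts. doi.org is the canonical DOI resolver; the secondary gateway below is an assumption for illustration.

```python
# Sketch: rewriting a DOI-style identifier into URLs at persistent gateways.
# doi.org is the canonical resolver; the mirror below is hypothetical.
GATEWAYS = [
    "https://doi.org/{doi}",             # canonical DOI resolver
    "https://example-mirror.org/{doi}",  # hypothetical secondary gateway
]

def gateway_urls(identifier: str) -> list[str]:
    # Accept "doi://10.1007/..." or "doi:10.1007/..." forms.
    doi = identifier.removeprefix("doi://").removeprefix("doi:")
    return [g.format(doi=doi) for g in GATEWAYS]

for url in gateway_urls("doi://10.1007/978-3-319-47458-8"):
    print(url)
```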

@dredmorbius @enkiv2 @mathew @freakazoid @kick
This lets us keep HTTP for transport through a hack but I'm not sure how useful that is in a world where IPFS, DAT, and bittorrent magnet links all exist & are mature technologies. (Opera has supported bittorrent as transport for years, & there are plugins for IPFS and DAT along with fringe browsers like Brave that support them out of the box.) HTTP has already been replaced by HTTPS which has been replaced with QUIC in most cases now...

@enkiv2 @dredmorbius @mathew @freakazoid @kick
In other words, in terms of getting widespread support for a big protocol change, the killer isn't compatibility with or similarity to already-existing standards like HTTP but, basically, whether or not it ships with Chrome (and thus with every major browser other than Firefox).
