Inverting the Web
We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.
Given that search engine companies can barely keep up with the constant barrage of attacks, commonly known as "SEO", intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.
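(For context, an inverted index is just a map from terms to the documents containing them. A minimal Python sketch, with made-up URLs and text, purely to anchor the term:)

```python
from collections import defaultdict

# Minimal inverted-index sketch: map each term to the set of documents
# containing it. Documents and URLs here are purely illustrative.
docs = {
    "https://example.org/a": "distributed search without central control",
    "https://example.org/b": "search engines index the web centrally",
}

index = defaultdict(set)
for url, text in docs.items():
    for term in text.lower().split():
        index[term].add(url)

print(sorted(index["search"]))  # both documents match "search"
```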
@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).
Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.
My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; such a capability would render them unnecessary. THAT still has further issues (mostly around trust)...
@freakazoid ... and reputation.
But a mechanism in which:
1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.
... might disrupt things a tad.
NB: the reputation bits might build off social / netgraph models.
But yes, I've been thinking on this.
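(To make the list above a bit more concrete, here's a minimal sketch of what a self-published site index might look like. The field names and the structure are assumptions for illustration, not an existing standard; the auditing and authorship items would realistically need public-key signatures rather than the bare hash shown here:)

```python
import hashlib
import json

# Hypothetical self-published site index (item 1 in the list above).
# All field names are invented for illustration.
site_index = {
    "site": "https://example.org",
    "generated": "2020-01-05T00:00:00Z",
    "terms": {
        "search": ["/posts/inverting-the-web"],
        "dns": ["/posts/inverting-the-web", "/posts/dot-org"],
    },
}

payload = json.dumps(site_index, sort_keys=True).encode()
# A content fingerprint an aggregator could use to deduplicate shared or
# forwarded copies (item 2) and to detect tampering in transit (item 4);
# a real scheme would sign this with the publisher's key (item 5).
fingerprint = hashlib.sha256(payload).hexdigest()
print(fingerprint)
```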
Also YaCy as sean mentioned.
There's also something that is/was used for Firefox keyword search: OpenSearch, I think, a standard used by multiple sites and pioneered by Amazon.
Being dropped by Firefox BTW.
That provides a query API only, not a distributed index, though.
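(The query-API part is just a URL template from the site's OpenSearch description document; expanding it is trivial. A sketch, with a hypothetical endpoint:)

```python
from urllib.parse import quote

# OpenSearch description documents expose a URL template like this one;
# the search endpoint below is hypothetical.
template = "https://search.example.org/search?q={searchTerms}&page={startPage?}"

def expand(template: str, terms: str, page: int = 1) -> str:
    # Minimal expansion of the two most common OpenSearch parameters.
    return (template
            .replace("{searchTerms}", quote(terms))
            .replace("{startPage?}", str(page)))

print(expand(template, "distributed search"))
```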
@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.
If you request by IP, you'll get only the default / primary host on that IP address.
That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.
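(Concretely: the same TCP endpoint serves different sites depending on the Host header, which is why a bare-IP request only reaches the default vhost. A sketch; the IP is a TEST-NET address and the hostnames are made up:)

```python
import http.client

# Two requests to the same (made-up) IP address; only the Host header
# differs, and that is what selects the virtualhost.
for host in ("site-a.example", "site-b.example"):
    conn = http.client.HTTPConnection("203.0.113.10", 80, timeout=5)
    conn.request("GET", "/", headers={"Host": host})
    print(host, conn.getresponse().status)
    conn.close()
```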
@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden sites, etc, don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.
@dredmorbius @kick @enkiv2 HTTP URLs don't have any way to specify the lookup mechanism. RFC3986 says the part after the // and optional authentication info followed by @ is a "registered name" or an address. It doesn't say the name has to be resolved via DNS but does say it is up to the local system to decide how to resolve it. So if you just wanted self-certifying names or whatever you can use otherwise unused TLDs the way Tor does with .onion.
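(i.e., nothing in the URL syntax itself binds the authority to DNS; resolution is local policy. A quick illustration; the .key TLD and the name are hypothetical, used the way Tor uses .onion:)

```python
from urllib.parse import urlsplit

# The "registered name" below is a hypothetical self-certifying name in
# an otherwise-unused TLD. RFC 3986 leaves it to the local system to
# decide how (or whether) to resolve it to an address.
u = urlsplit("http://ze2djl4kqxasn3od.key/some/page")
print(u.scheme, u.hostname, u.path)
```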
There are alternate URLs, e.g., irc://host/channel
I'm wondering if a standard for an:
http://<address-proto><delim><address> might be specifiable.
Onion achieves this through the onion TLD. But using a reserved character ('@' comes to mind) might allow for an addressing protocol _within_ the HTTP URL itself, to be used....
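(A rough sketch of how the '@' idea would parse with existing tooling; the "dat" tag and the hash are illustrative assumptions, not a proposal anyone has standardised:)

```python
from urllib.parse import urlsplit

# Hypothetical: the addressing protocol rides in the userinfo slot,
# i.e. http://<address-proto>@<address>/path. Just showing that existing
# parsers already keep the pieces separable.
u = urlsplit("http://dat@778f8d955175c92e4ced5e4f5563f69bfec0c86cc6f6703"
             "52c457943666fe639/index.html")
print(u.username)   # 'dat'  -> the addressing protocol
print(u.hostname)   # the protocol-specific address
print(u.path)       # '/index.html'
```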
@enkiv2 Bang simply as available notation. Now that I think of it, it might make a good routing _mechanism_ specifier.
Again, I'm not sure this is better than individual protocols.
Another option would be to specify some service proxy, which could then handle routing. URI encoding doesn't seem to directly provide that; applications/processes define their own proxy use.
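(The proxy approach already exists in the wild for at least one alternate addressing scheme: IPFS HTTP gateways take the content address as a path. A sketch; only the /ipfs/<cid> path convention is real, the dat gateway and the addresses are placeholders:)

```python
# Sketch of the "service proxy" option: the client hands any non-DNS
# address to a proxy/gateway that knows how to route it.
GATEWAYS = {
    "ipfs": "https://ipfs.io/ipfs/{addr}",          # real path convention
    "dat":  "https://gateway.example/dat/{addr}",   # hypothetical
}

def via_proxy(scheme: str, addr: str, path: str = "") -> str:
    return GATEWAYS[scheme].format(addr=addr) + path

print(via_proxy("ipfs", "bafy-placeholder", "/index.html"))
```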
@dredmorbius @enkiv2 @kick @freakazoid
Bang was used in Usenet addresses to separate a series of hosts in order to specify a route, since UUCP transfers were done by machines calling specific other known machines nightly over landline phones. You'd see bang routing in Usenet archives as late as the early 90s. I'd be surprised if it's not still theoretically supported in URLs.
@enkiv2 Email also.
I used (though understood poorly) bang-path routing at the time.
So yes, I'm familiar with the usage and notation. The question of whether or not it's appropriate here is ... the question.
At present, HTTP URLs *presume* DNS.
The problem is that DNS itself is proving problematic in numerous ways that ... don't seem reasonably tractable. The dot-org fiasco is pretty much the argument I've been looking for against the "just host your own domain" line.
@enkiv2 That has, at best, worked with difficulty for large organisations -- domain lapses, etc., occur with regularity.
Domain squatting, typosquatting, and a whole mess of other stuff are long-standing issues.
In that light, Google's killing the URL _might_ not be _all_ bad, but they've been Less Than Clear on what their suggested alternative is. And I trust them less far than I can throw them.
For individuals, persistent online space is a huge issue.
@enkiv2 Then there's the whole question of how many spaces is enough. There are arguments for _both_ persistence _and_ flexibility / alternatives, and locking everyone into a _single_ permanent identity generally Does Not End Well.
The notion of a time-indexed identity might address some of this. Internet Archive's done some work in this area. Assumptions of network immutability tend to break. In time.
@dredmorbius @enkiv2 @kick @freakazoid
Yeah. Any immutability needs to be enforced, because when the W3C declared that changing web pages is Very Rude, all the scam artists & incompetents did it anyway. Content archival projects like the Wayback Machine become easier if you have static addresses for static content & some kind of mechanism to repoint at a different set of static documents (like IPFS+IPNS).
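(The static-address-for-static-content part is easy to sketch; the mutable pointer on top is what IPNS-style naming adds. A toy version, not the actual IPFS/IPNS machinery, which additionally signs name records:)

```python
import hashlib

# Toy content addressing: a document's address is the hash of its bytes,
# so an address can never silently point at changed content.
store = {}

def publish(content: bytes) -> str:
    addr = hashlib.sha256(content).hexdigest()
    store[addr] = content
    return addr

# Toy mutable name, IPNS-style: a stable name repointed at a new
# immutable address whenever a revised set of documents is published.
names = {}

v1 = publish(b"original page")
names["my-site"] = v1

v2 = publish(b"revised page")
names["my-site"] = v2   # repoint; v1 is still retrievable by its hash

print(store[v1], store[names["my-site"]])
```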
@enkiv2 I'd argue that there's a place for redacting content -- see the Bryan Cantril thread from 1996 previously referenced. That's ... embarrassing. Not particularly useful, though perhaps as a cautionary tale.
There's a strong argument that most social media should be fairly ephemeral and reach-limited.
There are exceptions, and *both* promoting *and* concealing information can be done for good OR evil.
@dredmorbius @enkiv2 @kick @freakazoid
In terms of negative feedback -- I don't consider redaction of already-published material to be the best or most useful form. We see problems that could be solved by this only if mirroring & the Wayback Machine & screenshots didn't exist. I'm more hopeful about solving the dunking problem with norms.
Reach is a lot more nuanced & powerful. Permanent & reach-limited like SSB feels like the right thing for nominally-public stuff.
@enkiv2 @dredmorbius @kick The idea that norms can solve this problem is incredibly naive. Norms aren't going to fix it when someone hacks into your computer and videos you masturbating to something perfectly legal but weird and then publishes it all over the 'net. The average person isn't going to become enlightened enough in our lifetimes for this not to cause significant harm to someone.
@enkiv2 @dredmorbius @kick Systems like the Wayback Machine exist in a gray area right now. They take down stuff when asked, but it's a PITA to ask everyone who might mirror a piece of content. There needs to be a standard for this, such that when you have a valid order to take something down, any site that mirrors the content can process that automatically.
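(No such standard exists as far as I know, but a minimal sketch of a machine-readable takedown record that mirrors could poll and act on automatically might look like this; every field name here is invented:)

```python
import json

# Hypothetical takedown notice for automatic processing by mirrors.
# All fields are invented for illustration; nothing here is a standard.
takedown = {
    "content_hash": "sha256:placeholder",
    "original_url": "https://example.org/the-page",
    "order_id": "case-placeholder",
    "issued": "2020-01-05",
    "action": "unpublish",   # e.g. unpublish vs. embargo-until-a-date
}

def should_remove(local_hash: str, notice: dict) -> bool:
    # A mirror checks the hash of its own copy against the notice.
    return notice["action"] == "unpublish" and notice["content_hash"] == local_hash

print(should_remove("sha256:placeholder", takedown))
print(json.dumps(takedown, indent=2))
```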
@freakazoid There's also a wide space between "destroy all extant copies" and "embargo publication until the principals and associates are well dead", as is presently the case for many personal materials.
Note that this need not merely be the _author_, but also those directly affected or mentioned. Sometimes additional others (descendants / associates).
But at some point, "private" _should_ pass into "common history". Usually.
Cultural appropriation remains an argument.
@freakazoid ... that there are possible descendants.
How should matters be adjudicated?
(I'm not saying any one claim or right is correct or wrong. Only that the story can become ... complicated. Multiple alternate variants might be formulated. Many with factual basis according to anthropological records.)