Inverting the Web
We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.
Given that search engine companies can barely keep up with the constant barrage of attacks, commonly known as "SEO", intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.
@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).
Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.
My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; such a capability would render them unnecessary. THAT still has further issues (mostly around trust)...
@freakazoid ... and reputation.
But a mechanism in which:
1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.
... might disrupt things a tad (rough sketch below).
NB: the reputation bits might build off social / netgraph models.
But yes, I've been thinking on this.
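Nothing like this exists yet, so purely as a sketch of points 1-5: a self-published, signed index record might look something like the following. Every field and name here is hypothetical, and the keyed hash stands in for a real public-key signature; an auditor would re-fetch the URL and check the claimed terms and hash against the live page.

```python
# Hypothetical record format: a site publishes a signed index shard that
# others can aggregate, forward, audit against the live page, and
# attribute to a first publisher.
import hashlib
import json

def make_index_record(url, content, terms, author, signing_key):
    """Build one self-published index entry (all fields hypothetical)."""
    body = {
        "url": url,
        "terms": sorted(terms),   # keywords the site claims for this page
        "author": author,         # claimed original author / first publisher
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    # Stand-in "signature": a keyed hash. A real design would use
    # public-key signatures so aggregators can verify authorship.
    body["sig"] = hashlib.sha256(signing_key + payload).hexdigest()
    return body
```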
Also YaCy, as Sean mentioned.
There's also the mechanism that is/was used for Firefox keyword search: OpenSearch, I think, a standard used by multiple sites and pioneered by Amazon.
Being dropped by Firefox BTW.
That provides a query API only, not a distributed index, though.
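For concreteness, here's roughly how a client consumes OpenSearch: fetch the site's description document, pull out the query template, and fill in the search terms. The description URL below is illustrative, not a real endpoint.

```python
# Sketch of an OpenSearch client: per-site query API, no shared index.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OSD = "{http://a9.com/-/spec/opensearch/1.1/}"  # OpenSearch 1.1 namespace

def search_url(description_url, terms):
    """Fetch an OpenSearch description and fill in its query template."""
    with urllib.request.urlopen(description_url) as resp:
        root = ET.fromstring(resp.read())
    # A description may list several Url elements; take the first template.
    template = root.find(f"{OSD}Url").attrib["template"]
    return template.replace("{searchTerms}", urllib.parse.quote(terms))

# e.g. search_url("https://example.com/opensearch.xml", "distributed index")
```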
@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the host portion of the HTTP request.
If you request by IP, you'll get only the default / primary host on that IP address.
That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.
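A quick illustration of that, using a documentation-range IP (so this is a sketch, not a live example): two requests to the same address that differ only in the Host header get routed to different virtualhosts, with no DNS lookup involved.

```python
# Same IP, different Host header: the server picks the virtual host
# from the header alone. 203.0.113.10 is a reserved documentation IP.
import http.client

for host in ("example.com", "example.org"):
    conn = http.client.HTTPConnection("203.0.113.10")
    conn.request("GET", "/", headers={"Host": host})
    print(host, conn.getresponse().status)
    conn.close()
```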
@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden services, etc., don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.
@dredmorbius @kick @enkiv2 HTTP URLs don't have any way to specify the lookup mechanism. RFC 3986 says the part after the // (and after any optional authentication info followed by @) is a "registered name" or an address. It doesn't say the name has to be resolved via DNS, but it does say it is up to the local system to decide how to resolve it. So if you just wanted self-certifying names or whatever, you could use otherwise unused TLDs the way Tor does with .onion.
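To make that concrete: standard URL parsing treats the host as an opaque registered name and says nothing about how it resolves. The .onion name below is made up for illustration.

```python
# RFC 3986 parsing is lookup-agnostic: the authority's host is just a
# "registered name"; nothing in the URL itself requires DNS.
from urllib.parse import urlsplit

parts = urlsplit("http://alice@abcdefghijklmnop.onion/notes")
print(parts.username)  # alice (the optional userinfo before '@')
print(parts.hostname)  # abcdefghijklmnop.onion -- resolved by Tor, not DNS
```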
There are alternate URL schemes, e.g., irc://host/channel
I'm wondering if a standard for an:
http://<address-proto><delim><address>
form might be specifiable.
Tor achieves this through the .onion TLD. But using a reserved character ('@' comes to mind) might allow for an addressing protocol _within_ the HTTP URL itself, to be used....
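A purely speculative sketch of that idea, taking the '@' suggestion literally. One obvious catch: '@' already delimits userinfo in RFC 3986, so a collision-free design would need a different reserved character. All the names below are made up.

```python
# Hypothetical parse of http://<address-proto><delim><address>.
from urllib.parse import urlsplit

def split_addressing(url, delim="@"):
    """Split a hypothetical '<proto><delim><address>' authority."""
    authority = urlsplit(url).netloc
    if delim in authority:
        proto, address = authority.split(delim, 1)
        return proto, address
    return "dns", authority  # default: a plain DNS hostname

print(split_addressing("http://ipfs@QmExampleHash/readme"))  # ('ipfs', 'QmExampleHash')
print(split_addressing("http://example.com/readme"))         # ('dns', 'example.com')
```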
@kick Clue seeks clue.
You're asking good questions and making good suggestions, even where wrong / confused (and I do plenty of both, that's not a criticism).
You're helping me (and I suspect Sean) think through areas I've long been bothered about concerning the Web / Internet. Which I appreciate.
(Kragen may have this all figured out; he's certainly far ahead of me on virtually all of this, and has been for decades.)
@kragen I see a lot of this coming down to:
- What is the incremental value of additional information sources? At some point, net of validation costs, this falls below zero.
- Google's PageRank relied on inter-document and -domain relations. Author-based trust hasn't carried as much weight. I believe it needs to.
- Randomisation around ranking should help avoid systemic bias lock-ins (rough sketch below).
- Penalties for fraud, with increasing severity and duration for repeats.
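On the randomisation point, a sketch of what I mean (the noise level is an invented parameter): jitter each score before sorting, so near-ties don't always resolve the same way and entrenched results don't lock in.

```python
# Jittered ranking: add small random noise to scores before sorting.
import random

def jittered_ranking(scored_docs, noise=0.05):
    """scored_docs: iterable of (doc, score) pairs, scores roughly in [0, 1]."""
    return sorted(scored_docs,
                  key=lambda pair: pair[1] + random.uniform(-noise, noise),
                  reverse=True)

# e.g. jittered_ranking([("a", 0.91), ("b", 0.90), ("c", 0.40)])
```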
@dredmorbius @kick @enkiv2 @freakazoid I've thought that it might be reasonable to bootstrap a friendnet by assigning newcomers (randomly or by payment) to "foster families" or "undergraduate faculties" to allow them to gain enough whuffie to become emancipated; ideally gradually, rather than through an emancipation cliff analogous to legal majority or a B.S.
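A toy sketch of the gradual version, with invented numbers: blend the foster family's standing into the newcomer's own as earned reputation ramps up, instead of flipping from fostered to trusted at a single threshold.

```python
def effective_trust(own_score, foster_score, earned, threshold=100.0):
    """Blend foster standing into a newcomer's own as reputation accrues."""
    w = min(earned / threshold, 1.0)  # 0.0 brand-new, 1.0 fully emancipated
    return w * own_score + (1.0 - w) * foster_score

# A newcomer (earned=10) mostly inherits the foster family's standing:
# effective_trust(own_score=0.9, foster_score=0.5, earned=10) -> 0.54
```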
@kragen The challenge with any such scheme is scaling quickly enough, relative to other systems.
Though if the founding cohort is sufficiently interesting, you'll have the reverse problem: too many people wanting in.
An inspiration I've long had for this is Lawrence Lessig's "signed by" convention at the ... Yale Wall, I think, described in "Code and Other Laws of Cyberspace".
That applied to anonymous messages, but for new users might also work.
@kragen It's effectively a socialisation problem -- how do you introduce new members to a society?
But doing that *without* creating an inculcated old-boys/girls/nbs network, or any of the usual ethnic or socioeconomic cliques. Something that most systems have generally failed at.
Random assignments should help but aren't of themselves sufficient.