Inverting the Web
We use search engines because the Web does not support accessing documents by anything other than URL. This puts a huge amount of control in the hands of the search engine company and those who control the DNS hierarchy.
Given that search engine companies can barely keep up with the constant barrage of attacks -- commonly known as "SEO" -- intended to lower the quality of their results, a distributed inverted index seems like it would be impossible to build.
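(For anyone unfamiliar: an inverted index is just a map from each term to the documents containing it. A toy sketch -- every name here is illustrative, nothing like a production engine:)

```python
from collections import defaultdict

# Toy inverted index: term -> set of IDs of documents containing it.
index = defaultdict(set)

def add_document(doc_id, text):
    """Index a document by splitting it into lowercase terms."""
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the IDs of all documents containing the term."""
    return index.get(term.lower(), set())

add_document("doc1", "inverting the Web")
add_document("doc2", "the Web of trust")
print(search("web"))  # {'doc1', 'doc2'} (set order may vary)
```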
@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as a URI).
Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.
My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability -- a capability which, if it existed, would render them unnecessary. THAT still has further issues (mostly around trust)...
@freakazoid ... and reputation.
But a mechanism in which:
1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.
... might disrupt things a tad. (Sketch below.)
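A minimal sketch of a self-published, signed index record covering points 1, 2, and 5, assuming ed25519 signatures via the Python `cryptography` package -- the record layout and every field name are invented for illustration:

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

# A site self-publishes a signed index record (point 1).
signing_key = ed25519.Ed25519PrivateKey.generate()
record = {
    "site": "example.org",
    "terms": {"inverted": ["/blog/indexing"], "web": ["/", "/blog/indexing"]},
}
payload = json.dumps(record, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Anyone holding the site's public key can check authorship (point 5)
# before aggregating or forwarding the record (point 2).
signing_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
```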
Somewhat more:
https://news.ycombinator.com/item?id=22093403
NB: the reputation bits might build off social / netgraph models.
But yes, I've been thinking on this.
@dredmorbius @freakazoid
Isn't YaCy a federated search engine? Maybe @drwho has input?
@enkiv2 I know SEARX is: https://en.wikipedia.org/wiki/Searx
Also YaCy, as sean mentioned.
There's also something that is/was used for Firefox keyword search, I think OpenSearch, a standard used by multiple sites, pioneered by Amazon.
Being dropped by Firefox BTW.
That provides a query API only, not a distributed index, though.
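(To illustrate "query API only": an OpenSearch description essentially advertises a URL template with a {searchTerms} placeholder that the client substitutes into -- no index data ever changes hands. A toy expansion; the template URL is made up:)

```python
from urllib.parse import quote

# An OpenSearch description boils down to a URL template; the client
# fills in {searchTerms} and fetches the site's own results page.
template = "https://example.com/search?q={searchTerms}"

def query_url(template, terms):
    return template.replace("{searchTerms}", quote(terms))

print(query_url(template, "distributed index"))
# https://example.com/search?q=distributed%20index
```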
@kick @enkiv2 @dredmorbius Not true; there are several decentralized routing systems out there: UIP, 6/4, Yggdrasil, Cjdns, I2P, and Tor hidden services, to name just a few. Once you're no longer using names that are human-memorizable, you can move to addresses that are public key hashes and thus self-certifying.
A system designed for content retrieval doesn't really need a way to refer to location at all. IPFS, for example, only needs content-based keys and signature-based keys.
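(To make "self-certifying" concrete -- a toy sketch, not IPFS's actual key format:)

```python
import hashlib

# Self-certifying names: the name commits to the thing it names.
# A content-based key is the hash of the bytes themselves...
content = b"Inverting the Web"
content_key = hashlib.sha256(content).hexdigest()

# ...and a signature-based key is the hash of a public key, so only
# the holder of the private key can speak for that name.
public_key = b"\x01" * 32  # stand-in for a real public key
identity_key = hashlib.sha256(public_key).hexdigest()

def verify_content(key, data):
    """Accept fetched bytes only if they hash to the requested key."""
    if hashlib.sha256(data).hexdigest() != key:
        raise ValueError("data does not match its address")
    return data

verify_content(content_key, content)  # retrieval needs no location at all
```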
@kick I'm with you in advocating for human-readable systems. IPv4 is only very barely human-readable, almost entirely by techies. IPv6 simply isn't, nor are most other options.
Arguably DNS is reaching non-human-readable status through TLD proliferation.
Borrowing from some ideas I've been kicking around of search-as-identity (with ... possible additional elements to avoid spoof attacks), and the fact that HTTP's URL is *NOT* bound to DNS, there may be ways around this.
@kick I'll again disagree with you, at least in part, that WoT doesn't scale.
We rely on a mostly-localised WoT all the time in meatspace. Infotech networks' spatial-insensitivity makes this ... hard to replicate, but I'm not prepared to say it's _entirely_ impossible.
With addressing based on underlying identifiers, tied to more than just content (I'm pretty sure content alone _isn't_ ultimately sufficient), we might end up with _something_ useful.
@kick To be clear, I'm trying to distinguish WoT-as-concept as opposed to WoT-as-implementation.
In the sense of people relying on a trust-based network in ordinary social and commerce interactions in real life, not in a PGP or other PKI sense, that's effectively simply _how we operate_.
Technically-mediated interactions introduce complications -- limited information, selective disclosure, distance, access-at-a-distance.
But the principles of meatspace trust can apply.
@kick That is: direct vs. indirect knowledge. Referrals. TOFU (trust on first use). Repeated encounters. Tokenised or transactional-proof validations.
Those are the _principles_.
The specific _mechanics_ of trust on a technical network are harder, but ... probably tractable. The hurdle for now seems to be arriving at data and hardware standards. We've gone through several iterations which Scale Very Poorly or Are Hard To Use.
We can do better at both.
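(A toy illustration of the direct-knowledge and referral principles -- the scores and decay factor here are entirely arbitrary:)

```python
# Toy referral-based trust: trust in a stranger is trust in the
# referrer, discounted per hop. Direct experience wins over referrals.
direct = {"alice": 0.9, "bob": 0.6}       # from repeated encounters
referrals = {"carol": "alice", "dave": "bob"}
DECAY = 0.5                               # arbitrary per-hop discount

def trust(who):
    if who in direct:                     # direct knowledge
        return direct[who]
    if who in referrals:                  # one-hop referral
        return DECAY * trust(referrals[who])
    return 0.0                            # stranger: start from nothing (TOFU territory)

print(trust("carol"))  # 0.45: alice's 0.9, discounted one hop
```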
@kick A roundabout response, though I think it gets somewhere close to an answer.
"Trust" itself is not _perfect knowledge_, but _an extension of belief beyond the limits of direct experience._ The etymology's interesting: https://www.etymonline.com/word/trust
Trust is probabilistic.
Outside of direct experience, you're always trusting in _something_. And ultimately there's no direct experience -- even our sight, optic nerve, visual perception, sensation, memory, etc., are fallible.
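(One standard way to cash out "trust is probabilistic": a Beta-reputation update over repeated encounters. A sketch of the general idea, not any particular system:)

```python
# Beta-reputation sketch: trust as the expected probability that the
# next interaction is good, given the good/bad outcomes so far.
good, bad = 1, 1              # uninformative prior: trust() == 0.5

def observe(outcome_good):
    global good, bad
    if outcome_good:
        good += 1
    else:
        bad += 1

def trust():
    return good / (good + bad)    # mean of Beta(good, bad)

for outcome in [True, True, True, False]:
    observe(outcome)
print(trust())  # ~0.667: belief extended beyond the limits of direct experience
```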
@kick I have been warning close friends and family members (some elderly and prone to dismiss technological threats and concerns as "nonsense" or "nothing I would want to use" or "beyond my understanding" or "but why would someone do that", v. frustrating) about DeepFakes and FastSpeech technologies.
I know that at least one has had faked-voice scam phone calls, though they realised this eventually. I'm predicting #deathOfTelephony based in part on this, BTW.