Inverting the Web 

@freakazoid What methods *other* than URL are you suggesting? Because it is simply a Uniform Resource Locator (or Identifier, as URI).

Not all online content is social / personal. I'm not understanding your suggestion well enough to criticise it, but it seems to have some ... capacious holes.

My read is that search engines are a necessity born of the Web's lack of any intrinsic indexing-and-forwarding capability; such a capability would render them unnecessary. THAT still has further issues (mostly around trust)...

@freakazoid ... and reputation.

But a mechanism in which:

1. Websites could self-index.
2. Indexes could be shared, aggregated, and forwarded.
3. Search could be distributed.
4. Auditing against false/misleading indexing was supported.
5. Original authorship / first-publication was known.

... might disrupt things a tad.
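A self-indexing scheme like that could start with nothing fancier than a JSON document each site publishes. A minimal sketch, assuming an invented "siteindex.json" format (every field name here is made up for illustration):

```python
import json

# Hypothetical "siteindex.json" a website could publish at a well-known
# path, analogous to robots.txt. Every field name here is invented.
site_index = {
    "site": "https://example.com",
    "generated": "2020-01-05T00:00:00Z",
    "documents": [
        {"url": "/articles/distributed-search",
         "title": "Notes on distributed search",
         "terms": ["search", "index", "federation"],
         "first_published": "2019-11-02"},
    ],
}

def merge_indexes(indexes):
    """Aggregate self-published site indexes into one inverted index."""
    inverted = {}
    for idx in indexes:
        for doc in idx["documents"]:
            full_url = idx["site"] + doc["url"]
            for term in doc["terms"]:
                inverted.setdefault(term, []).append(full_url)
    return inverted

# An aggregator could fetch many of these, merge them, and forward the
# merged inverted index onward to peers.
merged = merge_indexes([site_index])
print(json.dumps(merged, indent=2))
```

The hard parts (auditing, authorship) aren't in the sketch, only the share/aggregate/forward mechanics.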

Somewhat more:

NB: the reputation bits might build off social / netgraph models.

But yes, I've been thinking on this.

@enkiv2 I know SEARX is:

Also YaCy as sean mentioned.

There's also something that is/was used for Firefox keyword search, I think OpenSearch, a standard used by multiple sites, pioneered by Amazon.

Being dropped by Firefox BTW.

That provides a query API only, not a distributed index, though.

@freakazoid @drwho

@dredmorbius @enkiv2 @freakazoid YaCy isn't federated, but Searx is, yeah. YaCy is p2p.
@dredmorbius @enkiv2 @freakazoid Also, the initial criticism of the URL system isn't entirely there: the DNS is annoying, but isn't needed for accessing content on the WWW. You can directly navigate to public IP addresses and it works just as well, which allows you to skip the DNS. (You can even get HTTPS certs for IP addresses.)

Still centralized, which is bad, but centralized in a way that you can't really get around in internetworked communications.

@kick HTTP isn't fully DNS-independent. For virtualhosts on the same IP, the webserver distinguishes between content based on the Host header of the HTTP request.

If you request by IP, you'll get only the default / primary host on that IP address.

That's not _necessarily_ operating through DNS, but HTTP remains hostname-aware.

@enkiv2 @freakazoid
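The virtualhost behaviour is easy to demonstrate locally. A minimal sketch, with a toy server that routes purely on the Host header (the hostnames are placeholders):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A server that selects content by Host header, the way a webserver
# chooses among virtualhosts sharing one IP address.
class VHostHandler(BaseHTTPRequestHandler):
    SITES = {"alpha.example": b"site alpha", "beta.example": b"site beta"}

    def do_GET(self):
        # The Host header, not the connected IP, chooses the content.
        name = self.headers.get("Host", "").split(":")[0]
        body = self.SITES.get(name, b"default site")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), VHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(host_header):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Host": host_header})
    body = conn.getresponse().read()
    conn.close()
    return body

alpha = fetch("alpha.example")   # named vhost
default = fetch("203.0.113.7")   # bare IP as Host: falls to the default site
print(alpha, default)
server.shutdown()
```

Requesting by bare IP behaves like the last case: you get whatever the server treats as its default host.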

@dredmorbius @kick @enkiv2 IP is also worse in many ways than using DNS. If you have to change where you host the content, you can generally at least update your DNS to point at the new IP. But if you use IP and your ISP kicks you off or whatever, you're screwed; all your URLs are now invalid. Dat, IPFS, FreeNet, Tor hidden sites, etc., don't have this issue. I suppose it's still technically a URL in some of these cases, but that's not my point.

@freakazoid Question: is there any inherent reason for a URL to be based on DNS hostnames (or IP addresses)?

Or could an alternate resolution protocol be specified?

If not, what changes would be required?

(I need to read the HTTP spec.)

@kick @enkiv2

@dredmorbius @kick @enkiv2 HTTP URLs don't have any way to specify the lookup mechanism. RFC3986 says the part after the // and optional authentication info followed by @ is a "registered name" or an address. It doesn't say the name has to be resolved via DNS but does say it is up to the local system to decide how to resolve it. So if you just wanted self-certifying names or whatever you can use otherwise unused TLDs the way Tor does with .onion.
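Python's URL parser illustrates that point: it splits out the registered name without ever resolving it, so a .onion name or a literal IP parses exactly like a DNS hostname:

```python
from urllib.parse import urlsplit

# RFC 3986 only calls the name in the authority a "registered name";
# the URL syntax itself doesn't care how (or whether) the name resolves.
for url in ["http://user:pass@example.com:8080/path",
            "http://expyuzz4wqqyqhjn.onion/",  # resolved by Tor, not DNS
            "http://192.0.2.7/path"]:          # a literal address
    parts = urlsplit(url)
    print(parts.hostname, parts.username, parts.port)
```

Resolution only enters the picture when some local component takes the hostname and decides what to do with it.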

@freakazoid Hrm....


There are alternate URLs, e.g., irc://host/channel

I'm wondering if a standard for an:

http://<address-proto><delim><address> might be specifiable.

Onion achieves this through the onion TLD. But using a reserved character ('@' comes to mind) might allow for an addressing protocol _within_ the HTTP URL itself, to be used....

@kick @enkiv2
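A purely hypothetical sketch of that idea, using an invented "--via-" suffix marker inside the hostname (the way .onion reserves a TLD) to carry a resolution protocol within the HTTP URL:

```python
# Hypothetical addressing-protocol marker; "--via-" and the protocol
# names are invented here, not any existing standard.
def split_addressing(netloc):
    """Split 'name--via-proto' into (name, resolution protocol)."""
    name, sep, proto = netloc.rpartition("--via-")
    if sep:
        return name, proto
    return netloc, "dns"  # default resolution for ordinary hostnames

print(split_addressing("abcdef123--via-dht"))  # ('abcdef123', 'dht')
print(split_addressing("example.com"))         # ('example.com', 'dns')
```

A browser or proxy aware of the marker could dispatch to the named resolver; everything else would (harmlessly or not) treat it as a weird hostname.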

@dredmorbius @kick @enkiv2 @ is already reserved for the optional username[:password] portion before the hostname.

@freakazoid @dredmorbius @enkiv2 Is ! still reserved (! may be a DNS thing actually, thinking about it further)?

@kick As of RFC 2396, "!" was unreserved. That RFC is now obsolete. Not sure if its status has changed.

@enkiv2 @freakazoid

@dredmorbius @enkiv2 @freakazoid Entirely unrelated because I just remembered this based on @kragen's activity in this thread:

Vaguely shocked that I'm interacting with both of you because I'm pretty sure you two are the people I've (at least kept in memory for long enough) read the words of online consistently for longest. (Since I was like, eight, maybe, on Kragen's part. Not entirely sure about you but less than I've checked for by a decent margin at least.)

@kick Clue seeks clue.

You're asking good questions and making good suggestions, even where wrong / confused (and I do plenty of both, that's not a criticism).

You're helping me (and I suspect Sean) think through areas I've long been bothered about concerning the Web / Internet. Which I appreciate.

(Kragen may have this all figured out, he's far certainly ahead of me on virtually all of this, and has been for decades.)

@enkiv2 @kragen @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid while I appreciate the vote of confidence, and I did spend a long time figuring out how to build a scalable distributed index, I am at as much of a loss as anyone when it comes to figuring out the social aspect of the problem (SEO spam, ranking, funding).

@dredmorbius @kick @enkiv2 @freakazoid building a non-distributed index has gotten a lot easier though. when I published the Nutch paper it was still not practical for a regular person to crawl most of the public textual web, from a cost perspective. (not sure if it's practical now, though, due to cloudflare)

@kragen @dredmorbius @enkiv2 @freakazoid I think it would be? Given the people working at Cloudflare, it seems like they'd whitelist whatever you're crawling with if you asked the right person assuming it didn't become something everyone and their cat was requesting to do.

@kragen I see a lot of this coming down to:

- What is the incremental value of additional information sources? At some point, net of validation costs, this falls below zero.

- Google's PageRank relied on inter-document and -domain relations. Author-based trust hasn't carried as much weight. I believe it needs to.

- Randomisation around ranking should help avoid systemic bias lock-ins.

- Penalties for fraud, with increasing severity and duration for repeats.

@kick @enkiv2 @freakazoid

@kragen - Some way of vetting new arrivals / entities, such that legitimate newcomers aren't entirely locked out of the system. Effectively letters of recommendation or reference.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid I've thought that it might be reasonable to bootstrap a friendnet by assigning newcomers (randomly or by payment) to "foster families" or "undergraduate faculties" to allow them to gain enough whuffie to become emancipated. ideally, gradually, rather than through an emancipation cliff analogous to legal majority or a B.S.

@kragen Challenge on any such scheme is scaling quickly enough, relative to other systems.

Though if the founding cohort is sufficiently interesting, you'll have the reverse problem: too many people wanting in.

An inspiration I've long had for this is Lawrence Lessig's "signed by" convention at the ... Yale Wall, I think, described in "Code and Other Laws of Cyberspace".

That applied to anonymous messages, but for new users might also work.

@kick @enkiv2 @freakazoid

@kragen It's effectively a socialisation problem -- how do you introduce new members to a society?

But doing that *without* creating an inculcated old-boys/girls/nbs network, or any of the usual ethnic or socioeconomic cliques. Something that most systems have generally failed at.

Random assignments should help but aren't of themselves sufficient.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid human societies have hierarchies of prestige; we can't hope to eliminate those through incentive design. We can hope to prevent things like despotism, witch-burning, the Inquisition, the Holocaust, and the burning of the Library of Alexandria. But there's going to be an old-enbies network, unavoidably.

@kragen That's the Iron Law of Oligarchy, so, yeah.

But we don't have to help them along any. And if we can figure out negative-feedback mechanisms to retard the process, so much the better.

@kick @enkiv2 @freakazoid

@dredmorbius @kragen @kick @enkiv2 @freakazoid
Stafford Beer had some ideas about ways to rotate people through groups in such a way that ideas echo through a network. Based on graph theory & permutation. I've forgotten the name. Worth looking into as a way to grow/integrate folks into a large group by making connection in a smaller one & getting mirroring/feedback.

@dredmorbius @kragen @enkiv2 @freakazoid How much privacy are you willing to sacrifice with this?

Taking a single possibility (I listed a few) from a thing I wrote to a couple of posts up-thread but didn’t send because I want to hear someone’s opinion on a sub-problem of one of the guesses listed:

Seed with trusted users (i.e. people submitting sites to crawl), rank preferentially by age (time-limited; would eventually wear off), then rank on access-by-unique-users. Given that centralized link aggregators wouldn’t disappear, someone throws HN in, for example, the links on HN get added into the pool, whichever get clicked on most rise up, eventually get their own ranking, etc.

This works especially well if using what I sent the e-mail to inquire a little more about: cluster sorting rather than just barebacking text (this is what Yippy does, for example, and what Blekko used to do), because it promotes niche results better than Google’s model with smaller datasets, and when users have more seamless access to better niches, more sites can get rep easier. Example: try vs. throwing your username into Google. The clustering allows for much more informative/interesting results, I think, especially if doing inquisitive searching.

Kragen mentioned randomly introducing newcomers (adding noise), but I think it might work better still if noise was added to the searches for at least the beginning of it. A single previously-unclicked link on the first five pages of search results?
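That seeding/decay scheme could be sketched roughly as below; the decay constant, the weights, and all field names are invented, but the shape is: a seed-trust bonus that wears off with age, ranking by unique clicking users, and one never-clicked link injected as exploration noise:

```python
import math
import random
import time

def score(link, now=None):
    """Age-limited seed trust plus unique-user clicks. Constants invented."""
    now = now or time.time()
    age_days = (now - link["submitted"]) / 86400
    seed_bonus = link["seed_trust"] * math.exp(-age_days / 30)  # wears off
    return seed_bonus + len(link["unique_clickers"])

def results(links, rng=random):
    ranked = sorted(links, key=score, reverse=True)
    fresh = [l for l in links if not l["unique_clickers"]]
    if fresh:
        # surface one previously-unclicked link so newcomers can
        # start accumulating clicks (the "noise" suggestion above)
        noise = rng.choice(fresh)
        ranked.remove(noise)
        ranked.insert(min(4, len(ranked)), noise)  # within the first page
    return ranked
```

Links fed in from aggregators would enter with zero clickers and ride the noise slot until real clicks rank them.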

@kick As little as possible.

I've not participated online under my real name (or even vague approximations of it) for a decade or more. That was seeming increasingly unattractive to me already then. And I'd been online for at least two decades by that point.

Of the various dimensions of trust, anti-sock-puppetry is one axis. It's not the only one. It matters a lot in some contexts. Less in others.

Doxxing may be occasionally warranted.

Unmasking is a risk.

@enkiv2 @kragen @freakazoid

@dredmorbius @enkiv2 @kragen @freakazoid Privacy isn't just deanonymizing! You can also track pseudonyms.

@kick Right. My comments were aimed more at qualifying my interest in / preferences for privacy.

I'm finding contemporary society to be very nearly intolerable. And probably ultimately quite dangerous.

@enkiv2 @kragen @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid yeah, although in many ways it's an improvement over Golden Horde society, Ivan the Terrible society, Third Crusade society, Diocletian society, Qin Er Shi society, Battle of the Bulge society, Khmer Rouge society, Holodomor society, People's Temple society, the society that launched the Amistad, etc. We didn't start the fire.

@kragen I'm referencing specifically the surveillance aspects, and the accelerating pace of that especially over the past two decades or so. Though you can trace the trends back to the 1970s, generally.

Paul Baran was writing of the risks ~1966-1968, which is 52-54 years ago now.

IBM were actively demonstrating the risks 1939-1945.

Herbert Simon was conveniently ignorant of this in 1978, when Zuboff discovered surveillance capitalism in her research.

@kick @enkiv2 @freakazoid

@kragen Of the various drawbacks of the Mongol Hordes, massive mobile technological surveillance was not a prominent aspect.

The Battle of the Bulge and Holodomor societies _did_ benefit from informational organisation. Khmer Rouge and People's Temple may have, and the capabilities certainly existed.

General capabilities began ~1880, again with Hollerith and the nascent IBM.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid depending on who you were and where you lived, it was easy to end up with very little privacy after the Mongol invasion. The fact that the technologies employed were things like chains and swords rather than punched cards and loyalty scores was cold comfort to the enslaved. But, yes, I meant that the societies were more regrettable overall, not necessarily specifically along the surveillance axis.

@kragen My evolving thought is that privacy is an emergent concept: a force that grows in proportion to the ability to invade personal space and sanctum.

Pretechnical society had busybodies, gossips, eavesdroppers, spies, and assassins.

But if you wanted to listen to or observe someone, you had to put a body in proximity to do it. In a preliterate (or largely preliterate) society, plebes didn't even leave paper trails. A baptismal record, a marriage record, and a will, if you were lucky.

@kick @enkiv2 @freakazoid

@kragen We're at an age where a chat amongst friends, as here, is creating a distributed global written record, doubtless being scraped by academics, corporations, and state and nonstate surveillance systems.

US phone call history records date to the mid-1980s (if not before). Purchase, social, employment, and location records are comprehensive for at least the past decade, if not five or more.

@kick @enkiv2 @freakazoid


@dredmorbius @kick @enkiv2 @freakazoid Well, privacy invasion was more typically done by your father, your husband, or your owner in many of these societies, rather than by the secret police. But it was in many cases quite pervasive. Of course when we think about medieval Europe, it's easier to imagine ourselves as monks, knights, or at least yeomen, than as villeins in gross, vagabonds, or women who died in forced childbirth, precisely because of that paper trail.


@kragen @dredmorbius @enkiv2 @freakazoid It was better in the 1960-80s for the most part, but sometimes I still think of:

[5000 well thought out lines of a single mail response on how Linux wipes the floor with Solaris performance-wise >quoted] Have you ever kissed a girl? - Bryan

So the problem was at least prevalent by ‘96.

@kick @enkiv2 @dredmorbius @freakazoid not sure Dave Miller's privacy was being invaded there? much less in a technologically inescapable way

@kragen @enkiv2 @dredmorbius @freakazoid No, not Miller (I was referring to Bryan, because that post will never, ever be forgotten). I admittedly might have gotten lost (it's 6:00AM here and I haven't slept in two days, so I may have gotten threading messed up), but the connection in my head was -

Ah, yeah, I see what's up: I was thinking of a different thread with a similar set of people in it + @dredmorbius's line "I'm finding contemporary society to be very nearly intolerable. And probably ultimately quite dangerous." + comments RE: previous art of problem-space.

There's something that resembles danger when you can track everything a person's ever said under a name that can be paired with their home address pretty easily, I think; lack of privacy mixed with a full, immutable history (for the bad parts, less so for the good parts) makes things very interesting nowadays.

@kick That danger / risk is an interesting one.

Some people focus on strictly one element -- the State, or Corporations, or Terrorists, or Narcocriminals, or the Criminally Insane, or Griefers, or Stalkers / Exes.

It's kind of all of the above.

In some cases I'm not fully sure it isn't simply having civic systems and the rule of law which matter more.

But mostly it's the data, the ability to use and misuse it, or simply presuming data exist, that enables evil.

@enkiv2 @kragen @freakazoid


@dredmorbius @kick @enkiv2 @freakazoid one of the nice things about PageRank is that the Perron–Frobenius theorem guarantees a well-defined result precisely because it has no penalties; penalties can give rise to the Eigenmoses problem, as described in

@dredmorbius @kick @enkiv2 @freakazoid Trump supporters label NPR as "fake news"; Trump opponents label Fox as "fake news". Presumably one side will win and the other will be penalized for linking to fake news, with increasing severity and duration or repeats. There's no particular reason to expect that it will be the correct side. See also: the Crusades, blood libel, babies ripped out of incubators, Lysenkoism. PageRank is immune to that.
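For concreteness, a toy power-iteration PageRank in plain Python. With a damping factor, every page gets a (1-d)/n floor, so the implied matrix is strictly positive, which is exactly the condition under which Perron–Frobenius guarantees one well-defined dominant eigenvector, with no penalty step needed:

```python
# Toy PageRank on a 4-page link graph via power iteration.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # page -> pages it links to
n, d = 4, 0.85

rank = [1.0 / n] * n
for _ in range(100):
    incoming = [0.0] * n
    for src, dsts in links.items():
        share = rank[src] / len(dsts)  # src splits its rank among its links
        for dst in dsts:
            incoming[dst] += share
    rank = [(1 - d) / n + d * x for x in incoming]

print(rank)  # page 2 ranks highest: every other page links to it
```

The ranking depends only on link structure; labels like "fake news" never enter the computation.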

@kragen True.

There's objective truth, and there's consensus truth. The two seldom match up.

Old Mr. Free Speech Hisself, John Stuart Mill, wasn't optimistic on the truth's capacity to out.

If it's necessary to set up competing credentialing networks which operate independently (competing churches?), that ... might have to happen.

Motivated irrationality is, unfortunately, A Thing. And can be quite lucrative and rewarding, at least in the short term.

@kick @enkiv2 @freakazoid

@kragen @dredmorbius @kick @enkiv2 @freakazoid
In the absence of any negative feedback, whoever can produce the most positive feedback will win (and when competing on access to information, winning accumulates). Whoever gets an early monopoly has a lot of control over the worldview even after they lose that monopoly...

@enkiv2 Pretty much this.

It's an evolutionary problem, I think, with likely analogues and lessons in biological evolution.

Negative feedbacks are fitness checks?

@enkiv2 Right.

Though my question was, specifically: are negative feedbacks fitness checks? That is, the "selection" process within "variation, inheritance, and selection".

And vice versa: are fitness checks / selection processes negative feedback?

Not sure that they are or aren't. Musing on this.

Within a systems context, yes, negative feedback is required for sustainable function.

@dredmorbius @enkiv2
elimination of options based on failure of fitness checks certainly is a subset of negative feedback. i'm not assuming that the negative feedback in question is non-arbitrary though. it's just that in the absence of any negative feedback, everything goes positive, and whoever has the largest reach cannot be beaten. with negative feedback a powerful actor can be deplatformed by a coalition.
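A toy simulation of that claim, with invented dynamics (attention grows with the square of current share, i.e. pure positive feedback; `penalty` adds a drag that bites hardest on the most dominant actor):

```python
# Three actors competing for attention share. All constants invented.
def simulate(steps, penalty=0.0):
    share = [0.6, 0.2, 0.2]  # actor 0 starts with the largest reach
    for _ in range(steps):
        # positive feedback: gain ~ share^2; negative feedback: drag ~ share^3
        new = [s + s * s - penalty * s ** 3 for s in share]
        total = sum(new)
        share = [s / total for s in new]  # renormalize to shares of attention
    return share

print(simulate(200, penalty=0.0))  # leader's share tends to 1: winner takes all
print(simulate(200, penalty=2.0))  # shares equalize near 1/3 each
```

With no drag the early leader compounds to near-total share; with the drag, the same dynamics settle at parity, which is the "coalition can deplatform a powerful actor" regime in miniature.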

@kragen A key problem with that is that current Web tooling makes it all but impossible to assess or even assert authority.

Hell, Usenet had this better in the mid-1990s with PGP-clearsigned messages.

We're a quarter goddamned century on.

@kick @enkiv2 @freakazoid

@dredmorbius @kick @enkiv2 @freakazoid Well, we haven't really been working to fix the problem. Any of these problems. Well, maybe you have. But people like @Gargron are few and far between. Maybe we need better educational institutions.
