I've been dealing a lot with NFS recently, and the conclusion so far is to avoid it at all costs.

@boilingsteam because $HOME mounted on an NFS share leads to unexpected delays: git and npm become slow. I've been running jobs on a cluster, and as the number of workers increases, access to the data becomes costlier.

@boilingsteam It makes things very easy in the beginning, but later it's too painful to change.

@boilingsteam For example, I tend to symlink ~/.cache to a local partition. But then I have to make sure the destination exists on every machine I log in to.
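A minimal sketch of that login-time check, to keep the symlink valid on every machine. The destination path here (/tmp/local-cache) is only a stand-in for a real local partition; adjust it for your setup.

```shell
# Stand-in for the local partition path; a real setup would point somewhere
# like a scratch disk instead of /tmp.
local_cache="${LOCAL_CACHE:-/tmp/local-cache}"

mkdir -p "$local_cache"            # ensure the destination exists first

if [ ! -L "$HOME/.cache" ]; then
    rm -rf "$HOME/.cache"          # drop the NFS-backed directory, if any
    ln -s "$local_cache" "$HOME/.cache"
fi
```

Running this from a shell profile makes the relocation idempotent: it only rewrites ~/.cache when it isn't already a symlink.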

@boilingsteam As for my cluster problem, I might start an rsync daemon to serve the data. I already copy data to a local partition using rsync, so that should be easy.

On second thought, I might use BitTorrent to spread the data to the workers. It looks like an ideal setup: I have one BT node at the beginning, and when I start many jobs, traffic to that initial node is reduced as the workers share data among themselves.
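An outline of what that could look like with common tools (mktorrent and aria2c); the tracker URL and paths are assumptions, and this is a workflow sketch rather than something runnable as-is.

```shell
# On the initial node: create a torrent for the data set and seed it.
# The tracker URL is hypothetical; a real setup needs a reachable tracker
# (or a DHT-based, trackerless configuration).
#
#   mktorrent -a http://headnode:6969/announce -o data.torrent /srv/data
#   aria2c --seed-ratio=0 data.torrent        # seed indefinitely
#
# On each worker: download (and automatically re-seed to peers while at it).
#
#   aria2c --dir=/local/scratch --seed-time=10 data.torrent
```

The design win is the one described above: each worker that finishes a piece becomes a source for the others, so load on the initial node stays roughly constant as the worker count grows.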

I am not sure if this fits your workflow, but you can also use Syncthing to spread data across clients. It works very well.

Yeah. It's like Dropbox, but Free Software. I've been using it for years without issue.

I guess it depends on what you use LFS for. I use it as extra online storage for my setup, and I can't say I've ever had any issues that way.
