Is there any way to clone a repository from the web incrementally?

Submitted by 时光总嘲笑我的痴心妄想 on 2019-11-30 04:22:31

Question


I'm on dialup in a lousy place (yes, it still happens in 2011), trying to clone a huge repository. It starts without problems, but every time the dialup disconnects (which seems unavoidable), the !#%$* hg rolls everything back and I'm left with an empty directory again.

Is there a solution other than doing it on a remote PC and then downloading the whole thing by FTP or something?


Answer 1:


In a bash-like shell you could do something like this:

$ hg init myclone
$ cd myclone
$ for REV in `seq 10 10 100` ; do hg pull -r $REV <REMOTEREPO>; done

Starting at 10, each pull downloads the next 10 revisions, up to 100. In case of a lost connection, adjust the first argument to seq to match what you've already pulled.
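The resume step above can be automated. Here is a minimal sketch that asks the local repository for the highest revision it already has and continues pulling from there, retrying each chunk when the line drops. The `REMOTEREPO` variable, the 5-second retry delay, and the `TARGET`/`STEP` values are assumptions for illustration; the loop only runs when `REMOTEREPO` is set.

```shell
#!/bin/sh
# Resumable incremental clone: a sketch, assuming the remote URL is
# exported as REMOTEREPO and you want revisions up to TARGET in
# chunks of STEP.
STEP=10
TARGET=100

# next_chunk LAST STEP -> smallest multiple of STEP greater than LAST
next_chunk() {
    echo $(( ($1 / $2 + 1) * $2 ))
}

if [ -n "${REMOTEREPO:-}" ]; then
    hg init myclone 2>/dev/null
    cd myclone || exit 1

    # Highest revision already pulled, or -1 for an empty repo.
    LAST=$(hg log -r tip --template '{rev}\n' 2>/dev/null || echo -1)

    REV=$(next_chunk "$LAST" "$STEP")
    while [ "$REV" -le "$TARGET" ]; do
        # Retry the same chunk until the flaky line lets it through;
        # a failed pull leaves the repo at the last good revision.
        until hg pull -r "$REV" "$REMOTEREPO"; do
            echo "pull -r $REV failed; retrying in 5s" >&2
            sleep 5
        done
        REV=$(next_chunk "$REV" "$STEP")
    done
fi
```

Because `hg pull` is transactional per invocation, a dropped connection costs at most the current chunk rather than everything pulled so far.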




Answer 2:


Depending on how flaky your connection is, there are two options for performing initial clones.

First, you can try so-called “streaming clones”. These minimize time to first byte, but generally require a bit more data to be transferred.

Here’s how to do a streaming clone:

$ hg clone --uncompressed https://~~~~

Your second option is an hg clone --rev operation, followed by a number of incremental pulls. This behaves like cloning the repository as it was at some point in the past and then doing occasional updates.

$ hg clone --rev 5 https://~~~~
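Concretely, the partial clone is then widened with a handful of pulls, each of which is individually restartable. This is only a sketch: `REMOTEREPO` and the checkpoint revisions in `REVS` are placeholders, and the commands are guarded so they run only when `REMOTEREPO` is set.

```shell
#!/bin/sh
# Sketch of `hg clone --rev` followed by incremental pulls.
REVS="50 100 150"   # placeholder checkpoint revisions, smallest first

if [ -n "${REMOTEREPO:-}" ]; then
    hg clone --rev 5 "$REMOTEREPO" myclone
    cd myclone || exit 1
    for REV in $REVS; do
        # Each pull extends history by one chunk; a dropped line
        # costs only the chunk in flight.
        hg pull -r "$REV" "$REMOTEREPO"
    done
    hg pull "$REMOTEREPO"   # finally catch up to the remote tip
fi
```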



Answer 3:


Based on the suggestions here, I created a repo that does this:

https://github.com/nootanghimire/hg-clone-bash

It's optimized for a single repo, but I guess you can fork and work on it! :)



Source: https://stackoverflow.com/questions/4716200/is-there-any-way-to-clone-a-repository-from-the-web-incrementally
