Large SVN checkout fails sporadically

强颜欢笑 submitted on 2021-02-05 21:31:17

Question


I'm currently experiencing issues during a large, full SVN repository checkout (20GB+), where the checkout process will halt randomly. The repository is composed of many small text files and a few large CSV files.

It's been difficult to narrow down the issue because the error only appears a few hours into the checkout. From what I've seen, it's not a specific file that halts the process, and verifying the repository with svnadmin returned no errors.
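For reference, the integrity check mentioned above looks like the following when run on the server (the repository path is a placeholder for the real one):

```
rem Verify every revision in the repository; reports checksum or filesystem errors.
svnadmin verify C:\svn\repo

rem Optionally spot-check only a revision range on a very large repository.
svnadmin verify -r 1000:2000 C:\svn\repo
```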

Errors:

Typical Apache Error Log:

Unable to deliver content.  [500, #0]
Unable to deliver content.  [500, #0]
Could not write data to filter.  [500, #175002]
Could not write data to filter.  [500, #175002]
Provider encountered an error while streaming a REPORT response.  [500, #0]
A failure occurred while driving the update report editor  [500, #730053]

Specs:

Server: Windows Server 2003 running XAMPP v1.8.2-5, Apache v2.4, and SVN v1.8.9. It was recently updated from Apache v2.2 and SVN v1.5.3, which was experiencing similar issues.

Clients: Windows 7 running TortoiseSVN v1.8.8 x64, recently updated from v1.8.3 x64 which was experiencing similar issues. Command-line SVN v1.8.9.

I'm using the HTTP protocol to perform the checkout.


Things I've tried:

Setting the "TimeOut" directive on Apache to a higher value (up to 30000 seconds).

Setting the "SVNAdvertiseV2Protocol" directive to off.

Setting the "SVNPathAuthz" directive to off.

Setting the "SVNCompressionLevel" directive to "0".
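For completeness, the relevant httpd.conf fragment with all of the settings above looks roughly like this (a sketch; the Location path and SVNParentPath are placeholders for the actual values):

```apache
# Raised from the default 60 seconds while troubleshooting.
TimeOut 30000

<Location /svn>
    DAV svn
    SVNParentPath "C:/svn"
    # Force clients onto the older HTTPv1 protocol.
    SVNAdvertiseV2Protocol off
    # Skip per-path authorization checks.
    SVNPathAuthz off
    # Disable compression of REPORT responses.
    SVNCompressionLevel 0
</Location>
```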


回答1:


We've been running into this same problem recently. So far, I think it has been related to newer Subversion clients.

The Apache mod_dav_svn directive

SVNAllowBulkUpdates Prefer

seems to help. After adding it to the Apache config, no issues have occurred; before that, most of the large checkouts failed.

I found a discussion thread that explains the problems, which are related to Subversion clients newer than version 1.8.x. See the mailing list thread.
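A minimal sketch of where the directive goes (inside the mod_dav_svn Location block; the paths are placeholders):

```apache
<Location /svn>
    DAV svn
    SVNParentPath "C:/svn"
    # Advertise that the server prefers bulk (non-skelta) update REPORT
    # responses, rather than leaving the choice entirely to the client.
    SVNAllowBulkUpdates Prefer
</Location>
```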




Answer 2:


I had the following errors:

Unable to deliver content.  [500, #0]
Could not write data to filter.  [500, #175002]

I wasn't even using mod_deflate, so that couldn't be it. In my case, it turned out that authentication (auth_digest_module) was causing the error: whenever a checkout lasted more than 300 seconds, the above error was logged in my Apache server log.

The problem is the default AuthDigestNonceLifetime 300 directive. My solution was to set the nonce to never expire: AuthDigestNonceLifetime -1
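In context, the digest-authentication fragment would look something like this (a sketch; the AuthName, paths, and user file are placeholders):

```apache
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    AuthType Digest
    AuthName "SVN Repo"
    AuthDigestProvider file
    AuthUserFile /etc/httpd/svn-digest-users
    Require valid-user
    # Default is 300 seconds; -1 makes the server-issued nonce never
    # expire, so checkouts longer than 5 minutes are not cut off.
    AuthDigestNonceLifetime -1
</Location>
```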




Answer 3:


According to a post on the Subversion mailing list, this can be a file-encoding problem. You can either look for AddEncoding x-gzip .gz entries in your Apache config and remove them, or add this to your <Location /svn>…</Location> entry:

RemoveEncoding .gz
RemoveEncoding .Z

This was actually mentioned in the changelog, but I didn't bother to read it and learned this the hard way…
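Inside the dav_svn Location block, that would look roughly like this (a sketch; the paths are placeholders):

```apache
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    # Undo any global "AddEncoding x-gzip .gz" mapping so that .gz/.Z
    # files served by mod_dav_svn are not mislabeled as pre-encoded.
    RemoveEncoding .gz
    RemoveEncoding .Z
</Location>
```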




Answer 4:


I ran into the same problem attempting an svn checkout of a moderately sized (500MB) repository, with a server running CentOS 7.4.1708, Apache 2.4.6, and Subversion 1.9.15, and Windows 10 clients using TortoiseSVN 1.9.7 behind an Apache reverse proxy.

The solution for me was to add SVNAllowBulkUpdates Off similar to teori's answer. I attempted to use "SVNAllowBulkUpdates Prefer", but when I restarted httpd, it threw an error saying "SVNAllowBulkUpdates must be On or Off". My final SVN/Apache configuration file is:

<Location /svn >
    DAV svn
    SVNParentPath /svn
    SVNAllowBulkUpdates Off
    AuthType Basic
    AuthName "SVN Repo"
    AuthUserFile /var/svn/svn-auth-user
    Require valid-user
</Location>

Other thoughts: I do not believe the Timeout and AuthDigestNonceLifetime settings are directly related to the problem. I tried both, and neither had any effect. I specifically experimented with the Timeout, KeepAlive, and KeepAliveTimeout settings on both the SVN host and the reverse-proxy host.

The problem may be related to deflating, but I also disabled that as Tim S. suggested, and it too had no effect. The reason I still think it may be related is that, after eliminating the error, I noticed the number of bytes transferred was substantially greater than before.




Answer 5:


One other possible cause I discovered for the "Could not write data to filter" errors is a NAT Loopback or Hairpin Loopback. We have our SVN Repository Server on a guest VM inside an ESXi host. SVN Clients within the same ESXi host were trying to reference the Repositories using a URL that would resolve out to the internet then "hairpin loopback" back into the LAN and ESXi host.

SVN Client guest VMs on the same ESXi host would consistently get the following errors in /etc/httpd/logs/ssl_error_log when trying to do a TortoiseSVN Checkout:

[dav:error] [pid 2204] Unable to deliver content.  [500, #0]
[dav:error] [pid 2204] Could not write EOS to filter.  [500, #104]
[dav:error] [pid 2204] Could not write data to filter  [500, #104]
[dav:error] [pid 1687] Unable to deliver content.  [500, #0]
[dav:error] [pid 1687] Could not write data to filter.  [500, #104]
[dav:error] [pid 1687] Could not write data to filter  [500, #104]
[dav:error] [pid 1686] Provider encountered an error while streaming a REPORT response.  [500, #0]
[dav:error] [pid 1686] A failure occurred while driving the update report editor  [500, #32]
[dav:error] [pid 1686] Broken pipe  [500, #32]

The TortoiseSVN logs would simply say:

ra_serf: An error occurred during SSL communication

I fixed it by changing the SVN repository URL references to use local IP addresses instead of a URL that resolved out to the internet. Other SVN clients on the same LAN that were not in ESXi, such as our laptops, had no problem with the loopback; only the SVN clients in ESXi hit this error.
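If working copies already point at the public URL, they can be switched in place with svn relocate (the hostname and IP below are placeholders for your own values):

```
# Point an existing working copy at the LAN-local address instead of the
# public hostname that triggers the NAT hairpin loopback.
svn relocate https://svn.example.com/svn/repo https://192.168.1.50/svn/repo
```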



Source: https://stackoverflow.com/questions/25413625/large-svn-checkout-fails-sporadically
