Can't backup to S3 with OpsCenter 5.2.1

Submitted by 早过忘川 on 2019-12-01 14:42:57

I have been having the exact same problem since updating to OpsCenter 5.2.x and just was able to get it working properly.

I removed all the settings suggested in the previous answer and then created new buckets in us-west-1, us-west-2, and us-standard. After that, I was able to add all of them as destinations quickly and easily.

It appears that OpsCenter tries to list the objects in the bucket you configure initially. In my case, the two existing buckets we were using held 11 TB and 19 GB of data respectively, so the listing was slow enough to cause failures.

This could explain why increasing the timeout worked for some people but not for others.

Hope this helps.

Chris Gerlt

Try adding the remote_backup_region property under the [agents] heading in the cluster configuration file, "cluster-name".conf. Valid values are: us-standard, us-west-1, us-west-2, eu-west-1, ap-northeast-1, ap-southeast-1.
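For reference, a minimal sketch of what that might look like in the cluster configuration file (the file path shown is the typical location for a package install, and the region value is just an example — adjust both for your setup):

```ini
# /etc/opscenter/clusters/cluster-name.conf  (path may vary by install type)
[agents]
remote_backup_region = us-west-2
```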

Does that help?

The problem was resolved by a combination of two things.

  1. Delete the entire contents of the existing S3 bucket (or create a new bucket as previously suggested by @kaveh-nowroozi).
  2. Edit /etc/datastax-agent/datastax-agent-env.sh and increase the heap size to 512M as suggested by a DataStax engineer. The default was set at 128M and I kept doubling it until backups became successful.
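As an illustration of step 2, the heap change in /etc/datastax-agent/datastax-agent-env.sh might look like the following; the exact variable name and default in your copy of the file may differ, so treat this as a sketch rather than the literal stock file:

```shell
# /etc/datastax-agent/datastax-agent-env.sh
# Raise the agent's JVM max heap from the 128M default to 512M,
# doubling as needed until backups succeed.
JVM_OPTS="$JVM_OPTS -Xmx512M"
```

Remember to restart the datastax-agent service after editing the file so the new heap setting takes effect.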