nfs

docker mount nfs with local_lock=all

魔方 西西 submitted on 2021-02-18 22:33:27
Question: I have a docker-compose.yml file:

```yaml
volumes:
  nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.100.1,rw
      device: ":/mnt/storage"
```

My container has the volume mounted with these options:

```
type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.1,mountvers=3,mountproto=tcp,local_lock=none,addr=192.168.100.1)
```

That is, with local_lock=none, and I can't change this option to local_lock=all. I tried:

```yaml
volumes:
  nfs:
    driver: local
    driver_opts:
```
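For reference, lock-mode options like this are normally passed through the same `o:` string as the address. A sketch of a compose volume requesting local_lock=all (the address, export path, and NFSv3 are carried over from the question; whether the option takes effect depends on the host's mount.nfs helper):

```yaml
volumes:
  nfs:
    driver: local
    driver_opts:
      type: nfs
      # local_lock=all is appended to the same option string as addr and rw
      o: "addr=192.168.100.1,rw,nfsvers=3,local_lock=all"
      device: ":/mnt/storage"
```

Note that the local driver only applies driver_opts when the volume is first created; if a volume named `nfs` already exists with the old options, it has to be removed (`docker volume rm`) before a changed option string is picked up.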

Access to NFS-Share from Java-Application

只愿长相守 submitted on 2021-02-07 04:15:31
Question: I am trying to access an NFS share on a CentOS 6.3 system from within a Java application. I've tried the following libraries but can't get either to work: YaNFS. Trying to access the NFS share with YaNFS, I run into an NfsException with error code 10001 (NFSERR_BADHANDLE). Sometimes the exception text says "Stale NFS file handle". My code for YaNFS is:

```java
public static void main(String[] args) {
    XFile xf = new XFile("nfs://192.168.1.10/nfs-share");
    nfsXFileExtensionAccessor nfsx =
```

chown: invalid user: 'nfsnobody' in Fedora 32 after installing NFS

冷暖自知 submitted on 2021-01-29 09:27:11
Question: I installed NFS in Fedora 32 using this command: `sudo dnf install nfs-utils`, and then I created a dir to export storage:

```
[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
```

Now I can mount this dir as the root user like this:

```
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
```

Now I want to take a step forward and make it available to any user from any IP (the
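The error in the title comes from recent Fedora releases no longer shipping a separate `nfsnobody` account; anonymous NFS access maps to `nobody` (UID/GID 65534) instead. A hedged sketch of an export that squashes every client to that account (the path is carried over from the question):

```
# /etc/exports -- map all client users to the unprivileged account;
# on Fedora 32 that account is "nobody" (65534), not "nfsnobody"
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,all_squash,anonuid=65534,anongid=65534)
```

After editing the file, `sudo exportfs -ra` reloads the export table, and `chown nobody:nobody` on the exported directory replaces the old `chown nfsnobody` invocation.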

Kubernetes, cannot mount NFS share via DNS

那年仲夏 submitted on 2021-01-28 08:20:47
Question: I am trying to mount an NFS share (outside the k8s cluster) in my container via DNS lookup. My config is as below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: service-a
    image: dockerregistry:5000/centOSservice-a
    command: ["/bin/bash"]
    args: ["/etc/init.d/jboss", "start"]
    volumeMounts:
    - name: service-a-vol
      mountPath: /myservice/por/data
  volumes:
  - name: service-a-vol
    nfs:
      server: nfs.service.domain
      path: "/myservice/data"
  restartPolicy: OnFailure
```

nslookup of nfs
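One detail worth keeping in mind when debugging this: the NFS mount is performed by the kubelet on the node, not inside the pod, so the server name must be resolvable by the node's own resolver — a cluster-internal DNS name usually is not. A minimal sketch of the volume section with the server given by address (the IP here is a placeholder, not from the question):

```yaml
  volumes:
  - name: service-a-vol
    nfs:
      # kubelet mounts the share on the node, outside cluster DNS,
      # so use an address (or a name the node itself can resolve)
      server: 192.168.0.50   # hypothetical IP of nfs.service.domain
      path: "/myservice/data"
```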

Docker NFS volume using Ansible

旧时模样 submitted on 2021-01-28 05:48:33
Question: Given a simple example such as

```shell
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo
```

how can I do the same using Ansible? I tried, for example:

```yaml
- name: NFS volume mount
  docker_volume:
    driver: "local"
    driver_options:
      type: nfs
      o: "addr=192.168.1.1,rw"
      device: /path/to/dir
    volume_name: foo
```

This creates the volume without errors, but it fails when the volume is used with the docker_container module. TASK [oracle-database :
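One visible difference between the two invocations above is the leading colon in the device path: the CLI example passes `device=:/path/to/dir`, while the Ansible task passes `device: /path/to/dir`. A sketch of the task with the colon kept (untested here, but mirroring the CLI call one-to-one):

```yaml
- name: NFS volume mount
  docker_volume:
    volume_name: foo
    driver: local
    driver_options:
      type: nfs
      o: "addr=192.168.1.1,rw"
      device: ":/path/to/dir"   # leading colon, exactly as in the docker CLI example
```

As with the CLI, an already-existing `foo` volume created with different driver_options may need to be removed before the corrected options are applied.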

How to deploy Postgresql on Kubernetes with NFS volume

拜拜、爱过 submitted on 2021-01-28 03:56:29
Question: I'm using the manifest below to deploy PostgreSQL on Kubernetes with an NFS persistent volume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 6Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.139.82.123
    path: /nfsfileshare/postgres
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Service
metadata:
  name: db
```
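A common follow-up problem with PostgreSQL on an NFS-backed volume is initdb refusing to initialize directly in the mount root (because of `lost+found` or ownership on the share). A sketch of the container section pointing PGDATA at a subdirectory of the mount (the image tag and paths are illustrative, not from the question):

```yaml
      containers:
      - name: postgres
        image: postgres:11          # illustrative tag
        env:
        - name: PGDATA
          # keep the data directory one level below the NFS mount point
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs2           # the PVC defined above
```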

DRBD Project Implementation: A Highly Available NFS Architecture (NFS + Heartbeat + DRBD)

£可爱£侵袭症+ submitted on 2021-01-11 03:40:50
We currently run two NFS servers in production, one primary and one backup, with primary-to-backup data synchronization handled by rsync. The data is mostly image content — tens of millions of small image files. Under the current setup the NFS service is a single point of failure, and the rsync-based synchronization cannot be fully real-time, so consistency cannot be guaranteed. To protect service availability and data safety, we therefore need a new architecture that removes the NFS single point and keeps the data synchronized in real time.

And then... nothing more came of that.

Below is the (admittedly ugly) diagram of the new architecture. It has already been deployed in the company's test environment and has gone through some, though not exhaustive, testing.

Architecture topology:

Brief description: The two NFS servers communicate with the other internal business servers over the em1 NIC; em2 carries the heartbeat traffic between the two NFS servers; em3 carries the DRBD replication traffic. The two image servers in front consume the NFS cluster through the VIP 192.168.0.219 that the cluster exposes.

I. Project infrastructure and basic information

1. Hardware

Hardware configuration of the two existing NFS storage servers:

CPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
MEM: 16G
RAID: RAID 1
Disk: SSD 200G x 2
NICs: 4 integrated gigabit NICs (link is up at 1000 Mbps, full duplex)

Hardware configuration of the two front-end static image servers: omitted.

2. Network

Floating VIP
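To make the em3 replication link concrete, here is a minimal DRBD resource file for such a pair. The hostnames, backing partition, and em3 addresses are all assumptions, since the article excerpt does not give them:

```
# /etc/drbd.d/nfsdata.res -- hypothetical resource for the shared NFS data
resource nfsdata {
    protocol C;                   # synchronous replication, for consistency
    on nfs-node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;      # assumed SSD data partition
        address   10.0.3.1:7788;  # assumed em3 address
        meta-disk internal;
    }
    on nfs-node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.3.2:7788;
        meta-disk internal;
    }
}
```

Heartbeat would then manage the VIP (192.168.0.219), the DRBD primary role, the filesystem mount, and the NFS service as one failover group.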