Question
I want to simply log in to a postgres db from outside my K8s cluster. I've created the following config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PORT
              value: '5432'
            - name: POSTGRES_DB
              value: postgres
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432
Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres-srv:5432"
I've checked kubectl get services and attempted to use the endpoint and the cluster-ip. Neither of these worked.
psql "postgresql://postgres:password@[ip]:5432/postgres"
The pod is running and the logs say everything is ready. Anything I'm missing here? I'm running the cluster in DigitalOcean.
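(For reference: with type NodePort, external clients connect to a node's public IP on the auto-allocated node port, not the cluster IP. Assuming the Service name from the manifest above, the port can be looked up with:)

```shell
# Show the auto-allocated node port for the postgres-srv Service
kubectl get svc postgres-srv -o jsonpath='{.spec.ports[0].nodePort}'

# List nodes with their external IPs; connect with psql to <node-ip>:<node-port>
kubectl get nodes -o wide
```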
Edit:
I want to be able to access the DB from my host. (sub.domain.com) I've bounced the deployments and still can't get in. The only config I've targeted is what is shown above. I do have an A record for my domain and can access my other exposed pods via my ingress nginx service
Answer 1:
You can expose TCP and UDP services with ingress-nginx configuration.
For example using GKE with ingress-nginx, nfs-server-provisioner and the bitnami/postgresql helm charts:
kubectl create secret generic -n default postgresql \
--from-literal=postgresql-password=$(openssl rand -base64 32) \
--from-literal=postgresql-replication-password=$(openssl rand -base64 32)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -n default postgres bitnami/postgresql \
--set global.storageClass=nfs-client \
--set existingSecret=postgresql
Patch the ingress-nginx tcp-services ConfigMap:
kubectl patch cm -n ingress-nginx tcp-services -p '{"data": {"5432": "default/postgres-postgresql:5432"}}'
Update the controller's Service to add the proxied port (i.e. kubectl edit svc -n ingress-nginx ingress-nginx):
- name: postgres
  port: 5432
  protocol: TCP
  targetPort: 5432
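If you prefer not to edit the Service interactively, the same port can be appended with a JSON patch (assuming the controller Service is named ingress-nginx, as above):

```shell
# Append the postgres port to the ingress-nginx controller Service
kubectl patch svc -n ingress-nginx ingress-nginx --type=json \
  -p '[{"op": "add", "path": "/spec/ports/-", "value": {"name": "postgres", "port": 5432, "protocol": "TCP", "targetPort": 5432}}]'
```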
Note: you may have to update the existing ingress-nginx controller deployment's args (i.e. kubectl edit deployments.apps -n ingress-nginx nginx-ingress-controller) to include --tcp-services-configmap=ingress-nginx/tcp-services, and bounce the ingress-nginx controller if you edit the deployment spec (i.e. kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=0 && kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=3).
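On newer kubectl versions, a rollout restart is a one-step alternative to scaling down and back up (deployment name as above):

```shell
# Restart the controller pods so the new --tcp-services-configmap arg takes effect
kubectl rollout restart deployment -n ingress-nginx nginx-ingress-controller
```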
Test the connection:
export PGPASSWORD=$(kubectl get secrets -n default postgresql -o jsonpath='{.data.postgresql-password}' | base64 -d)
docker run --rm -it \
  -e PGPASSWORD=${PGPASSWORD} \
  --entrypoint psql \
  --network host \
  postgres:13-alpine -U postgres -d postgres -h example.com
Note: I manually created an A record in Google Cloud DNS to resolve the hostname to the clusters external IP.
Update: in addition to creating the ingress-nginx config, installing the bitnami/postgresql chart, etc., it was necessary to disable "Proxy Protocol" on the Load Balancer to get the connections working for a deployment in DigitalOcean (otherwise postgres logs "LOG: invalid length of startup packet").
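On DigitalOcean the load balancer is configured through Service annotations, so one way to switch proxy protocol off (annotation name per DigitalOcean's cloud controller manager; adjust the Service name to your setup) is:

```shell
# Disable proxy protocol on the LB fronting the ingress-nginx controller Service
kubectl annotate svc -n ingress-nginx ingress-nginx \
  service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol="false" --overwrite
```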
Source: https://stackoverflow.com/questions/64040744/unable-to-login-to-postgres-inside-kubernetes-cluster-from-the-outside