What does 'Compute Engine Network Internet Egress' mean to Google Cloud?

Anonymous (unverified), submitted on 2019-12-03 03:04:01

Question:

I started a simple Tomcat web server on Google Cloud Platform. This month I was billed for a service called 'Compute Engine Network Internet Egress from Americas to China: 2636.552 Gibibyte (Project:xxx)' and for the service 'Compute Engine Network Internet Egress from Americas to Americas'.

What does 'Compute Engine Network Internet Egress from Americas to China' actually mean?

Answer 1:

Just to make sure we're on the same page regarding terminology:

  • ingress: traffic entering or uploaded into Google Cloud Platform
  • egress: traffic exiting or downloaded from Google Cloud Platform

As you can see on the Google Cloud Platform network pricing page, ingress traffic is free, while egress traffic is charged based on its source and destination.
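To see why a line item like the one in the question can add up, here is a back-of-the-envelope estimate. The per-GiB rate used below is an assumption purely for illustration; actual rates depend on destination, volume tier, and current pricing, so always check the official pricing page.

```python
# Rough cost estimate for the egress line item in the question.
# NOTE: ASSUMED_RATE_USD_PER_GIB is a hypothetical rate, used only to
# illustrate the arithmetic -- real GCP rates vary by destination and tier.
ASSUMED_RATE_USD_PER_GIB = 0.23   # placeholder rate for Americas -> China

egress_gib = 2636.552             # figure taken from the billing line item

estimated_cost = egress_gib * ASSUMED_RATE_USD_PER_GIB
print(f"Estimated charge: ${estimated_cost:.2f}")
```

The point is that a few terabytes of unexpected egress can turn into hundreds of dollars, which is why identifying who is downloading your data matters.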

So in your examples:

Compute Engine Network Internet Egress from Americas to China [...]

means that your data, stored in the Americas on Google Cloud Platform, was downloaded by clients in China.

Compute Engine Network Internet Egress from Americas to Americas [...]

means that your data, stored in the Americas on Google Cloud Platform, was downloaded by clients in the Americas.

If this was not expected or intended, i.e. you wanted to run a private server, these may simply be bots hitting your server: downloading every possible HTML page and image file, and following every link. In that case you should put some authentication/authorization in front of your Tomcat server, so that it is not automatically crawled, or attacked, by every bot out there that scans all IPs and attempts to connect to every port in the hope of downloading useful data.
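One way to put authentication in front of Tomcat is a container-managed security constraint in the application's `web.xml`. The sketch below uses BASIC auth; the role name `private-user` is a placeholder, and a matching user must be defined (for example in `conf/tomcat-users.xml`).

```xml
<!-- Sketch only: require BASIC auth for the whole application.
     "private-user" is a placeholder role; define a user with this
     role in conf/tomcat-users.xml. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Entire application</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>private-user</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Private server</realm-name>
</login-config>

<security-role>
  <role-name>private-user</role-name>
</security-role>
```

Note that BASIC auth sends credentials essentially in cleartext, so it should only be used over HTTPS.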

Consider IP filtering as well, or a firewall configuration that does not respond to requests from IP ranges you don't expect to use your service. Again, remember that ingress traffic is free, so as long as a request generates no outbound traffic, you won't be charged for it.
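On Google Cloud, such filtering can be done with VPC firewall rules via the `gcloud` CLI. This is a sketch under assumptions: the rule names, the `default` network, port 8080, and the source range `203.0.113.0/24` (an IETF documentation range) are all placeholders to replace with your own values.

```
# Sketch only: allow Tomcat's port 8080 solely from a trusted range.
# Rule names, network, port, and CIDR below are placeholders.

# Remove any broad allow rule you may have created earlier.
gcloud compute firewall-rules delete default-allow-http --quiet

# Allow TCP 8080 only from the trusted source range.
gcloud compute firewall-rules create allow-tomcat-trusted \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-ranges=203.0.113.0/24
```

Because GCP firewall rules deny unmatched ingress by default, requests from other ranges are simply dropped and never reach Tomcat, so they generate no billable egress.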

Note that you can stop well-behaved, standards-abiding web crawlers with the /robots.txt approach, but you still need to protect your service from the not-so-good actors.
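For completeness, a `robots.txt` served at the site root that asks all compliant crawlers to stay away from everything looks like this. Keep in mind this is purely advisory: malicious bots ignore it.

```
User-agent: *
Disallow: /
```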


