YAML

Force YAML values to be strings

浪尽此生 submitted on 2021-02-08 14:32:34
Question: Look at this code, under Python 2.7:

    >>> import yaml
    >>> yaml.load('string: 01')
    {'string': 1}
    >>>

:( Is it possible to obtain the string 01 without modifying the YAML file? I didn't find anything in the docs.

Answer 1: Try:

    >>> import yaml
    >>> yaml.load('string: 01', Loader=yaml.loader.BaseLoader)
    {u'string': u'01'}

Answer 2: I was looking for exactly the opposite effect: numbers were being converted to strings, but numbers were wanted. I was accidentally using the BaseLoader (damn copy-paste!). The …
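A minimal sketch of the BaseLoader approach (assuming a current PyYAML; on Python 3 the strings simply lose their u'' prefix): BaseLoader skips implicit type resolution entirely, so every scalar loads as a string.

```python
import yaml

doc = "string: 01"

# Default resolver: "01" matches YAML 1.1's integer pattern, so it becomes 1.
print(yaml.safe_load(doc))                      # {'string': 1}

# BaseLoader performs no implicit typing: every scalar stays a string.
print(yaml.load(doc, Loader=yaml.BaseLoader))   # {'string': '01'}
```

The alternative, quoting the value as '01' in the document itself, is ruled out by the question's "without modifying the yaml file" constraint.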

Reuse portion of github action across jobs

£可爱£侵袭症+ submitted on 2021-02-08 14:18:13
Question: I have a workflow for CI in a monorepo, and two projects end up being built by it. The jobs run fine; however, I'm wondering if there is a way to remove the duplication in this workflow.yml file around setting up the runner for each job. I keep the jobs split so they run in parallel, since they do not rely on one another, and so the CI finishes faster: it's a big difference, 5 minutes vs. 10+, when waiting for CI to finish.

    jobs:
      job1:
        name: PT.W Build
        runs-on: macos-latest
        steps: …
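One common way to factor out the shared steps (a sketch, not taken from the question; the paths and tool versions below are hypothetical) is a local composite action that each job calls after checking out the repo:

```yaml
# .github/actions/setup/action.yml  (hypothetical path and contents)
name: Shared setup
description: Runner setup shared by both build jobs
runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: '18'
    # ...any other shared setup steps...

# In workflow.yml, each job then shrinks to:
#   job1:
#     name: PT.W Build
#     runs-on: macos-latest
#     steps:
#       - uses: actions/checkout@v4      # checkout first so the local action exists on disk
#       - uses: ./.github/actions/setup
#       # ...project-specific build steps...
```

Reusable workflows (`on: workflow_call`) are the other option when whole jobs, not just steps, should be shared; both jobs still run in parallel either way.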

Why does the YAML spec mandate a space after the colon?

≡放荡痞女 submitted on 2021-02-08 12:19:36
Question: The YAML spec clearly states:

    Mappings use a colon and space (": ") to mark each key: value pair.

So this is legal:

    foo: bar

But this isn't:

    foo:bar

I see many people online ranting about the space, and I think they have a point; I got burned by it several times myself. Why is the space mandatory? What was the design consideration behind it?

Answer 1: It's easy to miss, because that specification uses the bizarre convention of only highlighting the last character of an internal link, but the …
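The distinction is easy to see with PyYAML: without the space, foo:bar parses as one plain scalar rather than a key/value pair, and that is precisely what allows colons to appear unquoted inside values (URLs being the classic case). A small illustration:

```python
import yaml

# With the space: a mapping.
print(yaml.safe_load("foo: bar"))   # {'foo': 'bar'}

# Without the space: a single plain scalar string, not a key/value pair.
print(yaml.safe_load("foo:bar"))    # foo:bar

# This is what lets colons survive unquoted inside values:
print(yaml.safe_load("url: http://example.com/a"))  # {'url': 'http://example.com/a'}
```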

Apache Flink Advanced (Part 4): Flink on Yarn / K8s — Principles and Practice

元气小坏坏 submitted on 2021-02-08 11:57:41
By Zhou Kaibo (Baoniu), Flink China community. This article is compiled from the Apache Flink advanced live-course series, presented by Alibaba technical expert Zhou Kaibo (Baoniu). It covers the principles and practice of Flink on Yarn / K8s in three parts: an overview of the Flink architecture, Flink on Yarn principles and practice, and an analysis of Flink on Kubernetes, and answers some common questions about Flink on Yarn/Kubernetes.

Flink architecture overview – Job: Users write Flink jobs with the DataStream API, DataSet API, SQL, and Table API, which produces a JobGraph. A JobGraph is composed of operators such as source, map(), keyBy()/window()/apply(), and sink. Once the JobGraph is submitted to a Flink cluster, it can run in four modes: Local, Standalone, Yarn, and Kubernetes.

Flink architecture overview – JobManager: The JobManager's main functions include: converting the JobGraph into an ExecutionGraph, which is what is ultimately executed; …

How to set version of the prerelease NuGet according to the latest released NuGet automatically?

强颜欢笑 submitted on 2021-02-08 09:51:44
Question: I have two pipelines, release and prerelease. In the release pipeline the version is set up like this: a counter for the patch, plus a manually set major/minor version:

    variables:
      solution: '**/*.sln'
      buildPlatform: 'Any CPU'
      buildConfiguration: 'Release'
      majorVersion: '1'
      minorVersion: '1'
      patchVersion: $[counter(format('{0}.{1}', variables['majorVersion'], variables['minorVersion']), 0)]
      productVersion: $[format('{0}.{1}.{2}', variables['majorVersion'], variables['minorVersion'], variables['patchVersion …
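The question is truncated, but one frequently used pattern (a sketch only; the variable names here are hypothetical, and how the released patch number is exposed to the prerelease pipeline depends entirely on your setup) is to scope a second counter() to the latest released version, so the prerelease number restarts whenever a new release ships:

```yaml
# Hypothetical prerelease pipeline variables.
variables:
  majorVersion: '1'
  minorVersion: '1'
  # Assumed to come from e.g. a variable group the release pipeline updates:
  releasedPatch: $[variables['latestReleasedPatch']]
  # Counter scoped to the released version: resets to 1 per new release.
  prereleaseNumber: $[counter(format('{0}.{1}.{2}', variables['majorVersion'], variables['minorVersion'], variables['releasedPatch']), 1)]
  productVersion: $[format('{0}.{1}.{2}-preview.{3}', variables['majorVersion'], variables['minorVersion'], variables['releasedPatch'], variables['prereleaseNumber'])]
```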

SnakeYaml dump function writes with single quotes

梦想的初衷 submitted on 2021-02-08 06:56:47
Question: Consider the following code:

    public void testDumpWriter() throws IOException {
        Map<String, Object> data = new HashMap<String, Object>();
        data.put("NAME1", "Raj");
        data.put("NAME2", "Kumar");
        Yaml yaml = new Yaml();
        FileWriter writer = new FileWriter("/path/to/file.yaml");
        for (Map.Entry m : data.entrySet()) {
            String temp = new StringBuilder().append(m.getKey()).append(": ").append(m.getValue()).toString();
            yaml.dump(temp, writer);  // the original snippet wrote to an undefined variable `file`
        }
    }

The output of the above code is

    'NAME1: Raj'
    'NAME2: Kumar'

But I want the …
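The same behavior reproduces in PyYAML, which makes the cause easy to see: dumping a hand-built "KEY: value" string serializes one scalar that happens to contain ": ", so the dumper must quote it to keep it a scalar; dumping the map itself emits plain key/value lines. A Python sketch of the distinction:

```python
import yaml

data = {"NAME1": "Raj", "NAME2": "Kumar"}

# Dumping a pre-built string: quoted, because it is a scalar containing ": ".
print(yaml.dump("NAME1: Raj"))          # 'NAME1: Raj'

# Dumping the mapping itself: keys and values come out unquoted.
print(yaml.dump(data, default_flow_style=False))
# NAME1: Raj
# NAME2: Kumar
```

The SnakeYaml fix is analogous: dump the Map in one call instead of concatenating key and value into a String first.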

Container coredns troubleshooting notes

爷,独闯天下 submitted on 2021-02-08 06:40:16
1. Problem description: After the customer changed the container security group, the network became unreachable.

2. Troubleshooting:
1) The customer reported a network problem in a managed Kubernetes cluster; after a phone call and authorization, we investigated: Pod networking was fine, but DNS resolution failed (pinging an IP worked, pinging a domain name did not).
2) Given the customer's recent changes, we suspected the security group configuration and investigated it further. Finding nothing wrong, we decided to restart the coredns Pod. After the restart, the coredns Pod drifted to another ECS instance and most hosts in the cluster recovered.
3) We confirmed that the host coredns had originally run on had a network connectivity problem; after evicting that host from the cluster, the cluster returned to normal.
4) After testing in a reproduction environment, the root cause was located: the customer had misread the "unbind instance" function on the Kubernetes cluster security group page as unbinding the security group, and accidentally unbound and rebound an ENI. At the same time, the product's health-check mechanism was flawed: it could not detect link problems on the secondary NIC, so the problem could not be discovered and resolved quickly, and the customer's cluster network ended up unreachable.

3. Improvements:
1) Improve the wording of the "unbind instance" function on the security group page, and add a risk warning when a user unbinds a NIC created by a Kubernetes cluster, to prevent misoperations from interrupting the business.
2) Improve the health-check mechanism so that link failures on secondary NICs are discovered quickly.

4. Reproduction:
4.1 Environment preparation:
1) A managed Kubernetes cluster with Terway networking and kube-proxy in IPVS mode, four nodes, with test application pods created. (Figure 1: initial environment)
2 …

Getting started with Knative, part 6: Using Knative

笑着哭i submitted on 2021-02-08 05:15:31
Authors: Brian McClain & Bryan Friedman. Translator: Yin Longfei. Reviewers: Sun Haizhou, Qiu Shida, Wang Gang, Song Jingchao. Knative is a Kubernetes-based platform for building, deploying, and managing modern serverless applications. Getting Started with Knative is an O'Reilly e-book sponsored by Pivotal; reply "knative" to the official account to get the English download link. The Chinese edition is being translated by the ServiceMesher community as a series of articles, of which this is chapter 6.

With a solid grasp of Knative's components from the previous chapters, it's time to look at some more advanced topics. Serving offers considerable flexibility in how traffic is routed, and there are additional build templates that make building applications easy. With just a few lines of code we can create our own event sources. In this chapter we dig into these capabilities to make running our code on Knative even easier.

Creating and running Knative Services: Chapter 2 introduced the concept of a Knative Service. Recall that a Service in Knative is the combination of a single Configuration and a collection of Routes. Within Knative and Kubernetes, it ultimately becomes zero or more containers in Pods, plus the other components that make your application addressable. All of this is backed by a routing layer with powerful traffic-policy options.

Kedro - how to pass nested parameters directly to node

旧巷老猫 submitted on 2021-02-08 03:41:21
Question: Kedro recommends storing parameters in conf/base/parameters.yml. Let's assume it looks like this:

    step_size: 1
    model_params:
      learning_rate: 0.01
      test_data_ratio: 0.2
      num_train_steps: 10000

Now imagine I have some data_engineering pipeline whose nodes.py has a function that looks something like this:

    def some_pipeline_step(num_train_steps):
        """Takes the parameter `num_train_steps` as argument."""
        pass

How would I go about passing that nested parameter straight to this function in data …
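Recent Kedro versions let a node's inputs address a nested value directly with the params: prefix, e.g. `inputs="params:model_params.num_train_steps"`; whether your version supports the dotted form is worth checking against its docs. The underlying mechanism is just a dotted-path walk over the parameters dict, which can be sketched in plain Python:

```python
from functools import reduce

# The parameters dict as Kedro would load it from parameters.yml.
parameters = {
    "step_size": 1,
    "model_params": {
        "learning_rate": 0.01,
        "test_data_ratio": 0.2,
        "num_train_steps": 10000,
    },
}

def resolve(params: dict, dotted_key: str):
    """Walk a nested dict along a dotted path, e.g. 'model_params.num_train_steps'."""
    return reduce(lambda d, k: d[k], dotted_key.split("."), params)

def some_pipeline_step(num_train_steps):
    """Takes the parameter `num_train_steps` as argument."""
    return num_train_steps

print(some_pipeline_step(resolve(parameters, "model_params.num_train_steps")))  # 10000
```

On Kedro versions without dotted access, the fallback is to pass `"params:model_params"` and unpack the dict inside the node function.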