
Erasure Coding Design for Ozone

与世无争的帅哥 submitted on 2020-07-27 08:37:56
Contents: Preface · EC technology and the storage-efficiency gains under EC · EC design in Ozone · Container-level EC implementation · Block-level EC implementation · References. Preface: As is well known, Erasure Coding (EC) plays an increasingly important role in today's storage systems as a way to improve storage efficiency. Hadoop HDFS, for example, already supports EC. In EC mode, HDFS no longer needs to keep as many as 3 redundant replicas for fault tolerance. Higher storage efficiency means fewer storage nodes are needed to hold massive amounts of data. This article, however, is not about the EC implementation in HDFS; instead it discusses the EC design of another storage system, Ozone: how EC can be implemented in Ozone's object-storage model, and the benefits it brings. EC technology and the storage-efficiency gains under EC: EC, short for Erasure Coding, is known in Chinese as 纠删码. The algorithmic details of EC are already covered extensively online, so this article does not go into them. In short, the original data is split into multiple data blocks, and new parity blocks are computed from them by the EC algorithm, as shown below: When any of these data or parity blocks is lost or corrupted, the system can regenerate it via the EC algorithm, thereby protecting the data. One remaining question: where does the storage-efficiency gain come from? Continuing with the example above, with the 3 data blocks
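The split-then-parity idea described above can be sketched with single-parity XOR, a deliberately simplified stand-in for the Reed-Solomon codes real systems use (all names and the sample data here are illustrative):

```python
# Minimal single-parity "erasure coding" sketch: split data into k data
# blocks, derive one XOR parity block, then rebuild a lost data block.
# Real EC schemes (e.g. RS(3,2)) tolerate multiple simultaneous losses;
# XOR parity tolerates exactly one and is used here only to show the idea.

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data, k):
    """Split data into k equally sized data blocks (zero-padded) plus one parity block."""
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(k * size, b"\x00")
    data_blocks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return data_blocks, xor_blocks(data_blocks)

def recover(data_blocks, parity, lost_index):
    """Rebuild the block at lost_index from the surviving blocks and the parity."""
    survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
    return xor_blocks(survivors + [parity])

data_blocks, parity = encode(b"hello ozone erasure coding", k=3)
rebuilt = recover(data_blocks, parity, lost_index=1)
print(rebuilt == data_blocks[1])  # → True
```

The efficiency gain is visible even here: 3 data blocks are protected by 1 extra parity block (1.33x overhead) instead of 3 full replicas (3x overhead).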

Gunicorn failed to start as it fails to identify the version of a package

余生颓废 submitted on 2020-07-10 10:25:23
Question: While Gunicorn is attempting to run the Flask server, the following error is shown: Traceback (most recent call last): File "/root/.local/share/virtualenvs/customer-account-automation-gLS21FFx/lib/python3.7/site-packages/pbr/version.py", line 442, in _get_version_from_pkg_resources provider = pkg_resources.get_provider(requirement) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 344, in get_provider return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] File "
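The traceback shows pbr failing to determine a package's version from installed metadata. One documented escape hatch is pbr's `PBR_VERSION` environment variable, which bypasses that detection entirely (the version string and the commented Gunicorn command below are illustrative, not from the original question):

```shell
# pbr derives a package version from installed metadata or git tags; when
# neither is available it raises the error shown above. The documented
# PBR_VERSION environment variable skips that lookup entirely.
# The version string is illustrative -- substitute your package's real version.
export PBR_VERSION=1.0.0
echo "$PBR_VERSION"
# gunicorn app:app   # then start Gunicorn as usual (command illustrative)
```

A cleaner long-term fix is usually to reinstall the affected package so its metadata is present in the virtualenv; the environment variable is a workaround.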

How to use Eclipse Neon's Mylyn with Jira since connector was discontinued

好久不见. submitted on 2020-05-24 21:17:13
Question: After a terrible decision by Atlassian to discontinue the Eclipse Connector for Jira, it seems to me like there is no way to use Mylyn in Eclipse Neon to integrate with Jira. Is it just me? Are there any workarounds? I tried installing Tasktop Dev Pro, but it failed, complaining about a jar not found in the update site. Besides, it doesn't seem like a proper solution, since it brings in a lot more than we need. Answer 1: It definitely works with Eclipse Oxygen (still), using the update site from

Jira JQL: how to find the busiest hours of a queue?

断了今生、忘了曾经 submitted on 2020-05-17 06:46:47
Question: Jira Server v7.12.1#712002. We have noticed that at certain periods of the day more tickets are assigned to the "Operations" queue than usual, so we need to back this impression with real statistics. We extracted all the tickets that at some point were assigned to the "Operations" queue via the following query: project = "Client Services" AND assignee WAS "Operations". The results of the query above include the timestamp value in the "Updated" field; however, this field reflects the last time the
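Once the matching tickets are exported with their timestamps, finding the busiest hours is a simple bucketing exercise. A minimal sketch, assuming the results were exported with an "Updated"-style column (the column name, timestamp format, and sample rows are illustrative, not from the original question):

```python
# Sketch: bucket exported Jira ticket timestamps by hour of day to find
# the busiest hours. Assumes rows exported from the JQL results with a
# timestamp column such as "Updated" in "YYYY-MM-DD HH:MM" format
# (column name and format are illustrative -- adjust to your export).
from collections import Counter
from datetime import datetime

def busiest_hours(rows, field="Updated", fmt="%Y-%m-%d %H:%M"):
    """Return (hour_of_day, ticket_count) pairs sorted busiest-first."""
    hours = Counter(datetime.strptime(r[field], fmt).hour for r in rows)
    return hours.most_common()

rows = [
    {"Updated": "2020-05-17 06:46"},
    {"Updated": "2020-05-17 06:12"},
    {"Updated": "2020-05-16 14:03"},
]
print(busiest_hours(rows))  # → [(6, 2), (14, 1)]
```

Note the caveat from the question still applies: "Updated" records only the last change, so an accurate per-hour assignment count would need the per-ticket change history rather than this single field.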

How to use JIRA REST client library?

冷暖自知 submitted on 2020-05-15 09:07:18
Question: I need to use JIRA REST client version 5.2.0 or higher. Cloud JIRA does not work with an earlier version of the client. In my pom.xml file I have the following dependencies: <dependency> <groupId>com.atlassian.jira</groupId> <artifactId>jira-rest-java-client-core</artifactId> <version>5.2.1</version> </dependency> <dependency> <groupId>com.atlassian.jira</groupId> <artifactId>jira-rest-java-client-app</artifactId> <version>5.2.1</version> </dependency> When I build the project, I get an
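The excerpt cuts off before the error, but a common cause of build failures with these coordinates is that JRJC artifacts are not published to Maven Central; they are typically resolved from Atlassian's public Maven repository. A hedged pom.xml fragment (the repository URL is the commonly referenced one; verify it against current Atlassian documentation):

```xml
<!-- JRJC artifacts are typically resolved from Atlassian's public Maven
     repository rather than Maven Central. The URL below is the commonly
     referenced one -- verify it against current Atlassian docs. -->
<repositories>
  <repository>
    <id>atlassian-public</id>
    <url>https://packages.atlassian.com/maven/repository/public</url>
  </repository>
</repositories>
```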

Downloading and Using ZooInspector

ぐ巨炮叔叔 submitted on 2020-05-08 06:43:08
ZooInspector is a graphical client for ZooKeeper that lets you inspect and manipulate the znodes on a zkServer through a GUI. Download: https://issues.apache.org/jira/secure/attachment/12436620/ZooInspector.zip. After unzipping, the src directory contains the source code and the build directory contains the compiled output; double-click the .jar file in build to run it. Source: oschina. Link: https://my.oschina.net/u/4353003/blog/4256462

Notes from Debugging a Kafka Service Outage (Downtime)

。_饼干妹妹 submitted on 2020-05-07 16:06:44
Background: An online log-collection service raised an alert, and opening its domain returned a 502 error. The collection service consists of 2 Netty HA servers; the Netty servers parse the protobuf logs delivered by clients and send them to Kafka. Opening one application's log revealed the following error: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s). After ruling out errors in the Netty service, we examined the Kafka logs, found errors there, and investigated as follows. Configuration: OS CentOS 7.4, Kafka version 2.1.0, 3 brokers. There are three Kafka brokers online, with ids 0, 1, and 2; the servers run only the Kafka service. Problem: Are the processes alive? First, jps showed the Kafka process alive on all three machines; Kafka was still running. GC problems? The kafkaServer-gc.log.1.current log showed no GC anomalies. Broker 0/server.log: [2019-08-02 15:17:03,699] WARN Attempting to send response via channel for which there is no open connection, connection id 172.21.3.14:9092-172.21.3.11:54311-107706

ViewFs's Multi-Replication Mode: Nfly Links

我的梦境 submitted on 2020-05-05 15:35:26
Contents: Preface · Origin of the Nfly link mode · Analysis of the Nfly link implementation details · References. Preface: In a multi-cluster deployment, to guarantee a degree of data redundancy, we sometimes back up important data across clusters or across data centers. This avoids disrupting normal data service should a cluster or data center become unavailable one day. Without writing extra code for this, we can use the straightforward DistCp tool to copy data between clusters. That approach, however, cannot replicate data in real time; depending on the use case, we can sync once a day or at an hourly granularity. This article instead introduces a related new ViewFs feature: the Nfly link mode. Origin of the Nfly link mode: In JIRA HADOOP-12077, "Provide a multi-URI replication Inode for ViewFs", the community proposed supporting cross-cluster replication in ViewFs through multi-URI addresses. The multi-URI mount point link mentioned there is the Nfly mode, where N refers to N data centers. In the community JIRA, keeping redundant copies of data across different data centers (clusters) serves high availability on the one hand; when a problem occurs, the client can fail over to the next URI to read and write data
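The multi-URI mount point described above is configured in the ViewFs mount table. A hedged sketch based on the ViewFs documentation for HADOOP-12077 (the mount-table name, path, and cluster URIs are illustrative; verify the exact `linkNfly` property syntax against your Hadoop version's ViewFs guide):

```xml
<!-- Hedged sketch of an Nfly mount point (syntax per the ViewFs guide;
     mount-table name, path, and URIs are illustrative). The /ads path
     is replicated across three data centers: -->
<property>
  <name>fs.viewfs.mounttable.global.linkNfly../ads</name>
  <value>hdfs://datacenter-east/ads,hdfs://datacenter-west/ads,hdfs://datacenter-central/ads</value>
</property>
```

With such a link, a write through the ViewFs path fans out to the listed URIs, and a read can fail over to the next URI when one cluster is unavailable, which is exactly the high-availability behavior the JIRA describes.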