checkpoint

PostgreSQL Parameter Tuning (Performance Optimization)

怎甘沉沦 submitted on 2019-12-11 23:26:55
PostgreSQL Parameter Tuning (Performance Optimization) https://www.cnblogs.com/VicLiu/p/11854730.html — I learned about shared_pool from this article; it is well written, though I have not read it closely yet. Yesterday I installed PostgreSQL both with and without internet access. With internet access the installation went very smoothly, but without it the experience was a different matter entirely, full of twists and turns. Interested readers can try setting it up themselves. The first thing to do after PostgreSQL is installed is to run some tests and then tune the parameters.
/* CPU: check the CPU model */ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
/* number of physical CPUs */ cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l
/* number of logical CPUs */ cat /proc/cpuinfo | grep "processor" | wc -l
/* number of cores per CPU */ cat /proc/cpuinfo | grep "cpu cores" | uniq
/* number of logical CPUs packaged in a single physical CPU */ cat /proc/cpuinfo | grep "siblings" | uniq
/* determine whether hyper-threading is enabled ## logical CPUs > physical CPUs x cores per CPU # hyper-threading enabled ## logical CPUs = physical CPUs
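Purely as an illustration (not part of the original post), a small Python sketch of the hyper-threading check described by the comments above, assuming a Linux host with /proc/cpuinfo:

def cpu_topology(path="/proc/cpuinfo"):
    # Count logical processors, distinct physical packages, and cores per package.
    physical_ids, logical, cores = set(), 0, 0
    with open(path) as f:
        for line in f:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("physical id"):
                physical_ids.add(line.split(":")[1].strip())
            elif line.startswith("cpu cores"):
                cores = int(line.split(":")[1].strip())
    return len(physical_ids), cores, logical

physical, cores, logical = cpu_topology()
# Hyper-threading is on when logical CPUs exceed physical CPUs x cores per CPU.
print("hyper-threading:", "on" if logical > physical * cores else "off")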

Keras model training memory leak

夙愿已清 submitted on 2019-12-11 10:19:43
Question: I'm new to Keras, Tensorflow and Python, and I'm trying to build a model for personal use/future learning. I've just started with Python and came up with this code (with the help of videos and tutorials). My problem is that Python's memory usage slowly creeps up with each epoch, even after constructing a new model. Once memory hits 100% the training just stops with no error/warning. I don't know much yet, but the issue should be somewhere within the loop (if I'm not mistaken). I know
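Not part of the question, but a commonly suggested mitigation when models are rebuilt inside a loop is to clear the Keras backend session and force garbage collection between iterations; a minimal sketch with placeholder data (the asker's actual code is not shown here):

import gc
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")

for run in range(5):
    tf.keras.backend.clear_session()   # drop graphs left over from the previous model
    gc.collect()
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)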

When does the Hadoop framework create a checkpoint (expunge) in its “current” directory in trash?

穿精又带淫゛_ submitted on 2019-12-11 01:45:04
Question: For a long time I have observed that the Hadoop framework sets a checkpoint on the trash current directory irrespective of the time interval, whereas it permanently deletes the file/directory within the specified deletion interval after the automatic checkpoint is created. Here is what I have tested:
vi core-site.xml
<property> <name>fs.trash.interval</name> <value>5</value> </property>
hdfs dfs -put LICENSE.txt /
hdfs dfs -rm /LICENSE.txt
fs.TrashPolicyDefault: Namenode trash configuration: Deletion

The relationship between Oracle's SCN and Checkpoint_Change#

非 Y 不嫁゛ submitted on 2019-12-10 16:59:16
We know that Oracle has both the SCN (System Change Number) and Checkpoint_Change#, so what is the relationship between the two? In fact Checkpoint_Change# is derived from the SCN: the SCN changes constantly, while Checkpoint_Change# only changes when a checkpoint occurs, and its value is taken from the SCN at that moment. The following example illustrates this.
1. Get the current SCN
SQL> select dbms_flashback.get_system_change_number() from dual;
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()
-----------------------------------------
1275075
2. Trigger a checkpoint
SQL> alter system checkpoint;
System altered.
3. View the checkpoint from the data files and data file headers
SQL> column name format a50;
SQL> select name,checkpoint_change# from v$datafile;
NAME CHECKPOINT_CHANGE#
-------------------------------------------------- ------------------
E:\APP

log file switch (checkpoint incomplete) - an easily misdiagnosed event

早过忘川 submitted on 2019-12-10 13:09:50
This article is reposted from https://blogs.oracle.com/database4cn/log-file-switch-checkpoint-incomplete-%e5%ae%b9%e6%98%93%e8%a2%ab%e8%af%af%e8%af%8a%e7%9a%84event
First, look at the following AWR report: a 12-minute sample with a DB Time of 105 minutes.
DB Name: R11204, DB Id: 2114874159, Instance: R11204, Inst num: 1, Startup Time: 23-Oct-17 10:10, Release: 11.2.0.4.0, RAC: NO
Host Name: nascds18, Platform: Linux x86 64-bit, CPUs: 2, Cores: 2, Sockets: 1, Memory (GB): 11.64
Begin Snap: 3, Snap Time: 23-Oct-17 10:55:46, Sessions: 37, Cursors/Session: 2.5
End Snap: 4, Snap Time: 23-Oct-17 11:08:27, Sessions: 53, Cursors/Session: 2.3
Elapsed: 12.67 (mins), DB Time: 105.90 (mins)
Among the top events we find buffer busy waits and log file switch

SVN tagging equivalent in TFS 2012

风格不统一 submitted on 2019-12-10 02:26:25
Question: I recently migrated to TFS 2012, and I have worked with SVN for a long time. In SVN I used "Tags" to mark some important "checkpoints" of development, i.e. when I finished a software version (alpha, beta) I created a tag for that version. If some mistake happens, I am "protected". Now I need the same behaviour (or an equivalent) in TFS source control, but I'm confused by its structure. How do I use "tagging" in TFS? Answer 1: In Team Foundation Server, labels are similar to tags

Codeforces 1264C Beautiful Mirrors with queries (probability DP)

守給你的承諾、 submitted on 2019-12-09 23:00:22
A teammate asked me about this problem, so I gave it a try and got it accepted. Problem statement: A person visits mirrors 1 through n (some of which are marked as checkpoints) and asks each mirror in turn whether he is the most beautiful. The \(i\)-th mirror answers Yes with probability \(p_i\) and No with probability \(1-p_i\). If mirror \(i\) answers Yes, he moves on to ask mirror \(i+1\); otherwise he goes back and starts again from the previous checkpoint. Find the expected number of questions needed until mirror n has been asked and answers Yes. There are \(Q\) updates, each of which adds or removes a checkpoint; mirror 1 is always guaranteed to be a checkpoint. \(Q,n\leq 2\times 10^5\). Solution: Start with probability DP. Let \(E_i\) be the expected number of questions still needed when the first \(i-1\) mirrors have been passed and mirror \(i\) is about to be asked. Clearly \(E_{n+1}=0\), and we have the recurrence \[ E_i=p_iE_{i+1}+(1-p_i)E_{c}+1 \] where \(c\) is the largest checkpoint index not exceeding \(i\); we also treat \(n+1\) as a checkpoint. Consider a checkpoint \(c\) and let \(c'\) be the smallest checkpoint greater than it. Then \[ \begin{align} E_{c'-1}&=p_{c'-1}E_{c'}+(1-p_{c'-1})E_c+1\\ E_{c'-2}&=p_{c'-2
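Beyond the point where the excerpt is cut off, unrolling the recurrence across a whole segment between consecutive checkpoints \(c\) and \(c'\) typically leads (sketched here under the standard analysis of this problem, not quoted from the post) to the per-segment cost \[ E_c-E_{c'}=\frac{\sum_{j=c}^{c'-1}\prod_{k=c}^{j-1}p_k}{\prod_{k=c}^{c'-1}p_k}, \] so the answer \(E_1\) is the sum of these costs over consecutive checkpoint pairs, a sum that can be maintained as checkpoints are added or removed.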

Image Recognition Model

爱⌒轻易说出口 submitted on 2019-12-09 09:08:27
1. Data preparation. First, some data preparation work is needed: (1) split the dataset into a training set and a validation set, and (2) convert it to TFRecord format. The datasets and code used are provided in the data_prepare/ folder. Start by splitting your own dataset into a training set and a validation set: the training set is used to train the model, and the validation set is used to verify the model's accuracy. The article provides an experimental satellite-image classification dataset with 6 classes, as shown in the table below. In the data_prepare directory there is a pic folder that stores the original image files; it contains two subdirectories, train and validation, holding the training and validation images respectively. In each directory, all images are stored in folders named after their classes, and each class folder contains the original images (e.g. jpg files). Then, in the data_prepare folder, use the pre-written script data_convert.py with the following command to convert the images to TFRecord format: python data_convert.py. Some of the parameters of data_convert.py are explained as follows: -t pic/: convert the data in the pic folder; the pic folder must contain a train directory and a validation directory, representing the training and validation datasets respectively. --train-shards 2: split the training data into two shards, so the final training data consists of two TFRecord files. If your own dataset is larger
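data_convert.py itself is not reproduced in the excerpt; purely as a hedged illustration of what converting class-named image folders into TFRecord format usually involves (the feature names and the example call are made up, not taken from the script):

import os
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_tfrecord(image_dir, out_path, class_names):
    # Walk class-named subfolders (as in pic/train) and write one TFRecord file.
    with tf.io.TFRecordWriter(out_path) as writer:
        for label, cls in enumerate(class_names):
            for fname in os.listdir(os.path.join(image_dir, cls)):
                with open(os.path.join(image_dir, cls, fname), "rb") as f:
                    example = tf.train.Example(features=tf.train.Features(feature={
                        "image/encoded": _bytes_feature(f.read()),
                        "image/class/label": _int64_feature(label),
                    }))
                writer.write(example.SerializeToString())

# e.g. write_tfrecord("pic/train", "train-00000-of-00002.tfrecord", ["class_a", "class_b"])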

Checkpoints in Google Colab

拜拜、爱过 submitted on 2019-12-08 02:47:25
How do I store my trained model on Google Colab and retrieve it later on my local disk? Will checkpoints work? How do I store them and retrieve them after some time? Could you please include code for that? It would be great. Google Colab instances are created when you open the notebook and are deleted later, so you can't access data across different runs. If you want to download the trained model to your local machine you can use: from google.colab import files files.download(<filename>) And similarly, if you want to upload the model from your local machine you can do: from google.colab import files
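A minimal sketch of the download pattern the answer suggests, assuming a Keras model trained in the Colab runtime (the model definition and filename are arbitrary placeholders):

from google.colab import files
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
# ... train the model here ...
model.save("my_model.h5")        # write the model to the Colab filesystem
files.download("my_model.h5")    # triggers a browser download to the local disk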

Basic Working Principles of Flink

旧城冷巷雨未停 submitted on 2019-12-07 20:22:35
Flink is a newer stream-processing engine implemented in Java. It can process both stream data and batch data, covering the functionality of both Spark and Spark Streaming. Unlike Spark, Flink fundamentally has only the concept of a stream; a batch is treated as a special stream. At runtime, Flink consists of three main components: JobClient, JobManager and TaskManager. The basic workflow is shown in the figure: the user first submits a Flink program to the JobClient; after the JobClient processes, parses and optimizes it, the program is submitted to the JobManager, and finally the TaskManager runs the tasks. JobClient: the JobClient is the bridge between a Flink program and the JobManager. It is mainly responsible for receiving the program, parsing and optimizing its execution plan, and then submitting that plan to the JobManager. To understand Flink's parsing process, a brief introduction to Flink's operators is needed. Flink has three main categories of operators: Source Operators, which, as the name suggests, are data-source operations such as files, sockets, Kafka, etc., and generally appear at the very beginning of a program; Transformation Operators, which are responsible for data transformation, with operators such as map, flatMap and reduce belonging to this category; Sink
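The post describes the Java implementation and its internal components; purely as an illustration of the Source / Transformation / Sink operator categories it lists, a minimal PyFlink sketch (assuming the apache-flink Python package is installed):

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# Source operator: where the data comes from (here an in-memory collection).
lines = env.from_collection(["flink checkpoint", "stream and batch"])
# Transformation operator: flat_map splits each line into words.
words = lines.flat_map(lambda line: line.split(" "))
# Sink operator: where the results go (here stdout).
words.print()
env.execute("word_split_example")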