memory

Processing OWL with Jena

﹥>﹥吖頭↗ Submitted on 2020-03-24 12:43:58
Create an OWL model; the argument to createOntologyModel specifies which language profile and reasoner to use, e.g. OWL DL:

    OntModel m = ModelFactory.createOntologyModel();
    OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);

    OntModelSpec              Language profile   Storage model   Reasoner
    OWL_MEM                   OWL full           in-memory       none
    OWL_MEM_TRANS_INF         OWL full           in-memory       transitive class-hierarchy inference
    OWL_MEM_RULE_INF          OWL full           in-memory       rule-based reasoner with OWL rules
    OWL_MEM_MICRO_RULE_INF    OWL full           in-memory       optimised rule-based reasoner with OWL rules
    OWL_MEM_MINI_RULE_INF     OWL full           in-memory       rule-based reasoner with subset of OWL rules
    OWL_DL_MEM                OWL DL             in-memory       none
    OWL_DL_MEM
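A minimal usage sketch for one of these specs, assuming Jena 3.x package names; the ontology file name is a placeholder:

    import org.apache.jena.ontology.OntModel;
    import org.apache.jena.ontology.OntModelSpec;
    import org.apache.jena.rdf.model.ModelFactory;

    public class OwlDemo {
        public static void main(String[] args) {
            // In-memory OWL DL model with no reasoner attached
            OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
            // Load an ontology; "family.owl" is a placeholder path
            m.read("file:family.owl");
            // List the named classes seen under the chosen language profile
            m.listClasses().forEachRemaining(c -> System.out.println(c.getURI()));
        }
    }

Swapping in OWL_DL_MEM_RULE_INF or another spec from the table changes which inferred statements the same queries return.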

StackOverflowError vs OutOfMemoryError

别说谁变了你拦得住时间么 Submitted on 2020-03-24 06:48:35
When you start a JVM, you define how much RAM it can use for processing. The JVM divides this into dedicated memory regions for its own purposes; two of those are the stack and the heap. OutOfMemoryError is related to the heap. If you keep large objects (or many referenced objects) in memory, you will see an OutOfMemoryError. If you hold strong references to objects, the GC cannot reclaim the memory allocated to those objects. When the JVM tries to allocate memory for a new object and not enough space is available, it throws an OutOfMemoryError because it cannot allocate the required amount of memory. How to avoid: Make
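A compact sketch (my illustration, not from the original answer) that reproduces both errors; run it with a small heap, e.g. java -Xmx16m MemoryErrors:

    import java.util.ArrayList;
    import java.util.List;

    public class MemoryErrors {
        static long depth = 0;

        // Unbounded recursion exhausts the thread's stack -> StackOverflowError
        static void recurse() {
            depth++;
            recurse();
        }

        // Strong references keep every array reachable, so the GC can free
        // nothing and the heap eventually fills -> OutOfMemoryError
        static void exhaustHeap() {
            List<long[]> hoard = new ArrayList<>();
            while (true) {
                hoard.add(new long[1_000_000]); // ~8 MB per iteration, all retained
            }
        }

        public static void main(String[] args) {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("StackOverflowError at depth " + depth);
            }
            exhaustHeap(); // throws java.lang.OutOfMemoryError: Java heap space
        }
    }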

Reducing NbClust memory usage

ぐ巨炮叔叔 Submitted on 2020-03-23 17:49:30
Question: I need some help with the massive memory usage of the NbClust function. On my data, memory balloons to 56GB, at which point R crashes with a fatal error. Using debug(), I was able to trace the error to these lines:

    if (any(indice == 23) || (indice == 32)) {
        res[nc - min_nc + 1, 23] <- Index.sPlussMoins(cl1 = cl1, md = md)$gamma

Debugging Index.sPlussMoins revealed that the crash happens during a for loop. The iteration it crashes at varies, and during the loop memory usage varies
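One mitigation worth trying, offered as an assumption since the excerpt cuts off before any answer: request only inexpensive indices instead of index = "all", which bypasses Index.sPlussMoins (the gamma/gplus/tau computation over point pairs) entirely:

    library(NbClust)

    # Toy data standing in for the real dataset
    set.seed(42)
    x <- matrix(rnorm(200 * 5), ncol = 5)

    # index = "all" triggers Index.sPlussMoins, which builds large pairwise
    # structures; naming a cheap index such as "ch" avoids that code path.
    res <- NbClust(x, distance = "euclidean", min.nc = 2, max.nc = 8,
                   method = "kmeans", index = "ch")
    res$Best.nc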

Introduction to MySQL Storage Engines

半世苍凉 Submitted on 2020-03-23 13:01:28
Since MySQL 5.5 the default storage engine is InnoDB. Among the engines, InnoDB and BDB provide transaction-safe tables; all the other storage engines are non-transaction-safe. To change the default engine, modify default-storage-engine in the configuration file. You can check the current default engine with: show variables like 'default_storage_engine';. The commands show engines and show variables like 'have%' list the engines the current server supports; a Value of DISABLED means the server supports that engine but it was disabled at startup. Since MySQL 5.1, the INFORMATION_SCHEMA database contains an ENGINES table that provides exactly the same information as show engines;, so you can use the following statement to find which storage engines support transactions: select engine from information_schema.engines where transactions = 'yes';. You can specify the engine with the engine keyword when creating or altering a table. The main storage engines are MyISAM, InnoDB, MEMORY, and MERGE. Specify the engine at table-creation time via engine=... or type=..., and use show table status from DBname to check the engine of a given table
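The statements mentioned above, gathered into one runnable sequence (DBname and t are placeholders):

    -- Current default engine
    SHOW VARIABLES LIKE 'default_storage_engine';

    -- Engines this server supports (DISABLED = compiled in, turned off at startup)
    SHOW ENGINES;

    -- Engines that support transactions
    SELECT engine FROM information_schema.engines WHERE transactions = 'YES';

    -- Pick an engine explicitly at table-creation time
    CREATE TABLE t (id INT PRIMARY KEY) ENGINE = InnoDB;

    -- Check which engine an existing table uses
    SHOW TABLE STATUS FROM DBname LIKE 't';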

38 Should you use tables with the Memory engine?

£可爱£侵袭症+ Submitted on 2020-03-23 12:47:00
How an in-memory table organizes its data:

    create table t1(id int primary key, c int) engine=Memory;
    create table t2(id int primary key, c int) engine=innodb;
    insert into t1 values(1,1),(2,2),(3,3),(4,4),(5,5),(6,6),(7,7),(8,8),(9,9),(0,0);
    insert into t2 values(1,1),(2,2),(3,3),(4,4),(5,5),(6,6),(7,7),(8,8),(9,9),(0,0);

You can see that in the result set of the in-memory table t1 the row with 0 comes last, while in the result set of the InnoDB table t2 the row with 0 comes first. The difference comes from how the two engines organize their primary key indexes. Table t2 is an InnoDB table; InnoDB stores the data on the id primary-key index tree, which is a B+ tree. Values on the primary key index are stored in order, so a select * scans the leaf nodes from left to right, and 0 ends up in the first row. Unlike InnoDB, the Memory engine keeps data and index separate: the in-memory table's data part is stored on its own as an array, while the primary key id index stores each row's position
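Running the two scans makes the difference visible (a sketch; the orderings in the comments are the behaviour described above):

    select * from t1;  -- Memory engine: rows return in array (insertion) order, so (0,0) is last
    select * from t2;  -- InnoDB: the scan walks the primary-key B+ tree left to right, so (0,0) is first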

Monitoring JVM Memory with Zabbix

空扰寡人 Submitted on 2020-03-22 17:50:58
The previous post ended with jstat, which can report JVM memory statistics; combined with Zabbix, that lets you monitor JVM memory across multiple instances. 1. The following two scripts are deployed on the monitored host. vm.py looks up the PIDs of JVM instances; the ps command could equally be swapped for the JDK's bundled jps tool:

    #!/usr/bin/python
    import os
    import json

    data = {}
    tcp_list = []
    port_list = []
    # Find WebLogic server PIDs (despite the variable names, these are PIDs, not ports)
    command = "ps -ef | grep weblogic.Server | grep -v \"grep web\" | awk '{print $2}'"
    lines = os.popen(command).readlines()
    for line in lines:
        port = line.strip('\n')
        # port = line.split(':')[1]
        port_list.append(port)
    for port in list(set(port_list)):
        port_dict = {}
        port_dict['{#PID}'] = port
        tcp_list.append(port_dict)
    data['data'] = tcp_list
    jsonStr = json.dumps(data, sort_keys=True, indent=4)
    print(jsonStr)
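Run on the host, the script emits Zabbix low-level-discovery JSON along these lines (the PID value is illustrative):

    {
        "data": [
            {
                "{#PID}": "4321"
            }
        ]
    }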

CUDA ---- Introduction

那年仲夏 Submitted on 2020-03-22 15:12:39
CUDA Introduction: CUDA is a parallel computing platform and a C-like programming model that lets you implement parallel algorithms about as easily as writing C code. As long as an NVIDIA GPU is installed, your parallel program can run on many kinds of devices: desktops, laptops, even tablets. Familiarity with C will help you master CUDA quickly. CUDA Programming: CUDA lets your program execute on a heterogeneous system, i.e. a CPU plus a GPU, each with its own memory space, separated by the PCI-Express bus. So we should first note the distinction in terminology: Host: CPU and its memory (host memory). Device: GPU and its memory (device memory). In code, the prefix h_ conventionally marks host memory and d_ device memory. The kernel is the key concept in CUDA programming: it is the code that runs on the GPU, marked with the __global__ qualifier. The device can carry out most of its work independently of the host: once a kernel launches, control returns immediately to the CPU to perform other tasks, so CUDA programming is asynchronous. A typical CUDA program consists of serial code complemented by parallel code; the serial code runs on the host, the parallel code on the device. Host-side code is standard C; device code is CUDA C. All the code can live in a single source file, or be spread across several files and libraries. The NVIDIA C compiler (nvcc
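A minimal sketch illustrating the conventions above (h_/d_ prefixes, a __global__ kernel, and the asynchronous launch):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: runs on the device, one thread per element
    __global__ void addOne(int *d_data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d_data[i] += 1;
    }

    int main() {
        const int n = 1024;
        int h_data[n];                       // host memory (h_ prefix)
        for (int i = 0; i < n; ++i) h_data[i] = i;

        int *d_data;                         // device memory (d_ prefix)
        cudaMalloc(&d_data, n * sizeof(int));
        cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);

        addOne<<<(n + 255) / 256, 256>>>(d_data, n);  // launch returns immediately
        // ... the CPU is free to do other work here ...
        cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost); // implicit sync

        printf("h_data[0]=%d h_data[1023]=%d\n", h_data[0], h_data[1023]);   // 1 and 1024
        cudaFree(d_data);
        return 0;
    }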

On Linux, how do you determine which individual pages are resident?

喜夏-厌秋 Submitted on 2020-03-21 19:18:08
Question: How can one determine which individual pages are resident (i.e., committed in RAM)? On Linux, /proc/pid/smaps gives, for a fixed set of ranges, how many bytes are resident in each range, but this information doesn't tell you which actual ranges of memory are resident. As for what this is intended for: I already have data associating allocation ranges with source line info, which is useful for finding who is allocating how much. Given resident memory ranges, I could correlate the
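One common approach (my addition, not necessarily what the asker settled on) is the mincore(2) syscall, which fills in one status byte per page of a mapped range; bit 0 indicates residency:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        size_t len = 8 * page;

        // Map 8 anonymous pages but touch only pages 0 and 3,
        // so only those two become resident
        unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        buf[0] = 1;
        buf[3 * page] = 1;

        unsigned char vec[8];   // one status byte per page
        if (mincore(buf, len, vec) != 0) { perror("mincore"); return 1; }
        for (int i = 0; i < 8; i++)
            printf("page %d: %s\n", i, (vec[i] & 1) ? "resident" : "not resident");

        munmap(buf, len);
        return 0;
    }

For pages belonging to another process, /proc/pid/pagemap exposes similar per-page information without needing to run code inside that process.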

Memory consumption of a list and set in Python

房东的猫 Submitted on 2020-03-21 18:07:27
Question:

    >>> from sys import getsizeof
    >>> a=[i for i in range(1000)]
    >>> b={i for i in range(1000)}
    >>> getsizeof(a)
    9024
    >>> getsizeof(b)
    32992

My question is: why does a set consume so much more memory than a list? Lists are ordered, sets are not. Is it the internal structure of a set that consumes the memory? Or does a list contain pointers while a set does not? Or maybe sys.getsizeof is wrong here? I've seen questions about tuples, lists and dictionaries, but I could not find any comparison between
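A sketch of the usual explanation, offered as an assumption since the excerpt cuts off before any answer: in CPython a set is an open-addressed hash table that deliberately over-allocates, while a list is a compact array of pointers:

    from sys import getsizeof

    # CPython keeps a set's hash table no more than ~3/5 full, so it always
    # carries empty slots, and each slot stores the cached hash next to the
    # key pointer (16 bytes vs a list's 8-byte pointer slot on 64-bit builds).
    for n in (100, 1000, 10000):
        a = list(range(n))
        b = set(range(n))
        print(n, getsizeof(a), getsizeof(b), round(getsizeof(b) / getsizeof(a), 1))

The sparse table is what buys a set its O(1) membership tests; the list spends none of that overhead because it only supports positional access.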
