ram

Google Colaboratory: misleading information about its GPU (only 5% RAM available to some users)

老子叫甜甜 submitted on 2019-11-28 14:58:52
update: this question relates to Google Colab's "Notebook settings: Hardware accelerator: GPU". This question was written before the "TPU" option was added. After reading multiple excited announcements about Google Colaboratory providing a free Tesla K80 GPU, I tried to run a fast.ai lesson on it, only for it to never complete - it quickly ran out of memory. I started investigating why. The bottom line is that the "free Tesla K80" is not "free" for everyone - for some users only a small slice of it is "free". I connect to Google Colab from the West Coast of Canada and I get only 0.5 GB of what is supposed to be 24 GB of GPU RAM.
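As a quick way to see how much of the card a session can actually touch, one can query the driver directly. A minimal sketch in Python, assuming an NVIDIA GPU runtime where nvidia-smi is on the PATH (this snippet is illustrative and not part of the original question):

# Illustrative check of total vs. free GPU memory via nvidia-smi
# (assumes an NVIDIA GPU runtime with nvidia-smi available).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.total,memory.free", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "11441 MiB, 566 MiB"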

C++: Uninitialized variables garbage

喜你入骨 submitted on 2019-11-28 14:09:05
int myInt; cout << myInt; // garbage like 429948, etc. If I output and/or work with uninitialized variables in C++, what are their assumed values? The actual values left in memory by the "last user"? For example: program A is closed, and it had an int with the value 1234 at address 0x1234 -> I run my program, myInt gets the address 0x1234, I output it as above -> 1234. Or is it just random garbage? Alex Martelli: "Random garbage", but with the emphasis on "garbage", not on "random" - i.e., absolutely arbitrary garbage without even any guarantee of "randomness" - the compiler and runtime systems are allowed to have

Node.js read big file with fs.readFileSync

谁说我不能喝 submitted on 2019-11-28 09:21:03
Question: I am trying to load a big file (~6 GB) into memory with fs.readFileSync on a server with 96 GB of RAM. The problem is that it fails with the following error message: RangeError: Attempt to allocate Buffer larger than maximum size: 0x3fffffff bytes. Unfortunately I couldn't find a way to increase the Buffer limit; it seems to be a constant. How can I overcome this problem and load a big file with Node.js? Thank you! Answer 1: From a joyent FAQ: What is the memory limit on a node process? Currently, by default

Manually generate an ELF core dump

六月ゝ 毕业季﹏ submitted on 2019-11-28 08:54:59
Question: I am looking to manually generate an ELF core dump file. I have a RAM dump from my program, and I can also retrieve register information and so on. With this data, I would like to build an ELF core dump file, similar to those generated by the Linux kernel when a program crashes; the goal is to analyse this core dump with a GDB built specifically for my platform. I have been looking for core dump specifications or a detailed format description, but did not find what I wanted: What sections does my core

FPGA interview questions

♀尐吖头ヾ submitted on 2019-11-28 08:37:55
FPGA interview questions - compiled from material found online 2019-08-23 21:22:30
1: What are synchronous logic and asynchronous logic? (Hanvon) In synchronous logic, the clocks have a fixed causal relationship with one another. In asynchronous logic, there is no fixed causal relationship between the clocks. [Supplement]: Characteristics of a synchronous sequential logic circuit: the clock inputs of all flip-flops are tied together and connected to the system clock, and the circuit's state can change only when a clock pulse arrives. The new state is held until the next clock pulse arrives; until then, regardless of whether the external input x changes, every state in the state table is stable. Characteristics of an asynchronous sequential logic circuit: besides clocked flip-flops, the circuit may also use unclocked flip-flops and delay elements as storage elements; there is no unified clock, and changes of the circuit state are caused directly by changes of the external inputs.
2: The difference between synchronous and asynchronous circuits: Synchronous circuit: the clock inputs of all flip-flops in the storage circuit are connected to the same clock pulse source, so all flip-flop state changes are synchronized to the applied clock signal. Asynchronous circuit: the circuit has no unified clock; only the flip-flops whose clock inputs are connected to the clock pulse source change state in step with the clock, while the state changes of the remaining flip-flops are not synchronized to the clock.
3: The essence of timing design: The hard part of circuit design is timing design, and the essence of timing design is meeting the setup/hold time requirements of every flip-flop.
4: What are setup time and hold time? Setup time: the length of time before the arrival of the clock's rising edge during which the data at a flip-flop's data input must remain stable. Hold time: the length of time after the clock's rising edge during which the data at the flip-flop's data input must remain stable.
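To make the setup/hold definitions in point 4 concrete, here is the usual single-clock register-to-register timing check (a sketch that neglects clock skew; T_clk is the clock period, T_co the source flip-flop's clock-to-output delay, T_logic the combinational path delay, and T_su / T_hold the destination flip-flop's setup and hold times):
Setup check: T_clk >= T_co(max) + T_logic(max) + T_su, which bounds the maximum clock frequency.
Hold check: T_co(min) + T_logic(min) >= T_hold, which must be met regardless of the clock period.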

Dynamic memory allocation in embedded C

…衆ロ難τιáo~ submitted on 2019-11-28 06:45:45
Question: Can I use the functions malloc and delete in embedded C? For example, I have a function in which a pointer to a structure is created with malloc. The function returns an address in RAM and I can use it. After exiting the function where the memory was allocated, will this pointer be deleted, or does the memory stay reserved until delete is called? typedef struct { char varA; char varB; } myStruct; void myfunc(void) { myStruct *ptrStruct = (myStruct *)malloc(sizeof

How are the gather instructions in AVX2 implemented?

﹥>﹥吖頭↗ submitted on 2019-11-28 06:23:09
Suppose I'm using AVX2's VGATHERDPS - this should load 8 single-precision floats using 8 DWORD indices. What happens when the data to be loaded lies in different cache lines? Is the instruction implemented as a hardware loop that fetches cache lines one by one, or can it issue loads to multiple cache lines at once? I read a couple of papers that state the former (and that's the one that makes more sense to me), but I would like to know a bit more about this. Link to one paper: http://arxiv.org/pdf/1401.7494.pdf I did some benchmarking of the AVX gather instructions and it seems to be a

Get server RAM with PHP

筅森魡賤 submitted on 2019-11-28 04:56:26
Is there a way to know the available RAM in a server (Linux distro) with PHP (without using Linux commands)? Edit: sorry, the objective is to be aware of the RAM available on the server / virtual machine, for that particular server (even if that memory is shared). rcoder: If you know this code will only be running under Linux, you can use the special /proc/meminfo file to get information about the system's virtual memory subsystem. The file has a form like this: MemTotal: 255908 kB MemFree: 69936 kB Buffers: 15812 kB Cached: 115124 kB SwapCached: 0 kB Active: 92700 kB Inactive: 63792 kB ...
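To illustrate the parsing that rcoder's answer describes, here is a minimal sketch in Python (illustrative only; the same logic maps onto PHP's file('/proc/meminfo') plus a regular expression, and it only works on Linux):

def read_meminfo():
    """Return /proc/meminfo as a dict, e.g. {'MemFree': '69936 kB', ...}."""
    info = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

mem = read_meminfo()
print(mem["MemTotal"], mem["MemFree"])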

How to delete multiple pandas (python) dataframes from memory to save RAM?

爱⌒轻易说出口 submitted on 2019-11-28 03:34:56
I have a lot of dataframes created as part of preprocessing. Since I am limited to 6 GB of RAM, I want to delete all the unnecessary dataframes from RAM to avoid running out of memory when running GridSearchCV in scikit-learn. 1) Is there a function to list only the dataframes currently loaded in memory? I tried dir() but it gives lots of objects other than dataframes. 2) I created a list of dataframes to delete del_df=[Gender_dummies, capsule_trans, col, concat_df_list, coup_CAPSULE_dummies] and ran for i in del_df: del (i) But it's not deleting the dataframes. But deleting dataframes
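A minimal sketch of both points (the two frames below are stand-ins with made-up data, not the question's real variables): del i only unbinds the loop variable, so the original names still hold references to the DataFrames; deleting the names themselves and then calling gc.collect() lets the memory be reclaimed.

import gc
import pandas as pd

# Stand-in frames named like the ones in the question (hypothetical data).
Gender_dummies = pd.DataFrame({"a": range(3)})
capsule_trans = pd.DataFrame({"b": range(3)})

# 1) List only the DataFrames currently bound to global names.
df_names = [name for name, val in list(globals().items())
            if isinstance(val, pd.DataFrame)]
print(df_names)  # ['Gender_dummies', 'capsule_trans']

# 2) `for i in del_df: del i` only removes the loop variable `i`;
#    delete the actual names instead, then let the collector run.
for name in ["Gender_dummies", "capsule_trans"]:
    del globals()[name]
gc.collect()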

How can I find out the total physical memory (RAM) of my linux box suitable to be parsed by a shell script?

倖福魔咒の submitted on 2019-11-28 02:56:19
I'm writing a shell script to find out the total physical memory in some RHEL Linux boxes. First of all, I want to stress that I'm interested in the total physical memory recognized by the kernel, not just the available memory. Therefore, please avoid answers suggesting reading /proc/meminfo or using the free, top or sar commands - in all these cases, their "total memory" values mean "available memory". My first thought was to read the boot kernel messages: Memory: 61861540k/63438844k available (2577k kernel code, 1042516k reserved, 1305k data, 212k init) But in some Linux boxes, due