computer-architecture

What does "word size" mean in a computer?

ぐ巨炮叔叔 submitted on 2019-11-29 19:32:40
I have tried to get a grasp of what "word" means; I have looked in the wiki, and the definition is vague. So my question is: what is "word size"? Is it the length of the data bus, the address bus? "Word size" refers to the number of bits processed by a computer's CPU in one go (these days, typically 32 bits or 64 bits). Data bus size, instruction size, and address size are usually multiples of the word size. Just to confuse matters, for backwards compatibility the Microsoft Windows API defines a WORD as being 16 bits, a DWORD as 32 bits, and a QWORD as 64 bits, regardless of the processor. One answer is

Computer Architecture: How do applications communicate with an operating system? [closed]

若如初见. submitted on 2019-11-29 18:25:27
Prelude: This is admittedly a fairly broad question regarding computer architecture, but one that I hear from others and wonder about quite often myself. I also don't think that there is a direct or quick answer to this. However, I was hoping someone well-versed in systems architecture could provide some insight. Some background: I am primarily a full-stack developer focusing mostly on web technologies and databases. I do have some background in C and tinkering with a good deal of low-level stuff, but that was a very long time ago and was non-academic. As such, I never got very deep into OS

Where is -32768 coming from?

北城以北 submitted on 2019-11-29 17:25:08
This is LC3 assembly code I am working with: .ORIG x3000 LOOP LDI R0, KBSR BRzp LOOP From LC3 Assembly, I know that LDI uses indirect addressing, meaning it reads an address stored at a location and then reads the value at that address. From LC3 Keyboard, I know that KBSR is the keyboard status register, which is set when the keyboard has received a new character. Here is my test run in the LC3 simulator: I entered the character 'a'. After executing LDI R0, KBSR, register 0 stores a value of -32768. Does anyone know, based off my definitions for LDI and KBSR, where this number is coming

How do computers translate everything to binary? When they see a binary code, how do they know if it represents a number or a word or an instruction?

微笑、不失礼 submitted on 2019-11-29 14:55:14
Question: I know how computers translate numbers to binary. But what I don't understand is that I've heard that computers translate everything (words, instructions, ...) to binary, not just numbers. How is this possible? Could you show me some examples? Like, how does a computer translate the letter "A" to binary? And when computers see a binary code, how can they know if that long string of 0s and 1s represents a number or a word or an instruction? Example: Let's say that a computer programmer

How much time does it take to fetch one word from memory?

北战南征 submitted on 2019-11-29 12:39:31
Question: Taking Peter Norvig's advice, I am pondering the question: how much time does it take to fetch one word from memory, with and without a cache miss? (Assume standard hardware and architecture. To simplify calculations, assume a 1 GHz clock.) Answer 1: Seems like Norvig answers this himself: execute typical instruction = 1/1,000,000,000 sec = 1 nanosec; fetch from L1 cache memory = 0.5 nanosec; branch misprediction = 5 nanosec; fetch from L2 cache memory = 7 nanosec; mutex lock/unlock = 25 nanosec; fetch from main

Are cache-line-ping-pong and false sharing the same?

冷暖自知 submitted on 2019-11-29 02:52:14
Question: For my bachelor thesis I have to evaluate common problems on multicore systems. In some books I have read about false sharing, and in other books about cache-line ping-pong. The problems described sound very similar, so are these the same problems under different names? Can someone give me the names of books which discuss these topics in detail? (I already have literature from Darryl Gove, Tanenbaum, ...) Answer 1: Summary: False sharing and cache-line ping-ponging are related but not the same thing.

Understanding word alignment

主宰稳场 submitted on 2019-11-29 02:44:41
Question: I understand what it means to access memory such that it is aligned, but I don't understand why this is necessary. For instance, why can I access a single byte at an address 0x…1 but I cannot access a half word (two bytes) at the same address? Again, I understand that if you have an address A and an object of size s, the access is aligned if A mod s = 0. But I just don't understand why this is important at the hardware level. Answer 1: Hardware is complex; this is a simplified explanation.

What branch misprediction does the Branch Target Buffer detect?

不羁岁月 submitted on 2019-11-28 23:45:32
I am currently looking at the various parts of the CPU pipeline which can detect branch mispredictions. I have found these are: Branch Target Buffer (BPU CLEAR), Branch Address Calculator (BA CLEAR), Jump Execution Unit (not sure of the signal name here?). I know what 2 and 3 detect, but I do not understand what misprediction is detected within the BTB. The BAC detects where the BTB has erroneously predicted a branch for a non-branch instruction, where the BTB has failed to detect a branch, or where the BTB has mispredicted the target address for an x86 RET instruction. The execution unit evaluates the

word size and data bus

一曲冷凌霜 submitted on 2019-11-28 23:20:47
Question: I am confused about the definition of word size. I read that the word size of a processor is its data bus width; an 8-bit processor, for example, has an 8-bit-wide data bus. I recently read that the maximum size of the virtual address space is determined by word size, i.e. if the word size is n bits, the max virtual address space is 2^n - 1. But I always thought that the maximum virtual address space is determined by address bus width, i.e. an n-bit-wide address bus can address a maximum of 2^n bytes. So, what

Why is the range of a signed byte -128 to 127 (2's complement) and not -127 to 127?

谁都会走 submitted on 2019-11-28 16:43:48
I read Why is the range of bytes -128 to 127 in Java? It says 128 is 10000000; inverted, it's 01111111, and adding one gives 10000000 again, so it concludes -128 is 10000000 and +128 cannot be represented in 2's complement in 8 bits. But that means we can represent it in 9 bits: 128 is 010000000, and taking its 2's complement, -128 is 110000000. So is the representation of -128 10000000 or 110000000? Is the representation width-dependent? Why not simply make the lower range -127 for 8 bits instead of writing -128 as 10000000? Why is the range of an unsigned byte from -128 to 127? It's not. An