How does binary translate to hardware?

长发绾君心 2020-12-23 10:14

I understand how the code is compiled to assembly, and that assembly maps 1:1 to binary machine code. Can somebody help me understand how binary is connected to the hardware that actually executes it?

14 answers
  • 2020-12-23 10:53

    A complete answer to your question would encompass a book, and a fairly thick one at that.

    When you say "code" I'm assuming you're referring to a high level compiled language, like C++. Usually, a compiler translates this code into machine language, or binary as you state in your question. We'll neatly avoid all discussion of managed vs. unmanaged code, p-code, etc. That is, we're just talking about compilers that target specific processors/operating systems. Java, for example, compiles into a pseudo-code called bytecode. We're also going to avoid the whole matter of link editing, or linking, which is how multiple source modules get compiled into machine language then bound together into a single executable program.

    Okay, now that we've covered most of what we're not going to cover, here's what usually happens. And by "usually", I mean most compiled languages in a DOS, Linux or Windows environment. The source code is translated into machine language, which is written out to an executable file. This executable file contains, more or less, an image of what the program should look like in memory. When you tell the operating system to run your program, the OS's equivalent of a "Load and Go" executes. That means the memory image in the executable file is loaded into memory, and then the operating system does a machine language JUMP to the first instruction in the program. The CPU then blindly follows the instructions from there on, until an EXIT is encountered.
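To make the "Load and Go" idea concrete, here is a toy sketch in Python. The "executable format" and the opcodes are invented for illustration; a real OS loader deals with real file formats (ELF, PE), relocation, and much more.

```python
# Sketch of "Load and Go": copy the executable's memory image into RAM,
# jump to the first instruction, and blindly execute until EXIT.
# PRINT/EXIT and the tuple-based "file format" are made up for this example.
PRINT, EXIT = 0, 1

executable_file = [(PRINT, "hello"), (PRINT, "world"), (EXIT, None)]

memory = list(executable_file)     # "load": copy the image into memory
pc = 0                             # "go": jump to the first instruction
output = []
while True:
    op, arg = memory[pc]           # the CPU blindly fetches the next instruction
    pc += 1
    if op == EXIT:                 # stop only when EXIT is encountered
        break
    if op == PRINT:
        output.append(arg)
```

After the loop, `output` holds `["hello", "world"]`; if the image had been corrupted, the CPU here would just as blindly execute the garbage, which is exactly the crash scenario described below.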

    This whole JUMP... EXIT nastiness is a drastic oversimplification for modern OS's. As you can imagine, if the CPU were to follow, with blind obedience, the instructions in a program that's gone astray, the computer would crash... or worse. Such was the fate of many an errant program in the early days, and a prime contributor to many a BSOD.

  • 2020-12-23 10:56

    This question is very complicated, I have 2 degrees in exactly that and I've still only scratched the surface.

    If you want an intro to how that all works together, MIT has some free classes available you can look over online. This one is probably the best one to get you started.

  • 2020-12-23 10:57

    I think this is actually a fun question. I would say "here's how to build a computer in a few easy steps".

    • Start with some simple logic circuits, such as AND, OR, NOT, and a flip-flop. A flip-flop is a pair of transistors arranged so that if one is ON, the other is OFF, or vice versa. That way it can "remember" one bit of information, so you can think of it as storing a single binary digit. Some input lines can put it in one state or the other, and thus "write" to it.
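That flip-flop can be sketched in a few lines of Python. This models it as a pair of cross-coupled NOR gates, a common construction standing in for the transistor pair described above; all names here are illustrative, not a real hardware library.

```python
# An SR latch built from two cross-coupled NOR gates.
def NOR(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Settle the latch given Set, Reset, and the previous stored bit Q."""
    for _ in range(4):          # iterate until the feedback loop settles
        q_bar = NOR(s, q)
        q = NOR(r, q_bar)
    return q

q = 0
q = sr_latch(s=1, r=0, q=q)     # Set: Q becomes 1
q = sr_latch(s=0, r=0, q=q)     # inputs idle: Q "remembers" the 1
assert q == 1
q = sr_latch(s=0, r=1, q=q)     # Reset: Q becomes 0
assert q == 0
```

The key point is the feedback: with both inputs idle, the stored bit feeds back into the gates and holds itself, which is the "memory".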

    • You can store a bigger number by having a bunch of flip-flops, and call it a "register". For example, if you have four flip-flops in a register, there are 16 possible combinations, so you can think of it as holding a number from 0 to 15.
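A quick sketch of that register idea, with four stored bits standing in for the four flip-flops (the class name is invented for illustration):

```python
# A 4-bit "register": four flip-flops, so 2**4 = 16 possible combinations.
class Register4:
    def __init__(self):
        self.bits = [0, 0, 0, 0]           # one entry per flip-flop

    def write(self, value: int) -> None:
        for i in range(4):                 # store one bit in each flip-flop
            self.bits[i] = (value >> i) & 1

    def read(self) -> int:
        return sum(bit << i for i, bit in enumerate(self.bits))

r = Register4()
r.write(13)
assert r.read() == 13                      # holds any number from 0 to 15
```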

    • Skipping ahead a little bit, you can buy a "memory chip". That is essentially a collection of registers, say 16 of them. It has 4 wires coming in (the "address" wires), and it has 4 wires coming out (the "data" wires). So a number from 0 to 15 can come in as an address, and that selects one of the 16 registers, whose value is presented on the output data wires (thus "reading" it). A few more wires can instead let data come IN on the data wires, so that numbers can be put into ("written" to) the selected register.
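The memory chip can be modeled the same way. The class and parameter names here are made up; real chips expose these as address, data, and write-enable pins.

```python
class MemoryChip:
    """16 four-bit registers: 4 address wires in, 4 data wires out."""
    def __init__(self):
        self.registers = [0] * 16

    def access(self, address: int, data_in: int = 0,
               write_enable: bool = False) -> int:
        if write_enable:                          # the extra "write" wires
            self.registers[address] = data_in & 0b1111
        return self.registers[address]            # value on the data wires

mem = MemoryChip()
mem.access(address=3, data_in=9, write_enable=True)   # "write" 9 at address 3
assert mem.access(address=3) == 9                     # "read" it back
```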

    • Now suppose you have an external 4-bit register (call it R), and a bit of circuitry, so that it

      1. presents the value in R to the memory address
      2. reads the 4-bit value at that address and moves it into R
      3. and repeats this over and over

    Depending on the numbers that have been pre-loaded into the memory, you can see that this thing will cycle around through a series of numeric addresses, because the number at each address determines what the next address will be.
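That loop can be simulated directly; pre-loading the memory with addresses makes the cycling visible. The particular numbers are made up:

```python
# Pre-load the 16-word memory so each value names the next address to visit.
memory = [0] * 16
memory[0], memory[5], memory[9], memory[2] = 5, 9, 2, 0   # 0 -> 5 -> 9 -> 2 -> 0

R = 0                       # the external 4-bit register
trace = []
for _ in range(8):          # run the little circuit's loop a few times
    trace.append(R)
    R = memory[R]           # present R as the address, load the value back into R

assert trace == [0, 5, 9, 2, 0, 5, 9, 2]   # it cycles, as described
```

The number stored at each address literally determines what the next address will be, which is the seed of a program counter.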

    Now, you can embellish this thing in a lot of ways. You can expand the memory to have 5 bits of address (32 registers). Then if one of the address lines is connected to the outside world, it will do different things depending on the outside world. That's a "finite-state machine".

    You can replace the R register with a simple counter, and call it a "program counter". You can take the data coming out of the memory and call it an "instruction", and use some of its bits to read other memory addresses and load a set of arithmetic registers. You can use some to control whether the R register simply increments, or maybe gets a new address stored in it. That's called "jumping".
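Putting those pieces together gives a toy machine: the program counter normally increments, but a JUMP instruction loads a new address into it instead. The opcodes are invented for illustration.

```python
# A minimal program-counter machine with one arithmetic register.
INC_A, JUMP, HALT = 1, 2, 0

def run(memory):
    pc, a = 0, 0                  # program counter and arithmetic register
    while True:
        op, arg = memory[pc]      # the "instruction" coming out of memory
        if op == HALT:
            return a
        elif op == INC_A:
            a += 1
            pc += 1               # the counter simply increments
        elif op == JUMP:
            pc = arg              # a new address is stored in it: a "jump"

program = [(INC_A, 0), (INC_A, 0), (JUMP, 4), (INC_A, 0), (HALT, 0)]
assert run(program) == 2          # the jump skips the increment at address 3
```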

    Of course, this is a very simple computer, but that's roughly how they started out.

  • 2020-12-23 11:04

    (Vastly simplified)

    The binary (say, the string of bits for a line of machine code/asm) is loaded into memory from, say, disk. Then the processor logic sends a request to the memory controller to load the contents of that memory location into a processor-local register. The processor then interprets that value as an instruction to carry out.

    I learned this level of stuff by doing microcoding at college.

    In reality there are many more steps that could occur, depending on the processor's complexity and power. The processor is made up of various parts (ALU, registers, etc.) and they cooperate in fetching instructions and data and processing them. If you are interested in this level of understanding (and I commend you for asking the question), I'd say get a book on computer architecture. I used Structured Computer Organization by Tanenbaum at college.

  • 2020-12-23 11:06

    SW is not just the SW language it is written in, as if jotted down on a piece of paper. SW takes on physical form as well. At some point software at the conceptual level crosses over into software at the physical level, and that occurs when a programmer starts typing code into a keyboard in whatever SW language he or she is working in. From the moment a keystroke is tapped, it's electrons all the way down. This is where the interface occurs: from that first keystroke onward, the whole business becomes the manipulation of electrons, as complex, sophisticated and ingenious an endeavor as that may be. Thinking in terms of binary 0's and 1's is just a metaphor for high and low voltages, already a physical manifestation beyond the keystroke. When you type the letter I as the first letter of IF...THEN, the voltages corresponding to 01001001 (the ASCII code for "I") are placed in an 8-bit register, through electrical impulses prompted by your physically tapping the I key. From here on out it's electronics.

  • 2020-12-23 11:06

    Everything you write in a text editor is first stored in memory (electric signals generated from the keyboard), no matter which encoding is used (ASCII, ...). From memory, those signals are fed to the computer monitor, so you can see the source code you are typing. Then you run your compiler (or assembler), which reads the source code in memory (electric signals) and converts it to machine code, storing those transformed electric signals in another region of memory (electric signals again). When the CPU reads the machine code, what it sees are electric signals too. The "logic levels" 0 and 1 are just names for those voltage levels, so there is never a need to convert a logic level into a voltage level.
