How does binary translate to hardware?

Backend · Unresolved · 14 answers · 1009 views
长发绾君心 · 2020-12-23 10:14

I understand how the code is compiled to assembly, and that assembly is a 1:1 mapping to binary machine code. Can somebody help me understand how binary is connected to the hardware?

14 answers
  • 2020-12-23 11:07

    When we look at binary or a programming language, we are looking at symbols a human can understand that describe the electrical activity of an electronic system: the speed and amount of energy, its losses as heat and light, and how the material the energy flows through changes as it is energized. One part of understanding a complex system like a computer is knowing what the electricity flowing inside it is doing. Since we cannot see the particles directly, we describe one of electricity's behaviors with the numbers 1 and 0; giving a physical state a standard description lets us keep track of what happens as we change the factors that control it.

    So binary is a notation that uses numbers to describe the activity of electricity in a piece of hardware: either there is a flow of charge at a point or there is not. The incoming AC is converted to DC, and the behavior of the charge is stabilized by the clock. The speed of the energy through the circuit, the resistance (which dissipates energy as heat), and the time before a part of the circuit is de-energized together let us define units for the amount of energy flowing per unit of time, which in turn helps us harness it. The rest of the computer's components form an elaborate system of transistors, capacitors, and resistors that manipulate the charge flowing through them, trapping it temporarily or slowing it down until a component reaches a certain energy level and releases a highly controlled amount of energy into another part of the system. Each binary digit represents a bit, a description of the electrical state of a particular component (a signal, from here on), and a byte is 8 bits of signal. It is generally agreed that you need at least 8 bits of signal for the computer to make orderly, practical use of the electricity flowing into it.

    In other words, electricity is stabilized in its behavior, split up, and directed through a series of components that need electricity to operate, and the newly energized components then do what a human designed them to do.

  • 2020-12-23 11:09

    This is a huge, very complicated topic. The best textbook I've seen on the subject is Patterson and Hennessy's "Computer Organization and Design", which has many editions.

    Other than suggesting you read it, I wouldn't dare try to cram a semester-long class into a 500-character answer box.

  • 2020-12-23 11:09

    There is a book named "Code" (by Charles Petzold) which covers this as well as any computer organization text. Though the answers here are all good.

  • 2020-12-23 11:09

    I don't see this as huge and complicated; the closer you get to the hardware, the simpler it gets.

    Write a disassembler; that's how the hardware does it. Most processors document the opcodes, or instruction set, in the same manual as the assembly language.

    Look at the opcode for, say, an add instruction using registers: a few of the bits determine the source register, a few bits the destination register, and a few bits say that this is an add instruction. Say the instruction set you are looking at uses only two registers for a register-based add. There is some logic, an adder, that can add two items the size of registers and output a result and a carry bit. Registers are stored on chip in memory bits sometimes called flip-flops. So when an add is decoded, the input registers are tied to the adder logic using electronic switches. These days this happens at the beginning of the clock cycle; by the end of the clock cycle the adder has a result, the output is routed to the bits for the destination register, and the answer is captured. Normally an add will also modify the flags in the flag register: the carry flag is set when the result is too big to be stored in the register (think about what happens when you add the decimal numbers 9 and 1: you get 0 and carry the 1, right?). There is some logic that compares the bits of the adder's output with the value zero and sets or clears the Z flag in the flag register accordingly. Another flag bit is the sign bit, or N bit for negative; that is the most significant bit of the answer. This is all done in parallel.
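
    As a sketch of that decoding, here is how the register fields might be pulled out of a hypothetical 16-bit add instruction and the flags computed; the field layout and widths are invented for illustration, not taken from any real instruction set:

```python
# Hypothetical 16-bit instruction layout (invented for illustration):
#   bits 12-15: opcode (0b0001 = ADD)
#   bits  8-11: destination register rd
#   bits  4-7 : source register rs
OP_ADD = 0b0001

def decode(instr):
    """Split a 16-bit instruction word into its bit fields."""
    opcode = (instr >> 12) & 0xF
    rd = (instr >> 8) & 0xF
    rs = (instr >> 4) & 0xF
    return opcode, rd, rs

def execute_add(regs, instr):
    """Add rs into rd and compute the C, Z, and N flags, which the
    hardware derives from the adder output in parallel."""
    opcode, rd, rs = decode(instr)
    assert opcode == OP_ADD
    full = regs[rd] + regs[rs]
    result = full & 0xFFFF            # 16-bit wrap-around
    carry = full > 0xFFFF             # carry out of the top bit
    zero = result == 0                # Z flag: result compared with zero
    negative = (result >> 15) == 1    # N flag: most significant bit
    regs[rd] = result
    return carry, zero, negative

regs = [0] * 16
regs[1], regs[2] = 0xFFFF, 0x0001
carry, zero, negative = execute_add(regs, 0b0001_0001_0010_0000)  # ADD r1, r2
# regs[1] wraps to 0, so both the carry and zero flags come back set
```

    The shifts and masks here are the software analogue of wires routing subsets of the instruction bits to different pieces of logic.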

    Then say your next instruction is jump if zero (jump if equal): the logic looks at the Z flag. If it is set, then the next instruction fetched is based on bits in the instruction that are added to the program counter, through the same or another adder. Or perhaps the bits in the instruction point to an address in memory that holds the new value for the program counter. Or maybe the condition is false; then the program counter is still run through an adder, but what is added to it is the size of the instruction, so that it fetches the next instruction.

    The stretch from a disassembler to a simulator is not a long one. You make variables for each of the registers, decode the instructions, execute the instructions, continue. Memory is an array you read from or write to. The disassembler is your decode step. The simulator performs the same steps as the hardware, the hardware just does it in parallel using different programming tricks and different programming languages.
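
    A minimal sketch of such a simulator, for an invented toy machine (the instruction names and encoding as tuples are made up for illustration; a real one would decode the fields out of binary words):

```python
# Tiny fetch-decode-execute simulator for an invented machine.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]      # fetch + decode
        pc += 1                     # point at the next instruction
        if op == "LOADI":           # regs[a] = immediate value b
            regs[a] = b
        elif op == "ADD":           # regs[a] += regs[b]
            regs[a] = regs[a] + regs[b]
        elif op == "JNZ":           # if regs[a] != 0, jump to index b
            if regs[a] != 0:
                pc = b
        elif op == "HALT":
            break
    return regs

# Sum 3 + 2 + 1 by looping: r0 accumulates, r1 counts down via r2.
program = [
    ("LOADI", 0, 0),    # r0 = 0
    ("LOADI", 1, 3),    # r1 = 3
    ("LOADI", 2, -1),   # r2 = -1
    ("ADD",   0, 1),    # r0 += r1
    ("ADD",   1, 2),    # r1 += r2  (decrement)
    ("JNZ",   1, 3),    # loop back to the first ADD while r1 != 0
    ("HALT",  0, 0),
]
regs = run(program, [0] * 4)
# regs[0] ends up as 6
```

    The JNZ case is the whole story of control flow: a conditional write to pc, exactly as described for the flags above.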

    Depending on how it is implemented, your disassembler might start at the beginning of the program and disassemble to the end; your simulator would start at the beginning but follow the code execution, which does not necessarily run from beginning to end.

    Old game console emulators like MAME have processor simulators that you can look at. Unfortunately, especially with MAME, the code is designed for execution speed rather than readability, and most of it is quite hard to follow. There are some readable simulators out there if you look, though.

    A friend pointed me at this book http://www1.idc.ac.il/tecs/ which I would like to read, but have not yet. Perhaps it is just the book you are looking for.

    Sure, hardware has evolved from trivial state machines that took many clocks to fetch, decode, and execute serially. My guess is that understanding the classic fetch, decode, execute cycle is enough for this question. Then you may have other, more specific questions; or perhaps I misunderstood the question and you really wanted to understand the memory bus rather than the decoder.

  • 2020-12-23 11:09

    In a very simplified way, the computer can be represented as an infinite loop (implemented in hardware) plus the ability to do simple arithmetic operations (also implemented in hardware). In the loop it does the following:

    • read the memory at PC (the program counter, which gets incremented)
    • decode the command and operands
    • execute the command
    • write the results back to memory

    And that's it. There are also control commands that can change the PC, which is how "if ... then ... else" statements are implemented.
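
    The four steps above can be sketched in a few lines, with memory as a plain array and made-up opcodes (this is an illustration, not any real machine; each instruction occupies three memory cells):

```python
# Invented opcodes; memory holds both instructions and data.
ADD, JZ, HALT = 0, 1, 2

def run_machine(mem):
    pc = 0
    while True:
        op, a, b = mem[pc], mem[pc + 1], mem[pc + 2]  # read memory at PC
        pc += 3                                       # PC is incremented
        if op == ADD:          # execute, then write the result to memory
            mem[a] = mem[a] + mem[b]
        elif op == JZ:         # control command: change PC (the "if")
            if mem[a] == 0:
                pc = b
        elif op == HALT:
            return mem

# Program in cells 0-5, data in cells 6-7: add mem[7] into mem[6].
mem = run_machine([ADD, 6, 7, HALT, 0, 0, 5, 2])
# mem[6] is now 7
```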

  • 2020-12-23 11:13

    Essentially, and very broadly speaking, the instructions end up in memory, and the program counter (PC) register holds the address of the first instruction.

    The processor supports instructions that can move data between memory and registers. The processor moves an instruction into the instruction register, and that instruction gets executed at the next clock tick.

    I'm not qualified to explain the electrical engineering behind that, but you could probably look it up.

    Of course, this is all fairly simplified, as there is a huge amount of parallelism going on in modern processors, and I don't even pretend to grok that in any meaningful way.
