Seeing how Instructions get Translated (Computer Architecture)


Question


Little bit of a confusing question, but I'm really looking to learn some low-level programming. Thing is, dev boards like the Arduino really hide a lot of what's going on.

I have spent some time learning about computer architecture: logic, gates, sequential logic, etc. (I even went as far as to learn the physics of semiconductors and the electronics behind it all, just to know exactly what is going on, as well as how gates are built from CMOS transistors.)

But that's about where it ends, and I want to be able to understand how an instruction (hex or assembly code) moves through a simple-as-possible computer (a lot of the books I've used jump straight from gates to a complete computer, without much of the in-between). Even something simple like storing a value in a register or memory location (and maybe printing to a pixel? or something).

I think it would be interesting to eventually write an emulator. I have experience with high-level languages, but I've heard something like the 6502 might be a good start, since you use a lot of assembly and the instruction set isn't too large.

Does anyone know of any resources/thoughts/books that might help? I've gone through "The Elements of Computing Systems", and while it is a good book, I don't really feel like it shows what's actually going on step by step. This might be more of an Electronics StackExchange question; if so, I apologize.


Answer 1:


Try building your own simple CPU. It isn't as hard as it seems: Logisim




Answer 2:


Overview

You really have a ton of options. I'll summarize my take on how instructions get translated, and then list a few of the resources that helped me when I was getting started.

My Take

First off, it's easiest to think in terms of binary input. Let's say you have a 16-bit microprocessor (that is, the instructions are encoded in 16 binary bits). Consider an assembly operation SET that places a number into a register. For example:

SET(R1, 12) // Stores 12 into register 1

Let's arbitrarily choose (even in a standard architecture, the choice is really arbitrary) to translate this SET instruction into the following 16-bit binary value I:

0001 0001 0000 1100

Basically, I just made up a convention, but here's how I break it down. I choose to let bits I[15:12] (expressed in big-endian notation) represent a particular instruction, and I let the integer 1 correspond to the instruction SET. Now that I've decided on that convention, I can say that if I have a SET instruction, bits I[11:8] correspond to the register. (Obviously that means I can only have 16 registers: 2^4 = 16.) Finally, I let bits I[7:0] correspond to the data I want to store in the given register. Let's look at SET(R1, 12) again in binary (I separate each group of four bits with spaces for clarity):

if I = 0001 0001 0000 1100
I[15:12] = 0001 (binary) = 1 (decimal) = the SET instruction
I[11:8]  = 0001 (binary) = 1 (decimal) = R1, since I[15:12] corresponds to SET
I[7:0]   = 0000 1100 (8-bit binary) = 12 (decimal) = the value to store in R1
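To make that concrete, here is a minimal C sketch (my own illustration of the made-up convention above, not any real ISA) of decoding the 16-bit word with shifts and masks:

#include <stdio.h>
#include <stdint.h>

int main ( void )
{
    uint16_t I = 0x110C; //0001 0001 0000 1100 = SET(R1, 12)

    unsigned op  = (I>>12)&0xF; //I[15:12]: 1 means SET
    unsigned reg = (I>>8)&0xF;  //I[11:8]: register number
    unsigned imm = I&0xFF;      //I[7:0]: value to store

    if(op==1) printf("SET(R%u, %u)\n",reg,imm); //prints SET(R1, 12)
    return(0);
}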

As you can see, everything else within a microprocessor becomes very simple. Let's say you store 4 lines of instructions in RAM, and you have a counter attached to a clock. The counter steps through the lines in RAM, so when the clock "ticks", a new instruction comes out of the RAM. (That is, the next instruction comes out of RAM, though "next" becomes somewhat less predictable once you introduce JUMP statements.) The output of the RAM goes through multiple bit selectors: you select bits I[15:12] and send them to a control unit, which tells you which instruction you are dealing with, i.e. SET, JUMP, etc. Then, depending on which instruction is found, you can decide to allow registers to be written, registers to be added, or whatever else you choose to include in your architecture.
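And here is a C sketch of the fetch/decode/execute loop that the counter, RAM, and control unit implement in hardware, again assuming my made-up encoding (a real design would also need JUMP, a halt convention, and so on):

#include <stdio.h>
#include <stdint.h>

int main ( void )
{
    //4 lines of instructions in "RAM", using the made-up SET encoding
    uint16_t ram[4] = { 0x110C, 0x120A, 0x1305, 0x0000 };
    uint16_t regs[16] = { 0 };
    unsigned pc;

    for(pc=0;pc<4;pc++) //the counter steps through RAM on each clock tick
    {
        uint16_t I = ram[pc]; //the RAM output
        switch((I>>12)&0xF) //the control unit looks at I[15:12]
        {
            case 1: //SET: write I[7:0] into register I[11:8]
                regs[(I>>8)&0xF]=I&0xFF;
                break;
            default: //treat anything else as halt in this toy design
                pc=4;
                break;
        }
    }
    printf("R1=%u R2=%u R3=%u\n",(unsigned)regs[1],(unsigned)regs[2],(unsigned)regs[3]);
    return(0);
}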

Now, thankfully, the arbitrary conventions for the binary values of machine instructions have already been chosen for you (if you want to follow them). This is exactly what an Instruction Set Architecture (ISA) defines, for example MIPS, HERA, etc. Just for clarity: the actual implementation you create when you design the circuitry and whatnot is called the micro-architecture.

Learning Resources

Texts

The Harris and Harris book is one of the most famous texts for undergraduate computer architecture courses. It is a very simple and useful text; the entire thing is available for free as a PDF hosted by some random school. (Download it quick!) I found it super helpful. It goes through basic circuits and topics in discrete mathematics, and by the time you get to chapter 7, building a microprocessor is a piece of cake. It took me about 3 days to complete a 16-bit microprocessor after having read that book. (Granted, I had a background in discrete math, but that's not super important.)

Another super helpful and very standard book is the Hennessy and Patterson book, which is also available in PDF form from some random school. (Download it quick!) The Harris and Harris book is a simplification based on this book; this one goes into a lot more detail.

Open Source Microprocessors

There are a ton of open-source microprocessors out there. It was super helpful for me to be able to reference them when I was building my first microprocessor. The ones with Logisim files are especially nice because you can view them graphically, click around, and mess with them. Here are a few of my favorite sites and specific microprocessors:

4-bit:

  • Very Simple Microprocessor - ISA and Design

16-bit:

  • Some random Russian guy's Microprocessor - Logisim Files
  • Warren's CPU - Logisim Files
  • Elementary-Microprocessor - Logisim Files

Open Cores - I don't really get this site. I applied for an account but they haven't really gotten back to me... Not a big fan, but I imagine that if you have an account it must be awesome.

Tools

Logisim

As mentioned previously, Logisim is a great resource. The layout is entirely graphical, and you can very easily see what is going on bit-wise at any point by selecting a wire. It is written in Java, so I'm pretty sure it works on whatever machine you want it to. It also offers an interesting historical perspective on graphical computer programming languages.

Simulation

In Logisim, you can simulate actual software being run. If you have a compiler that targets the ISA you are implementing, you can simply load the binary or hex file into Logisim's RAM and run the program. (If you don't have a compiler, it's still possible, and a great exercise, to write a four-line assembly program and translate it by hand yourself.) Simulation is by far the coolest and most gratifying part of the entire process! :D Logisim also provides a CLI for doing this more programmatically.
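As a sketch of what that hand translation might look like, the little C program below writes a four-word program (in the made-up SET encoding from earlier) out as a Logisim memory image. To the best of my knowledge the "v2.0 raw" header is what Logisim's RAM component expects when loading a file, but check the documentation of your Logisim version:

#include <stdio.h>

int main ( void )
{
    //hand-assembled program, one 16-bit word per instruction
    unsigned short prog[4] = { 0x110C, 0x120A, 0x1305, 0x0000 };
    unsigned int i;
    FILE *fp;

    fp=fopen("prog.img","w");
    if(fp==NULL) return(1);
    fprintf(fp,"v2.0 raw\n"); //Logisim memory-image header
    for(i=0;i<4;i++) fprintf(fp,"%x\n",(unsigned)prog[i]); //one hex word per line
    fclose(fp);
    return(0);
}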

HDL

The more modern way of designing micro-architectures is with a Hardware Description Language (HDL); the most famous examples are Verilog and VHDL. These are (confusingly!) modeled after sequential languages like Ada and C/C++, yet HDL is by far the preferred design method because verification of a model/design is much better defined. In my opinion, it's also much easier to reason about a textual representation than to examine a design graphically. Just as programmers can organize code poorly, hardware developers can organize the graphical layout of a micro-architecture poorly (although this argument can certainly be applied to HDL too). It's still easier to document textually than graphically, and to design more modularly in general, using an HDL.

If you're interested in learning this, there are tons of undergraduate hardware courses with open curricula and lab work on using an HDL to describe circuits and micro-architectures; you can find these simply by googling. You can also try to learn HDL through the next step: tools that convert C/C++ code to HDL. If you're interested, Icarus Verilog is a good open-source compiler and simulator for Verilog.

Simulation

Using a tool like Icarus Verilog, you can also easily simulate a real program being run from a binary. You simply wrap your microprocessor in another Verilog module (a testbench) that loads a file into RAM through some bus (Verilog's $readmemh system task, which reads a hex file into a memory array, is one common way). Piece of cake! :D

HLS

In recent years, High-Level Synthesis (HLS) has also gained a significant foothold in the market. This is the conversion of C/C++ code into actual circuits, which is pretty incredible because existing C/C++ can sometimes (but not always) be converted directly into hardware.

(I say not always because not all C/C++ code is synthesizable. In a circuit, the bit stream exists everywhere at once; in software, we think of code as being sequential. That is a terrible mode to think in if you are trying to design hardware!)

But as you may be able to guess, this ability is incredible for getting hardware-level performance out of certain kinds of code, such as matrix operations or math in general. It is relevant to you because you can use HLS tools to see how a C implementation of, say, a dot product might be translated into HDL. I personally feel this is a great way to learn.
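As a sketch of the kind of C that HLS tools are happy with, here is a dot product with a constant trip count and plain array accesses, with no dynamic memory or unbounded loops (the exact synthesizability rules vary by tool):

#define N 8

//the constant trip count and plain array accesses let an HLS tool
//unroll or pipeline this loop into parallel multipliers and an adder tree
int dot_product ( const int a[N], const int b[N] )
{
    int sum=0;
    int i;
    for(i=0;i<N;i++) sum+=a[i]*b[i];
    return(sum);
}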

Simulation

HLS simulation is just as easy as simulating HDL, because the high-level code is simply converted into HDL. Then you can simulate it and run tests on it exactly as I explained above.




Answer 3:


Here is an example framework for a very simple 6502 instruction set simulator (since you mentioned the 6502 more than once). It starts with a simple 6502 program, and these are the only instructions I am going to demonstrate, but even with a program like this you can get an understanding and have the instant gratification of seeing something work.

   and #$00
   ora #$01
 top:
   rol
   bcc top
   and #$00

Yes, I am well aware that I am not booting the simulator properly (a real 6502 fetches its reset vector from addresses $FFFC/$FFFD); I am assuming this binary is based at address zero. Assembling with the xa65 assembler (apt-get install xa65), the resulting binary is:

hexdump -C a.o65 
00000000  29 00 09 01 2a 90 fd 29  00                       |)...*..).|
00000009

and this is the simple, reduced-instruction simulator:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *fp;

#define MEMMASK 0xFFFF
unsigned char mem[MEMMASK+1];

unsigned short pc;     //program counter
unsigned short dest;   //computed branch target
unsigned char a;       //accumulator
unsigned char x;
unsigned char y;
unsigned char sr;      //status register: bit 0 carry, bit 1 zero, bit 7 negative
unsigned char sp;      //stack pointer
unsigned char opcode;
unsigned char operand;

unsigned char temp;

int main ( void )
{
    memset(mem,0xFF,sizeof(mem)); //if we execute a 0xFF just exit

    fp=fopen("a.o65","rb");
    if(fp==NULL) return(1);
    fread(mem,1,sizeof(mem),fp);
    fclose(fp);

    //reset the cpu
    pc=0; //I know this is not right!
    a=0;
    x=0;
    y=0;
    sr=0;
    sp=0;
    //go
    while(1)
    {
        opcode=mem[pc];
        printf("\n0x%04X: 0x%02X\n",pc,opcode);
        pc++;
        if(opcode==0x29) //and
        {
            operand=mem[pc];
            printf("0x%04X: 0x%02X\n",pc,operand);
            pc++;
            printf("and #$%02X\n",operand);
            a&=operand;
            if(a==0) sr|=2; else sr&=(~2); //zero flag
            sr&=0x7F; sr|=a&0x80; //negative flag tracks bit 7 of the result
            printf("a = $%02X sr = $%02X\n",a,sr);
            continue;
        }
        if(opcode==0x09) //ora
        {
            operand=mem[pc];
            printf("0x%04X: 0x%02X\n",pc,operand);
            pc++;
            printf("ora #$%02X\n",operand);
            a|=operand;
            if(a==0) sr|=2; else sr&=(~2);
            sr&=0x7F; sr|=a&0x80;
            printf("a = $%02X sr = $%02X\n",a,sr);
            continue;
        }
        if(opcode==0x2A) //rol
        {
            printf("rol\n");
            temp=a;
            a<<=1;
            a|=sr&0x01;
            sr&=(~0x01); if(temp&0x80) sr|=0x01; //carry gets the bit shifted out (old bit 7)
            if(a==0) sr|=2; else sr&=(~2);
            sr&=0x7F; sr|=a&0x80;
            printf("a = $%02X sr = $%02X\n",a,sr);
            continue;
        }
        if(opcode==0x90) //bcc
        {
            operand=mem[pc];
            printf("0x%04X: 0x%02X\n",pc,operand);
            pc++;
            dest=operand;
            if(dest&0x80) dest|=0xFF00; //sign extend the 8-bit branch offset
            dest+=pc;
            printf("bcc #$%04X\n",dest);
            if(sr&1)
            {
            }
            else
            {
                pc=dest;
            }
            continue;
        }
        printf("UNKNOWN OPCODE\n");
        break;
    }
    return(0);
}

and this is the simulator output for that simple program:

0x0000: 0x29
0x0001: 0x00
and #$00
a = $00 sr = $02

0x0002: 0x09
0x0003: 0x01
ora #$01
a = $01 sr = $00

0x0004: 0x2A
rol
a = $02 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $04 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $08 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $10 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $20 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $40 sr = $00

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $80 sr = $80

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0004: 0x2A
rol
a = $00 sr = $03

0x0005: 0x90
0x0006: 0xFD
bcc #$0004

0x0007: 0x29
0x0008: 0x00
and #$00
a = $00 sr = $03

0x0009: 0xFF
UNKNOWN OPCODE

The full 6502 instruction set is a long weekend's worth of work, minimum, if you write it from scratch; as you figure things out you may end up restarting the project a few times, which is perfectly natural. The hardware for a processor (in general, not necessarily the 6502) isn't all that different in concept from what happens in a simulator: you have to fetch the instruction, decode the instruction, fetch the operands, execute, and save the results. Just like when doing this in software, in hardware you can create interesting ways to make it faster or smaller or whatever your goal might be.

The 6502 is still a big project if you implement the whole thing, not as big as the z80, but something like risc16 takes maybe half an hour to understand and write the whole simulator for (then another half hour to make an assembler). The pic12, 14, or 16 is more work than risc16 but not too much more; you can bang through it quickly, and it is an educational experience because of how simple the design is. The pdp11 and msp430 are no doubt related somehow; both are well documented (all of the ones I mention are well documented) and nicely/mostly orthogonal, and their RISC-like decoding is a different experience from the CISC-like 6502/z80/x86. (The pdp11 is supported natively by the gcc/gnu tools.) MIPS is super simple if you can work your algorithm around the branch delay slot.

good luck, have fun...




Answer 4:


MIT puts all the resources for its Computation Structures class, 6.004, online. It's essentially an introduction to computer architecture that is very focused on project-based learning.

The OpenCourseWare version is here. I would concentrate particularly on the labs, especially 2, 3, 5, and 6, which take you from building an adder to an ALU and all the way to a very simple processor (the Beta).

The link to the most recent course materials is here.

I remember taking this class as the first time everything "clicked" in terms of understanding how simple logic gates and electrical components could combine into the complex machine I was using to simulate them. A highly recommended resource.




Answer 5:


One text that does take you step by step from gates to a fully functional computer and beyond is The Elements of Computing Systems by Noam Nisan and Shimon Schocken (2005, ISBN 9780262640688). Their website is at www.nand2tetris.org




Answer 6:


The first few chapters of The Art of Assembly Language talk about how bits are stored physically using latches, how latches are assembled into registers to hold bytes of data, and how instructions such as left/right shifts can be implemented in hardware. It's probably not as in-depth as you are looking for, but it really opened my eyes when I read this stuff.



Source: https://stackoverflow.com/questions/16082839/seeing-how-instructions-get-translated-computer-architecture
