Generic adder "inference architecture": simulation error


Question


So, I have to create a generic N-bit adder with carry in and carry out. I have made two fully working architectures so far: one using a generate statement and one using an RTL description, as follows:

entity:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_n is
generic (N: integer:=8);
port (
    a,b: in std_logic_vector(0 to N-1);
    cin: in std_logic;
    s: out std_logic_vector(0 to N-1);
    cout: out std_logic);
end adder_n;

architectures 1 and 2:

    --STRUCT
architecture struct of adder_n is
    component f_adder
        port (
            a,b,cin: in std_logic;
            s,cout: out std_logic);
    end component;
signal c: std_logic_vector(0 to N);
begin
    c(0)<=cin;
    cout<=c(N);
    adders: for k in 0 to N-1 generate
        A1: f_adder port map(a(k),b(k),c(k),s(k),c(k+1));
    end generate adders;
end struct;
--END STRUCT

architecture rtl of adder_n is
    signal c: std_logic_vector(1 to N);
begin
    s<=(a xor b) xor (cin&c(1 to N-1));
    c<=((a or b) and (cin&c(1 to N-1))) or (a and b);
    cout<=c(N);
end rtl;
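
(For reference: the f_adder component instantiated in struct is a separately compiled 1-bit full adder, not shown above; it looks something like this:)

library ieee;
use ieee.std_logic_1164.all;

entity f_adder is
    port (
        a,b,cin: in std_logic;
        s,cout: out std_logic);
end f_adder;

architecture rtl of f_adder is
begin
    s    <= a xor b xor cin;                          -- sum bit
    cout <= (a and b) or (a and cin) or (b and cin);  -- carry out (majority of the three inputs)
end rtl;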

Now, my problem is with the third architecture, where I'm trying to infer the adder. Even though the following architecture compiles just fine, when I try to simulate it I get a simulation error in ModelSim, which I have attached at the end of this post. I'm guessing there's something wrong with my use of the numeric_std definitions. I am trying to avoid the arith library and I'm still getting used to the IEEE standard. Any ideas are welcome! Thank you!

Inference arch:

--INFERENCE

architecture inference of adder_n is
    signal tmp: std_logic_vector(0 to N);
    signal atmp, btmp, ctmp, add_all : integer :=0;
    signal cin_usgn: std_logic_vector(0 downto 0);
    signal U: unsigned(0 to N);
begin

    atmp <= to_integer(unsigned(a));
    btmp <= to_integer(unsigned(b));
    cin_usgn(0) <= cin;
    ctmp <= to_integer(unsigned(cin_usgn));


    add_all <= (atmp + btmp + ctmp);
    U <= to_unsigned(add_all,N);

    tmp <= std_logic_vector(U);
    s <= tmp(0 to N-1);
    cout <= tmp(N); 
end inference;

-- END

Simulation error:

# Cannot continue because of fatal error.
# HDL call sequence:
# Stopped at C:/altera/14.1/modelsim_ase/test1_simon/adder_inference.vhd 58 Architecture inference


Answer 1:


The length of U is N+1, since the range 0 to N has N+1 elements.

Changing

    U <= to_unsigned(add_all,N);

To

    U <= to_unsigned(add_all,N+1);

Will prevent the length mismatch between the left-hand side and right-hand side of the signal assignment in architecture inference of adder_n; that mismatch is what causes the fatal error during simulation.

The second parameter passed to to_unsigned specifies the length of the result.
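
As a side note (this isn't part of the fix above): with numeric_std you can also let unsigned arithmetic do the widening and drop the intermediate integer signals altogether. A rough sketch, keeping the entity as declared (the architecture name inference2 and the signal names are just for illustration):

architecture inference2 of adder_n is
    signal cin_u : unsigned(0 downto 0);   -- carry in as a 1-bit unsigned
    signal sum   : unsigned(N downto 0);   -- N+1 bits: the MSB is the carry out
begin
    cin_u(0) <= cin;

    -- widen both operands by one bit so the carry out is not lost
    sum <= resize(unsigned(a), N+1) + resize(unsigned(b), N+1) + cin_u;

    s    <= std_logic_vector(sum(N-1 downto 0));
    cout <= sum(N);
end inference2;

Keep in mind that numeric_std treats the leftmost element as the most significant bit even for ascending ranges such as 0 to N-1, so in this sketch s(0) holds the MSB of the sum, whereas the struct architecture ripples the carry from index 0 upwards and therefore treats index 0 as the least significant bit.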



Source: https://stackoverflow.com/questions/30020402/generic-adder-inference-architecture-simulation-error
