This function was apparently not well tested, because any use of
acc_handle_object() exercised a buggy code path in vpi_handle_by_name.
The implementation was awkwardly written, so parts of it were redone.
The tf_igetlongtime function may be passed any kind of object, so the
scale() function may need to convert an object handle to the handle
for the object's parent scope.
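As a rough sketch of what that conversion amounts to in terms of the
standard VPI calls (the helper name below is hypothetical and is not the
actual scale() code; vpi_get(vpiType, ...) and the one-to-one vpiScope
relationship are standard VPI):

    #include <vpi_user.h>

    /* Sketch only: if the handle is not itself a scope, ask the VPI for
     * the enclosing scope via the one-to-one vpiScope relationship. */
    static vpiHandle containing_scope(vpiHandle obj)
    {
          switch (vpi_get(vpiType, obj)) {
              case vpiModule:
              case vpiTask:
              case vpiFunction:
              case vpiNamedBegin:
              case vpiNamedFork:
                return obj;                        /* already a scope */
              default:
                return vpi_handle(vpiScope, obj);  /* parent scope */
          }
    }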
The netparray_t::slice_dimensions bug was the most insidious and
caused all manner of confusion. Also fix some other packed array
and unpacked array (and mixed) indexing calculations.
Get rid of the data_type, signed_flag, and range arguments to
pform_module_define_port, because they add no value within the
parse.y parser. Cleaning these out will hopefully ease the addition
of new functionality.
This can be handled entirely in the parser, where we rewrite the
syntax to be a begin/end block that contains the index variable
declaration and the for loop.
The root cause was that NetESignal::dup_expr() was not copying the calculated
type (signed/unsigned) of the expression.
In passing, found and fixed a similar issue when calculating a blended value
for a constant ternary expression.
%exec_ufunc assumed that because a function can never block, a call to
vthread_run() on the function code would only return when the final %end
instruction had been executed. This is not true if the function contains
a named block, which will be executed via a %fork instruction, allowing
the main function thread to suspend after a %join instruction. The fix
is to break %exec_ufunc into two instructions, the first setting the
function inputs and executing the function code, the second collecting
the function result. This provides the opportunity for the parent thread
to suspend after the %exec_ufunc instruction until all its children have
completed.
The vvp code generator was optimising away adds and subtracts where one
operand was a constant zero. This is not valid for 4-state arithmetic.
It was also optimising away multiplies by a constant zero, but in that
case it got the rewrite wrong and effectively multiplied by 1, keeping
the other operand instead of producing a constant zero.
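Here is a self-contained toy illustration of why the fold is unsafe. It
models a 4-state word with a value plane and an unknown plane (this is
not vvp's actual representation): any x or z bit in an operand forces
the whole arithmetic result to x, so "a + 0" is not equivalent to "a".

    #include <cstdint>
    #include <cstdio>

    // Toy 4-state word: 'val' holds bit values, 'unk' marks bits that are x/z.
    struct FourState {
          uint32_t val;
          uint32_t unk;
    };

    // Verilog arithmetic rule: any unknown bit in either operand makes the
    // entire result unknown, so "a + 0" cannot be reduced to just "a".
    static FourState add4(FourState a, FourState b)
    {
          FourState r;
          if (a.unk || b.unk) {
                r.val = 0;
                r.unk = 0xffffffffu;   // all bits unknown
          } else {
                r.val = a.val + b.val;
                r.unk = 0;
          }
          return r;
    }

    int main()
    {
          FourState a = {0x5, 0x2};    // binary 1x1: bit 1 is unknown
          FourState zero = {0, 0};
          FourState sum = add4(a, zero);
          std::printf("result unknown mask = 0x%08x\n", (unsigned) sum.unk);  // all ones, not 0x2
          return 0;
    }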
For shift operations evaluated at compile time, the compiler was converting
the right operand to a native unsigned long value. If the operand exceeded
the size of an unsigned long, excess bits were discarded, which could lead
to an incorrect result.
The fix I've chosen is to add an as_unsigned() function to the verinum class
which returns the maximum unsigned value if the internal verinum value is
wider than the native unsigned type. This then naturally gives the correct
result for shifts, as the verinum bit width is also an unsigned value.
I've changed the as_ulong() and as_ulong64() functions to do likewise, as
this is more likely to either give the correct behaviour or to give some
indication that an overflow has occurred where these functions are used.
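The following is a rough, self-contained sketch of that clamping idea
(the bit-vector model is just for illustration; the real code is a
member of the verinum class):

    #include <climits>
    #include <cstddef>
    #include <vector>

    // 'bits' models the defined bits of a value, LSB first. If any bit at
    // or above the width of a native unsigned is set, the value does not
    // fit, so saturate to UINT_MAX instead of silently dropping bits.
    static unsigned as_unsigned_sketch(const std::vector<bool>& bits)
    {
          const std::size_t native_bits = CHAR_BIT * sizeof(unsigned);

          for (std::size_t idx = native_bits; idx < bits.size(); idx += 1)
                if (bits[idx]) return UINT_MAX;

          unsigned val = 0;
          for (std::size_t idx = 0; idx < bits.size() && idx < native_bits; idx += 1)
                if (bits[idx]) val |= 1u << idx;
          return val;
    }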
When an expression is elaborated, the compiler converts multiplies with
one constant zero operand into a constant zero value. This is only valid
if the other operand is not a 4-state variable.
Unsized expressions can expand to extremely large widths. Usually this
is actually a mistake in the source code, but it can lead to the compiler
temporarily using extremely large amounts of memory, or in the worst
case, crashing. This adds a cap on the width of unsized expressions (by
default 65536 bits, but overridable by the user), and causes a warning
message to be output when the cap is reached.
When performing the initial assignment for a procedural continuous
assignment, any previous continuous assignment to the destination
signal must be unlinked first; otherwise the initial value for the
assignment will propagate to any other nets that are driven by the
original source signal.
The verinum arithmetic operators now observe the standard Verilog
rules for calculating the result width if all operands are sized.
If any operand is unsized, the result is lossless, as before.
They also now all observe the standard rules for handling partially
undefined operands (if any operand bit is 'x', the entire result is
'x').
I've also added the unary '-' operator, and renamed v_not() to be
the unary '~' operator. This has allowed some simplification in
other parts of the compiler.
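A toy model of the two rules for sized operands (binary strings, MSB
first, possibly containing 'x'; not the real verinum operators): the '+'
result is as wide as the wider operand, and any 'x' bit in either
operand poisons the whole result.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <string>

    static std::string add_sized(const std::string& a, const std::string& b)
    {
          const std::size_t width = std::max(a.size(), b.size());

          // Any 'x' bit makes the entire result 'x'.
          if (a.find('x') != std::string::npos || b.find('x') != std::string::npos)
                return std::string(width, 'x');

          // Plain binary add, truncated/zero-extended to the result width.
          uint64_t sum = std::stoull(a, nullptr, 2) + std::stoull(b, nullptr, 2);

          std::string out(width, '0');
          for (std::size_t i = 0; i < width && i < 64; i += 1)
                if (sum & (uint64_t(1) << i)) out[width - 1 - i] = '1';
          return out;
    }

    int main()
    {
          std::cout << add_sized("0110", "00000011") << "\n";  // "00001001", 8 bits wide
          std::cout << add_sized("01x0", "00000011") << "\n";  // "xxxxxxxx"
          return 0;
    }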
A procedural continuous assignment is supposed to be updated any time
a variable on the RHS changes. Currently this only happens if the RHS
is a simple signal.
In the case that the RHS of a procedural continuous assignment is a simple
vector that is wider than the LHS, changes to the RHS vector cause the
entire vector to be sent to port 1 of the LHS vvp_fun_signal object. This
vector needs to be coerced to the size of the LHS. Note that this is a
stopgap fix until vvp handles arbitrary expressions on the RHS of a
procedural continuous assignment.
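A hedged sketch of that coercion (the names here are illustrative only,
not the actual vvp code): the vector arriving at port 1 is simply cut
down to the LHS width before it is applied.

    #include <cstddef>
    #include <vector>

    // Illustration only: 'bits' models the RHS vector arriving at port 1,
    // LSB first. Truncating it to the LHS width keeps a wide RHS from
    // writing bits the LHS signal does not have.
    static std::vector<char> coerce_to_lhs_width(std::vector<char> bits, std::size_t lhs_wid)
    {
          if (bits.size() > lhs_wid)
                bits.resize(lhs_wid);
          return bits;
    }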
Signed vector power operations were being implemented using the double
pow() function. This gave inaccurate results when the operands or
result were not exactly representable by a 64-bit floating point number.
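A small, standalone demonstration of the precision problem (not compiler
code): 3**34 needs 54 significant bits, one more than a double's 53-bit
mantissa, so pow() must round it, while repeated integer multiplication
stays exact.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
          uint64_t exact = 1;
          for (int i = 0; i < 34; i += 1)
                exact *= 3u;                     // 16677181699666569, needs 54 bits

          double approx = std::pow(3.0, 34.0);   // rounded to a 53-bit mantissa

          std::printf("exact: %llu\n", (unsigned long long) exact);
          std::printf("pow(): %.0f\n", approx);
          return 0;
    }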
During expression evaluation, the compiler attempts to optimise away
relational operations when one side is constant and all possible values
of the other side would result in the relation being true. This is not
a valid optimisation if the other side is a 4-state variable, as an
'x' or 'z' will result in the relation being unknown.
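A toy example of the hazard (again, not compiler code): a 4-bit value is
always less than 16 when fully known, but the relation becomes unknown
as soon as any bit is 'x'.

    #include <cstdio>
    #include <string>

    // 'bits' is a 4-bit value, MSB first, using '0', '1' or 'x'. A fully
    // known 4-bit value is always < 16, but an 'x' bit makes the relational
    // result 1'bx, so the comparison cannot be folded to a constant true.
    static char less_than_16(const std::string& bits)
    {
          if (bits.find('x') != std::string::npos)
                return 'x';
          return '1';
    }

    int main()
    {
          std::printf("0101 < 16 -> %c\n", less_than_16("0101"));   // 1
          std::printf("01x0 < 16 -> %c\n", less_than_16("01x0"));   // x
          return 0;
    }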