|
Similar to the previously-added UnknownBody, the new optional interface
MarkedBody allows hcl.Body implementations to suggest a set of marks that
ought to be applied to any value that's generated to represent the content
of that body.
The dynblock extension then uses this to get hcldec to mark the whole
object representing any block that was generated by a dynamic block whose
for_each was marked, for a better representation of the fact that a
block's existence was decided based on a marked value.
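As a rough illustration of how a body implementation might opt in (the
method name BodyValueMarks below is an assumption based on this
description, not a confirmed part of the hcldec API):

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // markedBody wraps another hcl.Body and suggests a set of marks that
    // should be applied to any value generated from its content.
    type markedBody struct {
        hcl.Body                // delegate the usual hcl.Body methods
        marks cty.ValueMarks    // e.g. marks carried over from a marked for_each
    }

    // BodyValueMarks is the hypothetical optional-interface method: hcldec
    // would call it, if implemented, and apply the returned marks to the
    // whole value representing this body.
    func (b markedBody) BodyValueMarks() cty.ValueMarks {
        return b.marks
    }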
|
|
Due to the quite messy heritage of this codebase -- including a large part
of it being just a fork of my earlier personal project ZCL -- there were
many different conventions for how to pretty-print and diff values in the
tests in different parts of the codebase.
To reduce the dependency sprawl, this commit now standardizes on:
- github.com/davecgh/go-spew for pretty-printing
- github.com/google/go-cmp for diffing
These two dependencies were already present anyway, are the most general
out of all of the candidates, and are also already in use by at least some
of HCL's most significant callers, such as HashiCorp Terraform.
The version of go-cmp we were previously using seems to have a bug that
causes the tests to crash when run under the Go race detector, so I've
also upgraded that dependency to latest here to clear that bug.
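For reference, the resulting test convention looks roughly like this (a
minimal sketch, not taken from any particular test in this repository):

    package example_test

    import (
        "testing"

        "github.com/davecgh/go-spew/spew"
        "github.com/google/go-cmp/cmp"
    )

    func TestExample(t *testing.T) {
        got := map[string]int{"a": 1}
        want := map[string]int{"a": 2}

        // go-cmp for diffing: an empty diff means the values are equal.
        if diff := cmp.Diff(want, got); diff != "" {
            t.Errorf("wrong result\n%s", diff)
        }

        // go-spew for pretty-printing a value in a failure message.
        t.Logf("got value:\n%s", spew.Sdump(got))
    }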
|
|
When hcldec needs to return a synthetic value, it must use
WithoutOptionalAttributesDeep to ensure the correct type for Null or
Unknown values. Optional attributes are normally removed from the value
type when decoding, but since hcldec is dealing with the decoder spec
directly, the optional attr types are still going to be present in these
cases.
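As a rough illustration of the cty behavior involved (the object type
below is invented for the example):

    package example

    import (
        "github.com/zclconf/go-cty/cty"
    )

    func nullForSpecType() cty.Value {
        // A type as a decoder spec might imply it, with an optional attribute.
        ty := cty.ObjectWithOptionalAttrs(
            map[string]cty.Type{
                "name": cty.String,
                "port": cty.Number,
            },
            []string{"port"}, // "port" is optional
        )

        // A synthetic null (or unknown) result must not carry the
        // optional-attribute metadata, so strip it before constructing
        // the value.
        return cty.NullVal(ty.WithoutOptionalAttributesDeep())
    }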
|
|
The interactions between value marks and unknown value refinements can be
a little tricky, so this new addition to the "RefineWith" tests confirms
that hcldec does indeed handle marked values correctly when passing them
through the refinement spec.
|
|
This test case is here to anticipate a _possible_ bug that does not actually
exist in the current implementation: if an attribute spec is given a
non-dynamic type constraint and its result is then refined based on that
type constraint, the hcldec implementation must perform the type conversion
first and only then attempt to add the refinements.
Another possible variation here would be for the attribute spec to have
a dynamic type constraint (cty.DynamicPseudoType) and then try to refine
its result. That case isn't tested here because that's always an
implementation error in the calling application: RefineValueSpec must be
used only in ways that are valid for the full range of types that the
nested spec could produce, and there are no refinements that are valid
for the full range of cty.DynamicPseudoType. That situation can and will
panic at runtime, alerting the application developer that they've used
hcldec incorrectly. There is no way for end-user input to cause this panic
if the calling application is written correctly.
This doesn't actually change the system behavior; it's a test intended
to catch possible regressions under future maintenance.
|
|
This new spec type allows adding value refinements to the results of some
other spec, as long as the wrapped spec does indeed enforce the
constraints described by the refinements.
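A rough usage sketch; the Wrapped and Refine field names here are
assumptions based on the description above, so check the actual spec
definition:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // A spec that decodes a required string attribute and then records
    // that the result is known to be non-null, even while the value
    // itself may be unknown.
    var nameSpec hcldec.Spec = &hcldec.RefineValueSpec{
        Wrapped: &hcldec.AttrSpec{
            Name:     "name",
            Type:     cty.String,
            Required: true,
        },
        Refine: func(rb *cty.RefinementBuilder) *cty.RefinementBuilder {
            return rb.NotNull()
        },
    }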
|
|
* [COMPLIANCE] Add Copyright and License Headers
* add copywrite file and revert headers in testdata
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
Co-authored-by: Liam Cervante <liam.cervante@hashicorp.com>
|
|
Always attempt to decode dynamic blocks to provide validation. If the
iterator is unknown the value will be discarded, but the diagnostics are
still useful to ensure the structure is correct.
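For context, callers typically expand dynamic blocks and then decode in
one pass, so these validation diagnostics surface even when a for_each
value is unknown; a minimal sketch:

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/dynblock"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // decodeWithDynamicBlocks expands any "dynamic" blocks and then decodes
    // the result. Structural problems inside a dynamic block's content are
    // still reported even if its for_each value is unknown.
    func decodeWithDynamicBlocks(body hcl.Body, spec hcldec.Spec, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
        expanded := dynblock.Expand(body, ctx)
        return hcldec.Decode(expanded, spec, ctx)
    }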
|
|
Allow unknown block bodies during decoding
|
|
This adds ValidateSpec, a new decoder Spec that allows one to add
custom validations to work with values at decode-time.
The validation is run on the value after the wrapped spec is applied to
the expression in question. Diagnostics are expected to be returned,
with the author having flexibility over whether or not they want to
specify a range; if one is not supplied, the range of the wrapped
expression is used.
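A rough usage sketch; the Wrapped and Func field names are taken from the
description above and may not match the final API exactly:

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // Validate that a decoded "port" attribute is within range. Returning a
    // diagnostic without a Subject lets the decoder fall back to the range
    // of the wrapped expression.
    var portSpec hcldec.Spec = &hcldec.ValidateSpec{
        Wrapped: &hcldec.AttrSpec{Name: "port", Type: cty.Number, Required: true},
        Func: func(v cty.Value) hcl.Diagnostics {
            var diags hcl.Diagnostics
            if v.IsKnown() && !v.IsNull() {
                port, _ := v.AsBigFloat().Int64()
                if port < 1 || port > 65535 {
                    diags = append(diags, &hcl.Diagnostic{
                        Severity: hcl.DiagError,
                        Summary:  "Invalid port number",
                        Detail:   "The port must be between 1 and 65535.",
                    })
                }
            }
            return diags
        },
    }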
|
|
Most of the time, the standard expression decoding built in to HCL is
sufficient. Sometimes though, it's useful to be able to customize the
decoding of certain arguments where the application intends to use them
in a very specific way, such as in static analysis.
This extension is an approximate analog of gohcl's support for decoding
into an hcl.Expression, allowing hcldec-based applications and
applications with custom functions to similarly capture and manipulate
the physical expressions used in arguments, rather than their values.
This includes one example use-case: the typeexpr extension now includes
a cty.Function called ConvertFunc that takes a type expression as its
second argument. A type expression is not evaluatable in the usual sense,
but thanks to cty capsule types we _can_ produce a cty.Value from one
and then make use of it inside the function implementation, without
exposing this custom type to the broader language:
    convert(["foo"], set(string))
This mechanism is intentionally restricted only to "argument-like"
locations where there is a specific type we are attempting to decode into.
For now, that's hcldec AttrSpec/BlockAttrsSpec -- analogous to gohcl
decoding into hcl.Expression -- and in arguments to functions.
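For instance, an application could expose that function through its
EvalContext (a minimal sketch):

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/typeexpr"
        "github.com/zclconf/go-cty/cty/function"
    )

    // evalContext returns an EvalContext whose expressions can call
    // convert(value, type-expression), e.g. convert(["foo"], set(string)).
    func evalContext() *hcl.EvalContext {
        return &hcl.EvalContext{
            Functions: map[string]function.Function{
                "convert": typeexpr.ConvertFunc,
            },
        }
    }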
|
|
The two cases where we decode attribute values should include the
expression and EvalContext in any diagnostics they generate so that the
calling application can give hints about the types and values of variables
that are used within the expression.
This also includes some adjustments to the returned source ranges so that
both cases are consistent with one another and so that both indicate
the entire expression as the Subject and include the attribute name in
the Context. Including the whole expression in the range ensures that
when there's a problem inside a tuple or object constructor, for example,
we'll show the line containing the problem and not just the opening [
or { symbol.
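For reference, the resulting diagnostics have roughly this shape (a sketch
using a hypothetical helper, not code from the change itself):

    package example

    import (
        "github.com/hashicorp/hcl/v2"
    )

    // diagnosticFor shows the shape described above: the whole expression is
    // the Subject, the attribute's full range (including its name) is the
    // Context, and the Expression/EvalContext fields let the calling
    // application describe the variables involved.
    func diagnosticFor(attr *hcl.Attribute, ctx *hcl.EvalContext, detail string) *hcl.Diagnostic {
        return &hcl.Diagnostic{
            Severity:    hcl.DiagError,
            Summary:     "Unsuitable value type",
            Detail:      detail,
            Subject:     attr.Expr.Range().Ptr(),
            Context:     attr.Range.Ptr(),
            Expression:  attr.Expr,
            EvalContext: ctx,
        }
    }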
|
|
The main HCL package is more visible this way, making it easier to find
than having to pick it out from dozens of other package directories.
|
|
This is in preparation for the first v2 release from the main HCL
repository.
|
|
When nested attributes are of type cty.DynamicPseudoType, a block spec
that is backed by a cty collection is annoying to use because it requires
all of the blocks to have homogeneous types for such attributes.
These new specs are similar to BlockListSpec and BlockMapSpec
respectively, but permit each nested block result to have its own distinct
type.
In return for this flexibility, we lose the ability to predict the exact
type of the result: these specs must just indicate their type as being
cty.DynamicPseudoType themselves, since we need to know how many blocks
there are and what types are inside them before we can know our final
result type.
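Assuming the new specs are hcldec's BlockTupleSpec and BlockObjectSpec
(the text above doesn't name them, so treat this as a sketch), usage might
look like:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // Each "setting" block may produce a differently-typed value, so the
    // overall result is a tuple rather than a list.
    var settingsSpec hcldec.Spec = &hcldec.BlockTupleSpec{
        TypeName: "setting",
        Nested: &hcldec.AttrSpec{
            Name: "value",
            Type: cty.DynamicPseudoType, // each block's value may have its own type
        },
    }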
|
|
Our BlockList, BlockSet, and BlockMap specs all produce cty collection
values, which require all elements to have a homogeneous type. If the
nested spec contained an attribute of type cty.DynamicPseudoType, that
would create the risk of each element having a different type, which would
previously have caused decoding to panic.
Now we either handle this during decode (BlockList, BlockSet) or forbid
it outright (BlockMap) to prevent that crash. BlockMap could _potentially_
also handle this during decode, but that would require a more significant
reorganization of its implementation than I want to take on right now,
and decoding dynamically-typed values inside collections is an edge case
anyway.
|
|
This is the hcldec interface to Body.JustAttributes, producing a map whose
keys are the child attribute names and whose values are the results of
evaluating those expressions.
We can't just expose a JustAttributes-style spec directly here because
it's not really compatible with how hcldec thinks about things, but we
can expose a spec that decodes a specific child block because that can
then compose properly with other specs at the same level without
interfering with their operation.
The primary use for this is to allow the use of the block syntax to define
a map:
    dynamic_stuff {
      foo = "bar"
    }
JustAttributes is normally used in static analysis situations such as
enumerating the contents of a block to decide what to include in the
final EvalContext. That's not really possible with the hcldec model
because both structural decoding and expression evaluation happen
together. Therefore the use of this is pretty limited: it's useful if you
want to be compatible with an existing format based on legacy HCL where a
map was conventionally defined using block syntax, relying on the fact
that HCL did not make a strong distinction between attribute and block
syntax.
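Assuming this is hcldec's BlockAttrsSpec, decoding the example above might
look like this sketch:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // Decodes a single "dynamic_stuff" block into a map of strings, treating
    // each attribute inside the block as one map element.
    var stuffSpec hcldec.Spec = &hcldec.BlockAttrsSpec{
        TypeName:    "dynamic_stuff",
        ElementType: cty.String,
        Required:    false,
    }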
|
|
Previously this implementation was doing only one level of recursion in
its walk, which gave the appearance of working until the
transform/container-type specs (DefaultSpec, TransformSpec, ...) were
introduced, creating the possibility of "same body children" being more
than one level away from the initial spec.
It's still correct to only process the schema and content once, because
ImpliedSchema is already collecting all of the requirements from the
"same body children", and so our content object will include everything
that the nested specs should need to analyze needed variables.
|
|
Previously DefaultSpec was not implementing the two optional interfaces
required for this, and so decoding would fail for any AttrSpec or block
spec nested inside.
Now it passes through attribute requirements from both the primary and
default, and passes block requirements only from the primary, thus
allowing either fallback between two attributes, fallback from an
attribute to a constant, or fallback from a block to a constant. Other
permutations are also possible, but not very important.
|
|
This new spec type allows evaluating an arbitrary expression on the
result of a nested spec, for situations where a value must be
transformed in some way.
|
|
This function returns a map describing all of the child block types
declared inside a spec. This can be used for recursive decoding of bodies
using the low-level HCL API, though in most cases callers should just use
Decode which does recursive decoding of an entire nested structure in
a single call.
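Assuming a signature along the lines of ChildBlockTypes(Spec) returning a
map from block type name to nested spec, a caller doing its own recursive
decoding might use it like this sketch:

    package example

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hcldec"
    )

    // listBlockTypes prints the block type names a spec expects, along with
    // the spec used for each block's contents, for a caller doing its own
    // recursive decoding with the low-level HCL API.
    func listBlockTypes(spec hcldec.Spec) {
        for typeName, blockSpec := range hcldec.ChildBlockTypes(spec) {
            fmt.Printf("block type %q decodes with %T\n", typeName, blockSpec)
        }
    }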
|
|
This function returns the type of value that should be returned when
decoding the given spec. As well as being generally useful to the caller
for book-keeping purposes, this also allows us to return correct type
information when we are returning null and empty values, where before we
were leaning a little too much on cty.DynamicPseudoType.
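A minimal sketch of how a caller might use it:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // The caller can learn the result type of a decode before actually
    // decoding, which is useful for book-keeping or for constructing
    // placeholder null/unknown values of the right type.
    func resultType() cty.Type {
        spec := hcldec.ObjectSpec{
            "name": &hcldec.AttrSpec{Name: "name", Type: cty.String, Required: true},
            "tags": &hcldec.AttrSpec{Name: "tags", Type: cty.Map(cty.String)},
        }
        return hcldec.ImpliedType(spec) // object({name: string, tags: map(string)})
    }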
|
|
A BlockLabelSpec can be placed in the nested spec structure of one of the
block specs to require and obtain labels on that block.
This is a more generic methodology than BlockMapSpec since it allows the
result to be a list or set with the labels inside the values, rather than
forcing all the label tuples to be unique and losing the ordering by
collapsing into a map structure.
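A rough sketch of placing a BlockLabelSpec inside a BlockListSpec's nested
spec:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // Each "service" block requires one label ("name"), which ends up inside
    // the resulting object alongside the block's own attributes, so the
    // overall result can stay an ordered list rather than collapsing into
    // a map.
    var servicesSpec hcldec.Spec = &hcldec.BlockListSpec{
        TypeName: "service",
        Nested: hcldec.ObjectSpec{
            "name": &hcldec.BlockLabelSpec{Index: 0, Name: "name"},
            "port": &hcldec.AttrSpec{Name: "port", Type: cty.Number, Required: true},
        },
    }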
|
|
These were not previously consistent with the usual style.
|
|
Previously, due to a bug, it was only visiting the first two levels of
specifications, missing anything referenced at lower levels.
|
|
This is a wrapper that allows a default value to be applied if a primary
spec results in a null value.
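A minimal sketch:

    package example

    import (
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"
    )

    // If the "region" attribute is absent (and therefore decodes to null),
    // the literal default value is used instead.
    var regionSpec hcldec.Spec = &hcldec.DefaultSpec{
        Primary: &hcldec.AttrSpec{Name: "region", Type: cty.String},
        Default: &hcldec.LiteralSpec{Value: cty.StringVal("us-east-1")},
    }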
|
|
This is a super-invasive update since the "zcl" package in particular
is referenced all over.
There are probably still a few zcl references hanging around in comments,
etc., but this takes care of most of it.
|
|
The main "zcl" package requires a bit more care because of how many
callers it has and because of its two subpackages, so we'll take care
of that one separately.
|