Similar to the previously-added UnknownBody, the new optional interface
MarkedBody allows hcl.Body implementations to suggest a set of marks that
ought to be applied to any value that's generated to represent the content
of that body.
The dynblock extension then uses this to get hcldec to mark the whole
object representing any block that was generated by a dynamic block whose
for_each was marked, for a better representation of the fact that a
block's existence was decided based on a marked value.
|
Previously if the for_each expression was marked then expansion would
fail because marked expressions are never directly iterable.
Now instead we'll allow marked for_each and preserve the marks into the
values produced by the resulting block as much as we can. This runs into
the classic problem that HCL blocks are not values themselves and so
cannot carry marks directly, but we can at least make sure that the values
of any leaf arguments end up marked.
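As an illustrative sketch (the block name, schema, and mark below are invented for the example, not part of the extension's API), a marked for_each value should now expand successfully and surface its mark on the decoded leaf values:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/dynblock"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    const src = `
    dynamic "rule" {
      for_each = ports
      content {
        port = rule.value
      }
    }
    `

    func main() {
        f, diags := hclsyntax.ParseConfig([]byte(src), "example.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }

        // "sensitive" stands in for any application-defined mark.
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "ports": cty.ListVal([]cty.Value{
                    cty.NumberIntVal(80),
                    cty.NumberIntVal(443),
                }).Mark("sensitive"),
            },
        }

        spec := &hcldec.BlockListSpec{
            TypeName: "rule",
            Nested:   &hcldec.AttrSpec{Name: "port", Type: cty.Number},
        }

        val, diags := hcldec.Decode(dynblock.Expand(f.Body, ctx), spec, ctx)
        if diags.HasErrors() {
            panic(diags.Error())
        }
        fmt.Println(val.ContainsMarked()) // true: the mark reaches the leaves
    }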
|
If the iterator is misconfigured (e.g. a string instead of a reference), it
leads to follow-up issues in the validation of for_each, which in turn
produces misleading error messages. See hashicorp/terraform#34132 for an
example.
|
Callers might have additional rules for what's acceptable in a for_each
value for a dynamic block. For example, Terraform wants to forbid using
sensitive values here because it would cause the expansion to disclose the
length of the given collection.
Therefore this provides a hook point for callers to insert additional
checks just after the for_each expression has been evaluated and before
any of the built-in checks are run.
This introduces the "functional options" pattern for ExpandBlock for the
first time, as a way to extend the API without breaking compatibility with
existing callers. There is currently only this one option.
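A minimal sketch of the pattern being introduced (the names here are hypothetical, not necessarily the exact dynblock API): options are functions over an unexported config struct, so future options can be added without another signature change.

    package dynblock

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // ExpandOption is the functional-option type accepted by ExpandBlock.
    type ExpandOption func(*expandConfig)

    type expandConfig struct {
        checkForEach func(cty.Value, hcl.Expression, *hcl.EvalContext) hcl.Diagnostics
    }

    // OptCheckForEach registers a caller-provided hook that runs just after
    // the for_each expression is evaluated and before any built-in checks.
    func OptCheckForEach(check func(cty.Value, hcl.Expression, *hcl.EvalContext) hcl.Diagnostics) ExpandOption {
        return func(cfg *expandConfig) {
            cfg.checkForEach = check
        }
    }

    func ExpandBlock(body hcl.Body, ctx *hcl.EvalContext, opts ...ExpandOption) hcl.Body {
        var cfg expandConfig
        for _, opt := range opts {
            opt(&cfg)
        }
        // ... evaluate for_each, call cfg.checkForEach first, then run the
        // built-in iterability checks ...
        return body // placeholder for the real expansion logic
    }

Terraform, for example, could pass a hook that rejects sensitive for_each
values with its own error message.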
|
Check for duplicate keys in objects when building types from expressions
|
The `can` and `try` functions can return more precise results in some
cases. Rather than trying to inspect the expressions for any unknown
values, rely on the evaluation either succeeding or returning an error,
and base the decision on the evaluated values and errors.
A fundamental requirement for the `try` and `can` functions is that the
values and types remain consistent as argument values are refined. This
can be done provided we hold these conditions regarding unknowns to be
true:
- An evaluation error can never be fixed by an unknown value becoming
  known.
- An entirely known value from an expression cannot later become
  unknown as values are refined.
- An expression result must always have a consistent type and value,
  only allowing the refinement of unknown values and types.
- An expression result cannot be conditionally based on the
  "known-ness" of a value (which is really the premise for all previous
  statements).
As long as those conditions remain true, the result of the `try`
argument's evaluation can be trusted, and we don't need to bail out
early at any sign of an unknown in the argument expressions.
While the evaluation result of each argument can be trusted in isolation,
the fact that different types and values can be returned by `try` means
we need to convert the return to the most generic value possible to
prevent inconsistent results ourselves (adhering to the third condition
above). That means anything which is not entirely known must be
converted to a dynamic value.
Even more refinement might still be possible in the future if all
arguments are evaluated and compared for compatibility, but care needs
to be taken to prevent changing known values within collections from
different arguments even when types are identical.
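A compact illustration of the resulting behavior (the variable name and scope setup are invented for the example): an unknown argument must not cause the fallback to be taken, so the result is unknown instead.

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/tryfunc"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    func main() {
        expr, diags := hclsyntax.ParseExpression([]byte(`try(a.b, "fallback")`), "x.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                // a is wholly unknown: a.b may or may not fail once known
                "a": cty.DynamicVal,
            },
            Functions: map[string]function.Function{
                "try": tryfunc.TryFunc,
            },
        }
        v, _ := expr.Value(ctx)
        // false: taking the fallback now could contradict a later, known result
        fmt.Println(v.IsKnown())
    }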
|
If either the given value or the default value is refined as non-null,
then the final attribute value after defaults processing
is also guaranteed non-null even if we don't yet know exactly what the
value will be.
This rule is pretty marginal on its own, but refining some types of value
as non-null creates opportunities to deduce further information when the
value is used under other operations later, such as collapsing an unknown
but definitely not null list of a known length into a known list of that
length containing unknown values.
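For reference, this builds on go-cty's refinements API; a minimal sketch (method names per recent go-cty, shown only to illustrate the idea):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // An unknown list, refined as definitely not null.
        val := cty.UnknownVal(cty.List(cty.String)).Refine().NotNull().NewValue()

        fmt.Println(val.IsKnown())                   // false
        fmt.Println(val.Range().DefinitelyNotNull()) // true
    }

Later operations can then rely on the non-null guarantee, e.g. an element
count learned elsewhere can collapse such a value into a known list of
unknown elements.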
|
* remove type assumptions when retrieving child default values
* fix imports
|
* [COMPLIANCE] Add Copyright and License Headers
* add copywrite file and revert headers in testdata
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
Co-authored-by: Liam Cervante <liam.cervante@hashicorp.com>
|
* Apply defaults using custom traversal of types
* sort imports
* address comments
* small refactoring, and update documentation
|
* add test cases that verify the downstream go-cty fix has worked
* update go-cty
|
When parsing optional object attribute defaults, we previously verified
that the default value was convertible to the attribute type. However,
we did not keep this converted value.
This commit uses the converted default value, rather than delaying
conversion until later. In turn this prevents crashes when transforming
collections which contain objects with optional attributes, caused by
incompatible object types at the time of defaults application.
|
This commit extends the type expression package to add two new features:
- In constraint mode, the `optional(...)` modifier can be used on object
attributes to allow them to be omitted from input values to a type
conversion process. Any such missing attributes will be replaced with
a `null` value of the appropriate type upon conversion.
- In the new defaults mode, the `optional(...)` modifier takes a second
argument, which accepts a default value of an appropriate type. These
defaults are returned alongside the type constraint, and may be
applied prior to type conversion through the new `Defaults.Apply()`
method.
This change is upstreamed from Terraform, where optional object
attributes have been available for some time. The defaults functionality
is new and due to be released with Terraform 1.3.
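A sketch of how the two modes combine (the entry point for retrieving defaults alongside the constraint is assumed here to be TypeConstraintWithDefaults):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/typeexpr"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    func main() {
        src := `object({ name = string, port = optional(number, 8080) })`
        expr, diags := hclsyntax.ParseExpression([]byte(src), "type.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }

        ty, defaults, tyDiags := typeexpr.TypeConstraintWithDefaults(expr)
        if tyDiags.HasErrors() {
            panic(tyDiags.Error())
        }

        // The input omits "port" entirely.
        given := cty.ObjectVal(map[string]cty.Value{
            "name": cty.StringVal("web"),
        })

        // Apply defaults first, then convert to the constraint.
        val, err := convert.Convert(defaults.Apply(given), ty)
        if err != nil {
            panic(err)
        }
        fmt.Println(val.GetAttr("port")) // cty.NumberIntVal(8080)
    }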
|
Always attempt to decode dynamic blocks to provide validation. If the
iterator is unknown the value will be discarded, but the diagnostics are
still useful to ensure the structure is correct.
|
Allow dynamic blocks to indicate when the entire block value is unknown
|
The try(...) and can(...) functions are intended to make it more
convenient to work with deep data structures of unknown shape, by allowing
a caller to concisely try a complex traversal operation against a value
without having to guard against each possible failure mode individually.
These rely on the customdecode extension to get access to their argument
expressions directly, rather than only the results of evaluating those
expressions. The expressions can then be evaluated in a controlled manner
so that any resulting errors can be recognized and suppressed as
appropriate.
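For example, with the two functions from ext/tryfunc installed into an evaluation scope (the variable name here is invented), a failed traversal becomes a value rather than an error:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/tryfunc"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    func eval(src string, ctx *hcl.EvalContext) cty.Value {
        expr, diags := hclsyntax.ParseExpression([]byte(src), "x.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }
        v, _ := expr.Value(ctx)
        return v
    }

    func main() {
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "settings": cty.ObjectVal(map[string]cty.Value{
                    "retries": cty.NumberIntVal(5),
                }),
            },
            Functions: map[string]function.Function{
                "try": tryfunc.TryFunc,
                "can": tryfunc.CanFunc,
            },
        }

        fmt.Println(eval(`can(settings.retries)`, ctx)) // cty.True
        fmt.Println(eval(`can(settings.nope)`, ctx))    // cty.False
        fmt.Println(eval(`try(settings.nope, 3)`, ctx)) // cty.NumberIntVal(3)
    }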
|
Most of the time, the standard expression decoding built in to HCL is
sufficient. Sometimes though, it's useful to be able to customize the
decoding of certain arguments where the application intends to use them
in a very specific way, such as in static analysis.
This extension is an approximate analog of gohcl's support for decoding
into an hcl.Expression, allowing hcldec-based applications and
applications with custom functions to similarly capture and manipulate
the physical expressions used in arguments, rather than their values.
This includes one example use-case: the typeexpr extension now includes
a cty.Function called ConvertFunc that takes a type expression as its
second argument. A type expression is not evaluatable in the usual sense,
but thanks to cty capsule types we _can_ produce a cty.Value from one
and then make use of it inside the function implementation, without
exposing this custom type to the broader language:
    convert(["foo"], set(string))
This mechanism is intentionally restricted only to "argument-like"
locations where there is a specific type we are attempting to decode into.
For now, that's hcldec AttrSpec/BlockAttrsSpec -- analogous to gohcl
decoding into hcl.Expression -- and in arguments to functions.
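A rough sketch of the declaration side (the function itself is invented for illustration; ConvertFunc in typeexpr is the real example): a function opts in by using the capsule type customdecode.ExpressionType for a parameter, then unwraps the capsule inside its implementation.

    package funcs

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/ext/customdecode"
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    // DescribeExprFunc receives its argument as an unevaluated expression.
    var DescribeExprFunc = function.New(&function.Spec{
        Params: []function.Parameter{
            {Name: "expr", Type: customdecode.ExpressionType},
        },
        Type: function.StaticReturnType(cty.String),
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            // Unwrap the capsule to reach the physical expression.
            expr := customdecode.ExpressionFromVal(args[0])
            return cty.StringVal(fmt.Sprintf("expression at %s", expr.Range())), nil
        },
    })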
|
This experimental extension is not ready to be included in a release. It
ought to be reworked so that "include" blocks get replaced with what they
include _in-place_, preserving the relative ordering of blocks.
However, there is no application making use of this yet and so we'll defer
that work until there's a real use-case to evaluate it with.
|
The main HCL package is more visible this way, and so it's easier to find
than when it had to be picked out from dozens of other package directories.
|
This is in preparation for the first v2 release from the main HCL
repository.
|
Previously our behavior for an unknown for_each was to produce a single
block whose content was the result of evaluating content with the iterator
set to cty.DynamicVal. That produced a reasonable idea of the content, but
the number of blocks in the result was still not accurate, and that can
present a problem for applications that use unknown values to predict
the overall shape of a not-yet-complete structure.
We can't return an unknown block via the HCL API, but to make that
situation easier to recognize by callers we'll now go a little further and
force _all_ of the leaf attributes in such a block to be unknown values,
even if they are constants in the configuration. This allows a calling
application that is making predictions to use a single object whose
leaves are all unknown as a heuristic to recognize what is effectively
an unknown set of blocks.
This is still not a perfect heuristic, but is the best we can do here
within the HCL API assumptions. A fundamental assumption of the HCL API
is that it's possible to walk the block structure without evaluating any
expressions and the dynamic block extension is intentionally subverting
that assumption, so some oddities are to be expected. Calling applications
that need a fully reliable sense of the final structure should not use
the dynamic block extension.
|
In normal situations the block type name alone is enough to determine the
appropriate schema for a child, but when callers are otherwise doing
unusual pre-processing of bodies to dynamically generate schemas during
decoding they are likely to need to take similar steps while analyzing
for variables, to ensure that all of the references can be located in
spite of the not-yet-applied pre-processing.
|
Our API previously had a function only for retrieving the variables used
in the for_each and labels arguments used during an Expand call, and
expected callers to then interrogate the resulting expanded block to find
the other variables required to fully decode the content.
That approach is insufficient for any application that needs to know the
full set of required variables before any evaluation begins, such as when
a dependency graph will be constructed to allow a topological traversal
through blocks while evaluating.
Now we have WalkVariables, which finds both the variables used to expand
_and_ the variables within any blocks. This also renames
WalkForEachVariables to WalkExpandVariables since that name is more
accurate with the addition of the "label" argument into the expand-time
dependency set.
There is also an hcldec-based helper wrapper for each of those, allowing
single-shot analysis of blocks for applications that use hcldec.
This is a breaking change to the dynblock package API, because the old
WalkForEachVariables and ForEachVariablesHCLDec functions are no longer
present.
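For illustration, with an hcldec spec in hand the whole analysis is one call per variant (assuming the wrapper names VariablesHCLDec and ExpandVariablesHCLDec; the config and schema below are example-only):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/dynblock"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    const src = `
    dynamic "rule" {
      for_each = ports
      content {
        port = rule.value + offset
      }
    }
    `

    func main() {
        f, diags := hclsyntax.ParseConfig([]byte(src), "example.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }
        spec := &hcldec.BlockListSpec{
            TypeName: "rule",
            Nested:   &hcldec.AttrSpec{Name: "port", Type: cty.Number},
        }

        // Everything needed to expand and decode: ports and offset.
        for _, t := range dynblock.VariablesHCLDec(f.Body, spec) {
            fmt.Println(t.RootName())
        }
        // Only what's needed before expansion: ports.
        for _, t := range dynblock.ExpandVariablesHCLDec(f.Body, spec) {
            fmt.Println(t.RootName())
        }
    }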
|
Previously we were incorrectly passing the original forEachCtx down
to nested child blocks for recursive expansion. Instead, we must use the
iteration-specific constructed EvalContext, which then allows any nested
dynamic blocks to use the parent's iterator variable in their for_each or
labels expressions, and thus unpack nested data structures into
corresponding nested block structures:
    dynamic "parent" {
      for_each = [["a", "b"], []]
      content {
        dynamic "child" {
          for_each = parent.value
          content {}
        }
      }
    }
|
If a diagnostic occurs while we're evaluating an expression, we'll now
include a reference to that expression in the diagnostic object. We
previously added the corresponding EvalContext here too, and so with these
together it is now possible for a diagnostic renderer to see not only
what was in scope when the problem occurred but also what parts of that
scope the expression was relying on (via method Expression.Variables).
|
When we're evaluating expressions, we may end up evaluating the same
source-level expression a number of times in different contexts, such as
in a 'for' expression, where each one may produce a different set of
diagnostic messages.
Now we'll attach the EvalContext to each expression diagnostic so that
a diagnostic renderer can potentially show additional information to help
distinguish the different iterations in rendered diagnostics.
|
This uses the expression static analysis features to interpret
a combination of static calls and static traversals as the description
of a type.
This is intended for situations where applications need to accept type
information from their end-users, providing a concise syntax for doing
so.
Since this is implemented using static analysis, the type vocabulary is
constrained only to keywords representing primitive types and type
construction functions for complex types. No other expression elements
are allowed.
A separate function is provided for parsing type constraints, which allows
the additional keyword "any" to represent the dynamic pseudo-type.
Finally, a helper function is provided to convert a type back into a
string representation resembling the original input, as an aid to
applications that need to produce error messages relating to user-entered
types.
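For example (a minimal sketch of the resulting API surface):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/typeexpr"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func main() {
        expr, diags := hclsyntax.ParseExpression([]byte(`list(object({ name = string }))`), "type.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }

        // Type rejects "any"; TypeConstraint additionally accepts it.
        ty, tyDiags := typeexpr.Type(expr)
        if tyDiags.HasErrors() {
            panic(tyDiags.Error())
        }

        // Round-trip back to a user-oriented string for error messages.
        fmt.Println(typeexpr.TypeString(ty))
    }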
|
Now that we have the necessary functions to deal with this in the
low-level HCL API, it's more intuitive to use bare identifiers for these
parameter names. This reinforces the idea that they are symbols being
defined rather than arbitrary string expressions.
|
A pattern has emerged of wrapping Expression instances with other
Expressions in order to subtly modify their behavior. A key example of
this is in ext/dynblock, where we wrap an expression in order to introduce
our additional iteration variable for expressions in dynamic blocks.
Rather than having each wrapper expression implement wrapping
implementations for our various syntax-level-analysis functions (like
ExprList and AbsTraversalForExpr), instead we define a standard mechanism
to unwrap expressions back to the lowest-level object -- usually an AST
node -- and then use this in all of our analyses that look at the
expression's structure rather than its value.
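A minimal sketch of the mechanism (the wrapper type is illustrative; the unwrap entry point is hcl.UnwrapExpression):

    package wrap

    import "github.com/hashicorp/hcl/v2"

    // exprWrapper subtly modifies the behavior of the wrapped expression
    // (e.g. extends its EvalContext) while delegating everything else.
    type exprWrapper struct {
        hcl.Expression
    }

    // UnwrapExpression lets syntax-level analyses such as ExprList and
    // AbsTraversalForExpr look through the wrapper to the AST node.
    func (w exprWrapper) UnwrapExpression() hcl.Expression {
        return w.Expression
    }

Analyses call hcl.UnwrapExpression, which peels wrappers like this one
until it reaches an expression that doesn't implement the interface.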
|
For applications already using hcldec, a decoder specification can be used
to automatically drive the recursive variable detection walk that begins
with WalkForEachVariables, allowing all "for_each" and "labels" variables
in a recursive block structure to be detected in a single call.
|
The previous ForEachVariables method was flawed because it didn't have
enough information to properly analyze child blocks. Since the core HCL
API requires a schema for any body analysis, and since a schema only
describes one level of configuration structure at a time, we must require
callers to drive a recursive walk through their nested block structure so
that the correct schema can be provided at each level.
This API is rather more complex than is ideal, but is the best we can do
with the HCL Body API as currently defined, and it's currently defined
that way in order to properly support ambiguous syntaxes like JSON.
|
This extension allows an application to support dynamic generation of
child blocks based on expressions in certain contexts. This is done using
a new block type called "dynamic", which contains an iteration value
(this must be a collection) and a specification of how to construct a
child block for each element of that collection.
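For example (with an illustrative block type), a configuration like:

    dynamic "setting" {
      for_each = ["a", "b"]
      content {
        name = setting.value
      }
    }

is treated as if the author had written:

    setting {
      name = "a"
    }
    setting {
      name = "b"
    }

with setting.value (the default iterator variable, named after the
generated block type) referring to the current element.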