# Quality in Depth

Inspired by the concept of [Defense in Depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)), quality in depth refers to implementing multiple layers of quality assurance to ensure continuous improvement in the quality of modules.

## Quality checks we use

These are the standard quality checks we employ to maintain high-quality modules.

### Automated testing

Automated testing is the most effective method for ensuring the quality of modules. It involves provisioning infrastructure using a module, verifying it works as expected, and tearing it down.

We rely on [Terratest](https://github.com/gruntwork-io/terratest), an open-source Go library maintained by Gruntwork, to facilitate this process. Terratest tests can be run locally, and they run in CI pipelines against live cloud environments to verify that every module in the library works as expected.
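
As a sketch of what such a test looks like, the example below follows the standard Terratest pattern: apply the module, assert on its outputs, and destroy everything on exit. The module path, variable, and output names are hypothetical placeholders, and running it requires live cloud credentials.

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

// TestS3BucketModule provisions a module, verifies an output, and tears
// the infrastructure down. Paths and names are illustrative only.
func TestS3BucketModule(t *testing.T) {
	t.Parallel()

	terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
		// Path to the example that exercises the module under test (hypothetical).
		TerraformDir: "../examples/s3-bucket",
		Vars: map[string]interface{}{
			"bucket_name": "terratest-example-bucket",
		},
	})

	// Always destroy the infrastructure at the end of the test.
	defer terraform.Destroy(t, terraformOptions)

	// Run "terraform init" and "terraform apply".
	terraform.InitAndApply(t, terraformOptions)

	// Verify the infrastructure was provisioned as expected.
	bucketName := terraform.Output(t, terraformOptions, "bucket_name")
	assert.Equal(t, "terratest-example-bucket", bucketName)
}
```

The `defer terraform.Destroy` line is the key design point: teardown runs even when assertions fail, which keeps test cloud accounts from accumulating orphaned resources.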

### Pre-commit hooks

Pre-commit hooks enable module authors to identify and address issues early in the development process. The same hooks run again in CI to verify that authors did in fact run them locally, and to maintain compliance and consistency.

For details on the available hooks and repository-specific configurations, refer to the documentation in the [pre-commit repository](https://github.com/gruntwork-io/pre-commit?tab=readme-ov-file#pre-commit-hooks).
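
A minimal `.pre-commit-config.yaml` referencing that repository might look like the sketch below. The hook IDs and revision shown are illustrative assumptions; consult the repository README for the currently supported hooks and latest release tag.

```yaml
# .pre-commit-config.yaml — a minimal sketch, not a verified configuration.
repos:
  - repo: https://github.com/gruntwork-io/pre-commit
    rev: <latest-release-tag>  # pin to a released tag from the repository
    hooks:
      - id: terraform-fmt  # hook IDs are examples; see the README for the full list
      - id: tflint
```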

### Security scanning

While making modules secure is largely a matter of sound judgment and good practices, static analysis tools can help identify potential vulnerabilities in modules.

- [Terrascan](https://github.com/tenable/terrascan) runs in CI pipelines to detect vulnerabilities through static analysis.
- [Steampipe](https://github.com/turbot/steampipe) scans live test cloud environments to detect security risks that static analysis cannot easily catch. These scans are particularly useful for validating the CIS compliance of modules like the [cis-service-catalog](https://github.com/gruntwork-io/terraform-aws-cis-service-catalog).
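
To illustrate the two approaches, the commands below show typical invocations of each tool. The directory path, table, and column names are illustrative assumptions; both commands require the respective CLI to be installed (and, for Steampipe, configured cloud credentials).

```shell
# Static analysis: scan Terraform source for policy violations.
# -i selects the IaC type, -d the directory to scan (path is illustrative).
terrascan scan -i terraform -d ./modules

# Live scanning: query a test cloud environment with SQL via the
# Steampipe AWS plugin (table and column names are from that plugin).
steampipe query "select name, versioning_enabled from aws_s3_bucket"
```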

### Automated documentation generation

The best documentation is written by a person and conveys not only a module's technical details but also the intent behind it. Automated documentation generation complements this by keeping those details accurate and up to date. Gruntwork uses custom tooling to supplement manually written documentation with automatically generated details, available in the [Library Reference](/library/reference).

## Quality checks we don't use

Not every quality check is practical or valuable enough to justify its cost, and we aim for a high signal-to-noise ratio by investing in the most impactful checks our resources allow. The checks below are ones we know of but don't currently use; they are listed here so they can be considered in the future, and so you can evaluate whether they are a good fit for your own modules.

### Infrastructure cost

[Infracost](https://github.com/infracost/infracost) is a useful tool for estimating the cost of infrastructure. However, it is more practically suited to live infrastructure than to a library of modules, so it is not currently part of our quality checks.
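
If you want to evaluate it against your own modules, basic usage looks roughly like the following. The example path is illustrative, and the tool requires a free Infracost API key.

```shell
# Estimate the monthly cost of the resources a configuration would create.
# Run `infracost auth login` first to obtain an API key.
infracost breakdown --path ./examples/s3-bucket
```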
38 | 39 |
|
39 | | -It is impossible to use every conceivable quality check, and some quality checks are more valuable than others. Every quality check has a cost associated with it, and it is important to be judicious in the quality checks that are used to ensure a high signal to noise ratio. That being said, there is also limited time and resources to implement new quality checks on the entire library. |
| 40 | +### Further exploration |
40 | 41 |
|
41 | | -Gruntwork strives to use the most valuable quality checks that result in high quality modules with the resources available to us. |
42 | | - |
43 | | -These are some quality checks that we are aware of, but don't use. They may not be in use at the moment because they are not valuable enough to justify the cost of implementing them, or because they are not a good fit for the library. Regardless, they are referenced here so that they can be considered in the future, and so that you can evaluate whether they are a good fit for your own modules. |
44 | | - |
45 | | -### Infrastructure Cost |
46 | | - |
47 | | -A useful tool for evaluating the cost of infrastructure is [Infracost](https://github.com/infracost/infracost). This is a useful tool, but more practically useful for live infrastructure than a library of modules. |
48 | | - |
49 | | -### More to be Discovered |
50 | | - |
51 | | -Quality checks that are _not_ used are harder to think of than those that _are_ used. If you have any suggestions for quality checks that we should consider, please let us know by sending a pull request to this document. |
| 42 | +Determining unused quality checks is inherently challenging. If you have suggestions for additional quality checks we should consider, please contribute by submitting a pull request to this document. |
0 commit comments